R vs SAS, why is SAS preferred by private companies?
The reason I understood to be the most convincing was that SAS has an extensive library of vertical, business-specific modules that people in these verticals all use, so it is somewhat of a lock-in. But also that SAS has addressed the needs of these vertical business segments and optimized around them - optimized in the sense that the user doesn't have to do a lot of extra work to get the results. I am not a SAS user, so this is not meant as a biased defense of the SAS business strategy.
R vs SAS, why is SAS preferred by private companies?
Being the big commercial product that SAS is, there is a strong and coordinated effort by paid salespeople to promote it. I don't think that efforts to promote the usage of R can match these.
R vs SAS, why is SAS preferred by private companies?
I look at open-source or licensed software like this, be it SAS or anything else. My IT department is there to provide a service to our business. The company earns no money from IT, only from the business IT supports. The business has annual revenues of \$16 billion; IT costs around \$200 million a year. If money were the issue I would cut costs, but if I save 10% (\$20 million) of my budget, will the business notice? Will they just reduce my budget next year?

If the IT fails, the business loses revenue; how much will vary with the nature of the failure. Parts of the business may no longer earn revenue. If a product like SAS fails, I can sue under a contract. If an OSS product fails, I cannot. I will not recover my \$16 billion, but I may get some back, and realistically with SAS you are unlikely to lose the lot. The difference in price versus cost has to justify any additional perceived risk to the business.

Sometimes it is cheaper to stick with SAS than to retrain. Sometimes there are higher-priority issues, so companies stay with SAS. Some companies do not need the full functionality, in which case alternatives are viable. Some do not need the support, and again the alternatives are viable. If you meet the business requirements, either option is valid. If you want to provide support for a business, you need to look at the total cost of ownership over 5-10 years, the ability to recruit experts in the tools, stability in the product so you don't have to rewrite everything with each new release, the training courses available to skill up, the size of the potential skills pool in your region...

Often the biggest problems with OSS come about through the poor architecture of the products: look at Linux when 64-bit processors came out, or look at MySQL today, which was recently ported but without support for secondary indexes (that support is coming later)...
R vs SAS, why is SAS preferred by private companies?
Some reasons that I haven't seen mentioned:

Better documentation. SAS documentation is verbose, R documentation is terse. Many companies may prefer verbose documentation.

Better error messages. R's error messages often seem designed to prove that the person writing the message is smarter than the person reading it.

Tech support. SAS has some of the best tech support I've run into anywhere, provided by SAS itself. You can get help with R, but that help is scattered over different places and isn't always available. The people on the various sites that provide help with R are volunteers, and volunteers aren't obligated to help. The people at SAS tech support are paid to do what they do, and they do it well. Not only do they do it well, they do it politely, a trait that is often not present in all R communities (my favorite? "I got help by typing 'help', why don't you try typing 'help'?").

Ease of coordination with Word and Excel. Yes, I know you can get R to do this, but it's easier with SAS (on the other hand, R works better with $\LaTeX$, but a lot more companies use Word).
R vs SAS, why is SAS preferred by private companies?
I think the legacy angle can be a big one, for the following reason. An organisation hires a person, call them person X. They are a computing guru/wizard. They build awesome SAS programs/tools. They are so good that other people in the organisation don't feel like they need to understand how the programs work. They make it so easy to just push a button, and everything just works (the magic black boxes).

Person X leaves the organisation. Unfortunately, the knowledge that person X has leaves with them (documentation and knowledge management were not prioritised; working programs were). They are replaced by person Y. Person Y is great with R but has no idea about SAS, and hence no idea how the SAS programs actually work. There is a huge learning curve just to figure out what needs to be translated into R. For example, person Y cannot tell whether a data transformation was done to enable a proc to be called, or for merging with other data, etc. The transformations needed are probably different in R, because the functions are different and the defaults are different (e.g. treatment of categorical variables in proc glm vs. lm()).

So you are now faced with an unequal cost trade-off. Say the cost of rewriting/translating from SAS to R is $C_T$. This has to be weighed against the (small?) efficiency gain from moving to R (I say small based on my experience optimising SAS code and optimising R code for logistic regression predictions), and against the licence costs for SAS. It is likely that $C_T$ is significantly higher than a single-year licence for SAS. I expect that SAS does some analysis of this trade-off and lets it influence how it sets the licence fee (well, I would if I worked at SAS).

Also notice how SAS plotting procedures are way better than a decade or so ago (e.g. proc sgplot vs. proc plot). Coincidence that R did good plotting first? I think not! This effectively reduces the gain from switching, because plotting is not so different anymore - R is still better, but not by enough to switch...
R vs SAS, why is SAS preferred by private companies?
For industrial statistics, there are quality-assurance people who (usually) have no programming, statistics, or science background and who audit statisticians, programmers, and scientists. They want to know, "How do you know that what you're doing is right?" and "If it's wrong, how can we blame somebody, and how will they pay for it?"

The GNU GPL copyleft license comes with canned text that says "R is free software and COMES WITH ABSOLUTELY NO WARRANTY", in all-caps text exactly as I have written. This is off-putting. When a quality person reads this text, they basically discredit R outright. I mean, if a product is good, it's worth adding a warranty, right? Or so commercial products have led us to believe. In fact, it was ultimately the FDA saying that they would accept regulatory submissions in R that reflected a sea change in the software industry. (Note this statement came after the original posting date of the question.) For someone who knows nothing about computers, the imagined scenarios of security holes, irreproducibility, and grave scientific errors are unbounded as a result of this ABSOLUTE LACK OF WARRANTY. We all agree mistakes can have catastrophic costs.

With a SAS license, SAS has experts who can explain their software to auditors, and in the impossible scenario that SAS actually causes such an issue, they can be held accountable for fines and punishments (they also have enough money for lawyers to ensure they'd be exonerated completely in such a case). The burden and cost of having an analyst/programmer make this case for R basically amounts to a SAS license. Not that programming in SAS completely exonerates you from the crushing burden of quality compliance! So basically, I would say litigiousness has played a prominent role in necessitating costly licensed software.
Does causation imply correlation?
As many of the answers above have stated, causation does not imply linear correlation. Since a lot of correlation concepts come from fields that rely heavily on linear statistics, correlation is usually taken to mean linear correlation. The Wikipedia article on correlation is an alright source for this; I really like its image of example scatterplots labelled with their correlation coefficients. Look at some of the figures in the bottom row, for instance the parabola-ish shape in the 4th example. This is kind of what happens in @StasK's answer (with a little bit of noise added): $Y$ can be fully caused by $X$, but if the relationship is nonlinear and symmetric, you will still have a correlation of 0. The word you are looking for is mutual information: this is sort of the general, nonlinear version of correlation. In that case, your statement would be true: causation implies high mutual information.
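To make this concrete, here is a small R sketch (my addition, not part of the original answer) contrasting Pearson correlation with an empirical mutual-information estimate. It assumes the CRAN package infotheo for the estimate; any discretization-based estimator would do.

library(infotheo)  # assumed package for a simple empirical MI estimate

set.seed(1)
x <- runif(10000, -1, 1)  # symmetric around zero
y <- x^2                  # Y is fully caused by X (parabola shape)

cor(x, y)                 # approximately 0: no *linear* association

d <- discretize(data.frame(x, y))  # equal-frequency binning
mutinformation(d$x, d$y)           # clearly positive: strong dependence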
Does causation imply correlation?
The strict answer is "no, causation does not necessarily imply correlation". Consider $X\sim N(0,1)$ and $Y=X^2\sim\chi^2_1$. Causation does not get any stronger: $X$ determines $Y$. Yet, the correlation between $X$ and $Y$ is 0. Proof: the (joint) moments of these variables are $E[X]=0$ and $E[Y]=E[X^2]=1$, so $${\rm Cov}[X,Y]=E[(X-0)(Y-1)] = E[XY]-E[X]\cdot 1 = E[X^3]-E[X]=0,$$ using the property of the standard normal distribution that its odd moments are all equal to zero (this can be easily derived from its moment-generating function, say). Hence, the correlation is equal to zero.

To address some of the comments: the only reason this argument works is that the distribution of $X$ is centered at zero and is symmetric around 0. In fact, any other distribution with these properties that has a sufficient number of moments would have worked in place of $N(0,1)$, e.g., the uniform on $(-10,10)$ or the Laplace $\sim \exp(-|x|)$. An oversimplified argument is that for every positive value of $X$, there is an equally likely negative value of $X$ of the same magnitude, so when you square $X$, you can't say that greater values of $X$ are associated with greater or smaller values of $Y$. However, if you take say $X\sim N(3,1)$, then $E[X]=3$, $E[Y]=E[X^2]=10$, $E[X^3]=36$, and ${\rm Cov}[X,Y]=E[XY]-E[X]E[Y]=36-30=6\neq0$. This makes perfect sense: for each value of $X$ below zero, there is a far more likely value of $-X$ which is above zero, so larger values of $X$ are associated with larger values of $Y$. (The latter $Y$ has a non-central $\chi^2$ distribution; you can pull the variance from the Wikipedia page and compute the correlation if you are interested.)
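A quick Monte Carlo check in R (my addition) confirms both computations numerically:

set.seed(42)
x <- rnorm(1e6)             # X ~ N(0, 1)
y <- x^2                    # Y = X^2, chi-squared with 1 df
cor(x, y)                   # ~ 0, although Y is fully determined by X

x3 <- rnorm(1e6, mean = 3)  # X ~ N(3, 1): no longer centered at zero
cov(x3, x3^2)               # ~ 6, matching E[X^3] - E[X]E[X^2] = 36 - 30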
Does causation imply correlation?
Essentially, yes. Correlation does not imply causation because there could be other explanations for a correlation beyond cause. But in order for A to be a cause of B, they must be associated in some way, meaning there is a correlation between them - though that correlation does not necessarily need to be linear. As some of the commenters have suggested, it's likely more appropriate to use a term like 'dependence' or 'association' rather than correlation. Though, as I've mentioned in the comments, I've seen "correlation does not mean causation" in response to analyses far beyond simple linear correlation, and so for the purposes of the saying, I've essentially extended "correlation" to any association between A and B.
Does causation imply correlation?
Things are definitely nuanced here. Causation does not imply correlation nor even statistical dependence, at least not in the simple way we usually think about them, or in the way some answers are suggesting (just transforming $X$ or $Y$, etc.). Consider the following causal model: $$ X \rightarrow Y \leftarrow U $$ That is, both $X$ and $U$ cause $Y$. Now let: $$ X \sim \mathrm{Bernoulli}(0.5), \quad U \sim \mathrm{Bernoulli}(0.5), \quad Y = 1 - X - U + 2XU $$ Suppose you don't observe $U$. Notice that $P(Y|X) = P(Y)$. That is, even though $X$ causes $Y$ (in the non-parametric structural equation sense), you don't see any dependence! You can do any nonlinear transformation you want and it won't reveal any dependence, because there isn't any marginal dependence of $Y$ on $X$ here. The trick is that even though $X$ and $U$ cause $Y$, marginally their average causal effect is zero. You only see the (exact) dependence when conditioning on $X$ and $U$ together (this also shows that $X\perp Y$ and $U\perp Y$ do not imply $\{X, U\} \perp Y$). So, yes, one could argue that, even though $X$ causes $Y$, the marginal causal effect of $X$ on $Y$ is zero, and that's why we don't see dependence of $X$ and $Y$. But this just illustrates how nuanced the problem is, because $X$ does cause $Y$, just not in the way you would naively think (it interacts with $U$). So in short I would say that: (i) causality suggests dependence; but (ii) the dependence is functional/structural dependence, and it may or may not translate into the specific statistical dependence you are thinking of.
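A small simulation in R (my sketch, not part of the original answer) makes the cancellation concrete. Note that when $U = 1$, $Y$ equals $X$ exactly, yet marginally they look independent.

set.seed(7)
n <- 1e5
x <- rbinom(n, 1, 0.5)
u <- rbinom(n, 1, 0.5)      # the second cause, unobserved in the story
y <- 1 - x - u + 2 * x * u  # y = 1 exactly when x == u, else 0

cor(x, y)                            # ~ 0: no marginal dependence
prop.table(table(x, y), margin = 1)  # P(Y | X) is the same for x = 0 and x = 1
table(x[u == 1], y[u == 1])          # conditional on u = 1, y = x: perfect dependence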
Does causation imply correlation?
Adding to @EpiGrad's answer: I think, for a lot of people, "correlation" will imply "linear correlation", and the concept of nonlinear correlation might not be intuitive. So I would say "no, they don't have to be correlated, but they do have to be related". We are agreeing on the substance but disagreeing on the best way to get the substance across. One example of such causation (at least, people think it's causal) is the relationship between the likelihood of answering your phone and income. It is known that people at both ends of the income spectrum are less likely to answer their phones than people in the middle. It is thought that the causal pattern is different for the poor (e.g. avoiding bill collectors) and the rich (e.g. avoiding people asking for donations).
Does causation imply correlation?
The cause and the effect will be correlated unless there is no variation at all in the incidence and magnitude of the cause and no variation at all in its causal force. The only other possibility would be if the cause is perfectly correlated with another causal variable with exactly the opposite effect. Basically, these are thought-experiment conditions. In the real world, causation will imply dependence in some form (although it might not be linear correlation).
Does causation imply correlation?
There are great answers here. Artem Kaznatcheev, Fomite and Peter Flom point out that causation would usually imply dependence rather than linear correlation. Carlos Cinelli gives an example where there's no dependence, because of how the generating function is set up. I want to add a point about how this dependence can disappear in practice, in the kinds of datasets that you might well work with: situations like Carlos's example are not limited to mere "thought-experiment conditions". Dependences vanish in self-regulating processes. Homeostasis, for example, ensures that your internal body temperature remains independent of the room temperature. External heat influences your body temperature directly, but it also influences the body's cooling systems (e.g. sweating) which keep the body temperature stable. If we sample temperature at extremely short intervals and with extremely precise measurements, we have a chance of observing the causal dependences, but at normal sampling rates body temperature and external temperature appear independent. Self-regulating processes are common in biological systems; they are produced by evolution. Mammals that fail to regulate their body temperature are removed by natural selection. Researchers who work with biological data should be aware that causal dependences may vanish in their datasets.
Does causation imply correlation?
The answer is: causation does not imply (linear) correlation. Assume we have the causal graph $X \rightarrow Y$, where $X$ is a cause of $Y$, such that if $X < 0$ then $Y=X$, and otherwise (if $X \geq 0$) $Y=-X$. Clearly, $X$ is a cause of $Y$. However, when you compute the correlation between instances of $X$ and $Y$, e.g., for the points $X=[-5,-4,-3,-2,-1,0,1,2,3,4,5]$ and $Y=[-5,-4,-3,-2,-1,0,-1,-2,-3,-4,-5]$, the correlation $\mathrm{corr}(X,Y)$ will be 0, even though there is a true causal mechanism between $X$ and $Y$ in which the value of $Y$ is solely determined by the value of $X$. You can try it out here: https://ncalculators.com/statistics/covariance-calculator.htm using the above data vectors.
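The same check takes a couple of lines in R (my addition):

x <- -5:5
y <- ifelse(x < 0, x, -x)  # y = x for x < 0, y = -x otherwise, i.e. y = -|x|
cor(x, y)                  # 0: x is symmetric about 0 and y is an even function of x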
Does causation imply correlation?
I add a less statistically technical answer here for the less statistically inclined audience: one variable (let's say $X$) can positively influence another variable (let's say $Y$) while not being associated with $Y$, or even while being negatively associated with $Y$, if there are confounding factors that distort the association between $X$ and $Y$. For example, suppose that the very best doctors are put in wards with the highest-needs patients. While the quality of doctors itself has a positive influence on reducing death rates of patients, the quality of doctors could actually be positively correlated with death rates, because of the confounding variable of the needs of the patients.
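A toy simulation in R (my construction; the variable names and coefficients are made up purely for illustration) shows the sign flip:

set.seed(123)
n <- 1e4
needs   <- rnorm(n)                    # patient severity, the confounder
quality <- needs + rnorm(n, sd = 0.5)  # best doctors assigned to neediest wards
deaths  <- needs - 0.3 * quality + rnorm(n, sd = 0.5)  # quality *reduces* deaths

cor(quality, deaths)                # positive: the naive association points the wrong way
coef(lm(deaths ~ quality + needs))  # adjusting for needs recovers the causal effect, ~ -0.3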
Help me understand Bayesian prior and posterior distributions
Let me first explain what a conjugate prior is. I will then explain the Bayesian analyses using your specific example. Bayesian statistics involves the following steps:

1. Define the prior distribution that incorporates your subjective beliefs about a parameter (in your example, the parameter of interest is the proportion of left-handers). The prior can be "uninformative" or "informative" (but there is no prior that has no information; see the discussion here).
2. Gather data.
3. Update your prior distribution with the data using Bayes' theorem to obtain a posterior distribution. The posterior distribution is a probability distribution that represents your updated beliefs about the parameter after having seen the data.
4. Analyze the posterior distribution and summarize it (mean, median, sd, quantiles, ...).

The basis of all Bayesian statistics is Bayes' theorem, which is $$ \mathrm{posterior} \propto \mathrm{prior} \times \mathrm{likelihood} $$ In your case, the likelihood is binomial. If the prior and the posterior distribution are in the same family, the prior and posterior are called conjugate distributions. The beta distribution is a conjugate prior here because the posterior is also a beta distribution; we say that the beta distribution is the conjugate family for the binomial likelihood. Conjugate analyses are convenient but rarely occur in real-world problems. In most cases, the posterior distribution has to be found numerically via MCMC (using Stan, WinBUGS, OpenBUGS, JAGS, PyMC or some other program).

If the prior probability distribution does not integrate to 1, it is called an improper prior; if it does integrate to 1, it is called a proper prior. In most cases, an improper prior does not pose a major problem for Bayesian analyses. The posterior distribution must be proper though, i.e. the posterior must integrate to 1.

These rules of thumb follow directly from the nature of the Bayesian analysis procedure:

- If the prior is uninformative, the posterior is very much determined by the data (the posterior is data-driven).
- If the prior is informative, the posterior is a mixture of the prior and the data.
- The more informative the prior, the more data you need to "change" your beliefs, so to speak, because the posterior is then very much driven by the prior information.
- If you have a lot of data, the data will dominate the posterior distribution (they will overwhelm the prior).

An excellent overview of some possible "informative" and "uninformative" priors for the beta distribution can be found in this post.

Say your prior is $\mathrm{Beta}(\pi_{LH}\,|\,\alpha, \beta)$ where $\pi_{LH}$ is the proportion of left-handers. To specify the prior parameters $\alpha$ and $\beta$, it is useful to know the mean and variance of the beta distribution (for example, if you want your prior to have a certain mean and variance). The mean is $\bar{\pi}_{LH}=\alpha/(\alpha + \beta)$; thus, whenever $\alpha =\beta$, the mean is $0.5$. The variance of the beta distribution is $\frac{\alpha\beta}{(\alpha + \beta)^{2}(\alpha + \beta + 1)}$. Now, the convenient thing is that you can think of $\alpha$ and $\beta$ as previously observed (pseudo-)data, namely $\alpha$ left-handers and $\beta$ right-handers out of a (pseudo-)sample of size $n_{eq}=\alpha + \beta$. The $\mathrm{Beta}(\pi_{LH}\,|\,\alpha=1, \beta=1)$ distribution is the uniform (all values of $\pi_{LH}$ are equally probable) and is the equivalent of having observed two people, one left-handed and one right-handed.
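As a quick numeric sanity check of these formulas in R (my addition, not part of the original answer):

a <- 1; b <- 1
a / (a + b)                              # mean of Beta(1, 1): 0.5
a * b / ((a + b)^2 * (a + b + 1))        # variance of Beta(1, 1): 1/12, same as the uniform
curve(dbeta(x, a, b), from = 0, to = 1)  # flat density: Beta(1, 1) is the uniform on (0, 1)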
The posterior beta distribution is simply $\mathrm{Beta}(z + \alpha, N - z +\beta)$, where $N$ is the size of the sample and $z$ is the number of left-handers in the sample. The posterior mean of $\pi_{LH}$ is therefore $(z + \alpha)/(N + \alpha + \beta)$. So to find the parameters of the posterior beta distribution, we simply add $z$ left-handers to $\alpha$ and $N-z$ right-handers to $\beta$. The posterior variance is $\frac{(z+\alpha)(N-z+\beta)}{(N+\alpha+\beta)^{2}(N + \alpha + \beta + 1)}$. Note that a highly informative prior also leads to a smaller variance of the posterior distribution (the graphs below illustrate the point nicely).

In your case, $z=2$ and $N=18$, and your prior is the uniform, which is uninformative, so $\alpha = \beta = 1$. Your posterior distribution is therefore $\mathrm{Beta}(3, 17)$. The posterior mean is $\bar{\pi}_{LH}=3/(3+17)=0.15$. Here is a graph that shows the prior, the likelihood of the data, and the posterior. You see that because your prior distribution is uninformative, your posterior distribution is entirely driven by the data. Also plotted is the highest density interval (HDI) for the posterior distribution. Imagine that you put your posterior distribution in a 2D basin and start to fill in water until 95% of the distribution is above the waterline. The points where the waterline intersects the posterior distribution constitute the 95%-HDI. Every point inside the HDI has a higher probability than any point outside it. Also, the HDI always includes the peak of the posterior distribution (i.e. the mode). The HDI is different from an equal-tailed 95% credible interval, where 2.5% from each tail of the posterior is excluded (see here).

For your second task, you're asked to incorporate the information that 5-20% of the population are left-handers. There are several ways of doing that. The easiest is to say that the prior beta distribution should have a mean of $0.125$, which is the mean of $0.05$ and $0.2$. But how to choose $\alpha$ and $\beta$ of the prior beta distribution? You want the mean of your prior distribution to be $0.125$ out of a pseudo-sample of equivalent sample size $n_{eq}$. More generally, if you want your prior to have mean $m$ with pseudo-sample size $n_{eq}$, the corresponding values are $\alpha = mn_{eq}$ and $\beta = (1-m)n_{eq}$. All that is left to do is to choose the pseudo-sample size $n_{eq}$, which determines how confident you are about your prior information. Let's say you are very sure about your prior information and set $n_{eq}=1000$. The parameters of your prior distribution are therefore $\alpha = 0.125\cdot 1000 = 125$ and $\beta = (1 - 0.125)\cdot 1000 = 875$. The posterior distribution is $\mathrm{Beta}(127, 891)$ with a mean of about $0.125$, which is practically the same as the prior mean of $0.125$: the prior information is dominating the posterior (see the following graph).

If you are less sure about the prior information, you could set the $n_{eq}$ of your pseudo-sample to, say, $10$, which yields $\alpha=1.25$ and $\beta=8.75$ for your prior beta distribution. The posterior distribution is $\mathrm{Beta}(3.25, 24.75)$ with a mean of about $0.116$. The posterior mean is now near the mean of your data ($0.111$) because the data overwhelm the prior.
Here is the graph showing the situation. A more advanced method of incorporating the prior information would be to say that the $0.025$ quantile of your prior beta distribution should be about $0.05$ and the $0.975$ quantile should be about $0.2$. This is equivalent to saying that you are 95% sure that the proportion of left-handers in the population lies between 5% and 20%. The function beta.select in the R package LearnBayes calculates the corresponding $\alpha$ and $\beta$ values of a beta distribution with such quantiles. The code is

library(LearnBayes)
quantile1 = list(p = .025, x = 0.05)  # the 2.5% quantile should be 0.05
quantile2 = list(p = .975, x = 0.2)   # the 97.5% quantile should be 0.2
beta.select(quantile1, quantile2)
[1]  7.61 59.13

It seems that a beta distribution with parameters $\alpha = 7.61$ and $\beta=59.13$ has the desired properties. The prior mean is $7.61/(7.61 + 59.13)\approx 0.114$, which is near the mean of your data ($0.111$). Again, this prior distribution incorporates the information of a pseudo-sample of an equivalent sample size of about $n_{eq}\approx 7.61+59.13 \approx 66.74$. The posterior distribution is $\mathrm{Beta}(9.61, 75.13)$ with a mean of $0.113$, which is comparable with the mean of the previous analysis using the highly informative $\mathrm{Beta}(125, 875)$ prior. Here is the corresponding graph.

See also this reference for a short but imho good overview of Bayesian reasoning and simple analysis. A longer introduction to conjugate analyses, especially for binomial data, can be found here. A general introduction to Bayesian thinking can be found here. More slides concerning aspects of Bayesian statistics are here.
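To tie the numbers above together, here is a small self-contained R sketch (my addition, summarising the updates discussed in this answer):

z <- 2; N <- 18  # 2 left-handers out of 18
posterior <- function(a, b) {
  c(alpha = a + z, beta = b + N - z, mean = (a + z) / (a + b + N))
}
posterior(1, 1)         # uniform prior             -> Beta(3, 17),       mean 0.150
posterior(125, 875)     # strong prior, n_eq = 1000 -> Beta(127, 891),    mean ~0.125
posterior(1.25, 8.75)   # weak prior,   n_eq = 10   -> Beta(3.25, 24.75), mean ~0.116
posterior(7.61, 59.13)  # quantile-matched prior    -> Beta(9.61, 75.13), mean ~0.113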
Help me understand Bayesian prior and posterior distributions
Let me first explain what a conjugate prior is. I will then explain the Bayesian analyses using your specific example. Bayesian statistics involve the following steps: Define the prior distribution t
Help me understand Bayesian prior and posterior distributions Let me first explain what a conjugate prior is. I will then explain the Bayesian analyses using your specific example. Bayesian statistics involve the following steps: Define the prior distribution that incorporates your subjective beliefs about a parameter (in your example the parameter of interest is the proportion of left-handers). The prior can be "uninformative" or "informative" (but there is no prior that has no information, see the discussion here). Gather data. Update your prior distribution with the data using Bayes' theorem to obtain a posterior distribution. The posterior distribution is a probability distribution that represents your updated beliefs about the parameter after having seen the data. Analyze the posterior distribution and summarize it (mean, median, sd, quantiles, ...). The basis of all bayesian statistics is Bayes' theorem, which is $$ \mathrm{posterior} \propto \mathrm{prior} \times \mathrm{likelihood} $$ In your case, the likelihood is binomial. If the prior and the posterior distribution are in the same family, the prior and posterior are called conjugate distributions. The beta distribution is a conjugate prior because the posterior is also a beta distribution. We say that the beta distribution is the conjugate family for the binomial likelihood. Conjugate analyses are convenient but rarely occur in real-world problems. In most cases, the posterior distribution has to be found numerically via MCMC (using Stan, WinBUGS, OpenBUGS, JAGS, PyMC or some other program). If the prior probability distribution does not integrate to 1, it is called an improper prior, if it does integrate to 1 it is called a proper prior. In most cases, an improper prior does not pose a major problem for Bayesian analyses. The posterior distribution must be proper though, i.e. the posterior must integrate to 1. These rules of thumb follow directly from the nature of the Bayesian analysis procedure: If the prior is uninformative, the posterior is very much determined by the data (the posterior is data-driven) If the prior is informative, the posterior is a mixture of the prior and the data The more informative the prior, the more data you need to "change" your beliefs, so to speak because the posterior is very much driven by the prior information If you have a lot of data, the data will dominate the posterior distribution (they will overwhelm the prior) An excellent overview of some possible "informative" and "uninformative" priors for the beta distribution can be found in this post. Say your prior beta is $\mathrm{Beta}(\pi_{LH}| \alpha, \beta)$ where $\pi_{LH}$ is the proportion of left-handers. To specify the prior parameters $\alpha$ and $\beta$, it is useful to know the mean and variance of the beta distribution (for example, if you want your prior to have a certain mean and variance). The mean is $\bar{\pi}_{LH}=\alpha/(\alpha + \beta)$. Thus, whenever $\alpha =\beta$, the mean is $0.5$. The variance of the beta distribution is $\frac{\alpha\beta}{(\alpha + \beta)^{2}(\alpha + \beta + 1)}$. Now, the convenient thing is that you can think of $\alpha$ and $\beta$ as previously observed (pseudo-)data, namely $\alpha$ left-handers and $\beta$ right-handers out of a (pseudo-)sample of size $n_{eq}=\alpha + \beta$. 
The $\mathrm{Beta}(\pi_{LH} |\alpha=1, \beta=1)$ distribution is the uniform (all values of $\pi_{LH}$ are equally probable) and is the equivalent of having observed two people out of which one is left-handed and one is right-handed. The posterior beta distribution is simply $\mathrm{Beta}(z + \alpha, N - z +\beta)$ where $N$ is the size of the sample and $z$ is the number of left-handers in the sample. The posterior mean of $\pi_{LH}$ is therefore $(z + \alpha)/(N + \alpha + \beta)$. So to find the parameters of the posterior beta distribution, we simply add $z$ left-handers to $\alpha$ and $N-z$ right-handers to $\beta$. The posterior variance is $\frac{(z+\alpha)(N-z+\beta)}{(N+\alpha+\beta)^{2}(N + \alpha + \beta + 1)}$. Note that a highly informative prior also leads to a smaller variance of the posterior distribution (the graphs below illustrate the point nicely). In your case, $z=2$ and $N=18$ and your prior is the uniform which is uninformative, so $\alpha = \beta = 1$. Your posterior distribution is therefore $Beta(3, 17)$. The posterior mean is $\bar{\pi}_{LH}=3/(3+17)=0.15$. Here is a graph that shows the prior, the likelihood of the data and the posterior You see that because your prior distribution is uninformative, your posterior distribution is entirely driven by the data. Also plotted is the highest density interval (HDI) for the posterior distribution. Imagine that you put your posterior distribution in a 2D-basin and start to fill in water until 95% of the distribution are above the waterline. The points where the waterline intersects with the posterior distribution constitute the 95%-HDI. Every point inside the HDI has a higher probability than any point outside it. Also, the HDI always includes the peak of the posterior distribution (i.e. the mode). The HDI is different from an equal tailed 95% credible interval where 2.5% from each tail of the posterior are excluded (see here). For your second task, you're asked to incorporate the information that 5-20% of the population are left-handers into account. There are several ways of doing that. The easiest way is to say that the prior beta distribution should have a mean of $0.125$ which is the mean of $0.05$ and $0.2$. But how to choose $\alpha$ and $\beta$ of the prior beta distribution? First, you want your mean of the prior distribution to be $0.125$ out of a pseudo-sample of equivalent sample size $n_{eq}$. More generally, if you want your prior to have a mean $m$ with a pseudo-sample size $n_{eq}$, the corresponding $\alpha$ and $\beta$ values are: $\alpha = mn_{eq}$ and $\beta = (1-m)n_{eq}$. All you are left to do now is to choose the pseudo-sample size $n_{eq}$ which determines how confident you are about your prior information. Let's say you are very sure about your prior information and set $n_{eq}=1000$. The parameters of your prior distribution are thereore $\alpha = 0.125\cdot 1000 = 125$ and $\beta = (1 - 0.125)\cdot 1000 = 875$. The posterior distribution is $\mathrm{Beta}(127, 891)$ with a mean of about $0.125$ which is practically the same as the prior mean of $0.125$. The prior information is dominating the posterior (see the following graph): If you are less sure about the prior information, you could set the $n_{eq}$ of your pseudo-sample to, say, $10$, which yields $\alpha=1.25$ and $\beta=8.75$ for your prior beta distribution. The posterior distribution is $\mathrm{Beta}(3.25, 24.75)$ with a mean of about $0.116$. 
The posterior mean is now near the mean of your data ($0.111$) because the data overwhelm the prior. Here is the graph showing the situation.

A more advanced method of incorporating the prior information would be to say that the $0.025$ quantile of your prior beta distribution should be about $0.05$ and the $0.975$ quantile should be about $0.2$. This is equivalent to saying that you are 95% sure that the proportion of left-handers in the population lies between 5% and 20%. The function beta.select in the R package LearnBayes calculates the corresponding $\alpha$ and $\beta$ values of a beta distribution corresponding to such quantiles. The code is

library(LearnBayes)
quantile1 <- list(p = 0.025, x = 0.05)  # the 2.5% quantile should be 0.05
quantile2 <- list(p = 0.975, x = 0.2)   # the 97.5% quantile should be 0.2
beta.select(quantile1, quantile2)
[1]  7.61 59.13

It seems that a beta distribution with parameters $\alpha = 7.61$ and $\beta=59.13$ has the desired properties. The prior mean is $7.61/(7.61 + 59.13)\approx 0.114$, which is near the mean of your data ($0.111$). Again, this prior distribution incorporates the information of a pseudo-sample of an equivalent sample size of about $n_{eq}\approx 7.61+59.13 \approx 66.74$. The posterior distribution is $\mathrm{Beta}(9.61, 75.13)$ with a mean of $0.113$, which is comparable with the mean of the previous analysis using the highly informative $\mathrm{Beta}(125, 875)$ prior. Here is the corresponding graph.

See also this reference for a short but imho good overview of Bayesian reasoning and simple analysis. A longer introduction for conjugate analyses, especially for binomial data, can be found here. A general introduction into Bayesian thinking can be found here. More slides concerning aspects of Bayesian statistics are here.
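Here is a minimal R sketch of the whole conjugate analysis above (the numbers are the ones from the example; note that qbeta gives an equal-tailed credible interval, not the HDI plotted above):

z <- 2; N <- 18                  # 2 left-handers out of 18
alpha0 <- 1; beta0 <- 1          # uniform Beta(1, 1) prior
alpha1 <- alpha0 + z             # posterior alpha = 3
beta1  <- beta0 + N - z          # posterior beta  = 17
alpha1 / (alpha1 + beta1)        # posterior mean 0.15
qbeta(c(0.025, 0.975), alpha1, beta1)                   # equal-tailed 95% credible interval
curve(dbeta(x, alpha1, beta1), 0, 1, ylab = "density")  # posterior
curve(dbeta(x, alpha0, beta0), add = TRUE, lty = 2)     # prior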
1,017
Help me understand Bayesian prior and posterior distributions
A beta distribution with $\alpha = 1$ and $\beta = 1$ is the same as a uniform distribution. So it is, in fact, uninformative. You're trying to find information about a parameter of a distribution (in this case, the percentage of left-handed people in a group of people). Bayes' formula states:

$$P(r \mid Y_{1,\dots,n}) = \frac{P(Y_{1,\dots,n} \mid r)\,P(r)}{\int P(Y_{1,\dots,n} \mid \theta)\,P(\theta)\,d\theta}$$

which, as you pointed out, is proportional to:

$$P(r \mid Y_{1,\dots,n}) \propto P(Y_{1,\dots,n} \mid r)\,P(r)$$

So basically you're starting with your prior belief about the proportion of left-handers in the group ($P(r)$, for which you're using a uniform distribution), then considering the data you collect to inform your prior (a binomial in this case: either you're right- or left-handed, so $P(Y_{1,\dots,n} \mid r)$). A binomial distribution has a beta conjugate prior, which means that the posterior distribution $P(r \mid Y_{1,\dots,n})$, the distribution of the parameter after considering the data, is in the same family as the prior. $r$ here is not unknown in the end (and frankly it wasn't before collecting the data; we've got a pretty good idea of the proportion of left-handers in society). You've got both the prior distribution (your assumption about $r$) and the data you've collected, and you put the two together. The posterior is your new assumption about the distribution of left-handers after considering the data. So you take the likelihood of the data and multiply it by a uniform. The expected value of a beta distribution (which is what the posterior is) is $\frac{\alpha}{\alpha+\beta}$. So when you started, your assumption with $\alpha=1$ and $\beta=1$ was that the proportion of left-handers in the world was $\frac{1}{2}$. Now you've collected data that has 2 lefties out of 18, and you've calculated a posterior (still a beta). Your $\alpha$ and $\beta$ values are now different, changing your idea of the proportion of lefties vs. righties. How has it changed?
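To answer that last question concretely, a one-line R check (a trivial sketch; the counts are the ones from the question):

alpha_post <- 1 + 2    # prior alpha plus 2 lefties
beta_post  <- 1 + 16   # prior beta plus 16 righties
alpha_post / (alpha_post + beta_post)  # 0.15: the expected proportion drops from 0.5 to 0.15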
1,018
Help me understand Bayesian prior and posterior distributions
The first part of your question asks you to define a suitable prior for "r". With the binomial data in hand, it would be wise to choose a beta distribution, because then the posterior will also be a beta. The uniform distribution being a special case of the beta, you can choose the uniform distribution as the prior for "r", allowing every possible value of "r" to be equally probable. In the second part you are provided with information regarding the prior distribution of "r". With this in hand, @COOLSerdash's answer will give you the proper directions. Thank you for posting this question and to COOLSerdash for providing a proper answer.
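A quick R check of the "uniform is a special case of the beta" point (a trivial sketch):

x <- seq(0, 1, by = 0.01)
all.equal(dbeta(x, 1, 1), dunif(x))  # TRUE: Beta(1, 1) is the uniform on [0, 1]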
1,019
Amazon interview question—probability of 2nd interview
Say 200 people took the interview, so that 100 received a 2nd interview and 100 did not. Out of the first lot, 95 felt they had a great first interview. Out of the 2nd lot, 75 felt they had a great first interview. So in total 95 + 75 people felt they had a great first interview. Of those 95 + 75 = 170 people, only 95 actually got a 2nd interview. Thus the probability is: $$\frac{95}{(95 + 75)}=\frac{95}{170}=\frac{19}{34}$$ Note that, as many commenters graciously point out, this computation is only justifiable if you assume that your friends form an unbiased and well distributed sampling set, which may be a strong assumption.
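The same natural-frequency computation as a small R sketch (the 200-person cohort is just the illustrative assumption above; variable names are mine):

n_pass <- 100; n_fail <- 100         # 50% get a 2nd interview
good_pass <- 0.95 * n_pass           # 95 felt good and moved on
good_fail <- 0.75 * n_fail           # 75 felt good and did not
good_pass / (good_pass + good_fail)  # 0.5588... = 19/34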
1,020
Amazon interview question—probability of 2nd interview
Let $\text{pass}=$ being invited to a second interview, $\text{fail}=$ not being so invited, $\text{good}=$ feeling good about the first interview, and $\text{bad}=$ not feeling good about the first interview.

$$ \begin{align} p(\text{pass}) &= 0.5 \\ p(\text{good}\mid\text{pass}) &= 0.95 \\ p(\text{good}\mid\text{fail}) &= 0.75 \\ p(\text{pass}\mid\text{good}) &= \;? \end{align} $$

Use Bayes' rule:

$$ p(\text{pass}\mid\text{good}) = \frac{p(\text{good}\mid\text{pass}) \times p(\text{pass})}{p(\text{good})} $$

To solve, we need to realize that:

$$ \begin{align} p(\text{good}) &= p(\text{good}\mid\text{pass})\times p(\text{pass}) + p(\text{good}\mid\text{fail})\times p(\text{fail}) \\&= 0.5(0.95 + 0.75) \\&= 0.85 \end{align} $$

Thus:

$$ p(\text{pass}\mid\text{good}) = \frac{0.95 \times 0.5}{0.85} \approx 0.559 $$

So feeling good about your interview only makes you slightly more likely to actually move on.

Edit: Based on a large number of comments and additional answers, I feel compelled to state some implicit assumptions. Namely, that your friend group is a representative sample of all interview candidates. If your friend group is not representative of all interview candidates, but is representative of your performance (i.e. you and your friends fit within the same subset of the population), then the information about your friends could still provide predictive power. Let's say you and your friends are a particularly intelligent bunch, and 75% of you move on to the next interview. Then we can modify the above approach as follows:

$$p(\text{pass}\mid\text{friend})=0.75$$
$$p(\text{good}\mid\text{pass, friend})=0.95$$
$$p(\text{good}\mid\text{fail, friend})=0.75$$

$$ p(\text{pass}\mid\text{good, friend}) = \frac{p(\text{good}\mid\text{pass, friend}) \times p(\text{pass}\mid\text{friend})}{p(\text{good}\mid\text{friend})} = \frac{0.95 \times 0.75}{0.95 \times 0.75 + 0.75 \times 0.25} \approx 0.792 $$

(Note that $p(\text{good}\mid\text{friend})$ must be recomputed with the new prior: $0.95 \times 0.75 + 0.75 \times 0.25 = 0.90$, not the $0.85$ from the representative case.)
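The same calculation as a small R helper (the function and argument names are mine, not part of the original problem):

p_pass_given_good <- function(p_pass, p_good_pass, p_good_fail) {
  p_good <- p_good_pass * p_pass + p_good_fail * (1 - p_pass)  # law of total probability
  p_good_pass * p_pass / p_good                                # Bayes' rule
}
p_pass_given_good(0.50, 0.95, 0.75)  # 0.559: representative friend group
p_pass_given_good(0.75, 0.95, 0.75)  # 0.792: the 'smart friends' variant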
1,021
Amazon interview question—probability of 2nd interview
The question contains insufficient information to be answered:

$x$% of all people do A
$y$% of your friends do B

Without knowing the sizes of the populations "all people" and "your friends", it is not possible to answer this question accurately unless we make one of two assumptions:

The group "your friends" is representative of the overall population. This results in Vincent Galinas' answer or, equivalently, Alex Williams' answer.
The group "your friends" is not representative, and is much smaller than the overall population. This results in CeeJeeB's answer.

Edit: Do also read the comment by Kyle Strand below. Another aspect we should consider is "how similar am I to my friends?". This depends on whether one interprets "you" as the person spoken to or as an unspecified individual or group of individuals (both usages exist).
1,022
Amazon interview question—probability of 2nd interview
The answer is 50%. Particularly since it was an interview question, I think Amazon wanted to test whether the candidate could spot the obvious and not be distracted by the unimportant. "When you hear hoofbeats, think horses, not zebras" - reference. My explanation: the first statement is all the information you need. 50% of all people who receive a first interview receive a second interview. The other two statements are just observations. Feeling you had a good interview does not increase your chances of having a second. Although the observations may be statistically correct, I believe they cannot be used to predict future outcomes. Consider the following: two shops sell lottery scratch cards. After each has sold 100 cards, a customer gets a winning card from shop 1. Statistically you could say that shop 1 now has a greater chance of a person getting a winning ticket, 1 in 100 compared to 0 in 100 for shop 2. We understand this is not true. The reason it is not true is that in this example past events have no bearing on future outcomes.
1,023
Amazon interview question—probability of 2nd interview
The answer that I would give is: Based on this information, 50%. 'Your friends' is not a representative sample so it should not be considered in the probability calculation. If you assume that the data is valid then Bayes' theorem is the way to go.
1,024
Amazon interview question—probability of 2nd interview
State that none of your friends are also up for interview. State that the question is underconstrained. Before they can scramble for some further constraint on the problem, quickly get in a more productive, pre-prepared question of your own, in a manner that fully expects a response. Maybe you can get them to move on to a more productive interview.
1,025
Amazon interview question—probability of 2nd interview
Joke answers, but they should work well: "100%. When it comes to demanding superb performance from myself, I don't attribute the outcome to any probability. See you in the 2nd interview." "50%. Until my friends get their own Amazon Prime accounts I won't consider their feelings valid. Actually, sorry, that was a bit too harsh. Let me take it back and rephrase: I won't even consider them human beings." "Wait, no one ever made my whiny friends feel good. What are your secrets? I want to work for Amazon; give me a chance to please the unpleasable!" Fake a phone vibration: "Oh, sorry! It was just my Amazon Prime account telling me that the Honda I ordered was shipped. Where were we?" "Regardless, I still feel you should send those who didn't get a 2nd interview a 1-month free trial of Amazon Prime. No one should live their life without knowing its glory. And once we've got them: retention, retention, retention." "55.9%. All my friends have an Amazon Prime account and I will make sure to make their experience count."
1,026
Amazon interview question—probability of 2nd interview
Simple case: 95 / (95 + 75) ≈ 0.559 is a quick way to get the result. Out of the people who felt good, 95 succeeded and 75 failed, so that ratio is the probability of passing for someone from that group. But nowhere is it said that you are part of that group. If you think the distribution in your friends' circle is generic, or that you belong to that group, you might as well compute it this way. Also, IMO (not that it matters much), the facts about your friends' feelings need not have any implication for the future, the way the question is worded; for example, that it rained yesterday doesn't by itself mean there is a chance of rain tomorrow. Hard facts, like the 50% clearing rate, are not affected by "what you feel" and "the chances of getting through based on that". Safer approach: I would also have thought of the 50% answer above, i.e. from the perspective of hard facts, 50% is the probability that makes sense. 1) Nowhere does it say your feelings should have anything to do with your results. 2) There could be people who are your friends but had no feelings either way - what happened to them? So given all the combinations that are possible, stick with the safest choice! PS: I might have flunked this test too.
1,027
Amazon interview question—probability of 2nd interview
It might be helpful to view this chain of events as a binary tree where just two leaf probabilities are relevant. The root node contains all folks who had a 1st interview; we then split this group on being invited to a 2nd interview ("2nd", "no 2nd") and subsequently on whether they felt good about the 1st interview ("good", "bad"). The conditional probabilities $P(\text{good} | \text{2nd}) = 0.95$ and $P(\text{good} | \text{no 2nd}) = 0.75$ are positioned at the edges. For educational reasons, we present two versions: one with decimal probabilities and the other with absolute counts, assuming a population of $200$. We can now directly compute the conditional probability (of receiving a second interview given that you had a good first interview) from the leaf probabilities in two equivalent ways: $$ P(\text{2nd } | \text{ good}) = \frac{P(\text{2nd} \cap \text{good})}{P( \text{good})} = \frac{0.475}{0.475 + 0.375} = \frac{95}{95 + 75} \approx 0.56 $$
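The four leaf probabilities written out as a short R sketch (the labels are mine):

p_2nd <- 0.5
leaves <- c(good_2nd = 0.95 * p_2nd,        # 0.475
            bad_2nd  = 0.05 * p_2nd,        # 0.025
            good_no2 = 0.75 * (1 - p_2nd),  # 0.375
            bad_no2  = 0.25 * (1 - p_2nd))  # 0.125
leaves["good_2nd"] / (leaves["good_2nd"] + leaves["good_no2"])  # 0.5588...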
1,028
Amazon interview question—probability of 2nd interview
The answer is 50%. They told you in the first line what the chance of anyone getting a second interview is. It's a test of your ability to see the essential information and not get distracted by irrelevant noise like how your friends felt. How they felt made no difference.
1,029
Amazon interview question—probability of 2nd interview
Both statements say "% of your friends", not "% of your friends who were interviewed". We do know that the group "that got a second interview" can only include those who had a first interview. However, the group "that did not get a second interview" includes all other friends. Without knowing what percentage of your friends were interviewed, it is impossible to identify any correlation between feeling you had a good first interview and receiving a second.
1,030
Amazon interview question—probability of 2nd interview
This being an interview question, I don't believe there is a correct answer. I would most likely calculate the ~56% using Bayes and then tell the interviewer: Without any knowledge about me, it could be between 50% and 56%, but because I know me and my past, the probability is 100%
1,031
Amazon interview question—probability of 2nd interview
I think the answer is 50% - it is given right at the beginning of the question. What percentage of your friends felt what is irrelevant.
1,032
Amazon interview question—probability of 2nd interview
Mathematically, your chances are 50%. This is because in the Venn diagram of Amazon interviewees you fall into the universal set of all interviewees, but not into the set of 'your friends'. Had the question stated 'One of your friends had a great interview. What is the percentage chance she'll get a second interview?', then the current top answer would be valid. But those 2nd and 3rd statistics only apply to you if you consider yourself one of your own friends. So maybe it's more of a psychological question?
1,033
Amazon interview question—probability of 2nd interview
Answer: ≈ 1. The question doesn't say how many of the people appearing for the interview are our friends. However, we can assume that data and get any answer we want. The main thing about this assumption is that only our friends get selected for the 2nd interview. Let's say 104 of your friends appear for the interview, and 100 of them get a 2nd interview. Then 95 of those 100 felt they had a good first interview (criterion 2). Of the remaining 4, 75% (i.e. 3) felt they had a good interview (criterion 3). So out of 104, 98 felt they had a good interview, but only 95 were selected; the final probability is 95/98. We can always say that 100*2 = 200 people in total (104 of them friends) gave the first interview, in order to satisfy the 1st criterion; here, all 96 who were not friends failed to clear the 1st interview. Now increase the friends to 108 and do it again, with 100 of them getting a 2nd interview. Of the 8 who failed, 75% (i.e. 6) felt good, so 101 felt good in total and the final probability would be 95/101. Thus, as we increase the number of friends who didn't clear the first interview, the probability decreases; so for the maximum value, the number of friends who didn't clear should always be 4. Now increase the friends further. Suppose they are 10,004 (10,000 who cleared, 4 who didn't). Out of the 10,000, 9,500 felt they had a good interview, so in total 9,503 felt they had a good interview (among the 4 who failed, 3 felt good, therefore 9,500 + 3), but only 9,500 cleared; i.e. the final probability is 9500/9503, which is ≈ 1. Again, we can posit that 20,000 people in total appeared for the interview, and all those who weren't friends couldn't clear it, so the 1st criterion is again satisfied. Note: our assumptions about the number of friends, the number of them clearing the interview and the number of other participants are all chosen to push the probability to 1; we can modify this data and get any probability we want.
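A small R sketch of this construction, showing how the assumed friend counts drive the answer (the helper function is mine; the counts are the ones used above):

p_given_friends <- function(n_pass, n_fail) {
  good_pass <- 0.95 * n_pass   # friends who passed and felt good
  good_fail <- 0.75 * n_fail   # friends who failed and felt good
  good_pass / (good_pass + good_fail)
}
p_given_friends(100, 4)      # 95/98     ~ 0.969
p_given_friends(100, 8)      # 95/101    ~ 0.941
p_given_friends(10000, 4)    # 9500/9503 ~ 0.9997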
1,034
Why L1 norm for sparse models
Consider the vector $\vec{x}=(1,\varepsilon)\in\mathbb{R}^2$ where $\varepsilon>0$ is small. The $l_1$ norm and the squared $l_2$ norm of $\vec{x}$, respectively, are given by

$$||\vec{x}||_1 = 1+\varepsilon,\ \ ||\vec{x}||_2^2 = 1+\varepsilon^2$$

Now say that, as part of some regularization procedure, we are going to reduce the magnitude of one of the elements of $\vec{x}$ by $\delta\leq\varepsilon$. If we change $x_1$ to $1-\delta$, the resulting norms are

$$||\vec{x}-(\delta,0)||_1 = 1-\delta+\varepsilon,\ \ ||\vec{x}-(\delta,0)||_2^2 = 1-2\delta+\delta^2+\varepsilon^2$$

On the other hand, reducing $x_2$ by $\delta$ gives norms

$$||\vec{x}-(0,\delta)||_1 = 1-\delta+\varepsilon,\ \ ||\vec{x}-(0,\delta)||_2^2 = 1-2\varepsilon\delta+\delta^2+\varepsilon^2$$

The thing to notice here is that, for an $l_2$ penalty, regularizing the larger term $x_1$ results in a much greater reduction in norm than doing so to the smaller term $x_2\approx 0$. For the $l_1$ penalty, however, the reduction is the same. Thus, when penalizing a model using the $l_2$ norm, it is highly unlikely that anything will ever be set to zero, since the reduction in $l_2$ norm going from $\varepsilon$ to $0$ is almost nonexistent when $\varepsilon$ is small. On the other hand, the reduction in $l_1$ norm is always equal to $\delta$, regardless of the quantity being penalized.

Another way to think of it: it's not so much that $l_1$ penalties encourage sparsity, but that $l_2$ penalties in some sense discourage sparsity by yielding diminishing returns as elements are moved closer to zero.
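A tiny R check of this bookkeeping (the eps and delta values are arbitrary small numbers):

eps <- 0.01; delta <- 0.01
x <- c(1, eps)
l1   <- function(v) sum(abs(v))
l2sq <- function(v) sum(v^2)
l1(x)   - l1(x - c(delta, 0))    # delta: same l1 reduction either way
l1(x)   - l1(x - c(0, delta))    # delta
l2sq(x) - l2sq(x - c(delta, 0))  # 2*delta - delta^2: big payoff on the large term
l2sq(x) - l2sq(x - c(0, delta))  # 2*eps*delta - delta^2: almost nothing near zero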
1,035
Why L1 norm for sparse models
With a sparse model, we mean a model in which many of the weights are 0. Let us therefore reason about why L1-regularization is more likely to create 0-weights.

Consider a model consisting of the weights $(w_1, w_2, \dots, w_m)$. With L1-regularization, you penalize the model by a loss function $L_1(w) = \Sigma_i |w_i|$. With L2-regularization, you penalize the model by a loss function $L_2(w) = \frac{1}{2} \Sigma_i w_i^2$.

If using gradient descent, you will iteratively make the weights change in the opposite direction of the gradient, with a step size $\eta$ multiplied by the gradient. This means that a steeper gradient will make us take a larger step, while a flatter gradient will make us take a smaller step. Let us look at the gradients (subgradient in the case of L1):

$\frac{dL_1(w)}{dw} = \mathrm{sign}(w)$, where $\mathrm{sign}(w) = (\frac{w_1}{|w_1|}, \frac{w_2}{|w_2|}, \dots, \frac{w_m}{|w_m|})$

$\frac{dL_2(w)}{dw} = w$

If we plot the loss function and its derivative for a model consisting of just a single parameter, it looks like this for L1:

And like this for L2:

Notice that for $L_1$, the gradient is either 1 or -1, except when $w_1 = 0$. That means that L1-regularization will move any weight towards 0 with the same step size, regardless of the weight's value. In contrast, you can see that the $L_2$ gradient decreases linearly towards 0 as the weight goes towards 0. Therefore, L2-regularization will also move any weight towards 0, but it will take smaller and smaller steps as a weight approaches 0.

Try to imagine that you start with a model with $w_1 = 5$ and use $\eta = \frac{1}{2}$. In the following picture, you can see how gradient descent using L1-regularization makes 10 of the updates $w_1 := w_1 - \eta \cdot \frac{dL_1(w)}{dw} = w_1 - \frac{1}{2} \cdot 1$, until reaching a model with $w_1 = 0$:

In contrast, with L2-regularization where $\eta = \frac{1}{2}$, the gradient is $w_1$, causing every step to go only halfway towards 0. That is, we make the update $w_1 := w_1 - \eta \cdot \frac{dL_2(w)}{dw} = w_1 - \frac{1}{2} \cdot w_1$. Therefore, the model never reaches a weight of 0, regardless of how many steps we take:

Note that L2-regularization can make a weight reach zero if the step size $\eta$ is so high that the weight reaches or overshoots zero in a single step. Even when L2-regularization on its own over- or undershoots 0, it can still reach a weight of 0 when used together with an objective function that tries to minimize the error of the model with respect to the weights. In that case, finding the best weights of the model is a trade-off between regularizing (having small weights) and minimizing loss (fitting the training data), and the result of that trade-off can be that the best value for some weights is 0.
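A short R simulation of the two trajectories (eta and the starting weight are the values from the example above):

eta <- 0.5
w_l1 <- 5; w_l2 <- 5
for (t in 1:10) {
  w_l1 <- w_l1 - eta * sign(w_l1)  # L1: fixed-size step; hits exactly 0 at step 10
  w_l2 <- w_l2 - eta * w_l2        # L2: step shrinks with w; only halves each time
}
c(w_l1, w_l2)  # 0 and 5 * 0.5^10 = 0.0048828125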
1,036
Why L1 norm for sparse models
Figure 3.11 from The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is very illustrative. Explanations: $\hat{\beta}$ is the unconstrained least squares estimate. The red ellipses are (as explained in the caption of that figure) the contours of the least squares error function, in terms of the parameters $\beta_1$ and $\beta_2$. Without constraints, the error function is minimized at the MLE $\hat{\beta}$, and its value increases as the red ellipses expand outward. The diamond and disk regions are the feasible regions for lasso ($L_1$) regression and ridge ($L_2$) regression respectively. Heuristically, for each method, we are looking for the intersection of the red ellipses and the blue region, as the objective is to minimize the error function while maintaining feasibility. That being said, it is clear that the $L_1$ constraint, which corresponds to the diamond feasible region, is more likely to produce an intersection in which one component of the solution is zero (i.e., a sparse model), due to the geometric properties of ellipses, disks, and diamonds. This is simply because diamonds have corners (at which one component is zero) that are easier for the diagonally extending ellipses to intersect.
1,037
Why L1 norm for sparse models
Have a look at Figure 3.11 (page 71) of The Elements of Statistical Learning. It shows the position of the unconstrained $\hat \beta$ that minimizes the squared error function, the ellipses showing the levels of the squared error function, and where the $\hat \beta$ subject to the constraints $\ell_1 (\hat \beta) < t$ and $\ell_2 (\hat \beta) < t$ lie. This will allow you to understand very geometrically that, subject to the $\ell_1$ constraint, you get some null components. This is basically because the $\ell_1$ ball $\{ x : \ell_1(x) \le 1\}$ has "edges" on the axes. More generally, this book is a good reference on this subject: both rigorous and well illustrated, with great explanations.
1,038
Why L1 norm for sparse models
The first image shows the shapes of the areas occupied by the L1 and L2 norms. The second image consists of gradient descent contours for various regression problems. In all the contour plots, observe the red circle, which intersects the ridge (L2) constraint region: the intersection is not on the axes. The black circle in all the contours represents the one which intersects the L1 (lasso) constraint region: it intersects relatively close to the axes. This results in coefficients being set to 0 and hence in feature selection. Hence the L1 norm makes the model sparse. A more detailed explanation is at the following link: post on Towards Data Science.
1,039
Why L1 norm for sparse models
A simple, non-mathematical answer would be: For L2, the penalty term is squared, so squaring a small value makes it even smaller; we do not have to drive a weight all the way to zero to minimize the penalized squared error, since the optimum is reached before that. For L1, the penalty term is absolute, so the pull towards zero stays the same size however small the weight gets, and the weight may be driven all the way to zero. That is my point of view.
1,040
Why L1 norm for sparse models
I suggest you read some more about the theory of convex optimization. An answer to why $\ell_1$ regularization achieves sparsity can be found by examining implementations of models employing it, for example LASSO. One such method to solve the convex optimization problem with the $\ell_1$ norm is the proximal gradient method, since the $\ell_1$ norm is not differentiable. I found Ryan Tibshirani's slides for his convex optimization course quite helpful on this topic, even though my mathematical background is limited. You can find the derivation of the soft-thresholding operator $S_\lambda$ used in the slides in the first answer to the mathematics StackExchange question on deriving the soft-thresholding operator. It is not too hard to follow, even without great knowledge of subgradients, and its result clearly shows why you get sparsity: for some $\lambda > 0$, for the $i$th component $w_t^i$ of the weight vector at time $t$, the proximal step gives $w_{t + 1}^i = w_t^i - \lambda\,\operatorname{sign}(w_t^i)$ if $|w_t^i| > \lambda$, and $w_{t + 1}^i = 0$ when $|w_t^i| \le \lambda$. In ISTA (the iterative soft-thresholding algorithm) described in Tibshirani's slides for LASSO, the weight vector update is $$ \mathbf{w}_{t + 1} = S_{\lambda\eta}(\mathbf{w}_t + \eta\,\mathbf{X}^\top(\mathbf{y} - \mathbf{X}\mathbf{w}_t)) $$ That is, after the least-squares gradient update with step $\eta > 0$, you perform soft-thresholding. This achieves sparsity, as the vector components with small enough magnitude are set to 0. You can replace $-\mathbf{X}^\top(\mathbf{y} - \mathbf{Xw})$ with a general objective function gradient as well.
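Here is a concrete NumPy sketch of ISTA following the update above (my own minimal construction; the function names and the step-size choice via the Lipschitz constant are assumptions, not from the slides):

import numpy as np

def soft_threshold(w, thresh):
    # S_thresh(w): shrink towards zero; exactly 0 whenever |w| <= thresh
    return np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)

def ista(X, y, lam=0.1, n_iter=500):
    # Minimizes 0.5 * ||y - X w||^2 + lam * ||w||_1 by proximal gradient steps
    eta = 1.0 / np.linalg.norm(X, 2) ** 2  # step from the Lipschitz constant of the LS term
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ w)                      # gradient of the least-squares term
        w = soft_threshold(w - eta * grad, eta * lam)  # prox step for the l1 term
    return w

Every component whose magnitude falls below the threshold is set to exactly 0 at each iteration, which is precisely where the sparsity comes from.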
1,041
Why L1 norm for sparse models
The L2 regularizer changes the weights less and less as they approach zero, because the slope of the L2 penalty keeps decreasing (its gradient is proportional to the weight), so it never pushes a weight exactly to zero. The L1 regularizer, by contrast, reduces the weight towards the optimum $W^* = 0$ at a constant rate, because the slope of the L1 penalty is constant; it can therefore drive weights exactly to zero.
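A tiny numerical illustration of the constant versus shrinking pull (my own toy construction; the penalty weight and learning rate are arbitrary), running gradient descent on the penalty term alone:

import numpy as np

lam, lr = 0.1, 0.1
w_l1 = 1.0
w_l2 = 1.0
for _ in range(200):
    w_l1 = w_l1 - lr * lam * np.sign(w_l1)  # d/dw lam*|w|   = lam*sign(w): constant pull
    w_l2 = w_l2 - lr * 2 * lam * w_l2       # d/dw lam*w**2  = 2*lam*w: pull shrinks with w
print(w_l1, w_l2)  # w_l1 ends at (essentially) exactly 0; w_l2 is small but never 0

The L2 update multiplies the weight by a factor below 1 each step, so it decays geometrically without ever reaching zero; the L1 update subtracts a fixed amount until the weight lands at zero.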
1,042
Pearson's or Spearman's correlation with non-normal data
Pearson's correlation is a measure of the linear relationship between two continuous random variables. It does not assume normality although it does assume finite variances and finite covariance. When the variables are bivariate normal, Pearson's correlation provides a complete description of the association. Spearman's correlation applies to ranks and so provides a measure of a monotonic relationship between two continuous random variables. It is also useful with ordinal data and is robust to outliers (unlike Pearson's correlation). The distribution of either correlation coefficient will depend on the underlying distribution, although both are asymptotically normal because of the central limit theorem.
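A quick illustration of the difference (a minimal sketch using scipy.stats; the monotone but non-linear relationship is simulated):

import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
x = rng.uniform(0, 5, size=200)
y = np.exp(x) + rng.normal(scale=1.0, size=200)  # monotonic but strongly non-linear

print(pearsonr(x, y))   # measures linear association: noticeably below 1
print(spearmanr(x, y))  # measures monotonic association: close to 1

Pearson's coefficient understates the (perfectly monotone) association because it is not linear, while Spearman's rank correlation recovers it.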
1,043
Pearson's or Spearman's correlation with non-normal data
Don't forget Kendall's tau! Roger Newson has argued for the superiority of Kendall's τa over Spearman's correlation rS as a rank-based measure of correlation in a paper whose full text is now freely available online: Newson R. Parameters behind "nonparametric" statistics: Kendall's tau, Somers' D and median differences. Stata Journal 2002; 2(1):45-64. He references (on p. 47) Kendall & Gibbons (1990) as arguing that "... confidence intervals for Spearman's rS are less reliable and less interpretable than confidence intervals for Kendall's τ-parameters, but the sample Spearman's rS is much more easily calculated without a computer" (which is no longer of much importance, of course). Kendall, M. G. and J. D. Gibbons. 1990. Rank Correlation Methods. 5th ed. London: Griffin.
1,044
Pearson's or Spearman's correlation with non-normal data
From an applied perspective, I am more concerned with choosing an approach that summarises the relationship between two variables in a way that aligns with my research question. I think that determining a method for getting accurate standard errors and p-values is a question that should come second. Even if you choose not to rely on asymptotics, there's always the option to bootstrap or change distributional assumptions.

As a general rule, I prefer Pearson's correlation because (a) it generally aligns more with my theoretical interests; (b) it enables more direct comparability of findings across studies, because most studies in my area report Pearson's correlation; and (c) in many settings there is minimal difference between Pearson and Spearman correlation coefficients.

However, there are situations where I think Pearson's correlation on raw variables is misleading.

Outliers: outliers can have great influence on Pearson's correlations. Many outliers in applied settings reflect measurement failures or other factors that the model is not intended to generalise to. One option is to remove such outliers. Univariate outliers do not exist with Spearman's rho because everything is converted to ranks; thus, Spearman is more robust.

Highly skewed variables: when correlating skewed variables, particularly highly skewed variables, a log or some other transformation often makes the underlying relationship between the two variables clearer (e.g., brain size by body weight of animals). In such settings it may be that the raw metric is not the most meaningful metric anyway. Spearman's rho has a similar effect to transformation, by converting both variables to ranks. From this perspective, Spearman's rho can be seen as a quick-and-dirty approach (or, more positively, a less subjective one) whereby you don't have to think about optimal transformations.

In both cases above, I would advise researchers to either consider adjustment strategies (e.g., transformations, outlier removal/adjustment) before applying Pearson's correlation, or use Spearman's rho.
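To make the outlier point concrete, here is a small sketch with simulated data (my own construction; scipy.stats assumed available) showing how a single extreme point moves each coefficient:

import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = x + rng.normal(scale=0.5, size=50)
r_before, _ = pearsonr(x, y)

# Inject one extreme point, e.g. a measurement failure
x_out = np.append(x, 20.0)
y_out = np.append(y, -20.0)
r_after, _ = pearsonr(x_out, y_out)
rho_after, _ = spearmanr(x_out, y_out)

print(r_before, r_after, rho_after)  # Pearson collapses (it can even flip sign);
                                     # Spearman barely moves

One misplaced rank out of 51 changes Spearman's rho only slightly, whereas the raw magnitudes dominate Pearson's correlation.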
1,045
Pearson's or Spearman's correlation with non-normal data
Updated: The question asks us to choose between Pearson's and Spearman's method when normality is in question. Restricted to this concern, I think the following paper should inform anyone's decision: On the Effects of Non-Normality on the Distribution of the Sample Product-Moment Correlation Coefficient (Kowalski, 1975).

It's quite nice and provides a survey of the considerable literature, spanning decades, on this topic, starting from Pearson's "mutilated and distorted surfaces" and the robustness of the distribution of $r$. At least part of the contradictory nature of the "facts" is that much of this work was done before the advent of computing power, which complicated things because the type of non-normality had to be considered and was hard to examine without simulations. Kowalski's analysis concludes that the distribution of $r$ is not robust in the presence of non-normality and recommends alternative procedures. The entire paper is quite informative and recommended reading, but skip to the very short conclusion at the end of the paper for a summary. If asked to choose between Spearman and Pearson when normality is violated, the distribution-free alternative is worth advocating, i.e. Spearman's method.

Previously: Spearman's correlation is a rank-based correlation measure; it's non-parametric and does not rest on an assumption of normality. The sampling distribution for Pearson's correlation does assume normality; in particular this means that although you can compute it, conclusions based on significance testing may not be sound. As Rob points out in the comments, with large samples this is not an issue. With small samples, though, where normality is violated, Spearman's correlation should be preferred.

Update: Mulling over the comments and the answers, it seems to me that this boils down to the usual non-parametric vs. parametric tests debate. Much of the literature, e.g. in biostatistics, doesn't deal with large samples. I'm generally not cavalier about relying on asymptotics. Perhaps it's justified in this case, but that's not readily apparent to me.
1,046
Pearson's or Spearman's correlation with non-normal data
I think the figures of gross-error sensitivity and asymptotic variance in the paper below, together with this quotation from it, make the point clearer: "The Kendall correlation measure is more robust and slightly more efficient than Spearman's rank correlation, making it the preferable estimator from both perspectives." Source: Croux, C. and Dehon, C. (2010). Influence functions of the Spearman and Kendall correlation measures. Statistical Methods and Applications, 19, 497-515.
1,047
Pearson's or Spearman's correlation with non-normal data
Even though this is an age-old question, I would like to contribute the (cool) observation that Pearson's $\rho$ is nothing but the slope of the least-squares trend line between $Y$ and $X$ after both variables have had their means removed and have been scaled by their standard deviations: after standardizing, Pearson's $r$ is the least-squares solution of $\hat Y = \hat X \hat\beta$, where $\hat Y = Y / \sigma_Y$ and $\hat X = X / \sigma_X$.

This leads to a quite easy decision rule between the two: plot $Y$ over $X$ (a simple scatter plot) and add a trend line. If the trend looks out of place, then don't use Pearson's $\rho$. Bonus: you get to visualize your data, which is never a bad thing.

If you aren't comfortable with Pearson's $\rho$, then Spearman's rank correlation improves on this because it rescales both the x-axis and the y-axis in a non-linear way (rank encoding) and then fits the trend line in the transformed space. In practice, this seems to work well, and it improves robustness towards outliers and skew, as others have pointed out. In theory, I do find Spearman's rank a bit funny, because rank encoding is a transformation that maps real numbers onto a discrete sequence of numbers. Fitting a linear regression onto discrete numbers makes no sense as such, so what actually happens is that we re-embed the sequence into the real numbers using the natural embedding and fit a regression in that space instead. It seems to work well enough in practice, but I do find it funny. Instead of using Spearman's rank, it may be better to commit fully to the rank encoding and go with Kendall's $\tau$ instead, even though we then lose the relationship with Pearson's $\rho$.

Pearson's $\rho$ from least squares. We start from the desire to fit a linear regression model $Y = X\hat\beta + b$ to our observations using least squares. Here $X$ is a vector of observations and $Y$ is another vector of matching observations. If we are happy to assume that $X$ and $Y$ have had their means removed ($\mu_X = \mu_Y = 0$, easy enough to arrange), we can reduce the model to $Y = X\hat\beta$. For this, there exists the closed-form solution $\hat\beta = (X^\top X)^{-1} X^\top Y$. In this vector notation, $\text{Cov}(X, Y) = E[XY] - E[X]E[Y] = E[XY]$, which is $X^\top Y$ up to a factor of $1/n$ (we removed the means), and similarly $\sigma_X^2 = \text{Var}(X) = \text{Cov}(X, X)$, which is $X^\top X$ up to the same factor. Rewriting $\hat\beta$ in these terms gives $\hat\beta = \frac{\text{Cov}(X,Y)}{\sigma_X^2}$. Plugging this back into the model and rescaling both variables, $\hat Y = Y / \sigma_Y$ and $\hat X = X / \sigma_X$, results in $\hat Y = \frac{\text{Cov}(X,Y)}{\sigma_X \sigma_Y} \hat X$, where the slope is exactly Pearson's $\rho$. The rescaling by the standard deviations is what makes the coefficient variance-normalized.
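A quick numerical check of this identity (my own sketch; SciPy assumed available):

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2 * x + rng.normal(size=500)

# Standardize both variables, then fit ys = beta * xs by least squares (no intercept)
xs = (x - x.mean()) / x.std()
ys = (y - y.mean()) / y.std()
beta = (xs @ ys) / (xs @ xs)  # closed-form least-squares slope

print(beta, pearsonr(x, y)[0])  # the two numbers agree up to floating-point error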
1,048
Why normalize images by subtracting dataset's image mean, instead of the current image mean in deep learning?
Subtracting the dataset mean serves to "center" the data. Additionally, you would ideally like to divide by the standard deviation of each feature or pixel as well, if you want to normalize each feature value to a z-score. The reason we do both of those things is that, in the process of training our network, we're going to be multiplying (weights) and adding to (biases) these initial inputs in order to cause activations, which we then backpropagate with the gradients to train the model. We'd like each feature to have a similar range in this process, so that our gradients don't go out of control (and so that we only need one global learning-rate multiplier). Another way you can think about it: deep learning networks traditionally share many parameters. If you didn't scale your inputs in a way that resulted in similarly ranged feature values (i.e., over the whole dataset, by subtracting the mean), sharing wouldn't happen very easily, because to one part of the image a given weight w would be large while to another it would be too small. You will see in some CNN models that per-image whitening is used, which is more along the lines of your thinking.
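A minimal NumPy sketch of dataset-level, per-channel normalization (the array shapes and values here are made up for illustration):

import numpy as np

# images: (N, H, W, 3) float array standing in for a training set
rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(32, 64, 64, 3)).astype(np.float32)

# Statistics computed once over the WHOLE training set, per channel
mean = images.mean(axis=(0, 1, 2))  # shape (3,)
std = images.std(axis=(0, 1, 2))    # shape (3,)

normalized = (images - mean) / std  # broadcasts over N, H, W

# At test time, reuse the *training* mean/std; never the statistics of the test image

The key point is that the statistics are computed once over the training set and then reused unchanged on new data, so relative differences between images survive.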
1,049
Why normalize images by subtracting dataset's image mean, instead of the current image mean in deep learning?
Prior to batch normalization, mean subtraction per channel was used to center the data around zero mean for each channel (R, G, B). This typically helps the network to learn faster since gradients act uniformly for each channel. I suspect if you use batch normalization, the per channel mean subtraction pre-processing step is not really necessary since you are normalizing per mini-batch anyway.
1,050
Why normalize images by subtracting dataset's image mean, instead of the current image mean in deep learning?
Per-image normalization is common, and it is even the only built-in normalization currently in TensorFlow (primarily because it is very easy to implement). It is used for the exact reason you mentioned (day vs. night for the same image). However, if you imagine a more ideal scenario where lighting is controlled, then the relative differences between images would be of great value to the algorithm, and we wouldn't want to wipe them out with per-image normalization (and we would want to normalize in the context of the entire training data set).
1,051
Why normalize images by subtracting dataset's image mean, instead of the current image mean in deep learning?
There are two aspects to this topic:

Normalization to keep all data on the same scale: the outcome is going to be similar whether you normalize on a per-image basis or across the entire image dataset.

Preservation of relative information: this is where normalizing on a per-image versus a per-set basis makes a big difference.

For example, if you want to train a CNN to recognize night scenes vs. daytime scenes and you normalize on a per-image basis, the network will fail miserably, because all the images will be scaled to look alike. Another pitfall of per-image normalization is that you may artificially amplify image sensor shot noise (e.g. for very dark scenes), and this will throw off the CNN by making it confuse such noise with useful information.

A last word of caution on normalization: if it is done incorrectly, it can lead to unrecoverable loss of information, for example image clipping (generating values below the valid range of the image datatype) or saturation (values above the valid range). This is a classic mistake when operating with uint8 variables to represent images: values either go below 0 or exceed 255 due to normalization / pre-processing operations. Once this happens, image information is lost and cannot be recovered, so the CNN will fail to learn any useful information from those image pixels.
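The uint8 pitfall is easy to demonstrate (a tiny self-contained sketch; the pixel values are arbitrary). NumPy's unsigned integer arithmetic wraps around; other pipelines may clip or saturate instead, but information is destroyed either way:

import numpy as np

img = np.array([10, 100, 250], dtype=np.uint8)

# WRONG: arithmetic in uint8 wraps around instead of going negative
bad = img - np.uint8(128)  # 10 - 128 becomes 138, not -118

# RIGHT: promote to a signed/float type before centering
good = img.astype(np.float32) - 128.0

print(bad)   # [138 228 122]  <- the first two pixels are now garbage
print(good)  # [-118.  -28.  122.]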
1,052
Why normalize images by subtracting dataset's image mean, instead of the current image mean in deep learning?
This is called preprocessing the data before using it. You can preprocess in many ways, but there is one condition: you should process every example with the same function, X_preproc = f(X), and this f(.) should not depend on the data itself. If you use the current image's mean to process that image, then your f(X) is really f(X, image), and you don't want that. The image contrast normalization you were talking about serves a different purpose: contrast normalization helps with the features, while the f(.) above helps optimization by keeping all the features numerically comparable to each other (approximately, of course).
1,053
Batch gradient descent versus stochastic gradient descent
The applicability of batch or stochastic gradient descent really depends on the error manifold expected.

Batch gradient descent computes the gradient using the whole dataset. This is great for convex, or relatively smooth, error manifolds. In this case, we move somewhat directly towards an optimum solution, either local or global. Additionally, batch gradient descent, given an annealed learning rate, will eventually find the minimum located in its basin of attraction.

Stochastic gradient descent (SGD) computes the gradient using a single sample. Most applications of SGD actually use a minibatch of several samples, for reasons that will be explained a bit later. SGD works well (not well, I suppose, but better than batch gradient descent) for error manifolds that have lots of local maxima/minima. In this case, the somewhat noisier gradient calculated using the reduced number of samples tends to jerk the model out of local minima into a region that hopefully is more optimal. Single samples are really noisy, while minibatches tend to average a little of the noise out. Thus, the amount of jerk is reduced when using minibatches. A good balance is struck when the minibatch size is small enough to avoid some of the poor local minima, but large enough that it doesn't avoid the global minimum or better-performing local minima. (Incidentally, this assumes that the best minima have a larger and deeper basin of attraction, and are therefore easier to fall into.)

One benefit of SGD is that it's computationally a whole lot faster. Large datasets often can't be held in RAM, which makes vectorization much less efficient. Rather, each sample or batch of samples must be loaded, worked with, the results stored, and so on. Minibatch SGD, on the other hand, is usually intentionally made small enough to be computationally tractable. Usually, this computational advantage is leveraged by performing many more iterations of SGD, making many more steps than conventional batch gradient descent. This usually results in a model that is very close to the one that would be found via batch gradient descent, or better.

The way I like to think of how SGD works is to imagine that I have one point that represents my input distribution. My model is attempting to learn that input distribution. Surrounding the input distribution is a shaded area that represents the input distributions of all of the possible minibatches I could sample. It's usually a fair assumption that the minibatch input distributions are close in proximity to the true input distribution. Batch gradient descent, at all steps, takes the steepest route to reach the true input distribution. SGD, on the other hand, chooses a random point within the shaded area, and takes the steepest route towards this point. At each iteration, though, it chooses a new point. The average of all of these steps will approximate the true input distribution, usually quite well.
1,054
Batch gradient descent versus stochastic gradient descent
As the other answer suggests, the main reason to use SGD is to reduce the computation cost of the gradient while still largely maintaining the gradient direction when averaged over many minibatches or samples, which surely helps bring you to a local minimum.

Why minibatch works. The mathematics behind this is that the "true" gradient of the cost function (the gradient for the generalization error, or for an infinitely large sample set) is the expectation of the gradient $g$ over the true data-generating distribution $p_{data}$. The actual gradient $\hat{g}$ computed over a batch of samples is always an approximation to the true gradient, using the empirical data distribution $\hat{p}_{data}$: $$ \hat{g} = E_{\hat{p}_{data}}\left({\partial J(\theta)\over \partial \theta}\right) $$ Batch gradient descent can bring you the best possible "optimal" gradient given all your data samples, but it is still not the "true" gradient. A smaller batch (i.e. a minibatch) is probably not as good an approximation as the full batch, but they are both approximations, as is the single-sample minibatch (SGD).

Assuming there is no dependence between the $m$ samples in one minibatch, the computed $\hat{g}(m)$ is an unbiased estimate of the true gradient. The squared standard errors of the estimates with different minibatch sizes are inversely proportional to the minibatch sizes. That is, $$ {SE({\hat{g}(n)}) \over SE({\hat{g}(m)})} = { \sqrt {m \over n}} $$ I.e., the reduction of the standard error is the square root of the increase in sample size. This means that if the minibatch size is small, the learning rate has to be small too, in order to achieve stability under the large variance. When the samples are not independent, the property of unbiased estimation is no longer maintained; that requires you to shuffle the samples before training, if the samples are not sequenced randomly enough.

Why minibatch may work better. Firstly, minibatch makes some learning problems tractable that would otherwise be technically intractable, owing to the reduced computation demand of a smaller batch size. Secondly, a reduced batch size does not necessarily mean reduced gradient accuracy. The training samples may contain a lot of noise, outliers, or bias. A randomly sampled minibatch may reflect the true data-generating distribution better (or no worse) than the original full batch. If some iterations of the minibatch gradient updates give you a better estimate, overall the averaged result of one epoch can be better than the gradient computed from a full batch. Thirdly, minibatch not only helps deal with unpleasant data samples, but also helps deal with unpleasant cost functions that have many local minima. As Jason_L_Bens mentions, sometimes the error manifolds may more easily trap a regular gradient into a local minimum, while it is more difficult to trap the temporarily random gradient computed with a minibatch.

Finally, with gradient descent, you are not reaching the global minimum in one step, but iterating on the error manifold. The gradient largely gives you only the direction in which to iterate. With minibatch, you can iterate much faster. In many cases, the more iterations, the better the point you can reach. You do not really care at all whether the point is optimal globally or even locally. You just want to reach a reasonable model that brings you acceptable generalization error. Minibatch makes that easier.

You may find that the book "Deep Learning" by Ian Goodfellow et al. has pretty good discussions on this topic, if you read through it carefully.
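To make the $\sqrt{m/n}$ scaling concrete, here is a tiny self-contained simulation (my own construction, not from the book): the standard error of a minibatch mean shrinks with the square root of the minibatch size.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=100_000)  # stand-in for per-sample gradients

def se_of_minibatch_mean(m, trials=2000):
    means = [rng.choice(data, size=m).mean() for _ in range(trials)]
    return np.std(means)

print(se_of_minibatch_mean(10))   # roughly 1/sqrt(10)  ~ 0.32
print(se_of_minibatch_mean(100))  # roughly 1/sqrt(100) = 0.10, sqrt(10) times smaller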
1,055
Batch gradient descent versus stochastic gradient descent
To me, batch gradient descent resembles lean gradient. In lean gradient, the batch size is chosen so that every parameter that is to be updated is also varied independently, but not necessarily orthogonally, within the batch. For example, if the batch contains 10 experiments (10 rows), then it is possible to form $2^{10-1} = 512$ independent columns; 10 rows thus enable the independent, but not orthogonal, update of 512 parameters.
1,056
Batch gradient descent versus stochastic gradient descent
If you want to see the difference as a formula, this might help. The batch gradient descent update for a squared-error cost over $m$ training points is $$ \theta := \theta - \alpha \, \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x^{(i)} $$ where $m$ indicates the number of training data points. In batch gradient descent, in order to calculate the gradient of the cost function, we sum the contribution of every sample. If we have 3 million samples, we have to loop through all 3 million samples, or use the dot product:

import numpy as np

def gradientDescent(X, y, theta, alpha, num_iters):
    """Performs batch gradient descent to learn theta."""
    m = y.size  # number of training examples
    for i in range(num_iters):
        y_hat = np.dot(X, theta)  # predictions for all m samples at once
        theta = theta - alpha * (1.0 / m) * np.dot(X.T, y_hat - y)  # full-batch step
    return theta

Do you see np.dot(X.T, y_hat - y) above? That's the vectorized version of "looping through (summing) all 3 million samples". Wait... just to move a single step towards the minimum, do we really have to calculate each cost 3 million times? Yes, if you insist on using batch gradient descent. But if you use stochastic gradient descent, you don't have to! In SGD, we use the cost gradient of ONE (1) example at each iteration, instead of adding up and using the costs of ALL examples.
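For contrast, here is a minimal SGD counterpart to the function above (my own sketch, not from the original post): one randomly chosen example per parameter update, with a fresh shuffle each epoch.

import numpy as np

def sgd(X, y, theta, alpha, num_epochs):
    """Stochastic gradient descent: update theta using ONE example at a time."""
    m = y.size  # number of training examples
    for _ in range(num_epochs):
        for i in np.random.permutation(m):  # visit the samples in random order
            y_hat_i = np.dot(X[i], theta)   # prediction for a single sample
            theta = theta - alpha * (y_hat_i - y[i]) * X[i]  # single-sample step
    return theta

Each epoch still touches every example once, but the parameters move after every sample rather than once per pass, which is where the many extra (cheap) steps come from.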
1,057
Batch gradient descent versus stochastic gradient descent
Imagine you are going down a hill toward the valley of minimum height. You may use batch gradient descent to calculate the direction to the valley once and just go there, but on that path you may run into an uphill stretch. It is better to avoid that, and this is what the idea behind stochastic gradient descent is about: sometimes it is better to take small steps. A note on terminology: "batch" gradient descent is not what you might intuitively think. It refers to one pass over the complete dataset per iteration; in that context the entire dataset is called the batch, and we compute the average gradient over it as our estimate of the true gradient. Stochastic gradient descent instead updates from a single example, or a small subset of examples, per iteration; when a small subset is used, you will hear the word minibatch.
1,058
Batch gradient descent versus stochastic gradient descent
As mentioned in the above answers, the noise in stochastic gradient descent helps you escape "bad" stationary points. For example, you can see how gradient descent (in pink) gets stuck in a saddle point while stochastic gradient descent (in yellow) escapes it. This picture is taken from here.
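A tiny numerical sketch of the same effect (my own illustration; the saddle function, starting point, learning rate, and noise scale are all assumed, and are not taken from the linked figure): on the saddle $f(x,y) = x^2 - y^2$, plain gradient descent started exactly on the stable direction $y = 0$ converges to the saddle point and stays there, while adding SGD-like noise to the gradient kicks the iterate off that direction and lets it escape.

import numpy as np

def grad_f(p):
    # Gradient of the saddle f(x, y) = x^2 - y^2
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

rng = np.random.default_rng(1)
lr, steps = 0.05, 200

for noise in (0.0, 0.1):  # 0.0 mimics batch GD; 0.1 mimics a noisy SGD gradient
    p = np.array([1.0, 0.0])  # start exactly on the stable manifold y = 0
    for _ in range(steps):
        g = grad_f(p) + noise * rng.normal(size=2)
        p = p - lr * g
    print(f"noise = {noise}: final point = {p}")

With noise = 0.0 the iterate ends (approximately) at the saddle point (0, 0); with noise the y coordinate grows and the iterate moves away (this f is unbounded below, so the sketch only illustrates escaping the saddle, not convergence).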
1,059
Correlations with unordered categorical variables
It depends on what sense of a correlation you want. When you run the prototypical Pearson product-moment correlation, you get a measure of the strength of association and you get a test of the significance of that association. More typically, however, the significance test and the measure of effect size differ.

Significance tests:
- Continuous vs. Nominal: run an ANOVA. In R, you can use ?aov.
- Nominal vs. Nominal: run a chi-squared test. In R, you use ?chisq.test.

Effect size (strength of association):
- Continuous vs. Nominal: calculate the intraclass correlation. In R, you can use ?ICC in the psych package; there is also an ICC package.
- Nominal vs. Nominal: calculate Cramer's V. In R, you can use ?assocstats in the vcd package.
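If you happen to work in Python rather than R, the nominal-vs-nominal pair above (chi-squared test plus Cramer's V) can be sketched as follows (my own illustration; the contingency table is made up):

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 table of counts cross-tabulating two nominal variables
table = np.array([[30, 10, 20],
                  [15, 25, 10]])

chi2, p, dof, expected = chi2_contingency(table)

# Cramer's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi-squared = {chi2:.2f}, p = {p:.4f}, Cramer's V = {v:.3f}")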
1,060
Correlations with unordered categorical variables
I've seen the following cheatsheet linked before: https://stats.idre.ucla.edu/other/mult-pkg/whatstat/ It may be useful to you. It even has links to specific R libraries.
1,061
Correlations with unordered categorical variables
If you want a correlation matrix of categorical variables, you can use the following wrapper function (requiring the vcd package):

library(vcd)

catcorrm <- function(vars, dat) {
  # For every pair of variables, cross-tabulate and extract Cramer's V
  sapply(vars, function(y)
    sapply(vars, function(x)
      assocstats(table(dat[, x], dat[, y]))$cramer))
}

Where:
- vars is a string vector of the categorical variables you want to correlate
- dat is a data.frame containing the variables

The result is a matrix of Cramer's V values.
1,062
Correlations with unordered categorical variables
Depends on what you want to achieve. Let $X$ be the continuous, numerical variable and $K$ the (unordered) categorical variable. Then one possible approach is to assign numerical scores $t_i$ to each of the possible values of $K$, $i=1, \dots, p$. One possible criterion is to maximize the correlation between $X$ and the scores $t_i$. With only one continuous and one categorical variable this might not be very helpful, since the maximum correlation will always be one (showing that, and finding some such scores, is an exercise in using Lagrange multipliers!). With multiple variables, we try to find compromise scores for the categorical variables, maybe trying to maximize the multiple correlation $R^2$. Then the individual correlations will no longer (except in very special cases) equal one. Such an analysis can be seen as a generalization of multiple correspondence analysis, and is known under many names, such as canonical correlation analysis, homogeneity analysis, and many others. An implementation in R is in the homals package (on CRAN). Googling some of these names will give a wealth of information, and there is a complete book: Albert Gifi, "Nonlinear Multivariate Analysis". Good luck!
1,063
Correlations with unordered categorical variables
I had a similar problem. I tried the chi-squared test as suggested, but I got very confused assessing the p-values against the null hypothesis. I will explain how I interpreted the categorical variable; I am not sure how relevant it is in your case. I had a response variable Y and two predictor variables X1 and X2, with X2 being a categorical variable with two levels, say 1 and 2. I was trying to fit a linear model:

ols <- lm(Y ~ X1 + X2, data = mydata)

But I wanted to understand how the different levels of X2 fit the above equation. I came across the R function by() (note that the grouping factor must be referenced as mydata$X2):

by(mydata, mydata$X2, function(x) summary(lm(Y ~ X1, data = x)))

What this code does is fit the linear model separately for each level of X2. This gave me the p-values, R-squared, and residual standard error, which I understand and can interpret. Again, I am not sure if this is what you want; I essentially compared the different values of X2 in predicting Y.
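For readers working in Python, a rough analogue of the by() idiom above can be sketched with statsmodels (my own illustration; the names Y, X1, X2 mirror the answer, but the data here is synthetic):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
mydata = pd.DataFrame({
    "X1": rng.normal(size=n),
    "X2": rng.choice([1, 2], size=n),  # two-level categorical predictor
})
mydata["Y"] = 2.0 * mydata["X1"] + 0.5 * mydata["X2"] + rng.normal(size=n)

# Fit Y ~ X1 separately within each level of X2, as by() does in R
for level, sub in mydata.groupby("X2"):
    fit = smf.ols("Y ~ X1", data=sub).fit()
    print(f"X2 = {level}: slope = {fit.params['X1']:.3f}, "
          f"R^2 = {fit.rsquared:.3f}")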
1,064
Correlations with unordered categorical variables
To measure the strength of the link between two categorical variables, I would suggest using a cross tab with the chi-squared statistic. To measure the strength of the link between a numerical and a categorical variable, you can use a comparison of means to see whether the mean changes significantly from one category to another.
1,065
What's the difference between probability and statistics?
The short answer to this I've heard from Persi Diaconis is the following: The problems considered by probability and statistics are inverse to each other. In probability theory we consider some underlying process which has some randomness or uncertainty modeled by random variables, and we figure out what happens. In statistics we observe something that has happened, and try to figure out what underlying process would explain those observations.
1,066
What's the difference between probability and statistics?
I like the example of a jar of red and green jelly beans. A probabilist starts by knowing the proportion of each and asks the probability of drawing a red jelly bean. A statistician infers the proportion of red jelly beans by sampling from the jar.
1,067
What's the difference between probability and statistics?
It's misleading to say that statistics is simply the inverse of probability. Yes, statistical questions are questions of inverse probability, but they are ill-posed inverse problems, and this makes a big difference in terms of how they are addressed. Probability is a branch of pure mathematics: probability questions can be posed and solved using axiomatic reasoning, and therefore there is one correct answer to any probability question. Statistical questions can be converted to probability questions by the use of probability models. Once we make certain assumptions about the mechanism generating the data, we can answer statistical questions using probability theory. HOWEVER, the proper formulation and checking of these probability models is just as important, or even more important, than the subsequent analysis of the problem using these models. One could say that statistics comprises two parts. The first part is the question of how to formulate and evaluate probabilistic models for the problem; this endeavor lies within the domain of the "philosophy of science". The second part is the question of obtaining answers after a certain model has been assumed. This part of statistics is indeed a matter of applied probability theory, and in practice it also contains a fair deal of numerical analysis. See: http://bactra.org/reviews/error/
1,068
What's the difference between probability and statistics?
I like this from Steven Skiena's Calculated Bets (see the link for the complete discussion): In summary, probability theory enables us to find the consequences of a given ideal world, while statistical theory enables us to measure the extent to which our world is ideal.
1,069
What's the difference between probability and statistics?
Table 3.1 of Intuitive Biostatistics answers this question with the diagram shown below. Note that all the arrows point to the right for probability, and point to the left for statistics.

PROBABILITY
  General    --->  Specific
  Population --->  Sample
  Model      --->  Data

STATISTICS
  General    <---  Specific
  Population <---  Sample
  Model      <---  Data
1,070
What's the difference between probability and statistics?
Probability is a pure science (math); statistics is about data. They are connected, since probability forms a kind of foundation for statistics, providing its basic ideas.
1,071
What's the difference between probability and statistics?
Probability answers questions about what will happen, statistics answers questions about what did happen.
1,072
What's the difference between probability and statistics?
Probability is about quantifying uncertainty, whereas statistics is about explaining the variation in some measure of interest (e.g., why do income levels vary?) that we observe in the real world. We explain the variation by using some observable factors (e.g., gender, education level, age, etc. for the income example). However, since we cannot possibly take into account all possible factors that affect income, we leave any unexplained variation to random errors (which is where quantifying uncertainty comes in). Since we attribute "Variation = Effect of Observable Factors + Effect of Random Errors", we need the tools provided by probability to account for the effect of random errors on the variation that we observe. Some examples follow.

Quantifying uncertainty:
Example 1: You roll a 6-sided die. What is the probability of obtaining a 1?
Example 2: What is the probability that the annual income of an adult person selected at random from the United States is less than $40,000?

Explaining variation:
Example 1: We observe that the annual income of a person varies. What factors explain the variation in a person's income? Clearly, we cannot account for all factors. Thus, we attribute a person's income to some observable factors (e.g., education level, gender, age, etc.) and leave any remaining variation to uncertainty (or, in the language of statistics, to random errors).
Example 2: We observe that some consumers choose Tide most of the time they buy a detergent, whereas other consumers choose detergent brand xyz. What explains the variation in choice? We attribute the variation in choices to some observable factors such as price, brand name, etc. and leave any unexplained variation to random errors (or uncertainty).
1,073
What's the difference between probability and statistics?
Probability is the embrace of uncertainty, while statistics is an empirical, ravenous pursuit of the truth (damned liars excluded, of course).
1,074
What's the difference between probability and statistics?
Similar to what Mark said, Statistics was historically called Inverse Probability, since statistics tries to infer the causes of an event given the observations, while probability tends to be the other way around.
1,075
What's the difference between probability and statistics?
The probability of an event is its long-run relative frequency. So it's basically telling you the chance of, for example, getting a 'head' on the next flip of a coin, or getting a '3' on the next roll of a die. A statistic is any numerical measure computed from a sample of the population. For example, the sample mean. We use this as a statistic which estimates the population mean, which is a parameter. So basically it's giving you some kind of summary of a sample. You can only get a statistic from a sample, otherwise if you compute a numerical measure on a population, it is called a population parameter.
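A minimal numerical illustration of the statistic-versus-parameter distinction (my own sketch; the population and sample here are simulated, not part of the answer): the population mean is a parameter, while the sample mean computed from a random sample is a statistic that estimates it.

import numpy as np

rng = np.random.default_rng(7)
population = rng.normal(loc=50, scale=10, size=1_000_000)

mu = population.mean()  # population parameter (normally unknown in practice)
sample = rng.choice(population, size=100, replace=False)
x_bar = sample.mean()   # statistic computed from the sample

print(f"population mean (parameter): {mu:.2f}")
print(f"sample mean (statistic):     {x_bar:.2f}")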
1,076
What's the difference between probability and statistics?
Probability studies, well, how probable events are. You intuitively know what probability is. Statistics is the study of data: showing it (using tools such as charts), summarizing it (using means and standard deviations etc.), reaching conclusions about the world from which that data was drawn (fitting lines to data etc.), and -- this is key -- quantifying how sure we can be about our conclusions. In order to quantify how sure we can be about our conclusions we need to use Probability. Let's say you have last year's data about rainfall in the region where you live and where I live. Last year it rained an average of 1/4 inch per week where you live, and 3/8 inch where I live. So we can say that rainfall in my region is on average 50% greater than where you live, right? Not so fast, Sparky. It could be a coincidence: maybe it just happened to rain a lot last year where I live. We can use Probability to estimate how confident we can be in our conclusion that my home is 50% soggier than yours. So basically you can say that Probability is the mathematical foundation for the Theory of Statistics.
1,077
What's the difference between probability and statistics?
In probability theory, we are given random variables $X_1, X_2, \dots$ in some way, and then we study their properties, i.e. calculate the probability $P\{X_1 \in B_1\}$, study the convergence of $X_1, X_2, \dots$, etc. In mathematical statistics, we are given $n$ realizations of some random variable $X$ and a set of distributions $D$; the problem is to find among the distributions in $D$ the one which is most likely to have generated the data we observed.
1,078
What's the difference between probability and statistics?
In probability, the distribution is known and knowable in advance - you start with a known probability distribution function (or similar), and sample from it. In statistics, the distribution is unknown in advance. It may even be unknowable. Assumptions are hypothesised about the probability distribution behind observed data, in order to be able to apply probability theory to that data in order to know whether a null hypothesis about that data can be rejected or not. There is a philosophical discussion about whether there is such a thing as probability in the real world, or whether it is an ideal figment of our mathematical imaginations, and all our observations can only be statistical.
1,079
What's the difference between probability and statistics?
Probability: Given known parameters, find the probability of observing a particular set of data. Statistics: Given a particular set of observed data, make an inference about what the parameters might be. Statistics is "more subjective" and "more art than science" (relative to probability).

$$\underline{\text{Example}}$$

We have a coin that can be flipped. Let $p$ be the proportion of coin flips that are heads.

Probability: Suppose $p=\frac{1}{2}$. Then what's the probability of getting $HHH$ (three heads in a row)? Most probabilists would give the same, simple answer: "The probability is $\frac{1}{8}$."

Statistics: Suppose we get $HHH$. Then what's $p$? Different statisticians will give different, often long-winded answers.
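To make the contrast concrete, a small sketch (my own illustration, not part of the original answer): two standard estimators of $p$ after observing $HHH$ already disagree, since the maximum-likelihood estimate is $3/3 = 1$ while a Bayesian with a uniform prior reports the posterior mean $(3+1)/(3+2) = 4/5$.

from fractions import Fraction

heads, flips = 3, 3  # observed data: HHH

# Maximum-likelihood estimate: the sample proportion of heads
p_mle = Fraction(heads, flips)

# Uniform Beta(1, 1) prior: the posterior is Beta(heads + 1, tails + 1),
# whose mean is (heads + 1) / (flips + 2) (Laplace's rule of succession)
p_bayes = Fraction(heads + 1, flips + 2)

print(f"MLE:            p = {p_mle}")    # prints 1
print(f"Posterior mean: p = {p_bayes}")  # prints 4/5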
1,080
What's the difference between probability and statistics?
Statistics is the pursuit of truth in the face of uncertainty. Probability is the tool that allows us to quantify uncertainty. (I have provided another, longer, answer that assumed that what was being asked was something along the lines of "how would you explain it to your grandmother?")
1,081
What's the difference between probability and statistics?
Answer #1: Statistics is parametrized Probability. Any book on measure-theoretic Probability will tell you about the Probability triplet: $(\Omega, \mathcal F, P)$. But if you're doing Statistics, you have to add $\theta$ to the above: $(\Omega, \mathcal F, P_\theta)$, i.e. for different values of $\theta$, you get different probability measures (different distributions). Answer #2: Probability is about going forward; Statistics is about going backward. Probability is about the process of generating (simulating) data given a value of $\theta$. Statistics is about the process of taking data to draw conclusions about $\theta$. Disclaimer: the above are mathematical answers. In reality, much of Statistics is also about designing/discovering appropriate models, questioning existing models, designing experiments, dealing with imperfect data, etc. "All models are wrong."
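A tiny sketch of the two directions in Answer #2 (my own illustration; the Bernoulli model and the estimator are assumed for concreteness): probability runs forward, simulating data from a given $\theta$; statistics runs backward, recovering an estimate of $\theta$ from the data.

import numpy as np

rng = np.random.default_rng(42)

# Forward (probability): given theta, generate data
theta = 0.3                         # known parameter of a Bernoulli model
data = rng.random(10_000) < theta   # simulate 10,000 coin flips

# Backward (statistics): given data, infer theta
theta_hat = data.mean()             # maximum-likelihood estimate
print(f"true theta = {theta}, estimated theta = {theta_hat:.3f}")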
1,082
What's the difference between probability and statistics?
The difference between probability and statistics is that in probability there is no mistake. We are sure of the probability because we know exactly how many sides a coin has, or how many blue caramels are in the vase. But in statistics we examine a piece (a sample) of whatever population we study, and from this we try to see the truth, yet there is always an α% chance of wrong conclusions. The only thing in statistics that is certain is this α% error rate, which is in fact a probability.
1,083
What's the difference between probability and statistics?
Savage's text Foundations of Statistics has been cited over 12,000 times on Google Scholar. It says the following:

It is unanimously agreed that statistics depends somehow on probability. But, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel. Doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis.

https://en.wikipedia.org/wiki/Foundations_of_statistics

So the point that probability theory is a foundation of statistics is hardly disputed. Everything else is fair game. But in trying to be more helpful and practical with an answer:

However, probability theory contains much that is mostly of mathematical interest and not directly relevant to statistics. Moreover, many topics in statistics are independent of probability theory.

https://en.wikipedia.org/wiki/Probability_and_statistics

The above is not exhaustive or authoritative by any means, but I believe it's useful. It has commonly helped me to see things as

Discrete Mathematics >> Probability Theory >> Statistics

with each being heavily used, on average, in the foundations of the next; that is, there are large intersections in how we study each one's foundations. P.S. There is inductive and deductive statistics, so that's not where the difference lies.
1,084
What's the difference between probability and statistics?
The term "statistics" is beautifully explained by J. C. Maxwell in the article Molecules (in Nature 8, 1873, pp. 437–441). Let me quote the relevant passage: When the working members of Section F get hold of a Report of the Census, or any other document containing the numerical data of Economic and Social Science, they begin by distributing the whole population into groups, according to age, income-tax, education, religious belief, or criminal convictions. The number of individuals is far too great to allow of their tracing the history of each separately, so that, in order to reduce their labour within human limits, they concentrate their attention on small number of artificial groups. The varying number of individuals in each group, and not the varying state of each individual, is the primary datum from which they work. This, of course, is not the only method of studying human nature. We may observe the conduct of individual men and compare it with that conduct which their previous character and their present circumstances, according to the best existing theory, would lead us to expect. Those who practise this method endeavour to improve their knowledge of the elements of human nature, in much the same way as an astronomer corrects the elements of a planet by comparing its actual position with that deduced from the received elements. The study of human nature by parents and schoolmasters, by historians and statesmen, is therefore to be distinguished from that carried on by registrars and tabulators, and by those statesmen who put their faith in figures. The one may be called the historical, and the other the statistical method. The equations of dynamics completely express the laws of the historical method as applied to matter, but the application of these equations implies a perfect knowledge of all the data. But the smallest portion of matter which we can subject to experiment consists of millions of molecules, not one of which ever becomes individually sensible to us. We cannot, therefore, ascertain the actual motion of any one of these molecules, so that we are obliged to abandon the strict historical method, and to adopt the statistical method of dealing with large groups of molecules. He gives this explanation of the statistical method in several other works. For example, "In the statistical method of investigation, we do not follow the system during its motion, but we fix our attention on a particular phase, and ascertain whether the system is in that phase or not, and also when it enters the phase and when it leaves it" (Trans. Cambridge Philos. Soc. 12, 1879, pp. 547–570). There's another beautiful passage by Maxwell about "probability" (from a letter to Campbell, 1850, reprinted in The Life of James Clerk Maxwell, p. 143): the actual science of Logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore the true Logic for this world is the Calculus of Probabilities, which takes account of the magnitude of the probability (which is, or which ought to be in a reasonable man's mind). So we can say: – In statistics we are "concentrating our attention on small number of artificial groups" or quantities; we're making a sort of cataloguing or census. – In probability we are calculating our uncertainty about some events or quantities. The two are distinct, and we can be doing the one without the other. 
For example, if we make a complete census of the entire population of a nation and count the exact number of people belonging to particular groups such as age, gender, and so on, we are doing statistics. There's no uncertainty – probability – involved, because the numbers we find are exact and known. On the other hand, imagine someone passing in front of us on the street, and we wonder about their age. In this case we're uncertain and we use probability, but there is no statistics involved, since we aren't making some sort of census or catalogue. But the two can also occur together. If we can't make a complete census of a population, we have to guess how many people are in specific age-gender groups. Hence we're using probability while doing statistics. Vice versa, we can consider exact statistical data about people's ages, and from such data try to make a better guess about the person passing in front of us. Hence we're using statistics while deciding upon a probability.
1,085
What's the difference between probability and statistics?
Many people, mathematicians included, say that 'statistics is the inverse of probability', but that is not quite right. The methods of approach for the two are quite different, but they are deeply interconnected. I would like to refer to my friend John D. Cook: "I like the example of a jar of red and green jelly beans. A probabilist starts by knowing the proportion of each and, let's say, finds the probability of drawing a red jelly bean. A statistician infers the proportion of red jelly beans by sampling from the jar." The proportion of red jelly beans obtained by sampling from the jar can then be used by the probabilist to find the probability of drawing a red bean from the jar. Consider this example: in an examination, 30% of students failed in physics, 25% failed in maths, and 12% failed in both physics and maths. A student is selected at random; find the probability that the student failed in physics, given that it is known he failed in maths. This is a problem of probability, but if we look carefully we will find that it is stated in terms of statistical data: the percentages of students who failed physics, maths, and both are frequencies expressed as proportions. So we are provided with statistical data, which in turn helps us to find the probability. Probability and statistics are thus very much interconnected; indeed, probability calculations often depend heavily on statistical inputs.
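Working the example through with the definition of conditional probability makes the dependence on the statistical inputs explicit:

$$ P(\text{fails Physics} \mid \text{fails Maths}) = \frac{P(\text{fails both})}{P(\text{fails Maths})} = \frac{0.12}{0.25} = 0.48 $$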
1,086
What is the difference between linear regression on y with x and x with y?
The best way to think about this is to imagine a scatterplot of points with $y$ on the vertical axis and $x$ represented by the horizontal axis. Given this framework, you see a cloud of points, which may be vaguely circular, or may be elongated into an ellipse. What you are trying to do in regression is find what might be called the 'line of best fit'. However, while this seems straightforward, we need to figure out what we mean by 'best', and that means we must define what it would be for a line to be good, or for one line to be better than another, etc. Specifically, we must stipulate a loss function. A loss function gives us a way to say how 'bad' something is, and thus, when we minimize that, we make our line as 'good' as possible, or find the 'best' line.

Traditionally, when we conduct a regression analysis, we find estimates of the slope and intercept so as to minimize the sum of squared errors. These are defined as follows:

$$ SSE=\sum_{i=1}^N(y_i-(\hat\beta_0+\hat\beta_1x_i))^2 $$

In terms of our scatterplot, this means we are minimizing the (sum of the squared) vertical distances between the observed data points and the line.

On the other hand, it is perfectly reasonable to regress $x$ onto $y$, but in that case, we would put $x$ on the vertical axis, and so on. If we kept our plot as is (with $x$ on the horizontal axis), regressing $x$ onto $y$ (again, using a slightly adapted version of the above equation with $x$ and $y$ switched) means that we would be minimizing the sum of the horizontal distances between the observed data points and the line. This sounds very similar, but is not quite the same thing. (The way to recognize this is to do it both ways, and then algebraically convert one set of parameter estimates into the terms of the other. Comparing the first model with the rearranged version of the second model, it becomes easy to see that they are not the same.)

Note that neither way would produce the same line we would intuitively draw if someone handed us a piece of graph paper with points plotted on it. In that case, we would draw a line straight through the center, but minimizing the vertical distance yields a line that is slightly flatter (i.e., with a shallower slope), whereas minimizing the horizontal distance yields a line that is slightly steeper.

A correlation is symmetrical; $x$ is as correlated with $y$ as $y$ is with $x$. The Pearson product-moment correlation can be understood within a regression context, however. The correlation coefficient, $r$, is the slope of the regression line when both variables have been standardized first. That is, you first subtracted off the mean from each observation, and then divided the differences by the standard deviation. The cloud of data points will now be centered on the origin, and the slope would be the same whether you regressed $y$ onto $x$, or $x$ onto $y$ (but note the comment by @DilipSarwate below).

Now, why does this matter? Using our traditional loss function, we are saying that all of the error is in only one of the variables (viz., $y$). That is, we are saying that $x$ is measured without error and constitutes the set of values we care about, but that $y$ has sampling error. This is very different from saying the converse.

This was important in an interesting historical episode: In the late 70's and early 80's in the US, the case was made that there was discrimination against women in the workplace, and this was backed up with regression analyses showing that women with equal backgrounds (e.g., qualifications, experience, etc.) were paid, on average, less than men. Critics (or just people who were extra thorough) reasoned that if this was true, women who were paid equally with men would have to be more highly qualified, but when this was checked, it was found that although the results were 'significant' when assessed the one way, they were not 'significant' when checked the other way, which threw everyone involved into a tizzy. See here for a famous paper that tried to clear the issue up.

(Updated much later) Here's another way to think about this that approaches the topic through the formulas instead of visually: The formula for the slope of a simple regression line is a consequence of the loss function that has been adopted. If you are using the standard Ordinary Least Squares loss function (noted above), you can derive the formula for the slope that you see in every intro textbook. This formula can be presented in various forms; one of which I call the 'intuitive' formula for the slope. Consider this form for both the situation where you are regressing $y$ on $x$, and where you are regressing $x$ on $y$:

$$ \overbrace{\hat\beta_1=\frac{\text{Cov}(x,y)}{\text{Var}(x)}}^{y\text{ on } x}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\overbrace{\hat\beta_1=\frac{\text{Cov}(y,x)}{\text{Var}(y)}}^{x\text{ on }y} $$

Now, I hope it's obvious that these would not be the same unless $\text{Var}(x)$ equals $\text{Var}(y)$. If the variances are equal (e.g., because you standardized the variables first), then so are the standard deviations, and thus the variances would both also equal $\text{SD}(x)\text{SD}(y)$. In this case, $\hat\beta_1$ would equal Pearson's $r$, which is the same either way by virtue of the principle of commutativity:

$$ \overbrace{r=\frac{\text{Cov}(x,y)}{\text{SD}(x)\text{SD}(y)}}^{\text{correlating }x\text{ with }y}~~~~~~~~~~~~~~~~~~~~~~~~~~~\overbrace{r=\frac{\text{Cov}(y,x)}{\text{SD}(y)\text{SD}(x)}}^{\text{correlating }y\text{ with }x} $$
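A quick numerical check of these slope formulas in R (the data here are simulated and the variable names are mine, just for illustration):

    set.seed(1)
    x <- rnorm(100)
    y <- 0.5 * x + rnorm(100)      # y related to x, plus noise

    # The two regressions give different slopes: Cov over Var of the predictor
    coef(lm(y ~ x))["x"]           # equals cov(x, y) / var(x)
    coef(lm(x ~ y))["y"]           # equals cov(x, y) / var(y)

    # After standardizing both variables, each slope equals Pearson's r
    zx <- as.numeric(scale(x)); zy <- as.numeric(scale(y))
    coef(lm(zy ~ zx))["zx"]
    coef(lm(zx ~ zy))["zy"]
    cor(x, y)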
1,087
What is the difference between linear regression on y with x and x with y?
I'm going to illustrate the answer with some R code and output. First, we construct a random normal distribution, y, with a mean of 5 and an SD of 1:

    y <- rnorm(1000, mean=5, sd=1)

Next, I purposely create a second variable, x, which is simply 5 times the value of y for each y:

    x <- y*5

By design, we have perfect correlation of x and y:

    cor(x,y)
    [1] 1
    cor(y,x)
    [1] 1

However, when we do a regression, we are looking for a function that relates x and y, so the results of the regression coefficients depend on which one we use as the dependent variable and which we use as the independent variable. In this case, we don't fit an intercept because we made x a function of y with no random variation:

    lm(y ~ x - 1)

    Call:
    lm(formula = y ~ x - 1)

    Coefficients:
      x
    0.2

    lm(x ~ y - 1)

    Call:
    lm(formula = x ~ y - 1)

    Coefficients:
    y
    5

So the regressions tell us that y = 0.2x and that x = 5y, which of course are equivalent. The correlation coefficient is simply showing us that there is an exact match in unit change levels between x and y, so that (for example) a 1-unit increase in x always corresponds to a 0.2-unit increase in y, and a 1-unit increase in y to a 5-unit increase in x.
1,088
What is the difference between linear regression on y with x and x with y?
The insight that, since Pearson's correlation is the same whether we do a regression of x against y or y against x, we should get the same linear regression, is an appealing one. It is only slightly incorrect, and we can use it to understand what is actually occurring.

This is the equation for a line, which is what we are trying to get from our regression:

$$ \hat y = \hat\beta_0 + \hat\beta_1 x $$

The equation for the slope of that line is driven by Pearson's correlation:

$$ \hat\beta_1 = r\,\frac{s_y}{s_x} $$

This is the equation for Pearson's correlation. It is the same whether we are regressing x against y or y against x:

$$ r = \frac{\sum_{i=1}^n (x_i-\bar x)(y_i-\bar y)}{\sqrt{\sum_{i=1}^n (x_i-\bar x)^2}\,\sqrt{\sum_{i=1}^n (y_i-\bar y)^2}} $$

However, when we look back at our second equation, for the slope, we see that Pearson's correlation is not the only term in that equation. If we are calculating y against x, we also have the sample standard deviation of y divided by the sample standard deviation of x. If we were to calculate the regression of x against y, we would need to invert those two terms, giving $\hat\beta_1 = r\,s_x/s_y$ instead.
1,089
What is the difference between linear regression on y with x and x with y?
On questions like this it's easy to get caught up on the technical issues, so I'd like to focus specifically on the question in the title of the thread, which asks: What is the difference between linear regression on y with x and x with y?

Consider for a moment a (simplified) econometric model from human capital theory (the link goes to an article by Nobel Laureate Gary Becker). Let's say we specify a model of the following form: \begin{equation} \text{wages} = b_{0} + b_{1}~\text{years of education} + \text{error} \end{equation} This model can be interpreted as a causal relationship between wages and education. Importantly, causality in this context means the direction of causality runs from education to wages and not the other way round. This is implicit in the way the model has been formulated; the dependent variable is wages and the independent variable is years of education.

Now, if we reverse the econometric equation (that is, change y on x to x on y), such that the model becomes \begin{equation} \text{years of education} = b_{0} + b_{1}~\text{wages} + \text{error} \end{equation} then implicit in the formulation of the econometric equation is that we are saying that the direction of causality runs from wages to education.

I'm sure you can think of more examples like this one (outside the realm of economics too), but as you can see, the interpretation of the model can change quite significantly when we switch from regressing y on x to x on y. So, to answer the question What is the difference between linear regression on y with x and x with y?, we can say that the interpretation of the regression equation changes when we regress x on y instead of y on x. We shouldn't overlook this point, because a model that has a sound interpretation can quickly turn into one which makes little or no sense.
1,090
What is the difference between linear regression on y with x and x with y?
Expanding on @gung's excellent answer: In a simple linear regression the absolute value of Pearson's $r$ can be seen as the geometric mean of the two slopes we obtain if we regress $y$ on $x$ and $x$ on $y$, respectively:

$$\sqrt{{\hat{\beta}_1}_{y\,on\,x} \cdot {\hat{\beta}_1}_{x\,on\,y}} = \sqrt{\frac{\text{Cov}(x,y)}{\text{Var}(x)} \cdot \frac{\text{Cov}(y,x)}{\text{Var}(y)}} = \frac{|\text{Cov}(x,y)|}{\text{SD}(x) \cdot \text{SD}(y)} = |r| $$

We can obtain $r$ directly using

$$r = \operatorname{sign}({\hat{\beta}_1}_{y\,on\,x}) \cdot \sqrt{{\hat{\beta}_1}_{y\,on\,x} \cdot {\hat{\beta}_1}_{x\,on\,y}} $$

or

$$r = \operatorname{sign}({\hat{\beta}_1}_{x\,on\,y}) \cdot \sqrt{{\hat{\beta}_1}_{y\,on\,x} \cdot {\hat{\beta}_1}_{x\,on\,y}} $$

Interestingly, by the AM–GM inequality, it follows that the absolute value of the arithmetic mean of the two slope coefficients is greater than (or equal to) the absolute value of Pearson's $r$:

$$ \left|\frac{1}{2} \cdot ({\hat{\beta}_1}_{y\,on\,x} + {\hat{\beta}_1}_{x\,on\,y})\right| \geq \sqrt{{\hat{\beta}_1}_{y\,on\,x} \cdot {\hat{\beta}_1}_{x\,on\,y}} = |r| $$
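This identity is easy to verify numerically; a short R sketch (simulated data, names are mine), deliberately using a negative relationship to check that the sign is recovered too:

    set.seed(42)
    x <- rnorm(200)
    y <- -0.7 * x + rnorm(200)       # negative association on purpose

    b_yx <- coef(lm(y ~ x))["x"]     # slope from regressing y on x
    b_xy <- coef(lm(x ~ y))["y"]     # slope from regressing x on y

    sign(b_yx) * sqrt(b_yx * b_xy)   # equals Pearson's r, sign included
    cor(x, y)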
1,091
What is the difference between linear regression on y with x and x with y?
There is a very interesting phenomenon regarding this topic. After exchanging x and y, although the regression coefficient changes, the t-statistic/F-statistic and the significance level for the coefficient do not. This is true even in multiple regression, where we exchange y with one of the independent variables. It is due to a delicate relation between the F-statistic and the (partial) correlation coefficient. That relation really touches the core of linear model theory. There are more details about this conclusion in my notebook: Why exchange y and x has no effect on p
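For the simple-regression case this is easy to check, since the slope's t-statistic can be written as $t = r\sqrt{n-2}/\sqrt{1-r^2}$, which is symmetric in $x$ and $y$. A brief R sketch (simulated data, names are mine):

    set.seed(7)
    x <- rnorm(50)
    y <- 0.4 * x + rnorm(50)

    # The slope t-statistics (and hence p-values) match in both directions
    summary(lm(y ~ x))$coefficients["x", "t value"]
    summary(lm(x ~ y))$coefficients["y", "t value"]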
1,092
What is the difference between linear regression on y with x and x with y?
The relation is not symmetric because we are solving two different optimisation problems. Doing regression of $y$ given $x$ can be written as solving the following problem: $$\min_b \mathbb E(Y - bX)^2$$ whereas doing regression of $x$ given $y$ solves: $$\min_b \mathbb E(X - bY)^2$$ which, after the change of variable $b \mapsto 1/b$, can be rewritten as: $$\min_b \frac{1}{b^2} \mathbb E(Y - bX)^2$$ The two objectives differ by the factor $1/b^2$, so in general they are minimized at different values of $b$. It is also important to note that two different-looking problems may have the same solution.
1,093
What is the difference between linear regression on y with x and x with y?
This question can also be answered from a linear algebra perspective. Say you have a bunch of data points $(x,y)$. We want to find the line $y=mx+b$ that's closest to all our points (the regression line). As an example, say we have the points $(1,2),(2,4.5),(3,6),(4,7)$. We can look at this as a simultaneous equation problem, where each point supplies one equation $m x_i + b = y_i$:

\begin{align} & m\cdot 1 + b = 2 \\ & m\cdot 2 + b = 4.5 \\ & m\cdot 3 + b = 6 \\ & m\cdot 4 + b = 7 \end{align}

In matrix form:

$$ \left[\begin{matrix} 1 & 1 \\ 2 & 1 \\ 3 & 1 \\ 4 & 1 \end{matrix}\right] \left[\begin{matrix} m \\ b \\ \end{matrix}\right]=\left[\begin{matrix} 2 \\ 4.5 \\ 6 \\ 7 \end{matrix}\right] $$

We see right away that $\vec{y}=(2,4.5,6,7)$ (the right-hand-side vector) is not in the span of the columns of our matrix, meaning we will not find an $(m,b)$ that solves our system. The closest vector to $\vec{y}$ we can find in our column space is the projection $\vec p$ of $\vec{y}$ on the column space. If we swap out $\vec{y}$ with its projection $\vec p$ on the column space, and solve our system of equations for $\vec p$, we get the least squares solution, aka the regression line. I.e. we can solve

$$ \left[\begin{matrix} 1 & 1 \\ 2 & 1 \\ 3 & 1 \\ 4 & 1 \end{matrix}\right] \left[\begin{matrix} m \\ b \\ \end{matrix}\right]=\left[\begin{matrix} p_1 \\ p_2 \\ p_3 \\ p_4 \end{matrix}\right] $$

to obtain the regression line $y=mx+b$ (here $m$ is the slope, the regression coefficient normally called $\hat\beta_1$). If you did $x=my+b$ instead, you'd have:

$$ \left[\begin{matrix} 2 & 1 \\ 4.5 & 1 \\ 6 & 1 \\ 7 & 1 \end{matrix}\right] \left[\begin{matrix} m \\ b \\ \end{matrix}\right]=\left[\begin{matrix} 1 \\ 2 \\ 3 \\ 4 \end{matrix}\right] $$

To find the regression line, we'd have to solve this system using the projection $\vec r$ of $\vec x = (1,2,3,4)$ onto the column space of our new matrix. That is, we swap $(1,2,3,4)$ with its projection $(r_1,r_2,r_3,r_4)$ on the span of $(2,4.5,6,7)$ and $(1,1,1,1)$ and solve the system. You can solve it by hand if you want to and compare it to a least squares solution found by a computer.

The idea that the regression of y given x or x given y should be the same is equivalent, in linear algebra terms, to asking whether $\vec p=\vec r$. We know that $\vec p$ is in $span (\vec x,\vec b)$ and $\vec r$ is in $span (\vec y,\vec b)$, where $\vec b = (1,1,1,1)$. We know that $\vec x \neq c \vec y$, since this is what motivated us to look for a regression line in the first place. Therefore, the intersection of $span (\vec x,\vec b)$ and $span (\vec y,\vec b)$ is $c \vec b$. So if $\vec p=\vec r$, then $\vec p=\vec r = c \vec b$. What type of line is $c\vec b = c(1,1,1,\dots)$? On the plane, it's $y=x$. It's the line that goes out 45° from the axes of your plot. Most of the time our regression lines will not be of the $y=x$ type. So we can see how regression is usually not symmetric.

The correlation is symmetric however. From a linear algebra perspective the correlation (aka pearson(x,y)) is $\cos(\theta)$, where $\theta$ is the angle between $\vec x$ and $\vec y$ (more precisely, between their mean-centered versions). In the example, the correlation/pearson(x,y) is the $\cos(\theta)$ of $(1,2,3,4)$ and $(2,4.5,6,7)$ after each is centered at its mean. Clearly the angle between $\vec x$ and $\vec y$ is equal to the angle between $\vec y$ and $\vec x$, so the correlation must be symmetric too.
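As suggested, you can let a computer do the projection and compare it with an off-the-shelf least-squares fit. A small R sketch using the four example points (helper names are mine):

    A <- cbind(c(1, 2, 3, 4), 1)     # columns: the x-values and the all-ones vector
    y <- c(2, 4.5, 6, 7)

    # Least-squares coefficients (m, b) from the normal equations
    solve(t(A) %*% A, t(A) %*% y)

    # The projection p of y onto the column space of A (the fitted values)
    A %*% solve(t(A) %*% A, t(A) %*% y)

    # lm() agrees (it reports the intercept b first, then the slope m)
    coef(lm(y ~ A[, 1]))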
1,094
What is the difference between linear regression on y with x and x with y?
Well, it's true that for a simple bivariate regression, the linear correlation coefficient and R-squared will be the same for both equations. But the slopes will be $rS_y/S_x$ or $rS_x/S_y$, which are not reciprocals of each other unless $|r| = 1$.
1,095
Why does a time series have to be stationary?
Stationarity is one type of dependence structure. Suppose we have data $X_1,...,X_n$. The most basic assumption is that the $X_i$ are independent, i.e. that we have a random sample. Independence is a nice property, since using it we can derive a lot of useful results. The problem is that sometimes (or frequently, depending on the view) this property does not hold.

Now independence is a unique property: two random variables can be independent in only one way, but they can be dependent in various ways. So stationarity is one way of modeling the dependence structure. It turns out that a lot of the nice results which hold for independent random variables (the law of large numbers and the central limit theorem, to name a few) also hold for stationary random variables (strictly speaking, sequences). And of course it turns out that a lot of data can be considered stationary, so the concept of stationarity is very important in modeling non-independent data.

When we have determined that we have stationarity, naturally we want to model it. This is where ARMA (AutoRegressive Moving Average) models come in. It turns out that any stationary data can be approximated by a stationary ARMA model, thanks to the Wold decomposition theorem. So that is why ARMA models are very popular, and that is why we need to make sure that the series is stationary before using these models.

Now again the same story holds as with independence and dependence. Stationarity is defined uniquely, i.e. data is either stationary or not, so there is only one way for data to be stationary, but lots of ways for it to be non-stationary. Again it turns out that a lot of data becomes stationary after a certain transformation. The ARIMA (AutoRegressive Integrated Moving Average) model is one model for non-stationarity: it assumes that the data becomes stationary after differencing.

In the regression context stationarity is important, since the same results which apply to independent data hold if the data is stationary.
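A minimal R illustration of this modeling workflow, with simulated series standing in for real data:

    set.seed(123)
    # A stationary ARMA(1,1) series, then an ARMA fit that recovers it
    z <- arima.sim(model = list(ar = 0.6, ma = 0.3), n = 500)
    arima(z, order = c(1, 0, 1))     # estimates should be near ar = 0.6, ma = 0.3

    # A non-stationary I(1) series, which ARIMA handles by differencing
    x <- cumsum(rnorm(500))          # random walk
    arima(x, order = c(0, 1, 0))     # models diff(x) as white noise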
1,096
Why does a time series have to be stationary?
What quantities are we typically interested in when we perform statistical analysis on a time series? We want to know:

- Its expected value,
- Its variance, and
- The correlation between values $s$ periods apart for a set of $s$ values.

How do we calculate these things? Using a mean across many time periods.

The mean across many time periods is only informative if the expected value is the same across those time periods. If these population parameters can vary, what are we really estimating by taking an average across time?

(Weak) stationarity requires that these population quantities be the same across time, making the sample average a reasonable way to estimate them. In addition to this, stationary processes avoid the problem of spurious regression.
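Concretely, these are the time averages computed in practice; a brief R sketch with a simulated stationary series (the AR parameter is arbitrary):

    set.seed(1)
    y <- arima.sim(model = list(ar = 0.5), n = 1000)

    mean(y)                            # estimates the common expected value
    var(y)                             # estimates the common variance
    acf(y, lag.max = 5, plot = FALSE)  # estimates correlations s periods apart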
1,097
Why does a time series have to be stationary?
To add a high-level answer to some of the other answers that are good but more detailed: stationarity is important because, in its absence, a model describing the data will vary in accuracy at different time points. As such, stationarity is required for sample statistics such as means, variances, and correlations to accurately describe the data at all time points of interest. Looking at the time series plots below, you can (hopefully) see how the mean and variance of any given segment of time would do a good job representing the whole stationary time series but a relatively poor job representing the whole non-stationary time series. For instance, the mean of the non-stationary time series is much lower from $600<t<800$, and its variance is much higher in this range, than in the range from $200<t<400$.

[Figure: plots of a stationary time series and a non-stationary time series illustrating these differences.]
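A sketch of how one might reproduce this comparison in R (the simulated series are stand-ins for the data behind the original figure):

    set.seed(10)
    stat  <- arima.sim(model = list(ar = 0.5), n = 1000)  # stationary AR(1)
    nonst <- cumsum(rnorm(1000))                          # random walk

    # Segment statistics agree for the stationary series only
    c(mean(stat[200:400]),  mean(stat[600:800]))
    c(mean(nonst[200:400]), mean(nonst[600:800]))
    c(var(nonst[200:400]),  var(nonst[600:800]))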
1,098
Why does a time series have to be stationary?
An underlying idea in statistical learning is that you can learn by repeating an experiment. For example, we can keep flipping a thumbtack to learn the probability that a thumbtack lands on its head. In the time-series context, we observe a single run of a stochastic process rather than repeated runs of the stochastic process. We observe one long experiment rather than multiple, independent experiments. We need stationarity and ergodicity so that observing a long run of a stochastic process is similar to observing many independent runs of a stochastic process.

Some (imprecise) definitions

Let $\Omega$ be a sample space. A stochastic process $\{Y_t\}$ is a function of both time $t \in \{1, 2, 3, \ldots\}$ and outcome $\omega \in \Omega$.

- For any time $t$, $Y_t$ is a random variable (i.e. a function from $\Omega$ to some space, such as the space of real numbers).
- For any outcome $\omega$, the series $Y(\omega)$ is a time series of real numbers: $\{Y_1(\omega), Y_2(\omega), Y_3(\omega), \ldots \}$

A fundamental issue in time series

In Statistics 101, we're taught about a series of independent and identically distributed variables $X_1$, $X_2$, $X_3$, etc. We observe multiple, identical experiments $i = 1, \ldots, n$ where an $\omega_i \in \Omega$ is randomly chosen, and this allows us to learn about the random variable $X$. By the Law of Large Numbers, $\frac{1}{n} \sum_{i=1}^n X_i$ converges almost surely to $\operatorname{E}[X]$.

A fundamental difference in the time-series setting is that we're observing multiple observations over time $t$ rather than multiple draws from $\Omega$. In the general case, the sample mean of a stochastic process $\frac{1}{T} \sum_{t=1}^T Y_t$ may not converge to anything at all! For multiple observations over time to accomplish a similar task as multiple draws from the sample space, we need stationarity and ergodicity. If an unconditional mean $\operatorname{E}[Y]$ exists and the conditions for the ergodic theorem are satisfied, the time-series sample mean $\frac{1}{T}\sum_{t =1}^T Y_t$ will converge to the unconditional mean $\operatorname{E}[Y]$.

Example 1: failure of stationarity

Let $\{Y_t\}$ be the degenerate process $Y_t = t$. We can see that $\{Y_t\}$ is not stationary (the joint distribution is not time-invariant). Let $S_t = \frac{1}{t} \sum_{i=1}^t Y_i$ be the time-series sample mean; it's obvious that $S_t$ doesn't converge to anything as $t \rightarrow \infty$: $S_1 = 1, S_2 = \frac{3}{2}, S_3 = 2, \ldots, S_t = \frac{t+1}{2}$. A time-invariant mean of $Y_t$ doesn't exist, and $S_t$ is unbounded as $t \rightarrow \infty$.

Example 2: failure of ergodicity

Let $X$ be the result of a single coin flip. Let $Y_t = X$ for all $t$; that is, either $\{Y_t\} = (0, 0, 0, 0, 0, 0, 0, \ldots)$ or $\{Y_t\} = (1, 1, 1, 1, 1, 1, 1, \ldots)$. Even though $\operatorname{E}[Y_t] = \frac{1}{2}$, the time-series sample mean $S_t = \frac{1}{t} \sum_{i = 1}^t Y_i$ won't give you the mean of $Y_t$.
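The ergodicity failure is easy to see in a simulation; a tiny R sketch of the coin-flip example (my own illustration):

    set.seed(2)
    X <- rbinom(1, size = 1, prob = 0.5)   # one coin flip, fixed forever
    Y <- rep(X, 1000)                      # Y_t = X for all t

    mean(Y)    # the time average is X itself (0 or 1), never 1/2
    # Averaging across independent runs, by contrast, does recover 1/2
    mean(rbinom(10000, size = 1, prob = 0.5))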
1,099
Why does a time series have to be stationary?
First of all, ARIMA(p,1,q) processes are not stationary. These are so-called integrated series; e.g. $x_t=x_{t-1}+e_t$ is an ARIMA(0,1,0) or I(1) process, also known as a random walk or unit-root process. So, no, you don't need them all to be stationary. However, we often do look for stationarity. Why?

Consider the forecasting problem. How do you forecast? If everything's different tomorrow then it's impossible to forecast, because everything's going to be different. So the key to forecasting is to find something that will be the same tomorrow, and extend that to tomorrow. That something can be anything. I'll give you a couple of examples.

In the I(1) model above, we often assume (or hope) that the error distribution is the same today and tomorrow: $e_t\sim\mathcal{N}(0,\sigma^2)$. So, in this case we are saying that tomorrow the distribution will still be normal, and that its mean and variance will still be the same $0$ and $\sigma^2$. This does not yet make the series stationary, but we have found the invariant part of the process. Next, if you look at the first difference $\Delta x_t\equiv x_t-x_{t-1}=e_t$: this cat is stationary. However, understand that the goal was not really to find the stationary series $\Delta x_t$, but to find something invariant, which was the distribution of errors. It just happens that in a stationary series there are, by definition, invariant parts such as the unconditional mean and variance.

Another example: say the true series is $x_t=\alpha t+e_t$, and all we know about the errors is that their mean is zero: $E[e_t]=0$. Now we can forecast again! All we need is to estimate the growth rate $\alpha$; that, together with the zero mean of the errors, is what is invariant. Every time you find something invariant, you can forecast. For forecasting we absolutely need to find the constant (time-invariant) component in the series; otherwise it's impossible to forecast, by definition. Stationarity is just a particular case of invariance.
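The first-difference point is easy to illustrate in R with simulated data (a sketch, with my own variable names):

    set.seed(3)
    e <- rnorm(200)     # the invariant part: i.i.d. N(0, 1) errors
    x <- cumsum(e)      # x_t = x_{t-1} + e_t, an I(1) random walk

    # Differencing recovers the stationary error series exactly
    all.equal(as.numeric(diff(x)), e[-1])   # TRUE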
Why does a time series have to be stationary?
First of all, ARIMA(p,1,q) processes are not stationary. These are so-called integrated series; e.g., $x_t=x_{t-1}+e_t$ is an ARIMA(0,1,0) or I(1) process, also known as a random walk or unit-root process. So, no, you don't
Why does a time series have to be stationary?

First of all, ARIMA(p,1,q) processes are not stationary. These are so-called integrated series; e.g., $x_t=x_{t-1}+e_t$ is an ARIMA(0,1,0) or I(1) process, also known as a random walk or unit-root process. So, no, you don't need them all to be stationary. However, we often do look for stationarity. Why?

Consider the forecasting problem. How do you forecast? If everything is different tomorrow, then forecasting is impossible: there is nothing from today to carry over. So the key to forecasting is to find something that will be the same tomorrow, and extend that to tomorrow. That something can be anything. I'll give you a couple of examples.

In the I(1) model above, we often assume (or hope) that the error distribution is the same today and tomorrow: $e_t\sim\mathcal{N}(0,\sigma^2)$. In this case we are saying that tomorrow the distribution will still be normal, and that its mean and variance will still be $0$ and $\sigma^2$. This does not yet make the series stationary, but we have found the invariant part of the process. Next, if you look at the first difference, $\Delta x_t\equiv x_t-x_{t-1}=e_t$, this series is stationary. But understand that the goal was not really to find the stationary series $\Delta x_t$; it was to find something invariant, which here was the distribution of the errors. It just happens that a stationary series, by definition, contains invariant parts such as the unconditional mean and variance.

Another example: say the true series is $x_t=\alpha t+e_t$, and all we know about the errors is that their mean is zero, $E[e_t]=0$. Now we can forecast again! All we need is to estimate the growth rate $\alpha$; the invariant parts are $\alpha$ and the zero mean of the errors. Every time you find something invariant, you can forecast. For forecasting we absolutely need to find a constant (time-invariant) component in the series; otherwise it's impossible to forecast by definition. Stationarity is just a particular case of this invariance.
Why does a time series have to be stationary? First of all, ARIMA(p,1,q) processes are not stationary. These are so-called integrated series; e.g., $x_t=x_{t-1}+e_t$ is an ARIMA(0,1,0) or I(1) process, also known as a random walk or unit-root process. So, no, you don't
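Both examples in this answer can be checked numerically. The following sketch (Python/numpy, my own illustration rather than the answer's) differences a simulated random walk to recover the invariant error distribution, and estimates the invariant growth rate $\alpha$ in the trend model to produce a one-step forecast.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
e = rng.normal(0.0, 1.0, size=T)    # the invariant part: e_t ~ N(0, 1)

# I(1) example: x_t = x_{t-1} + e_t is a random walk, not stationary,
# but first-differencing recovers the invariant errors e_t.
x = np.cumsum(e)
dx = np.diff(x)
half = T // 2
print(x[:half].var(), x[half:].var())    # typically very different halves
print(dx[:half].var(), dx[half:].var())  # both close to sigma^2 = 1

# Trend example: x_t = alpha * t + e_t with E[e_t] = 0. The invariants
# are alpha and the zero error mean, so estimating alpha is enough
# to forecast the next value.
alpha = 0.3
t = np.arange(1, T + 1, dtype=float)
x_trend = alpha * t + e
alpha_hat = np.sum(t * x_trend) / np.sum(t * t)  # least squares, no intercept
print(alpha_hat, alpha_hat * (T + 1))            # ~0.3 and the forecast
```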
1,100
Why does a time series have to be stationary?
Since ARIMA mostly regresses a series on its own past values, it is a kind of self-regression (a multiple regression on lagged values) that would be unduly influenced by a strong trend or seasonality. This regression is based on previous time-series values, especially those from the most recent periods, and lets us extract the inter-relationships among past values that explain a future value. A strong trend or seasonal pattern would dominate those relationships, which is why the series is first made stationary, e.g., by differencing.
Why does a time series have to be stationary?
Since ARIMA mostly regresses a series on its own past values, it is a kind of self-regression (a multiple regression on lagged values) that would be unduly influenced by a strong trend or seasonality. This regression
Why does a time series have to be stationary? Since ARIMA mostly regresses a series on its own past values, it is a kind of self-regression (a multiple regression on lagged values) that would be unduly influenced by a strong trend or seasonality. This regression is based on previous time-series values, especially those from the most recent periods, and lets us extract the inter-relationships among past values that explain a future value. A strong trend or seasonal pattern would dominate those relationships, which is why the series is first made stationary, e.g., by differencing.
Why does a time series have to be stationary? Since ARIMA mostly regresses a series on its own past values, it is a kind of self-regression (a multiple regression on lagged values) that would be unduly influenced by a strong trend or seasonality. This regression
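To see the distortion this answer describes, here is a small sketch (my own Python illustration, not from the answer): a bare-bones AR(1) fit on a strongly trending series produces a coefficient near 1, because the trend masquerades as persistence; detrending first reveals that the series has no genuine lag-1 dependence to model.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2_000
t = np.arange(T, dtype=float)

# White noise around a strong linear trend: the noise has no lag-1
# dependence at all, but the trend dominates the self-regression.
x = 0.5 * t + rng.normal(0.0, 1.0, size=T)

def ar1_coef(series):
    """Bare-bones AR(1) fit: least-squares slope of the demeaned
    series on its own first lag (for illustration only)."""
    z = series - series.mean()
    return np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)

print(ar1_coef(x))  # close to 1: the trend masquerades as persistence

# Detrending first reveals that there is no real dependence to model.
trend_fit = np.polyval(np.polyfit(t, x, 1), t)
print(ar1_coef(x - trend_fit))  # close to 0
```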