idx | question | answer |
---|---|---|
401 | Famous statistical quotations | The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.
Tukey |
402 | Famous statistical quotations | There are no routine statistical questions, only questionable statistical routines.
D.R. Cox |
403 | Famous statistical quotations | Statistics - A subject which most statisticians find difficult but which many physicians are experts on. "Stephen S. Senn" |
404 | Famous statistical quotations | He uses statistics like a drunken man uses a lamp post, more for support than illumination.
-- Andrew Lang |
405 | Famous statistical quotations | Strange events permit themselves the luxury of occurring.
-- Charlie Chan |
406 | Famous statistical quotations | A nice one I came across:
I think it's much more interesting to live not knowing than to have answers which might be wrong.
By Richard Feynman (link) |
407 | Famous statistical quotations | The best thing about being a statistician is that you get to play in everyone's backyard.
-- John Tukey
(This is MY favourite Tukey quote) |
408 | Famous statistical quotations | Absence of evidence is not evidence of absence.
-- Martin Rees (Wikipedia) |
409 | Famous statistical quotations | "It's easy to lie with statistics; it is easier to lie without them."
-- Frederick Mosteller |
410 | Famous statistical quotations | Say you were standing with one foot in the oven and one foot in an ice bucket. According to the percentage people, you should be perfectly comfortable.
-- Bobby Bragan, 1963 |
411 | Famous statistical quotations | Tout le monde y croit cependant, me disait un jour M. Lippmann, car les expérimentateurs s'imaginent que c'est un théorème de mathématiques, et les mathématiciens que c'est un fait expérimental.
Henri Poincaré, Calcul des probabilités (2nd ed., 1912), p. 171.
In English:
Everybody believes in the exponential law of errors [i.e., the Normal distribution]: the experimenters, because they think it can be proved by mathematics; and the mathematicians, because they believe it has been established by observation.
Whittaker, E. T. and Robinson, G. "Normal Frequency Distribution." Ch. 8 in The Calculus of Observations: A Treatise on Numerical Mathematics, 4th ed. New York: Dover, pp. 164-208, 1967, p. 179.
Quoted at Mathworld.com. |
412 | Famous statistical quotations | My greatest concern was what to call it. I thought of calling it 'information,' but the word was overly used, so I decided to call it 'uncertainty.' When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.'
Claude Elwood Shannon |
413 | Famous statistical quotations | I don't know about famous, but the following is one of my favourites:
Conducting data analysis is like drinking a fine wine. It is important to swirl and sniff the wine, to unpack the complex bouquet and to appreciate the experience. Gulping the wine doesn't work.
-Daniel B. Wright (2003), see PDF of Article.
Reference:
Wright, D. B. (2003). Making friends with your data: Improving how statistics are conducted and reported. British Journal of Educational Psychology, 73(1), 123-136. |
414 | Famous statistical quotations | ... surely, God loves the .06 nearly as much as the .05. Can there be any doubt that God views the strength of evidence for or against the null as a fairly continuous function of the magnitude of p? (p. 1277)
Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44(10), 1276-1284. pdf. |
415 | Famous statistical quotations | All we know about the world teaches us that the effects of A and B are always different---in some decimal place---for any A and B. Thus asking "are the effects different?" is foolish.
Tukey (again, but this one is my favorite) |
416 | Famous statistical quotations | On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
Charles Babbage |
417 | Famous statistical quotations | The subjectivist (i.e. Bayesian) states his judgements, whereas the objectivist sweeps them under the carpet by calling assumptions knowledge, and he basks in the glorious objectivity of science.
I.J. Good |
418 | Famous statistical quotations | Do not trust any statistics you did not fake yourself.
-- Winston Churchill |
419 | Is $R^2$ useful or dangerous? | To address the first question, consider the model
$$Y = X + \sin(X) + \varepsilon$$
with iid $\varepsilon$ of mean zero and finite variance. As the range of $X$ (thought of as fixed or random) increases, $R^2$ goes to 1. Nevertheless, if the variance of $\varepsilon$ is small (around 1 or less), the data are "noticeably non-linear." In the plots, $\mathrm{var}(\varepsilon)=1$.
Incidentally, an easy way to get a small $R^2$ is to slice the independent variables into narrow ranges. The regression (using exactly the same model) within each range will have a low $R^2$ even when the full regression based on all the data has a high $R^2$. Contemplating this situation is an informative exercise and good preparation for the second question.
Both the following plots use the same data. The $R^2$ for the full regression is 0.86. The $R^2$ values for the slices (of width 1/2 from -5/2 to 5/2) are .16, .18, .07, .14, .08, .17, .20, .12, .01, .00, reading left to right. If anything, the fits get better in the sliced situation because the 10 separate lines can more closely conform to the data within their narrow ranges. Although the $R^2$ values for all the slices are far below the full $R^2$, neither the strength of the relationship, the linearity, nor indeed any aspect of the data (except the range of $X$ used for the regression) has changed.
(One might object that this slicing procedure changes the distribution of $X$. That is true, but it nevertheless corresponds with the most common use of $R^2$ in fixed-effects modeling and reveals the degree to which $R^2$ is telling us about the variance of $X$ in the random-effects situation. In particular, when $X$ is constrained to vary within a smaller interval of its natural range, $R^2$ will usually drop.)
The basic problem with $R^2$ is that it depends on too many things (even when adjusted in multiple regression), but most especially on the variance of the independent variables and the variance of the residuals. Normally it tells us nothing about "linearity" or "strength of relationship" or even "goodness of fit" for comparing a sequence of models.
Most of the time you can find a better statistic than $R^2$. For model selection you can look to AIC and BIC; for expressing the adequacy of a model, look at the variance of the residuals.
This brings us finally to the second question. One situation in which $R^2$ might have some use is when the independent variables are set to standard values, essentially controlling for the effect of their variance. Then $1 - R^2$ is really a proxy for the variance of the residuals, suitably standardized. |
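As a minimal sketch of the slicing experiment described above (assuming NumPy, with an arbitrary sample size and seed, so the exact $R^2$ values will differ from the 0.86 and the slice values quoted, which come from the answer's own data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Model from the answer: Y = X + sin(X) + eps with var(eps) = 1
n = 1000
x = rng.uniform(-2.5, 2.5, n)
y = x + np.sin(x) + rng.normal(0.0, 1.0, n)

def r_squared(x, y):
    """R^2 of an ordinary least-squares line y ~ a + b*x."""
    b, a = np.polyfit(x, y, 1)          # slope, intercept
    resid = y - (a + b * x)
    return 1.0 - resid.var() / y.var()

print("full-range R^2:", round(r_squared(x, y), 2))

# Same data and model, but regressions run within narrow slices of X
edges = np.arange(-2.5, 3.0, 0.5)       # slice boundaries of width 1/2
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (x >= lo) & (x < hi)
    print(f"slice [{lo:+.1f}, {hi:+.1f}): R^2 = {r_squared(x[m], y[m]):.2f}")
```

The full-range fit yields a high $R^2$ while each narrow slice yields a low one, even though nothing about the underlying relationship changes.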
420 | Is $R^2$ useful or dangerous? | Your example only applies when the variable $\newcommand{\Var}{\mathrm{Var}}X$ should be in the model. It certainly doesn't apply when one uses the usual least squares estimates. To see this, note that if we estimate $a$ by least squares in your example, we get:
$$\hat{a}=\frac{\frac{1}{N}\sum_{i=1}^{N}X_{i}Y_{i}}{\frac{1}{N}\sum_{i=1}^{N}X_{i}^{2}}=\frac{\frac{1}{N}\sum_{i=1}^{N}X_{i}Y_{i}}{s_{X}^{2}+\overline{X}^{2}},$$
where $s_{X}^2=\frac{1}{N}\sum_{i=1}^{N}(X_{i}-\overline{X})^{2}$ is the (sample) variance of $X$ and $\overline{X}=\frac{1}{N}\sum_{i=1}^{N}X_{i}$ is the (sample) mean of $X$. Then
$$\hat{a}^{2}\Var[X]=\hat{a}^{2}s_{X}^{2}=\frac{\left(\frac{1}{N}\sum_{i=1}^{N}X_{i}Y_{i}\right)^2}{s_{X}^2}\left(\frac{s_{X}^{2}}{s_{X}^{2}+\overline{X}^{2}}\right)^2.$$
Now the second factor is always less than $1$ (equal to $1$ in the limit), so we get an upper bound for the contribution to $R^2$ from the variable $X$:
$$\hat{a}^{2}\Var[X]\leq \frac{\left(\frac{1}{N}\sum_{i=1}^{N}X_{i}Y_{i}\right)^2}{s_{X}^2}.$$
And so unless $\left(\frac{1}{N}\sum_{i=1}^{N}X_{i}Y_{i}\right)^2\to\infty$ as well, we will actually see $R^2\to 0$ as $s_{X}^{2}\to\infty$ (because the numerator goes to zero while the denominator goes to $\Var[\epsilon]>0$). Additionally, we may get $R^2$ converging to something in between $0$ and $1$ depending on how quickly the two terms diverge. Now the above term will generally diverge faster than $s_{X}^2$ if $X$ should be in the model, and slower if $X$ shouldn't be in the model. In both cases $R^2$ moves in the right direction.
Also note that for any finite data set (i.e. a real one) we can never have $R^2=1$ unless all the errors are exactly zero. This basically indicates that $R^2$ is a relative measure, rather than an absolute one: unless $R^2$ is actually equal to $1$, we can always find a better fitting model. This is probably the "dangerous" aspect of $R^2$: because it is scaled to be between $0$ and $1$, it seems like we can interpret it in an absolute sense.
It is probably more useful to look at how quickly $R^2$ drops as you add variables into the model. And last, but not least, it should never be ignored in variable selection, as $R^2$ is effectively a sufficient statistic for variable selection - it contains all the information on variable selection that is in the data. The only thing that is needed is to choose the drop in $R^2$ which corresponds to "fitting the errors" - which usually depends on the sample size and the number of variables. |
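A rough numerical check of the direction claimed above. This is a simplified sketch: it uses an ordinary least-squares line with an intercept rather than the no-intercept estimator in the derivation, and the sample size, noise level, and seed are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

def r_squared(x, y):
    # R^2 of an ordinary least-squares line y ~ a + b*x
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return 1.0 - resid.var() / y.var()

for sd_x in (1, 10, 100):
    x = rng.normal(0.0, sd_x, n)
    eps = rng.normal(0.0, 1.0, n)
    r2_signal = r_squared(x, x + eps)   # X really belongs in the model: R^2 rises toward 1
    r2_noise = r_squared(x, eps)        # X does not belong in the model: R^2 stays near 0
    print(f"sd(X)={sd_x:>3}:  R^2(Y=X+eps)={r2_signal:.3f}  R^2(Y=eps)={r2_noise:.3f}")
```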
421 | Is $R^2$ useful or dangerous? | If I can add an example of when $R^2$ is dangerous: many years ago I was working on some biometric data, and being young and foolish I was delighted when I found some statistically significant $R^2$ values for my fancy regressions, which I had constructed using stepwise functions. It was only afterwards, looking back after my presentation to a large international audience, that I realized that given the massive variance of the data - combined with the possible poor representation of the sample with respect to the population - an $R^2$ of 0.02 was utterly meaningless even if it was "statistically significant"...
Those working with statistics need to understand the data! |
422 | Is $R^2$ useful or dangerous? | When you have a single predictor, $R^{2}$ is exactly interpreted as the proportion of variation in $Y$ that can be explained by the linear relationship with $X$. This interpretation must be kept in mind when looking at the value of $R^2$.
You can get a large $R^2$ from a non-linear relationship only when the relationship is close to linear. For example, suppose $Y = e^{X} + \varepsilon$ where $X \sim {\rm Uniform}(2,3)$ and $\varepsilon \sim N(0,1)$. If you do the calculation of
$$ R^{2} = {\rm cor}(X, e^{X} + \varepsilon)^{2} $$
you will find it to be around $.914$ (I only approximated this by simulation), despite the fact that the relationship is clearly not linear. The reason is that $e^{X}$ looks an awful lot like a linear function over the interval $(2,3)$. |
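The quoted simulation is straightforward to reproduce approximately; a sketch assuming NumPy and an arbitrary sample size and seed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.uniform(2.0, 3.0, n)
y = np.exp(x) + rng.normal(0.0, 1.0, n)   # clearly non-linear in x

print(round(np.corrcoef(x, y)[0, 1] ** 2, 3))   # close to the ~.914 quoted above
```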
423 | Is $R^2$ useful or dangerous? | One situation in which you would want to avoid $R^2$ is multiple regression, where adding irrelevant predictor variables to the model can in some cases increase $R^2$. This can be addressed by using the adjusted $R^2$ value instead, calculated as
$\bar{R}^2 = 1 - (1-R^2)\frac{n-1}{n-p-1}$, where $n$ is the number of data samples and $p$ is the number of regressors, not counting the constant term. |
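A small helper illustrating the adjusted-$R^2$ formula above (the function name and the example numbers are made up for illustration):

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 for n data samples and p regressors (constant term not counted)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Adding regressors can raise the plain R^2 slightly while lowering the adjusted value:
print(adjusted_r2(0.80, n=50, p=3))    # about 0.787
print(adjusted_r2(0.81, n=50, p=10))   # about 0.761
```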
424 | Is $R^2$ useful or dangerous? | A good example of a high $R^2$ with a nonlinear function is the quadratic function $y=x^2$ restricted to the interval $[0,1]$. With zero noise it will not have an $R^2$ of 1 if you have 3 or more points, since they will not fit perfectly on a straight line. But if the design points are scattered uniformly on $[0, 1]$, the $R^2$ you get will be high, perhaps surprisingly so. This may not be the case if you have a lot of points near 0 and a lot near 1 with little or nothing in the middle.
$R^2$ will be poor in the perfectly linear case if the noise term has a large variance. So you can take the model $Y= x + \epsilon$, which is technically a perfect linear model, but let the variance of $\epsilon$ tend to infinity and you will have $R^2$ going to 0.
In spite of its deficiencies, $R^2$ does measure the percentage of variance in the data that is explained by the fit, and so it does measure goodness of fit. A high $R^2$ means a good fit, but we still have to be careful about the good fit being caused by too many parameters for the size of the data set that we have.
In the multiple regression situation there is the overfitting problem. Add variables and $R^2$ will always increase. The adjusted $R^2$ remedies this somewhat, as it takes account of the number of parameters being estimated. |
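A quick sketch of the first point, assuming NumPy and an arbitrary uniform grid of design points (the resulting value of roughly 0.94 is specific to this design, not a general constant):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)        # design points spread uniformly on [0, 1]
y = x ** 2                            # exact quadratic, zero noise

r2 = np.corrcoef(x, y)[0, 1] ** 2     # R^2 of the best-fitting straight line
print(round(r2, 3))                   # high (around 0.94) despite the curvature
```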
425 | How to choose a predictive model after k-fold cross-validation? | I think that you are missing something still in your understanding of the purpose of cross-validation.
Let's get some terminology straight: generally, when we say 'a model' we refer to a particular method for describing how some input data relates to what we are trying to predict. We don't generally refer to particular instances of that method as different models. So you might say 'I have a linear regression model', but you wouldn't call two different sets of the trained coefficients different models. At least not in the context of model selection.
So, when you do K-fold cross-validation, you are testing how well your model is able to get trained by some data and then predict data it hasn't seen. We use cross-validation for this because if you train using all the data you have, you have none left for testing. You could do this once, say by using 80% of the data to train and 20% to test, but what if the 20% you happened to pick to test happens to contain a bunch of points that are particularly easy (or particularly hard) to predict? We will not have come up with the best estimate possible of the model's ability to learn and predict.
We want to use all of the data. So to continue the above example of an 80/20 split, we would do 5-fold cross-validation by training the model 5 times on 80% of the data and testing on 20%. We ensure that each data point ends up in the 20% test set exactly once. We've therefore used every data point we have to contribute to an understanding of how well our model performs the task of learning from some data and predicting some new data.
But the purpose of cross-validation is not to come up with our final model. We don't use these 5 instances of our trained model to do any real prediction. For that we want to use all the data we have to come up with the best model possible. The purpose of cross-validation is model checking, not model building.
Now, say we have two models, say a linear regression model and a neural network. How can we say which model is better? We can do K-fold cross-validation and see which one proves better at predicting the test set points. But once we have used cross-validation to select the better performing model, we train that model (whether it be the linear regression or the neural network) on all the data. We don't use the actual model instances we trained during cross-validation for our final predictive model.
Note that there is a technique called bootstrap aggregation (usually shortened to 'bagging') that does in a way use model instances produced in a way similar to cross-validation to build up an ensemble model, but that is an advanced technique beyond the scope of your question here. |
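A sketch of the workflow described above, assuming scikit-learn and a synthetic dataset (neither is mentioned in the answer): cross-validation is used only to compare the two model classes, and the chosen class is then refit on all of the data.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

candidates = {
    "linear regression": LinearRegression(),
    "neural network": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}

# Model *checking*: k-fold CV estimates how well each model class learns and predicts.
cv_scores = {name: cross_val_score(est, X, y, cv=5).mean() for name, est in candidates.items()}
best_name = max(cv_scores, key=cv_scores.get)
print(cv_scores)

# Model *building*: the winning class is refit on all the data; the 5 CV fits are discarded.
final_model = candidates[best_name].fit(X, y)
```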
426 | How to choose a predictive model after k-fold cross-validation? | I found this excellent article, How to Train a Final Machine Learning Model, very helpful in clearing up all the confusion I had regarding the use of CV in machine learning.
Basically we use CV (e.g. 80/20 split, k-fold, etc.) to estimate how well your whole procedure (including the data engineering, choice of model (i.e. algorithm), hyper-parameters, etc.) will perform on future unseen data. And once you've chosen the winning "procedure", the fitted models from CV have served their purpose and can now be discarded. You then use the same winning "procedure" and train your final model using the whole data set. |
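A sketch of that idea, assuming scikit-learn and a synthetic dataset: the "procedure" here is a preprocessing-plus-model pipeline, cross-validation estimates how it will do on unseen data, and the final model is then trained on the whole data set.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# The "procedure": feature scaling followed by a particular model choice.
procedure = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Estimate how this procedure will perform on future unseen data ...
print(cross_val_score(procedure, X, y, cv=5).mean())

# ... then discard the CV fits and train the final model on the whole data set.
final_model = procedure.fit(X, y)
```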
427 | How to choose a predictive model after k-fold cross-validation? | Let me throw in a few points in addition to Bogdanovist's answer.
As you say, you train $k$ different models. They differ in that $1/(k-1)$ of the training data is exchanged against other cases. These models are sometimes called surrogate models because the (average) performance measured for these models is taken as a surrogate of the performance of the model trained on all cases.
Now, there are some assumptions in this process.
Assumption 1: the surrogate models are equivalent to the "whole data" model.
It is quite common that this assumption breaks down, and the symptom is the well-known pessimistic bias of $k$-fold cross-validation (or other resampling-based validation schemes). The performance of the surrogate models is on average worse than the performance of the "whole data" model if the learning curve still has a positive slope (i.e. fewer training samples lead to worse models).
Assumption 2 is a weaker version of assumption 1: even if the surrogate models are on average worse than the whole-data model, we assume them to be equivalent to each other. This allows summarizing the test results for $k$ surrogate models as one average performance.
Model instability leads to the breakdown of this assumption: the true performance of models trained on $N \frac{k - 1}{k}$ training cases varies a lot. You can measure this by doing iterations/repetitions of the $k$-fold cross-validation (new random assignments to the $k$ subsets) and looking at the variance (random differences) between the predictions of different surrogate models for the same case.
The finite number of cases means the performance measurement will be subject to a random error (variance) due to the finite number of test cases. This source of variance is different from (and thus adds to) the model instability variance.
The differences in the observed performance are due to these two sources of variance.
The "selection" you think about is a data set selection: selecting one of the surrogate models means selecting a subset of training samples and claiming that this subset of training samples leads to a superior model. While this may truly be the case, usually the "superiority" is spurious. In any case, as picking "the best" of the surrogate models is a data-driven optimization, you'd need to validate (measure performance) this picked model with new unknown data. The test set within this cross-validation is not independent, as it was used to select the surrogate model.
You may want to look at our paper; it is about classification, where things are usually worse than for regression. However, it shows how these sources of variance and bias add up.
Beleites, C. and Neugebauer, U. and Bocklitz, T. and Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33.
DOI: 10.1016/j.aca.2012.11.007
accepted manuscript on arXiv: 1211.1323 |
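One possible way to measure the instability described above, as a sketch assuming scikit-learn, a synthetic dataset, and a deliberately unstable learner: repeat the $k$-fold split with different random assignments and look at the spread of the out-of-fold predictions for each case.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=5.0, random_state=0)
model = DecisionTreeRegressor(random_state=0)   # a deliberately unstable learner

# Repeat 5-fold CV with different random splits; each repetition gives one
# out-of-fold (surrogate-model) prediction per case.
preds = np.column_stack([
    cross_val_predict(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=rep))
    for rep in range(20)
])

# Spread of the predictions for the same case across repetitions reflects instability.
print("mean per-case prediction std:", preds.std(axis=1).mean())
```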
428 | How to choose a predictive model after k-fold cross-validation? | Why do we use k-fold cross-validation?
Cross-validation is a method to estimate the skill of a method on unseen data, like using a train-test split.
Cross-validation systematically creates and evaluates multiple models on multiple subsets of the dataset. This, in turn, provides a population of performance measures.
We can calculate the mean of these measures to get an idea of how well the procedure performs on average.
We can calculate the standard deviation of these measures to get an idea of how much the skill of the procedure is expected to vary in practice.
This is also helpful for providing a more nuanced comparison of one procedure to another when you are trying to choose which algorithm and data preparation procedures to use.
Also, this information is invaluable, as you can use the mean and spread to give a confidence interval on the expected performance of a machine learning procedure in practice (see the sketch below).
reference |
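A minimal sketch of the mean-and-spread summary mentioned above, assuming scikit-learn and a synthetic dataset; the two-standard-deviation interval is only a rough normal approximation.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)

mean, std = scores.mean(), scores.std()
print(f"accuracy: {mean:.3f} +/- {2 * std:.3f}")   # rough interval for the expected skill
```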
429 | How to choose a predictive model after k-fold cross-validation? | It's a very interesting question. To make it clear, we should understand the difference between a model and model evaluation. We use the full training set to build a model, and we expect this model will finally be used.
K-fold cross-validation builds K models, but all of them are dropped. The K models are just used for evaluation; they only produce metrics that tell you how well this model fits your data.
For example, suppose you choose the LinearRegression algorithm and perform two operations on the same training set: one with 10-fold cross-validation and the other with 20-fold. The regression (or classifier) model should be the same, but the correlation coefficient and root relative squared error are different.
Below are two runs for 10-fold and 20-fold cross-validation with weka.
1st run with 10 fold
=== Run information ===
Test mode: 10-fold cross-validation
...
=== Classifier model (full training set) ===
Linear Regression Model <---- This model is the same
Date = 844769960.1903 * passenger_numbers -711510446549.7296
Time taken to build model: 0 seconds
=== Cross-validation === <---- Hereafter produced different metrics
=== Summary ===
Correlation coefficient 0.9206
Mean absolute error 35151281151.9807
Root mean squared error 42707499176.2097
Relative absolute error 37.0147 %
Root relative squared error 38.9596 %
Total Number of Instances 144
2nd run with 20 fold
=== Run information ===
...
Test mode: 20-fold cross-validation
=== Classifier model (full training set) ===
Linear Regression Model <---- This model is the same
Date = 844769960.1903 * passenger_numbers -711510446549.7296
Time taken to build model: 0 seconds
=== Cross-validation === <---- Hereafter produced different metrics
=== Summary ===
Correlation coefficient 0.9203
Mean absolute error 35093728104.8746
Root mean squared error 42790545071.8199
Relative absolute error 36.9394 %
Root relative squared error 39.0096 %
Total Number of Instances 144 |
430 | How to choose a predictive model after k-fold cross-validation? | I am not sure the discussion above is entirely correct. In cross-validation, we split the data into training and test sets for each run. Using the training data alone, one needs to fit the model and choose the tuning parameters for each class of models being considered. For example, in Neural Nets the tuning parameters are the number of neurons and the choice of activation function. In order to do this, one cross-validates within the training data alone.
Once the best model in each class is found, the best-fitting model is evaluated using the test data. The "outer" cross-validation loop can be used to give a better estimate of test-data performance as well as an estimate of its variability. A discussion can then compare test performance for different classes, say Neural Nets vs. SVM. One model class is chosen, with the model size fixed, and now the entire data set is used to learn the best model.
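As a minimal sketch of that workflow in R (the simulated data, the two candidate formulas, the 5-fold split and RMSE as the error measure are all illustrative assumptions, not something prescribed by this answer): tune and select on the training data only, report performance on the held-out test data once, and then refit the chosen specification on everything.
set.seed(1)
dat <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
dat$y <- 1 + 2 * dat$x1 + rnorm(200)

## Hold out a test set; all tuning/selection happens on the training part only
test_idx <- sample(nrow(dat), 50)
train <- dat[-test_idx, ]
test  <- dat[test_idx, ]

## 5-fold CV on the training data to compare two candidate specifications
folds <- sample(rep(1:5, length.out = nrow(train)))
cv_rmse <- function(form) {
  errs <- sapply(1:5, function(k) {
    fit  <- lm(form, data = train[folds != k, ])
    pred <- predict(fit, newdata = train[folds == k, ])
    sqrt(mean((train$y[folds == k] - pred)^2))
  })
  mean(errs)
}
candidates <- list(y ~ x1, y ~ x1 + x2)
best <- candidates[[which.min(sapply(candidates, cv_rmse))]]

## Estimate generalisation error once, on the untouched test set ...
fit_train <- lm(best, data = train)
sqrt(mean((test$y - predict(fit_train, newdata = test))^2))

## ... and finally refit the chosen specification on all of the data for use
final_model <- lm(best, data = dat)
The CV scores decide between the candidates, the test split is touched exactly once, and the object that actually gets used afterwards is the one refitted on the full data.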
Now, if as part of your machine learning algorithm you want to constantly select the best model class (say every week), then even this choice needs to be evaluated in the training data! Test data measurement cannot be used to judge the model class choice if it is a dynamic option. | How to choose a predictive model after k-fold cross-validation? | I am not sure the discussion above is entirely correct. In cross-validation, we can split the data into Training and Testing for each run. Using the training data alone, one needs to fit the model an | How to choose a predictive model after k-fold cross-validation?
I am not sure the discussion above is entirely correct. In cross-validation, we can split the data into Training and Testing for each run. Using the training data alone, one needs to fit the model and choose the tuning parameters in each class of models being considered. For example, in Neural Nets the tuning parameters are the number of neurons and the choices for activation function. In order to do this, one cross-validates in the training data alone.
Once the best model in each class is found, the best fit model is evaluated using the test data. The "outer" cross-validation loop can be used to give a better estimate of test data performance as well as an estimate on the variability. A discussion can then compare test performance for different classes say Neural Nets vs. SVM. One model class is chosen, with the model size fixed, and now the entire data is used to learn the best model.
Now, if as part of your machine learning algorithm you want to constantly select the best model class (say every week), then even this choice needs to be evaluated in the training data! Test data measurement cannot be used to judge the model class choice if it is a dynamic option. | How to choose a predictive model after k-fold cross-validation?
I am not sure the discussion above is entirely correct. In cross-validation, we can split the data into Training and Testing for each run. Using the training data alone, one needs to fit the model an |
431 | How to choose a predictive model after k-fold cross-validation? |
Even belatedly, let me throw in my 2 drachmas. I am of the opinion that you can train a model using KFold cross-validation as long as you do it inside a loop. I am not posting everything, just the juice. Mind you that the functions below can be used with any model, be it an sklearn predictor/pipeline, a Keras model or a PyTorch model, after making the necessary array/tensor conversions in some cases. Here goes:
#Imports needed to make the functions below self-contained
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import (accuracy_score, auc, fbeta_score,
                             precision_recall_curve)

#Get the necessary metrics (in this case for a severely imbalanced dataset)
def get_scores(model, X, y):
    pred = np.round(model.predict(X))
    probs = model.predict_proba(X)[:, 1]
    precision, recall, _ = precision_recall_curve(y, probs)
    accu = accuracy_score(y, pred)
    pr_auc = auc(recall, precision)
    f2 = fbeta_score(y, pred, beta=2)
    return pred, accu, pr_auc, f2

#Train model with KFold cross-validation; the returned model is the one
#refitted on the last training fold
def train_model(model, X, y):
    accu_list, pr_auc_list, f2_list = [], [], []
    #Recent scikit-learn versions require keyword arguments here; with
    #shuffle=False (the original setting) no random_state is needed
    kf = StratifiedKFold(n_splits=5, shuffle=False)
    for train, val in kf.split(X, y):
        X_train, y_train = X[train], y[train]
        X_val, y_val = X[val], y[val]
        model.fit(X_train, y_train)
        _, accu, pr_auc, f2 = get_scores(model, X_val, y_val)
        accu_list.append(accu)
        pr_auc_list.append(pr_auc)
        f2_list.append(f2)
    print(f'Training Accuracy: {np.mean(accu_list):.3f}')
    print(f'Training PR_AUC: {np.mean(pr_auc_list):.3f}')
    print(f'Training F2: {np.mean(f2_list):.3f}')
    return model
So, after training and when you want to predict unknown data you would just do:
fitted_model=train_model(model,X,y)
I hope that works out for you. In my case it works every time. Now, computational cost is a different - and very sad - matter. | How to choose a predictive model after k-fold cross-validation? | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| How to choose a predictive model after k-fold cross-validation?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Even belatedly, let me throw in my 2 drachmas. I am of the opinion that you can train
a model using KFold cross-validation as long as you do it inside a loop. I am not posting everything, just the juice. Mind you that the functions below can be used with any model, be it an sklearn predictor/pipeline, a Keras or a Pytorch model after affecting the necessary array/tensor conversions is some cases. Here goes:
#Get the necessary metrics (in this case for a severely imbalanced dataset)
def get_scores(model,X,y):
pred=np.round(model.predict(X))
probs=model.predict_proba(X)[:,1]
precision,recall,_=precision_recall_curve(y,probs)
accu=accuracy_score(y,pred)
pr_auc=auc(recall,precision)
f2=fbeta_score(y,pred,beta=2)
return pred,accu,pr_auc,f2
#Train model with KFold cross-validation
def train_model(model,X,y):
accu_list,pr_auc_list,f2_list=[],[],[]
kf=StratifiedKFold(5,False,seed)
for train,val in kf.split(X,y):
X_train,y_train=X[train],y[train]
X_val,y_val=X[val],y[val]
model.fit(X_train,y_train)
_,accu,pr_auc,f2=get_scores(model,X_val,y_val)
accu_list.append(accu)
pr_auc_list.append(pr_auc)
f2_list.append(f2)
print(f'Training Accuracy: {np.mean(accu_list):.3f}')
print(f'Training PR_AUC: {np.mean(pr_auc_list):.3f}')
print(f'Training F2: {np.mean(f2_list):.3f}')
return model
So, after training and when you want to predict unknown data you would just do:
fitted_model=train_model(model,X,y)
I hope that works out for you. In my case it works everytime. Now, computational cost is different - and very sad - matter. | How to choose a predictive model after k-fold cross-validation?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
432 | How to choose a predictive model after k-fold cross-validation? | Consider two types of algorithms:
1. Algorithms come with hyperparameters, which do not change with different data subsets. Cross-validation might be used to evaluate
the performance of different algorithms
feature engineering
feature input
the selection of best hyperparameters.
However, with those algorithms (such as tree-based methods), different data subsets in CV may have different splitting rules. Beyond the choice of your ML procedures and hyperparameter tuning, you need a final rule to apply to your future data. Therefore, applying your final choices to the full dataset is normally required to obtain the final model.
2. Algorithms come with parameters that change with different data.
Statistical models such as (generalised) linear models might be easier to understand. CV might be used to evaluate
the performance of different models
the performance of different distributional assumptions
feature engineering
feature input
With such methods, after selecting a model and the features going into the model based on CV, you still need the regression coefficients that will be applied to future data to make predictions.
Where are those coefficients coming from then?
One way is to apply the final model to your entire dataset to obtain them (see the short sketch after these two options).
The other way is to average the coefficients obtained from the different data subsets, if the full data set is too large to fit in one go.
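A short sketch of the first option in R (full_data and new_obs are hypothetical names, and the chosen specification is assumed to have come out of the CV comparison described above):
## The coefficients used for future predictions come from one final fit on all rows
final_fit <- lm(y ~ x1 + x2, data = full_data)
coef(final_fit)                        # the deployed coefficients
predict(final_fit, newdata = new_obs)  # applied to future data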
If you are using training/validation/test splits, then I would normally treat training + validation as the full dataset, and the test split as future data to test on your final model. | How to choose a predictive model after k-fold cross-validation? | Consider two types of algorithms:
1. Algorithms come with hyperparameters, which do not change with different data subsets. Cross-validation might be used to evaluate
the performance of different alg | How to choose a predictive model after k-fold cross-validation?
Consider two types of algorithms:
1. Algorithms come with hyperparameters, which do not change with different data subsets. Cross-validation might be used to evaluate
the performance of different algorithms
feature engineering
feature input
the selection of best hyperparameters.
However, with those algorithms (such as tree based methods), different data subsets in CV may have different splitting rules. Except the choice of your ML procedures and hyperparameter tuning, you need a final rule to be applied for your future data. Therefore, applying your final choices to the full dataset is normally required to obtain the final model.
2. Algorithms come with parameters that change with different data.
Statistical models such as (generalised) linear models might be easier to understand. CV might be used to evaluate
the performance of different models
the performance of different distributional assumptions
feature engineering
feature input
With such methods, after selecting a model and features going into the model based on CV, you need regression coefficients to be applied to the future data to make prediction.
Where are those coefficients coming from then?
One way is to apply the final model to your entire dataset to obtain them.
The other way is to take average on coefficients obtained from different data subsets if the full data is too large to run in one go.
If you are using training/validation/test splits, then I would normally treat training + validation as the full dataset, and the test split as future data to test on your final model. | How to choose a predictive model after k-fold cross-validation?
Consider two types of algorithms:
1. Algorithms come with hyperparameters, which do not change with different data subsets. Cross-validation might be used to evaluate
the performance of different alg |
433 | Interpretation of R's lm() output | Five point summary
Yes, the idea is to give a quick summary of the distribution of the residuals. It should be roughly symmetrical about the mean, the median should be close to 0, and the 1Q and 3Q values should ideally be of roughly similar magnitude.
Coefficients and $\hat{\beta_i}s$
Each coefficient in the model is a Gaussian (Normal) random variable. The $\hat{\beta_i}$ is the estimate of the mean of the distribution of that random variable, and the standard error is the square root of the variance of that distribution. It is a measure of the uncertainty in the estimate of the $\hat{\beta_i}$.
You can look at how these are computed (well the mathematical formulae used) on Wikipedia. Note that any self-respecting stats programme will not use the standard mathematical equations to compute the $\hat{\beta_i}$ because doing them on a computer can lead to a large loss of precision in the computations.
$t$-statistics
The $t$ statistics are the estimates ($\hat{\beta_i}$) divided by their standard errors ($\hat{\sigma_i}$), i.e. $t_i = \frac{\hat{\beta_i}}{\hat{\sigma_i}}$. Assuming you have the same model in object mod as in your Q:
> mod <- lm(Sepal.Width ~ Petal.Width, data = iris)
then the $t$ values R reports are computed as:
> tstats <- coef(mod) / sqrt(diag(vcov(mod)))
(Intercept) Petal.Width
53.277950 -4.786461
Where coef(mod) are the $\hat{\beta_i}$, and sqrt(diag(vcov(mod))) gives the square roots of the diagonal elements of the covariance matrix of the model parameters, which are the standard errors of the parameters ($\hat{\sigma_i}$).
The p-value is the probability of achieving a $|t|$ as large as or larger than the observed absolute t value if the null hypothesis ($H_0$) was true, where $H_0$ is $\beta_i = 0$. They are computed as (using tstats from above):
> 2 * pt(abs(tstats), df = df.residual(mod), lower.tail = FALSE)
(Intercept) Petal.Width
1.835999e-98 4.073229e-06
So we compute the upper tail probability of achieving the $t$ values we did from a $t$ distribution with degrees of freedom equal to the residual degrees of freedom of the model. This represents the probability of achieving a $t$ value greater than the absolute values of the observed $t$s. It is multiplied by 2, because of course $t$ can be large in the negative direction too.
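As a quick cross-check (a sketch, assuming the mod and tstats objects from above are still in the workspace), the same estimates, standard errors, $t$ values and p-values can be read directly from the coefficients table:
summary(mod)$coefficients   # columns: Estimate, Std. Error, t value, Pr(>|t|)
all.equal(summary(mod)$coefficients[, "t value"], tstats)   # TRUE, up to floating-point tolerance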
Residual standard error
The residual standard error is an estimate of the parameter $\sigma$. The assumption in ordinary least squares is that the residuals are individually described by a Gaussian (normal) distribution with mean 0 and standard deviation $\sigma$. The $\sigma$ relates to the constant variance assumption; each residual has the same variance and that variance is equal to $\sigma^2$.
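A small sketch of how that estimate is computed from the residuals (assuming the mod object from above; sigma() is available in reasonably recent versions of R):
## Square root of the residual sum of squares over the residual degrees of freedom
sqrt(sum(residuals(mod)^2) / df.residual(mod))
sigma(mod)   # the same value that summary(mod) reports as the residual standard error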
Adjusted $R^2$
Adjusted $R^2$ is computed as:
$$1 - (1 - R^2) \frac{n - 1}{n - p - 1}$$
The adjusted $R^2$ is the same thing as $R^2$, but adjusted for the complexity (i.e. the number of parameters) of the model. Given a model with a single parameter, with a certain $R^2$, if we add another parameter to this model, the $R^2$ of the new model has to increase, even if the added parameter has no statistical power. The adjusted $R^2$ accounts for this by including the number of parameters in the model.
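To see that formula in action (a sketch assuming the mod object from above; here $n = 150$ and $p = 1$ because there is a single predictor):
s <- summary(mod)
n <- nrow(iris)   # 150 observations
p <- 1            # one predictor, Petal.Width
1 - (1 - s$r.squared) * (n - 1) / (n - p - 1)
s$adj.r.squared   # matches the Adjusted R-squared line of summary(mod)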
$F$-statistic
The $F$ is the ratio of two variances: the variance explained by the parameters in the model (the regression mean square, computed from the sum of squares of regression, SSR) and the residual or unexplained variance (the error mean square, computed from the sum of squares of error, SSE). You can see this better if we get the ANOVA table for the model via anova():
> anova(mod)
Analysis of Variance Table
Response: Sepal.Width
Df Sum Sq Mean Sq F value Pr(>F)
Petal.Width 1 3.7945 3.7945 22.91 4.073e-06 ***
Residuals 148 24.5124 0.1656
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The $F$s are the same in the ANOVA output and the summary(mod) output. The Mean Sq column contains the two variances and $3.7945 / 0.1656 = 22.91$. We can compute the probability of achieving an $F$ that large under the null hypothesis of no effect, from an $F$-distribution with 1 and 148 degrees of freedom. This is what is reported in the final column of the ANOVA table. In the simple case of a single, continuous predictor (as per your example), $F = t_{\mathrm{Petal.Width}}^2$, which is why the p-values are the same. This equivalence only holds in this simple case. | Interpretation of R's lm() output | Five point summary
yes, the idea is to give a quick summary of the distribution. It should be roughly symmetrical about mean, the median should be close to 0, the 1Q and 3Q values should ideally be ro | Interpretation of R's lm() output
Five point summary
yes, the idea is to give a quick summary of the distribution. It should be roughly symmetrical about mean, the median should be close to 0, the 1Q and 3Q values should ideally be roughly similar values.
Coefficients and $\hat{\beta_i}s$
Each coefficient in the model is a Gaussian (Normal) random variable. The $\hat{\beta_i}$ is the estimate of the mean of the distribution of that random variable, and the standard error is the square root of the variance of that distribution. It is a measure of the uncertainty in the estimate of the $\hat{\beta_i}$.
You can look at how these are computed (well the mathematical formulae used) on Wikipedia. Note that any self-respecting stats programme will not use the standard mathematical equations to compute the $\hat{\beta_i}$ because doing them on a computer can lead to a large loss of precision in the computations.
$t$-statistics
The $t$ statistics are the estimates ($\hat{\beta_i}$) divided by their standard errors ($\hat{\sigma_i}$), e.g. $t_i = \frac{\hat{\beta_i}}{\hat{\sigma_i}}$. Assuming you have the same model in object modas your Q:
> mod <- lm(Sepal.Width ~ Petal.Width, data = iris)
then the $t$ values R reports are computed as:
> tstats <- coef(mod) / sqrt(diag(vcov(mod)))
(Intercept) Petal.Width
53.277950 -4.786461
Where coef(mod) are the $\hat{\beta_i}$, and sqrt(diag(vcov(mod))) gives the square roots of the diagonal elements of the covariance matrix of the model parameters, which are the standard errors of the parameters ($\hat{\sigma_i}$).
The p-value is the probability of achieving a $|t|$ as large as or larger than the observed absolute t value if the null hypothesis ($H_0$) was true, where $H_0$ is $\beta_i = 0$. They are computed as (using tstats from above):
> 2 * pt(abs(tstats), df = df.residual(mod), lower.tail = FALSE)
(Intercept) Petal.Width
1.835999e-98 4.073229e-06
So we compute the upper tail probability of achieving the $t$ values we did from a $t$ distribution with degrees of freedom equal to the residual degrees of freedom of the model. This represents the probability of achieving a $t$ value greater than the absolute values of the observed $t$s. It is multiplied by 2, because of course $t$ can be large in the negative direction too.
Residual standard error
The residual standard error is an estimate of the parameter $\sigma$. The assumption in ordinary least squares is that the residuals are individually described by a Gaussian (normal) distribution with mean 0 and standard deviation $\sigma$. The $\sigma$ relates to the constant variance assumption; each residual has the same variance and that variance is equal to $\sigma^2$.
Adjusted $R^2$
Adjusted $R^2$ is computed as:
$$1 - (1 - R^2) \frac{n - 1}{n - p - 1}$$
The adjusted $R^2$ is the same thing as $R^2$, but adjusted for the complexity (i.e. the number of parameters) of the model. Given a model with a single parameter, with a certain $R^2$, if we add another parameter to this model, the $R^2$ of the new model has to increase, even if the added parameter has no statistical power. The adjusted $R^2$ accounts for this by including the number of parameters in the model.
$F$-statistic
The $F$ is the ratio of two variances ($SSR/SSE$), the variance explained by the parameters in the model (sum of squares of regression, SSR) and the residual or unexplained variance (sum of squares of error, SSE). You can see this better if we get the ANOVA table for the model via anova():
> anova(mod)
Analysis of Variance Table
Response: Sepal.Width
Df Sum Sq Mean Sq F value Pr(>F)
Petal.Width 1 3.7945 3.7945 22.91 4.073e-06 ***
Residuals 148 24.5124 0.1656
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
The $F$s are the same in the ANOVA output and the summary(mod) output. The Mean Sq column contains the two variances and $3.7945 / 0.1656 = 22.91$. We can compute the probability of achieving an $F$ that large under the null hypothesis of no effect, from an $F$-distribution with 1 and 148 degrees of freedom. This is what is reported in the final column of the ANOVA table. In the simple case of a single, continuous predictor (as per your example), $F = t_{\mathrm{Petal.Width}}^2$, which is why the p-values are the same. This equivalence only holds in this simple case. | Interpretation of R's lm() output
Five point summary
yes, the idea is to give a quick summary of the distribution. It should be roughly symmetrical about mean, the median should be close to 0, the 1Q and 3Q values should ideally be ro |
434 | Interpretation of R's lm() output | Ronen Israel and Adrienne Ross (AQR) wrote a very nice paper on this subject: Measuring Factor Exposures: Uses and Abuses.
To summarize (see: p. 8),
Generally, the higher the $R^2$ the better the model explains portfolio returns.
When the t-statistic is greater than two, we can say with 95% confidence (or a 5% chance we are wrong) that the beta estimate is statistically different from zero. In other words, we can say that a portfolio has significant exposure to a factor.
R's lm() summary calculates the p-value Pr(>|t|). The smaller the p-value is, the more significant the factor is. P-value = 0.05 is a reasonable threshold. | Interpretation of R's lm() output | Ronen Israel and Adrienne Ross (AQR) wrote a very nice paper on this subject: Measuring Factor Exposures: Uses and Abuses.
To summarize (see: p. 8),
Generally, the higher the $R^2$ the better the mod | Interpretation of R's lm() output
Ronen Israel and Adrienne Ross (AQR) wrote a very nice paper on this subject: Measuring Factor Exposures: Uses and Abuses.
To summarize (see: p. 8),
Generally, the higher the $R^2$ the better the model explains portfolio returns.
When the t-statistic is greater than two, we can say with 95% confidence (or a 5% chance we are wrong) that the beta estimate is statistically different than zero. In other words, we can say that a portfolio has significant exposure to a factor.
R's lm() summary calculates the p-value Pr(>|t|). The smaller the p-value is, the more significant the factor is. P-value = 0.05 is a reasonable threshold. | Interpretation of R's lm() output
Ronen Israel and Adrienne Ross (AQR) wrote a very nice paper on this subject: Measuring Factor Exposures: Uses and Abuses.
To summarize (see: p. 8),
Generally, the higher the $R^2$ the better the mod |
435 | How would you explain covariance to someone who understands only the mean? | Sometimes we can "augment knowledge" with an unusual or different approach. I would like this reply to be accessible to kindergartners and also have some fun, so everybody get out your crayons!
Given paired $(x,y)$ data, draw their scatterplot. (The younger students may need a teacher to produce this for them. :-) Each pair of points $(x_i,y_i)$, $(x_j,y_j)$ in that plot determines a rectangle: it's the smallest rectangle, whose sides are parallel to the axes, containing those points. Thus the points are either at the upper right and lower left corners (a "positive" relationship) or they are at the upper left and lower right corners (a "negative" relationship).
Draw all possible such rectangles. Color them transparently, making the positive rectangles red (say) and the negative rectangles "anti-red" (blue). In this fashion, wherever rectangles overlap, their colors are either enhanced when they are the same (blue and blue or red and red) or cancel out when they are different.
(In this illustration of a positive (red) and negative (blue) rectangle, the overlap ought to be white; unfortunately, this software does not have a true "anti-red" color. The overlap is gray, so it will darken the plot, but on the whole the net amount of red is correct.)
Now we're ready for the explanation of covariance.
The covariance is the net amount of red in the plot (treating blue as negative values).
Here are some examples with 32 binormal points drawn from distributions with the given covariances, ordered from most negative (bluest) to most positive (reddest).
They are drawn on common axes to make them comparable. The rectangles are lightly outlined to help you see them. This is an updated (2019) version of the original: it uses software that properly cancels the red and cyan colors in overlapping rectangles.
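For readers who prefer to check this numerically rather than with crayons, here is a small sketch in R (simulated data; the "net amount of red" is computed as the average signed rectangle area over all ordered pairs of points):
set.seed(17)
x <- rnorm(32)
y <- 0.6 * x + rnorm(32)

## Signed "rectangle area" (x_i - x_j)(y_i - y_j) for every ordered pair of points:
## positive rectangles are red, negative ones are blue
net_red <- mean(outer(x, x, "-") * outer(y, y, "-"))

## Compare with the usual (population) covariance
usual <- mean((x - mean(x)) * (y - mean(y)))
net_red / usual   # a fixed constant, whatever data you feed in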
Let's deduce some properties of covariance. Understanding of these properties will be accessible to anyone who has actually drawn a few of the rectangles. :-)
Bilinearity. Because the amount of red depends on the size of the plot, covariance is directly proportional to the scale on the x-axis and to the scale on the y-axis.
Correlation. Covariance increases as the points approximate an upward sloping line and decreases as the points approximate a downward sloping line. This is because in the former case most of the rectangles are positive and in the latter case, most are negative.
Relationship to linear associations. Because non-linear associations can create mixtures of positive and negative rectangles, they lead to unpredictable (and not very useful) covariances. Linear associations can be fully interpreted by means of the preceding two characterizations.
Sensitivity to outliers. A geometric outlier (one point standing away from the mass) will create many large rectangles in association with all the other points. It alone can create a net positive or negative amount of red in the overall picture.
Incidentally, this definition of covariance differs from the usual one only by a universal constant of proportionality (independent of the data set size). The mathematically inclined will have no trouble performing the algebraic demonstration that the formula given here is always twice the usual covariance. | How would you explain covariance to someone who understands only the mean? | Sometimes we can "augment knowledge" with an unusual or different approach. I would like this reply to be accessible to kindergartners and also have some fun, so everybody get out your crayons!
Given | How would you explain covariance to someone who understands only the mean?
Sometimes we can "augment knowledge" with an unusual or different approach. I would like this reply to be accessible to kindergartners and also have some fun, so everybody get out your crayons!
Given paired $(x,y)$ data, draw their scatterplot. (The younger students may need a teacher to produce this for them. :-) Each pair of points $(x_i,y_i)$, $(x_j,y_j)$ in that plot determines a rectangle: it's the smallest rectangle, whose sides are parallel to the axes, containing those points. Thus the points are either at the upper right and lower left corners (a "positive" relationship) or they are at the upper left and lower right corners (a "negative" relationship).
Draw all possible such rectangles. Color them transparently, making the positive rectangles red (say) and the negative rectangles "anti-red" (blue). In this fashion, wherever rectangles overlap, their colors are either enhanced when they are the same (blue and blue or red and red) or cancel out when they are different.
(In this illustration of a positive (red) and negative (blue) rectangle, the overlap ought to be white; unfortunately, this software does not have a true "anti-red" color. The overlap is gray, so it will darken the plot, but on the whole the net amount of red is correct.)
Now we're ready for the explanation of covariance.
The covariance is the net amount of red in the plot (treating blue as negative values).
Here are some examples with 32 binormal points drawn from distributions with the given covariances, ordered from most negative (bluest) to most positive (reddest).
They are drawn on common axes to make them comparable. The rectangles are lightly outlined to help you see them. This is an updated (2019) version of the original: it uses software that properly cancels the red and cyan colors in overlapping rectangles.
Let's deduce some properties of covariance. Understanding of these properties will be accessible to anyone who has actually drawn a few of the rectangles. :-)
Bilinearity. Because the amount of red depends on the size of the plot, covariance is directly proportional to the scale on the x-axis and to the scale on the y-axis.
Correlation. Covariance increases as the points approximate an upward sloping line and decreases as the points approximate a downward sloping line. This is because in the former case most of the rectangles are positive and in the latter case, most are negative.
Relationship to linear associations. Because non-linear associations can create mixtures of positive and negative rectangles, they lead to unpredictable (and not very useful) covariances. Linear associations can be fully interpreted by means of the preceding two characterizations.
Sensitivity to outliers. A geometric outlier (one point standing away from the mass) will create many large rectangles in association with all the other points. It alone can create a net positive or negative amount of red in the overall picture.
Incidentally, this definition of covariance differs from the usual one only by a universal constant of proportionality (independent of the data set size). The mathematically inclined will have no trouble performing the algebraic demonstration that the formula given here is always twice the usual covariance. | How would you explain covariance to someone who understands only the mean?
Sometimes we can "augment knowledge" with an unusual or different approach. I would like this reply to be accessible to kindergartners and also have some fun, so everybody get out your crayons!
Given |
436 | How would you explain covariance to someone who understands only the mean? | To elaborate on my comment, I used to teach the covariance as a measure of the (average) co-variation between two variables, say $x$ and $y$.
It is useful to recall the basic formula (simple to explain, no need to talk about mathematical expectancies for an introductory course):
$$
\text{cov}(x,y)=\frac{1}{n}\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)
$$
so that we clearly see that each observation, $(x_i,y_i)$, might contribute positively or negatively to the covariance, depending on the product of their deviation from the mean of the two variables, $\bar x$ and $\bar y$. Note that I do not speak of magnitude here, but simply of the sign of the contribution of the ith observation.
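Here is a small sketch of that formula in R (the simulated data follow the same recipe as the left-hand panel described next; the seed and the sample size are illustrative):
set.seed(42)
x <- runif(50, 0, 20)
y <- 1.2 * x + rnorm(50, sd = 2)

## Per-observation contribution to the covariance; its sign depends on the
## quadrant (relative to the two means) in which the point falls
contrib <- (x - mean(x)) * (y - mean(y))
table(contrib > 0)   # how many observations contribute positively vs negatively
mean(contrib)        # the covariance with the 1/n convention used above
cov(x, y)            # R's cov() divides by n - 1 instead, so it differs slightly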
This is what I've depicted in the following diagrams. Artificial data were generated using a linear model (left, $y = 1.2x + \varepsilon$; right, $y = 0.1x + \varepsilon$, where $\varepsilon$ were drawn from a Gaussian distribution with zero mean and $\text{SD}=2$, and $x$ from a uniform distribution on the interval $[0,20]$).
The vertical and horizontal bars represent the mean of $x$ and $y$, respectively. That means that instead of "looking at individual observations" from the origin $(0,0)$, we can do it from $(\bar x, \bar y)$. This just amounts to a translation of the x- and y-axes. In this new coordinate system, every observation that is located in the upper-right or lower-left quadrant contributes positively to the covariance, whereas observations located in the two other quadrants contribute negatively to it. In the first case (left), the covariance equals 30.11 and the distribution in the four quadrants is given below:
+ -
+ 30 2
- 0 28
Clearly, when the $x_i$'s are above their mean, so are the corresponding $y_i$'s (wrt. $\bar y$). Eye-balling the shape of the 2D cloud of points, when $x$ values increase, $y$ values tend to increase too. (But remember we could also use the fact that there is a clear relationship between the covariance and the slope of the regression line, i.e. $b=\text{Cov}(x,y)/\text{Var}(x)$.)
In the second case (right, same $x_i$), the covariance equals 3.54 and the distribution across quadrants is more "homogeneous" as shown below:
+ -
+ 18 14
- 12 16
In other words, there is an increased number of cases where the $x_i$'s and $y_i$'s do not covary in the same direction wrt. their means.
Note that we could reduce the covariance by scaling either $x$ or $y$. In the left panel, the covariance of $(x/10,y)$ (or $(x,y/10)$) is reduced by a ten fold amount (3.01). Since the units of measurement and the spread of $x$ and $y$ (relative to their means) make it difficult to interpret the value of the covariance in absolute terms, we generally scale both variables by their standard deviations and get the correlation coefficient. This means that in addition to re-centering our $(x,y)$ scatterplot to $(\bar x, \bar y)$ we also scale the x- and y-unit in terms of standard deviation, which leads to a more interpretable measure of the linear covariation between $x$ and $y$. | How would you explain covariance to someone who understands only the mean? | To elaborate on my comment, I used to teach the covariance as a measure of the (average) co-variation between two variables, say $x$ and $y$.
It is useful to recall the basic formula (simple to expla | How would you explain covariance to someone who understands only the mean?
To elaborate on my comment, I used to teach the covariance as a measure of the (average) co-variation between two variables, say $x$ and $y$.
It is useful to recall the basic formula (simple to explain, no need to talk about mathematical expectancies for an introductory course):
$$
\text{cov}(x,y)=\frac{1}{n}\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)
$$
so that we clearly see that each observation, $(x_i,y_i)$, might contribute positively or negatively to the covariance, depending on the product of their deviation from the mean of the two variables, $\bar x$ and $\bar y$. Note that I do not speak of magnitude here, but simply of the sign of the contribution of the ith observation.
This is what I've depicted in the following diagrams. Artificial data were generated using a linear model (left, $y = 1.2x + \varepsilon$; right, $y = 0.1x + \varepsilon$, where $\varepsilon$ were drawn from a gaussian distribution with zero mean and $\text{SD}=2$, and $x$ from an uniform distribution on the interval $[0,20]$).
The vertical and horizontal bars represent the mean of $x$ and $y$, respectively. That mean that instead of "looking at individual observations" from the origin $(0,0)$, we can do it from $(\bar x, \bar y)$. This just amounts to a translation on the x- and y-axis. In this new coordinate system, every observation that is located in the upper-right or lower-left quadrant contributes positively to the covariance, whereas observations located in the two other quadrants contribute negatively to it. In the first case (left), the covariance equals 30.11 and the distribution in the four quadrants is given below:
+ -
+ 30 2
- 0 28
Clearly, when the $x_i$'s are above their mean, so do the corresponding $y_i$'s (wrt. $\bar y$). Eye-balling the shape of the 2D cloud of points, when $x$ values increase $y$ values tend to increase too. (But remember we could also use the fact that there is a clear relationship between the covariance and the slope of the regression line, i.e. $b=\text{Cov}(x,y)/\text{Var}(x)$.)
In the second case (right, same $x_i$), the covariance equals 3.54 and the distribution across quadrants is more "homogeneous" as shown below:
+ -
+ 18 14
- 12 16
In other words, there is an increased number of case where the $x_i$'s and $y_i$'s do not covary in the same direction wrt. their means.
Note that we could reduce the covariance by scaling either $x$ or $y$. In the left panel, the covariance of $(x/10,y)$ (or $(x,y/10)$) is reduced by a ten fold amount (3.01). Since the units of measurement and the spread of $x$ and $y$ (relative to their means) make it difficult to interpret the value of the covariance in absolute terms, we generally scale both variables by their standard deviations and get the correlation coefficient. This means that in addition to re-centering our $(x,y)$ scatterplot to $(\bar x, \bar y)$ we also scale the x- and y-unit in terms of standard deviation, which leads to a more interpretable measure of the linear covariation between $x$ and $y$. | How would you explain covariance to someone who understands only the mean?
To elaborate on my comment, I used to teach the covariance as a measure of the (average) co-variation between two variables, say $x$ and $y$.
It is useful to recall the basic formula (simple to expla |
437 | How would you explain covariance to someone who understands only the mean? | I loved @whuber 's answer - before I only had a vague idea in my mind of how covariance could be visualised, but those rectangle plots are genius.
However, since the formula for covariance involves the mean, and the OP's original question did state that the 'receiver' does understand the concept of the mean, I thought I would have a crack at adapting @whuber's rectangle plots to compare each data point to the means of x and y, as this more closely represents what's going on in the covariance formula. I thought it actually ended up looking fairly intuitive:
The blue dot in the middle of each plot is the mean of x (x_mean) and mean of y (y_mean).
The rectangles are comparing the value of x - x_mean and y - y_mean for each data point.
The rectangle is green when either:
both x and y are greater than their respective means
both x and y are less than their respective means
The rectangle is red when either:
x is greater than x_mean but y is less than y_mean
x is less than x_mean but y is greater than y_mean
Covariance (and correlation) can be both strongly negative and strongly positive. When the graph is dominated by one colour more than the other, it means that the data mostly follows a consistent pattern.
If the graph has lots more green than red, it means that y generally increases
when x increases.
If the graph has lots more red than green, it means that y generally decreases when x increases.
If the graph isn't dominated by one colour or the other, it means that there isn't much of a pattern to how x and y relate to each other.
The actual value of the covariance for two different variables x and y is basically the sum of all the green area minus all the red area, then divided by the total number of data points - effectively the average greenness-vs-redness of the graph.
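In code, that "green minus red, divided by the number of points" description is a one-liner (a sketch in R with made-up data; x and y here are illustrative):
set.seed(7)
x <- rnorm(40)
y <- 0.8 * x + rnorm(40)

## Signed rectangle areas relative to the means: positive = green, negative = red
areas <- (x - mean(x)) * (y - mean(y))
green <- sum(areas[areas > 0])
red   <- -sum(areas[areas < 0])
(green - red) / length(x)   # green minus red, per data point
mean(areas)                 # the same quantity, written compactly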
How does that sound/look? | How would you explain covariance to someone who understands only the mean? | I loved @whuber 's answer - before I only had a vague idea in my mind of how covariance could be visualised, but those rectangle plots are genius.
However since the formula for covariance involves th | How would you explain covariance to someone who understands only the mean?
I loved @whuber 's answer - before I only had a vague idea in my mind of how covariance could be visualised, but those rectangle plots are genius.
However since the formula for covariance involves the mean, and the OP's original question did state that the 'receiver' does understand the concept of the mean, I thought I would have a crack at adapting @whuber's rectangle plots to compare each data point to the means of x and y, as this more represents what's going on in the covariance formula. I thought it actually ended up looking fairly intuitive:
The blue dot in the middle of each plot is the mean of x (x_mean) and mean of y (y_mean).
The rectangles are comparing the value of x - x_mean and y - y_mean for each data point.
The rectangle is green when either:
both x and y are greater than their respective means
both x and y are less than their respective means
The rectangle is red when either:
x is greater than x_mean but y is less than y_mean
x is less than x_mean but y is greater than y_mean
Covariance (and correlation) can be both strongly negative and strongly positive. When the graph is dominated by one colour more than the other, it means that the data mostly follows a consistent pattern.
If the graph has lots more green than red, it means that y generally increases
when x increases.
If the graph has lots more red than green, it means that y generally decreases when x increases.
If the graph isn't dominated by one colour or the other, it means that there isn't much of a pattern to how x and y relate to each other.
The actual value of the covariance for two different variables x and y, is basically the sum of all the green area minus all the red area, then divided by the total number of data points - effectively the average greenness-vs-redness of the graph.
How does that sound/look? | How would you explain covariance to someone who understands only the mean?
I loved @whuber 's answer - before I only had a vague idea in my mind of how covariance could be visualised, but those rectangle plots are genius.
However since the formula for covariance involves th |
438 | How would you explain covariance to someone who understands only the mean? | Covariance is a measure of how much one variable goes up when the other goes up. | How would you explain covariance to someone who understands only the mean? | Covariance is a measure of how much one variable goes up when the other goes up. | How would you explain covariance to someone who understands only the mean?
Covariance is a measure of how much one variable goes up when the other goes up. | How would you explain covariance to someone who understands only the mean?
Covariance is a measure of how much one variable goes up when the other goes up. |
439 | How would you explain covariance to someone who understands only the mean? | I am answering my own question, but I thought it'd be great for the people coming across this post to check out some of the explanations on this page.
I'm paraphrasing one of the very well articulated answers (by a user 'Zhop'). I'm doing so in case that site shuts down or the page gets taken down when someone eons from now accesses this post ;)
Covariance is a measure of how much two variables change together.
Compare this to Variance, which is just the range over which one
measure (or variable) varies.
In studying social patterns, you might hypothesize that wealthier
people are likely to be more educated, so you'd try to see how closely
measures of wealth and education stay together. You would use a
measure of covariance to determine this.
...
I'm not sure what you mean when you ask how does it apply to
statistics. It is one measure taught in many stats classes. Did you
mean, when should you use it?
You use it when you want to see how much two or more variables change
in relation to each other.
Think of people on a team. Look at how they vary in geographic
location compared to each other. When the team is playing or
practicing, the distance between individual members is very small and
we would say they are in the same location. And when their location
changes, it changes for all individuals together (say, travelling on a
bus to a game). In this situation, we would say they have a high level
of covariance. But when they aren't playing, then the covariance rate
is likely to be pretty low, because they are all going to different
places at different rates of speed.
So you can predict one team member's location, based on another team
member's location when they are practicing or playing a game with a
high degree of accuracy. The covariance measurement would be close to
1, I believe. But when they are not practicing or playing, you would
have a much smaller chance of predicting one person's location, based
on a team member's location. It would be close to zero, probably,
although not zero, since sometimes team members will be friends, and
might go places together on their own time.
However, if you randomly selected individuals in the United States,
and tried to use one of them to predict the other's locations, you'd
probably find the covariance was zero. In other words, there is
absolutely no relation between one randomly selected person's location
in the US, and another's.
Adding another one (by 'CatofGrey') that helps augment the intuition:
In probability theory and statistics, covariance is the measure of how
much two random variables vary together (as distinct from variance,
which measures how much a single variable varies).
If two variables tend to vary together (that is, when one of them is
above its expected value, then the other variable tends to be above
its expected value too), then the covariance between the two variables
will be positive. On the other hand, if one of them is above its
expected value and the other variable tends to be below its expected
value, then the covariance between the two variables will be negative.
These two together have made me understand covariance as I've never understood it before! Simply amazing!! | How would you explain covariance to someone who understands only the mean? | I am answering my own question, but I thought It'd be great for the people coming across this post to check out some of the explanations on this page.
I'm paraphrasing one of the very well articulated | How would you explain covariance to someone who understands only the mean?
I am answering my own question, but I thought It'd be great for the people coming across this post to check out some of the explanations on this page.
I'm paraphrasing one of the very well articulated answers (by a user'Zhop'). I'm doing so in case if that site shuts down or the page gets taken down when someone eons from now accesses this post ;)
Covariance is a measure of how much two variables change together.
Compare this to Variance, which is just the range over which one
measure (or variable) varies.
In studying social patterns, you might hypothesize that wealthier
people are likely to be more educated, so you'd try to see how closely
measures of wealth and education stay together. You would use a
measure of covariance to determine this.
...
I'm not sure what you mean when you ask how does it apply to
statistics. It is one measure taught in many stats classes. Did you
mean, when should you use it?
You use it when you want to see how much two or more variables change
in relation to each other.
Think of people on a team. Look at how they vary in geographic
location compared to each other. When the team is playing or
practicing, the distance between individual members is very small and
we would say they are in the same location. And when their location
changes, it changes for all individuals together (say, travelling on a
bus to a game). In this situation, we would say they have a high level
of covariance. But when they aren't playing, then the covariance rate
is likely to be pretty low, because they are all going to different
places at different rates of speed.
So you can predict one team member's location, based on another team
member's location when they are practicing or playing a game with a
high degree of accuracy. The covariance measurement would be close to
1, I believe. But when they are not practicing or playing, you would
have a much smaller chance of predicting one person's location, based
on a team member's location. It would be close to zero, probably,
although not zero, since sometimes team members will be friends, and
might go places together on their own time.
However, if you randomly selected individuals in the United States,
and tried to use one of them to predict the other's locations, you'd
probably find the covariance was zero. In other words, there is
absolutely no relation between one randomly selected person's location
in the US, and another's.
Adding another one (by 'CatofGrey') that helps augment the intuition:
In probability theory and statistics, covariance is the measure of how
much two random variables vary together (as distinct from variance,
which measures how much a single variable varies).
If two variables tend to vary together (that is, when one of them is
above its expected value, then the other variable tends to be above
its expected value too), then the covariance between the two variables
will be positive. On the other hand, if one of them is above its
expected value and the other variable tends to be below its expected
value, then the covariance between the two variables will be negative.
These two together have made me understand covariance as I've never understood it before! Simply amazing!! | How would you explain covariance to someone who understands only the mean?
I am answering my own question, but I thought It'd be great for the people coming across this post to check out some of the explanations on this page.
I'm paraphrasing one of the very well articulated |
440 | How would you explain covariance to someone who understands only the mean? | I really like Whuber's answer, so I gathered some more resources. Covariance describes both how far the variables are spread out, and the nature of their relationship.
Covariance uses rectangles to describe how far away an observation is from the mean on a scatter graph:
If a rectangle is both tall and wide, or both short and narrow, it provides evidence that the two variables move together.
If a rectangle has two sides that are relatively long for one variable, and two sides that are relatively short for the other variable, this observation provides evidence that the variables do not move together very well.
If the rectangle is in the 2nd or 4th quadrant, then when one variable is greater than its mean, the other is less than its mean. An increase in one variable is associated with a decrease in the other.
I found a cool visualization of this at http://sciguides.com/guides/covariance/, It explains what covariance is if you just know the mean. link via the wayback machine | How would you explain covariance to someone who understands only the mean? | I really like Whuber's answer, so I gathered some more resources. Covariance describes both how far the variables are spread out, and the nature of their relationship.
Covariance uses rectangles to de | How would you explain covariance to someone who understands only the mean?
I really like Whuber's answer, so I gathered some more resources. Covariance describes both how far the variables are spread out, and the nature of their relationship.
Covariance uses rectangles to describe how far away an observation is from the mean on a scatter graph:
If a rectangle has long sides and a high width or short sides and a short width, it provides evidence that the two variables move together.
If a rectangle has two sides that are relatively long for that variables, and two sides that are relatively short for the other variable, this observation provides evidence the variables do not move together very well.
If the rectangle is in the 2nd or 4th quadrant, then when one variable is greater than the mean, the other is less than the mean. An increase in one variable is associated with a decrease in the other.
I found a cool visualization of this at http://sciguides.com/guides/covariance/, It explains what covariance is if you just know the mean. link via the wayback machine | How would you explain covariance to someone who understands only the mean?
I really like Whuber's answer, so I gathered some more resources. Covariance describes both how far the variables are spread out, and the nature of their relationship.
Covariance uses rectangles to de |
441 | How would you explain covariance to someone who understands only the mean? | Here's another attempt to explain covariance with a picture. Every panel in the picture below contains 50 points simulated from a bivariate distribution with correlation between x & y of 0.8 and variances as shown in the row and column labels. The covariance is shown in the lower-right corner of each panel.
Anyone interested in improving this...here's the R code:
library(mvtnorm)
rowvars <- colvars <- c(10,20,30,40,50)
all <- NULL
for(i in 1:length(colvars)){
  colvar <- colvars[i]
  for(j in 1:length(rowvars)){
    set.seed(303) # Put seed here to show same data in each panel
    rowvar <- rowvars[j]
    # Simulate 50 points, corr=0.8
    sig <- matrix(c(rowvar, .8*sqrt(rowvar)*sqrt(colvar), .8*sqrt(rowvar)*sqrt(colvar), colvar), nrow=2)
    yy <- rmvnorm(50, mean=c(0,0), sig)
    dati <- data.frame(i=i, j=j, colvar=colvar, rowvar=rowvar, covar=.8*sqrt(rowvar)*sqrt(colvar), yy)
    all <- rbind(all, dati)
  }
}
names(all) <- c('i','j','colvar','rowvar','covar','x','y')
all <- transform(all, colvar=factor(colvar), rowvar=factor(rowvar))
library(latticeExtra)
useOuterStrips(xyplot(y~x|colvar*rowvar, all, cov=all$covar,
  panel=function(x,y,subscripts, cov,...){
    panel.xyplot(x,y,...)
    print(cor(x,y))
    ltext(14,-12, round(cov[subscripts][1],0))
})) | How would you explain covariance to someone who understands only the mean? | Here's another attempt to explain covariance with a picture. Every panel in the picture below contains 50 points simulated from a bivariate distribution with correlation between x & y of 0.8 and vari | How would you explain covariance to someone who understands only the mean?
Here's another attempt to explain covariance with a picture. Every panel in the picture below contains 50 points simulated from a bivariate distribution with correlation between x & y of 0.8 and variances as shown in the row and column labels. The covariance is shown in the lower-right corner of each panel.
Anyone interested in improving this...here's the R code:
library(mvtnorm)
rowvars <- colvars <- c(10,20,30,40,50)
all <- NULL
for(i in 1:length(colvars)){
colvar <- colvars[i]
for(j in 1:length(rowvars)){
set.seed(303) # Put seed here to show same data in each panel
rowvar <- rowvars[j]
# Simulate 50 points, corr=0.8
sig <- matrix(c(rowvar, .8*sqrt(rowvar)*sqrt(colvar), .8*sqrt(rowvar)*sqrt(colvar), colvar), nrow=2)
yy <- rmvnorm(50, mean=c(0,0), sig)
dati <- data.frame(i=i, j=j, colvar=colvar, rowvar=rowvar, covar=.8*sqrt(rowvar)*sqrt(colvar), yy)
all <- rbind(all, dati)
}
}
names(all) <- c('i','j','colvar','rowvar','covar','x','y')
all <- transform(all, colvar=factor(colvar), rowvar=factor(rowvar))
library(latticeExtra)
useOuterStrips(xyplot(y~x|colvar*rowvar, all, cov=all$covar,
panel=function(x,y,subscripts, cov,...){
panel.xyplot(x,y,...)
print(cor(x,y))
ltext(14,-12, round(cov[subscripts][1],0))
})) | How would you explain covariance to someone who understands only the mean?
Here's another attempt to explain covariance with a picture. Every panel in the picture below contains 50 points simulated from a bivariate distribution with correlation between x & y of 0.8 and vari |
442 | How would you explain covariance to someone who understands only the mean? | I would simply explain correlation which is pretty intuitive. I would say "Correlation measures the strength of relationship between two variables X and Y. Correlation is between -1 and 1 and will be close to 1 in absolute value when the relationship is strong. Covariance is just the correlation multiplied by the standard deviations of the two variables. So while correlation is dimensionless, covariance is in the product of the units for variable X and variable Y. | How would you explain covariance to someone who understands only the mean? | I would simply explain correlation which is pretty intuitive. I would say "Correlation measures the strength of relationship between two variables X and Y. Correlation is between -1 and 1 and will b | How would you explain covariance to someone who understands only the mean?
I would simply explain correlation which is pretty intuitive. I would say "Correlation measures the strength of relationship between two variables X and Y. Correlation is between -1 and 1 and will be close to 1 in absolute value when the relationship is strong. Covariance is just the correlation multiplied by the standard deviations of the two variables. So while correlation is dimensionless, covariance is in the product of the units for variable X and variable Y. | How would you explain covariance to someone who understands only the mean?
I would simply explain correlation which is pretty intuitive. I would say "Correlation measures the strength of relationship between two variables X and Y. Correlation is between -1 and 1 and will b |
443 | How would you explain covariance to someone who understands only the mean? | Variance is the degree by which a random variable changes with respect to its expected value, owing to the stochastic nature of the underlying process the random variable represents.
Covariance is the degree by which two different random variables change with respect to each other. This could happen when random variables are driven by the same underlying process, or derivatives thereof. Either processes represented by these random variables are affecting each other, or it's the same process but one of the random variables is derived from the other. | How would you explain covariance to someone who understands only the mean? | Variance is the degree by which a random vairable changes with respect to its expected value Owing to the stochastic nature of be underlying process the random variable represents.
444 | How would you explain covariance to someone who understands only the mean? | Two variables that would have a high positive covariance (correlation) would be the number of people in a room, and the number of fingers that are in the room. (As the number of people increases, we expect the number of fingers to increase as well.)
Something that might have a negative covariance (correlation) would be a person's age, and the number of hair follicles on their head. Or, the number of zits on a person's face (in a certain age group), and how many dates they have in a week. We expect people with more years to have less hair, and people with more acne to have fewer dates. These are negatively correlated.
445 | How would you explain covariance to someone who understands only the mean? | Covariance is a statistical measure that describes the relationship between two variables. If two variables have a positive covariance, it means that they tend to increase or decrease together. If they have a negative covariance, it means that they tend to move in opposite directions. If they have a covariance of zero, it means that there is no linear relationship between them (although zero covariance does not by itself imply independence).
To explain covariance to someone who understands only the mean, you could start by explaining that the mean is a measure of the central tendency of a distribution. The mean tells you the average value of a set of numbers.
Covariance, on the other hand, measures how two variables vary together. It tells you whether they tend to increase or decrease together, or whether they move in opposite directions.
For example, suppose you have two sets of numbers, X and Y. The mean of X tells you the average value of X, and the mean of Y tells you the average value of Y. If the covariance between X and Y is positive, it means that when X is above its mean, Y tends to be above its mean as well. And when X is below its mean, Y tends to be below its mean as well. If the covariance is negative, it means that when X is above its mean, Y tends to be below its mean, and vice versa. If the covariance is zero, it means that there is no linear relationship between X and Y.
So, in summary, covariance measures the tendency of two variables to vary together, and can be positive, negative, or zero.
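Since this explanation is phrased entirely in terms of deviations from the mean, a short R sketch (an added illustration with arbitrary simulated data) can make it concrete: the sample covariance is essentially the average product of those deviations, and its sign flips when one variable is reversed.

set.seed(2)
x <- rnorm(200)
y <- x + rnorm(200)                                   # y tends to be above its mean when x is
sum((x - mean(x)) * (y - mean(y))) / (length(x) - 1)  # average product of deviations from the two means
cov(x, y)                                             # the same value, positive here
cov(x, -y)                                            # reversing y makes the covariance negative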
446 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | First, we need to understand what a Markov chain is. Consider the following weather example from Wikipedia. Suppose that weather on any given day can be classified into two states only: sunny and rainy. Based on past experience, we know the following:
$P(\text{Next day is Sunny}\,\vert \,\text{Given today is Rainy})=0.50$
Since the next day's weather is either sunny or rainy, it follows that:
$P(\text{Next day is Rainy}\,\vert \,\text{Given today is Rainy})=0.50$
Similarly, let:
$P(\text{Next day is Rainy}\,\vert \,\text{Given today is Sunny})=0.10$
Therefore, it follows that:
$P(\text{Next day is Sunny}\,\vert \,\text{Given today is Sunny})=0.90$
The above four numbers can be compactly represented as a transition matrix, which gives the probabilities of the weather moving from one state to another, as follows:
$P = \begin{bmatrix}
& S & R \\
S& 0.9 & 0.1 \\
R& 0.5 & 0.5
\end{bmatrix}$
We might ask several questions whose answers follow:
Q1: If the weather is sunny today then what is the weather likely to be tomorrow?
A1: Since we do not know what is going to happen for sure, the best we can say is that there is a $90\%$ chance that it will be sunny and a $10\%$ chance that it will be rainy.
Q2: What about two days from today?
A2: One day prediction: $90\%$ sunny, $10\%$ rainy. Therefore, two days from now:
First day it can be sunny and the next day also it can be sunny. Chances of this happening are: $0.9 \times 0.9$.
Or
First day it can be rainy and second day it can be sunny. Chances of this happening are: $0.1 \times 0.5$.
Therefore, the probability that the weather will be sunny in two days is:
$P(\text{Sunny 2 days from now}) = 0.9 \times 0.9 + 0.1 \times 0.5 = 0.81 + 0.05 = 0.86$
Similarly, the probability that it will be rainy is:
$P(\text{Rainy 2 days from now}) = 0.1 \times 0.5 + 0.9 \times 0.1 = 0.05 + 0.09 = 0.14$
In linear algebra (transition matrices) these calculations correspond to all the permutations in transitions from one step to the next (sunny-to-sunny ($S \to S$), sunny-to-rainy ($S \to R$), rainy-to-sunny ($R \to S$) or rainy-to-rainy ($R \to R$)) with their calculated probabilities:
On the lower part of the image we see how to calculate the probability of a future state ($t+1$ or $t+2$) given the probabilities (probability mass function, $PMF$) for every state (sunny or rainy) at time zero (now or $t_0$) as simple matrix multiplication.
If you keep forecasting weather like this you will notice that eventually the $n$-th day forecast, where $n$ is very large (say $30$), settles to the following 'equilibrium' probabilities:
$P(\text{Sunny}) = 0.833$
and
$P(\text{Rainy}) = 0.167$
In other words, your forecasts for the $n$-th day and the $(n+1)$-th day are the same. In addition, you can also check that the 'equilibrium' probabilities do not depend on the weather today. You would get the same forecast for the weather if you start off by assuming that the weather today is sunny or rainy.
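A few lines of R (a sketch added for illustration, using the transition matrix from this answer) reproduce both the two-day forecast and the equilibrium probabilities:

P <- matrix(c(0.9, 0.1,
              0.5, 0.5), nrow = 2, byrow = TRUE,
            dimnames = list(c("S", "R"), c("S", "R")))
today <- c(S = 1, R = 0)               # suppose today is sunny
today %*% P %*% P                      # two-day forecast: 0.86 sunny, 0.14 rainy
state <- today
for (i in 1:30) state <- state %*% P   # keep forecasting day after day
state                                  # settles near 0.833 sunny, 0.167 rainy, whatever the start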
The above example will only work if the state transition probabilities satisfy several conditions which I will not discuss here. But, notice the following features of this 'nice' Markov chain (nice = transition probabilities satisfy conditions):
Irrespective of the initial starting state we will eventually reach an equilibrium probability distribution of states.
Markov Chain Monte Carlo exploits the above feature as follows:
We want to generate random draws from a target distribution. We then identify a way to construct a 'nice' Markov chain such that its equilibrium probability distribution is our target distribution.
If we can construct such a chain then we arbitrarily start from some point and iterate the Markov chain many times (like how we forecast the weather $n$ times). Eventually, the draws we generate would appear as if they are coming from our target distribution.
We then approximate the quantities of interest (e.g. the mean) by taking the sample average of the draws, after discarding a few initial draws; this averaging is the Monte Carlo component.
There are several ways to construct 'nice' Markov chains (e.g., Gibbs sampler, Metropolis-Hastings algorithm).
447 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | I think there's a nice and simple intuition to be gained from the (independence-chain) Metropolis-Hastings algorithm.
First, what's the goal? The goal of MCMC is to draw samples from some probability distribution without having to know its exact height at any point. The way MCMC achieves this is to "wander around" on that distribution in such a way that the amount of time spent in each location is proportional to the height of the distribution. If the "wandering around" process is set up correctly, you can make sure that this proportionality (between time spent and height of the distribution) is achieved.
Intuitively, what we want to do is to walk around on some (lumpy) surface in such a way that the amount of time we spend (or # samples drawn) in each location is proportional to the height of the surface at that location. So, e.g., we'd like to spend twice as much time on a hilltop that's at an altitude of 100m as we do on a nearby hill that's at an altitude of 50m. The nice thing is that we can do this even if we don't know the absolute heights of points on the surface: all we have to know are the relative heights. e.g., if one hilltop A is twice as high as hilltop B, then we'd like to spend twice as much time at A as we spend at B.
The simplest variant of the Metropolis-Hastings algorithm (independence chain sampling) achieves this as follows: assume that in every (discrete) time-step, we pick a random new "proposed" location (selected uniformly across the entire surface). If the proposed location is higher than where we're standing now, move to it. If the proposed location is lower, then move to the new location with probability p, where p is the ratio of the height of that point to the height of the current location. (i.e., flip a coin with a probability p of getting heads; if it comes up heads, move to the new location; if it comes up tails, stay where we are). Keep a list of the locations you've been at on every time step, and that list will (asymptotically) have the right proportion of time spent in each part of the surface. (And for the A and B hills described above, you'll end up with twice the probability of moving from B to A as you have of moving from A to B).
There are more complicated schemes for proposing new locations and the rules for accepting them, but the basic idea is still: (1) pick a new "proposed" location; (2) figure out how much higher or lower that location is compared to your current location; (3) probabilistically stay put or move to that location in a way that respects the overall goal of spending time proportional to the height of the location.
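A small R sketch of that simplest variant (an added illustration; the 'surface' here is an arbitrary unnormalised lumpy density on an interval, not anything from the original answer):

set.seed(3)
height <- function(x) dnorm(x, 0, 1) + 0.5 * dnorm(x, 3, 0.5)   # a lumpy surface, known only up to scale
loc <- 0
visits <- numeric(10000)
for (i in 1:10000) {
  prop <- runif(1, -5, 8)                       # propose a location uniformly over the whole surface
  if (runif(1) < height(prop) / height(loc)) {  # uphill moves (ratio > 1) are always accepted
    loc <- prop
  }
  visits[i] <- loc
}
hist(visits, breaks = 60, freq = FALSE)         # time spent near a point is proportional to its height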
What is this useful for? Suppose we have a probabilistic model of the weather that allows us to evaluate A*P(weather), where A is an unknown constant. (This often happens--many models are convenient to formulate in a way such that you can't determine what A is). So we can't exactly evaluate P("rain tomorrow"). However, we can run the MCMC sampler for a while and then ask: what fraction of the samples (or "locations") ended up in the "rain tomorrow" state. That fraction will be the (model-based) probabilistic weather forecast.
448 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | I'd probably say something like this:
"Anytime we want to talk about probabilities, we're really integrating a density. In Bayesian analysis, a lot of the densities we come up with aren't analytically tractable: you can only integrate them -- if you can integrate them at all -- with a great deal of suffering. So what we do instead is simulate the random variable a lot, and then figure out probabilities from our simulated random numbers. If we want to know the probability that X is less than 10, we count the proportion of simulated random variable results less than 10 and use that as our estimate. That's the "Monte Carlo" part, it's an estimate of probability based off of random numbers. With enough simulated random numbers, the estimate is very good, but it's still inherently random.
"So why "Markov Chain"? Because under certain technical conditions, you can generate a memoryless process (aka a Markovian one) that has the same limiting distribution as the random variable that you're trying to simulate. You can iterate any of a number of different kinds of simulation processes that generate correlated random numbers (based only on the current value of those numbers), and you're guaranteed that once you pool enough of the results, you will end up with a pile of numbers that looks "as if" you had somehow managed to take independent samples from the complicated distribution you wanted to know about.
"So for example, if I want to estimate the probability that a standard normal random variable was less than 0.5, I could generate ten thousand independent realizations from a standard normal distribution and count up the number less than 0.5; say I got 6905 that were less than 0.5 out of 10000 total samples; my estimate for P(Z<0.5) would be 0.6905, which isn't that far off from the actual value. That'd be a Monte Carlo estimate.
"Now imagine I couldn't draw independent normal random variables, instead I'd start at 0, and then with every step add some uniform random number between -0.5 and 0.5 to my current value, and then decide based on a particular test whether I liked that new value or not; if I liked it, I'd use the new value as my current one, and if not, I'd reject it and stick with my old value. Because I only look at the new and current values, this is a Markov chain. If I set up the test to decide whether or not I keep the new value correctly (it'd be a random walk Metropolis-Hastings, and the details get a bit complex), then even though I never generate a single normal random variable, if I do this procedure for long enough, the list of numbers I get from the procedure will be distributed like a large number of draws from something that generates normal random variables. This would give me a Markov Chain Monte Carlo simulation for a standard normal random variable. If I used this to estimate probabilities, that would be a MCMC estimate." | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | I'd probably say something like this:
"Anytime we want to talk about probabilities, we're really integrating a density. In Bayesian analysis, a lot of the densities we come up with aren't analyticall | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
I'd probably say something like this:
"Anytime we want to talk about probabilities, we're really integrating a density. In Bayesian analysis, a lot of the densities we come up with aren't analytically tractable: you can only integrate them -- if you can integrate them at all -- with a great deal of suffering. So what we do instead is simulate the random variable a lot, and then figure out probabilities from our simulated random numbers. If we want to know the probability that X is less than 10, we count the proportion of simulated random variable results less than 10 and use that as our estimate. That's the "Monte Carlo" part, it's an estimate of probability based off of random numbers. With enough simulated random numbers, the estimate is very good, but it's still inherently random.
"So why "Markov Chain"? Because under certain technical conditions, you can generate a memoryless process (aka a Markovian one) that has the same limiting distribution as the random variable that you're trying to simulate. You can iterate any of a number of different kinds of simulation processes that generate correlated random numbers (based only on the current value of those numbers), and you're guaranteed that once you pool enough of the results, you will end up with a pile of numbers that looks "as if" you had somehow managed to take independent samples from the complicated distribution you wanted to know about.
"So for example, if I want to estimate the probability that a standard normal random variable was less than 0.5, I could generate ten thousand independent realizations from a standard normal distribution and count up the number less than 0.5; say I got 6905 that were less than 0.5 out of 10000 total samples; my estimate for P(Z<0.5) would be 0.6905, which isn't that far off from the actual value. That'd be a Monte Carlo estimate.
"Now imagine I couldn't draw independent normal random variables, instead I'd start at 0, and then with every step add some uniform random number between -0.5 and 0.5 to my current value, and then decide based on a particular test whether I liked that new value or not; if I liked it, I'd use the new value as my current one, and if not, I'd reject it and stick with my old value. Because I only look at the new and current values, this is a Markov chain. If I set up the test to decide whether or not I keep the new value correctly (it'd be a random walk Metropolis-Hastings, and the details get a bit complex), then even though I never generate a single normal random variable, if I do this procedure for long enough, the list of numbers I get from the procedure will be distributed like a large number of draws from something that generates normal random variables. This would give me a Markov Chain Monte Carlo simulation for a standard normal random variable. If I used this to estimate probabilities, that would be a MCMC estimate." | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
I'd probably say something like this:
"Anytime we want to talk about probabilities, we're really integrating a density. In Bayesian analysis, a lot of the densities we come up with aren't analyticall |
449 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | Imagine you want to find a better strategy to beat your friends at the board game Monopoly. Simplify the stuff that matters in the game to the question: which properties do people land on most? The answer depends on the structure of the board, the rules of the game and the throws of two dice.
One way to answer the question is this. Just follow a single piece around the board as you throw the dice and follow the rules. Count how many times you land on each property (or program a computer to do the job for you). Eventually, if you have enough patience or you have programmed the rules well enough in your computer, you will build up a good picture of which properties get the most business. This should help you win more often.
What you have done is a Markov Chain Monte Carlo (MCMC) analysis. The board defines the rules. Where you land next depends only on where you are now, not on where you have been before, and the specific probabilities are determined by the distribution of throws of two dice. MCMC is the application of this idea to mathematical or physical systems like what tomorrow's weather will be or where a pollen grain being randomly buffeted by gas molecules will end up.
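A bare-bones version of that experiment in R (an added sketch: a plain 40-square board with two dice and none of Monopoly's special rules, so the counts come out roughly uniform; the real game's Chance cards and 'go to jail' are what make some squares busier than others):

set.seed(5)
n_squares <- 40
visits <- numeric(n_squares)
pos <- 1
for (turn in 1:100000) {
  roll <- sample(1:6, 1) + sample(1:6, 1)       # throw two dice
  pos <- (pos - 1 + roll) %% n_squares + 1      # move around the board, wrapping at the end
  visits[pos] <- visits[pos] + 1
}
round(visits / sum(visits), 4)                  # long-run share of landings on each square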
450 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | OK here's my best attempt at an informal and crude explanation.
A Markov Chain is a random process that has the property that the future depends only on the current state of the process and not the past i.e. it is memoryless. An example of a random process could be the stock exchange. An example of a Markov Chain would be a board game like Monopoly or Snakes and Ladders where your future position (after rolling the die) would depend only on where you started from before the roll, not any of your previous positions. A textbook example of a Markov Chain is the "drunkard's walk". Imagine somebody who is drunk and can move only left or right by one pace. The drunk moves left or right with equal probability. This is a Markov Chain where the drunk's future/next position depends only upon where he is at present.
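A couple of lines of R (an added sketch) make the drunkard's walk concrete: each new position is just the previous position plus a random step, so the future depends only on the present.

set.seed(6)
steps <- sample(c(-1, 1), 100, replace = TRUE)   # each pace: left or right with equal probability
position <- cumsum(steps)                        # running position after each step
plot(position, type = "s", xlab = "step", ylab = "position")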
Monte Carlo methods are computational algorithms (simply sets of instructions) which randomly sample from some process under study. They are a way of estimating something which is too difficult or time consuming to find deterministically. They're basically a form of computer simulation of some mathematical or physical process. The Monte Carlo moniker comes from the analogy between a casino and random number generation. Returning to our board game example earlier, perhaps we want to know if some properties on the Monopoly board are visited more often than others. A Monte Carlo experiment would involve rolling the dice repeatedly and counting the number of times you land on each property. It can also be used for calculating numerical integrals. (Very informally, we can think of an integral as the area under the graph of some function.) Monte Carlo integration works well on high-dimensional functions by taking a random sample of points of the function and calculating some type of average at these various points. By increasing the sample size, the law of large numbers tells us we can increase the accuracy of our approximation by covering more and more of the function.
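As a concrete illustration of Monte Carlo integration (an added sketch, not part of the original answer), here is the area under f(x) = x^2 on [0, 2] estimated as a sample average; the exact value is 8/3, about 2.67:

set.seed(7)
x <- runif(100000, 0, 2)    # random sample of points in the interval
mean(x^2) * 2               # average height times interval length; accuracy improves with sample size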
These two concepts can be put together to solve some difficult problems in areas such as Bayesian inference, computational biology, etc., where multi-dimensional integrals need to be calculated to solve common problems. The idea is to construct a Markov Chain which converges to the desired probability distribution after a number of steps. The state of the chain after a large number of steps is then used as a sample from the desired distribution and the process is repeated. There are many different MCMC algorithms which use different techniques for generating the Markov Chain. Common ones include the Metropolis-Hastings and the Gibbs Sampler.
451 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | Excerpt from Bayesian Methods for Hackers
The Bayesian landscape
When we set up a Bayesian inference problem with $N$ unknowns, we are implicitly creating an $N$-dimensional space for the prior distributions to exist in. Associated with the space is an additional dimension, which we can describe as the surface, or curve, of the space, that reflects the prior probability of a particular point. The surface of the space is defined by our prior distributions. For example, if we have two unknowns $p_1$ and $p_2$, and both are uniform on [0,5], the space created is a square of side length 5 and the surface is a flat plane that sits on top of the square (representing that every point is equally likely).
Alternatively, if the two priors are $\text{Exp}(3)$ and $\text{Exp}(10)$, then the space is all positive numbers on the 2-D plane, and the surface induced by the priors looks like a waterfall that starts at the point (0,0) and flows over the positive numbers.
The visualization below demonstrates this. The more dark red the color, the more prior probability that the unknowns are at that location. Conversely, areas with darker blue represent that our priors assign very low probability to the unknowns being there.
These are simple examples in 2D space, where our brains can understand surfaces well. In practice, spaces and surfaces generated by our priors can be much higher dimensional.
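For the exponential-prior example above, the prior surface can be evaluated on a grid with a few lines of R (an added sketch; the grid limits are arbitrary and the 3 and 10 are taken to be rate parameters):

p1 <- seq(0.01, 5, length.out = 100)
p2 <- seq(0.01, 5, length.out = 100)
surface <- outer(p1, p2, function(a, b) dexp(a, rate = 3) * dexp(b, rate = 10))  # product of the two priors
image(p1, p2, surface)   # highest near (0,0), falling away like the waterfall described above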
If these surfaces describe our prior distributions on the unknowns, what happens to our space after we have observed data $X$? The data $X$ does not change the space, but it changes the surface of the space by pulling and stretching the fabric of the surface to reflect where the true parameters likely live. More data means more pulling and stretching, and our original shape becomes mangled or insignificant compared to the newly formed shape. Less data, and our original shape is more present. Regardless, the resulting surface describes the posterior distribution. Again I must stress that it is, unfortunately, impossible to visualize this in larger dimensions. For two dimensions, the data essentially pushes up the original surface to make tall mountains. The amount of pushing up is resisted by the prior probability, so that less prior probability means more resistance. Thus in the double exponential-prior case above, a mountain (or multiple mountains) that might erupt near the (0,0) corner would be much higher than mountains that erupt closer to (5,5), since there is more resistance near (5,5). The mountain, or perhaps more generally, the mountain ranges, reflect the posterior probability of where the true parameters are likely to be found.
Suppose the priors mentioned above represent different parameters $\lambda$ of two Poisson distributions. We observe a few data points and visualize the new landscape.
The plot on the left is the deformed landscape with the $\text{Uniform}(0,5)$ priors, and the plot on the right is the deformed landscape with the exponential priors. The posterior landscapes look different from one another. The exponential-prior landscape puts very little posterior weight on values in the upper right corner: this is because the prior does not put much weight there, whereas the uniform-prior landscape is happy to put posterior weight there. Also, the highest point, corresponding to the darkest red, is biased towards (0,0) in the exponential case, which results from the exponential prior putting more prior weight in the (0,0) corner.
The black dot represents the true parameters. Even with 1 sample point, as was simulated above, the mountains attempt to contain the true parameter. Of course, inference with a sample size of 1 is incredibly naive, and choosing such a small sample size was only illustrative.
Exploring the landscape using MCMC
We should explore the deformed posterior space generated by our prior surface and observed data to find the posterior mountain ranges. However, we cannot naively search the space: any computer scientist will tell you that traversing $N$-dimensional space is exponentially difficult in $N$: the size of the space quickly blows up as we increase $N$ (see the curse of dimensionality). What hope do we have to find these hidden mountains? The idea behind MCMC is to perform an intelligent search of the space. To say "search" implies we are looking for a particular object, which is perhaps not an accurate description of what MCMC is doing. Recall: MCMC returns samples from the posterior distribution, not the distribution itself. Stretching our mountainous analogy to its limit, MCMC performs a task similar to repeatedly asking "How likely is this pebble I found to be from the mountain I am searching for?", and completes its task by returning thousands of accepted pebbles in hopes of reconstructing the original mountain. In MCMC and PyMC lingo, the returned sequence of "pebbles" are the samples, more often called the traces.
When I say MCMC intelligently searches, I mean MCMC will hopefully converge towards the areas of high posterior probability. MCMC does this by exploring nearby positions and moving into areas with higher probability. Again, perhaps "converge" is not an accurate term to describe MCMC's progression. Converging usually implies moving towards a point in space, but MCMC moves towards a broader area in the space and randomly walks in that area, picking up samples from that area.
At first, returning thousands of samples to the user might sound like an inefficient way to describe the posterior distributions. I would argue that this is extremely efficient. Consider the alternative possibilities:
Returning a mathematical formula for the "mountain ranges" would involve describing an N-dimensional surface with arbitrary peaks and valleys.
Returning the "peak" of the landscape, while mathematically possible and a sensible thing to do as the highest point corresponds to the most probable estimate of the unknowns, ignores the shape of the landscape, which we have previously argued is very important in determining posterior confidence in unknowns.
Besides computational reasons, likely the strongest reason for returning samples is that we can easily use The Law of Large Numbers to solve otherwise intractable problems. I postpone this discussion for the next chapter.
Algorithms to perform MCMC
There is a large family of algorithms that perform MCMC. At their simplest, most algorithms can be expressed at a high level as follows:
1. Start at current position.
2. Propose moving to a new position (investigate a pebble near you ).
3. Accept the position based on the position's adherence to the data
and prior distributions (ask if the pebble likely came from the mountain).
4. If you accept: Move to the new position. Return to Step 1.
5. After a large number of iterations, return the positions.
This way we move in the general direction towards the regions where the posterior distributions exist, and collect samples sparingly on the journey. Once we reach the posterior distribution, we can easily collect samples as they likely all belong to the posterior distribution.
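The recipe above can be sketched directly in R (an added illustration assuming the setting described earlier in this answer: a handful of Poisson observations for each of the two unknown rates, with the $\text{Exp}(3)$ and $\text{Exp}(10)$ priors taken as rate parameters; the data values here are made up):

set.seed(8)
x1 <- c(2, 1); x2 <- c(0, 1)                   # hypothetical Poisson observations for the two unknowns
log_post <- function(l) {
  if (any(l <= 0)) return(-Inf)
  sum(dpois(x1, l[1], log = TRUE)) + sum(dpois(x2, l[2], log = TRUE)) +
    dexp(l[1], 3, log = TRUE) + dexp(l[2], 10, log = TRUE)
}
current <- c(1, 1)                             # step 1: start somewhere
samples <- matrix(NA, 5000, 2)
for (i in 1:5000) {
  proposal <- current + rnorm(2, sd = 0.3)     # step 2: propose a nearby position
  if (log(runif(1)) < log_post(proposal) - log_post(current)) {  # step 3: accept based on relative height
    current <- proposal                        # step 4: move; otherwise stay put
  }
  samples[i, ] <- current
}
colMeans(samples[-(1:500), ])                  # step 5: summarise the positions, discarding early draws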
If the current position of the MCMC algorithm is in an area of extremely low probability, which is often the case when the algorithm begins (typically at a random location in the space), the algorithm will move to positions that are likely not from the posterior but better than everything else nearby. Thus the first moves of the algorithm are not reflective of the posterior.
452 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | So there are plenty of answers here paraphrased from statistics/probability textbooks, Wikipedia, etc. I believe we have "laypersons" where I work; I think they are in the marketing department. If I ever have to explain anything technical to them, I apply the rule "show don't tell." With that rule in mind, I would probably show them something like this.
The idea here is to try to code an algorithm that I can teach to spell--not by learning all of the hundreds (thousands?) of rules like When adding an ending to a word that ends with a silent e, drop the final e if the ending begins with a vowel. One reason that won't work is I don't know those rules (I'm not even sure the one I just recited is correct). Instead I am going to teach it to spell by showing it a bunch of correctly spelled words and letting it extract the rules from those words, which is more or less the essence of Machine Learning, regardless of the algorithm--pattern extraction and pattern recognition.
The success criterion is correctly spelling a word the algorithm has never seen before (I realize that can happen by pure chance, but that won't occur to the marketing guys, so I'll ignore it--plus I am going to have the algorithm attempt to spell not one word, but a lot, so it's not likely we'll be deceived by a few lucky guesses).
An hour or so ago, I downloaded the Herman Hesse novel Siddhartha as a plain text file from the excellent Project Gutenberg site. I'll use the words in this novel to teach the algorithm how to spell.
So I coded the algorithm below that scanned this novel, three letters at a time (each word has one additional character at the end, which is 'whitespace', or the end of the word). Three-letter sequences can tell you a lot--for instance, the letter 'q' is nearly always followed by 'u'; the sequence 'ty' usually occurs at the end of a word; z rarely does, and so forth. (Note: I could just as easily have fed it entire words in order to train it to speak in complete sentences--exactly the same idea, just a few tweaks to the code.)
None of this involves MCMC though; that happens after training, when we give the algorithm a few random letters (as a seed) and it begins forming 'words'. How does the algorithm build words? Imagine that it has the block 'qua'; what letter does it add next? During training, the algorithm constructed a massive letter-sequence frequency matrix from all of the thousands of words in the novel. Somewhere in that matrix is the three-letter block 'qua' and the frequencies for the characters that could follow the sequence. The algorithm selects the next letter based on those frequencies. So the letter that the algorithm selects next depends on--and solely on--the last three letters in its word-construction queue.
So that's a Markov Chain Monte Carlo algorithm.
I think perhaps the best way to illustrate how it works is to show the results based on different levels of training. Training level is varied by changing the number of passes the algorithm makes through the novel--the more passes through, the greater the fidelity of its letter-sequence frequency matrices. Below are the results--in the form of 100-character strings output by the algorithm--after training on the novel 'Siddhartha'.
A single pass through the novel, Siddhartha:
then whoicks ger wiff all mothany stand ar you livid theartim mudded
sullintionexpraid his sible his
(Straight away, it's learned to speak almost perfect Welsh; I hadn't expected that.)
After two passes through the novel:
the ack wor prenskinith show wass an twor seened th notheady theatin land
rhatingle was the ov there
After 10 passes:
despite but the should pray with ack now have water her dog lever pain feet
each not the weak memory
And here's the code (in Python; I'm nearly certain that this could be done in R, using one of the several MCMC packages available, in just 3-4 lines)
import re
from random import sample

def create_words_string(raw_string) :
    """ in case I wanted to use training data in sentence/paragraph form;
        this function will parse a raw text string into a nice list of words;
        filtering: keep only words having more than 3 letters and remove
        punctuation, etc.
    """
    pattern = r'\b[A-Za-z]{3,}\b'
    pat_obj = re.compile(pattern)
    words = [ word.lower() for word in pat_obj.findall(raw_string) ]
    # also drop words made only of roman-numeral letters (chapter headings and the like)
    pattern = r'\b[vixlm]+\b'
    pat_obj = re.compile(pattern)
    return " ".join([ word for word in words if not pat_obj.search(word) ])

def create_markov_dict(words_string):
    # initialize variables
    wb1, wb2, wb3 = " ", " ", " "
    l1, l2, l3 = wb1, wb2, wb3
    dx = {}
    # map each three-letter block to the list of characters observed right after it
    for ch in words_string :
        dx.setdefault( (l1, l2, l3), [] ).append(ch)
        l1, l2, l3 = l2, l3, ch
    return dx

def generate_newtext(markov_dict) :
    simulated_text = ""
    l1, l2, l3 = " ", " ", " "
    for c in range(100) :
        # draw the next letter according to the observed frequencies
        next_letter = sample( markov_dict[(l1, l2, l3)], 1)[0]
        simulated_text += next_letter
        l1, l2, l3 = l2, l3, next_letter
    return simulated_text

if __name__=="__main__" :
    # the file name is just a placeholder--point it at the plain-text copy of
    # the novel downloaded from Project Gutenberg
    with open("siddhartha.txt") as fh :
        raw_str = fh.read()
    # n = number of passes through the training text
    n = 1
    q1 = create_words_string(n * raw_str)
    q2 = create_markov_dict(q1)
    q3 = generate_newtext(q2)
    print(q3)
So there are plenty of answers here paraphrased from statistics/probability textbooks, Wikipedia, etc. I believe we have "laypersons" where I work; I think they are in the marketing department. If I ever have to explain anything technical to them, I apply the rule "show don't tell." With that rule in mind, I would probably show them something like this.
The idea here is to try to code an algorithm that I can teach to spell--not by learning all of the hundreds (thousands?) of rules like When adding an ending to a word that ends with a silent e, drop the final e if the ending begins with a vowel. One reason that won't work is I don't know those rules (I'm not even sure the one I just recited is correct). Instead I am going to teach it to spell by showing it a bunch of correctly spelled words and letting it extract the rules from those words, which is more or less the essence of Machine Learning, regardless of the algorithm--pattern extraction and pattern recognition.
The success criterion is correctly spelling a word the algorithm has never seen before (I realize that can happen by pure chance, but that won't occur to the marketing guys, so I'll ignore it--plus I am going to have the algorithm attempt to spell not one word but a lot, so it's not likely we'll be deceived by a few lucky guesses).
An hour or so ago, I downloaded (as a plain text file) from the excellent Project Gutenberg Site, the Herman Hesse novel Siddhartha. I'll use the words in this novel to teach the algorithm how to spell.
So I coded the algorithm below that scanned this novel, three letters at a time (each word has one additional character at the end, which is 'whitespace', or the end of the word). Three-letter sequences can tell you a lot--for instance, the letter 'q' is nearly always followed by 'u'; the sequence 'ty' usually occurs at the end of a word; z rarely does, and so forth. (Note: I could just as easily have fed it entire words in order to train it to speak in complete sentences--exactly the same idea, just a few tweaks to the code.)
None of this involves MCMC though; that happens after training, when we give the algorithm a few random letters (as a seed) and it begins forming 'words'. How does the algorithm build words? Imagine that it has the block 'qua'; what letter does it add next? During training, the algorithm constructed a massive *letter-sequence frequency matrix* from all of the thousands of words in the novel. Somewhere in that matrix is the three-letter block 'qua' and the frequencies for the characters that could follow the sequence. The algorithm selects the next letter from among those candidates, weighted by those frequencies. So the letter that the algorithm selects next depends on--and solely on--the last three letters in its word-construction queue.
So that's a Markov Chain Monte Carlo algorithm.
I think perhaps the best way to illustrate how it works is to show the results based on different levels of training. Training level is varied by changing the number of passes the algorithm makes through the novel--the more passes through, the greater the fidelity of its letter-sequence frequency matrices. Below are the results--in the form of 100-character strings output by the algorithm--after training on the novel 'Siddhartha'.
A single pass through the novel, Siddhartha:
then whoicks ger wiff all mothany stand ar you livid theartim mudded
sullintionexpraid his sible his
(Straight away, it's learned to speak almost perfect Welsh; I hadn't expected that.)
After two passes through the novel:
the ack wor prenskinith show wass an twor seened th notheady theatin land
rhatingle was the ov there
After 10 passes:
despite but the should pray with ack now have water her dog lever pain feet
each not the weak memory
And here's the code (in Python; I'm nearly certain that this could be done in R, using one of the several MCMC packages available, in just 3-4 lines)
import re
from random import sample

def create_words_string(raw_string) :
    """ in case I wanted to use training data in sentence/paragraph form;
        this function will parse a raw text string into a nice list of words;
        filtering: keep only words having more than 3 letters and remove
        punctuation, etc.
    """
    pattern = r'\b[A-Za-z]{3,}\b'
    pat_obj = re.compile(pattern)
    words = [ word.lower() for word in pat_obj.findall(raw_string) ]
    # also drop words made only of roman-numeral letters (chapter headings and the like)
    pattern = r'\b[vixlm]+\b'
    pat_obj = re.compile(pattern)
    return " ".join([ word for word in words if not pat_obj.search(word) ])

def create_markov_dict(words_string):
    # initialize variables
    wb1, wb2, wb3 = " ", " ", " "
    l1, l2, l3 = wb1, wb2, wb3
    dx = {}
    # map each three-letter block to the list of characters observed right after it
    for ch in words_string :
        dx.setdefault( (l1, l2, l3), [] ).append(ch)
        l1, l2, l3 = l2, l3, ch
    return dx

def generate_newtext(markov_dict) :
    simulated_text = ""
    l1, l2, l3 = " ", " ", " "
    for c in range(100) :
        # draw the next letter according to the observed frequencies
        next_letter = sample( markov_dict[(l1, l2, l3)], 1)[0]
        simulated_text += next_letter
        l1, l2, l3 = l2, l3, next_letter
    return simulated_text

if __name__=="__main__" :
    # the file name is just a placeholder--point it at the plain-text copy of
    # the novel downloaded from Project Gutenberg
    with open("siddhartha.txt") as fh :
        raw_str = fh.read()
    # n = number of passes through the training text
    n = 1
    q1 = create_words_string(n * raw_str)
    q2 = create_markov_dict(q1)
    q3 = generate_newtext(q2)
    print(q3)
So there are plenty of answers here paraphrased from statistics/probability textbooks, Wikipedia, etc. I believe we have "laypersons" where I work; I think they are in the marketing department. If I e |
453 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | MCMC is typically used as an alternative to crude Monte Carlo simulation techniques. Both MCMC and other Monte Carlo techniques are used to evaluate difficult integrals but MCMC can be used more generally.
For example, a common problem in statistics is to calculate the mean outcome relating to some probabilistic/stochastic model. Both MCMC and Monte Carlo techniques would solve this problem by generating a sequence of simulated outcomes that we could use to estimate the true mean.
Both MCMC and crude Monte Carlo techniques work because the long-run proportion of simulations that are equal to a given outcome will be equal* to the modelled probability of that outcome. Therefore, by generating enough simulations, the results produced by both methods will be accurate.
*I say equal although in general I should talk about measurable sets. A layperson, however, probably wouldn't be interested in this*
However, while crude Monte Carlo involves producing many independent simulations, each of which is distributed according to the modelled distribution, MCMC involves generating a random walk that in the long-run "visits" each outcome with the desired frequency.
The trick to MCMC, therefore, is picking a random walk that will "visit" each outcome with the desired long-run frequencies.
A simple example might be to simulate from a model that says the probability of outcome "A" is 0.5 and of outcome "B" is 0.5. In this case, if I started the random walk at position "A" and prescribed that in each step it switched to the other position with probability 0.2 (or any other probability that is greater than 0), I could be sure that after a large number of steps the random walk would have visited each of "A" and "B" in roughly 50% of steps--consistent with the probabilities prescribed by our model.
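To make that two-state random walk concrete, here is a minimal sketch in Python (my own illustration, not part of the original answer); the 0.2 switching probability is the number from the example above, and the step count is just an arbitrary "large number of steps":
from random import random

def simulate_two_state_chain(n_steps=100_000, p_switch=0.2):
    # random walk over the outcomes "A" and "B" that switches with probability p_switch
    state = "A"                      # start the walk at outcome "A"
    visits = {"A": 0, "B": 0}
    for _ in range(n_steps):
        if random() < p_switch:      # occasionally switch to the other outcome
            state = "B" if state == "A" else "A"
        visits[state] += 1           # record where the walk sits after this step
    return {k: v / n_steps for k, v in visits.items()}

print(simulate_two_state_chain())    # both long-run proportions come out near 0.5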
This is obviously a very boring example. However, it turns out that MCMC is often applicable in situations in which it is difficult to apply standard Monte Carlo or other techniques.
You can find an article that covers the basics of what it is and why it works here:
http://wellredd.uk/basics-markov-chain-monte-carlo/ | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | MCMC is typically used as an alternative to crude Monte Carlo simulation techniques. Both MCMC and other Monte Carlo techniques are used to evaluate difficult integrals but MCMC can be used more gener | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
MCMC is typically used as an alternative to crude Monte Carlo simulation techniques. Both MCMC and other Monte Carlo techniques are used to evaluate difficult integrals but MCMC can be used more generally.
For example, a common problem in statistics is to calculate the mean outcome relating to some probabilistic/stochastic model. Both MCMC and Monte Carlo techniques would solve this problem by generating a sequence of simulated outcomes that we could use to estimate the true mean.
Both MCMC and crude Monte Carlo techniques work because the long-run proportion of simulations that are equal to a given outcome will be equal* to the modelled probability of that outcome. Therefore, by generating enough simulations, the results produced by both methods will be accurate.
*I say equal although in general I should talk about measurable sets. A layperson, however, probably wouldn't be interested in this*
However, while crude Monte Carlo involves producing many independent simulations, each of which is distributed according to the modelled distribution, MCMC involves generating a random walk that in the long-run "visits" each outcome with the desired frequency.
The trick to MCMC, therefore, is picking a random walk that will "visit" each outcome with the desired long-run frequencies.
A simple example might be to simulate from a model that says the probability of outcome "A" is 0.5 and of outcome "B" is 0.5. In this case, if I started the random walk at position "A" and prescribed that in each step it switched to the other position with probability 0.2 (or any other probability that is greater than 0), I could be sure that after a large number of steps the random walk would have visited each of "A" and "B" in roughly 50% of steps--consistent with the probabilities prescribed by our model.
This is obviously a very boring example. However, it turns out that MCMC is often applicable in situations in which it is difficult to apply standard Monte Carlo or other techniques.
You can find an article that covers the basics of what it is and why it works here:
http://wellredd.uk/basics-markov-chain-monte-carlo/ | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
MCMC is typically used as an alternative to crude Monte Carlo simulation techniques. Both MCMC and other Monte Carlo techniques are used to evaluate difficult integrals but MCMC can be used more gener |
454 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | I'm a DNA analyst who uses fully continuous probabilistic genotyping software to interpret DNA evidence, and I have to explain how this works to a jury. Admittedly, we oversimplify, and I realize some of this oversimplification sacrifices accuracy of specific details in the name of improving overall understanding. But, within the context of a jury understanding how this process is used in DNA interpretation without academic degrees and years of professional experience, they get the gist :)
Background: The software uses Metropolis-Hastings MCMC and a biological model that mimics the known behavior of DNA profiles (the model is built upon validation data generated by the laboratory analyzing many DNA profiles from known conditions representing the range encountered in unknown casework). There are 8 independent chains, and we evaluate the convergence to determine whether to re-run with an increased burn-in and post-accepts (default burn-in 100k accepts and post 400k accepts)
When asked by prosecution/defense about MCMC: we explain it stands for Markov chain Monte Carlo and represents a special class/kind of algorithm used for complex problem-solving, and that an algorithm is just a fancy word referring to a series of procedures or a routine carried out by a computer... MCMC algorithms operate by proposing a solution, simulating that solution, then evaluating how well that simulation mirrors the actual evidence data being observed... a simulation that fits the evidence observation well has a higher probability than a simulation that does not fit the observation well... over many repeated samplings/guesses of proposed solutions, the Markov chains move away from the low-probability solutions toward the high-probability solutions that better fit/explain the observed evidence profile, until eventually equilibrium is achieved, meaning the algorithm has limited ability to sample new proposals yielding significantly increased probabilities
When asked about Metropolis-Hastings: we explain it's a refinement to the MCMC algorithm describing its decision-making process for accepting or rejecting a proposal... usually this is explained with the analogy of the "hot/cold" children's game, but I may have considered using "swipe right or left" when the jury is especially young!! :p But using our hot/cold analogy, we always accept a hot guess and will occasionally accept a cold guess a fraction of the time, and explain that the purpose of sometimes accepting the cold guess is to ensure the chains sample a wider range of possibilities as opposed to getting stuck around one particular proposal before actual equilibrium
Edited to add/clarify: with the hot/cold analogy we explain that in the children's game, the leader picks a target object/area within the room and the players take turns making guesses which direction to move relative to their current standing/position. The leader tells them to change their position/make the move if it's a hot guess and they lose their turn/stay in position if it's a cold guess. Similarly, within our software, the decision to move/accept depends only on the probability of the proposal compared to the probability of currently held position... HOWEVER, the target is pre-defined/known by the leader in the children's game whereas the target within our software isn't pre-defined--it's completely unknown (also why it's more important for our application to adequately sample the space and occasionally accept cold guesses)
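For anyone who wants the "hot/cold" rule written down, here is a generic sketch of the Metropolis acceptance step in Python (this is the textbook rule for a symmetric proposal, not the genotyping software's actual implementation, and the two probabilities are assumed to be ordinary positive numbers):
from random import random

def metropolis_accept(p_current, p_proposed):
    # "hot" guess: the proposal is at least as probable as where we stand, so always move
    if p_proposed >= p_current:
        return True
    # "cold" guess: still move a fraction of the time, in proportion to how cold it is
    return random() < p_proposed / p_current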
Like I said, super super basic and absolutely lacking technical detail for sake of improving comprehension--we strive for explaining at about a middle-school level of education. Feel free to make suggestions. I'll incorporate them. | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | I'm a DNA analyst that uses fully continuous probabilistic genotyping software to interpret DNA evidence and I have to explain how this works to a jury. Admittedly, we over simplify and I realize some | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
I'm a DNA analyst who uses fully continuous probabilistic genotyping software to interpret DNA evidence, and I have to explain how this works to a jury. Admittedly, we oversimplify, and I realize some of this oversimplification sacrifices accuracy of specific details in the name of improving overall understanding. But, within the context of a jury understanding how this process is used in DNA interpretation without academic degrees and years of professional experience, they get the gist :)
Background: The software uses Metropolis-Hastings MCMC and a biological model that mimics the known behavior of DNA profiles (the model is built upon validation data generated by the laboratory analyzing many DNA profiles from known conditions representing the range encountered in unknown casework). There are 8 independent chains, and we evaluate the convergence to determine whether to re-run with an increased burn-in and post-accepts (default burn-in 100k accepts and post 400k accepts)
When asked by prosecution/defense about MCMC: we explain it stands for Markov chain Monte Carlo and represents a special class/kind of algorithm used for complex problem-solving, and that an algorithm is just a fancy word referring to a series of procedures or a routine carried out by a computer... MCMC algorithms operate by proposing a solution, simulating that solution, then evaluating how well that simulation mirrors the actual evidence data being observed... a simulation that fits the evidence observation well has a higher probability than a simulation that does not fit the observation well... over many repeated samplings/guesses of proposed solutions, the Markov chains move away from the low-probability solutions toward the high-probability solutions that better fit/explain the observed evidence profile, until eventually equilibrium is achieved, meaning the algorithm has limited ability to sample new proposals yielding significantly increased probabilities
When asked about Metropolis-Hastings: we explain it's a refinement to the MCMC algorithm describing its decision-making process for accepting or rejecting a proposal... usually this is explained with the analogy of the "hot/cold" children's game, but I may have considered using "swipe right or left" when the jury is especially young!! :p But using our hot/cold analogy, we always accept a hot guess and will occasionally accept a cold guess a fraction of the time, and explain that the purpose of sometimes accepting the cold guess is to ensure the chains sample a wider range of possibilities as opposed to getting stuck around one particular proposal before actual equilibrium
Edited to add/clarify: with the hot/cold analogy we explain that in the children's game, the leader picks a target object/area within the room and the players take turns making guesses which direction to move relative to their current standing/position. The leader tells them to change their position/make the move if it's a hot guess and they lose their turn/stay in position if it's a cold guess. Similarly, within our software, the decision to move/accept depends only on the probability of the proposal compared to the probability of currently held position... HOWEVER, the target is pre-defined/known by the leader in the children's game whereas the target within our software isn't pre-defined--it's completely unknown (also why it's more important for our application to adequately sample the space and occasionally accept cold guesses)
Like I said, super super basic and absolutely lacking technical detail for sake of improving comprehension--we strive for explaining at about a middle-school level of education. Feel free to make suggestions. I'll incorporate them. | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
I'm a DNA analyst that uses fully continuous probabilistic genotyping software to interpret DNA evidence and I have to explain how this works to a jury. Admittedly, we over simplify and I realize some |
455 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | This question is broad yet the answers are often quite casual. Alternatively, you can see this paper which gives a concise mathematical description of a broad class of MCMC algorithms including Metropolis-Hastings algorithms, Gibbs sampling, Metropolis-within-Gibbs and auxiliary variables methods, slice sampling, recursive proposals, directional sampling, Langevin and Hamiltonian Monte Carlo, NUTS sampling, pseudo-marginal Metropolis-Hastings algorithms, and pseudo-marginal Hamiltonian Monte Carlo, as discussed by the authors.
A credible review is given here
I'll find more time to elaborate its content in the format of stackexchange. | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | This question is broad yet the answers are often quite casual. Alternatively, you can see this paper which gives a concise mathematical description of a broad class of MCMC algorithms including Metrop | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
This question is broad yet the answers are often quite casual. Alternatively, you can see this paper which gives a concise mathematical description of a broad class of MCMC algorithms including Metropolis-Hastings algorithms, Gibbs sampling, Metropolis-within-Gibbs and auxiliary variables methods, slice sampling, recursive proposals, directional sampling, Langevin and Hamiltonian Monte Carlo, NUTS sampling, pseudo-marginal Metropolis-Hastings algorithms, and pseudo-marginal Hamiltonian Monte Carlo, as discussed by the authors.
A credible review is given here
I'll find more time to elaborate its content in the format of stackexchange. | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
This question is broad yet the answers are often quite casual. Alternatively, you can see this paper which gives a concise mathematical description of a broad class of MCMC algorithms including Metrop |
456 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | First, we should explain Monte-Carlo sampling to the layperson. Imagine that you don't have the exact form of a function (for example, $z = f(x,y) = x^2 + 2xy$), but there is a machine in Europe (and Los Alamos) that replicates this function (numerically). We can put as many $(x,y)$ pairs into it as we like and it will give us the value $z$. This numerical repetition is sampling, and this process is a Monte-Carlo simulation of $f(x,y)$. After 10,000 iterations, we almost know what the function $f(x,y)$ is.
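As a tiny illustration of this "query the machine many times" idea, here is a sketch in Python (my own, not part of the original answer; the function body and the sampling range are just stand-ins):
import random

def machine(x, y):
    # stand-in for the remote 'machine' that evaluates the unknown function numerically
    return x**2 + 2 * x * y

# crude Monte-Carlo: query the machine at many random (x, y) pairs and collect
# the results; the cloud of (x, y, z) triples sketches out the shape of f
samples = []
for _ in range(10_000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    samples.append((x, y, machine(x, y)))
print(len(samples), samples[0])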
Assuming the layperson knows the Monte-Carlo, in MCMC you don't want to waste your CPU efforts / time when you are sampling from a multi-dimensional space $f(x,y,z,t,s,...,zzz)$, as the standard Monte-Carlo sampling does. The key difference is that in MCMC you need to have a Markov-chain as a map to guide your efforts.
This video (starting at 5:50) has a very good statement of intuition.
Imagine you want to sample points that are on the green (multi-dimensional) branches in this picture. If you throw points all over the black super-space and check their value, you are WASTING a lot of sampling (searching) energy. So it would make more sense to control your sampling strategy (which can be automated) to pick points closer to the green branches (where it matters). Green branches can be found by being hit once accidentally (or in a controlled way), and the rest of the sampling effort (red points) will be generated afterward. The reason the red gets attracted to the green line is the Markov chain transition matrix that works as your sampling engine.
So in layman's terms, MCMC is an energy-saving (low cost) sampling method, especially when working in a massive and 'dark' (multi-dimensional) space. | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | First, we should explain Monte-Carlo sampling to the layperson. Imagine when you don't have the exact form of a function (for example, $f(x,y)=z=x^2+2*x*y$) but there is a machine in Europe (and Los A | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
First, we should explain Monte-Carlo sampling to the layperson. Imagine that you don't have the exact form of a function (for example, $z = f(x,y) = x^2 + 2xy$), but there is a machine in Europe (and Los Alamos) that replicates this function (numerically). We can put as many $(x,y)$ pairs into it as we like and it will give us the value $z$. This numerical repetition is sampling, and this process is a Monte-Carlo simulation of $f(x,y)$. After 10,000 iterations, we almost know what the function $f(x,y)$ is.
Assuming the layperson knows the Monte-Carlo, in MCMC you don't want to waste your CPU efforts / time when you are sampling from a multi-dimensional space $f(x,y,z,t,s,...,zzz)$, as the standard Monte-Carlo sampling does. The key difference is that in MCMC you need to have a Markov-chain as a map to guide your efforts.
This video (starting at 5:50) has a very good statement of intuition.
Imagine you want to sample points that are on the green (multi-dimensional) branches in this picture. If you throw points all over the black super-space and check their value, you are WASTING a lot of sampling (searching) energy. So it would make more sense to control your sampling strategy (which can be automated) to pick points closer to the green branches (where it matters). Green branches can be found by being hit once accidentally (or in a controlled way), and the rest of the sampling effort (red points) will be generated afterward. The reason the red gets attracted to the green line is the Markov chain transition matrix that works as your sampling engine.
So in layman's terms, MCMC is an energy-saving (low cost) sampling method, especially when working in a massive and 'dark' (multi-dimensional) space. | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
First, we should explain Monte-Carlo sampling to the layperson. Imagine when you don't have the exact form of a function (for example, $f(x,y)=z=x^2+2*x*y$) but there is a machine in Europe (and Los A |
457 | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | We want to know the posterior distribution $P(\theta)$ and where modes are, this is the goal.
But we cannot calculate $P(\theta)$ analytically, this is the problem.
However, we can build a Markov Chain.
Sampling from the Markov Chain builds the histogram, and
The histogram approximates $P(\theta)$, this is the solution.
Surprisingly (For Metropolis Hastings) :
The prior can be an arbitrary normal distribution, although the choice has an impact on the results.
The proposal can be an arbitrary normal distribution whose mean is the current $\theta$.
The questions and confusions may be:
How come arbitrary normal distribution can be used as the prior?
How come the chance of transition from $\theta_i$ to $\theta_j$ in the Markov Chain $$q(\theta_j | \theta_i) \times \min\left(1, \frac { \textrm{prior}(\theta_j) \times L(E|\theta_j) \times q(\theta_i | \theta_j)} { \textrm{prior}(\theta_i) \times L(E|\theta_i) \times q(\theta_j | \theta_i)} \right) $$ can be equivalent with $P(\theta_j|\theta_i)$?
Analogy
There is a mountain and you want to know the approximate shape and where the peaks are, this is the goal.
But you cannot see nor measure the mountain, this is the problem.
You have a GPS so you know the coordinate of wherever you go, and you know you are climbing or descending when you make a next step.
Now, you do the random walk over the mountain, and plot $\theta_t = (latitude, longitude)$ every step you make, but you follow one rule: Climb more often than descend.
After 100,000,000 steps, the density of the plots tells you where the peaks are and gives a rough idea of the shape of the mountain.
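Here is a minimal one-dimensional sketch of that hike in Python (my own illustration, not part of the original answer; the two-peaked "mountain" function is invented purely so the histogram has something to find):
import random, math

def mountain(theta):
    # unnormalized 'mountain' with two peaks, chosen only for illustration
    return math.exp(-(theta - 2) ** 2) + 0.5 * math.exp(-(theta + 2) ** 2)

def random_walk_metropolis(n_steps=100_000, step_size=1.0):
    theta = 0.0                       # start the hike at an arbitrary spot
    trail = []
    for _ in range(n_steps):
        proposal = theta + random.gauss(0, step_size)    # propose a nearby step
        # climb (higher density) always; descend only part of the time
        if random.random() < mountain(proposal) / mountain(theta):
            theta = proposal
        trail.append(theta)           # plot every position we stand on
    return trail

trail = random_walk_metropolis()
# a histogram of `trail` piles up around the two peaks (near -2 and +2),
# i.e. it approximates the shape of the mountain (the posterior)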
What can be confusing when learning MCMC
Text book or article that:
Immediately starts with math formulas on how to do MCMC with no intuition nor idea behind explained. If we understand the ideas first, we can derive the math formulas. But it is super hard to reverse-engineer the ideas and intuitions from math formulas.
Does not give the intuition or analogy that a probability distribution with modes is similar to a mountain with peaks, and that generating more samples around the modes is the essence of MCMC, which is equivalent to visiting the mountain peaks more often by tending to climb more than descend in the mountain random-walk.
Does not explain how and why we can build a Markov Chain that generates samples which approximates $P(\theta)$, and how the detailed balance condition plays the role there.
Does not explain the role of proposal distribution q, which is simulating the transition from state $\theta_i$ to $\theta_j$ in the Markov Chain.
Does not explain why we can use a normal distribution as $prior(\theta)$.
Related
The Youtube video Markov Chain Monte Carlo gives the intuition that MCMC is like trekking over a mountain to figure out its shape by random-walking it. Having such a concrete image will give the foundation to understand what MCMC is.
Real-life example in which Markov chain Monte Carlo is desirable? gives several links to the real life examples.
The Markov Chain Monte Carlo Revolution and CS168: The Modern Algorithmic Toolbox Lecture #14: Markov Chain Monte Carlo gives the MCMC application to cryptography.
Decrypting Substitution Cyphers provides the implementation of the MCMC application to cryptography.
MCMC Explained and Applied to Logistic Regression provides the MCMC implementation of logistic regression to find the parameters to fit the model. | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | We want to know the posterior distribution $P(\theta)$ and where modes are, this is the goal.
But we cannot calculate $P(\theta)$ analytically, this is the problem.
However, we can build a Markov Cha | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
We want to know the posterior distribution $P(\theta)$ and where modes are, this is the goal.
But we cannot calculate $P(\theta)$ analytically, this is the problem.
However, we can build a Markov Chain.
Sampling from the Markov Chain builds the histogram, and
The histogram approximates $P(\theta)$, this is the solution.
Surprisingly (For Metropolis Hastings) :
The prior can be an arbitrary normal distribution, although the choice has an impact on the results.
The proposal can be an arbitrary normal distribution whose mean is the current $\theta$.
The questions and confusions may be:
How come arbitrary normal distribution can be used as the prior?
How come the chance of transition from $\theta_i$ to $\theta_j$ in the Markov Chain $$q(\theta_j | \theta_i) \times \min\left(1, \frac { \textrm{prior}(\theta_j) \times L(E|\theta_j) \times q(\theta_i | \theta_j)} { \textrm{prior}(\theta_i) \times L(E|\theta_i) \times q(\theta_j | \theta_i)} \right) $$ can be equivalent with $P(\theta_j|\theta_i)$?
Analogy
There is a mountain and you want to know the approximate shape and where the peaks are, this is the goal.
But you cannot see nor measure the mountain, this is the problem.
You have a GPS so you know the coordinate of wherever you go, and you know you are climbing or descending when you make a next step.
Now, you do the random walk over the mountain, and plot $\theta_t = (latitude, longitude)$ every step you make, but you follow one rule: Climb more often than descend.
After 100,000,000 steps, the density of the plots tells you where the peaks are and gives a rough idea of the shape of the mountain.
What can be confusing when learning MCMC
Text book or article that:
Immediately starts with math formulas on how to do MCMC with no intuition nor idea behind explained. If we understand the ideas first, we can derive the math formulas. But it is super hard to reverse-engineer the ideas and intuitions from math formulas.
Does not give the intuition or analogy that a probability distribution with modes is similar to a mountain with peaks, and that generating more samples around the modes is the essence of MCMC, which is equivalent to visiting the mountain peaks more often by tending to climb more than descend in the mountain random-walk.
Does not explain how and why we can build a Markov Chain that generates samples which approximates $P(\theta)$, and how the detailed balance condition plays the role there.
Does not explain the role of proposal distribution q, which is simulating the transition from state $\theta_i$ to $\theta_j$ in the Markov Chain.
Does not explain why we can use a normal distribution as $prior(\theta)$.
Related
The Youtube video Markov Chain Monte Carlo gives the intuition that MCMC is like trekking over a mountain to figure out its shape by random-walking it. Having such a concrete image will give the foundation to understand what MCMC is.
Real-life example in which Markov chain Monte Carlo is desirable? gives several links to the real life examples.
The Markov Chain Monte Carlo Revolution and CS168: The Modern Algorithmic Toolbox Lecture #14: Markov Chain Monte Carlo gives the MCMC application to cryptography.
Decrypting Substitution Cyphers provides the implementation of the MCMC application to cryptography.
MCMC Explained and Applied to Logistic Regression provides the MCMC implementation of logistic regression to find the parameters to fit the model. | How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
We want to know the posterior distribution $P(\theta)$ and where modes are, this is the goal.
But we cannot calculate $P(\theta)$ analytically, this is the problem.
However, we can build a Markov Cha |
458 | How to know that your machine learning problem is hopeless? | Forecastability
You are right that this is a question of forecastability. There have been a few articles on forecastability in the IIF's practitioner-oriented journal Foresight. (Full disclosure: I'm an Associate Editor.)
The problem is that forecastability is already hard to assess in "simple" cases.
A few examples
Suppose you have a time series like this but don't speak German:
How would you model the large peak in April, and how would you include this information in any forecasts?
Unless you knew that this time series is the sales of eggs in a Swiss supermarket chain, which peaks right before western calendar Easter, you would not have a chance. Plus, with Easter moving around the calendar by as much as six weeks, any forecasts that don't include the specific date of Easter (by assuming, say, that this was just some seasonal peak that would recur in a specific week next year) would probably be very off.
Similarly, assume you have the blue line below and want to model whatever happened on 2010-02-28 so differently from "normal" patterns on 2010-02-27:
Again, without knowing what happens when a whole city full of Canadians watches an Olympic ice hockey finals game on TV, you have no chance whatsoever to understand what happened here, and you won't be able to predict when something like this will recur.
Finally, look at this:
This is a time series of daily sales at a cash and carry store. (On the right, you have a simple table: 282 days had zero sales, 42 days saw sales of 1... and one day saw sales of 500.) I don't know what item it is.
To this day, I don't know what happened on that one day with sales of 500. My best guess is that some customer pre-ordered a large amount of whatever product this was and collected it. Now, without knowing this, any forecast for this particular day will be far off. Conversely, assume that this happened right before Easter, and we have a dumb-smart algorithm that believes this could be an Easter effect (maybe these are eggs?) and happily forecasts 500 units for the next Easter. Oh my, could that go wrong.
Summary
In all cases, we see how forecastability can only be well understood once we have a sufficiently deep understanding of likely factors that influence our data. The problem is that unless we know these factors, we don't know that we may not know them. As per Donald Rumsfeld:
[T]here are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know.
If Easter or Canadians' predilection for Hockey are unknown unknowns to us, we are stuck - and we don't even have a way forward, because we don't know what questions we need to ask.
The only way of getting a handle on these is to gather domain knowledge.
Conclusions
I draw three conclusions from this:
You always need to include domain knowledge in your modeling and prediction.
Even with domain knowledge, you are not guaranteed to get enough information for your forecasts and predictions to be acceptable to the user. See that outlier above.
If "your results are miserable", you may be hoping for more than you can achieve. If you are forecasting a fair coin toss, then there is no way to get above 50% accuracy. Don't trust external forecast accuracy benchmarks, either.
The Bottom Line
Here is how I would recommend building models - and noticing when to stop:
Talk to someone with domain knowledge if you don't already have it yourself.
Identify the main drivers of the data you want to forecast, including likely interactions, based on step 1.
Build models iteratively, including drivers in decreasing order of strength as per step 2. Assess models using cross-validation or a holdout sample (a generic sketch of this assessment step follows the list).
If your prediction accuracy does not increase any further, either go back to step 1 (e.g., by identifying blatant mis-predictions you can't explain, and discussing these with the domain expert), or accept that you have reached the end of your models' capabilities. Time-boxing your analysis in advance helps.
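As a deliberately generic sketch of step 3, here is one way to add drivers one at a time and assess each model with cross-validation (the synthetic data, the linear model and scikit-learn itself are my own choices for the sketch, not anything prescribed above):
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # stand-in drivers, strongest first
y = 2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)

for k in range(1, X.shape[1] + 1):             # add drivers in decreasing order of strength
    cv = cross_val_score(LinearRegression(), X[:, :k], y,
                         cv=5, scoring="neg_mean_absolute_error")
    print(f"{k} driver(s): cross-validated MAE = {-cv.mean():.3f}")
# stop adding drivers once the cross-validated error no longer improves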
Note that I am not advocating trying different classes of models if your original model plateaus. Typically, if you started out with a reasonable model, using something more sophisticated will not yield a strong benefit and may simply be "overfitting on the test set". I have seen this often, and other people agree. | How to know that your machine learning problem is hopeless? | Forecastability
You are right that this is a question of forecastability. There have been a few articles on forecastability in the IIF's practitioner-oriented journal Foresight. (Full disclosure: I'm | How to know that your machine learning problem is hopeless?
Forecastability
You are right that this is a question of forecastability. There have been a few articles on forecastability in the IIF's practitioner-oriented journal Foresight. (Full disclosure: I'm an Associate Editor.)
The problem is that forecastability is already hard to assess in "simple" cases.
A few examples
Suppose you have a time series like this but don't speak German:
How would you model the large peak in April, and how would you include this information in any forecasts?
Unless you knew that this time series is the sales of eggs in a Swiss supermarket chain, which peaks right before western calendar Easter, you would not have a chance. Plus, with Easter moving around the calendar by as much as six weeks, any forecasts that don't include the specific date of Easter (by assuming, say, that this was just some seasonal peak that would recur in a specific week next year) would probably be very off.
Similarly, assume you have the blue line below and want to model whatever happened on 2010-02-28 so differently from "normal" patterns on 2010-02-27:
Again, without knowing what happens when a whole city full of Canadians watches an Olympic ice hockey finals game on TV, you have no chance whatsoever to understand what happened here, and you won't be able to predict when something like this will recur.
Finally, look at this:
This is a time series of daily sales at a cash and carry store. (On the right, you have a simple table: 282 days had zero sales, 42 days saw sales of 1... and one day saw sales of 500.) I don't know what item it is.
To this day, I don't know what happened on that one day with sales of 500. My best guess is that some customer pre-ordered a large amount of whatever product this was and collected it. Now, without knowing this, any forecast for this particular day will be far off. Conversely, assume that this happened right before Easter, and we have a dumb-smart algorithm that believes this could be an Easter effect (maybe these are eggs?) and happily forecasts 500 units for the next Easter. Oh my, could that go wrong.
Summary
In all cases, we see how forecastability can only be well understood once we have a sufficiently deep understanding of likely factors that influence our data. The problem is that unless we know these factors, we don't know that we may not know them. As per Donald Rumsfeld:
[T]here are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know.
If Easter or Canadians' predilection for Hockey are unknown unknowns to us, we are stuck - and we don't even have a way forward, because we don't know what questions we need to ask.
The only way of getting a handle on these is to gather domain knowledge.
Conclusions
I draw three conclusions from this:
You always need to include domain knowledge in your modeling and prediction.
Even with domain knowledge, you are not guaranteed to get enough information for your forecasts and predictions to be acceptable to the user. See that outlier above.
If "your results are miserable", you may be hoping for more than you can achieve. If you are forecasting a fair coin toss, then there is no way to get above 50% accuracy. Don't trust external forecast accuracy benchmarks, either.
The Bottom Line
Here is how I would recommend building models - and noticing when to stop:
Talk to someone with domain knowledge if you don't already have it yourself.
Identify the main drivers of the data you want to forecast, including likely interactions, based on step 1.
Build models iteratively, including drivers in decreasing order of strength as per step 2. Assess models using cross-validation or a holdout sample.
If your prediction accuracy does not increase any further, either go back to step 1 (e.g., by identifying blatant mis-predictions you can't explain, and discussing these with the domain expert), or accept that you have reached the end of your models' capabilities. Time-boxing your analysis in advance helps.
Note that I am not advocating trying different classes of models if your original model plateaus. Typically, if you started out with a reasonable model, using something more sophisticated will not yield a strong benefit and may simply be "overfitting on the test set". I have seen this often, and other people agree. | How to know that your machine learning problem is hopeless?
Forecastability
You are right that this is a question of forecastability. There have been a few articles on forecastability in the IIF's practitioner-oriented journal Foresight. (Full disclosure: I'm |
459 | How to know that your machine learning problem is hopeless? | The answer from Stephan Kolassa is excellent, but I would like to add that there is also often an economic stop condition:
When you are doing ML for a customer and not for fun, you should take a look at the amount of money the customer is willing to spend. If he pays your firm 5000€ and you spend a month on finding a model, you will lose money. Sounds trivial, but I have seen "there must be a solution!!!!"-thinking which led to huge cost overruns. So stop when the money is out and communicate the problem to your customer.
If you have done some work, you often have a feeling what is possible with the current dataset. Try to apply that to the amount of money you can earn with the model, if the amount is trivial or a net negative (e.g. due to the time to collect data, develop a solution etc.) you should stop.
As an example: we had a customer who wanted to predict when his machines break. We analyzed the existing data and found essentially noise. We dug into the process and found that the most critical data was not recorded and was very difficult to collect. But without that data, our model was so poor that nobody would have used it, and it was canned.
While I focused on the economics when working on a commercial product, this rule also applies to academia or for fun projects - while money is less of a concern in such circumstances, time is still a rare commodity.
E.g., in academia you should stop working when you produce no tangible results and you have other, more promising projects you could do. But do not drop that project - please also publish null or "need more / other data" results; they are important, too!
When you are doing ML for a customer and not for fun, you should take a look | How to know that your machine learning problem is hopeless?
The answer from Stephan Kolassa is excellent, but I would like to add that there is also often an economic stop condition:
When you are doing ML for a customer and not for fun, you should take a look at the amount of money the customer is willing to spend. If he pays your firm 5000€ and you spend a month on finding a model, you will lose money. Sounds trivial, but I have seen "there must be a solution!!!!"-thinking which led to huge cost overruns. So stop when the money is out and communicate the problem to your customer.
If you have done some work, you often have a feeling what is possible with the current dataset. Try to apply that to the amount of money you can earn with the model, if the amount is trivial or a net negative (e.g. due to the time to collect data, develop a solution etc.) you should stop.
As an example: we had a customer who wanted to predict when his machines break. We analyzed the existing data and found essentially noise. We dug into the process and found that the most critical data was not recorded and was very difficult to collect. But without that data, our model was so poor that nobody would have used it, and it was canned.
While I focused on the economics when working on a commercial product, this rule also applies to academia or for fun projects - while money is less of a concern in such circumstances, time is still a rare commodity.
E.g., in academia you should stop working when you produce no tangible results and you have other, more promising projects you could do. But do not drop that project - please also publish null or "need more / other data" results; they are important, too!
The answer from Stephan Kolassa is excellent, but I would like to add that there is also often an economic stop condition:
When you are doing ML for a customer and not for fun, you should take a look |
460 | How to know that your machine learning problem is hopeless? | There is another way. Ask yourself -
"Who or what makes the best possible forecasts of this particular variable?"
Does my machine learning algorithm produce better or worse results than the best forecasts?
So, for example, if you had a large number of variables associated with different soccer teams and you were trying to forecast who would win, you might look at bookmaker odds or some form of crowd sourced prediction to compare with the results of your machine learning algorithm. If you are better you might be at the limit, if worse then clearly there is room for improvement.
Your ability to improve depends (broadly) on two things:
Are you using the same data as the best expert at this particular task?
Are you using the data as effectively as the best expert at this particular task?
It depends on exactly what I'm trying to do, but I tend to use the answers to these questions to drive the direction I go in when building a model, particularly whether to try and extract more data that I can use or to concentrate on trying to refine the model.
I agree with Stephan that usually the best way of doing this is to ask a domain expert. | How to know that your machine learning problem is hopeless? | There is another way. Ask yourself -
Who or what makes the best possible forecasts of this particular variable?"
Does my machine learning algorithm produce better or worse results than the best fo | How to know that your machine learning problem is hopeless?
There is another way. Ask yourself -
"Who or what makes the best possible forecasts of this particular variable?"
Does my machine learning algorithm produce better or worse results than the best forecasts?
So, for example, if you had a large number of variables associated with different soccer teams and you were trying to forecast who would win, you might look at bookmaker odds or some form of crowd sourced prediction to compare with the results of your machine learning algorithm. If you are better you might be at the limit, if worse then clearly there is room for improvement.
Your ability to improve depends (broadly) on two things:
Are you using the same data as the best expert at this particular task?
Are you using the data as effectively as the best expert at this particular task?
It depends on exactly what I'm trying to do, but I tend to use the answers to these questions to drive the direction I go in when building a model, particularly whether to try and extract more data that I can use or to concentrate on trying to refine the model.
I agree with Stephan that usually the best way of doing this is to ask a domain expert. | How to know that your machine learning problem is hopeless?
There is another way. Ask yourself -
Who or what makes the best possible forecasts of this particular variable?"
Does my machine learning algorithm produce better or worse results than the best fo |
461 | What are the differences between Factor Analysis and Principal Component Analysis? | Principal component analysis involves extracting linear composites of observed variables.
Factor analysis is based on a formal model predicting observed variables from theoretical latent factors.
In psychology these two techniques are often applied in the construction of multi-scale tests
to determine which items load on which scales.
They typically yield similar substantive conclusions (for a discussion see Comrey (1988) Factor-Analytic Methods of Scale Development in Personality and Clinical Psychology).
This helps to explain why some statistics packages seem to bundle them together.
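To see that similarity, and the conceptual difference, in code, here is a small sketch (my own illustration, not part of the original answer) fitting both models to the same correlated toy data with scikit-learn:
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 1))                      # one "true" latent factor
X = latent @ np.array([[0.9, 0.8, 0.7]]) + 0.4 * rng.normal(size=(500, 3))

pca = PCA(n_components=1).fit(X)                        # linear composite of the observed variables
fa = FactorAnalysis(n_components=1).fit(X)              # latent factor predicting the observed variables

print("PCA component:", pca.components_)
print("FA loadings  :", fa.components_)                 # typically tell a similar substantive story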
I have also seen situations where "principal component analysis" is incorrectly labelled "factor analysis".
In terms of a simple rule of thumb, I'd suggest that you:
Run factor analysis if you assume or wish to test a theoretical model of latent factors causing observed variables.
Run principal component analysis if you want to simply reduce your correlated observed variables to a smaller set of important independent composite variables. | What are the differences between Factor Analysis and Principal Component Analysis? | Principal component analysis involves extracting linear composites of observed variables.
Factor analysis is based on a formal model predicting observed variables from theoretical latent factors.
In p | What are the differences between Factor Analysis and Principal Component Analysis?
Principal component analysis involves extracting linear composites of observed variables.
Factor analysis is based on a formal model predicting observed variables from theoretical latent factors.
In psychology these two techniques are often applied in the construction of multi-scale tests
to determine which items load on which scales.
They typically yield similar substantive conclusions (for a discussion see Comrey (1988) Factor-Analytic Methods of Scale Development in Personality and Clinical Psychology).
This helps to explain why some statistics packages seem to bundle them together.
I have also seen situations where "principal component analysis" is incorrectly labelled "factor analysis".
In terms of a simple rule of thumb, I'd suggest that you:
Run factor analysis if you assume or wish to test a theoretical model of latent factors causing observed variables.
Run principal component analysis if you want to simply reduce your correlated observed variables to a smaller set of important independent composite variables. | What are the differences between Factor Analysis and Principal Component Analysis?
Principal component analysis involves extracting linear composites of observed variables.
Factor analysis is based on a formal model predicting observed variables from theoretical latent factors.
In p |
462 | What are the differences between Factor Analysis and Principal Component Analysis? | From my response here:
Is PCA followed by a rotation (such as varimax) still PCA?
Principal Component Analysis (PCA) and Common Factor Analysis (CFA) are distinct methods. Often, they produce similar results and PCA is used as the default extraction method in the SPSS Factor Analysis routines. This undoubtedly results in a lot of confusion about the distinction between the two.
The bottom line is that these are two different models, conceptually. In PCA, the components are actual orthogonal linear combinations that maximize the total variance. In FA, the factors are linear combinations that maximize the shared portion of the variance--underlying "latent constructs". That's why FA is often called "common factor analysis". FA uses a variety of optimization routines and the result, unlike PCA, depends on the optimization routine used and starting points for those routines. Simply there is not a single unique solution.
In R, the factanal() function provides CFA with a maximum likelihood extraction. So, you shouldn't expect it to reproduce an SPSS result which is based on a PCA extraction. It's simply not the same model or logic. I'm not sure if you would get the same result if you used SPSS's Maximum Likelihood extraction either as they may not use the same algorithm.
For better or for worse in R, you can, however, reproduce the mixed up "factor analysis" that SPSS provides as its default. Here's the process in R. With this code, I'm able to reproduce the SPSS Principal Component "Factor Analysis" result using this dataset. (With the exception of the sign, which is indeterminate). That result could also then be rotated using any of R's available rotation methods.
data(attitude)
# Compute eigenvalues and eigenvectors of the correlation matrix.
pfa.eigen <- eigen(cor(attitude))
# Print and note that eigenvalues are those produced by SPSS.
# Also note that SPSS will extract 2 components as eigenvalues > 1 = 2.
pfa.eigen$values
# Set a value for the number of factors (for clarity)
kFactors <- 2
# Extract and transform two components.
pfa.eigen$vectors[, seq_len(kFactors)] %*%
diag(sqrt(pfa.eigen$values[seq_len(kFactors)]), kFactors, kFactors) | What are the differences between Factor Analysis and Principal Component Analysis? | From my response here:
463 | What are the differences between Factor Analysis and Principal Component Analysis? | A basic, yet a kind of painstaking, explanation of PCA vs Factor analysis with the help of scatterplots, in logical steps. (I thank @amoeba who, in his comment to the question, has encouraged me to post an answer in place of making links to elsewhere. So here is a leisure, late response.)
PCA as variable summarization (feature extraction)
I hope you already have some understanding of PCA; here is a quick refresher.
Suppose we have correlating variables $V_1$ and $V_2$. We center them (subtract the mean) and do a scatterplot. Then we perform PCA on these centered data. PCA is a form of axes rotation which offers axes P1 and P2 instead of V1 and V2. The key property of PCA is that P1 - called the 1st principal component - gets oriented so that the variance of the data points along it is maximized. The new axes are new variables whose values are computable as long as we know the coefficients of rotation $a$ (PCA provides them) [Eq.1]:
$P1 = a1_1V_1 + a1_2V_2$
$P2 = a2_1V_1 + a2_2V_2$
Those coefficients are cosines of rotation (= direction cosines, principal directions) and comprise what are called eigenvectors, while eigenvalues of the covariance matrix are the principal component variances. In PCA, we typically discard weak last components: we thus summarize data by few first extracted components, with little information loss.
Covariances
V1 V2
V1 1.07652 .73915
V2 .73915 .95534
----PCA----
Eigenvalues %
P1 1.75756 86.500
P2 .27430 13.500
Eigenvectors
P1 P2
V1 .73543 -.67761
V2 .67761 .73543
With our plotted data, the P1 component values (scores) are P1 = .73543*V1 + .67761*V2, and we discard component P2. P1's variance is 1.75756, the 1st eigenvalue of the covariance matrix, so P1 explains 86.5% of the total variance, which equals (1.07652+.95534) = (1.75756+.27430).
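As a quick check, the figures in the summary above can be reproduced in R directly from the printed covariance matrix (the raw data behind the plots are not given here, so the matrix is entered by hand):
# Covariance matrix of V1 and V2 as printed above.
S <- matrix(c(1.07652, .73915,
              .73915, .95534), nrow = 2, byrow = TRUE,
            dimnames = list(c("V1", "V2"), c("V1", "V2")))
e <- eigen(S)
e$values                  # 1.75756 0.27430 -- the component variances
e$values / sum(e$values)  # 0.865 0.135     -- share of total variance
e$vectors                 # the eigenvectors (up to sign)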
PCA as variable prediction ("latent" feature)
So, we discarded P2 and expect that P1 alone can reasonably represent the data. That is equivalent to saying that $P1$ can reasonably well "reconstruct" or predict $V_1$ and $V_2$ [Eq.2]:
$V_1 = a1_{1}P1 + E_1$
$V_2 = a1_{2}P1 + E_2$
where coefficients $a$ are what we already know and $E$ are the errors (unpredictedness). This is actually a "regressional model" where observed variables are predicted (back) by the latent variable (if to allow calling a component a "latent" one) P1 extracted from those same variables. Look at the plot Fig.2, it is nothing else than Fig.1, only detailed:
The P1 axis is shown tiled with its values (P1 scores) in green (these values are the projections of the data points onto P1). Some arbitrary data points were labeled A, B,..., and their departures (errors) from P1 are shown as bold black connectors. For point A, details are shown: the coordinates of the P1 score (green A) on the V1 and V2 axes are the P1-reconstructed values of V1 and V2 according to Eq.2, $\hat{V_1} = a1_{1}P1$ and $\hat{V_2} = a1_{2}P1$. The reconstruction errors $E_1 = V_1-\hat{V_1}$ and $E_2 = V_2-\hat{V_2}$ are also displayed, in beige. The squared length of the "error" connector is the sum of the two squared errors, by the Pythagorean theorem.
Now, what is characteristic of PCA is that if we compute E1 and E2 for every point in the data and plot these coordinates - i.e. make the scatterplot of the errors alone, the cloud "error data" will coincide with the discarded component P2. And it does: the cloud is plotted on the same picture as the beige cloud - and you see it actually forms axis P2 (of Fig.1) as tiled with P2 component scores.
No wonder, you may say. It is so obvious: in PCA, the discarded junior component(s) are precisely what the prediction errors E decompose into, in the model which explains (restores) the original variables V by the latent feature(s) P1. The errors E together simply constitute the left-out component(s). Here is where factor analysis starts to differ from PCA.
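This property is easy to verify numerically. A small sketch in R, using simulated correlated data (any bivariate dataset would do; the correlation of .7 and the sample size are arbitrary choices):
set.seed(1)                                   # simulate two correlated variables
V1 <- rnorm(300); V2 <- .7 * V1 + rnorm(300, sd = sqrt(1 - .7^2))
V  <- scale(cbind(V1, V2), scale = FALSE)     # centered data matrix
e  <- eigen(cov(V))
P  <- V %*% e$vectors                         # component scores P1, P2
E  <- V - P[, 1] %*% t(e$vectors[, 1])        # errors after reconstructing V from P1 alone (Eq.2)
cor(E[, 1], P[, 2])                           # +1 or -1: the error cloud lies exactly on the P2 axis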
The idea of common FA (latent feature)
Formally, the model predicting manifest variables by the extracted latent feature(s) is the same in FA as in PCA; [Eq.3]:
$V_1 = a_{1}F + E_1$
$V_2 = a_{2}F + E_2$
where F is the latent common factor extracted from the data and replacing what was P1 in Eq.2. The difference in the model is that in FA, unlike PCA, error variables (E1 and E2) are required to be uncorrelated with each other.
Digression. Here I want suddenly to interrupt the story and make a notion on what are coefficients $a$. In PCA, we said, these were entries of eigenvectors found within PCA (via eigen- or singular-value-decomposition). While latent P1 had its native variance. If we choose to standardize P1 to unit variance we'll have to compensate by appropriately scaling up coefficients $a$, in order to support the equation. That scaled up $a$s are called loadings; they are of interest numerically because they are the covariances (or correlations) between the latent and the observable variables and therefore can help interpret the latent feature. In both models - Eq.2 and Eq.3 - you are free to decide, without harming the equation, which way the terms are scaled. If F (or P1) is considered unit scaled, $a$ is loading; while if F (P1) has to have its native scale (variance), then $a$ should be de-scaled accordingly - in PCA that will equal eigenvector entries, but in FA they will be different and usually not called "eigenvectors". In most texts on factor analysis, F are assumed unit variance so $a$ are loadings. In PCA literature, P1 is typically discussed having its real variance and so $a$ are eigenvectors.
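A small numeric illustration of this digression, using the attitude data from the earlier answer in this thread as a convenient stand-in: for standardized variables, each variable's loading on the first component equals its correlation with the first component's scores.
Z <- scale(attitude)                              # standardized variables
e <- eigen(cor(attitude))
pc1      <- Z %*% e$vectors[, 1]                  # P1 scores with their "native" variance
loading1 <- e$vectors[, 1] * sqrt(e$values[1])    # eigenvector entries scaled up to loadings
cbind(loading = loading1, r.with.P1 = cor(Z, pc1))  # the two columns coincide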
[A clarification: In Eq. 2-3 the "error" term $E$ does not mean random gaussian noise; rather, $E$ is simply unpredicted remains, which may include randomness but are not confined to it.]
OK, back to the thread. E1 and E2 are uncorrelated in factor analysis; thus, they should form a cloud of errors either round or elliptic but not diagonally oriented. While in PCA their cloud formed straight line coinciding with diagonally going P2. Both ideas are demonstrated on the pic:
Note that in FA the errors form a round (not diagonally elongated) cloud. The factor (latent) in FA is oriented somewhat differently, i.e. it is not exactly the first principal component, which is the "latent" of PCA. In the picture, the factor line is drawn as a slightly conical wedge - it will become clear why in the end.
What is the meaning of this difference between PCA and FA? The variables are correlated, which is seen in the diagonally elliptical shape of the data cloud. P1 skimmed off the maximal variance, so the ellipse is co-directed with P1. Consequently P1 explained the correlation by itself; but it did not explain the existing amount of correlation adequately; it aimed to explain the variation in the data points, not the correlatedness. Actually, it over-accounted for the correlation, which resulted in the appearance of the diagonal, correlated cloud of errors that compensates for the over-account. P1 alone cannot explain the strength of the correlation/covariation comprehensively. Factor F can do it alone; and the condition under which it becomes able to do so is exactly that the errors can be forced to be uncorrelated. Since the error cloud is round, no correlatedness - positive or negative - has remained after the factor was extracted; hence it is the factor which skimmed it all off.
As a dimensionality reduction, PCA explains variance but explains correlations imprecisely. FA explains correlations but cannot account (by the common factors) for as much data variation as PCA can. The factor(s) in FA account for the portion of variability which is the net correlational portion, called communality; therefore factors can be interpreted as real yet unobservable forces/features/traits which hide "in" or "behind" the input variables and bring them to correlate - because they explain the correlation well mathematically. Principal components (the few first ones) explain it mathematically not as well, and so can be called a "latent trait" (or such) only at a stretch and tentatively.
Multiplication of the loadings is what explains (restores) the correlation, or correlatedness in the form of covariance - if the analysis was based on the covariance matrix (as in our example) rather than the correlation matrix. The factor analysis that I did with the data yielded a_1=.87352, a_2=.84528, so the product a_1*a_2 = .73837 is almost equal to the covariance .73915. On the other hand, the PCA loadings were a1_1=.97497, a1_2=.89832, so a1_1*a1_2 = .87584 overestimates .73915 considerably.
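In R this check is a one-liner (the numbers are simply the loadings and the covariance quoted above):
c(FA = .87352 * .84528, PCA = .97497 * .89832, covariance = .73915)
# FA restores the covariance almost exactly; PCA overshoots it.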
Having explained the main theoretical distinction between PCA and FA, let's get back to our data to exemplify the idea.
FA: approximate solution (factor scores)
Below is the scatterplot showing the results of the analysis that we'll provisionally call "sub-optimal factor analysis", Fig.3.
A technical detail (you may skip): PAF method used for factor extraction.
Factor scores computed by Regression method.
Variance of the factor scores on the plot was scaled to the true
factor variance (sum of squared loadings).
See the departures from Fig.2 of PCA. The beige cloud of the errors isn't round, it is diagonally elliptical - yet it is evidently much fatter than the thin diagonal line that occurred in PCA. Note also that the error connectors (shown for some points) are not parallel anymore (in PCA, they were by definition parallel to P2). Moreover, if you look, for example, at points "F" and "E" which lie mirror-symmetrically about the factor's F axis, you'll find, unexpectedly, that their corresponding factor scores are quite different values. In other words, factor scores are not just linearly transformed principal component scores: factor F is found in its own way, different from the P1 way. And their axes do not fully coincide if shown together on the same plot, Fig.4:
Apart from being oriented a bit differently, F (as tiled with scores) is shorter, i.e. it accounts for a smaller variance than P1 does. As noted earlier, the factor accounts only for the variability which is responsible for the correlatedness of V1 and V2, i.e. the portion of the total variance that is sufficient to bring the variables from a primeval covariance of 0 to the factual covariance of .73915.
FA: optimal solution (true factor)
An optimal factor solution is one where the errors form a round or non-diagonal elliptic cloud: E1 and E2 are fully uncorrelated. Factor analysis actually returns such an optimal solution. I did not show it on a simple scatterplot like the ones above. Why not? - for it would have been the most interesting thing, after all.
The reason is that it would be impossible to show on a scatterplot adequately enough, even adopting a 3D plot. It is quite an interesting point theoretically. In order to make E1 and E2 completely uncorrelated it appears that all these three variables, F, E1, E2 have to lie not in the space (plane) defined by V1, V2; and the three must be uncorrelated with each other. I believe it is possible to draw such a scatterplot in 5D (and maybe with some gimmick - in 4D), but we live in 3D world, alas. Factor F must be uncorrelated to both E1 and E2 (while they two are uncorrelated too) because F is supposed to be the only (clean) and complete source of correlatedness in the observed data. Factor analysis splits total variance of the p input variables into two uncorrelated (nonoverlapping) parts: communality part (m-dimensional, where m common factors rule) and uniqueness part (p-dimensional, where errors are, also called unique factors, mutually uncorrelated).
So pardon for not showing the true factor of our data on a scatterplot here. It could be visualized quite adequately via vectors in "subject space" as done here without showing data points.
Above, in the section "The idea of common FA (latent feature)" I displayed the factor (axis F) as a wedge in order to warn that the true factor axis does not lie on the plane V1 V2. That means that - in contrast to principal component P1 - factor F as an axis is not a rotation of axis V1 or V2 in their space, and F as a variable is not a linear combination of the variables V1 and V2. Therefore F is modeled (extracted from variables V1, V2) as if it were an outer, independent variable, not a derivation of them. Equations like Eq.1, from which PCA begins, are inapplicable to compute the true (optimal) factor in factor analysis, whereas the formally isomorphic equations Eq.2 and Eq.3 are valid for both analyses. That is, in PCA variables generate components and components back-predict variables; in FA factor(s) generate/predict variables, and not back - the common factor model conceptually assumes so, even though technically the factors are extracted from the observed variables.
Not only is the true factor not a function of the manifest variables, the true factor's values are not even uniquely defined. In other words, they are simply unknown. That is all due to the fact that we're in the excessive 5D analytic space and not in our home 2D space of the data. Only good approximations (a number of methods exist) to the true factor values, called factor scores, are there for us. Factor scores do lie in the plane V1 V2, as principal component scores do; they are computed as linear functions of V1, V2, too, and it was they that I plotted in the section "FA: approximate solution (factor scores)". Principal component scores are true component values; factor scores are only a reasonable approximation to the indeterminate true factor values.
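One way to see this indeterminacy in practice - a sketch using the attitude data as a stand-in: two legitimate scoring methods, fitted to the same data, return identical loadings (the true-factor part of the solution) but noticeably different "factor values".
f.reg  <- factanal(attitude, factors = 2, scores = "regression")
f.bart <- factanal(attitude, factors = 2, scores = "Bartlett")
all.equal(unclass(f.reg$loadings), unclass(f.bart$loadings))  # TRUE: the loadings agree
head(round(cbind(regression = f.reg$scores[, 1],
                 Bartlett   = f.bart$scores[, 1]), 3))        # the scores do not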
FA: roundup of the procedure
To gather up in one small clot what the two previous sections said, and to add the final strokes: FA can (if you do it right, and see also the data assumptions) find the true factor solution (by "true" I mean here optimal for the data sample). However, various methods of extraction exist (they differ in some secondary constraints they impose). The true factor solution is defined by the loadings $a$ only. Thus, the loadings are those of the optimal, true factors. Factor scores - if you need them - are computable from those loadings in various ways and return approximations to the factor values.
Thus, the "factor solution" displayed by me in the section "FA: approximate solution (factor scores)" was actually based on optimal loadings, i.e. on true factors. But the scores were, by necessity, not optimal. The scores are computed as a linear function of the observed variables, as component scores are, so the two could be compared on a scatterplot, and I did so in a didactic pursuit, to show a gradual passage from the PCA idea towards the FA idea.
One must be wary when plotting on the same biplot factor loadings with factor scores in the "space of factors", be conscious that loadings pertain to true factors while scores pertain to surrogate factors (see my comments to this answer in this thread).
Rotation of factors (loadings) helps interpret the latent features. Rotation of loadings can also be done in PCA if you use PCA as if it were factor analysis (that is, see PCA as variable prediction). PCA tends to converge in its results with FA as the number of variables grows (see the extremely rich thread on practical and conceptual similarities and differences between the two methods). See my list of differences between PCA and FA at the end of this answer. Step-by-step computations of PCA vs FA on the iris dataset are found here. There is a considerable number of good links to other participants' answers on the topic outside this thread; I'm sorry I only used a few of them in the current answer.
There was later a nice question, asking:
If EFA factor and PCA component are computed in the same way, then
how come people say "each variable is a linear composite of factors"
in EFA but "each component is a linear composite of variables" in PCA?
My comment, which is based directly on the current answer, would be: "Both in FA model and PCA model a variable is a linear composite of factors (latents). However, only in PCA the backward direction is also true. In FA, the backward is true only for factor scores but not for the true factors".
464 | What are the differences between Factor Analysis and Principal Component Analysis? | The top answer in this thread suggests that PCA is more of a dimensionality reduction technique, whereas FA is more of a latent variable technique. This is sensu stricto correct. But many answers here and many treatments elsewhere present PCA and FA as two completely different methods, with dissimilar if not opposite goals, methods and outcomes. I disagree; I believe that when PCA is taken to be a latent variable technique, it is quite close to FA, and they should better be seen as very similar methods.
I provided my own account of the similarities and differences between PCA and FA in the following thread: Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis? There I argue that for simple mathematical reasons the outcome of PCA and FA can be expected to be quite similar, given only that the number of variables is not very small (perhaps over a dozen). See my [long!] answer in the linked thread for mathematical details and Monte Carlo simulations. For a much more concise version of my argument see here: Under which conditions do PCA and FA yield similar results?
Here I would like to show it on an example. I will analyze the wine dataset from the UCI Machine Learning Repository. It is a fairly well-known dataset with $n=178$ wines from three different grapes described by $p=13$ variables. Here is how the correlation matrix looks:
I ran both PCA and FA analysis and show 2D projections of the data as biplots for both of them on the figure below (PCA on the left, FA on the right). Horizontal and vertical axes show 1st and 2nd component/factor scores. Each of the $n=178$ dots corresponds to one wine, and dots are colored according to the group (see legend):
The loadings of the 1st and 2nd component/factor onto each of the $p=13$ original variables are shown as black lines. They are equal to the correlations between each of the original variables and the two components/factors. Of course correlations cannot exceed $1$, so all loading lines are contained inside the "correlation circle" showing the maximal possible correlation. All loadings and the circle are arbitrarily scaled by a factor of $3$, otherwise they would be too small to be seen (so the radius of the circle is $3$ and not $1$).
Note that there is hardly any difference between PCA and FA! There are small deviations here and there, but the general picture is almost identical, and all the loadings are very similar and point in the same directions. This is exactly what was expected from the theory and is no surprise; still, it is instructive to observe.
PS. For a much prettier PCA biplot of the same dataset, see this answer by @vqv.
PPS. Whereas PCA calculations are standard, FA calculations might require a comment. Factor loadings were computed by an "iterated principal factors" algorithm until convergence (9 iterations), with communalities initialized with partial correlations. Once the loadings converged, the scores were calculated using Bartlett's method. This yields standardized scores; I scaled them up by the respective factor variances (given by loadings lengths).
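For readers who want to see roughly what such an "iterated principal factors" extraction does, here is a bare-bones sketch in R. It is my own simplified version, not the exact code behind the figures above: for instance, it initializes the communalities with squared multiple correlations, runs a fixed number of iterations instead of testing convergence, and is demonstrated on the attitude data only because the wine data are not bundled with R.
paf <- function(R, k, n.iter = 50) {
  h2 <- 1 - 1 / diag(solve(R))            # starting communalities (squared multiple correlations)
  for (i in 1:n.iter) {
    Rh <- R; diag(Rh) <- h2               # "reduced" correlation matrix
    e  <- eigen(Rh)
    L  <- e$vectors[, 1:k, drop = FALSE] %*% diag(sqrt(pmax(e$values[1:k], 0)), k)
    h2 <- rowSums(L^2)                    # update communalities and iterate
  }
  L
}
round(paf(cor(attitude), k = 2), 3)       # iterated principal-factor loadings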
465 | What are the differences between Factor Analysis and Principal Component Analysis? | There are numerous suggested definitions on the web. Here is one from an on-line glossary on statistical learning:
Principal Component Analysis
Constructing new features which are
the principal components of a data
set. The principal components are
random variables of maximal variance
constructed from linear combinations
of the input features. Equivalently,
they are the projections onto the
principal component axes, which are
lines that minimize the average
squared distance to each point in the
data set. To ensure uniqueness, all of
the principal component axes must be
orthogonal. PCA is a
maximum-likelihood technique for
linear regression in the presence of
Gaussian noise on both inputs and
outputs. In some cases, PCA
corresponds to a Fourier transform,
such as the DCT used in JPEG image
compression. See "Eigenfaces for
recognition" (Turk&Pentland, J
Cognitive Neuroscience 3(1), 1991),
Bishop, "Probabilistic Principal
Component Analysis", and "Automatic
choice of dimensionality for PCA".choice of dimensionality for PCA".
Factor analysis
A generalization of PCA which is based
explicitly on maximum-likelihood. Like
PCA, each data point is assumed to
arise from sampling a point in a
subspace and then perturbing it with
full-dimensional Gaussian noise. The
difference is that factor analysis
allows the noise to have an arbitrary
diagonal covariance matrix, while PCA
assumes the noise is spherical. In
addition to estimating the subspace,
factor analysis estimates the noise
covariance matrix. See "The EM
Algorithm for Mixtures of Factor
Analyzers".choice of dimensionality
for PCA". | What are the differences between Factor Analysis and Principal Component Analysis? | There are numerous suggested definitions on the web. Here is one from a on-line glossary on statistical learning:
Principal Component Analysis
Constructing new features which are
the principal comp | What are the differences between Factor Analysis and Principal Component Analysis?
There are numerous suggested definitions on the web. Here is one from a on-line glossary on statistical learning:
Principal Component Analysis
Constructing new features which are
the principal components of a data
set. The principal components are
random variables of maximal variance
constructed from linear combinations
of the input features. Equivalently,
they are the projections onto the
principal component axes, which are
lines that minimize the average
squared distance to each point in the
data set. To ensure uniqueness, all of
the principal component axes must be
orthogonal. PCA is a
maximum-likelihood technique for
linear regression in the presence of
Gaussian noise on both inputs and
outputs. In some cases, PCA
corresponds to a Fourier transform,
such as the DCT used in JPEG image
compression. See "Eigenfaces for
recognition" (Turk&Pentland, J
Cognitive Neuroscience 3(1), 1991),
Bishop, "Probabilistic Principal
Component Analysis", and "Automatic
choice of dimensionality for PCA".choice of dimensionality for PCA".
Factor analysis
A generalization of PCA which is based
explicitly on maximum-likelihood. Like
PCA, each data point is assumed to
arise from sampling a point in a
subspace and then perturbing it with
full-dimensional Gaussian noise. The
difference is that factor analysis
allows the noise to have an arbitrary
diagonal covariance matrix, while PCA
assumes the noise is spherical. In
addition to estimating the subspace,
factor analysis estimates the noise
covariance matrix. See "The EM
Algorithm for Mixtures of Factor
Analyzers".choice of dimensionality
for PCA". | What are the differences between Factor Analysis and Principal Component Analysis?
There are numerous suggested definitions on the web. Here is one from a on-line glossary on statistical learning:
Principal Component Analysis
Constructing new features which are
the principal comp |
466 | What are the differences between Factor Analysis and Principal Component Analysis? | You are right about your first point, although in FA you generally work with both (uniqueness and communality).
The choice between PCA and FA is a long-standing debate among psychometricians. I don't quite follow your points, though. Rotation of principal axes can be applied whatever method is used to construct latent factors. In fact, most of the time it is the VARIMAX rotation (an orthogonal rotation, assuming uncorrelated factors) that is used, for practical reasons (easiest interpretation, easiest scoring rules or interpretation of factor scores, etc.), although an oblique rotation (e.g. PROMAX) would probably better reflect reality (latent constructs are often correlated with each other), at least in the tradition of FA where you assume that a latent construct is really at the heart of the observed inter-correlations between your variables. The point is that PCA followed by VARIMAX rotation somewhat distorts the interpretation of the linear combinations of the original variables in the "data analysis" tradition (see the work of Michel Tenenhaus). From a psychometric perspective, FA models are to be preferred since they explicitly account for measurement errors, while PCA doesn't care about that. Briefly stated, using PCA you are expressing each component (factor) as a linear combination of the variables, whereas in FA it is the variables that are expressed as linear combinations of the factors (including communalities and uniqueness components, as you said).
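To make the orthogonal-versus-oblique contrast concrete, here is a small sketch with base R's varimax() and promax(), applied to the same unrotated maximum-likelihood loadings; the attitude data are only a convenient stand-in, and the last line recovers the factor correlations implied by the promax rotation matrix under the convention used by stats::promax.
L <- factanal(attitude, factors = 2, rotation = "none")$loadings
varimax(L)$loadings                  # orthogonal rotation: factors kept uncorrelated
obl <- promax(L)                     # oblique rotation: factors allowed to correlate
obl$loadings
solve(t(obl$rotmat) %*% obl$rotmat)  # implied factor correlation matrix (off-diagonal no longer 0)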
I recommend you to read first the following discussions about this topic:
What are the differences between Factor Analysis and Principal Component Analysis
On the use of oblique rotation after PCA -- see reference therein
467 | What are the differences between Factor Analysis and Principal Component Analysis? | Differences between factor analysis and principal component analysis are:
• In factor analysis there is a structured model and some assumptions. In this respect it is a statistical technique; this does not apply to principal component analysis, which is a purely mathematical transformation.
• The aim of principal component analysis is to explain the variance while factor analysis explains the covariance between the variables.
One of the biggest reasons for the confusion between the two has to do with the fact that one of the factor extraction methods in Factor Analysis is called "method of principal components". However, it's one thing to use PCA and another thing to use the method of principal components in FA. The names may be similar, but there are significant differences. The former is an independent analytical method while the latter is merely a tool for factor extraction.
468 | What are the differences between Factor Analysis and Principal Component Analysis? | For me (and I hope this is useful) factor analysis is much more useful than PCA.
Recently, I had the pleasure of analysing a scale through factor analysis. This scale (although it's widely used in industry) was developed by using PCA, and to my knowledge had never been factor analysed.
When I performed the factor analysis (principal axis) I discovered that the communalities for three of the items were less than 30%, which means that over 70% of the items' variance was not being analysed. PCA just transforms the data into a new combination and doesn't care about communalities. My conclusion was that the scale was not a very good one from a psychometric point of view, and I've confirmed this with a different sample.
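For reference, one quick way to inspect communalities after a maximum-likelihood factor analysis in R (a generic sketch on a built-in dataset, not the principal-axis analysis or the scale data described above):
fit <- factanal(attitude, factors = 2)
communality <- 1 - fit$uniquenesses   # share of each item's variance captured by the factors
round(sort(communality), 2)           # items at the low end are poorly represented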
Essentially, if you want to predict using the factors, use PCA, while if you want to understand the latent factors, use Factor Analysis.
469 | What are the differences between Factor Analysis and Principal Component Analysis? | A quote from a really nice textbook (Brown, 2006, pp. 22, emphasis added).
PCA = principal components analysis
EFA = exploratory factor analysis
CFA = confirmatory factor analysis
Although related to EFA, principal components analysis (PCA) is frequently miscategorized as an estimation method of common factor analysis. Unlike the estimators discussed in the preceding paragraph (ML,
PF), PCA relies on a different set of quantitative methods that are
not based on the common factor model. PCA does not differentiate
common and unique variance. Rather, PCA aims to account for the
variance in the observed measures rather than explain the correlations
among them. Thus, PCA is more appropriately used as a data reduction
technique to reduce a larger set of measures to a smaller, more
manageable number of composite variables to use in subsequent
analyses. However, some methodologists have argued that PCA is a
reasonable or perhaps superior alternative to EFA, in view of the fact
that PCA possesses several desirable statistical properties (e.g.,
computationally simpler, not susceptible to improper solutions, often
produces results similar to those of EFA, ability of PCA to calculate
a participant’s score on a principal component whereas the
indeterminate nature of EFA complicates such computations). Although
debate on this issue continues, Fabrigar et al. (1999) provide several
reasons in opposition to the argument for the place of PCA in factor
analysis. These authors underscore the situations where EFA and PCA
produce dissimilar results; for instance, when communalities are low
or when there are only a few indicators of a given factor (cf.
Widaman, 1993). Regardless, if the overriding rationale and
empirical objectives of an analysis are in accord with the common
factor model, then it is conceptually and mathematically inconsistent to conduct PCA; that is, EFA is more appropriate if the stated
objective is to reproduce the intercorrelations of a set of
indicators with a smaller number of latent dimensions, recognizing the
existence of measurement error in the observed measures. Floyd and
Widaman (1995) make the related point that estimates based on EFA are
more likely to generalize to CFA than are those obtained from PCA in
that, unlike PCA, EFA and CFA are based on the common factor model.
This is a noteworthy consideration in light of the fact that EFA is
often used as a precursor to CFA in scale development and construct
validation. A detailed demonstration of the computational differences between PCA and EFA can be found in multivariate and factor analytic textbooks (e.g., Tabachnick & Fidell,
2001).
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: Guilford Press.
PCA = principal components analysis
EFA = exploratory factor analysis
CFA = confirmatory factor analysis
Although related to | What are the differences between Factor Analysis and Principal Component Analysis?
A quote from a really nice textbook (Brown, 2006, pp. 22, emphasis added).
PCA = principal components analysis
EFA = exploratory factor analysis
CFA = confirmatory factor analysis
Although related to EFA, principal components analysis (PCA) is frequently miscategorized as an estimation method of common factor analysis. Unlike the estimators discussed in the preceding paragraph (ML,
PF), PCA relies on a different set of quantitative methods that are
not based on the common factor model. PCA does not differentiate
common and unique variance. Rather, PCA aims to account for the
variance in the observed measures rather than explain the correlations
among them. Thus, PCA is more appropriately used as a data reduction
technique to reduce a larger set of measures to a smaller, more
manageable number of composite variables to use in subsequent
analyses. However, some methodologists have argued that PCA is a
reasonable or perhaps superior alternative to EFA, in view of the fact
that PCA possesses several desirable statistical properties (e.g.,
computationally simpler, not susceptible to improper solutions, often
produces results similar to those of EFA, ability of PCA to calculate
a participant’s score on a principal component whereas the
indeterminate nature of EFA complicates such computations). Although
debate on this issue continues, Fabrigar et al. (1999) provide several
reasons in opposition to the argument for the place of PCA in factor
analysis. These authors underscore the situations where EFA and PCA
produce dissimilar results; for instance, when communalities are low
or when there are only a few indicators of a given factor (cf.
Widaman, 1993). Regardless, if the overriding rationale and
empirical objectives of an analysis are in accord with the common
factor model, then it is conceptually and mathematically inconsistent to conduct PCA; that is, EFA is more appropriate if the stated
objective is to reproduce the intercorrelations of a set of
indicators with a smaller number of latent dimensions, recognizing the
existence of measurement error in the observed measures. Floyd and
Widaman (1995) make the related point that estimates based on EFA are
more likely to generalize to CFA than are those obtained from PCA in
that, unlike PCA, EFA and CFA are based on the common factor model.
This is a noteworthy consideration in light of the fact that EFA is
often used as a precursor to CFA in scale development and construct
validation. A detailed demonstration of the computational differences between PCA and EFA can be found in multivariate and factor analytic textbooks (e.g., Tabachnick & Fidell,
2001).
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: Guilford Press. | What are the differences between Factor Analysis and Principal Component Analysis?
A quote from a really nice textbook (Brown, 2006, pp. 22, emphasis added).
PCA = principal components analysis
EFA = exploratory factor analysis
CFA = confirmatory factor analysis
Although related to |
470 | What are the differences between Factor Analysis and Principal Component Analysis? | Expanding on @StatisticsDocConsulting's answer: the difference in loadings between EFA and PCA is non-trivial with a small number of variables. Here's a simulation function to demonstrate this in R:
simtestit = function(Sample.Size = 1000, n.Variables = 3, n.Factors = 1, Iterations = 100) {
  require(psych)                                   # provides principal()
  X <- list(); x <- matrix(NA, nrow = Sample.Size, ncol = n.Variables)
  for (i in 1:Iterations) {                        # one simulated dataset per iteration
    for (j in 1:n.Variables) x[, j] <- rnorm(Sample.Size)  # independent, uncorrelated variables
    X$PCA <- append(X$PCA, mean(abs(principal(x, n.Factors)$loadings[, 1])))  # mean |loading| on the 1st component
    X$EFA <- append(X$EFA, mean(abs(factanal(x, n.Factors)$loadings[, 1])))   # mean |loading| on the 1st ML factor
  }
  X
}
By default, this function performs 100 Iterations, in each of which it produces random, normally distributed samples (Sample.Size$=1000$) of three variables, and extracts one factor using PCA and ML-EFA. It outputs a list of two Iterations-long vectors composed of the mean magnitudes of the simulated variables' loadings on the unrotated first component from PCA and general factor from EFA, respectively. It allows you to play around with sample size and number of variables and factors to suit your situation, within the limits of the principal() and factanal() functions and your computer.
Using this code, I've simulated samples of 3–100 variables with 500 iterations each to produce data:
Y=data.frame(n.Variables=3:100,Mean.PCA.Loading=rep(NA,98),Mean.EFA.Loading=rep(NA,98))
for(i in 3:100)
{X=simtestit(n.Variables=i,Iterations=500);Y[i-2,2]=mean(X$PCA);Y[i-2,3]=mean(X$EFA)}
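A minimal plotting sketch for the figure described next (assuming the data frame Y built above; the original answer displayed this as an image):
matplot(Y$n.Variables, Y[,2:3], type="l", lty=1, col=c("red","blue"),
        xlab="Number of variables", ylab="Mean |loading|")   # PCA vs. EFA curves
legend("topright", legend=c("PCA","EFA"), lty=1, col=c("red","blue"))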
...for a plot of the sensitivity of mean loadings (across variables and iterations) to number of variables:
This demonstrates how differently one has to interpret the strength of loadings in PCA vs. EFA. Both depend somewhat on the number of variables, but loadings are biased upward much more strongly in PCA. The difference between the mean loadings of these two methods decreases as the number of variables increases, but even with 100 variables, PCA loadings average $.067$ higher than EFA loadings in random normal data. However, note that mean loadings will usually be higher in real applications, because one generally uses these methods on more correlated variables. I'm not sure how this might affect the difference of mean loadings. | What are the differences between Factor Analysis and Principal Component Analysis? | Expanding on @StatisticsDocConsulting's answer: the difference in loadings between EFA and PCA is non-trivial with a small number of variables. Here's a simulation function to demonstrate this in R:
s | What are the differences between Factor Analysis and Principal Component Analysis?
Expanding on @StatisticsDocConsulting's answer: the difference in loadings between EFA and PCA is non-trivial with a small number of variables. Here's a simulation function to demonstrate this in R:
simtestit = function(Sample.Size = 1000, n.Variables = 3, n.Factors = 1, Iterations = 100) {
  require(psych)                                   # provides principal()
  X <- list(); x <- matrix(NA, nrow = Sample.Size, ncol = n.Variables)
  for (i in 1:Iterations) {                        # one simulated dataset per iteration
    for (j in 1:n.Variables) x[, j] <- rnorm(Sample.Size)  # independent, uncorrelated variables
    X$PCA <- append(X$PCA, mean(abs(principal(x, n.Factors)$loadings[, 1])))  # mean |loading| on the 1st component
    X$EFA <- append(X$EFA, mean(abs(factanal(x, n.Factors)$loadings[, 1])))   # mean |loading| on the 1st ML factor
  }
  X
}
By default, this function performs 100 Iterations, in each of which it produces random, normally distributed samples (Sample.Size$=1000$) of three variables, and extracts one factor using PCA and ML-EFA. It outputs a list of two Iterations-long vectors composed of the mean magnitudes of the simulated variables' loadings on the unrotated first component from PCA and general factor from EFA, respectively. It allows you to play around with sample size and number of variables and factors to suit your situation, within the limits of the principal() and factanal() functions and your computer.
Using this code, I've simulated samples of 3–100 variables with 500 iterations each to produce data:
Y=data.frame(n.Variables=3:100,Mean.PCA.Loading=rep(NA,98),Mean.EFA.Loading=rep(NA,98))
for(i in 3:100)
{X=simtestit(n.Variables=i,Iterations=500);Y[i-2,2]=mean(X$PCA);Y[i-2,3]=mean(X$EFA)}
...for a plot of the sensitivity of mean loadings (across variables and iterations) to number of variables:
This demonstrates how differently one has to interpret the strength of loadings in PCA vs. EFA. Both depend somewhat on the number of variables, but loadings are biased upward much more strongly in PCA. The difference between the mean loadings of these two methods decreases as the number of variables increases, but even with 100 variables, PCA loadings average $.067$ higher than EFA loadings in random normal data. However, note that mean loadings will usually be higher in real applications, because one generally uses these methods on more correlated variables. I'm not sure how this might affect the difference of mean loadings. | What are the differences between Factor Analysis and Principal Component Analysis?
Expanding on @StatisticsDocConsulting's answer: the difference in loadings between EFA and PCA is non-trivial with a small number of variables. Here's a simulation function to demonstrate this in R:
s |
471 | What are the differences between Factor Analysis and Principal Component Analysis? | One can think of a PCA as being like a FA in which the communalities are assumed to equal 1 for all variables. In practice, this means that items that would have relatively low factor loadings in FA due to low communality will have higher loadings in PCA. This is not a desirable feature if the primary purpose of the analysis is to cut item length and clean a battery of items of those with low or equivocal loadings, or to identify concepts that are not well represented in the item pool. | What are the differences between Factor Analysis and Principal Component Analysis? | One can think of a PCA as being like a FA in which the communalities are assumed to equal 1 for all variables. In practice, this means that items that would have relatively low factor loadings in FA | What are the differences between Factor Analysis and Principal Component Analysis?
One can think of a PCA as being like a FA in which the communalities are assumed to equal 1 for all variables. In practice, this means that items that would have relatively low factor loadings in FA due to low communality will have higher loadings in PCA. This is not a desirable feature if the primary purpose of the analysis is to cut item length and clean a battery of items of those with low or equivocal loadings, or to identify concepts that are not well represented in the item pool. | What are the differences between Factor Analysis and Principal Component Analysis?
One can think of a PCA as being like a FA in which the communalities are assumed to equal 1 for all variables. In practice, this means that items that would have relatively low factor loadings in FA |
472 | What are the differences between Factor Analysis and Principal Component Analysis? | In a paper by Tipping and Bishop, the tight relationship between Probabilistic PCA (PPCA) and factor analysis is discussed. PPCA is closer to FA than the classic PCA is. The common model is
$$\mathbf{y} = \mu + \mathbf{Wx} + \epsilon$$
where $\mathbf{W} \in \mathbb{R}^{p,d}$, $\mathbf{x} \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ and $\epsilon \sim \mathcal{N}(\mathbf{0},\mathbf{\Psi})$.
Factor analysis assumes $\mathbf{\Psi}$ is diagonal.
PPCA assumes $\mathbf{\Psi} = \sigma^2\mathbf{I}$
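A small R sketch of the contrast (an illustration under the model above, not the authors' code; the closed-form PPCA solution uses the mean of the discarded eigenvalues as $\sigma^2$):
set.seed(1)
Y <- matrix(rnorm(200*5), 200, 5) %*% matrix(runif(25), 5, 5)   # toy correlated data, p = 5
d <- 2                                                          # number of factors / components
eig <- eigen(cov(Y))
psi_fa <- factanal(Y, factors = d)$uniquenesses                 # FA: diagonal Psi, one uniqueness per variable
sigma2 <- mean(eig$values[(d+1):5])                             # PPCA: Psi = sigma^2 * I
W_ppca <- eig$vectors[, 1:d] %*% diag(sqrt(eig$values[1:d] - sigma2))   # ML weight matrix (up to rotation)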
Michael E. Tipping, Christopher M. Bishop (1999). Probabilistic Principal Component Analysis, Journal of the Royal Statistical Society, Volume 61, Issue 3, Pages 611–622 | What are the differences between Factor Analysis and Principal Component Analysis? | In a paper by Tipping and Bishop, the tight relationship between Probabilistic PCA (PPCA) and factor analysis is discussed. PPCA is closer to FA than the classic PCA is. The common model is
$$\mathbf{ | What are the differences between Factor Analysis and Principal Component Analysis?
In a paper by Tipping and Bishop, the tight relationship between Probabilistic PCA (PPCA) and factor analysis is discussed. PPCA is closer to FA than the classic PCA is. The common model is
$$\mathbf{y} = \mu + \mathbf{Wx} + \epsilon$$
where $\mathbf{W} \in \mathbb{R}^{p,d}$, $\mathbf{x} \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ and $\epsilon \sim \mathcal{N}(\mathbf{0},\mathbf{\Psi})$.
Factor analysis assumes $\mathbf{\Psi}$ is diagonal.
PPCA assumes $\mathbf{\Psi} = \sigma^2\mathbf{I}$
Michael E. Tipping, Christopher M. Bishop (1999). Probabilistic Principal Component Analysis, Journal of the Royal Statistical Society, Volume 61, Issue 3, Pages 611–622 | What are the differences between Factor Analysis and Principal Component Analysis?
In a paper by Tipping and Bishop, the tight relationship between Probabilistic PCA (PPCA) and factor analysis is discussed. PPCA is closer to FA than the classic PCA is. The common model is
$$\mathbf{ |
473 | What are the differences between Factor Analysis and Principal Component Analysis? | None of these responses is perfect. Both FA and PCA come in several variants, so we must state clearly which variants are being compared.
I would compare maximum likelihood factor analysis with Hotelling's PCA.
The former assumes the latent variables follow a normal distribution, whereas PCA makes no such assumption. This leads to differences in, for example, the solution, the nesting of the components, the uniqueness of the solution, and the optimization algorithms. | What are the differences between Factor Analysis and Principal Component Analysis? | None of these responses is perfect. Both FA and PCA come in several variants, so we must state clearly which variants are being compared.
I would compare the maximum likelihood factor analysis and the Hotelling' | What are the differences between Factor Analysis and Principal Component Analysis?
None of these responses is perfect. Both FA and PCA come in several variants, so we must state clearly which variants are being compared.
I would compare maximum likelihood factor analysis with Hotelling's PCA.
The former assumes the latent variables follow a normal distribution, whereas PCA makes no such assumption. This leads to differences in, for example, the solution, the nesting of the components, the uniqueness of the solution, and the optimization algorithms. | What are the differences between Factor Analysis and Principal Component Analysis?
None of these response is perfect. Either FA or PCA has some variants. We must clearly point out which variants are compared.
I would compare the maximum likelihood factor analysis and the Hotelling' |
474 | What are the differences between Factor Analysis and Principal Component Analysis? | There are many great answers in this post, but recently I came across another difference.
Clustering is one application where PCA and FA yield different results. When there are many features in the data, one may be tempted to find the top PC directions, project the data onto these PCs, and then proceed with clustering. Often this disturbs the inherent clusters in the data - this is a well-proven result. Researchers therefore suggest proceeding with sub-space clustering methods, which look for low-dimensional latent factors in the model.
To illustrate this difference, consider the crabs dataset in R. It has 200 rows and 8 columns, describing 5 morphological measurements on 50 crabs of each of two colour forms and both sexes of the species; essentially there are 4 (2x2) different classes of crabs.
library(MASS)
data(crabs)
lbl <- rep(1:4,each=50)
pc <- princomp(crabs[,4:8])
plot(pc) # produce the scree plot
X <- as.matrix(crabs[,4:8]) %*% pc$loadings
library(mclust)
res_12 <- Mclust(X[,1:2],G=4)
plot(res_12)
res_23 <- Mclust(X[,2:3],G=4)
plot(res_23)
Clustering using PC1 and PC2:
Clustering using PC2 and PC3:
#using PC1 and PC2:
1 2 3 4
1 12 46 24 5
2 36 0 2 0
3 2 1 24 0
4 0 3 0 45
#using PC2 and PC3:
1 2 3 4
1 36 0 0 0
2 13 48 0 0
3 0 1 0 48
4 1 1 50 2
As we can see from the above plots, PC2 and PC3 carry more discriminating information than PC1.
If one instead tries to cluster using the latent factors from a Mixture of Factor Analyzers, we see a much better result compared to using the first two PCs.
# mfa() is assumed here to come from the EMMIXmfa package; y is the raw data matrix
library(EMMIXmfa)
y <- as.matrix(crabs[, 4:8])
mfa_model <- mfa(y, g = 4, q = 2)
|............................................................| 100%
table(mfa_model$clust,c(rep(1,50),rep(2,50),rep(3,50),rep(4,50)))
1 2 3 4
1 0 0 0 45
2 16 50 0 0
3 34 0 0 0
4 0 0 50 5 | What are the differences between Factor Analysis and Principal Component Analysis? | There many great answers for this post but recently, I came across another difference.
Clustering is one application where PCA and FA yield different results. When there are many features in the data | What are the differences between Factor Analysis and Principal Component Analysis?
There are many great answers in this post, but recently I came across another difference.
Clustering is one application where PCA and FA yield different results. When there are many features in the data, one may be tempted to find the top PC directions, project the data onto these PCs, and then proceed with clustering. Often this disturbs the inherent clusters in the data - this is a well-proven result. Researchers therefore suggest proceeding with sub-space clustering methods, which look for low-dimensional latent factors in the model.
Just to illustrate this difference consider the Crabs dataset in R. Crabs dataset has 200 rows and 8 columns, describing 5 morphological measurements on 50 crabs each of two colour forms and both sexes, of the species - Essentially there are 4 (2x2) different classes of crabs.
library(MASS)
data(crabs)
lbl <- rep(1:4,each=50)
pc <- princomp(crabs[,4:8])
plot(pc) # produce the scree plot
X <- as.matrix(crabs[,4:8]) %*% pc$loadings
library(mclust)
res_12 <- Mclust(X[,1:2],G=4)
plot(res_12)
res_23 <- Mclust(X[,2:3],G=4)
plot(res_23)
Clustering using PC1 and PC2:
Clustering using PC2 and PC3:
#using PC1 and PC2:
1 2 3 4
1 12 46 24 5
2 36 0 2 0
3 2 1 24 0
4 0 3 0 45
#using PC2 and PC3:
1 2 3 4
1 36 0 0 0
2 13 48 0 0
3 0 1 0 48
4 1 1 50 2
As we can see from the above plots, PC2 and PC3 carry more discriminating information than PC1.
If one tries to cluster using the latent factors using a Mixture of Factor Analyzers, we see much better result compared against using the first two PCs.
mfa_model <- mfa(y, g = 4, q = 2)
|............................................................| 100%
table(mfa_model$clust,c(rep(1,50),rep(2,50),rep(3,50),rep(4,50)))
1 2 3 4
1 0 0 0 45
2 16 50 0 0
3 34 0 0 0
4 0 0 50 5 | What are the differences between Factor Analysis and Principal Component Analysis?
There many great answers for this post but recently, I came across another difference.
Clustering is one application where PCA and FA yield different results. When there are many features in the data |
475 | What are the differences between Factor Analysis and Principal Component Analysis? | From Factor Analysis Vs. PCA (Principal Component Analysis)
Principal Component Analysis
Factor Analysis
Meaning
A component is a derived new dimension (or variable) so that the derived variables are linearly independent of each other.
A factor (or latent) is a common or underlying element with which several other variables are correlated.
Purpose
PCA is used to decompose the data into a smaller number of components and therefore is a type of Singular Value Decomposition (SVD).
Factor Analysis is used to understand the underlying ‘causes’, i.e. the latent factors (or constituents) that capture much of the information in a set of variables in the dataset. Hence, it is also known as Common Factor Analysis (CFA).
Assumption
PCA looks to identify the dimensions that are composites of the observed predictors.
Factor analysis explicitly presumes that the latent (or factors) exist in the given data.
Objective
The aim of PCA is to explain as much of the cumulative variance in the predictors (or variables) as possible.
FA focuses on explaining the covariances or the correlations between the variables.
How much variation is explained?
The components explain all the variance in the data. PCA captures the maximum variance in the first component, then in the second component, and henceforth followed by the other components.
The latent factors themselves are not directly measurable, and they do not explain all the variance in the data. Hence, the model includes an error term that is unique to each measured variable.
Process
In PCA, the components are calculated as the linear combinations of the original variables.
In factor analysis, the original variables are defined as the linear combinations of the factors.
Mathematical representation
Y = W1*PC1 + W2*PC2 + … + W10*PC10 + C, where the PCi are the components and the Wi are the weights for each of the components.
X1 = W1*F + e1, X2 = W2*F + e2, X3 = W3*F + e3, where F is the factor, the Wi are the weights and the ei are the error terms. The error is the variance in each X that is not explained by the factor.
Interpretation of the weights
The weights are the correlation between the standardized scores of the predictors (or variables) and the principal components, also known as the factor loadings. For example, in PCA, the weights indicate which component contributes more to the target variable, Y, as the independent variables are standardized.
The weights in the factor analysis express the relationship or association of each variable (X) to the underlying factor (F). These are also known as the factor loadings and can be interpreted as the standardized regression coefficients.
Estimation of the weights
PCA uses the correlation matrix of the variables, which generates the eigenvectors (or the components) and estimates it as the betas (or the coefficients).
The process of factor analysis ascertains the optimal weights.
Pecking order
In PCA, the variables are specified first, and the weights (coefficients or betas) are then estimated through regression.
In factor analysis, the latent factors are specified first, and the factor returns are then estimated through regression.
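A minimal R illustration of the two estimation rows above (a sketch only; X stands in for any numeric data matrix):
X <- scale(matrix(rnorm(100*5), 100, 5) + rnorm(100))   # toy data with correlated columns
pca_weights <- eigen(cor(X))$vectors                    # PCA: eigenvectors of the correlation matrix
fa_loadings <- factanal(X, factors = 1)$loadings        # FA: loadings estimated here by maximum likelihood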
Use Cases and Applications of PCA
The use cases of PCA are:
PCA is highly used for image processing. It has wide applications in domains such as facial recognition, computer vision. Image
processing is a method to perform operations on an image to either
enhance an image or extract and determine information or patterns from
it.
It has its use in the field of investment to analyze stocks and predict portfolio returns. Also, it can be used to model the yield
curves.
PCA also has applications in the field of bioinformatics. One such use case is genomic studies done using gene expression
measurements.
Both the banking sector and marketing have vast applications of PCA. It can be used to profile customers based on their demographics.
PCA has been extensively used to conduct clinical studies. It is used in the healthcare sector and also by researchers in the domain of
food science.
In the field of psychology, PCA is used to understand psychological scales. It can be used to understand statistically the
ineffective habits that we must have broken yesterday!
Use Case and Applications of Factor Analysis
Some of the business problems where factor analysis can be applied
are:
You may have heard of the old saying, “Don’t put all your eggs in one basket.” In case you have a stock portfolio, then you know what I
am referring to. Investment professionals rely on factor analysis to
diversify their stocks. It is used to predict the movement across
stocks in a consolidated sector or industry.
In the space of marketing, factor analysis can be used to analyze customer engagement. It is a measure of how much a product or brand is
interacting with its customers during the product’s life cycle.
The HR managers can employ factor analysis to encourage employee effectiveness. It can be done by identifying the features that have
the most impact on employee productivity.
Factor analysis can be applied to group (or segment) the customers based on the similarity or the same characteristics of the customers.
For example, in the insurance industry, the customers are categorized
based on their life stage, for example, youth, married, young family,
middle-aged with dependents, retired. Another example is restaurants
that would frame their menu to target customers based on the
demographics. For example, a fine dining restaurant in an upper
locality will not have the same menu as a tea stall near a college
campus.
Schools, colleges, or universities also apply factor analysis to make their decisions as the class curriculum would be dependent on the
difference in the levels of the classes. This ultimately determines
the salary and staffing limits of the teachers.
This technique is also handy for exploring the relationships in the category of socioeconomic status, dietary patterns.
Like PCA, factor analysis can also be used to understand the psycholo | What are the differences between Factor Analysis and Principal Component Analysis? | From Factor Analysis Vs. PCA (Principal Component Analysis)
Principal Component Analysis
Factor Analysis
Meaning
A component is a derived new dimension (or variable) so that the derived varia | What are the differences between Factor Analysis and Principal Component Analysis?
From Factor Analysis Vs. PCA (Principal Component Analysis)
Principal Component Analysis
Factor Analysis
Meaning
A component is a derived new dimension (or variable) so that the derived variables are linearly independent of each other.
A factor (or latent) is a common or underlying element with which several other variables are correlated.
Purpose
PCA is used to decompose the data into a smaller number of components and therefore is a type of Singular Value Decomposition (SVD).
Factor Analysis is used to understand the underlying ‘cause’ which these factors (latent or constituents) capture much of the information of a set of variables in the dataset data. Hence, it is also known as Common Factor Analysis (CFA).
Assumption
PCA looks to identify the dimensions that are composites of the observed predictors.
Factor analysis explicitly presumes that the latent (or factors) exist in the given data.
Objective
The aim of PCA is to explain as much of the cumulative variance in the predictors (or variables) as possible.
FA focuses on explaining the covariances or the correlations between the variables.
How much variation is explained?
The components explain all the variance in the data. PCA captures the maximum variance in the first component, then in the second component, and henceforth followed by the other components.
The latent themselves are not directly measurable, and they do not explain all the variance in the data. Hence, it results in an error term that is unique to each measured variable.
Process
In PCA, the components are calculated as the linear combinations of the original variables.
In factor analysis, the original variables are defined as the linear combinations of the factors.
Mathematical representation
Y = W1* PC1 + W2* PC2+… + W10 * PC10 +C Where, PCs are the components and Wis are the weights for each of the components.
X1 = W1F + e1 X2 = W2F + e2 X3 = W3*F + e3 Where, F is the factor, Wis are the weights and eis are the error terms. The error is the variance in each X that is not explained by the factor.
Interpretation of the weights
The weights are the correlation between the standardized scores of the predictors (or variables) and the principal components, also known as the factor loadings. For example, in PCA, the weights indicate which component contributes more to the target variable, Y, as the independent variables are standardized.
The weights in the factor analysis express the relationship or association of each variable (X) to the underlying factor (F). These are also known as the factor loadings and can be interpreted as the standardized regression coefficients.
Estimation of the weights
PCA uses the correlation matrix of the variables, which generates the eigenvectors (or the components) and estimates it as the betas (or the coefficients).
The process of factor analysis ascertains the optimal weights.
Pecking order
In PCA, the variables are specified and then estimate the weights (coefficients or betas) through regression.
In factor analysis, the latent (or the factors) are first specified and then estimate the factor returns through regression.
Use Cases and Applications of PCA
The use cases of PCA are:
PCA is highly used for image processing. It has wide applications in domains such as facial recognition, computer vision. Image
processing is a method to perform operations on an image to either
enhance an image or extract and determine information or patterns from
it.
It has its use in the field of investment to analyze stocks and predict portfolio returns. Also, it can be used to model the yield
curves.
PCA also has its applications in the area field of bioinformatics. One such use case is the genomic study done using gene expression
measurements.
Both the banking sector and marketing have vast applications of PCA. It can be used to profile customers based on their demographics.
PCA has been extensively used to conduct clinical studies. It is used in the healthcare sector and also by researchers in the domain of
food science.
In the field of psychology, PCA is used to understand psychological scales. It can be used to understand statically the
ineffective habits that we must have broken yesterday!
Use Case and Applications of Factor Analysis
Some of the business problems where factor analysis can be applied
are:
You may have heard of the old saying, “Don’t put all your eggs in one basket.” In case you have a stock portfolio, then you know what I
am referring to. Investment professionals rely on factor analysis to
diversify their stocks. It is used to predict the movement across
stocks in a consolidated sector or industry.
In the space of marketing, factor analysis can be used to analyze customer engagement. It is a measure of how much a product or brand is
interacting with its customers during the product’s life cycle.
The HR managers can employ factor analysis to encourage employee effectiveness. It can be done by identifying the features that have
the most impact on employee productivity.
Factor analysis can be applied to group (or segment) the customers based on the similarity or the same characteristics of the customers.
For example, in the insurance industry, the customers are categorized
based on their life stage, for example, youth, married, young family,
middle-age with dependents, retried. Another example is of restaurants
that would frame their menu to target customers based on the
demographics. For example, a fine dining restaurant in an upper
locality will not have the same menu as a tea stall near a college
campus.
Schools, colleges, or universities also apply factor analysis to make their decisions as the class curriculum would be dependent on the
difference in the levels of the classes. This ultimately determines
the salary and staffing limits of the teachers.
This technique is also handy for exploring the relationships in the category of socioeconomic status, dietary patterns.
Like PCA, factor analysis can also be used to understand the psycholo | What are the differences between Factor Analysis and Principal Component Analysis?
From Factor Analysis Vs. PCA (Principal Component Analysis)
Principal Component Analysis
Factor Analysis
Meaning
A component is a derived new dimension (or variable) so that the derived varia |
476 | What are common statistical sins? | Failing to look at (plot) the data. | What are common statistical sins? | Failing to look at (plot) the data. | What are common statistical sins?
Failing to look at (plot) the data. | What are common statistical sins?
Failing to look at (plot) the data. |
477 | What are common statistical sins? | Most interpretations of p-values are sinful! The conventional usage of p-values is badly flawed; a fact that, in my opinion, calls into question the standard approaches to the teaching of hypothesis tests and tests of significance.
Haller and Krauss have found that statistical instructors are almost as likely as students to misinterpret p-values. (Take the test in their paper and see how you do.) Steve Goodman makes a good case for discarding the conventional (mis-)use of the p-value in favor of likelihoods. The Hubbard paper is also worth a look.
Haller and Krauss. Misinterpretations of significance: A problem students share with their teachers. Methods of Psychological Research (2002) vol. 7 (1) pp. 1-20 (PDF)
Hubbard and Bayarri. Confusion over Measures of Evidence (p's) versus Errors (α's) in Classical Statistical Testing. The American Statistician (2003) vol. 57 (3)
Goodman. Toward evidence-based medical statistics. 1: The P value fallacy. Ann Intern Med (1999) vol. 130 (12) pp. 995-1004 (PDF)
Also see:
Wagenmakers, E-J. A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779-804.
for some clear cut cases where even the nominally "correct" interpretation of a p-value has been made incorrect due to the choices made by the experimenter.
Update (2016): In 2016, the American Statistical Association issued a statement on p-values (see here). This was, in a way, a response to the "ban on p-values" issued by a psychology journal about a year earlier.
Most interpretations of p-values are sinful! The conventional usage of p-values is badly flawed; a fact that, in my opinion, calls into question the standard approaches to the teaching of hypothesis tests and tests of significance.
Haller and Krauss have found that statistical instructors are almost as likely as students to misinterpret p-values. (Take the test in their paper and see how you do.) Steve Goodman makes a good case for discarding the conventional (mis-)use of the p-value in favor of likelihoods. The Hubbard paper is also worth a look.
Haller and Krauss. Misinterpretations of significance: A problem students share with their teachers. Methods of Psychological Research (2002) vol. 7 (1) pp. 1-20 (PDF)
Hubbard and Bayarri. Confusion over Measures of Evidence (p's) versus Errors (α's) in Classical Statistical Testing. The American Statistician (2003) vol. 57 (3)
Goodman. Toward evidence-based medical statistics. 1: The P value fallacy. Ann Intern Med (1999) vol. 130 (12) pp. 995-1004 (PDF)
Also see:
Wagenmakers, E-J. A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779-804.
for some clear cut cases where even the nominally "correct" interpretation of a p-value has been made incorrect due to the choices made by the experimenter.
Update (2016): In 2016, American Statistical Association issued a statement on p-values, see here. This was, in a way, a response to the "ban on p-values" issued by a psychology journal about a year earlier. | What are common statistical sins?
Most interpretations of p-values are sinful! The conventional usage of p-values is badly flawed; a fact that, in my opinion, calls into question the standard approaches to the teaching of hypothesis t |
478 | What are common statistical sins? | The most dangerous trap I encountered when working on a predictive model is not to reserve a test dataset early on so as to dedicate it to the "final" performance evaluation.
It's really easy to overestimate the predictive accuracy of your model if you have a chance to somehow use the testing data when tweaking the parameters, selecting the prior, selecting the learning algorithm stopping criterion...
To avoid this issue, before starting your work on a new dataset you should split your data as:
development set
evaluation set
Then split your development set into a "training development set" and a "testing development set", where you use the training development set to train various models with different parameters and select the best ones according to their performance on the testing development set. You can also do a grid search with cross-validation, but only on the development set. Never use the evaluation set until model selection is 100% done.
Once you are confident with the model selection and parameters, perform a 10-fold cross-validation on the evaluation set to get an idea of the "real" predictive accuracy of the selected model.
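A minimal R sketch of that initial split (assuming a data frame called mydata; the 80/20 ratio is arbitrary):
set.seed(42)                                   # for reproducibility
idx <- sample(nrow(mydata), size = round(0.8 * nrow(mydata)))
development <- mydata[idx, ]                   # used for all fitting and tuning
evaluation  <- mydata[-idx, ]                  # locked away until model selection is finished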
Also if your data is temporal, it is best to choose the development / evaluation split on a time code: "It's hard to make predictions - especially about the future." | What are common statistical sins? | The most dangerous trap I encountered when working on a predictive model is not to reserve a test dataset early on so as to dedicate it to the "final" performance evaluation.
It's really easy to overe | What are common statistical sins?
The most dangerous trap I encountered when working on a predictive model is not to reserve a test dataset early on so as to dedicate it to the "final" performance evaluation.
It's really easy to overestimate the predictive accuracy of your model if you have a chance to somehow use the testing data when tweaking the parameters, selecting the prior, selecting the learning algorithm stopping criterion...
To avoid this issue, before starting your work on a new dataset you should split your data as:
development set
evaluation set
Then split your development set into a "training development set" and a "testing development set", where you use the training development set to train various models with different parameters and select the best ones according to their performance on the testing development set. You can also do a grid search with cross-validation, but only on the development set. Never use the evaluation set until model selection is 100% done.
Once you are confident with the model selection and parameters, perform a 10-fold cross-validation on the evaluation set to get an idea of the "real" predictive accuracy of the selected model.
Also if your data is temporal, it is best to choose the development / evaluation split on a time code: "It's hard to make predictions - especially about the future." | What are common statistical sins?
The most dangerous trap I encountered when working on a predictive model is not to reserve a test dataset early on so as to dedicate it to the "final" performance evaluation.
It's really easy to overe |
479 | What are common statistical sins? | Reporting p-values when you did data-mining (hypothesis discovery) instead of statistics (hypothesis testing). | What are common statistical sins? | Reporting p-values when you did data-mining (hypothesis discovery) instead of statistics (hypothesis testing). | What are common statistical sins?
Reporting p-values when you did data-mining (hypothesis discovery) instead of statistics (hypothesis testing). | What are common statistical sins?
Reporting p-values when you did data-mining (hypothesis discovery) instead of statistics (hypothesis testing). |
480 | What are common statistical sins? | A few mistakes that bother me:
Assuming unbiased estimators are always better than biased estimators.
Assuming that a high $R^2$ implies a good model, low $R^2$ implies a bad model.
Incorrectly interpreting/applying correlation.
Reporting point estimates without standard error.
Using methods which assume some sort of Multivariate Normality (such as Linear Discriminant Analysis) when more robust, better performing, non/semiparametric methods are available.
Using p-value as a measure of strength between a predictor and the response, rather than as a measure of how much evidence there is of some relationship. | What are common statistical sins? | A few mistakes that bother me:
Assuming unbiased estimators are always better than biased estimators.
Assuming that a high $R^2$ implies a good model, low $R^2$ implies a bad model.
Incorrectly inter | What are common statistical sins?
A few mistakes that bother me:
Assuming unbiased estimators are always better than biased estimators.
Assuming that a high $R^2$ implies a good model, low $R^2$ implies a bad model.
Incorrectly interpreting/applying correlation.
Reporting point estimates without standard error.
Using methods which assume some sort of Multivariate Normality (such as Linear Discriminant Analysis) when more robust, better performing, non/semiparametric methods are available.
Using p-value as a measure of strength between a predictor and the response, rather than as a measure of how much evidence there is of some relationship. | What are common statistical sins?
A few mistakes that bother me:
Assuming unbiased estimators are always better than biased estimators.
Assuming that a high $R^2$ implies a good model, low $R^2$ implies a bad model.
Incorrectly inter |
481 | What are common statistical sins? | Testing the hypotheses $H_0: \mu=0$ versus $H_1: \mu\neq 0$
(for example in a Gaussian setting)
to justify that $\mu=0$ in a model (i.e. mixing up "$H_0$ is not rejected" and "$H_0$ is true").
A very good example of that type of (very bad) reasoning is when you test whether the variances of two Gaussians are equal (or not) before testing whether their means are equal under the assumption of equal variances.
Another example occurs when you test normality (versus non-normality) to justify normality. Every statistician has done that at some point in their life? It is baaad :) (and it should push people to check robustness to non-Gaussianity)
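A minimal R illustration of the criticized two-step recipe, next to the simpler default of always using Welch's test (illustrative data):
x <- rnorm(30); y <- rnorm(30, sd = 2)
# the criticized recipe: let a non-rejection of "equal variances" justify assuming it
if (var.test(x, y)$p.value > 0.05) t.test(x, y, var.equal = TRUE) else t.test(x, y)
# the robust default: Welch's t-test, with no preliminary variance test
t.test(x, y)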
(for example in a Gaussian setting)
to justify that $\mu=0$ in a model (i.e mix "$H_0$ is not rejected" and "$H_0$ is true").
A very good | What are common statistical sins?
Testing the hypotheses $H_0: \mu=0$ versus $H_1: \mu\neq 0$
(for example in a Gaussian setting)
to justify that $\mu=0$ in a model (i.e mix "$H_0$ is not rejected" and "$H_0$ is true").
A very good example of that type of (very bad) reasoning is when you test whether the variances of two Gaussians are equal (or not) before testing if their mean are equal or not with the assumption of equal variance.
Another example occurs when you test normality (versus non normality) to justify normality. Every statistician has done that in is life ? it is baaad :) (and should push people to check robustness to non Gaussianity) | What are common statistical sins?
Testing the hypotheses $H_0: \mu=0$ versus $H_1: \mu\neq 0$
(for example in a Gaussian setting)
to justify that $\mu=0$ in a model (i.e mix "$H_0$ is not rejected" and "$H_0$ is true").
A very good |
482 | What are common statistical sins? | Not really answering the question, but there's an entire book on this subject:
Phillip I. Good, James William Hardin (2003). Common errors in statistics (and how to avoid them). Wiley. ISBN 9780471460688 | What are common statistical sins? | Not really answering the question, but there's an entire book on this subject:
Phillip I. Good, James William Hardin (2003). Common errors in statistics (and how to avoid them). Wiley. ISBN 9780471460 | What are common statistical sins?
Not really answering the question, but there's an entire book on this subject:
Phillip I. Good, James William Hardin (2003). Common errors in statistics (and how to avoid them). Wiley. ISBN 9780471460688 | What are common statistical sins?
Not really answering the question, but there's an entire book on this subject:
Phillip I. Good, James William Hardin (2003). Common errors in statistics (and how to avoid them). Wiley. ISBN 9780471460 |
483 | What are common statistical sins? | interpreting Probability(data | hypothesis) as Probability(hypothesis | data) without the application of Bayes' theorem. | What are common statistical sins? | interpreting Probability(data | hypothesis) as Probability(hypothesis | data) without the application of Bayes' theorem. | What are common statistical sins?
interpreting Probability(data | hypothesis) as Probability(hypothesis | data) without the application of Bayes' theorem. | What are common statistical sins?
interpreting Probability(data | hypothesis) as Probability(hypothesis | data) without the application of Bayes' theorem. |
484 | What are common statistical sins? | Ritualized Statistics.
This "sin" is when you apply whatever thing you were taught, regardless of its appropriateness, because it's how things are done. It's statistics by rote, one level above letting the machine choose your statistics for you.
Examples are Intro to Statistics-level students trying to make everything fit into their modest t-test and ANOVA toolkit, or any time one finds oneself going "Oh, I have categorical data, I should use X" without ever stopping to look at the data, or consider the question being asked.
A variation on this sin involves using code you don't understand to produce output you only kind of understand, but know "the fifth column, about 8 rows down" or whatever is the answer you're supposed to be looking for. | What are common statistical sins? | Ritualized Statistics.
This "sin" is when you apply whatever thing you were taught, regardless of its appropriateness, because it's how things are done. It's statistics by rote, one level above lettin | What are common statistical sins?
Ritualized Statistics.
This "sin" is when you apply whatever thing you were taught, regardless of its appropriateness, because it's how things are done. It's statistics by rote, one level above letting the machine choose your statistics for you.
Examples are Intro to Statistics-level students trying to make everything fit into their modest t-test and ANOVA toolkit, or any time one finds oneself going "Oh, I have categorical data, I should use X" without ever stopping to look at the data, or consider the question being asked.
A variation on this sin involves using code you don't understand to produce output you only kind of understand, but know "the fifth column, about 8 rows down" or whatever is the answer you're supposed to be looking for. | What are common statistical sins?
Ritualized Statistics.
This "sin" is when you apply whatever thing you were taught, regardless of its appropriateness, because it's how things are done. It's statistics by rote, one level above lettin |
485 | What are common statistical sins? | Dichotomization of a continuous predictor variable to either "simplify" analysis or to solve for the "problem" of non-linearity in the effect of the continuous predictor. | What are common statistical sins? | Dichotomization of a continuous predictor variable to either "simplify" analysis or to solve for the "problem" of non-linearity in the effect of the continuous predictor. | What are common statistical sins?
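A small R sketch of the information loss involved (illustrative; the two-group split is a crude cut of the predictor's range):
set.seed(2)
x <- rnorm(200); y <- x + rnorm(200)
summary(lm(y ~ x))$r.squared                    # continuous predictor
summary(lm(y ~ cut(x, breaks = 2)))$r.squared   # dichotomized predictor explains noticeably less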
Dichotomization of a continuous predictor variable to either "simplify" analysis or to solve for the "problem" of non-linearity in the effect of the continuous predictor. | What are common statistical sins?
Dichotomization of a continuous predictor variable to either "simplify" analysis or to solve for the "problem" of non-linearity in the effect of the continuous predictor. |
486 | What are common statistical sins? | Maybe stepwise regression and other forms of testing after model selection.
Selecting independent variables for modelling without having any a priori hypothesis behind the existing relationships can lead to logical fallacies or spurious correlations, among other mistakes.
Useful references (from a biological/biostatistical perspective):
Kozak, M., & Azevedo, R. (2011). Does using stepwise variable selection to build sequential path analysis models make sense? Physiologia plantarum, 141(3), 197–200. doi:10.1111/j.1399-3054.2010.01431.x
Whittingham, M. J., Stephens, P., Bradbury, R. B., & Freckleton, R. P. (2006). Why do we still use stepwise modelling in ecology and behaviour? The Journal of animal ecology, 75(5), 1182–9. doi:10.1111/j.1365-2656.2006.01141.x
Frank Harrell, Regression Modeling Strategies, Springer 2001. | What are common statistical sins? | Maybe stepwise regression and other forms of testing after model selection.
Selecting independent variables for modelling without having any a priori hypothesis behind the existing relationships can l | What are common statistical sins?
Maybe stepwise regression and other forms of testing after model selection.
Selecting independent variables for modelling without having any a priori hypothesis behind the existing relationships can lead to logical fallacies or spurious correlations, among other mistakes.
Useful references (from a biological/biostatistical perspective):
Kozak, M., & Azevedo, R. (2011). Does using stepwise variable selection to build sequential path analysis models make sense? Physiologia plantarum, 141(3), 197–200. doi:10.1111/j.1399-3054.2010.01431.x
Whittingham, M. J., Stephens, P., Bradbury, R. B., & Freckleton, R. P. (2006). Why do we still use stepwise modelling in ecology and behaviour? The Journal of animal ecology, 75(5), 1182–9. doi:10.1111/j.1365-2656.2006.01141.x
Frank Harrell, Regression Modeling Strategies, Springer 2001. | What are common statistical sins?
Maybe stepwise regression and other forms of testing after model selection.
Selecting independent variables for modelling without having any a priori hypothesis behind the existing relationships can l |
487 | What are common statistical sins? | Something I see surprisingly often in conference papers and even journals is making multiple comparisons (e.g. of bivariate correlations) and then reporting all the p<.05s as "significant" (ignoring the rightness or wrongness of that for the moment).
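A tiny R sketch of one simple way to account for the multiplicity being ignored (the p-values are made up):
p <- c(0.003, 0.012, 0.020, 0.041, 0.049, 0.2)   # p-values from several bivariate correlations
p.adjust(p, method = "holm")                     # Holm correction; method = "BH" would control the FDR instead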
I know what you mean about psychology graduates, as well- I've finished a PhD in psychology and I'm still only just learning really. It's quite bad, I think psychology needs to take quantitative data analysis more seriously if we're going to use it (which, clearly, we should) | What are common statistical sins? | Something I see a surprising amount in conference papers and even journals is making multiple comparisons (e.g. of bivariate correlations) and then reporting all the p<.05s as "significant" (ignoring | What are common statistical sins?
Something I see a surprising amount in conference papers and even journals is making multiple comparisons (e.g. of bivariate correlations) and then reporting all the p<.05s as "significant" (ignoring the rightness or wrongness of that for the moment).
I know what you mean about psychology graduates, as well- I've finished a PhD in psychology and I'm still only just learning really. It's quite bad, I think psychology needs to take quantitative data analysis more seriously if we're going to use it (which, clearly, we should) | What are common statistical sins?
Something I see a surprising amount in conference papers and even journals is making multiple comparisons (e.g. of bivariate correlations) and then reporting all the p<.05s as "significant" (ignoring |
488 | What are common statistical sins? | Being exploratory but pretending to be confirmatory. This can happen when one is modifying the analysis strategy (i.e. model fitting, variable selection and so on) data driven or result driven but not stating this openly and then only reporting the "best" (i.e. with smallest p-values) results as if it had been the only analysis. This also pertains to the point of multiple testing that Chris Beeley made and results in a high false positive rate in scientific reports. | What are common statistical sins? | Being exploratory but pretending to be confirmatory. This can happen when one is modifying the analysis strategy (i.e. model fitting, variable selection and so on) data driven or result driven but not
Being exploratory but pretending to be confirmatory. This can happen when one is modifying the analysis strategy (i.e. model fitting, variable selection and so on) data driven or result driven but not stating this openly and then only reporting the "best" (i.e. with smallest p-values) results as if it had been the only analysis. This also pertains to the point of multiple testing that Chris Beeley made and results in a high false positive rate in scientific reports. | What are common statistical sins?
Being exploratory but pretending to be confirmatory. This can happen when one is modifying the analysis strategy (i.e. model fitting, variable selection and so on) data driven or result driven but not |
489 | What are common statistical sins? | The one that I see quite often and always grinds my gears is the assumption that a statistically significant main effect in one group and a non-statistically significant main effect in another group implies a significant effect x group interaction. | What are common statistical sins? | The one that I see quite often and always grinds my gears is the assumption that a statistically significant main effect in one group and a non-statistically significant main effect in another group i | What are common statistical sins?
The one that I see quite often and always grinds my gears is the assumption that a statistically significant main effect in one group and a non-statistically significant main effect in another group implies a significant effect x group interaction. | What are common statistical sins?
The one that I see quite often and always grinds my gears is the assumption that a statistically significant main effect in one group and a non-statistically significant main effect in another group i |
490 | What are common statistical sins? | Correlation implies causation, which is not as bad as accepting the Null Hypothesis. | What are common statistical sins? | Correlation implies causation, which is not as bad as accepting the Null Hypothesis. | What are common statistical sins?
Correlation implies causation, which is not as bad as accepting the Null Hypothesis. | What are common statistical sins?
Correlation implies causation, which is not as bad as accepting the Null Hypothesis. |
491 | What are common statistical sins? | Especially in epidemiology and public health - using arithmetic instead of logarithmic scale when reporting graphs of relative measures of association (hazard ratio, odds ratio or risk ratio).
More information here. | What are common statistical sins? | Especially in epidemiology and public health - using arithmetic instead of logarithmic scale when reporting graphs of relative measures of association (hazard ratio, odds ratio or risk ratio).
More in | What are common statistical sins?
Especially in epidemiology and public health - using arithmetic instead of logarithmic scale when reporting graphs of relative measures of association (hazard ratio, odds ratio or risk ratio).
More information here. | What are common statistical sins?
Especially in epidemiology and public health - using arithmetic instead of logarithmic scale when reporting graphs of relative measures of association (hazard ratio, odds ratio or risk ratio).
More in |
492 | What are common statistical sins? | Analysis of rate data (accuracy, etc) using ANOVA, thereby assuming that rate data has Gaussian distributed error when it's actually binomially distributed.
Dixon (2008) provides a discussion of the consequences of this sin and exploration of more appropriate analysis approaches. | What are common statistical sins? | Analysis of rate data (accuracy, etc) using ANOVA, thereby assuming that rate data has Gaussian distributed error when it's actually binomially distributed.
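A minimal sketch of an alternative that respects the binomial error structure, here a plain binomial GLM for illustration (dat and its columns are made-up names):
# dat: one row per subject/condition with counts of correct responses and total trials
fit_aov <- aov(correct/trials ~ condition, data = dat)            # the criticized ANOVA-on-proportions approach
fit_glm <- glm(cbind(correct, trials - correct) ~ condition,
               family = binomial, data = dat)                     # models the counts as binomial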
Dixon (2008) provides a discussion of the c | What are common statistical sins?
Analysis of rate data (accuracy, etc) using ANOVA, thereby assuming that rate data has Gaussian distributed error when it's actually binomially distributed.
Dixon (2008) provides a discussion of the consequences of this sin and exploration of more appropriate analysis approaches. | What are common statistical sins?
Analysis of rate data (accuracy, etc) using ANOVA, thereby assuming that rate data has Gaussian distributed error when it's actually binomially distributed.
Dixon (2008) provides a discussion of the c |
493 | What are common statistical sins? | A current popular one is plotting 95% confidence intervals around the raw performance values in repeated measures designs when they only relate to the variance of an effect. For example, a plot of reaction times in a repeated measures design with confidence intervals where the error term is derived from the MSE of a repeated measures ANOVA. These confidence intervals don't represent anything sensible. They certainly don't represent anything about the absolute reaction time. You could use the error term to generate confidence intervals around the effect but that is rarely done. | What are common statistical sins? | A current popular one is plotting 95% confidence intervals around the raw performance values in repeated measures designs when they only relate to the variance of an effect. For example, a plot of re | What are common statistical sins?
A current popular one is plotting 95% confidence intervals around the raw performance values in repeated measures designs when they only relate to the variance of an effect. For example, a plot of reaction times in a repeated measures design with confidence intervals where the error term is derived from the MSE of a repeated measures ANOVA. These confidence intervals don't represent anything sensible. They certainly don't represent anything about the absolute reaction time. You could use the error term to generate confidence intervals around the effect but that is rarely done. | What are common statistical sins?
A current popular one is plotting 95% confidence intervals around the raw performance values in repeated measures designs when they only relate to the variance of an effect. For example, a plot of re |
494 | What are common statistical sins? | While I can relate to much of what Michael Lew says, abandoning p-values in favor of likelihood ratios still misses a more general problem--that of overemphasizing probabilistic results over effect sizes, which are required to give a result substantive meaning. This type of error comes in all shapes and sizes, and I find it to be the most insidious statistical mistake. Drawing on J. Cohen and M. Oakes and others, I've written a piece on this here. | What are common statistical sins? | While I can relate to much of what Michael Lew says, abandoning p-values in favor of likelihood ratios still misses a more general problem--that of overemphasizing probabilistic results over effect si | What are common statistical sins?
While I can relate to much of what Michael Lew says, abandoning p-values in favor of likelihood ratios still misses a more general problem--that of overemphasizing probabilistic results over effect sizes, which are required to give a result substantive meaning. This type of error comes in all shapes and sizes, and I find it to be the most insidious statistical mistake. Drawing on J. Cohen and M. Oakes and others, I've written a piece on this here. | What are common statistical sins?
While I can relate to much of what Michael Lew says, abandoning p-values in favor of likelihood ratios still misses a more general problem--that of overemphasizing probabilistic results over effect si |
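A small sketch of the point (with made-up samples): report an effect size such as Cohen's d alongside the p-value, since a p-value alone says nothing about whether the effect is substantively meaningful.

```python
import numpy as np
from scipy import stats

# Two large, made-up samples with a tiny true difference
rng = np.random.default_rng(0)
a = rng.normal(100.0, 15.0, 50_000)
b = rng.normal(100.5, 15.0, 50_000)

t_stat, p = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)   # equal group sizes
d = (b.mean() - a.mean()) / pooled_sd                       # Cohen's d

# With n this large, p is typically tiny even though d is trivial (~0.03)
print(f"p = {p:.2e}, Cohen's d = {d:.3f}")
```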
495 | What are common statistical sins? | My intro psychometrics course in undergrad spent at least two weeks teaching how to perform a stepwise regression. Is there any situation where stepwise regression is a good idea? | What are common statistical sins? | My intro psychometrics course in undergrad spent at least two weeks teaching how to perform a stepwise regression. Is there any situation where stepwise regression is a good idea? | What are common statistical sins?
My intro psychometrics course in undergrad spent at least two weeks teaching how to perform a stepwise regression. Is there any situation where stepwise regression is a good idea? | What are common statistical sins?
My intro psychometrics course in undergrad spent at least two weeks teaching how to perform a stepwise regression. Is there any situation where stepwise regression is a good idea? |
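One reason stepwise regression usually appears on lists of sins, shown as a sketch rather than a definitive treatment: a simple forward-selection loop applied to pure noise will still "discover" predictors whose final p-values look convincing. Everything below is simulated.

```python
import numpy as np
import statsmodels.api as sm

# Pure noise: 50 candidate predictors, an outcome unrelated to all of them
rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Naive forward stepwise selection by p-value (a deliberately crude version)
selected, remaining = [], list(range(p))
while remaining:
    pvals = {}
    for j in remaining:
        cols = selected + [j]
        fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
        pvals[j] = np.asarray(fit.pvalues)[-1]      # p-value of the newly added column
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:
        break
    selected.append(best)
    remaining.remove(best)

final = sm.OLS(y, sm.add_constant(X[:, selected])).fit()
print("columns 'found' in pure noise:", selected)
print("their final p-values:", np.round(np.asarray(final.pvalues)[1:], 3))
```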
496 | What are common statistical sins? | Failing to test the assumptions that error is normally distributed and has constant variance between treatments. These assumptions aren't always tested; thus, least-squares model fitting is probably often used when it is actually inappropriate. | What are common statistical sins? | Failing to test the assumptions that error is normally distributed and has constant variance between treatments. These assumptions aren't always tested; thus, least-squares model fitting is probably of | What are common statistical sins?
Failing to test the assumptions that error is normally distributed and has constant variance between treatments. These assumptions aren't always tested; thus, least-squares model fitting is probably often used when it is actually inappropriate. | What are common statistical sins?
Failing to test the assumptions that error is normally distributed and has constant variance between treatments. These assumptions aren't always tested; thus, least-squares model fitting is probably of
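A sketch of the sort of check being asked for, on invented data: after fitting by least squares, look at the residuals for normality and at the groups for equal variance. Formal tests are shown below; residual and QQ plots are at least as informative.

```python
import numpy as np
from scipy import stats

# Invented treatment groups; the third is deliberately skewed
rng = np.random.default_rng(0)
groups = {
    "control": rng.normal(10, 1, 40),
    "treat_a": rng.normal(12, 1, 40),
    "treat_b": rng.lognormal(2.5, 0.5, 40),
}

# Residuals from the cell-means (ANOVA-style) fit
residuals = np.concatenate([g - g.mean() for g in groups.values()])

print("Shapiro-Wilk (normality of residuals): p =",
      round(stats.shapiro(residuals).pvalue, 4))
print("Levene (equal variances across groups): p =",
      round(stats.levene(*groups.values()).pvalue, 4))
# Small p-values here warn that ordinary least-squares / ANOVA assumptions
# are questionable for these data
```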
497 | What are common statistical sins? | This may be more of a pop-stats answer than what you're looking for, but:
Using the mean as an indicator of location when data is highly skewed.
This isn't necessarily a problem, if you and your audience know what you're talking about, but this generally isn't the case, and the median is often likely to give a better idea of what's going on.
My favourite example is mean wages, which are usually reported as "average wages". Depending on the income/wealth inequality in a country, this can be vastly different from the median wage, which gives a much better indicator of where people are at in real life. For example, in Australia, where we have relatively low inequality, the median is 10-15% lower than the mean. In the US the difference is much starker: the median is less than 70% of the mean, and the gap is increasing.
Reporting on the "average" (mean) wage results in a rosier picture than is warranted, and could also give a large number of people the false impression that they aren't earning as much as "normal" people. | What are common statistical sins? | This may be more of a pop-stats answer than what you're looking for, but:
Using the mean as an indicator of location when data is highly skewed.
This isn't necessarily a problem, if you and your audie | What are common statistical sins?
This may be more of a pop-stats answer than what you're looking for, but:
Using the mean as an indicator of location when data is highly skewed.
This isn't necessarily a problem, if you and your audience know what you're talking about, but this generally isn't the case, and the median is often likely to give a better idea of what's going on.
My favourite example is mean wages, which are usually reported as "average wages". Depending on the income/wealth inequality in a country, this can be vastly different from the median wage, which gives a much better indicator of where people are at in real life. For example, in Australia, where we have relatively low inequality, the median is 10-15% lower than the mean. In the US the difference is much starker: the median is less than 70% of the mean, and the gap is increasing.
Reporting on the "average" (mean) wage results in a rosier picture than is warranted, and could also give a large number of people the false impression that they aren't earning as much as "normal" people. | What are common statistical sins?
This may be more of a pop-stats answer than what you're looking for, but:
Using the mean as an indicator of location when data is highly skewed.
This isn't necessarily a problem, if you and your audie |
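The wage example is easy to reproduce with simulated data (the lognormal parameters below are invented, not actual wage figures): on a right-skewed distribution the mean sits well above the median.

```python
import numpy as np

# A made-up right-skewed "income" distribution
rng = np.random.default_rng(0)
wages = rng.lognormal(mean=10.9, sigma=0.7, size=100_000)

print(f"mean wage:   {wages.mean():>10,.0f}")
print(f"median wage: {np.median(wages):>10,.0f}")
print(f"median / mean ratio: {np.median(wages) / wages.mean():.2f}")  # well below 1
```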
498 | What are common statistical sins? | My old stats prof had a "rule of thumb" for dealing with outliers: If you see an outlier on your scatterplot, cover it up with your thumb :) | What are common statistical sins? | My old stats prof had a "rule of thumb" for dealing with outliers: If you see an outlier on your scatterplot, cover it up with your thumb :) | What are common statistical sins?
My old stats prof had a "rule of thumb" for dealing with outliers: If you see an outlier on your scatterplot, cover it up with your thumb :) | What are common statistical sins?
My old stats prof had a "rule of thumb" for dealing with outliers: If you see an outlier on your scatterplot, cover it up with your thumb :) |
499 | What are common statistical sins? | That the p-value is the probability that the null hypothesis is true and (1-p) is the probability that the alternative hypothesis is true, or that failing to reject the null hypothesis means the alternative hypothesis is false, etc. | What are common statistical sins? | That the p-value is the probability that the null hypothesis is true and (1-p) is the probability that the alternative hypothesis is true, or that failing to reject the null hypothesis means the alter | What are common statistical sins?
That the p-value is the probability that the null hypothesis is true and (1-p) is the probability that the alternative hypothesis is true, or that failing to reject the null hypothesis means the alternative hypothesis is false, etc. | What are common statistical sins?
That the p-value is the probability that the null hypothesis is true and (1-p) is the probability that the alternative hypothesis is true, or that failing to reject the null hypothesis means the alter
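A quick simulation makes the distinction concrete (all numbers are illustrative assumptions, not from the answer): among "significant" results, the proportion coming from a true null is typically nowhere near the p-value threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, n = 20_000, 30
null_true = rng.random(n_experiments) < 0.5        # the null is true half the time
effect = np.where(null_true, 0.0, 0.3)             # modest effect when it is false

rejections = false_rejections = 0
for i in range(n_experiments):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect[i], 1.0, n)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        rejections += 1
        false_rejections += int(null_true[i])

# Far above 0.05 under these assumptions: p < .05 is not P(null is true) < .05
print("share of significant results with a true null:",
      round(false_rejections / rejections, 2))
```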
500 | What are common statistical sins? | In a similar vein to @dirkan - The use of p-values as a formal measure of evidence of the null hypothesis being true. It does have some good heuristic and intuitive features, but is essentially an incomplete measure of evidence because it makes no reference to the alternative hypothesis. While the data may be unlikely under the null (leading to a small p-value), the data may be even more unlikely under the alternative hypothesis.
The other problem with p-values, which also relates to some styles of hypothesis testing, is that there is no principle telling you which statistic you should choose, apart from the very vague "large value" $\rightarrow$ "unlikely if null hypothesis is true". Once again, you can see the incompleteness showing up, for you should also have "large value" $\rightarrow$ "likely if alternative hypothesis is true" as an additional heuristic feature of the test statistic. | What are common statistical sins? | In a similar vein to @dirkan - The use of p-values as a formal measure of evidence of the null hypothesis being true. It does have some good heuristic and intuitive features, but is essentially | What are common statistical sins?
In a similar vein to @dirkan - The use of p-values as a formal measure of evidence of the null hypothesis being true. It does have some good heuristic and intuitive features, but is essentially an incomplete measure of evidence because it makes no reference to the alternative hypothesis. While the data may be unlikely under the null (leading to a small p-value), the data may be even more unlikely under the alternative hypothesis.
The other problem with p-values, which also relates to some styles of hypothesis testing, is that there is no principle telling you which statistic you should choose, apart from the very vague "large value" $\rightarrow$ "unlikely if null hypothesis is true". Once again, you can see the incompleteness showing up, for you should also have "large value" $\rightarrow$ "likely if alternative hypothesis is true" as an additional heuristic feature of the test statistic. | What are common statistical sins?
In a similar vein to @dirkan - The use of p-values as a formal measure of evidence of the null hypothesis being true. It does have some good heuristic and intuitive features, but is essentially
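A worked toy example of that last sentence (all numbers invented): a result can clear p < .05 against the null and yet be far more probable under the null than under the substantive alternative the study cared about.

```python
from scipy import stats

# Sampling distribution of the mean: n = 10,000 observations with sigma = 1
n, sigma = 10_000, 1.0
se = sigma / n ** 0.5
xbar = 0.0206                     # hypothetical observed sample mean

# Two-sided p-value against H0: mu = 0
z = xbar / se
p = 2 * stats.norm.sf(abs(z))

# Likelihood of the observed mean under the null and under a substantive
# alternative, say mu = 0.1 (the sort of effect the study was designed for)
lik_null = stats.norm.pdf(xbar, loc=0.0, scale=se)
lik_alt = stats.norm.pdf(xbar, loc=0.1, scale=se)

print(f"p-value = {p:.3f}")                          # about 0.04, 'significant'
print(f"likelihood ratio (null/alt) = {lik_null / lik_alt:.2e}")
# The ratio is enormous: the data are vastly more likely under the null than
# under that alternative, even though p < 0.05
```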