input | target |
---|---|
I have a question concerning similarity measures.
Suppose that we have a matrix ***M*** where ***M(i,j)*** is the similarity measure between user ***i*** and user ***j*** .
Each user is characterised by : id-user | country | id-artist | id-track
For this I chose to use the Jaccard similarity metric.
Jaccard is well suited to computing the similarity between users based on the tracks that they have listened to.
My question is: is it possible to take into account both id-track ***and*** id-artist to measure the similarity between users?
Thank you | Yes, multiple different ways.
First, we could consider (id-artist,id-track) items as the elements of our set, and compute the Jaccard similarity by comparing those sets. Note that if the artist's id gives us no additional information beyond the track id, this will give the same result, whereas it will give different results if a particular track id could be associated with multiple artists.
Second, we could compute the Jaccard similarity on tracks, and then the Jaccard similarity on artists, and then add the two (probably with some constant coefficient scaling the two). This way two users who listen to the same artists, but none of the same songs by those artists, will be rated as more similar than users who listen to different songs by different artists.
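Here is a minimal sketch of both approaches (my own illustration; the `(artist_id, track_id)` list layout and the 0.5/0.5 weights are assumptions for the example, not something implied by your data):

```python
def jaccard(a, b):
    """Jaccard similarity between two collections, treated as sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Assumed layout: each user's history is a list of (artist_id, track_id) pairs.
user_i = [("a1", "t1"), ("a1", "t2"), ("a2", "t5")]
user_j = [("a1", "t2"), ("a2", "t6"), ("a3", "t9")]

# First approach: the (artist, track) pairs themselves are the set elements.
sim_pairs = jaccard(user_i, user_j)

# Second approach: Jaccard on tracks and on artists separately, then combine
# with a coefficient (0.5/0.5 here as a starting point).
sim_tracks = jaccard([t for _, t in user_i], [t for _, t in user_j])
sim_artists = jaccard([a for a, _ in user_i], [a for a, _ in user_j])
sim_combined = 0.5 * sim_tracks + 0.5 * sim_artists

print(sim_pairs, sim_combined)
```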
What coefficient makes sense? Well, you could start with 0.5 for each (i.e. just .5\*similarity\_artist+.5\*similarity\_track), see if that's reasonable, and adjust if it's not. |
Is the language $L = \{ a^ib^j \mid i\ \nmid\ j \ \} $ context free ?
If we fix $n \in N$ then we know that the language $L = \{ a^ib^j \mid \ \forall \ 1 \le k \le n \ , \ \ j\neq ki \} $ is context free (as it can be presented as a finite union of context free languages in a similar way to the example here: [Is $L= \{ a^ib^j \mid j\neq i \ and \ j\neq2i \ \} $ context free?](https://cs.stackexchange.com/questions/10635/is-l-aibj-mid-j-neq-i-and-j-neq2i-context-free?rq=1))
I think that it's not context free but have failed to prove it.
By reading other questions on this site I noticed this interesting observation: CFL's in $a^\*b^\*$ are closed under complement as can be seen here: [Are context-free languages in $a^\*b^\*$ closed under complement?](https://cs.stackexchange.com/questions/11110/are-context-free-languages-in-ab-closed-under-complement)
So our language $L$ is context free if and only if $ \bar L = \{ a^ib^j \mid \ \ i\ \mid\ j \ \} $ is context free. I tried using the pumping lemma but to no avail.
Thanks in advance | If I'm not mistaken, you can pump $\bar L$ using $\sigma = a^{n}b^{n^{2}}$, because $n^{2} \bmod n = 0$. The result is that $\bar L$ is not context free. Since the property that you mentioned is an "iff", $L$ is not context free either. |
Consider the following problem:
>
> * Input: A Turing Machine M and a DFA D.
> * Question: Is $L(D) \subseteq L(M)$?
>
>
>
Of course, this problem is not decidable, because it is known that deciding whether a word belongs to the language of a Turing machine is undecidable. If this problem were decidable, then we could write the word as a regular expression, construct a DFA from this regular expression, and then check the inclusion relationship between the DFA and the Turing machine. So this is not decidable.
However, I find it difficult to prove or disprove whether it is recursively enumerable.
Any ideas are welcome. | Assume $n\ge2022$. Consider a sequence of $n$ INSERTs.
***Lower bound of time for $n$ INSERTs***
Each expansion that has been done expands the capacity by no more than $\sqrt n$. So at least $n/\sqrt n=\sqrt n$ expansions have happened.
Consider the last $\frac{\sqrt n}2$ expansions. The earliest one of them happens when the number of elements is at least $n-\frac12\sqrt n\sqrt n=\frac n2$. So the total time-cost of these expansions is at least $(\frac n2+\sqrt{\frac n2})\cdot(\frac12\sqrt n)\ge\frac n4\sqrt n$.
***Upper bound of time for $n$ INSERTs***
Consider four consecutive expansions, assuming the first starts at size $k$. The first expansion increases the capacity by $\sqrt k$. The second expansion by $\sqrt{k+\sqrt k}$. The third by $\sqrt{k+\sqrt k +\sqrt{k+\sqrt k}}$. The fourth by $\sqrt{k+\sqrt k +\sqrt{k+\sqrt k}+\sqrt{k+\sqrt k +\sqrt{k+\sqrt k}}}\ge\sqrt k + 1$. So with every third expansion, the size of capacity increase increases by at least $(\sqrt k + 1)-\sqrt k=1$.
Hence the increase of capacity at the $t$-th expansion is at least $t/3$.
The total increase of capacity from the first $3t$ expansions is at least $3(1+2+\cdots+(t-1))=\frac{3(t-1)t}2$. On the other hand, the capacity is no more than $n+\sqrt n$ after $n$ INSERTs. Let $e$ be the number of expansions that happen during $n$ INSERTs. Then
$$\frac{3(e/3-1)e/3}2 \le n + \sqrt n,$$
which implies $e\le 6\sqrt n$.
Each expansion takes at most $n+\sqrt n$ time, so all expansions take at most $e(n+\sqrt n)\le 6n\sqrt n$ time. Including the time to assign $n$ values, which takes $n$ time, the total time is at most $7n\sqrt n$.
***So, the total time for $n$ INSERTs is $\Theta(n\sqrt n)$.***
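As a quick numerical sanity check (my own sketch, not part of the argument above; it assumes an expansion costs the number of elements copied and that the capacity grows by roughly $\sqrt{\text{capacity}}$), the ratio of the total cost to $n\sqrt n$ should settle near a constant:

```python
import math

def total_insert_cost(n, cap=1):
    """Simulate n INSERTs into an array that grows by ~sqrt(capacity) when full.

    Assumed cost model: 1 per assignment, plus the number of elements
    copied at each expansion.
    """
    size, cost = 0, 0
    for _ in range(n):
        if size == cap:
            cost += size                      # copy the existing elements
            cap += max(1, math.isqrt(cap))    # expand by about sqrt(capacity)
        size += 1
        cost += 1                             # write the new element
    return cost

for n in (10**4, 10**5, 10**6):
    print(n, total_insert_cost(n) / (n * math.sqrt(n)))
```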
The reasoning above is sloppy here and there. To some people, including myself, it might be considered a proof that is good enough. To some people, including myself, it may not be acceptable. Anyway, this answer should be good enough to be "a fat hint". |
So, considering that the set of all Turing machines is countably infinite, can we also say that the set of all FA machines (DFA/NFA) or the set of all PDA machines (DPDA/NPDA) is countably infinite, considering that we can build all of them with Turing machines? | The answer to your first question is yes, the sets of FAs and PDAs are countable. It's easy to see that, since each such machine can be completely described by a finite encoding of its relevant information, such as its states and its transition function.
For the second question, there are lots of languages (uncountably many, as the video shows), and almost all of them simply cannot be recognized by any FA. Any non-regular language will suffice as an example, like $\{0^n1^n\mid n\ge 0\}$. The same result holds for the languages recognized by PDAs: there are languages that aren't context-free, like $\{0^n1^n0^n\mid n\ge 0\}$. |
I am attempting to analyze biological data, to see whether the number of events in a given time interval is more/less than expected based on the overall frequency. How would one approach this?
An example of how I would frame this:
Out of 100 ms, 16/44 events occur in 15 ms, and 28/44 events occur in the remaining 85 ms.
Do more events occur in the 15 ms interval than expected based on the overall frequency?
And I guess the null hypothesis is that there is a random distribution of the 44 events over the 100 ms. | >
> to see whether the number of events in a given time interval is more/less than expected based on the overall frequency.
>
>
> An example: Out of 100 ms, 16/44 events occur in 15 ms, and 28/44 events occur in the remaining 85 ms. Do more events occur in the 15 ms interval than expected based on the overall frequency? And I guess the null hypothesis is that there is a random distribution of the 44 events over the 100 ms.
>
>
>
If events are uniformly distributed over the total 100 ms interval, then the expected number of events in the first 15 ms is the total number of events $\times$ 15/100.
I interpret your question as implying a *one tailed* alternative, but you should take care about how well justified that is and consider whether the two tailed alternative is what you need.
I will further assume events are independent (this assumption may not be justified, you should consider carefully whether it is reasonable).
Let $X$ be the number of events in the first 15 ms.
Given the assumptions, and conditioning on the total number of events observed, the number of events, $X$ in 15 ms will be binomial with $n=44$ and $p=0.15$.
$H\_0: \text{Events occur at the same rate before and after 15ms}$
$H\_1: \text{Events occur at a higher rate in the first 15ms}$
Or in symbols:
$H\_0: p=0.15$
$H\_1: p>0.15$
We observe $x=16$. The probability of observing 16 or more events in the first 15 ms is 0.0004022:
![enter image description here](https://i.stack.imgur.com/3UHrN.png)
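For reference, this tail probability can be reproduced with any binomial CDF routine; here is a sketch using SciPy, assuming it is available (the screenshot above presumably shows an equivalent calculation):

```python
from scipy.stats import binom

# P(X >= 16) for X ~ Binomial(n = 44, p = 0.15)
p_value = binom.sf(15, 44, 0.15)   # sf(15) = P(X > 15) = P(X >= 16)
print(p_value)                     # roughly 0.0004
```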
At typical significance levels you'd reject $H\_0$ and conclude there was a higher rate of events in the first 15ms. |
I have to find equivalence classes for different languages based on Myhill-Nerode. I'm struggling a little bit finding these equivalence classes; for example, the language $L=\{b^\*a^n\mid n≡0\pmod5\}$ with alphabet $\{a,b\}$.
My first solution would be: $[\epsilon],[b^∗],[b^∗a],[b^∗aa],[b^∗aaa],[b^∗aaaa]$.
Would these be the correct classes? If not, I would appreciate any help! | If two strings $w\_1, w\_2$ of the same length are different from each other, then you can find a specific position where they differ:
$$w\_1 = \underbrace{\square\ldots \square}\_{k\text{ symbols }}\;x\;\underbrace{\square\ldots\ldots \square}\_{\ell\text{ symbols }}$$
$$w\_2 = \underbrace{\square\ldots \square}\_{k\text{ symbols }}\;y\;\underbrace{\square\ldots\ldots \square}\_{\ell\text{ symbols }}$$
$$x\neq y$$
You may already know the trick that when you concatenate the two strings, you can re-subdivide them:
$$w\_1w\_2 = \underbrace{\square\ldots \square}\_{k\text{ symbols }}\;x\;\underbrace{\square\ldots\ldots \square}\_{\ell\text{ symbols }}\;|\;\underbrace{\square\ldots \square}\_{k\text{ symbols }}\;y\;\underbrace{\square\ldots\ldots \square}\_{\ell\text{ symbols }}$$
$$w\_1w\_2 = \underbrace{\square\ldots \square}\_{k\text{ symbols }}\;x\;\underbrace{\square\ldots\square}\_{k\text{ symbols }}\;|\;\underbrace{\square\ldots\ldots \square}\_{\ell\text{ symbols }}\;y\;\underbrace{\square\ldots\ldots \square}\_{\ell\text{ symbols }}$$
You can do this because the $\square$ symbols can be anything. When you divide them this way, you can more easily see how a context free grammar can recognize the language.
---
Based on this trick, here is a definition of a PDA to recognize the language.
1. The PDA has four states, $P$, $Q\_0$, $Q\_1$, and $R$. The initial state is $P$.
2. When in state $P$, the machine will nondeterministically guess the position $k$ where the two strings differ.
Specifically, in state $P$ the machine may read a character from the input (ignoring it), and push the symbol $A$ onto the stack. It may do this as many times as it likes.
3. When in state $P$, the machine may decide that it will inspect the character in the current position. It reads the character at the current input (what I called $x$ above). If it reads $x=0$, the machine transitions to state $Q\_0$. If it reads $x=1$, the machine transitions to state $Q\_1$ instead.
**In this way, the machine uses its finite state to remember the value of $x$ for later**.
4. When in state $Q\_0$ or $Q\_1$, the machine first consumes $k$ characters of input. Specifically, it pops the symbol $A$ from the stack and consumes one character of input (ignoring it) until the stack is empty. (If it runs out of characters, the computation fails because the value of $k$ was invalid.)
5. Next, while in state $Q\_i$, the machine nondeterministically guesses the value of $\ell$. As before, it does this by consuming one character of input (ignoring it) and pushing $B$ onto the stack. It may repeat this process any number of times.
6. When in state $Q\_i$, the machine may decide that it will inspect the character in the current position. It reads the character at the current input (what I called $y$ above).
If it is in state $Q\_0$ and reads $y=1$, we've found a mismatch!
If it is in state $Q\_1$ and reads $y=0$, we've found a mismatch!
Otherwise, there is no mismatch at the chosen position. The machine should fail.
7. If the machine finds a mismatch, let it transition to state $R$. In state $R$, it should remove all the $B$ symbols from the stack, consuming one character from the input for each one. At the end of this process, it should be exactly at the end of the string and the stack should be empty. (If not, it has picked invalid values for $k$ and $\ell$.)
8. Overall, if $w\_1$ and $w\_2$ are different strings of the same length, one of the nondeterministic guesses of this machine will succeed, so the overall PDA will accept. Otherwise, all of the branches will fail, and the PDA will reject. This is the desired behavior. |
I am looking for a method to detect sequences within univariate discrete data without specifying the length of the sequence or the exact nature of the sequence beforehand (see e.g. [Wikipedia - Sequence Mining](http://en.wikipedia.org/wiki/Sequence_mining))
Here is example data
```
x <- c(round(rnorm(100)*10),
c(1:5),
c(6,4,6),
round(rnorm(300)*10),
c(1:5),
round(rnorm(70)*10),
c(1:5),
round(rnorm(100)*10),
c(6,4,6),
round(rnorm(200)*10),
c(1:5),
round(rnorm(70)*10),
c(1:5),
c(6,4,6),
round(rnorm(70)*10),
c(1:5),
round(rnorm(100)*10),
c(6,4,6),
round(rnorm(200)*10),
c(1:5),
round(rnorm(70)*10),
c(1:5),
c(6,4,6))
```
The method should be able to identify the fact that x contains the sequence 1,2,3,4,5 at least eight times and the sequence 6,4,6 at least five times ("at least" because the random normal part can potentially generate the same sequence).
I have found the `arules` and `arulesSequences` packages but I couldn't make them work with univariate data. Are there any other packages that might be more appropriate here?
I'm aware that only eight or five occurrences for each sequence is not going to be enough to generate statistically significant information, but my question was to ask if there was a good method of doing this, assuming the data repeated several times.
Also note that the important part is that the method works without knowing beforehand that the structure in the data had the sequences `1,2,3,4,5` and `6,4,6` built into it. The aim was to find those sequences in `x` and identify where they occur in the data.
Any help would be greatly appreciated!
**P.S** This was put up here upon suggestion from a stackoverflow comment...
**Update:** perhaps due to the computational difficulty due to the number of combinations, the length of sequence can have a maximum of say 5? | **Finding the high-frequency sequences is the hard part:** once they have been obtained, basic matching functions will identify where they occur and how often.
Within a sequence of length `k` there are `k+1-n` `n`-grams, whence for n-grams up to length `n.max`, there are fewer than `k * n.max` n-grams. Any reasonable algorithm shouldn't have to do much more computing than that. Since the longest possible n-gram is `k`, *every* possible sequence could be explored in $O(k^2)$ time. (There may be an implicit factor of $O(k)$ for any hashing or associative tables used to keep track of the counts.)
**To tabulate all n-grams,** assemble appropriately shifted copies of the sequence and count the patterns that emerge. To be fully general we do not assume the sequence consists of positive integers: we treat its elements as factors. This slows things down a bit, but not terribly so:
```
ngram <- function(x, n) {
# Returns a tabulation of all n-grams of x
k <- length(x)
z <- as.factor(x)
y <- apply(matrix(1:n, ncol=1), 1, function(i) z[i:(k-n+i)])
ngrams <- apply(y, 1, function(s) paste("(", paste(s, collapse=","), ")", sep=""))
table(as.factor(ngrams))
}
```
(For pretty output later, the penultimate line encloses each n-gram in parentheses.)
**Let's generate the data** suggested in the question:
```
set.seed(17)
f <- function(n) c(round(rnorm(n, sd=10)), 1:5, c(6,4,6))
x <- unlist(sapply(c(100,300,70,100,200,70,70,100,200,70), f))
```
We will want to **look only at the highest frequencies:**
```
test <- function(y, e=0, k=2) {
# Returns all extraordinarily high counts in `y`, which is a table
# of n-grams of `x`. "Extraordinarily high" is at least `k` and
# is unusual for a Poisson distribution of mean `e`.
u <- max(k, ceiling(e + 5 * sqrt(e)))
y[y >= u]
}
```
**Let's do it!**
```
n.alphabet <- length(unique(x)) # Pre-compute to save time
n.string <- length(x) # Used for computing `e` below
n.max <- 9 # Longest subsequence to look for
threshold <- 4 # Minimum number of occurrences of interesting subsequences
y <- lapply(as.list(1:n.max),
function(i) test(ngram(x,i), e=(n.string+1-i) / n.alphabet^i, k=threshold))
```
**This calculation took 0.22 seconds** to find all high-frequency n-grams, with `n`=1, 2, ..., 9 within a string of length 1360. Here is a compact list (the corresponding frequencies can also be found in `y`: just print it out, for instance):
```
> temp <- lapply(as.list(1:n.max),
function(i) {cat(sprintf("%d-grams:", i), names(y[[i]]), "\n")})
1-grams: (-1) (-3) (-4) (-7) (0) (1) (2) (3) (4) (5) (6)
2-grams: (-1,-1) (-1,0) (-1,1) (-11,0) (-3,-7) (-3,-8) (-3,3) (-4,-4) (-6,-1) (-6,0) (-7,-3) (-7,-5) (-7,-7) (-7,-9) (-8,3) (-9,0) (-9,9) (0,5) (0,9) (1,2) (1,4) (10,6) (12,-7) (2,-5) (2,-7) (2,3) (3,-1) (3,-2) (3,2) (3,4) (4,-5) (4,-9) (4,4) (4,5) (4,6) (5,-2) (5,-4) (5,6) (6,-4) (6,1) (6,3) (6,4) (6,5) (6,7) (7,6) (8,-7) (8,14)
3-grams: (1,2,3) (2,3,4) (3,4,5) (4,5,6) (5,6,4) (6,4,6)
4-grams: (1,2,3,4) (2,3,4,5) (3,4,5,6) (4,5,6,4) (5,6,4,6)
5-grams: (1,2,3,4,5) (2,3,4,5,6) (3,4,5,6,4) (4,5,6,4,6)
6-grams: (1,2,3,4,5,6) (2,3,4,5,6,4) (3,4,5,6,4,6)
7-grams: (1,2,3,4,5,6,4) (2,3,4,5,6,4,6)
8-grams: (1,2,3,4,5,6,4,6)
9-grams:
``` |
I would like to use a data structure allowing fast access, either a balanced binary search tree (BST) for $O(\log n)$ access time or an open hash table for constant access time.
1) What is the exact memory usage of a BST or hash table, for storing $n$ values? For instance, if we consider a BST, are $n$ pointers sufficient for storing it? (We can neglect the size of the stored values. The only thing that interests me is the storage overhead involved by the use of a specific data structure)
2) If the choice has to be determined by the cost in space, subject to the constraint of a fast enough access time, what is the best data structure to use?
For the space cost criteria, I would like a precise description. I'm interested in two types of usage: static and dynamic. I ask this question in the context of a C implementation.
I'm mostly interested in values around $n=100000$. | When you're asking about "exact" memory usage, do consider that all of those pointers may not be necessary. To see why, consider that the number of binary trees with $n$ nodes is $C\_n$, where:
$$C\_i = \frac{1}{i+1} { 2i \choose i }$$
are the [Catalan numbers](http://mathworld.wolfram.com/CatalanNumber.html). Using [Stirling's approximation](http://mathworld.wolfram.com/StirlingsApproximation.html), we find:
$$\log\_2 C\_n = 2n - O(\log n)$$
So to represent a binary tree with $n$ nodes, it is sufficient to use two *bits* per node. That's a lot less than two *pointers*.
It's not too difficult to work out how to compress a static (i.e. non-updatable) binary search tree down to that size; do a depth-first or breadth-first search, and store a "1" for every branch node and a "0" for every leaf. (It is harder to see how to get $O(\log n)$ access time, and much harder to see how to allow updates to the tree. This is an active research area.)
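To make the "1 for a branch, 0 for a leaf" idea concrete, here is a toy sketch (my own illustration of one common variant: emit a 1 for each present node and a 0 for each absent child in preorder, which uses $2n+1$ bits for $n$ nodes; it does not attempt the $O(\log n)$ access part):

```python
from typing import Optional, Tuple

# A node is a (left, right) pair; an absent child is None.
Tree = Optional[Tuple["Tree", "Tree"]]

def encode(t: Tree) -> str:
    """Preorder encoding: '1' for a present node, '0' for an absent child."""
    if t is None:
        return "0"
    left, right = t
    return "1" + encode(left) + encode(right)

def decode(bits: str) -> Tree:
    """Inverse of encode; consumes the bit string left to right."""
    def go(i):
        if bits[i] == "0":
            return None, i + 1
        left, i = go(i + 1)
        right, i = go(i)
        return (left, right), i
    tree, _ = go(0)
    return tree

# A 3-node tree: a root whose left child has a right child.
t = ((None, (None, None)), None)
print(encode(t))                   # 1101000, i.e. 7 bits for 3 nodes
assert decode(encode(t)) == t
```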
Incidentally, while different balanced binary tree variants are interesting from a theoretical perspective, the consistent message from decades of experimental algorithmics is that in practice, any balancing scheme is as good as any other. The purpose of balancing a binary search tree is to avoid degenerate behaviour, no more and no less. Stepanov also noted that if he'd designed the STL today, he might consider in-memory B-trees instead of red-black trees, because they use cache more efficiently. They also use $n + o(n)$ extra pointers to store $n$ nodes, compared with $2n$ or $3n$ for most binary search trees.
As for hash tables, there is a similar analysis that you can do. If you are (say) storing $2^n$ integers in a hash table from the range $[0,2^m)$, and $2^n \ll 2^m$, then it is sufficient to use
$$\log {2^m \choose 2^n} \approx (m-n)2^n$$
bits. It is possible to achieve close to this bound using hash tables.
To give you the basic idea, consider an idealised hash table where you have $2^n$ elements stored in $2^n$ slots (i.e. load factor of one where every "chain" has length one).
If you hash $m$ bits of key into $m$ bits of hash, then store this in a $n$-bit hash table, then $n$ bits of the hash are implied by the position in the hash table, and you therefore only need to store the remaining $m-n$ bits. By using an invertible hash function (e.g. a [Feistel network](https://en.wikipedia.org/wiki/Feistel_cipher)), you can recover the key exactly.
Of course, traditional hash tables have Poisson behaviour, so you would need to use a technique like [cuckoo hashing](https://en.wikipedia.org/wiki/Cuckoo_hashing) to get close to a load factor of one with no chaining. See [*Backyard Cuckoo Hashing*](http://www.cs.huji.ac.il/~segev/papers/BackyardCuckooHashing.pdf) for further details.
So if space usage is a far more important factor than time (subject to time being "good enough"), it may be worth looking into this area of compressed data structures, and [succinct data structures](https://en.wikipedia.org/wiki/Succinct_data_structure) in particular. |
Suppose I have a dataset $X$ and target labels $Y$. For a fixed neural network architecture, how can I randomly and uniformly sample from the space of all possible assignments of weights such that the neural network maps $X$ to $Y$? | I am trying to formalize your question before discussing it.
If I understand correctly, you ask for the following:
For $X \subset \mathbb{R}^{n}$ and $Y \subset \mathbb{R}^m$, let $f:X \rightarrow Y$ be a map.
Let $w \in \mathbb{R}^q$ be weights. We consider a neural network $g: \mathbb{R}^{n}\times \mathbb{R}^q \rightarrow \mathbb{R}^m$, and let $g^{(w)}: \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}, x \mapsto g(x,w)$ be the neural network parametrized by $w$.
Now you want to sample from the set $\underline{W}(f,g,X):= \{w \in \mathbb{R}^{q} \mid f = (g^{(w)})\_{\mid X} \}$.
However, I think constructing $\underline{W}(f,g,X)$ is very difficult in general.
The following question arises:
Do you already have some $w \in \underline{W}(f,g,X)$ ?
If not, note that $\underline{W}(f,g,X) = \emptyset$ is possible! (it's easy to construct an example of that)
Note also that all known universal approximation theorems have some requirements on $f$, and only state that $f$ can be **approximated** by some neural network. However, for fixed architecture, it might be that there is no $w \in \mathbb{R}^q$ with $f = (g^{(w)})\_{\mid X}$ nor that $f$ can be approximated by $(g^{(w)})\_{\mid X}$ (e.g. in terms of the uniform-norm).
If you have some $w \in \underline{W}(f,g,X)$, there are certain trivial permutations (e.g. permuting the nodes of a fully-connected layer, or some channels). Apart from that, I am not aware of a full description of $\underline{W}(f,g,X)$. And without further details or constraints, I think there is no general answer at the moment.
I hope this helps! |
I'm trying to better understand log loss and how it works but one thing I can't seem to find is putting the log loss number into some sort of context. If my model has a log loss of 0.5, is that good? What's considered a good and bad score? How do these thresholds change? | So this is actually more complicated than Firebug's response suggests, and it all depends on the inherent variation of the process you are trying to predict.
When I say variation what I mean is 'if an event was to repeat under the exact same conditions, known and unknown, what's the probability that the same outcome will occur again'.
A perfect predictor would have a loss, for a true probability $P$:
$$\text{Loss} = -\left[P \ln P + (1-P) \ln (1-P)\right]$$
If you are trying to predict something where, at its worst, some events will be predicted with an outcome of 50/50, then, integrating this loss over true probabilities spread uniformly between 0.5 and 1 and taking the average, the average loss would be $L=0.5$.
If what you are trying to predict is a tad more repeatable, the loss of a perfect model is lower. So, for example, say that with sufficient information a perfect model was able to predict the outcome of any event, and across all possible events the worst it could say is "this event will happen with 90% probability"; then the average loss would be $L=0.18$.
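A quick numerical check of those two figures (my own sketch; it assumes the true probabilities are spread uniformly over the stated range, which is how I read the averaging above):

```python
import numpy as np

def perfect_predictor_loss(p):
    """Log loss of a predictor that outputs the true probability p."""
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

rng = np.random.default_rng(0)

# Worst-case predictions of 50/50: average over p ~ Uniform(0.5, 1)
p = rng.uniform(0.5, 1, 1_000_000)
print(perfect_predictor_loss(p).mean())   # ~0.50

# Worst-case predictions of 90/10: average over p ~ Uniform(0.9, 1)
p = rng.uniform(0.9, 1, 1_000_000)
print(perfect_predictor_loss(p).mean())   # ~0.19, close to the 0.18 above
```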
There is also a difference if the distribution of probabilities is not uniform.
So, in answer to your question, the answer is "it depends on the nature of what you are trying to predict". |
What's the most suitable statistical test for testing whether the distribution of the (x,y) coordinates of the blue points is significantly different from the distribution of the (x,y) coordinates of the red points? I'd also want to know the directionality of this difference. The colored data points are those data points with labels, with the label for blue being distinct from the label for red. White data points are just unlabeled, so could very well be ignored.
![enter image description here](https://i.stack.imgur.com/T60jK.png) | A good test provides *insight* as well as a quantification of the apparent difference. A permutation test will do that, because you can plot the permutation distribution and it will show you just how and to what extent there is a difference in your data.
A natural test statistic would be the mean difference between the points in one group relative to those in the other -- but with little change you can apply this approach to any statistic you choose. This test views group membership arising from the random selection of (say) the red points among the collection of all blue or red points. Each possible sample yields a value of the test statistic (a vector in this case). The permutation distribution is the distribution of all these possible test statistics, each with equal probability.
For small datasets, like that of the question ($N=12$ points with subgroups of $n=5$ and $7$ points), the number of samples is small enough you can generate them all. For larger datasets, where $\binom{N}{n}$ is impracticably large, you can sample randomly. A few thousand samples will more than suffice. Either way, these distributions of vectors can be plotted in Cartesian coordinates, shown below using one circular shape per outcome for the full permutation distribution (792 points). This is the null, or reference, distribution for assessing the location of the mean difference *in the dataset,* shown with a red point and red vector directed towards it.
[![enter image description here](https://i.stack.imgur.com/nbLls.png)](https://i.stack.imgur.com/nbLls.png)
When this point cloud looks approximately Normal, the [Mahalanobis distance](https://stats.stackexchange.com/a/62147/919) of the data from the origin will approximately have a chi-squared distribution with $2$ degrees of freedom (one for each coordinate). This yields a p-value for the test, shown in the title of the figure. That's a useful calculation because it (a) quantifies how extreme the arrow appears and (b) can prevent our visual impressions from deceiving us. Here, although the data look extreme--most of the red points are displaced down and to the left of most of the blue points--the p-value of $0.156$ indicates that such an extreme-looking displacement occurs frequently among random groupings of these twelve points, advising us not to conclude there is a significant difference in their locations.
---
This `R` code gives the details of the calculations and construction of the figure.
```R
#
# The data, eyeballed.
#
X <- data.frame(x = c(1,2,5,6,8,9,11,13,14,15,18,19),
y = c(0,1.5,1,1.25, 10, 9, 3, 7.5, 8, 4, 10,11),
group = factor(c(0,0,0,1,0,1,1,1,1,0,1,1),
levels = c(0, 1), labels = c("Red", "Blue")))
#
# This approach, although inefficient for testing mean differences in location,
# readily generalizes: by precomputing all possible
# vector differences among all the points, any statistic based on differences
# observed in a sample can be easily computed.
#
dX <- with(X, outer(x, x, `-`))
dY <- with(X, outer(y, y, `-`))
#
# Given a vector `i` of indexes of the "red" group, compute the test
# statistic (in this case, a vector of mean differences).
#
stat <- function(i) rowMeans(rbind(c(dX[i, -i]), c(dY[i, -i])))
#
# Conduct the test.
#
N <- nrow(X)
n <- with(X, sum(group == "Red"))
p.max <- 2e3 # Use sampling if the number of permutations exceeds this
# set.seed(17)
if (lchoose(N, n) <= log(p.max)) {
P <- combn(seq_len(N), n)
stitle <- "P-value"
} else {
P <- sapply(seq_len(p.max), function(i) sample.int(N, n))
stitle <- "Approximate P-value"
}
S <- t(matrix(apply(P, 2, stat), 2)) # The permutation distribution
s <- stat(which(X$group == "Red")) # The statistic for the data
#
# Compute the Mahalanobis distance and its p-value.
# This works because the center of `S` is at (0,0).
#
delta <- s %*% solve(crossprod(S) / (nrow(S) - 1), s)
p <- pchisq(delta, 2, lower.tail = FALSE)
#
# Plot the reference distribution as a point cloud, then overplot the
# data statistic.
#
plot(S, asp = 1, col = "#00000020", xlab = "dx", ylab = "dy",
main = bquote(.(stitle)==.(signif(p, 3))))
abline(h = 0, v = 0, lty = 3)
arrows(0, 0, s[1], s[2], length = 0.15, angle = 18,
lwd = 2, col = "Red")
points(s[1], s[2], pch = 24, bg = "Red", cex = 1.25)
``` |
Let's say I have a dataset with the following format:
* customerid
* product
* orders\_in\_last7days
* orders\_in\_last6days
* orders\_in\_last5days
* orders\_in\_last4days
* orders\_in\_last3days
* orders\_in\_last2days
* orders\_in\_last1days
* orders\_currentday
This dataset could have multiple customers and some customers could place $n$ orders on different days. How can I flag customers that have an unusual number of purchases on the current day, by looking at the distribution of orders on the previous day for that specific customer? | There are a few good startups and open-source tools that offer solutions for ML monitoring (I actually work at a startup in the field).
You can find here a few **[comparison tools](https://book.mlcompendium.com/mlops-monitoring-and-alerts#tool-comparisons)** to compare between some of them according to different features. I recommend the [airtable](https://airtable.com/shr4rfiuOIVjMhvhL) by Ori on the top of the list, and [mlops.toys](https://mlops.toys/) (This is an open-source created by some of my colleagues so maybe I'm biased, but I love it).
The MLCompendium is, in general, a good source for information in many subjects in the ml field.
I really can't recommend the best tool for you because **it depends on your exact needs**:
* Are you looking for monitoring along the way as part of a full pipeline tool, or for a super-advanced tool dedicated specifically to monitoring that expands your existing pipeline?
* Do you work with Tabular data? NLP? Vision?
* What is the frequency of your predictions?
* Do you need to monitor all your data or just a segment of it?
* etc...
In addition, this short blog post a colleague of mine wrote on [Concept Drift Detection Methods](https://www.aporia.com/concept-drift-detection-methods/) may help you as well. You can find many more articles on the subject in the link to the MLCompendium I attached above. |
I want to generate a random partition of an $N\times N$ grid into $N$ connected groups having $N$ tiles each. How would I do this? Max grid size will be 10x10. Below is an example for a 5x5 grid.
[![example puzzle](https://i.stack.imgur.com/eWQvF.png)](https://i.stack.imgur.com/eWQvF.png) | Filesystems typically don't handle internal edits to files as you describe them. When you edit a program file, a text editor will pull the whole file into memory, allow you to make modifications to a buffer and then write the whole thing back out to disk. Actual file I/O operations are usually quite limited by the operating system, e.g. overwrite existing file contents, truncate the file to a smaller size discarding what's beyond EOF, and appending to the file.
*Text editors* manage the business of allowing arbitrary insertion and deletion of lines and blocks of text. These programs use an assortment of data structures, from arrays of lines to gap buffers to more elaborate structures, depending on the editor. [An old Blogspot article](https://ecc-comp.blogspot.com/2015/05/a-brief-glance-at-how-5-text-editors.html) describes how a few popular text editors work internally; I commend you to it and to the source code of the many open-source text editors available if you wish to learn more. |
Suppose I have a bunch of cities with different population sizes, and I wanted to see if there was a positive linear relationship between the number of liquor stores in a city and the number of DUIs. Where I'm determining whether this relationship is significant or not based on a t-test of the estimated regression coefficient.
Now clearly the pop. size of a city is going to be positively correlated with both the number of DUIs as well as the number of liquor stores. Thus if I run a simple linear regression on just liquor stores and see if its regression coefficient is statistically significant, I will likely run into a problem of multicollinearity, and over-estimate the effect of liquor stores on DUIs.
Which of the two methods should I use to correct for this?
1. I should divide the number of liquor stores in the city by its population in order to get a liquor store per capita value and then regress on that.
2. I should regress on both liquor stores and size, and then look to see if the liquor store coefficient is significant when controlling for size.
3. Some other method?
I honestly can't decide which seems more sensible. I vacillate between them, depending on which one I think about I'm able to convince myself that that's the right way.
On the one hand liquor stores per capita seems like the right variable to use, since DUIs are committed by individuals, but that doesn't seem very statistically rigorous. On the other hand, controlling for size seems statistically rigorous, but rather indirect. Furthermore, if I rescale after computing the liquor stores per capita variable, I get very similar regression coefficients between the two methods, but method 1 produces a smaller p-value. | I would regress the "DUI per capita" (Y) on "liquor stores per capita" (X) and "population size" (Z). This way your Y reflects the propensity to drunk driving of urban people, while X is a per-capita characteristic of a given city. Z is a control variable just in case there's a size effect on Y. I don't think you are going to see a multicollinearity issue in this setup.
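A small simulation sketch of that setup (my own illustration: the data-generating process and effect sizes below are made up purely to show the per-capita regression with a population control):

```python
import numpy as np

rng = np.random.default_rng(42)
n_cities = 500

# Hypothetical data-generating process.
pop = rng.lognormal(mean=11, sigma=1, size=n_cities)     # city population
stores = rng.poisson(pop / 5000) + 1                     # liquor stores grow with population
dui = rng.poisson(0.002 * pop + 3.0 * stores)            # DUIs depend on both

y = dui / pop                                            # DUI per capita (Y)
x = stores / pop                                         # liquor stores per capita (X)
design = np.column_stack([np.ones(n_cities), x, pop])    # intercept, X, population control (Z)

beta, *_ = np.linalg.lstsq(design, y, rcond=None)
print(dict(zip(["intercept", "stores_per_capita", "population"], beta)))
```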
This setup is more interesting than your model 1. Here, your base is to assume that the number of DUIs is proportional to population, while $\beta\_Z$ would capture nonlinearity, e.g. people in larger cities are more prone to drunk driving. Also X reflects the cultural and legal environment directly, already adjusted for size. You may end up with roughly the same X for cities of different sizes in the South. This also allows you to introduce other control variables such as Red/Blue state, Coastal/Continental etc. |
I've recently become quite interested in parametricity after seeing Bernardy and Moulin's 2012 LICS paper ( <https://dl.acm.org/citation.cfm?id=2359499>). In this paper, they internalize unary parametricity in a pure type system with dependent types and hint at how you can extend the construction to arbitrary arities.
I've only seen binary parametricity defined before. My question is: what is an example of an interesting theorem that can be proved using binary parametricity, but not with unary parametricity? It would also be interesting to see an example of a theorem provable with tertiary parametricity, but not with binary (although I've seen evidence that n-parametricity is equivalent for n >= 2: see <http://www.sato.kuis.kyoto-u.ac.jp/~takeuti/art/par-tlca.ps.gz>) | Typically, you use binary parametricity to prove program equivalences. It's unnatural to do this with a unary model, since it only talks about one program at a time.
Normally, you use a unary model if all you are interested in is a unary property. For example, see our recent draft, *[Superficially Substructural Types](http://www.mpi-sws.org/~neelk/icfp12-superficial-krishnaswami-turon-dreyer-garg.pdf)*, in which we prove a type soundness result using a unary model. Since soundness talks about the behavior of one program (if $e : A$ then it either diverges or reduces to a value $v : A$), a unary model is sufficient. If we wanted to prove program equivalences in addition, we would need a binary model.
EDIT: I just realized that if you look at our paper, it just looks like a plain old logical relations/realizability model. I should say a little bit more about what makes it (and other models) parametric. Basically, a model is parametric when you can prove the identity extension lemma for it: that is, for any type expression, if all of the free type variables are bound to identity relations, then the type expression is the identity relation. We don't explicitly prove it as a lemma (I don't know why, but you rarely need to when doing operational models), but this property is essential for our language's soundness.
The definition of "relation" and "identity relation" in parametricity is actually a bit up for grabs, and this freedom is actually essential if you want to support fancy types like higher kinds or dependent types, or wish to work with fancier semantic structures. The most accessible account of this I know is in Bob Atkey's draft paper [*Relational Parametricity for Higher Kinds*](https://personal.cis.strath.ac.uk/robert.atkey/fomega-parametricity.html).
If you have a good appetite for category theory, this was first formulated in an abstract way by Rosolini in his paper [*Reflexive Graphs and Parametric Polymorphism*](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.9972). It has since been developed further by Dunphy and Reddy in their paper [*Parametric Limits*](http://www.cs.bham.ac.uk/~udr/papers/parametric.pdf), and also by Birkedal, Møgelberg, and Petersen in [*Domain-theoretical Models of Parametric Polymorphism*](http://dl.acm.org/citation.cfm?id=1316367). |
Consider the following DAG which shows the direct and indirect effect of $U$ on $Y$.
The total effect of $U \rightarrow Y$ is simply $(2\times4) + 3 = 11$.
I am looking for the derivation of the **biased** effect of $A \rightarrow Y$.
When I run a simulation I get a biased effect of about 5.5.
How can I retrieve this 5.5 mathematically?
[![enter image description here](https://i.stack.imgur.com/QdAK8.png)](https://i.stack.imgur.com/QdAK8.png)
```
# R simulation
n = 100000
u = rnorm(n, 0, 1)
a = u*2
y = 3*u + 4*a
lm(y ~ a) # biased effect
lm(y ~ u) # total effect
lm(y ~ a + u) # correct effects
```
---
I worked out 2 ways to compute the bias effect (which is what I observe from the simulation), but it does not make sense to me why they work. Is there a general formula to compute the expected bias?
1. Divide the total effect of $U$ on $Y$, by the path $U \rightarrow A$, $\frac{11}{2}=5.5$
2. Let's label the paths: $U \rightarrow A = a$, $A \rightarrow Y = b$, and $U \rightarrow Y = c$, then we can compute the bias with: b + ($\frac{c}{a}$), $4 + \frac{3}{2} = 5.5$ | Just to help you understand what you are looking at a bit better on your residual plot, your data looks something like this:
[![enter image description here](https://i.stack.imgur.com/xX3xS.png)](https://i.stack.imgur.com/xX3xS.png)
Your model is fine until the price gets capped; then you need to determine whether the rest of the model is valid or not. The capped price has to be due to unrecorded data above that price because you would not expect to see data like that in reality for your particular problem. So then you have to think about what the data looks like above that price. It may be that the linear relationship no longer holds once you go above the grey line and this would be a limitation of using a linear model here. The data may curve and flatten off in reality, in which case a logarithmic curve would fit much better, so it would be unwise to predict data above that line with a linear model.
Also, do you care what happens above the grey line, or do you only need the model for the part where the model is valid? If you are only interested in the portion of the model that is valid, then you don't need to worry about the rest. These are some of the things you might want to think about. |
I've been trying to make these packages work for quite some time now but with no success. Basically the error is:
```
GraphViz's Executables not found
```
**EDIT**: I had not posted a terminal `log` with the error originally. I'm using `Ubuntu` now so I won't be able to reproduce the exact same error I got in the past (a year ago, so far away in the past...). However, I've been experiencing a similar --- if not the same --- error in my current setup; even while using a virtual environment with `pipenv`. The error seems to come from lines that were described in [@张乾元's answer](https://datascience.stackexchange.com/a/48563/57429):
```
Traceback (most recent call last):
File "example.py", line 49, in <module>
Image(graph.create_png())
File "/home/philippe/.local/lib/python3.6/site-packages/pydotplus/graphviz.py", line 1797, in <lambda>
lambda f=frmt, prog=self.prog: self.create(format=f, prog=prog)
File "/home/philippe/.local/lib/python3.6/site-packages/pydotplus/graphviz.py", line 1960, in create
'GraphViz\'s executables not found')
pydotplus.graphviz.InvocationException: GraphViz's executables not found
```
I've tried to install `GraphViz` in 2 different ways: via `pip install graphviz` and through the `.msi` package (and also tried to install `pydot`, `pydotplus` and `graphviz` in many different orders).
The code I'm trying to run is simply a `dot-to-png` converter for the [Iris Dataset](https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html).
```py
from sklearn.tree import DecisionTreeClassifier
import sklearn.datasets as datasets
from sklearn.externals.six import StringIO
from sklearn.tree import export_graphviz
import pandas as pd
import pydotplus
from IPython.display import Image
iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns = iris.feature_names)
y = iris.target
dtree = DecisionTreeClassifier()
dtree.fit(df,y)
dot_data = StringIO()
export_graphviz(
dtree,
out_file = dot_data,
filled = True,
rounded = True,
special_characters = True
)
graph_1 = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph_1.create_png())
```
In `Jupyter Notebooks` and in `Atom`, the system seems to be looking for `GraphViz` inside `pydotplus`, as it points to `~\Anaconda3\lib\site-packages\pydotplus\graphviz.py`. Shouldn't it be the other way around?
Lastly, I just want to point out that I've already tried adding `GraphViz`'s path to the system's `PATH` using `C:\Users\Philippe\Anaconda3\Library\bin\graphviz`. | If you have Anaconda, you could use Conda manager.
Type `conda` at the Start Panel and try installing via Conda.
For example:
```
conda install graphviz
``` |
I am writing up results from regression analysis where I used AICc model averaging to arrive at my final parameter estimates. I am wondering how best to refer to these parameters and their 95% confidence intervals. It seems like "significantly different" is taboo in the AIC world, but writing out "the parameter was x.x and its CI does not cross zero" seems much more laborious to me and the reader than saying "x.x was significantly different from zero."
This seems like it might be an issue that would not come up if I had just selected the lowest AICc as my best model, which is what many folks do (despite Burnham and Anderson repeatedly stating otherwise). Selecting the best model lets you say "the parameter is important b/c it is in the final model."
Also, I'm wondering if there is an AIC model averaged equivalent to "marginally significant." I have parameters that have the predicted sign, indicate a fairly sizeable effect, but whose CI creeps over 0.0.
Philosophically I like model averaging, and I also have many good models that often only differ by an extra covariate or an interaction.
**EDIT:** This inquiry can probably be summarized by asking "In an AICc model averaging framework how does one interpret parameters whose confidence intervals span zero by only a small amount?" | If you have access, I've found several papers that are very helpful when deciding what to report, what values to use, and the common mistakes people make when using AIC. One mistake discussed is using 95% CIs when you've used AIC procedures, as discussed in Arnold 2010.
Arnold T.W. 2010. Uninformative Parameters and Model Selection Using Akaike’s Information Criterion. Journal of Wildlife Management
Zuur et al. 2010. A protocol for data exploration to avoid common statistical problems. Methods in Ecology and Evolution
Symonds and Moussalli 2011. A brief guide to model selection, multimodel inference and model averaging in behavioural ecology using Akaike's information criterion |
If I have an ROC for a single classifier [y(x) in the range 0...1] that is 'worse than random', namely the AUC of the ROC is less than 0.5, would a classifier that reversed the class predictions [y'(x)=1-y(x)] be better than random to the same degree? | $y'(x)$ means you will work with $ROC' = ROC^{-1}$ (the inverse of $ROC$), as all true positives will become false negatives and vice versa. Therefore, $AUC' = 1 - AUC$ (as the ROC is an increasing function inside the unit square), and your answer is yes. |
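A quick numerical check of the claim in the answer above (a sketch assuming scikit-learn is available):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
# A deliberately bad scorer: higher scores tend to go to the negative class.
scores = rng.random(1000) * 0.6 + 0.4 * (1 - y_true)

auc = roc_auc_score(y_true, scores)              # well below 0.5
auc_reversed = roc_auc_score(y_true, 1 - scores)
print(auc, auc_reversed, np.isclose(auc + auc_reversed, 1.0))   # AUC' = 1 - AUC
```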
Based on the little knowledge that I have on MCMC (Markov chain Monte Carlo) methods, I understand that sampling is a crucial part of the aforementioned technique. The most commonly used sampling methods are Hamiltonian and Metropolis.
Is there a way to utilise machine learning or even deep learning to construct a more efficient MCMC sampler? | A method that *could* connect the two concepts is that of a multivariate Metropolis Hastings algorithm. In this case, we have a target distribution (the posterior distribution) and a proposal distribution (typically a multivariate normal or t-distribution).
A well known fact is that the further the proposal distribution is from the posterior distribution, the less efficient the sampler is. So one could imagine using some sort of machine learning method to build up a proposal distribution that matches better to the true posterior distribution than a simple multivariate normal/t distribution.
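For concreteness, here is a minimal random-walk Metropolis–Hastings sketch (my own illustration; the correlated 2-D Gaussian below stands in for a posterior, and the "learned proposal" idea would amount to replacing the fixed Gaussian step with something fitted to the target):

```python
import numpy as np

def log_target(x):
    """Log density (up to a constant) of a correlated 2-D Gaussian."""
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * x @ np.linalg.solve(cov, x)

def metropolis_hastings(log_target, n_samples=5000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    samples = []
    for _ in range(n_samples):
        proposal = x + step * rng.standard_normal(2)   # symmetric Gaussian proposal
        log_accept = log_target(proposal) - log_target(x)
        if np.log(rng.random()) < log_accept:          # Metropolis acceptance rule
            x = proposal
        samples.append(x)
    return np.array(samples)

samples = metropolis_hastings(log_target)
print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])
```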
However, it's not clear this would be any improvement to efficiency. By suggesting deep learning, I assume that you may be interested in using some sort of neural network approach. In most cases, this would be **significantly** more computationally expensive than the entire vanilla MCMC method itself. Similarly, I don't know any reason that NN methods (or even most machine learning methods) do a good job of providing adequate density *outside* the observed space, crucial for MCMC. So even ignoring the computational costs associated with building the machine learning model, I cannot see a good reason why this would improve the sampling efficiency. |
I have a dataset with 4519 samples labeled as "1", and 18921 samples labeled as "0" in a binary classification exercise. I am well aware that during the training phase of a classification algorithm (in this case, a Random Forest) the number of 0/1 samples should be balanced to prevent biasing the algorithm towards the majority class.
However, should the **test dataset be balanced as well**?
In other words, if I train my model with 1000 random samples of "0" class, and 1000 random samples of "1" class, should I test the model with the remaining 3519 samples of "1" class, and randomly select another 3519 samples of the majority "0" class, or can I go with the remaining 17921?
What is the **impact of an imbalanced test dataset on the precision, recall, and overall accuracy** metrics?
Thanks | The answer to your first question:
>
> should the test dataset be balanced as well?
>
>
>
is, like many answers in data science, "it depends."
And really, it depends on the audience for, and interpretability of the model metrics, which is the thrust of your second question:
>
> What is the impact of an imbalanced test dataset on the precision, recall, and overall accuracy metrics?
>
>
>
Personally, if the metrics will just be used by you to evaluate the model, I would use the `sensitivity` and `specificity` within each class to evaluate the model, in which case, I care less about the balance of the classes in the test data as long as I have enough of both to be representative. I can account for the prior probabilities of the classes to evaluate the performance of the model.
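As a small illustration (my own sketch with a made-up classifier), sensitivity and specificity are per-class rates read off the confusion matrix, so they are not distorted by how many samples of each class happen to be in the test set:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)

# Imbalanced test set: roughly 4 negatives for every positive, as in the question.
y_true = rng.choice([0, 1], size=5000, p=[0.8, 0.2])
# A made-up classifier that is right about 90% of the time on either class.
flip = rng.random(5000) < 0.10
y_pred = np.where(flip, 1 - y_true, y_true)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall on the positive class
specificity = tn / (tn + fp)   # recall on the negative class
print(sensitivity, specificity)
```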
On the other hand, if the metrics will be used to describe predictive power to a non-technical audience, say upper management, I would want to be able to discuss the overall accuracy, for which, I would want a reasonably balanced test set.
That said, it sounds like your test set is drawn independently of the training data. If you are going to balance the training data set, why not draw one balanced data set from the raw data and then split the training and test data? This will give you very similar class populations in both data sets without necessarily having to do any extra work. |
I have been trying to forecast the results of the following data. These are weekly numbers and I have tried ARIMA and ETS and it seems I am not getting the correct results. I have set the frequency as 365.25/7 and tried auto.arima with stepwise = FALSE and approximation = FALSE. Also tried Fourier. The results that I get are as seen below. Could anyone help me understand what I am doing wrong? How do we get the ups and downs (drift) in the forecast?
Point Forecast :
992.2797 1057.1385 1057.4956 1082.3302 1089.3869 1100.8245 1106.7030 1112.7030 1116.6169 1119.9958 1122.4300 1124.3969
Data is as follows. The information is from 2009-01-04 till 2018-06-15. I was using data from 2018 as test set.
311
1389
1006
1407
6456
1295
2419
1643
915
926
909
1165
1041
1271
2825
1034
967
3149
2188
1128
2427
1583
1049
1225
1134
1283
3861
1298
1169
1057
1220
1296
1457
2313
1511
1649
1429
944
1225
2932
1662
1068
2056
2680
1164
1350
1595
1528
1241
977
2713
2369
864
1499
2364
1317
1068
1756
1333
1148
1340
1519
1560
1326
1325
2219
1308
1283
1657
1350
1048
1134
2372
2392
1233
1495
1251
978
4284
907
909
1268
910
999
1027
2132
2397
2289
1336
1260
973
2092
1392
1155
2465
3046
927
836
2331
2956
1626
1565
2388
1984
868
1276
1045
980
2009
3757
1032
1666
1148
2032
1386
1733
1545
1910
1322
994
1990
951
1206
952
1987
2894
1598
1039
1871
1270
2705
1744
857
1819
1249
688
1848
1432
1957
2055
1069
1831
1207
1038
1819
1119
1892
2037
1200
1724
1974
1670
1853
1071
1569
2533
723
1315
1124
1053
820
1899
1017
1603
1093
1671
1115
1224
967
1853
1684
1017
811
1811
1094
1035
794
2612
1453
912
1368
857
2371
2156
883
685
1031
813
1272
1010
1876
1875
1261
888
1756
1129
1152
1039
1718
1852
1417
1782
1634
1414
1056
1069
1643
1836
1092
998
1531
1108
1020
1822
941
1081
1029
1495
981
1175
1648
1410
1186
866
1394
1253
867
732
1261
2273
1190
765
2220
1390
1384
1484
676
993
1135
830
848
810
2240
1494
856
686
1548
1018
779
1751
1593
886
685
836
841
1448
1084
755
1941
1921
1039
1093
829
1237
935
1305
824
1120
931
766
1463
1354
791
1062
803
779
1335
802
730
1177
1101
1255
1098
735
1609
1049
1109
1041
723
690
1000
1477
1034
1041
1176
1066
669
778
765
790
1436
1069
731
732
721
790
842
1203
1078
717
890
655
718
782
1265
855
1164
1173
735
1066
826
948
797
1188
816
1005
1131
736
566
1056
879
1198
1132
1253
1064
915
1351
1352
1184
1700
1005
937
1013
1322
1052
966
1356
1178
1985
1422
1051
1045
1537
1633
1543
1468
1251
1761
1483
2213
1794
2245
1170
1872
1737
1098
1283
1344
1388
1256
2408
1692
1789
2379
1209
1448
1167
2194
1480
1168
1023
1512
1333
1297
1501
1311
2672
1591
1319
1918
2003
2254
1513
1419
1675
1812
1230
1153
1500
1222
2288
1223
973
968
1058
1473
1372
1010
1257
1219
1081
2356
1645
1059
931
1973
1741
987
755
877
1210
997
1802
936
696
956
738
644
994
766
902
902
2061
925
759
752
969
793
1883
992
699
1704
813
1440
1044
902
1301
1594
959
622
1339
1092
1335
925
848
663
669
1061
1452
794
1430
884
760
1610
1226
860
806
1449
1755
1066
689
722
674
702
1499
793
613
632
618
625
649
1471
1735
811
662
718
763
1594
1353
1404
1865
953
605
983 | A time series is composed of signal and noise. A forecasting method attempts to extract and extrapolate the signal, and discard the noise. (By definition, noise is random and unforecastable, so trying to forecast the noise will make the forecast worse.)
The spikes you see may be systematic, as in AR or MA dynamics, in which case they will be modeled and forecasted. Or, more likely, they are noise, in which case they will not be forecasted, and this is correct.
A forecast is always smoother than the original series, because the noise has been removed.
As to where the ups and downs come from: most likely from seasonal or ARIMA behavior your model has detected. If I fit a straightforward `forecast::auto.arima()` to your data (which is inappropriate, given the seasonality), I get an ARIMA(1,1,3) model, which does exhibit some dynamics.
ARIMA models are not very happy about "long" seasonal cycles. You may want to look at [bats](/questions/tagged/bats "show questions tagged 'bats'") or [tbats](/questions/tagged/tbats "show questions tagged 'tbats'") models. Then again, if you have already included Fourier terms, these models will likely not improve matters dramatically.
You may want to look at some material on forecasting, e.g., [*Forecasting: Principles and Practice*](https://otexts.org/fpp2/). Or at [How to know that your machine learning problem is hopeless?](https://stats.stackexchange.com/q/222179/1352) |
What is the difference between a multiclass problem and a multilabel problem? | **Multiclass classification** means a classification task with more than two classes; e.g., classify a set of images of fruits which may be oranges, apples, or pears. Multiclass classification makes the assumption that each sample is *assigned to one and only one label*: a fruit can be either an apple or a pear but not both at the same time.
**Multilabel classification** assigns to each sample a set of target labels. This can be thought of as predicting properties of a data-point that are *not mutually exclusive*, such as topics that are relevant for a document. A text might be about any of religion, politics, finance or education at the same time or none of these.
Taken from <http://scikit-learn.org/stable/modules/multiclass.html>
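A tiny sketch of how the two kinds of targets are typically represented (my own illustration using scikit-learn encoders):

```python
from sklearn.preprocessing import LabelEncoder, MultiLabelBinarizer

# Multiclass: each sample has exactly one label.
fruit = ["apple", "pear", "orange", "apple"]
print(LabelEncoder().fit_transform(fruit))          # e.g. [0 2 1 0]: one class id per sample

# Multilabel: each sample has a set of labels (possibly empty).
topics = [{"politics", "finance"}, {"religion"}, set(), {"education", "politics"}]
print(MultiLabelBinarizer().fit_transform(topics))  # one 0/1 indicator column per label
```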
---
Edit1 (Sept 2020):
For those who prefer contrasts of terms for a better understanding, look at these contrasts:
* **Multi-class vs Binary-class** is the question of the number of classes your classifier is modeling. In theory, a binary classifier is much simpler than a multi-class problem, so it's useful to make this distinction. For example, Support Vector Machines (SVMs) can trivially learn a hyperplane to separate two classes, but 3 or more classes make the classification problem much more complicated. In neural networks, we commonly use `Sigmoid` for binary, but `Softmax` for multi-class as the last layer of the model.
* **Multi-label vs Single-Label** is the question of how many classes any object or example can belong to. In neural networks, if we need a single label, we use a single `Softmax` layer as the last layer, thus learning a single probability distribution that spans across all classes. If we need multi-label classification, we use multiple `Sigmoids` on the last layer, thus learning a separate distribution for each class.
Remarks: we often combine multilabel with multiclass; in fact, it is safe to assume that all multi-label classifiers are multi-class classifiers.
When we have a binary classifier (say positive v/s negative classes), we wouldn't usually assign both labels or no-label at the same time! We usually convert such scenarios to a multi-class classifier where classes are one of `{positive, negative, both, none}`.
Hence multi-label AND binary classifier is not practical, and it is safe to assume all multilabel are multiclass.
On the other side, not all Multi-class classifiers are multi-label classifiers and we shouldn't assume it unless explicitly stated.
---
EDIT 2: Venn diagram for *my remarks*
[![Classifier](https://i.stack.imgur.com/XghaO.png)](https://i.stack.imgur.com/XghaO.png) |
Suppose we have a graph $G$ with $n$ vertices. Suppose *LP* is a linear programming problem where there is a variable for each vertex of $G$, each variable can take value $≥0$, for each odd cycle of $G$ we add to *LP* the constraint $x\_a+x\_b+x\_c+\dots +x\_i≥1$ where $x\_a,x\_b,x\_c,...,x\_i$ are the vertices of the cycle. The objective function of *LP* is $\min \sum\limits\_{1}^{n}{x\_i}$.
Suppose $S$ is an optimal solution of *LP*. If a variable $x\_v$ takes on a value $>0$ in $S$, is it guaranteed that there exists a minimum [odd cycle transversal](https://en.wikipedia.org/wiki/Odd_cycle_transversal) that contains $v$? By minimum odd cycle transversal, I mean an odd cycle transversal with the fewest number of vertices. | No, $v$ does not have to belong to any minimum odd cycle transversal.
Consider the following undirected graph. The vertices are split into eight groups: $C\_i$ for $i \in [0, 3]$, each of them containing $4$ vertices and $F\_i$ for $i \in [0, 3]$, each containing $3$ vertices. The following edges (and only them) are present in the graph:
1. All edges between $C\_i$ and $C\_{(i + 1) \bmod 4}$ for every $i \in [0, 3]$
2. All edges between $C\_i$ and $F\_i$ for every $i \in [0, 3]$
3. All edges between $F\_0$ and $F\_2$, all edges between $F\_1$ and $F\_3$
Let's prove the following statements:
1. Any OCT that contains a vertex from one of the $C\_i$'s has size at least $7$, but there are OCT's of size $6$ (for example, $F\_0 \cup F\_1$).
2. In any optimal solution to the LP relaxation, the variables corresponding to vertices from $F\_i$'s are set to zero. Moreover, there is only one optimal solution to the LP relaxation: set all variables corresponding to vertices of $C\_i$ to $1/3$.
If both are true, then, for *every* nonzero variable in the optimal solution to the LP, there is no minimum OCT that passes through the corresponding vertex. Because the graph is small enough, you can verify both these statements on a computer.
But I will give a short "human" proof to both.
For the minimum OCT part, it is clear that we should either delete each vertex group fully or not touch it at all (because just a single vertex from the group is a "good enough representative" for the whole group). Moreover, we can see that deleting one $C\_i$ group is not enough. If we delete, say, the group $C\_0$, there is still an odd cycle $F\_1 \to C\_1 \to C\_2 \to C\_3 \to F\_3 \to F\_1$. Hence, we still have to delete at least one other group, for $7$ vertices in total. On the other hand, $F\_0 \cup F\_1$ is an OCT with size $6$.
Now let's deal with LP part.
It can be seen that all odd cycles in the graph pass through at least $3$ vertices from $C\_i$. Hence, assigning weight $1/3$ to each vertex of each $C\_i$ yields a solution with total cost $16/3$. On the other hand, consider all cycles of length $5$ in our graph. It can be proven that all vertices from $C\_i$'s lie on exactly $3/16$ fraction of them, but all vertices from $F\_i$'s lie on exactly $1/6$ fraction of them (the proof is a bit tedious to write down, so I will add it only by request). Then, by averaging the inequalities $x\_a + x\_b + \ldots + x\_\ell \geqslant 1$ over all these cycles, we get $\frac{1}{6} \sum\limits\_{v \in \bigcup F\_i} x\_v + \frac{3}{16} \sum\limits\_{v \in \bigcup C\_i} x\_v \geqslant 1$, implying $\sum\limits\_{v \in V} x\_v \geqslant \frac{16}{18} \sum\limits\_{v \in \bigcup F\_i} x\_v + \sum\limits\_{v \in \bigcup C\_i} x\_v = \frac{16}{3} \left(\frac{1}{6} \sum\limits\_{v \in \bigcup F\_i} x\_v + \frac{3}{16} \sum\limits\_{v \in \bigcup C\_i} x\_v \right) \geqslant \frac{16}{3}$. Moreover, the inequality is strict if some $x\_v$ with $v \in F\_i$ is not zero. Hence, in each optimal LP solution, non-zero weights are assigned only to vertices from $C\_i$'s. Moreover, it is possible to prove that there is only one optimal solution, with all weights of $C\_i$'s set $1/3$. It is not too important, though, because we already proved that all optimal LP solutions are pairwise disjoint from all optimal OCT's. |
I have summarised this from a lot of blogs about Precision and Recall.
Precision is:
>
> Proportion of samples predicted as positive by the classifier that are actually positive.
>
>
>
meaning out of the samples identified as positive by the classifier, how many are actually positive?
and Recall is:
>
> Proportion of actual positives that were correctly predicted as positive.
>
>
>
meaning out of the ground truth positives, how many were identified correctly by the classifier as positive?
---
That sounded very confusing to me. I couldn't interpret the difference between them or relate each to real examples. Some very small questions about interpretation I have are:
1. if avoiding false positives matters the most to me, I should be measuring precision; and if avoiding false negatives matters the most to me, I should be measuring recall. Is my understanding correct?
2. Suppose I am predicting if a patient should be given a vaccine that, when given to a healthy person, is catastrophic and hence should only be given to an affected person; I can't afford giving the vaccine to healthy people. Assuming positive stands for should-give-vaccine and negative is should-not-give-vaccine, should I be measuring precision or recall of my classifier?
3. Suppose I am predicting if an email is spam (+ve) or non-spam (-ve), and I can't afford a spam email being classified as non-spam, meaning I can't afford false negatives. Should I be measuring precision or recall of my classifier?
4. What does it mean to have high precision (> 0.95) and low recall (< 0.05)? And what does it mean to have low precision (< 0.05) and high recall (> 0.95)?
Put simply, in what kind of cases is to preferable or good choice to use Precision over Recall as metric and vice versa. I get the definition and I can't relate it to real examples to answer when one is preferable over other, so I would really like some clarification. | To make sure everything is clear let me quickly summarize what we are talking about. precision and recall are evaluation measures for binary classification, in which every instance has a ground truth class (also called gold standard class, I'll call it 'gold') and a predicted class, both being either positive or negative (note that it's important to clearly define which one is the positive one). Therefore there are four possibilities for every instance:
* gold positive and predicted positive -> TP
* gold positive and predicted negative -> FN (also called type II errors)
* gold negative and predicted positive -> FP (also called type I errors)
* gold negative and predicted negative -> TN
$$Precision=\frac{TP}{TP+FP}\ \ \ Recall=\frac{TP}{TP+FN}$$
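If a tiny worked example helps, here is a minimal sketch (the labels below are made up purely for illustration) that counts TP/FP/FN and applies the two formulas:

```
gold      = [1, 1, 1, 0, 0, 0, 1, 0]   # 1 = positive, 0 = negative
predicted = [1, 0, 1, 1, 0, 0, 0, 0]

tp = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 1)
fp = sum(1 for g, p in zip(gold, predicted) if g == 0 and p == 1)
fn = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 0)

precision = tp / (tp + fp)   # 2 / 3
recall    = tp / (tp + fn)   # 2 / 4
print(precision, recall)
```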
In case it helps, I think a figure such as the one on the [Wikipedia Precision and Recall page](https://en.wikipedia.org/wiki/Precision_and_recall) summarizes these concepts quite well.
About your questions:
>
> 1. if avoiding false positives matters the most to me, I should be measuring precision; and if avoiding false negatives matters the most to me, I should be measuring recall. Is my understanding correct?
>
>
>
Correct.
>
> 2. Suppose I am predicting if a patient should be given a vaccine that, when given to a healthy person, is catastrophic and hence should only be given to an affected person; I can't afford giving the vaccine to healthy people. Assuming positive stands for should-give-vaccine and negative is should-not-give-vaccine, should I be measuring precision or recall of my classifier?
>
>
>
Here one wants to avoid giving the vaccine to somebody who doesn't need it, i.e. we need to avoid predicting a positive for a gold negative instance. Since we want to avoid FP errors at all cost, we must have a very high precision -> precision should be used.
>
> 3. Suppose I am predicting if an email is spam (+ve) or non-spam (-ve), and I can't afford a spam email being classified as non-spam, meaning I can't afford false negatives. Should I be measuring precision or recall of my classifier?
>
>
>
We want to avoid false negative -> recall should be used.
Note: the choice of the positive class is important, here spam = positive. This is the standard way, but sometimes people confuse "positive" with a positive outcome, i.e. mentally associate positive with non-spam.
>
> 4. What does it mean to have high precision (> 0.95) and low recall (< 0.05)? And what does it mean to have low precision (< 0.05) and high recall (> 0.95)?
>
>
>
Let's say you're a classifier in charge of labeling a set of pictures based on whether they contain a dog (positive) or not (negative). You see that some pictures clearly contain a dog so you label them as positive, and some clearly don't so you label them as negative. Now let's assume that for a large majority of pictures you are not sure: maybe the picture is too dark, blurry, there's an animal but it is masked by another object, etc. For these uncertain cases you have two possible strategies:
* Label them as negative, in other words **favor precision**. Best case scenario, most of them turn out to be negative so you will get both high precision and high recall. But if most of these uncertain cases turn out to be actually positive, then you have a lot of FN errors: your recall will be very low, but your precision will still be very high since you are sure that all/most of the ones you labeled as positive are actually positive.
* Label them as positive, in other words **favor recall**. Now in the best case scenario most of them turn out to be positive, so high precision and high recall. But if most of the uncertain cases turn out to be actually negative, then you have a lot of FP errors: your precision will be very low, but your recall will still be very high since you're sure that all/most the true positive are labeled as positive.
Side note: it's not really relevant to your question but the example of spam is not very realistic for a case where high recall is important. Typically high recall is important in tasks where the goal is to find all the *potential* positive cases: for instance a police investigation to find everybody susceptible of being at a certain place at a certain time. Here FP errors don't matter since detectives are going to check afterwards but FN errors could cause missing a potential suspect. |
There are several threads on this site discussing [how](https://stats.stackexchange.com/questions/12053/what-should-i-check-for-normality-raw-data-or-residuals) to determine if the OLS residuals are [asymptotically](https://stats.stackexchange.com/questions/29709/what-happens-if-you-reject-normality-of-residuals-when-estimating-with-least-squ) normally distributed. Another way to evaluate the normality of the residuals with R code is provided in this excellent [answer](https://stats.stackexchange.com/questions/22468/residuals-in-linear-regression). This is another [discussion](https://stats.stackexchange.com/questions/12945/standardized-residuals-vs-regular-residuals) on the practical difference between standardized and observed residuals.
But let's say the residuals are definitely not normally distributed, like in this [example](https://stats.stackexchange.com/questions/29636/robust-regression-setting-the-limit-between-errors-and-influential-observations). Here we have several thousand observations and clearly we must reject the normally-distributed-residuals assumption. One way to address the problem is to employ some form of robust estimator as explained in the answer. However, I am not limited to OLS and in fact I would like to understand the benefits of other GLM or non-linear methodologies.
What is the most efficient way to model data violating the OLS normality of residuals assumption? Or at least what should be the first step to develop a sound regression analysis methodology? | I think you want to look at all the properties of the residuals.
1. normality
2. constant variance
3. correlated to a covariate.
4. combinations of the above
If it is just 1 and it is due to heavy tails or skewness coming from one heavy tail, robust regression might be a good approach, or possibly a transformation to normality. If it is non-constant variance, try a variance-stabilizing transformation or attempt to model the variance function. If it is just 3, that suggests a different form of model involving that covariate. Whatever the problem, bootstrapping the vectors or residuals is always an option.
How do you correctly plot results from a GLM used to test a categorical variable? Here is a reproducible example in R (the data are listed below the code):
```
# run the following code if you don't have the libraries installed:
# install.packages("dplyr","ggplot2", "MASS")
# loading libraries
library(dplyr)
library(ggplot2)
library(MASS)
# making fake data - including response (rs) from a negative binomial distribution
# and a two-level categorical variable (type)
set.seed(246)
df <- data.frame(rs = c(rnegbin(n=100, mu=2, theta=1),
rnegbin(n=30, mu=3, theta=10)),
type = c(rep("A", times=100), rep("B", times=30)))
# now doing the stats
m1 <- glm.nb(rs~type, df)
anova(m1)
summary(m1)
# make summary table containing means, standard deviation (sd),
# count (n), and standard error (se) for each Type (A and B)
# going to use to graph results
df1 <- df %>%
group_by(type) %>%
summarise(means = mean(rs),
sd = sd(rs),
n = n(),
se = sd/sqrt(n))
df1
# now making the graph with df1
ggplot(df1, aes(x=type,y=means)) +
geom_bar(stat="identity", color="black",fill="grey") +
geom_errorbar(aes(ymax = means + se, ymin = means - se), width = 0.2) +
labs(y ="Mean response",x= "Categorical variable") +
theme_bw() +
theme(axis.text = element_text(size=22),
axis.title = element_text(size=24))
# the generated data:
rs type
2 A
0 A
1 A
1 A
2 A
0 A
0 A
2 A
0 A
3 A
9 A
0 A
11 A
0 A
0 A
4 A
4 A
3 A
1 A
5 A
1 A
0 A
0 A
1 A
0 A
6 A
13 A
3 A
2 A
5 A
1 A
0 A
4 A
1 A
4 A
1 A
1 A
3 A
3 A
3 A
3 A
1 A
2 A
0 A
2 A
6 A
2 A
3 A
0 A
2 A
1 A
2 A
1 A
0 A
0 A
0 A
0 A
0 A
0 A
0 A
1 A
0 A
3 A
1 A
1 A
1 A
1 A
4 A
0 A
3 A
4 A
0 A
1 A
9 A
6 A
0 A
0 A
0 A
1 A
8 A
3 A
0 A
1 A
0 A
4 A
3 A
0 A
5 A
1 A
1 A
0 A
1 A
1 A
13 A
1 A
0 A
6 A
3 A
0 A
1 A
1 B
4 B
4 B
5 B
6 B
2 B
4 B
2 B
3 B
1 B
2 B
5 B
2 B
2 B
4 B
1 B
3 B
4 B
2 B
5 B
5 B
4 B
6 B
1 B
5 B
2 B
1 B
5 B
3 B
2 B
```
[![enter image description here](https://i.stack.imgur.com/dxZYZ.png)](https://i.stack.imgur.com/dxZYZ.png)
The GLM says there is no statistical difference between Type A and Type B. I want to plot this result. I made a summary table that I use to plot the mean response for each Type and include error bars. I think I must be calculating standard error incorrectly. Do you have any advice for how to properly plot my results? | The useful formulas have been given in the answer by @Alecos
Papadopoulos, which I will refer to. Note that $\mathbf{\Sigma}$ is
positive definite iff $\rho\_{\text{min}} < \rho < 1$ with
$\rho\_{\text{min}} := - 1/(p-1)$.
For the ML estimation you can concentrate the vector $\boldsymbol{\mu}$
and the scalar $\sigma^2$ out of the log-likelihood function by replacing them by
their ML estimates, namely $\widehat{\boldsymbol{\mu}} = \bar{\mathbf{X}}$ and
$$
\widehat{\sigma}^2 = \frac{1}{np} \, \sum\_{i = 1}^n
\left[\mathbf{X}\_i - \bar{\mathbf{X}} \right]^\top \boldsymbol{\Sigma}^{\star -1}
\left[\mathbf{X}\_i - \bar{\mathbf{X}}\right]
$$
where $\boldsymbol{\Sigma}^\star:= \sigma^{-2} \boldsymbol{\Sigma}$. Thus the estimation
boils down to the one-dimensional optimisation of a function of $\rho$.
An interesting point is that for a vector $\mathbf{z}$ of length $p$ we have
$$
\mathbf{z}^\top \boldsymbol{\Sigma}^{\star -1} \mathbf{z} = \frac{1}{1-\rho}
\left[\mathbf{z}^\top \mathbf{z} - \nu (\mathbf{1}\_p^\top \mathbf{z})^2 \right]
$$
where $\mathbf{1}\_p$ is a vector of $p$ ones and $\nu:= \rho\, / \, [1 + (p-1) \rho]$.
So taking $\mathbf{z}\_i:= \mathbf{X}\_i - \bar{\mathbf{X}}$ we can use the sums
$$
A := \sum\_i \mathbf{z}\_i^\top \mathbf{z}\_i, \qquad
B := \sum\_i (\mathbf{1}\_p^\top \mathbf{z}\_i)^2
$$
which do not depend on the parameter.
The concentrated log-likelihood obtained after replacing $\boldsymbol{\mu}$ and $\sigma^2$
by their estimate is
$$
\ell\_{\text{c}}(\rho) = -\frac{np}{2} \, \log(2 \pi) -
\frac{np}{2}\,\log \widehat{\sigma}^2 -
\frac{n}{2}\,\log|\boldsymbol{\Sigma}^\star| -
\frac{np}{2}.
$$
which is easily maximised.
```
##' MLE of a normal vector with compound symmetric covariance.
##' @title Compound symmetry
##' @param X A matrix with n row and p columns.
##' @return A list of ML Estimates.
MLE <- function(X) {
n <- nrow(X)
p <- ncol(X)
p1 <- p - 1
rho.min <- - 1 / p1
## substract column means. We could use 'scale' as well
xbar <- apply(X, 2, mean)
Xcent <- sweep(X, MARGIN = 2, STATS = xbar, FUN = "-")
## sum of 'n' crossprods and sum of 'n' squared sum of 'p'
## components
A <- sum(apply(Xcent, MARGIN = 1, FUN = crossprod))
B <- sum(apply(Xcent, MARGIN = 1, sum)^2)
## concentrated (or profiled) deviance = - 2 log L, to be
## minimised
deviance <- function(rho1) {
nu <- rho1 / (1.0 + p1 * rho1)
sigma2.hat <- (A - nu * B) / n / p / (1.0 - rho1)
dev <- n * (p * log(sigma2.hat) + p1 * log(1.0 - rho1)
+ log(1.0 + p1 * rho1) + p)
attr(dev, "sigma2.hat") <- sigma2.hat
dev
}
opt <- optimise(deviance, interval = c(rho.min, 1.0))
list(mu = xbar,
sigma2 = attr(opt$objective, "sigma2.hat"),
rho = opt$minimum,
loglik = -opt$objective[1] / 2)
}
## Now try the function change/remove set.seed for other examples
set.seed(123)
p <- 10L; n <- 1000
rho.min <- -1 / (p - 1)
## draw 'rho', 'mu' and 'sigma2'
rho <- runif(1, min = rho.min, max = 1)
mu <- rnorm(p)
sigma2 <- rexp(1)
## build a p * p covariance matrix 'Sigma'
Sigma <- matrix(rho, nrow = p, ncol = p)
diag(Sigma) <- 1
Sigma <- sigma2 * Sigma
## build the matrix 'X'
G <- chol(Sigma)
X <- matrix(rnorm(n * p), nrow = n, ncol = p) %*% G
X <- sweep(X, MARGIN = 2, STATS = mu, FUN = "+")
##
MLE(X)
``` |
$Y\_{1...n}\sim \operatorname{Bin}(1,p)$, iid, and I need to find an unbiased estimator for $\theta=\operatorname{var}(y\_i)$.
I did some calculations and I think that the answer is $p(1-p)-\frac{p(1-p)}{n}$
* Is this correct?
* If not, how can I find an unbiased estimator? | This answer cannot be correct. An estimator cannot depend on the values of the parameters: since they are unknown it would mean that you cannot compute the estimate.
An unbiased estimator of the variance for *every* distribution (with finite second moment) is
$$ S^2 = \frac{1}{n-1}\sum\_{i=1}^n (y\_i - \bar{y})^2.$$
By expanding the square and using the definition of the average $\bar{y}$, you can see that
$$ S^2 = \frac{1}{n} \sum\_{i=1}^n y\_i^2 - \frac{2}{n(n-1)}\sum\_{i< j}y\_iy\_j,$$
so if the variables are IID,
$$E(S^2) = \frac{1}{n} nE(y\_j^2) - \frac{2}{n(n-1)} \frac{n(n-1)}{2} E(y\_j)^2. $$
As you see we do not need the hypothesis that the variables have a binomial distribution (except implicitly in the fact that the variance exists) in order to derive this estimator. |
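If you want to convince yourself numerically, here is a small simulation sketch (the sample size and $p$ are arbitrary choices): the average of $S^2$ over many Bernoulli samples comes out close to $p(1-p)$.

```
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 10, 0.3, 200_000

samples = rng.binomial(1, p, size=(reps, n))   # reps independent samples of size n
s2 = samples.var(axis=1, ddof=1)               # S^2 with the 1/(n-1) factor

print(s2.mean())     # close to 0.21
print(p * (1 - p))   # true variance 0.21
```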
Several hard graph problems remain hard on planar cubic bipartite graphs. They include Hamiltonian cycle problem and perfect P3 matching problem. I'm looking for a reference on interesting subclasses of planar cubic bipartite graphs. An interesting subclass contains infinite number of graphs and excludes infinite number of graphs.
More importantly, which hard problems remain hard on nontrivial subclasses of planar cubic bipartite graphs? | **Cubic Monotone Planar 1-in-3 SAT**:
[1-in-3 SAT](http://en.wikipedia.org/wiki/One-in-three_3SAT) without negated variables and where each variable is in exactly 3 clauses, and the incidence graph (the bipartite graph where the variables and the clauses are the vertex sets) is planar.
<http://arxiv.org/abs/math/0003039>
If you are willing to relax the 3-regularity, this may be relevant: **Planar-3-Connected (3,4)-SAT**, 3SAT but where each variable is in at most 4 clauses, and the incidence graph is polyhedral (3-connected and planar).
<http://dx.doi.org/10.1016/0166-218X(94)90143-0> |
Could someone please explain to me why you need to normalize data when using K nearest neighbors.
I've tried to look this up, but I still can't seem to understand it.
I found the following link:
<https://discuss.analyticsvidhya.com/t/why-it-is-necessary-to-normalize-in-knn/2715>
But in this explanation, I don't understand why a larger range in one of the features affects the predictions. | Suppose you had a dataset (m "examples" by n "features") and all but one feature dimension had values strictly between 0 and 1, while a single feature dimension had values that range from -1000000 to 1000000. When taking the euclidean distance between pairs of "examples", the values of the feature dimensions that range between 0 and 1 may become uninformative and the algorithm would essentially rely on the single dimension whose values are substantially larger. Just work out some example euclidean distance calculations and you can understand how the scale affects the nearest neighbor computation. |
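To make the effect concrete, here is a small sketch with made-up numbers: the third feature has a huge range, so it completely drives the distance, even though the first two features say the opposite.

```
import numpy as np

a = np.array([0.2, 0.9, 500_000.0])
b = np.array([0.8, 0.1, 500_100.0])   # differs from `a` on both small features
c = np.array([0.2, 0.9, 510_000.0])   # identical to `a` on the small features

print(np.linalg.norm(a - b))   # ~100   -> b looks "close" to a
print(np.linalg.norm(a - c))   # ~10000 -> c looks "far" from a
```

After rescaling all features to a comparable range, the first two dimensions would contribute to the neighbour ranking again.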
I was reviewing a paper of a double-blind conference (ML/AI-based conference). The authors improved the approximation bounds for some *special* instances of a problem.
To understand their proof better, I was thinking of a solution by myself while picking some hints from their proof. Using those hints, I came up with a better algorithm for the general instance of the problem. Moreover, my algorithm is much simpler than theirs.
I am feeling a bit greedy here; I want to suggest these changes to them and also want authorship in their paper. I can not ask them since I do not know them. I can not publish my own paper since a few ideas are borrowed from their proof and their manuscript is not online. What should I do? | I don’t think there’s a clear protocol. I’ve seen referees generously offer improvements. I’ve seen authors offer coauthorship for those improvements. I’ve seen the referee accept or decline the offer.
Of course, it’s perfectly legitimate for you to wait for them to publish and then submit your improvement. Delaying by a few months is a gamble (might get scooped, including by the authors). But the preceding options involve gambles as well. |
In the halting problem, we are interested if there is a Turing machine $T$ that can tell whether a given Turing machine $M$ halts or not on a given input $i$. Usually, the proof starts assuming such a $T$ exists. Then, we consider a case where we restrict $i$ to $M$ itself, and then derive a contradiction by using an instance of a diagonal argument. I am interested how would the proof go if we are given a promise that $i \not = M$? What about promise $i \not = M^\prime$, where $M^\prime$ is functionally equivalent to $M$? | Suppose HALTS is a TM that reads its input as a pair $M$ and $x$, where $M$ is a TM encoding and $x$ is any input to that TM.
Your question is if what would happen if we assumed HALTS solved the halting problem for all inputs $\langle M,x \rangle$ such that $x$ is not an encoding of a TM that is functionally equivalent to $M$.
I claim this implies a contradiction. I came up with this on the spot, so I welcome any and all criticism of my proof. The idea of the proof is that rather than diagonalizing something on itself, we make two mutually recursive TMs that behave differently on some input (thus are not functionally equivalent), but otherwise cause contradictions.
Let $D\_1$ and $D\_2$ be two mutually recursive TMs (which is to say we can simulate, print, etc, the description of $D\_2$ inside the program of $D\_1$ and vice versa). Note that we can make mutually recursive TMs from the recursion theorem.
Define $D\_1$ and $D\_2$ as follows: on input $x$, if $|x| < 10$ (10 chosen arbitrarily), then $D\_1$ accepts and $D\_2$ loops. (Thus, they are not functionally equivalent).
Given input $x$ with $|x| \ge 10$, define $D\_1$ to simulate HALTS on $\langle D\_2, x \rangle$ and halt if $D\_2$ halts or loop if $D\_2$ loops.
Given input $x$ with $|x| \ge 10$, define $D\_2$ to simulate HALTS on $\langle D\_1, x \rangle$ and loop if $D\_1$ halts or halt if $D\_1$ loops.
Then note that for any $x$ with $|x| \ge 10$, $D\_1$(x) either halts or loops. If $D\_1$ halts on input x, then we know HALTS($D\_2$, x) determined that $D\_2$ halts on input x. However, $D\_2$ halting on input x implies that HALTS($D\_1$, x) loops.
If $D\_1$ on input $x$ loops, the contradiction follows similarly.
This is a contradiction unless $x$ is an encoding for a Turing machine functionally equivalent to $D\_1$ or $D\_2$, in which case HALTS has undefined behavior. However, $x$ was chosen arbitrarily from all strings of size greater than $10$. Thus, it remains to show there exists a Turing machine with an encoding of size greater than 10 that behaves differently from $D\_1$ and $D\_2$. We can construct such a machine trivially. QED.
Thoughts? |
I came across this question from the 3rd chapter of the book [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/index.html) by Michael Nielsen, this is [a question given](http://neuralnetworksanddeeplearning.com/chap3.html#exercise_195778) in his exercise.
>
> One way of expanding the MNIST training data is to use small rotations
> of training images. What's a problem that might occur if we allow
> arbitrarily large rotations of training images?
>
>
>
I would happy if someone explain why large rotations would be problematic. | The problem is that numbers are not invariant to rotations.
For example, see what happens when you rotate a 4 in steps of 90 degrees:
[![rotation of 4](https://i.stack.imgur.com/Momu1.jpg)](https://i.stack.imgur.com/Momu1.jpg)
So unless your task includes recognizing numbers which are written sideways or upside-down, this does not provide a proper data augmentation.
Consider following example. We asked 50 men and 50 women one question and they answered on 5 point likert scale (1 is completely disagree and 5 completely agree).
Suppose that 50 men responded 20 times 1, 3 times 2, 4 times 3, 20 times 4 and 3 times 5 and 50 women responded 15 times 1, 15 times 2, 15 times 3, 2 times 4 and 3 times 5.
I would like to test whether mean (or median) answers in both groups are significantly different.
The problem is that I cannot use either a parametric test (t-test) or a nonparametric one (Mann-Whitney), since the data are non-normal (thus the t-test is not appropriate), ordinal and thus not continuous, multimodal, with different variances, and with different shapes (thus the Mann-Whitney test is not appropriate). How can I test my hypothesis? Thank you for any answer.
The short answer is: use the t test with Welch correction. Now, the details.
>
> I would like to test whether mean (or median) answers in both groups are significantly different.
>
>
>
Means and medians are different things. What people usually do is that they *think* in terms of means, not medians, so by default this is also what you should aim for. The Likert scale was invented by the guy Rensis Likert precisely with the intention to make it useful for computing means (not only medians). See James Carifio, Rocco Perla, "Resolving the 50-year debate around using and misusing Likert scales" in commentaries in Medical Education, 2008, Blackwell Publishing Ltd.
>
> The problem is that I can not use parametric (t-test) neither nonparametric (Mann-Whitney) test since data are nonnormal (thus t-test is not appropriate) (...), multimodal (...)
>
>
>
Definitely not! Both tests are robust with respect to the shape of the distribution. Only Mann-Whitney (it is usually called Wilcoxon-Mann-Whitney, or WMW test) requires both distributions to have the same shape.
>
> (...) ordinal and thus not continuous (...)
>
>
>
"Ordinal variable" means "arithmetic means doesn't make sense on it". Like education measured in 3-level scale "1 - grammar school", "2 - college" and "3 - university". This does not imply it is not continuous (although usually it is the case).
Neither the WMW nor the t-test require continuous variables.
>
> (...) with different variances (...)
>
>
>
When you use the t-test with Welch's correction (Welch B.L. The generalization of Student's problem when several different population variances are involved, Biometrika, 34, 28-38, 1938), then you don't need to worry about the unequal variances (and shapes). The t-test (as with any test based on means) is already very robust with respect to departures from normality (see e.g. Michael R. Chernick, Robert H. Friis "Introductory Biostatistics for The Health Sciences", Wiley-Interscience 2003 and many other books). This property comes from the fact that it is based on means. By virtue of the Central Limit Theorem, the distribution of the mean very quickly converges to a normal distribution.
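For what it's worth, here is a minimal sketch of running the Welch test in Python, using the counts from your example expanded into raw 1-5 responses:

```
from scipy import stats

men   = [1]*20 + [2]*3  + [3]*4  + [4]*20 + [5]*3   # 50 responses
women = [1]*15 + [2]*15 + [3]*15 + [4]*2  + [5]*3   # 50 responses

t, p = stats.ttest_ind(men, women, equal_var=False)  # equal_var=False gives the Welch correction
print(t, p)
```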
>
> (...) and with different shapes (thus Mann-Whitney test is not appropriate).
>
>
>
Yes, you've got it right. Technically speaking, the Mann-Whitney U test does not test for median, but whether one distribution is offset from the other, which is something subtly different. In particular, it makes this test sensitive to *differences in distribution between groups*. (see Morten W. Fagerland and Leiv Sandvik "The Wilcoxon-Mann-Whitney test under scrutiny", John Willey & Sons, 2009). These differences can translate e.g. into differences in variance or skewness. So this test, in contrast to the Welch test (the t-test with Welch modification), is not safe for when there is no variance homogeneity. |
For detection, a common way to determine if one object proposal was right is *Intersection over Union* (IoU, IU). This takes the set $A$ of proposed object pixels and the set of true object pixels $B$ and calculates:
$$IoU(A, B) = \frac{A \cap B}{A \cup B}$$
Commonly, IoU > 0.5 means that it was a hit, otherwise it was a fail. For each class, one can calculate the
* True Positive ($TP(c)$): a proposal was made for class $c$ and there actually was an object of class $c$
* False Positive ($FP(c)$): a proposal was made for class $c$, but there is no object of class $c$
* Average Precision for class $c$: $\frac{\#TP(c)}{\#TP(c) + \#FP(c)}$
The mAP (mean average precision) = $\frac{1}{|classes|}\sum\_{c \in classes} \frac{\#TP(c)}{\#TP(c) + \#FP(c)}$
If one wants better proposals, one increases the IoU threshold from 0.5 to a higher value (up to 1.0, which would be perfect). One can denote this with mAP@p, where $p \in (0, 1)$ is the IoU.
But what does `mAP@[.5:.95]` (as found in [this paper](https://arxiv.org/abs/1512.04412)) mean? | `mAP@[.5:.95]`(someone denoted `mAP@[.5,.95]`) means average mAP over different IoU thresholds, from 0.5 to 0.95, step 0.05 (0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95).
>
> There is
> an associated MS COCO challenge with a new evaluation
> metric, that averages mAP over different IoU thresholds,
> from 0.5 to 0.95 (written as “0.5:0.95”). [[Ref](https://www.cs.cornell.edu/%7Esbell/pdf/cvpr2016-ion-bell.pdf)]
>
>
>
>
> We evaluate the
> mAP averaged for IoU ∈ [0.5 : 0.05 : 0.95] (COCO’s
> standard metric, simply denoted as mAP@[.5, .95])
> and mAP@0.5 (PASCAL VOC’s metric). [[Ref](https://arxiv.org/pdf/1506.01497.pdf)]
>
>
>
>
> To evaluate our final detections, we use the official
> COCO API [20], which measures mAP averaged over IOU
> thresholds in [0.5 : 0.05 : 0.95], amongst other metrics. [[Ref](https://arxiv.org/pdf/1611.10012.pdf)]
>
>
>
BTW, the [source code](https://github.com/pdollar/coco/blob/master/PythonAPI/pycocotools/cocoeval.py#L501) of [coco](https://github.com/pdollar/coco) shows exactly what `mAP@[.5:.95]` is doing:
`self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True)`
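So, once an AP value has been computed at each of those ten thresholds, the final number is simply their mean; here is a toy sketch (the AP values below are invented, only to show the averaging step):

```
import numpy as np

iou_thrs = np.linspace(0.5, 0.95, 10)          # 0.50, 0.55, ..., 0.95
ap = np.array([0.72, 0.70, 0.67, 0.63, 0.58,   # invented AP values, one per IoU threshold
               0.52, 0.45, 0.36, 0.24, 0.10])
print(ap.mean())                               # this single number is mAP@[.5:.95]
```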
References
----------
* [cocoapi](https://github.com/pdollar/coco)
* [Inside-Outside Net: Detecting Objects in Context with Skip Pooling and
Recurrent Neural Networks](https://www.cs.cornell.edu/%7Esbell/pdf/cvpr2016-ion-bell.pdf)
* [Faster R-CNN: Towards Real-Time Object
Detection with Region Proposal Networks](https://arxiv.org/pdf/1506.01497.pdf)
* [Speed/accuracy trade-offs for modern convolutional object detectors](https://arxiv.org/pdf/1611.10012.pdf) |
Oversampling of under-represented data is a way to combat class imbalance. For example, if we have a training data set with 100 data points of class A and 1000 data points of class B, we can over sample the 100 A data (may be with some sophisticated oversampling methods) to generate 1000 A data to mitigate the data imbalance.
Now, let's say we have 1100 data points of class B, and class A has 2 subclasses, A1 and A2, which have 100 and 10 data points, respectively. And we are still interested in binary classification.
In this case, how should I over sample data of class A to address class imbalance? Should I over sample A1 to 1000 and A2 to 100, or over sample both A1 and A2 to 550?
Besides running an experiment, is there any theoretical analysis of this kind of class imbalance problem? | It totally depends on you and your problem. In fact, if the distribution of the data must be unchanged, oversampling is not a suitable solution.
If you can change the `loss function` of the algorithm, it will be very helpful. There are many useful metrics which were introduced for evaluating the performance of classification methods for imbalanced data-sets. Some of them are **[Kappa](https://en.wikipedia.org/wiki/Cohen%27s_kappa)**, **[CEN](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.708.1668&rep=rep1&type=pdf)**, **[MCEN](https://link.springer.com/chapter/10.1007/978-3-319-94120-2_8)**, **[MCC](https://en.wikipedia.org/wiki/Matthews_correlation_coefficient)**, and **[DP](https://eva.fing.edu.uy/pluginfile.php/69453/mod_resource/content/1/7633-10048-1-PB.pdf)**.
**Disclaimer**:
If you use python, **[PyCM](https://github.com/sepandhaghighi/pycm)** module can help you to find out these metrics.
Here is a simple code to get the recommended parameters from this module:
```
>>> from pycm import *
>>> cm = ConfusionMatrix(matrix={"Class1": {"Class1": 1, "Class2":2}, "Class2": {"Class1": 0, "Class2": 5}})
>>> print(cm.recommended_list)
["Kappa", "SOA1(Landis & Koch)", "SOA2(Fleiss)", "SOA3(Altman)", "SOA4(Cicchetti)", "CEN", "MCEN", "MCC", "J", "Overall J", "Overall MCC", "Overall CEN", "Overall MCEN", "AUC", "AUCI", "G", "DP", "DPI", "GI"]
```
After that, each of these parameters you want to use as the loss function can be used as follows:
```
>>> y_pred = model.predict #the prediction of the implemented model
>>> y_actu = data.target #data labels
>>> cm = ConfusionMatrix(y_actu, y_pred)
>>> loss = cm.Kappa #or any other parameter (Example: cm.SOA1)
``` |
Let's say A = 1 and B = 1,
and then A + B = 1.
Now, by using duality (replacing the OR gate with an AND gate and 1 with 0), we can say that A.B = 0,
but this is not 0, because 1.1 = 1. Please, can anyone clear up my misunderstanding here? Thanks in advance. | Duality transforms one valid identity into another valid identity; it does not let you negate a specific expression just by swapping operators and constants. What you are looking for is De Morgan's theorem, which also complements the variables: $$\overline{A+B}=\overline A\cdot\overline B$$
and
$$\overline{A\cdot B}=\overline A+\overline B$$ |
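A quick exhaustive check of these identities over all Boolean inputs (0 = false, 1 = true), in case it helps:

```
for A in (0, 1):
    for B in (0, 1):
        assert 1 - (A | B) == (1 - A) & (1 - B)   # NOT(A+B) == NOT A . NOT B
        assert 1 - (A & B) == (1 - A) | (1 - B)   # NOT(A.B) == NOT A + NOT B
print("De Morgan's laws hold for all inputs")
```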
I have two logistic regression models, using the same data set and same dependent binary variable but with different sample sizes due to different IV's. How would I go about comparing the two models aside from using a classification matrix? | Don't just compare the classification matrices--compare the entire ROC curves. |
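For instance, here is a minimal sketch in Python with made-up data (the models and variable names are placeholders, not your actual IVs): fit both models, score the same held-out observations, and overlay the ROC curves.

```
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two candidate models using different predictor sets.
m_full  = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
m_small = LogisticRegression(max_iter=1000).fit(X_tr[:, :3], y_tr)

for name, model, cols in [("all predictors", m_full, slice(None)),
                          ("first 3 predictors", m_small, slice(0, 3))]:
    p = model.predict_proba(X_te[:, cols])[:, 1]
    fpr, tpr, _ = roc_curve(y_te, p)
    plt.plot(fpr, tpr, label=f"{name} (AUC = {roc_auc_score(y_te, p):.2f})")

plt.plot([0, 1], [0, 1], linestyle="--")   # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```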
I am replicating, in Keras, the work of a paper where I know the values of `epoch` and `batch_size`. Since the dataset is quite large, I am using `fit_generator`. I would like to know what to set in `steps_per_epoch` given `epoch` value and `batch_size`. Is there a standard way? | As mentioned in Keras' [webpage](https://keras.io/models/sequential/#fit_generator) about `fit_generator()`:
>
> ***steps\_per\_epoch***: Integer. Total number of steps (batches of samples)
> to yield from generator **before declaring one epoch finished** and
> starting the next epoch. It should typically be equal to
> **ceil(num\_samples / batch\_size)**. Optional for Sequence: if unspecified,
> will use the len(generator) as a number of steps.
>
>
>
You can set it equal to `num_samples // batch_size`, which is a typical choice.
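A minimal self-contained sketch (random data, tiny model, made-up counts) of how the three quantities fit together; in recent Keras versions the same arguments are simply passed to `fit`:

```
import numpy as np
from tensorflow import keras

num_samples, batch_size, epochs = 1024, 32, 2
X = np.random.rand(num_samples, 10)
y = np.random.randint(0, 2, size=num_samples)

def batch_gen():
    while True:                                    # Keras generators are expected to loop forever
        idx = np.random.randint(0, num_samples, batch_size)
        yield X[idx], y[idx]

model = keras.Sequential([keras.layers.Dense(1, activation="sigmoid", input_shape=(10,))])
model.compile(optimizer="adam", loss="binary_crossentropy")

model.fit_generator(batch_gen(),
                    steps_per_epoch=num_samples // batch_size,   # 32 batches ~ one pass over the data
                    epochs=epochs)
```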
However, `steps_per_epoch` gives you the chance to "trick" the generator when updating the learning rate using the `ReduceLROnPlateau()` [callback](https://keras.io/callbacks/#reducelronplateau), because this callback checks the drop of the loss once each epoch has finished. If the loss has stagnated for a `patience` number of consecutive epochs, the callback decreases the learning rate to "slow-cook" the network. If your dataset is huge, as is usually the case when you need to use generators, you would probably like to decay the learning rate within a single epoch (since each epoch covers a large amount of data). This can be achieved by setting `steps_per_epoch` to a value that is **less than** `num_samples // batch_size` without affecting the overall number of training epochs of your model.
Imagine this case as using mini-epochs within your normal epochs to change the learning rate because your loss has stagnated. I have found it very useful [in my applications](https://github.com/pcko1/Deep-Drug-Coder/blob/master/ddc_pub/ddc_v3.py#L1008-L1011). |
I'm having a problem with an exercise, I'm supposed to calculate the *exact* worst case runtime and the worst case runtime in *Big O Notation* for a given algorithm.
This is what I'm struggling to understand: I think I know how to determine the runtime in Big O notation, as it's only a nested loop, but what exactly is an *exact* runtime?
How am I supposed to calculate it?
How would that look like with a simple bubble sort algorithm? | You can’t calculate any runtime except for some very simple model of a CPU. You can calculate the exact number of comparisons, or the exact number of operations moving array elements of a particular implementation of an algorithm, but it’s practically impossible to predict how many seconds or microseconds the algorithm will run for. |
What diagnostic plots (and perhaps formal tests) do you find most informative for regressions where the outcome is a count variable?
I'm especially interested in Poisson and negative binomial models, as well as zero-inflated and hurdle counterparts of each. Most of the sources I've found simply plot the residuals vs. fitted values without discussion of what these plots "should" look like.
Wisdom and references greatly appreciated. The back story on why I'm asking this, if it's relevant, is [my other question](https://stats.stackexchange.com/questions/70397/enormous-ses-in-zero-inflated-negative-binomial-regression).
Related discussions:
* [Interpreting residual diagnostic plots for glm models?](https://stats.stackexchange.com/questions/29271/interpreting-residual-diagnostic-plots-for-glm-models)
* [Assumptions of generalized linear models](https://stats.stackexchange.com/questions/49762/assumptions-of-generalized-linear-models)
* [GLMs - Diagnostics and Which Family](https://stats.stackexchange.com/questions/44643/glms-diagnostics-and-which-family) | Here is what I usually like doing (for illustration I use the overdispersed and not very easily modelled quine data of pupil's days absent from school from `MASS`):
1. **Test and graph the original count data** by plotting observed frequencies and fitted frequencies (see chapter 2 in [Friendly](http://rads.stackoverflow.com/amzn/click/1580256600)), which is supported in large part by the `vcd` package in `R`. For example, with `goodfit` and a `rootogram`:
```r
library(MASS)
library(vcd)
data(quine)
fit <- goodfit(quine$Days)
summary(fit)
rootogram(fit)
```
or with **[Ord plots](http://www.jstor.org/stable/2343403)** which help in identifying which count data model is underlying (e.g., here the slope is positive and the intercept is positive which speaks for a negative binomial distribution):
```r
Ord_plot(quine$Days)
```
or with the **"XXXXXXness" plots** where XXXXX is the distribution of choice, say Poissoness plot (which speaks against Poisson, try also `type="nbinom"`):
```r
distplot(quine$Days, type="poisson")
```
2. Inspect usual **goodness-of-fit measures** (such as likelihood ratio statistics vs. a null model or similar):
```r
mod1 <- glm(Days~Age+Sex, data=quine, family="poisson")
summary(mod1)
anova(mod1, test="Chisq")
```
3. Check for **over / underdispersion** by looking at `residual deviance/df` or at a formal test statistic (e.g., [see this answer](https://stats.stackexchange.com/questions/66586/is-there-a-test-to-determine-whether-glm-overdispersion-is-significant/66593#66593)). Here we have clearly overdispersion:
```r
library(AER)
deviance(mod1)/mod1$df.residual
dispersiontest(mod1)
```
4. Check for **influential and leverage points**, e.g., with the `influencePlot` in the `car` package. Of course here many points are highly influential because Poisson is a bad model:
```r
library(car)
influencePlot(mod1)
```
5. Check for **zero inflation** by fitting a count data model and its zeroinflated / hurdle counterpart and compare them (usually with AIC). Here a zero inflated model would fit better than the simple Poisson (again probably due to overdispersion):
```r
library(pscl)
mod2 <- zeroinfl(Days~Age+Sex, data=quine, dist="poisson")
AIC(mod1, mod2)
```
6. **Plot the residuals** (raw, deviance or scaled) on the y-axis vs. the (log) predicted values (or the linear predictor) on the x-axis. Here we see some very large residuals and a substantial deviance of the deviance residuals from the normal (speaking against the Poisson; Edit: @FlorianHartig's answer suggests that normality of these residuals is not to be expected so this is not a conclusive clue):
```r
res <- residuals(mod1, type="deviance")
plot(log(predict(mod1)), res)
abline(h=0, lty=2)
qqnorm(res)
qqline(res)
```
7. If interested, plot a **half normal probability plot** of residuals by plotting ordered absolute residuals vs. expected normal values [Atkinson (1981)](http://biomet.oxfordjournals.org/content/68/1/13.short). A special feature would be to simulate a reference ‘line’ and envelope with simulated / bootstrapped confidence intervals (not shown though):
```r
library(faraway)
halfnorm(residuals(mod1))
```
8. **Diagnostic plots** for log linear models for count data (see chapters 7.2 and 7.7 in Friendly's book). Plot predicted vs. observed values perhaps with some interval estimate (I did just for the age groups--here we see again that we are pretty far off with our estimates due to the overdispersion apart, perhaps, in group F3. The pink points are the point prediction $\pm$ one standard error):
```r
plot(Days~Age, data=quine)
prs <- predict(mod1, type="response", se.fit=TRUE)
pris <- data.frame("pest"=prs[[1]], "lwr"=prs[[1]]-prs[[2]], "upr"=prs[[1]]+prs[[2]])
points(pris$pest ~ quine$Age, col="red")
points(pris$lwr ~ quine$Age, col="pink", pch=19)
points(pris$upr ~ quine$Age, col="pink", pch=19)
```
This should give you much of the useful information about your analysis and most steps work for all standard count data distributions (e.g., Poisson, Negative Binomial, COM Poisson, Power Laws). |
Since there exists a bijection of sets from $\{0,1\}^\*$ to $\mathbb{N\_0}$, we might view one-way-functions as functions $f :\mathbb{N\_0} \rightarrow \mathbb{N\_0}$. My question is, suppose $f,g$ are one-way-functions, is then $(f+g)(n):=f(n)+g(n)$ a one-way-function or can one construct a counterexample? (The length of $n$ is $\text{ floor}(\frac{\log(n)}{\log(2)})=$ the number of bits to represent $n$)
Comment on answer of @Bulat:
Suppose $f$ is an owf. If (?) there exists a $k \in \mathbb{N}$ such that for all $x \in \mathbb{N\_0}$ we have $f(x) \le x^k$. Then as @Bulat mentioned, construct $g(x) = x^k-f(x) \ge 0$. Then $g(x)$ is an owf as $f$ is, but $h(x) = g(x)+f(x) = x^k$ is not an owf. So the question is, if there exists such an $k$.
The argument would also work considering $k^x$ instead of $x^k$. But the same question remains? Why would such an $k$ exist?
Thanks for your help! | The answer is **no**.
Given $f$ one-way, consider $g(x) = -f(x)$. $g$ is then also one-way, because inverting $g$ would imply inverting $f$. In particular, supposing $g$ is not one-way, one can invert $f(x)$ simply by negating it and applying the inverter for $g$ (and the success probability is even equal).
In this setting, $h(x) = f(x) + g(x) = f(x) - f(x) = 0$ is not one-way; it is as easy to invert as a function can be.
So I was searching on how to handle missing data and came across [this post](https://machinelearningmastery.com/handle-missing-data-python/) from Machine Learning Mastery.
This article states that some algorithms can be made robust to missing data, such as Naive Bayes and KNN.
>
> Not all algorithms fail when there is missing data.
>
> There are algorithms that can be made robust to missing data, such as k-Nearest Neighbors that can ignore a column from a distance measure when a value is missing. Naive Bayes can also support missing values when making a prediction.
>
>
>
But then it says that sklearn implementations are not robust to missing data.
>
> Sadly, the scikit-learn implementations of Naive Bayes, decision trees, and k-Nearest Neighbors are not robust to missing values.
>
>
>
Are there ML libraries (preferably in Python, but they could also be in other languages) in which these algorithms are robust to missing data? | I suggest you use Facebook's [faiss](https://github.com/facebookresearch/faiss). It is a library for similarity search, which can be used to compute kNN on large vector collections.
From [facebook's own numbers](https://engineering.fb.com/2017/03/29/data-infrastructure/faiss-a-library-for-efficient-similarity-search/):
>
> * With approximate indexing, a brute-force k-nearest-neighbor graph (k = 10) on 128D CNN descriptors of 95 million images of the YFCC100M data set with 10-intersection of 0.8 can be constructed in 35 minutes on four Maxwell Titan X GPUs, including index construction time.
> * Billion-vector k-nearest-neighbor graphs are now easily within reach. One can make a brute-force k-NN graph (k = 10) of the Deep1B data set with 10-intersection of 0.65 in under 12 hours on four Maxwell Titan X GPUs, or 0.8 in under 12 hours on eight Pascal P100-PCIe GPUs. Lower-quality graphs can be produced in under 5 hours on the Titan X configuration.
>
>
> |
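In case a usage sketch helps, this is roughly what the basic (exact, brute-force) index looks like; the dimensions and data here are arbitrary random placeholders.

```
import numpy as np
import faiss

d, nb, nq, k = 64, 10_000, 5, 10
xb = np.random.random((nb, d)).astype("float32")   # database vectors
xq = np.random.random((nq, d)).astype("float32")   # query vectors

index = faiss.IndexFlatL2(d)   # exact L2 (brute-force) index
index.add(xb)                  # add the database vectors
D, I = index.search(xq, k)     # distances and ids of the k nearest neighbours per query
print(I.shape)                 # (5, 10)
```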
I am currently reading some notes on linear and polynomial regression. The notes say the following:
>
> The **linear** model is
> $$\hat{f}\_L (X) = \beta\_0 + \beta\_1 X$$
> The **quadratic** model is
> $$\hat{f}\_Q(X) = \beta\_0 + \beta\_1 X + \beta\_2 X^2$$
>
>
>
Notice that the $\hat{f}$ don't have an error term, $\epsilon$, as would be expected for regression models. Why don't these models have an error term? | Have a look at [wikipedia](https://en.wikipedia.org/wiki/Linear_model)
The hat on top of a variable (e.g. $\hat{f}\_L(X)$) typically indicates that the variable is an **estimator** of some observable. Namely, it is a function of a sample from a random variable, which can be used to estimate something of real world relevance, such as Celsius temperature given a Fahrenheit temperature measurement. This view is derived from a more general view concerning relationships of random variables themselves. There, as you see in the wiki's first equation, the errors are present. I would say that it is more precise to call what you have above a **linear estimator** and not a **linear model**, but I am not the author of the book you are reading.
I'm trying to learn some basic Machine Learning and some basic R. I have made a very naive implementation of $L\_2$ regularization in R based on the formula:
$\hat w^{ridge} = (X^TX +\lambda I)^{-1} X^T y$
My code looks like this:
```
fitRidge <- function(X, y, lambda) {
# Add intercept column to X:
X <- cbind(1, X)
# Calculate penalty matrix:
lambda.diag <- lambda * diag(dim(X)[2])
# Apply formula for Ridge Regression:
return(solve(t(X) %*% X + lambda.diag) %*% t(X) %*% y)
}
```
Note that I'm not yet trying to find an optimal $\lambda$, I'm simply estimating $\hat w^{ridge}$ for a given $\lambda$. However, something seems off. When I enter $\lambda = 0$ I get the expected OLS result. I checked this by applying lm.ridge(lambda = 0) on the same dataset and it gives me the same coefficients. However, when I input any other penalty, like $\lambda=2$ or $\lambda=5$ my coefficients and the coefficients given by lm.ridge disagree wildly. I tried looking at the implementation of lm.ridge but I couldn't work out what it does (and therefore what it does differently).
Could anyone explain why there is a difference between my results and the results from lm.ridge? Am I doing something wrong in my code? I've tried playing around with `scale()` but couldn't find an answer there.
EDIT:
To see what happens, run the following:
```
library(car)
X.prestige <- as.matrix.data.frame(Prestige[,c(1,2,3,5)])
y.prestige <- Prestige[,4]
fitRidge(X.prestige, y.prestige, 0)
coef(lm.ridge(formula = prestige~education+income+women+census, data = Prestige, lambda = 0))
fitRidge(X.prestige, y.prestige, 2)
coef(lm.ridge(formula = prestige~education+income+women+census, data = Prestige, lambda = 2))
```
EDIT2:
Okay, so based on responses below, I've gotten a somewhat clearer understanding of the problem. I've also closely re-read the section about RR in TESL by Hastie, Tibshirani and Friedman, where I discovered that the intercept is often estimated simply as the mean of the response. It seems that many sources on RR online are overly vague. I actually suspect many writers have never implemented RR themselves and might not have realized some important things as many of them leave out 3 important facts:
1. Intercept is not penalized in the normal case, the formula above only applies to the other coefficients.
2. RR is not equivariant under scaling, i.e. different scales gives different results even for the same data.
3. Following from 1, how one actually estimates intercept.
I tried altering my function accordingly:
```
fitRidge <- function(X, Y, lambda) {
# Standardize X and Y
X <- scale(X)
Y <- scale(Y)
# Generate penalty matrix
penalties <- lambda * diag(ncol(X))
# Estimate intercept
inter <- mean(Y)
# Solve ridge system
coeff <- solve(t(X) %*% X + penalties, t(X) %*% Y)
# Create standardized weight vector
wz <- c(inter, coeff )
return(wz)
}
```
I still don't get results equivalent to lm.ridge though, but it might just be a question of translating the formula back into the original scales. However, I can't seem to work out how to do this. I thought it would just entail multiplying by the standard deviation of the response and adding the mean, as usual for standard scores, but either my function is still wrong or rescaling is more complex than I realize.
Any advice? | First, very simply, I don't think your call to `solve` looks right, this is what I would expect
```
solve(t(X) %*% X + lambda.diag, t(X) %*% y)
```
Your code seems to be explicitly calculating a matrix inverse and then multiplying. This is mathematically correct, but computationally wasteful and numerically less stable. It is always better to solve the system of equations. I've gotten in the habit of reading equations like $y = X^{-1}z$ as "solve the system of equations $Xy = z$ for $y$."
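The same point in numpy, in case it helps (random data, purely illustrative): both lines give the same coefficients, but the explicit inverse does extra work and is numerically less stable.

```
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = rng.normal(size=100)
lam = 2.0

A = X.T @ X + lam * np.eye(X.shape[1])
beta_solve = np.linalg.solve(A, X.T @ y)     # preferred: solve the linear system
beta_inv   = np.linalg.inv(A) @ (X.T @ y)    # forms an explicit inverse first
print(np.allclose(beta_solve, beta_inv))     # True
```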
On a more mathematical note, you should not have to include an intercept term when fitting a ridge regression.
It is very important, when applying penalized methods, to standardize your data (as you point out with your comment about `scale`. It's also important to realize that penalties are not generally applied to the intercept term, as this would cause the model to violate the attractive property that the average predictions equal the average response (on the training data).
Together, these facts (centered data, no intercept penalty) imply that the intercept parameter estimate in a ridge regression is known a priori, it is zero.
The coefficient vector from ridge regression is the solution to the penalized optimization problem
$$ \hat\beta = \operatorname{argmin}\_\beta \left( (y - X\beta)^t (y - X\beta) + \frac{\lambda}{2}\sum\_{j > 0} \beta\_j^2 \right) $$
Taking a partial with respect to the intercept parameter
$$ \frac{\partial L}{\partial \beta\_0} =
-2\sum\_{i=1}^{n} \left( y\_i - \sum\_{j=0}^q \beta\_j x\_{ij} \right) x\_{i0} $$
But the $x\_{i0}$ are the entries in the model matrix corresponding to the intercept, so $x\_{i0} = 1$ always. Dropping the constant factor of $2$, we get
$$-\sum\_{i=1}^n y\_i + \sum\_{j=0}^q \beta\_j \sum\_{i=1}^n x\_{ij} $$
The first term, with the sum over y, is zero because $y$ is centered (or not, a good check of understanding is to work out what happens if you don't center y). In the second term, each predictor is centered, so the sum over $i$ is zero for *every* predictor $j$ *except* the intercept. For the intercept, the second term $i$ sum comes out to $n$ (it's $1 + 1 + 1 + \cdots$). So this whole thing reduces to
$$ n \beta\_0 $$
Setting this partial equal to zero, $n\beta\_0 = 0$, we recover $\beta\_0 = 0$, as expected.
So, you do not need to bind an intercept term onto your model matrix. Your function should either expect standardized data (and if you plan on making it public, it should check this is so), or standardize the data itself. Once this is done, the intercept is known to be zero. I'll leave it as an exercise to work out what the intercept should be when you translate the coefficients back to the un-normalized scale.
>
> I still don't get results equivalent to lm.ridge though, but it might just be a question of translating the formula back into the original scales. However, I can't seem to work out how to do this. I thought it would just entail multiplying by the standard deviation of the response and adding the mean, as usual for standard scores, but either my function is still wrong or rescaling is more complex than I realize.
>
>
>
It's a bit more complicated, but not too bad if you are careful. Here's a place where I answered a very similar question:
[GLMnet - “Unstandardizing” Linear Regression Coefficients](https://stats.stackexchange.com/questions/155362/glmnet-unstandardizing-linear-regression-coefficients/155379#155379)
You may have to make a very simple change if you are not standardizing $y$. |
I'm wondering what the difference is between:
1. 'predicted by residual plot' where I plot the residuals of the regression with the predicted values of the regression ;
2. the case where I plot the residuals with the predictor variables.
Also I'm wondering how to make such a plot in R in the case of multiple regression. Do I have to make a plot for each predictor separately? | A plot of residuals versus predicted response is essentially used to spot possible heteroskedasticity (non-constant variance across the range of the predicted values), as well as influential observations (possible outliers). Usually, we expect such plot to exhibit no particular pattern (a funnel-like plot would indicate that variance increase with mean). Plotting residuals against one predictor can be used to check the linearity assumption. Again, we do not expect any systematic structure in this plot, which would otherwise suggest some transformation (of the response variable or the predictor) or the addition of higher-order (e.g., quadratic) terms in the initial model.
More information can be found in any textbook on regression or on-line, e.g. [Graphical Residual Analysis](http://itl.nist.gov/div898/handbook/pmd/section6/pmd614.htm) or [Using Plots to Check Model Assumptions](http://www.ma.utexas.edu/users/mks/statmistakes/modelcheckingplots.html).
As for the case where you have to deal with multiple predictors, you can use [partial residual plot](http://en.wikipedia.org/wiki/Partial_residual_plot), available in R in the [car](http://cran.r-project.org/web/packages/car/index.html) (`crPlot`) or [faraway](http://cran.r-project.org/web/packages/faraway/index.html) (`prplot`) package. However, if you are willing to spend some time reading on-line documentation, I highly recommend installing the [rms](http://cran.r-project.org/web/packages/rms/index.html) package and its ecosystem of goodies for regression modeling. |
Wikipedia article on machine code and more in general Wikipedia's take on machine code seems pretty amiguous sometimes.
Often it makes me wonder if there's more, or it is just bad generalization.
Look at this:
>
> Machine code or machine language is ......Numerical
> machine code (i.e., not assembly code) may be regarded as the
> lowest-level representation of a compiled or assembled computer
> program or as a primitive and hardware-dependent programming language.
>
>
>
It seems to treat numerical machine codes as a subset of machine codes.
Does this imply assembly has to be considered a machine code too?
Or...Is it just implying that exists some non-numerical machine code?
Maybe it's just bad-written, or maybe it's me.
Again, article on object code:
>
> In a general sense an object code is a sequence of statements or
> instructions in a computer language,[2] usually a machine code
> language (i.e., binary) or.....
>
>
>
"i.e, binary" Does this mean that there are non-binary machine codes?
I'm a little bit confused. | **Yes and no and, really, it doesn't matter.**
All data in computers, including programs, is stored in devices that have two possible states – e.g., memory cells that can be at a high voltage or low voltage or areas of a disc that are magnetized north–south or south–north. These two-state systems are naturally interpreted as the binary digits 0 and 1. Any sequence of 0s and 1s can naturally be interpreted as a number written in binary.
Machine code is data that is stored on computers. Therefore, it is stored in a sequence of two-state cells, which can be viewed as a sequence of binary digits, which can be viewed as a binary number (or a sequence of binary numbers if you divide it into blocks). Does that mean it *is* a binary number? Really, that's a question of philosophy. The computer doesn't say "That pattern of 1s and 0s represents the number 3476247, which means I should do X": it just says "That pattern of 1s and 0s means I should do X." To the computer, it's just an instruction. But if you and I want to talk about that instruction, it's probably easiest to first see it as a number and then figure out what instruction it stands for. |
I am doing a research on foreign direct investment in the EU countries. I came across an article in which the authors assign 4 values to a dummy variable, to be more specific, they assign the value 0 for the years before an event and values of 1, 2 or 3 depending on a country's announced EU accession potential after the event.
I am thinking of doing something similar for my research. I will create a variable taking the value zero before a country’s entrance announcement, one after the announcement and two after the official EU accession.
I have three questions concerning that:
* Would you say that the authors were right in calling their
announcement variable a dummy variable?
* Do you see any problems with my approach? The other option I came up
with is to create separate dummy variables for a country's entrance
announcement and its actual accession.
* I might also have an independent variable for trade costs. Would
including EU accession in the regression be a problem since there
might be some collinearity between the two - trade costs decreasing
after entering the EU? | From your description it sounds as though this could be treated as an ordered categorical predictor 0 = before announcement, 1 = after announcement, 2 = process completed. If you want to treat these as ordered and equally spaced so that from 0 to 1 is the same as from 1 to 2 then using a single variable as your predictor is fine. If you cannot make those assumptions then you would need to represent the variable in some other way. You can have two variables representing contrasts between the three situations. They would be dummies; what you have is not.
As for your question about the correlation between this predictor and another: that is not a problem as long as you identify it and think carefully about the implications when interpreting your model. |
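To illustrate the two coding options, here is a hedged R sketch; the data frame `d` and its columns `fdi`, `trade_costs` and `stage` (coded 0/1/2) are hypothetical, and a real analysis of country panel data would use panel methods rather than a plain `lm()`.
```
# Option 1: one ordered numeric predictor; assumes the 0 -> 1 step equals 1 -> 2
m1 <- lm(fdi ~ stage + trade_costs, data = d)

# Option 2: treat the three stages as a factor, i.e. two separate dummies,
# so the announcement and accession effects are estimated freely
d$stage_f <- factor(d$stage, levels = c(0, 1, 2),
                    labels = c("pre", "announced", "member"))
m2 <- lm(fdi ~ stage_f + trade_costs, data = d)
```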
It seems that the distinction between fibers and threads is that fibers are cooperatively scheduled, whereas threads are preemptively scheduled. The point of the scheduler seems like a way to make an otherwise serial processor resource act in a parallel way, by "time-sharing" the CPU. However, on a dual-core processor with each core running its own thread, I assume there's no need to pause the execution of one thread for the other to continue because they're not "time-sharing" a single processor.
So, if the difference between threads and fibers is the way they are interrupted by the scheduler, and interrupting is not necessary when running on physically separate cores, **why can't fibers take advantage of multiple processor cores when threads can?**
Sources of confusion:
...mainly Wikipedia
1. <http://en.wikipedia.org/wiki/Fiber_%28computer_science%29>
>
> A disadvantage is that fibers cannot utilize multiprocessor machines without also using preemptive threads
>
>
>
2. <http://en.wikipedia.org/wiki/Computer_multitasking#Multithreading>
>
> ...[fibers] tend to lose some or all of the benefits of threads on machines with multiple processors.
>
>
> | The main distinction, as you point out in your question, is whether or not the scheduler will ever preempt a thread. The way a programmer thinks about sharing data structures or about synchronizing between "threads" is very different in preemptive and cooperative systems.
In a cooperative system (which goes by many names, [*cooperative multi-tasking*](http://en.wikipedia.org/wiki/Computer_multitasking#COOP), [*nonpreemptive multi-tasking*](http://en.wikipedia.org/wiki/Nonpreemptive_multitasking), [*user-level threads*](http://en.wikipedia.org/wiki/Thread_(computing)#N:1_.28user-level_threading.29), [*green threads*](http://en.wikipedia.org/wiki/Green_threads), and [*fibers*](http://en.wikipedia.org/wiki/Fiber_(computer_science)) are five common ones currently) the programmer is guaranteed that their code will run *atomically* as long as they don't make any system calls or call `yield()`. This makes it particularly easy to deal with data structures shared between multiple fibers. Unless you need to make a system call as part of a critical section, critical sections don't need to be marked (with mutex `lock` and `unlock` calls, for example). So in code like:
```
x = x + y
y = 2 * x
```
the programmer needn't worry that some other fiber could be working with the `x` and `y` variables at the same time. `x` and `y` will be updated together atomically from the perspective of all the other fibers. Similarly, all the fibers could share some more complicated structure, like a tree and a call like `tree.insert(key, value)` would not need to be protected by any mutex or critical section.
In contrast, in a preemptive multithreading system, as with truly parallel/multicore threads, *every possible interleaving of instructions between threads is possible* unless there are explicit critical sections. An interrupt and preemption could occur between any two instructions. In the above example:
```
thread 0 thread 1
< thread 1 could read or modify x or y at this point
read x
< thread 1 could read or modify x or y at this point
read y
< thread 1 could read or modify x or y at this point
add x and y
< thread 1 could read or modify x or y at this point
write the result back into x
< thread 1 could read or modify x or y at this point
read x
< thread 1 could read or modify x or y at this point
multiply by 2
< thread 1 could read or modify x or y at this point
write the result back into y
< thread 1 could read or modify x or y at this point
```
So to be correct on a preemptive system, or on a system with truly parallel threads, you need to surround every critical section with some kind of synchronization, like a mutex `lock` at the beginning and a mutex `unlock` at the end.
Fibers are thus more similar to [*asynchronous i/o*](http://en.wikipedia.org/wiki/Asynchronous_I/O) libraries than they are to preemptive threads or truly parallel threads. The fiber scheduler is invoked and can switch fibers during long latency i/o operations. This can give the benefit of multiple simultaneous i/o operations without requiring synchronization operations around critical sections. Thus using fibers can, perhaps, have less programming complexity than preemptive or truly parallel threads, but the lack of synchronization around critical sections would lead to disastrous results if you tried to run the fibers truly simultaneously or preemptively. |
At least in Java, if I write this code:
```java
float a = 1000.0F;
float b = 0.00004F;
float c = a + b + b;
float d = b + b + a;
boolean e = c == d;
```
the value of $e$ would be $false$. I believe this is caused by the fact that floats are very limited in the way of accurately representing numbers. But I don't understand why just changing the position of $a$ could cause this inequality.
When I reduced the $b$s to a single one in both lines 3 and 4, as below, the value of $e$, however, becomes $true$:
```java
float a = 1000.0F;
float b = 0.00004F;
float c = a + b;
float d = b + a;
boolean e = c == d;
```
What exactly happened in lines 3 and 4? Why are addition operations with floats not associative?
Thanks in advance. | In typical floating point implementations, the result of a single operation is produced as if the operation was performed with infinite precision, and then rounded to the nearest floating-point number.
Compare $a+b$ and $b+a$: The result of each operation performed with infinite precision is the same, therefore these identical infinite precision results are rounded in an identical way. In other words, floating-point addition is commutative.
Take $b + b + a$: $b$ is a floating-point number. With ***binary*** floating point numbers, $2b$ is also a floating-point number (the exponent is larger by one), so $b+b$ is added without any rounding error. Then $a$ is added to the ***exact*** value $b+b$. The result is the ***exact*** value $2b + a$, rounded to the nearest floating-point number.
Take $a + b + b$: $a + b$ is added, and there will be a rounding error $r$, so we get the result $a+b+r$. Add $b$, and the result is the ***exact*** value $2b + a + r$, rounded to the nearest floating-point number.
So in one case, $2b + a$, rounded. In the other case, $2b + a + r$, rounded.
PS. Whether for two particular numbers $a$ and $b$ both calculations give the same result or not depends on the numbers, and on the rounding error in the calculation $a + b$, and is usually hard to predict. Using single or double precision makes no difference to the problem in principle, but since the rounding errors are different, there will be values of a and b where in single precision the results are equal and in double precision they are not, or vice versa. The precision will be a lot higher, but the problem that two expressions are mathematically the same but not the same in floating-point arithmetic stays the same.
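The same non-associativity is easy to reproduce in double precision; here is a small R illustration (doubles rather than the Java floats above, but the mechanism is identical).
```
x <- 0.1; y <- 0.2; z <- 0.3
(x + y) + z == x + (y + z)       # FALSE: the two rounding errors differ
print((x + y) + z, digits = 17)  # slightly above 0.6
print(x + (y + z), digits = 17)  # the closest double to 0.6
```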
PPS. In some languages, floating point arithmetic may be performed with higher precision or a higher range of numbers than given by the actual statements. In that case, it would be much much more likely (but still not guaranteed) that both sums give the same result.
PPPS. A comment asked whether we should ever test floating point numbers for equality at all. Absolutely, if you know what you are doing. For example, if you sort an array, or implement a set, you get yourself into awful trouble if you want to use some notion of "approximately equal". In a graphical user interface, you may need to recalculate object sizes if the size of an object has changed - you compare oldSize == newSize to avoid that recalculation, knowing that in practice you almost never have almost identical sizes, and your program is *correct* even if there is an unnecessary recalculation.
I am new to grammars and I want to learn context free grammars which are the base of programming languages. After solving some problems, I encountered the language
$$\{a^nb^nc^n\mid n\geq 1\}\,.$$
Can anybody tell me if this is a context free language? I can't make any context free grammar for it and I don't know any other proof. | This is my approach to prove that a given language is not a CFL.
Try hard to come up with a Context free grammar for the given language. If you can come up with such a grammar, then the language is indeed a CFL.
If you can't, then you can use the pumping lemma to show that a given language is not a CFL.
Assume L is context free. Then L satisfies the pumping lemma, so there exists a pumping length n.
Let z = a^n b^n c^n.
Because |z|>=n and z in L, by PL there exist uvwxy s.t.
```
z = uvwxy
|vwx| <= n
|vx| >= 1
forall i>=0, u v^i w x^i y in L
```
But for any u,v,w,x,y satisfying the first three constraints, we can show that there exists an i such that the fourth constraint fails.
Case: vwx consists only of "a"s. Then for i=0 the string uwy is not in L (fewer "a"s than "b"s or "c"s).
Case: vwx contains only "b"s - similar.
Case: vwx contains only "c"s - similar.
Case: vwx contains exactly two types of characters from {a,b,c}. Then for i=0 the counts of the pumped types decrease while the remaining type keeps all n occurrences, so uwy has more of the remaining type than of at least one pumped type, and uwy is not in L.
The string vwx cannot contain "a"s, "b"s, and "c"s all at once, since vwx is a substring of z of length <= n (it cannot contain both the last "a" and the first "c"). So every case yields a contradiction, and our assumption that L is a CFL must be wrong.
I am trying to use Stata for the first time to calculate Spearman's Rank Correlation for differences in weight and hemoglobin between annual visits for a small dataset. Do I enter the differences with the negative sign directly or do I square the differences first? | There are a couple options.
Certainly you could rank them; given your research question, I would combine the two "No" responses but not the others. Combining categories throws away information. You could then do ordinal logistic regression with "amount of drinking" as the DV and "entrepreneur" as an IV (with other IVs possibly added).
Another alternative is to take advantage of the precision of your categories and turn it into a "days drinking per month":
```
No, I have never drunk alcohol = 0
No, I no longer drink = 0
Yes, but only rarely = 1
Yes, I drink alcohol 2 or 3 days per month = 2.5
Yes, I drink alcohol 1 or 2 days per week = 6
Yes, I drink alcohol 3 or 4 days per week = 15
Yes, I drink alcohol 5 or 6 days per week = 25
Yes, I drink alcohol everyday = 30
```
or something similar. Then you could do a t-test, or perhaps a count regression. Is this valid? Well, I think it's at least reasonable. You could do sensitivity analysis with different conversion factors.
Lots of variables are not exactly ordinal and not exactly interval either. |
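A minimal base-R sketch of the second option; the data frame `d`, its survey item `drink`, and the binary grouping variable `entrepreneur` are hypothetical names standing in for your own data.
```
days_per_month <- c(
  "No, I have never drunk alcohol"             = 0,
  "No, I no longer drink"                      = 0,
  "Yes, but only rarely"                       = 1,
  "Yes, I drink alcohol 2 or 3 days per month" = 2.5,
  "Yes, I drink alcohol 1 or 2 days per week"  = 6,
  "Yes, I drink alcohol 3 or 4 days per week"  = 15,
  "Yes, I drink alcohol 5 or 6 days per week"  = 25,
  "Yes, I drink alcohol everyday"              = 30
)
d$days <- days_per_month[as.character(d$drink)]
t.test(days ~ entrepreneur, data = d)   # or a count model, as discussed above
```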
I have two client-server protocols which perform the same function but they have different complexities in time (in terms of number of operations) and space (in terms of number of objects of same type). The size on which are based the complexities is $n$.
* In the first one, the server has to send only one object and the client has to perform $n$ operations.
* In the second one, the server has to send $\log n$ objects and the client has to perform $\log n$ operations.
What is the best algorithm ? Does it depend on the environment of execution ? | First, let me point out that there is no obvious memory issue in your
question, only bandwidth. Nowhere is it written that objects (supposed
to be the same size and same transmission cost, which you should make
precise) have to be stored or remembered. But they clearly have to be
transmitted.
It all depends on the load and the computing/transmission power of
server and clients, on whether they are homogeneous or heterogeneous,
and on the size distribution of the communication.
You may well want to **mix the two protocols**, and choose which to use for
each transmission task, based on size of the task, on current load, and currently available power of the server and client(s). |
How would you explain why the Fast Fourier Transform is faster than the Discrete Fourier Transform, if you had to give a presentation about it for the general (non-mathematical) public? | Calculating one value of X(k) in the frequency domain costs N complex multiplications, so calculating all N of them one by one, "starting over" every time, costs N times N is N^2 complex multiplications.
However, if you first calculate the DFT of all even entries ((N/2)^2 multiplications) and all odd entries (another (N/2)^2 multiplications), for a total of N^2 / 2 multiplications, you can calculate the full DFT from those with just another N/2 multiplications and N additions. For large N that last step can be ignored, and thus you gained a factor of two. If N is a power of two N=2^q, then you can repeat this trick q times by doing it also for the smaller DFTs of the even and odd entries, winning q times a factor 2, thus it becomes N^2 / (2^q) = N^2 / N = N multiplications, but you have to do that q = log2(N) times, so in the asymptotical case you end up with something of the order N log(N).
To see why you can combine the DFTs of even and odd entries in such an easy way, you must realize that you're just summing entries multiplied with evenly spaced complex numbers on the unit circle in the complex plane (Nth roots of 1, which all lie on a circle). So, if you treat the odd entries as the only input for some DFT then you're only a single rotation away from what you need for the full DFT: the rotation to bring the entries back to where they should be. A rotation means a multiplication, hence N/2 extra multiplications before you can combine the two smaller DFTs with just additions.
Note that the same trick can be applied when N doesn't factor into just 2^q, but when it can be factored into many small prime numbers. For example, if N=p^q you can repeat q times a stage where you calculate p DFT's each of N/p points and then combine those with (p-1)N/p multiplications (rotating p-1 of the DFT's of the previous stage) and (p-1)N additions, for each of the q stages. |
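To make the divide-and-combine step concrete, here is a small radix-2 sketch in R (my own illustration; it assumes the input length is a power of two, and R's built-in `fft()` is only used to check the result).
```
my_fft <- function(x) {
  n <- length(x)
  if (n == 1) return(x)
  even <- my_fft(x[seq(1, n, by = 2)])             # DFT of even-indexed entries
  odd  <- my_fft(x[seq(2, n, by = 2)])             # DFT of odd-indexed entries
  twiddle <- exp(-2i * pi * (0:(n / 2 - 1)) / n)   # the "rotation" factors
  c(even + twiddle * odd, even - twiddle * odd)
}

x <- complex(real = rnorm(8), imaginary = rnorm(8))
max(Mod(my_fft(x) - fft(x)))                       # agrees with R's fft, ~1e-15
```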
I'll phrase my question using an intuitive and rather extreme example:
**Is the expected compression ratio (using zip compression) of a children's book higher than that of a novel written for adults?**
I read somewhere that specifically the compression ratio for zip compression can be considered an indicator for the information (as interpreted by a human being) contained in a text. Can't find this article anymore though.
I am not sure how to attack this question. Of course no compression algorithm can grasp the meaning of verbal content. So what would a zip compression ratio reflect when applied to a text? Is it just symbol patterns - like word repetitions will lead to higher ratio - so basically it would just reflect the vocabulary?
**Update:**
Another way to put my question would be whether there is a correlation which goes beyond repetition of words / restricted vocabulary.
---
Tangentially related:
[Relation of Word Order and Compression Ratio and Degree of Structure](http://www.joyofdata.de/blog/relation-of-word-order-and-compression-ratio/) | Shannon's [noiseless coding theorem](http://en.wikipedia.org/wiki/Source_coding_theorem) is the formal statement that the size of an optimally compressed data stream is equivalent to the amount of information in that data stream.
But you are also right that the definition of "amount of information" depends on your probability model for the data stream. If you have a correct table of letter probabilities before you start the compression then [Huffman coding](http://en.wikipedia.org/wiki/Huffman_coding) gets within a constant factor of optimal, [Arithmetic coding](http://en.wikipedia.org/wiki/Arithmetic_coding) gets even closer to optimal. But if there are correlations between neighboring symbols (which there are in real human produced text) you can do better by choosing codes for pairs of letters. And you can do even better if you look at triples, and so on. Additionally, you typically don't have a very good probabilistic model of your data stream when you start, so you need to adaptively construct the probability table and then assign variable length symbols depending on the probabilities as you learn more.
The kinds of compression used in [zip/gzip](http://en.wikipedia.org/wiki/DEFLATE), [compress](http://en.wikipedia.org/wiki/LZW), and [7-zip](http://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Markov_chain_algorithm) are all variants of [Lempel-Ziv coding](http://en.wikipedia.org/wiki/LZ77). Lempel-Ziv coding is adaptive, builds probability tables over varying length chunks of letters, and is optimal in the sense that given an infinite random stream that is *ergodic* (the probabilities are stable over time) it will, *in the limit*, adaptively find a table that takes into account correlations over arbitrary distances, and then use that table to produce a stream that approaches optimally coded.
Of course, neither children's books nor novels for adults are infinitely long ergodic random processes (although some of them may seem like they are) so the conditions of the theorems don't hold.
Here's an interesting interview with Jacob Ziv where he talks a little about the universality of Lempel-Ziv coding:
<http://www.ieeeghn.org/wiki/index.php/Oral-History:Jacob_Ziv#Lempel-Ziv_algorithm.2C_Viterbi_algorithm.2C_and_Ziv-Zakai_bound>. |
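As a rough empirical illustration of the "repetition vs. vocabulary" point, here is a toy example of my own in base R using its gzip bindings; it measures statistical redundancy, not meaning.
```
ratio <- function(txt) {
  raw_txt <- charToRaw(txt)
  length(memCompress(raw_txt, type = "gzip")) / length(raw_txt)
}
repetitive <- paste(rep("the cat sat on the mat. ", 200), collapse = "")
random_ish <- paste(sample(c(letters, " "), 4800, replace = TRUE), collapse = "")
ratio(repetitive)   # very small: long repeated phrases are cheap to encode
ratio(random_ish)   # much larger: little structure beyond letter frequencies
```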
I have implemented Gaussian elimination for solving systems of linear equations over the field of residues modulo a prime. If there is a pivot equal to zero I assume the system has no solution, but how do I calculate the number of solutions of such systems when all pivots are non-zero? (i.e. one or more solutions) | The integers modulo a prime form a field, so all the assumptions made when applying Gaussian elimination carry over exactly. Luckily, there are no numerical instability problems. The system can be inconsistent (no solutions), underdetermined (several solutions modulo $p$), or have a unique solution modulo $p$.
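Here is a sketch (my own illustration, not part of the original answer) of how the solution count falls out of the elimination, for small primes where double-precision products do not overflow; `M` is the augmented matrix `[A | b]` with entries in `0..p-1`.
```
mod_inv <- function(a, p) {
  # extended Euclidean algorithm; assumes p prime and a %% p != 0
  r0 <- a %% p; r1 <- p; s0 <- 1; s1 <- 0
  while (r1 != 0) {
    q <- r0 %/% r1
    t <- r0 - q * r1; r0 <- r1; r1 <- t
    t <- s0 - q * s1; s0 <- s1; s1 <- t
  }
  s0 %% p
}

count_solutions <- function(M, p) {
  n <- nrow(M); m <- ncol(M) - 1               # n equations, m unknowns
  row <- 1
  for (col in 1:m) {
    piv <- which(M[row:n, col] %% p != 0)
    if (length(piv) == 0) next                 # no pivot here -> free variable
    piv <- piv[1] + row - 1
    M[c(row, piv), ] <- M[c(piv, row), ]       # move pivot row up
    M[row, ] <- (M[row, ] * mod_inv(M[row, col], p)) %% p
    for (r in setdiff(1:n, row))
      M[r, ] <- (M[r, ] - M[r, col] * M[row, ]) %% p
    row <- row + 1
    if (row > n) break
  }
  rank <- row - 1
  bad  <- apply(M, 1, function(r) all(r[1:m] == 0) && r[m + 1] != 0)
  if (any(bad)) 0 else p^(m - rank)            # p^(number of free variables)
}

M <- rbind(c(1, 2, 3),
           c(2, 4, 6))     # x + 2y = 3 and 2x + 4y = 6, taken mod 5
count_solutions(M, 5)      # 5 solutions: rank 1, one free variable
```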
I'm fooling around with threshold time series models. While I was digging through what others have done, I ran across the CDC's site for flu data.
<http://www.cdc.gov/flu/weekly/>
About 1/3 of the way down the page is a graph titled "Pneumonia and Influenza Mortality....". It shows the actuals in red, and two black seasonal series. The top seasonal series is labeled "Epidemic Threshold" and appears to be some constant percent/amount above the "Seasonal Baseline" series.
My first question is: Is that really how they determine when to publicly say we're in an epidemic (some percent above baseline)? It looks to me like they're in the noise range, not to mention the "other factors" influence that is obviously not accounted for in that baseline series. To me, there are way too many false positives.
My second question is: Can you point me to any real world examples/publications of threshold models (hopefully in R)? | The CDC uses the epidemic threshold of
>
> 1.645 standard deviations above the baseline for that time of year.
>
>
>
The definition may have multiple sorts of detection or mortality endpoints. (The one you are pointing to is pneumonia and influenza mortality. The lower black curve is not really a series, but rather a modeled seasonal mean, and the upper black curve is 1.645 sd's above that mean).
<http://www.cdc.gov/mmwr/PDF/ss/ss5107.pdf>
<http://www.cdc.gov/flu/weekly/pdf/overview.pdf>
```
> pnorm(1.645)
[1] 0.950015
```
So it's a 95% threshold. (And it does look as though about 1 out of 20 weeks are over the threshold. You pick your thresholds, not to be perfect, but to have the sensitivity you deem necessary.) The seasonal adjustment model appears to be sinusoidal. There is an [R "flubase" package](http://search.r-project.org/cgi-bin/namazu.cgi?query=seasonal+model+epidemic&max=100&result=normal&sort=score&idxname=functions&idxname=vignettes&idxname=views) that should be consulted. |
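As a toy illustration of that kind of sinusoidal baseline plus a 1.645-sd threshold, here is a sketch with simulated weekly data (not the CDC's actual model).
```
set.seed(2)
week <- 1:260                                            # five years of weeks
mort <- 7 + 1.5 * sin(2 * pi * week / 52) + rnorm(260, sd = 0.4)
fit  <- lm(mort ~ sin(2 * pi * week / 52) + cos(2 * pi * week / 52))
threshold <- fitted(fit) + 1.645 * sd(resid(fit))        # "epidemic threshold"
mean(mort > threshold)                                   # roughly 0.05 by construction
```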
Why are programs kept in main memory for execution? As per my understanding, we need a memory management scheme to manage several processes in the main memory. These things can be done in secondary memory as well (I think). Just to improve performance, we need main memory or there is something more to it? | They are not. On modern desktop/server OSes, the program generally mmapped (or equivalent) into virtual memory and the currently running parts are paged in as needed. However, the remainder of the program can sit on secondary storage and never be accessed.
The reason for the currently executing parts of memory being paged in as opposed to running directly from secondary storage is performance. In general, any read access to secondary storage tends to be cached in memory, either directly through virtual memory or indirectly through something like a disk cache. In fact, in the main memory there's an extensive cache hierarchy (typically 3 layers on x86 Intel / AMD processors) so even the idea that the program is store in "main" memory is mostly a convenient simplification, not a complete description of reality. What actually happens is that some parts of the program are paged into main memory (by the OS) and some parts of the the program from main memory is cached into the L3 cache (by the CPU) and some parts of that are cached into the L2 cache(by the CPU) and some parts of that are cached into the L1 instruction cache (by the CPU).
On embedded systems and more exotic architectures, deviations from this pattern definitely exist. Execute-in-place was already mentioned but another example is Harvard architecture processors, where instruction and data memory are completely separate and code runs from the instruction memory which isn't addressable as data. Small embedded microcontrollers like PIC and AVR are examples of such processors. |
I wonder how I can modify the K-means algorithm so that the cluster volumes are not equal to each other. The K-means objective is to minimize within cluster sum of squares $\sum\_{i=1}^{p} {\parallel \mathit{X}\_i-\mathit{L}\_{\mathit{Z}\_i} \parallel}\_2^2$, and this objective assumes that all cluster variances are the same. If we assume that the clusters are Gaussian with mean $\mathit{L}\_{\mathit{Z}\_i}$ and variance $\sigma\_{\mathit{Z}\_i}^2$ where $\mathit{Z}\_i$ stands for the cluster assignment of data point $i$, then the objective for the cluster assignments become $\sum\_{i=1}^{p} \frac {{\parallel \mathit{X}\_i-\mathit{L}\_{\mathit{Z}\_i} \parallel}\_2^2} {\sigma\_{\mathit{Z}\_i}^2}$. So, I tried modifying K-means such that $\mathit{Z}\_i$ update is performed using this new update rule, and $\sigma\_{\mathit{Z}\_i}^2$ are also updated in each iteration. However, when I use this new modified K-means, almost all data points are assigned to the same cluster, which is weird. What might be the problem about that approach? I know EM can be used for this unequal-volume GMM purpose, but I want a simpler approach like K-means, and I am really curious about why what I tried is not feasible. Thanks! | In one medical research paper, Proitsi et al. (2009) write:
>
> "The WLSMV is a robust estimator which does not assume normally
> distributed variables and provides the best option for modelling
> categorical or ordered data (Brown, 2006)".
>
>
>
For your convenience, I'm including the cited reference in the reference list below (I use APA format):
Brown, T. (2006). *Confirmatory factor analysis for applied research.* New York: Guildford.
Proitsi, P., Hamilton, G., Tsolaki, M., Lupton, M., Daniilidou, M., Hollingworth, P., ..., Powell, J. F. (2009, in press). A multiple indicators multiple causes (MIMIC) model of behavioural and psychological symptoms in dementia (BPSD). *Neurobiology Aging*. doi:10.1016/j.neurobiolaging.2009.03.005
I hope this is helpful and answers your question. |
I've read an article saying that when using planned contrasts to find means that are different in an one way ANOVA, constrasts should be orthogonal so that they are uncorrelated and prevent the type I error from being inflated.
I don't understand why orthogonal would mean uncorrelated under any circumstances. I can't find a visual/intuitive explanation of that, so I tried to understand these articles/answers
<https://www.psych.umn.edu/faculty/waller/classes/FA2010/Readings/rodgers.pdf>
[What does orthogonal mean in the context of statistics?](https://stats.stackexchange.com/questions/12128/what-does-orthogonal-mean-in-the-context-of-statistics)
but to me, they contradict each other. The first says that if two variables are uncorrelated and/or orthogonal then they are linearly independent, but that the fact that they are linearly independant does not imply that they are uncorrelated and/or orthogonal.
Now on the second link there are answers that state things like "orthogonal means uncorrelated" and "If X and Y are independent then they are Orthogonal. But the converse is not true".
Another interesting comment in the second link state that the correlation coefficient between two variables is equal to the cosine of the angle between the two vectors corresponding to these variables, which implies that two orthogonal vectors are completely uncorrelated (which isn't what the first article claims).
So what's the true relationship between independence, orthogonal and correlation ? Maybe I missed something but I can't find out what it is. | **Independence** is a statistical concept. Two [random variables](https://en.wikipedia.org/wiki/Random_variable) $X$ and $Y$ are statistically independent if their joint distribution is the product of the marginal distributions, i.e.
$$
f(x, y) = f(x) f(y)
$$
if each variable has a density $f$, or more generally
$$
F(x, y) = F(x) F(y)
$$
where $F$ denotes each random variable's cumulative distribution function.
**Correlation** is a weaker but related statistical concept. The (Pearson) correlation of two random variables is the expectancy of the product of the standardized variables, i.e.
$$
\newcommand{\E}{\mathbf E}
\rho = \E \left [
\frac{X - \E[X]}{\sqrt{\E[(X - \E[X])^2]}}
\frac{Y - \E[Y]}{\sqrt{\E[(Y - \E[Y])^2]}}
\right ].
$$
The variables are *uncorrelated* if $\rho = 0$. It can be shown that two random variables that are independent are necessarily uncorrelated, but not vice versa.
**Orthogonality** is a concept that originated in geometry, and was [generalized](https://en.wikipedia.org/wiki/Orthogonality#Definitions) in linear algebra and related fields of mathematics. In linear algebra, orthogonality of two vectors $u$ and $v$ is defined in [inner product spaces](https://en.wikipedia.org/wiki/Inner_product_space), i.e. [vector spaces](https://en.wikipedia.org/wiki/Vector_space) with an inner product $\langle u, v \rangle$, as the condition that
$$
\langle u, v \rangle = 0.
$$
The inner product can be defined in different ways (resulting in different inner product spaces). If the vectors are given in the form of sequences of numbers, $u = (u\_1, u\_2, \ldots u\_n)$, then a typical choice is the [dot product](https://en.wikipedia.org/wiki/Dot_product), $\langle u, v \rangle = \sum\_{i = 1}^n u\_i v\_i$.
---
Orthogonality is therefore not a statistical concept per se, and the confusion you observe is likely due to different translations of the linear algebra concept to statistics:
a) Formally, a space of random variables can be considered as a vector space. It is then possible to define an inner product in that space, in different ways. [One](https://stats.stackexchange.com/a/134317/17023) common [choice](https://stats.stackexchange.com/a/29172/17023) is to define it as the covariance:
$$
\langle X, Y \rangle = \mathrm{cov} (X, Y)
= \E [ (X - \E[X]) (Y - \E[Y]) ].
$$
Since the correlation of two random variables is zero exactly if the covariance is zero, *according to this definition* uncorrelatedness is the same as orthogonality. (Another possibility is to define the inner product of random variables simply as the [expectancy of the product](https://stats.stackexchange.com/a/16315/17023).)
b) Not all the [variables we consider in statistics](https://stats.stackexchange.com/a/156554/17023) are random variables. Especially in linear regression, we have independent variables which are not considered random but predefined. Independent variables are usually given as sequences of numbers, for which orthogonality is naturally defined by the dot product (see above). We can then investigate the statistical consequences of regression models where the independent variables are or are not orthogonal. In this context, orthogonality does not have a specifically statistical definition, and even more: it does not apply to random variables.
*Addition responding to Silverfish's comment:* Orthogonality is not only relevant with respect to the original regressors but also with respect to contrasts, because (sets of) simple contrasts (specified by contrast vectors) can be seen as transformations of the design matrix, i.e. the set of independent variables, into a new set of independent variables. Orthogonality for contrasts is [defined](https://en.wikipedia.org/wiki/Contrast_(statistics)#Definitions) via the dot product. If the original regressors are mutually orthogonal and one applies orthogonal contrasts, the new regressors are mutually orthogonal, too. This ensures that the set of contrasts can be seen as describing a decomposition of variance, e.g. into main effects and interactions, the idea underlying [ANOVA](https://en.wikipedia.org/wiki/Analysis_of_variance).
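As a quick R check of what orthogonality of contrasts means in the dot-product sense, consider Helmert contrasts (just one convenient example of an orthogonal set).
```
C <- contr.helmert(4)   # three contrast vectors for a 4-level factor
C
crossprod(C)            # zero off-diagonals: the pairwise dot products vanish,
                        # which is why (with equal group sizes and variances)
                        # the contrast estimates are uncorrelated
```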
Since according to variant a), uncorrelatedness and orthogonality are just different names for the same thing, in my opinion it is best to avoid using the term in that sense. If we want to talk about uncorrelatedness of random variables, let's just say so and not complicate matters by using another word with a different background and different implications. This also frees up the term orthogonality to be used according to variant b), which is highly useful especially in discussing multiple regression. And the other way around, we should avoid applying the term correlation to independent variables, since they are not random variables.
---
[Rodgers et al.'s](https://web.archive.org/web/20100709201307/http://www.psych.umn.edu/faculty/waller/classes/FA2010/Readings/rodgers.pdf) presentation is largely in line with this view, especially as they understand orthogonality to be distinct from uncorrelatedness. However, they do apply the term correlation to non-random variables (sequences of numbers). This only makes sense statistically with respect to the [sample correlation coefficient](https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient#For_a_sample) $r$. I would still recommend to avoid this use of the term, unless the number sequence is considered as a sequence of [realizations](https://en.wikipedia.org/wiki/Realization_(probability)) of a random variable.
I've scattered links to the answers to the two related questions throughout the above text, which should help you put them into the context of this answer. |
I have just started (independent) learning about quantum computation in general from Nielsen-Chuang book.
I wanted to ask if anyone could find time to help me with what's going on with the measurement postulate of quantum mechanics. I mean, I am not trying to question the postulate; it's just that I do not get how the state of the system after measurement comes out to $M\_m|\psi\rangle/\sqrt{ \langle\psi|M\_m^\dagger M\_m|\psi\rangle }$.
Even though this is just what the postulate says, I find it hard to see why it is exactly this expression. I do not know if what I ask here makes sense, but it is proving to be something which, for some reason, blocks me from reading any further.
More generally than projective measurements, one always measures an *operator*. (A projector is a special case of this.) So what does it mean to "measure an operator"?
Well, operators often correspond to 'observable' physical quantities. The most important in quantum mechanics, for instance, is energy; but one can also (sometimes indirectly) measure other quantities, such as angular momentum, *z*-components of magnetic fields, etc. What is being measured always gives real-valued results --- in principle, some definite result (e.g. an electron is in the 'spin +1/2' state as opposed to 'spin −1/2', or in the first excited energy level as opposed to the ground-state in a hydrogen atom, etc.), albeit each *a priori possible* result is realized with some probability.
We assign each of the real-valued outcomes of a measurement to a subspace. The way we do this is to describe a Hermitian operator --- *i.e.* an operator which associates a real eigenvalue to different subspaces, with the subspaces summing up to the whole Hilbert space. A projector is such an operator, where the real values are 0 and 1; *i.e.* describing that a vector belongs to a designated subspace (yielding a value of 1), or its orthocomplement (yielding a value of 0). These Hermitian operators are **observables**, and the eigenspaces are those for which the observable has a "definite" value.
But what about those vectors which are not eigenvectors, and do not have "definite" values for these observables? Here is the non-explaining part of the description: we project them into one of the eigenspaces, to obtain an eigenvector with a well-defined value. Which projection we apply is determined at random. The probability distribution is given by the familiar Born rule:
$$ \Pr\limits\_{|\psi\rangle}\bigl( E = c \bigr) \;=\; \langle \psi | \Pi\_c | \psi \rangle \;, $$
where $\Pi\_c$ is the projector onto the *c*-eigenspace of an 'observable quantity' *E* (represented by a Hermitian operator $A = \sum\_c \; c \cdot \Pi\_c$). The post-measured state is *some* projection of the state $|\psi\rangle$ onto *some* eigenspace of the observable *A*. And so if $| \psi\_0 \rangle$ is the pre-measurement state, $| \psi\_1 \rangle$ is the post-measurement state, and $\Pi\_c$ is the 'actual result' measured (*i.e.* the eigenspace onto which the pre-measurement state was actually projected), we have the proportionality result
$$ | \psi\_1 \rangle \;\propto\; \Pi\_c | \psi\_0 \rangle $$
by the projection rule just described. This is why there is the projector in your formula.
In general, the vector $| \psi'\_1 \rangle = \Pi\_c | \psi\_0 \rangle$ is not a unit vector; because we wish to describe the post-measurement state by another unit vector, we must rescale it by
$$ \|\;|\psi'\_1\rangle\;\| = \sqrt{\langle \psi'\_1 | \psi'\_1 \rangle} = \sqrt{\langle \psi\_0 | \Pi\_c | \psi\_0 \rangle} \;,$$
which is the square-root of the probability with which the result would occur *a priori*. And so, we recover the formula in your question,
$$ | \psi\_1 \rangle \;=\; \frac{\Pi\_c | \psi\_0 \rangle}{ \sqrt{ \langle \psi\_0 | \Pi\_c | \psi\_0 \rangle }} \;.$$
(If this formula seems slightly clumsy, take heart that it looks and feels a little bit better if you represent quantum states by density operators.)
**Edited to add:** the above should not be construed as a description of POVMs. A "positive operator valued measurement" is better seen as describing the *expectation value* of various measurable observables *Ec* in a collection { *Ec* }*c* ∈ C . |
**ORIGINAL Phrasing:** Let's say I have a population distribution, A. For example, this could be the distribution of the maximum height (not the current height) that someone will reach during their life. Let's say A only contains data for people born in 1900 and 1901.
Now imagine a subpopulation of this distribution, B. This distribution may or may not have the same statistics as A - I don't know. For example, B might be all of the Males in A, which might have a higher mean. Or it might be all of the people born in 1900, which I would not expect to be very different from A in terms of statistics.
Now let's say I draw a random sample from A such that it contains only members of B. So for example either all males, or all people born in 1900. It's assumed that I don't know everyone's gender or birth year in A, so I can't actually construct B and look at it.
How can I use my sample to determine if B's statistics (e.g. mean) differ from A? It seems like standard C.I. / p-value approaches can work. But at the same time I'm not sure, because the random sample of B may by chance not be representative of B, especially if the sample is small. In other words, B itself may have its own (different) variance and mean than the sample. So for example, if I did have B, I could compute C.I. with its statistics to determine if its mean is the same as A, but the C.I. computed from the sample will be different than those computed from B itself. And then there's the fact that B itself is also a subset of A.
Am I overthinking things? Are standard C.I. / p-value approaches sufficient?
**EDITED Phrasing:** Let's say I have a population distribution, A with size NA = 1000000. For example, this could be the distribution of the maximum height (not the current height) that someone will reach during their life. Let's say A only contains data for people born in 1890 through 1910. We don't actually have the birth year for any person, but we know it is in that range.
In theory, there is therefore a subpopulation, B, with size NB, of all people in A born in 1890, but this population cannot be observed, and NB is not known.
Now someone comes along and labels NC = 1000 people as being born in 1890. We'll consider these people sample C. He says that they were randomly selected from the NB people in B to be labeled (somehow, this person knew everyone who belonged in B, even though I don't). So far these are the ONLY people whose birth years I know.
Can we determine if B's heights are different from the total population A using the sample C, and put confidence on that? As a secondary followup, what if I can't assume that C is a random sample from B? What if someone just gives me a bunch of 1890 labels but I'm not sure whether that was randomly done or not?
My concerns are that there's two types of error going on. The error in B itself when comparing its mean to that of A, which I can't really observe, and the error in the fact that C, even if randomly sampled, can be a bad sample of B.
The other issue is that C (and B also) is part of A, so in comparing C to A, there's duplicate observations that belong to both distributions.
Am I overthinking things? Are standard C.I. / p-value approaches sufficient? | **Problem:** Consider a population of $N$ people, with each person $i$ having a characteristic vector $(X\_i, S\_i)$ which consists of a **characteristic of interest** $X\_i$ and a binary **covariate** $S\_i \in \{ 0,1 \}$. Let $N\_\* \equiv \sum\_{i=1}^N S\_i$ be the size of the subpopulation of people with $S\_i=1$, and suppose that we are interested in comparing the population to the subpopulation. Without loss of generality, we can order the indices so that the first $N\_\*$ people are in the subpopulation, and the remaining $N-N\_\*$ people are not in the subpopulation.$^\dagger$
The goal of the analysis is to compare the characteristics of the population and the subpopulation. For definiteness, we will assume that we wish to compare the **means** of these groups. The means of the population, subpopulation, and non-subpopulation, are denoted respectively by:
$$\bar{X}\_N = \frac{1}{N} \sum\_{i=1}^N X\_i \quad \quad \quad \bar{X}\_{\*} =\frac{1}{N\_\*} \sum\_{i=1}^{N\_\*} X\_i \quad \quad \quad \bar{X}\_{\*\*} =\frac{1}{N-N\_\*} \sum\_{i=N\_\*+1}^{N} X\_i.$$
Now, if I understand your description of your sampling correctly, you are sampling *from the whole population* via simple-random-sampling without replacement (SRSWOR), so you will get people from the subpopulation and the non-subpopulation at random. (This is a little ambiguous - at one point you say you sample from the population so that you only get values from the subpopulation, but at another point you say you can't identify the covariate until after sampling, which would mean that it is impossible to sample only from the subpopulation.) You want to use this data to test whether the mean of the population differs from the mean of the subpopulation. That is, you want to form a confidence interval or hypothesis test on the (unknown) value $\bar{X}\_N - \bar{X}\_{\*} $.
---
**Solution:** If that is a correct interpretation of your sampling scheme then this problem can be handled using ordinary methods for comparison of two groups (the subpopulation and the non-subpopulation). To see this, note that with a bit of algebra it can be shown that:
$$\bar{X}\_N - \bar{X}\_{\*} = \frac{N-N\_\*}{N} \cdot (\bar{X}\_{\*\*} - \bar{X}\_{\*}).$$
Thus, comparing the mean of the population and subpopulation is equivalent to comparing of the mean of the subpopulation with the mean of the *non-subpopulation*. (For a test of equality these are exactly the same, and for a confidence interval for the difference, you apply the scalar multiplier in the formula shown.) Comparison of these two groups is the famous [Behrens-Fisher problem](https://en.wikipedia.org/wiki/Behrens%E2%80%93Fisher_problem) which is a well-known problem in statistics. The comparison is usually undertaken using a confidence interval or hypothesis test for the mean difference via [Welch's approximation](https://en.wikipedia.org/wiki/Welch%27s_t-test).
---
**UPDATE:** From the updated description of your sampling mechanism, it now appears that you are saying that you only have sample data from the subpopulation of interest, but you do not have any sample data from the remaining part of the population. If that is the case then you can estimate the mean of the subpopulation, but there is no basis to estimate the difference in means between the subpopulation and the rest of the population. If you would like to accomplish the latter then you will need some data (or some known summary statistics) from the remaining part of the population.
---
$^\dagger$ That is, we take $S\_1 = ... = S\_{N\_\*} = 1$ and $S\_{N\_\*+1} = ... = S\_{N} = 0$. |
I collected data of 60 countries to identify whether there is a relationship between average income per person and average life expectancy. However, I am having some trouble interpreting my lin-reg and r value.
```
y = 0.000437x + 67.68
r = 0.814
```
Does this inconsistency have to do with the nature of my data? Considering that the data for income is in the 10-30 thousands, while the data for life expectancy is all below 100, would that influence the reliability of the Pearson's r correlation test? If so, how?
Also, I have checked and double checked both calculations a number of times, on the calculator, manually, on excel and through an online calculator and I still get the same answers.
If someone could help me interpret this, that would be very helpful. | First, please make sure you're not using "insignificant" to describe 0.000437 being very small. The word "significant" is associated with hypothesis test and p-value and if you mix this up with the magnitude of the regression coefficient, people will be very confused.
Here is a formula that can explain your problem:
Given a regression model $y = \beta\_0 + \beta\_1 x$,
$\beta\_1 = \rho \frac{s\_y}{s\_x}$
Where $\rho$ is the Pearson's correlation coefficient, $s\_y$ is the standard deviation of $y$, and $s\_x$ is the standard deviation of $x$.
You can see that, more than often, your $\beta\_1$ will not be exactly equal to $\rho$. The only time it will happen is when both your dependent and independent variables have the same standard deviation. So, it's very understandable that you can have a very low $\beta\_1$ while a very high $\rho$.
Second, the magnitude of $\beta\_1$ can be easily manipulated. For example, if you recode 1000 to 1, and change the unit from "dollar" to "thousand dollar," the $\beta\_1$ will change. Instead of comparing the $\beta\_1$ to your correlation coefficient, think if your $\beta\_1$ makes sense. For example, why would you expect the $\beta\_1$ to be close to 0.8? If some country increases its national income by 1000 dollars, would it make sense that life expectancy increases by 800 years? It can't be. In fact, your 0.000437 is much more realistic. |
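To see both numbers coexist, here is a quick R simulation in the spirit of your data (60 countries, income in dollars; the values themselves are made up).
```
set.seed(1)
income <- runif(60, 10000, 30000)                   # income in dollars
life   <- 67 + 0.0004 * income + rnorm(60, sd = 2)  # life expectancy in years
fit <- lm(life ~ income)

coef(fit)[2]                               # tiny slope: years per *one dollar*
cor(income, life)                          # strong correlation (roughly 0.8)
cor(income, life) * sd(life) / sd(income)  # reproduces the slope exactly
coef(lm(life ~ I(income / 1000)))[2]       # rescaled: years per 1000 dollars
```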
As I understand, the term "NP-hardness" is applicable when we also talk about optimization or search problems (i.e. return the satisfying assignment for 3-SAT). How do we formally define NP-hardness for such problems? The standard definition:
>
> The problem is NP-hard when any problem from NP is polynomial-time reducible to this problem
>
>
>
doesn't make much sense, because of how the reduction is defined:
>
> Language $A$ is polynomial-time reducible to $B$ if there exists a poly-time computable function $f$, such that $x \in A$ iff $f(x) \in B$.
>
>
>
The problem is that $B$ (e.g. our search problem) doesn't define a language (there may be other equivalent definitions, such as $A(x) \in \{true, false\}$, but they'll lead to the same problems).
My friend suggested that we can define a second poly-time computable function $g^{-1}$, which converts an "answer" for $B$ to answer for $A$: $x \in A$ iff $g^{-1}(B(f(x)))$ is $true$, where $B(y)$ is any correct answer for $y$. This makes sense, but I've never seen that.
So, what's the standard definition? For an answer, I would also ask for an appropriate citation (not to Wikipedia or random slides). | There is a slight abuse of notation going on. We say that a function $f$ is NP-hard if $f\in FP$ implies $P=NP$. For example, if $L$ is NP complete and $M\_L(x,y)$ is a verifier for $L$, then any function $f$ which maps $x$ to some $y$ such that $M\_L(x,y)$ whenever such $y$ exists is of course NP-hard in this sense. We don't usually talk about actual reductions in this context, however the natural way to go about saying $L$ reduces to computing $f$ is to say that there exists a polynomial time oracle machine $M^f$ with access to $f$ that decides $L$.
See also the [zoo](https://complexityzoo.uwaterloo.ca/Complexity_Zoo:F#fnp) on the class FNP. The fact that "function NP" problems are defined relative to a specific verifier introduces some difficulty when talking about search to decision reduction. |
Jon and Frank ordered 2 footlong sandwiches, but instead got 4 6-inch sandwiches. Jon had ordered spicy italian, and Frank chicken teriyaki. The sandwiches aren't labeled and are wrapped, so they can't tell which sandwiches are which.
What would be the odds of Jon picking the 2 sandwiches that are his own (the 2 spicy italian sandwiches)?
I know that there is a 50/50 chance of choosing a correct sandwich, since there are 2 teriyaki and 2 spicy, but once 1 is chosen, there is only a 1/3 chance to get the second one right... but how do the odds of getting the first 50/50 choice AND the second 1/3 choice correct combine?
Thanks! | You need to multiply the probabilities with "AND" (and you would add them instead of multiplying them if it was "or"). Here, as you point out, at the first pick he has 1 chance out of 2 of picking one of his sandwiches; at the second pick, only 1 out of 3 chances. So the result is **1/2\*1/3 = 1/6**, or 0.16666667. |
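If you want to double-check the 1/6 by simulation, here is a one-liner in R ("S" for spicy italian, "T" for teriyaki).
```
mean(replicate(1e5, all(sample(c("S", "S", "T", "T"), 2) == "S")))  # ~0.167
```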
I am wondering what the differences are between mixed and unmixed GLMs. For instance, in SPSS the drop down menu allows users to fit either:
* `analyze-> generalized linear models-> generalized linear models` &
* `analyze-> mixed models-> generalized linear`
Do they deal with missing values differently?
My dependent variable is binary and I have several categorical and continuous independent variables. | The advent of [generalized linear models](http://en.wikipedia.org/wiki/Generalized_linear_model) has allowed us to build regression-type models of data when the distribution of the response variable is non-normal--for example, when your DV is binary. (If you would like to know a little more about GLiMs, I wrote a fairly extensive answer [here](https://stats.stackexchange.com/questions/20523/difference-between-logit-and-probit-models/30909#30909), which may be useful although the context differs.) However, a GLiM, e.g. a logistic regression model, assumes that your data are *independent*. For instance, imagine a study that looks at whether a child has developed asthma. Each child contributes *one* data point to the study--they either have asthma or they don't. Sometimes data are not independent, though. Consider another study that looks at whether a child has a cold at various points during the school year. In this case, each child contributes *many* data points. At one time a child might have a cold, later they might not, and still later they might have another cold. These data are not independent because they came from the same child. In order to appropriately analyze these data, we need to somehow take this non-independence into account. There are two ways: One way is to use the [generalized estimating equations](http://en.wikipedia.org/wiki/Generalized_estimating_equation) (which you don't mention, so we'll skip). The other way is to use a [generalized linear mixed model](http://en.wikipedia.org/wiki/Generalized_linear_mixed_model). GLiMMs can account for the non-independence by adding random effects (as @MichaelChernick notes). Thus, the answer is that your second option is for non-normal repeated measures (or otherwise non-independent) data. (I should mention, in keeping with @Macro's comment, that general-*ized* linear mixed models include linear models as a special case and thus can be used with normally distributed data. However, in typical usage the term connotes non-normal data.)
**Update:** *(The OP has asked about GEE as well, so I will write a little about how all three relate to each other.)*
Here's a basic overview:
* a typical GLiM (I'll use logistic regression as the prototypical case) lets you model an *independent* binary response as a function of covariates
* a GLMM lets you model a *non-independent* (or clustered) binary response *conditional on the attributes of each individual cluster* as a function of covariates
* the GEE lets you model the *population mean response* of *non-independent* binary data as a function of covariates
Since you have multiple trials per participant, your data are not independent; as you correctly note, "[t]rials within one participant are likely to be more similar than as compared to the whole group". Therefore, you should use either a GLMM or the GEE.
The issue, then, is how to choose whether GLMM or GEE would be more appropriate for your situation. The answer to this question depends on the subject of your research--specifically, the target of the inferences you hope to make. As I stated above, with a GLMM, the betas are telling you about the effect of a one unit change in your covariates on a particular participant, given their individual characteristics. On the other hand with the GEE, the betas are telling you about the effect of a one unit change in your covariates on the average of the responses of the entire population in question. This is a difficult distinction to grasp, especially because there is no such distinction with linear models (in which case the two are the same thing).
One way to try to wrap your head around this is to imagine averaging over your population on both sides of the equals sign in your model. For example, this might be a model:
$$
\text{logit}(p\_i)=\beta\_{0}+\beta\_{1}X\_1+b\_i
$$
where:
$$
\text{logit}(p)=\ln\left(\frac{p}{1-p}\right),~~~~~\&~~~~~~b\sim\mathcal N(0,\sigma^2\_b)
$$
There is a parameter that governs the response distribution ($p$, the probability, with binary data) on the left side for each participant. On the right hand side, there are coefficients for the effect of the covariate[s] and the baseline level when the covariate[s] equals 0. The first thing to notice is that the actual intercept for any specific individual is *not* $\beta\_0$, but rather $(\beta\_0+b\_i)$. But so what? If we are assuming that the $b\_i$'s (the random effect) are normally distributed with a mean of 0 (as we've done), certainly we can average over these without difficulty (it would just be $\beta\_0$). Moreover, in this case we don't have a corresponding random effect for the slopes and thus their average is just $\beta\_1$. So the average of the intercepts plus the average of the slopes must be equal to the logit transformation of the average of the $p\_i$'s on the left, mustn't it? Unfortunately, **no**. The problem is that in between those two is the $\text{logit}$, which is a *non-linear* transformation. (If the transformation were linear, they would be equivalent, which is why this problem doesn't occur for linear models.) The following plot makes this clear:
![enter image description here](https://i.stack.imgur.com/elkIH.png)
Imagine that this plot represents the underlying data generating process for the probability that a small class of students will be able to pass a test on some subject with a given number of hours of instruction on that topic. Each of the grey curves represents the probability of passing the test with varying amounts of instruction for one of the students. The bold curve is the average over the whole class. In this case, the effect of an additional hour of teaching *conditional on the student's attributes* is $\beta\_1$--the same for each student (that is, there is not a random slope). Note, though, that the students baseline ability differs amongst them--probably due to differences in things like IQ (that is, there is a random intercept). The average probability for the class as a whole, however, follows a different profile than the students. The strikingly counter-intuitive result is this: **an additional hour of instruction can have a sizable effect on the probability of *each* student passing the test, but have relatively little effect on the probable *total* proportion of students who pass**. This is because some students might already have had a large chance of passing while others might still have little chance.
The question of whether you should use a GLMM or the GEE is the question of which of these functions you want to estimate. If you wanted to know about the probability of a given student passing (if, say, you *were* the student, or the student's parent), you want to use a GLMM. On the other hand, if you want to know about the effect on the population (if, for example, you were the *teacher*, or the principal), you would want to use the GEE.
For another, more mathematically detailed, discussion of this material, see [this answer](https://stats.stackexchange.com/questions/16390/when-to-use-generalized-estimating-equations-vs-mixed-effects-models/16415#16415) by @Macro. |
I'm new to statistics, like WAY new. So green, in fact, I'm scarcely pushing (0,1,0) in the RGB color model.
I'm presently the recordkeeper for our office's 17-week "Coin flip pool". As of right now, 7 weeks have passed and we are in our 8th week, and, therefore, there are 9 weeks remaining. Each contestant has amassed a certain number of points gained by essentially flipping 3 coins a week and for each "head" result, they get 1 point - if they get 3 "heads" in a week, that's 3 points. Current accumulated scores across all 23 contestants range from 7 to 15, so each has their own weekly average of success.
**One contestant recently asked what his odds of winning were.** This obviously necessitates knowing his current score, his current performance, the current score of the other contestants, the average weekly performance of other contestants, and, possibly, the average potential score of 3-coin-flips-resulting-in-heads-a-week.
I've searched and searched and found only betting odds calculators and basic explanations of what the probability of the result of any single series might be, but I don't know enough about this subject nor what keywords to use to possibly narrow down my results to find the right answer.
I can provide a whole spreadsheet's worth of data (if I don't readily know how to share it within Stack Exchange).
As I know next to nothing about this subject, would someone kindly nudge me in the right direction? Especially if this ends up being far too complicated a topic for a Stack Exchange question.
Thank you for your time!
**Edit (20181027T08:45-05:00)**:
Additional clarification that might make this simpler (or not, statistics is not my forte after all): Knowing the absolute chance of winning at the end of Week 17 is not necessary, only a projection based on current performance (maybe that's the same thing?).
Some sample data (actual data from my spreadsheet)
```
Person Score Performance Week 1 Week 2 Week 3 Week 4 Week 5 Week 6 Week 7 Week 8
Alpha 2 0.250 2 0 0 0 0 0 0 0
Bravo 10 1.250 2 0 0 2 1 3 2 0
Charlie 12 1.500 3 1 0 2 2 1 3 0
Delta 8 0.875 0 1 1 1 2 1 0 1
Echo 11 1.375 2 0 2 1 1 2 3 0
Foxtrot 13 1.625 1 2 2 2 2 1 3 0
Golf 9 1.125 2 1 1 1 1 1 2 0
Hotel 12 1.500 2 1 1 0 2 3 3 0
India 8 1.000 1 1 0 1 2 1 2 0
Juliett 9 1.125 2 1 1 0 1 2 2 0
Kilo 9 1.125 2 1 0 2 2 0 2 0
Lima 11 1.375 2 1 2 1 1 2 2 0
Mike 15 1.875 1 1 2 3 2 3 3 0
November 9 1.125 2 0 2 1 1 1 2 0
Oscar 12 1.500 1 2 0 2 2 2 3 0
Papa 10 1.250 1 2 1 1 0 2 3 0
Quebec 11 1.375 2 1 2 1 1 1 3 0
Romeo 7 0.875 1 1 1 0 1 1 2 0
Sierra 11 1.375 2 1 2 3 2 0 1 0
Tango 8 1.000 2 2 0 1 1 1 1 0
Uniform 8 1.000 1 1 0 2 0 0 3 1
Victor 11 1.375 2 2 1 1 1 2 1 1
Whiskey 10 1.250 2 0 3 1 1 2 1 0
X-ray 9 1.000 0 1 2 1 1 1 2 0
```
Score: total "heads" so far
Performance": average heads (score / 8 at present)
Week #: total "heads" results that week
**Edit (20181027T09:52-05:00)**:
For commentors (commentators?) asking why Week 8 has so few successes: Week 8 is "in progress" and ends Tuesday morning. One coin-flip is performed Thursday, Sunday, and Monday so probability will change as each day's flip occurs. American sports fanatics might start to see where this is headed.
**Edit (20181027T14:37-05:00)**:
As Martijn Weterings and I have discussed in the talk section, this is indeed not exactly a coin-flipping contest. What I have simplified into coin flips is the NFL regular season: 17 games, each with two outcomes (and their inverse): Team A wins or loses (for which Team B loses or wins). We are using a variant that adjusts the underdog team's score by a positive amount (a handicap, if you will). Based on last year's pool statistics, with the inclusion of a handicap the odds of either team winning after score adjustment are 1:1 (50%, yes?) - more accurately 45.824%, obtained by averaging the averages on last season's sheet.
This is why I described this problem as a coin-flipping contest. While true betting odds compute all manner of variables, that work has been done for us, so for simplification I just went with "the handicap makes it so each team has an equal shot at winning the game". This handicap variant helps *immensely* because a good majority of our players don't follow the sport at all and would get squarely defeated week after week by those that do. The handicap is an equalizer of sorts.
More accurately, each contestant selects three "coins" from a pool of 16, and for each of those "coins" that lands "heads" they get a point - from 0 to 3 points per week. We are presently in Week 8 and as of this writing, only 2 coins have been flipped (the Thursday night game and the Saturday morning game). This is why the probability can change with time, as not all 16 results occur simultaneously. However, for the sake of simplicity (and again, I know next to nothing about statistics to say "simplicity" repeatedly) the probability for every participant will update as more of these contests are completed.
I apologize for not realizing the true reality was less simple than "contestant flips 3 coins" and more "there is a pool of coins from which each contestant 'bets' will be heads". | **A relatively simple formula can be obtained, requiring only readily-computed sums and products.** The computational effort is proportional to the number of players, times the number of distinct scores they exhibit so far, times the number of remaining rounds, times the number of flips per round.
---
**Let's establish notation:**
* Let $x=(x\_1, x\_2, \ldots, x\_{24})$ be the current scores.
* Let $n=17 - 7 = 10$ be the number of rounds to go.
* Let the number of flips remaining during those rounds be $m = 3\*n = 30.$
* Let the coin have probability $p=1/2$ of coming heads.
There are $m+1$ possible scores for player $j$ at the end, given by $x\_j+Z\_j$ where $Z\_j\in\{0,1,\ldots, m\}.$ The chances for $Z\_j$ follow a Binomial$(m,p)$ distribution.
Suppose player $j$ ends up with score $x\_j+z$ with probability $q$ (which we can readily compute). In this case the chance this player wins outright is the chance that every other player's final score is less than $x\_j+z.$ That, too, is readily computed from the Binomial distribution, because the scores of each player are independent, causing the individual chances to multiply.
To be clear,
* Let $F$ be the cumulative distribution function for the Binomial$(m,p)$ distribution.
* Let $f$ be the probability function, $$f(z) = \binom{m}{z}p^z (1-p)^{m-z}.$$
Thus, for each player $j$ and any possible number $z,$ $\Pr(Z\_j \le z) = F(z).$ In particular note that $$\Pr(Z\_j \lt z) = F(z-1).$$
Finally, let $\mathcal{W}\_j$ be the event "Player $j$ wins outright" and $\mathcal{T}\_j$ be the event "Player $j$ ties for the win." From the foregoing and the axioms of probability it is immediate that
>
> $$\Pr(\mathcal{W}\_j \mid Z\_j=z) = \prod\_{i\ne j} F(x\_j + z - x\_i - 1)$$
>
>
>
and
>
> $$\Pr(\mathcal{T}\_j \mid Z\_j=z) = \prod\_{i\ne j} F(x\_j + z - x\_i) - \Pr(\mathcal{W}\_j \mid Z\_j=z).$$
>
>
>
**We obtain the chances of wins and ties by summing over all the possible outcomes** $Z\_j,$
>
> $$\Pr(\mathcal{W}\_j) = \sum\_{z=0}^m f(z) \Pr(\mathcal{W}\_j\mid Z\_j=z)$$
>
>
>
(and likewise for the ties).
For the $24$ players listed in the question, this calculation produces the following chances of wins and ties (with the nine unique scores given at the top):
```
2 7 8 9 10 11 12 13 15
Win 0 4e-04 0.0014 0.0041 0.0106 0.0248 0.0529 0.1038 0.3251
Tie 0 1e-03 0.0026 0.0064 0.0136 0.0259 0.0446 0.0692 0.1215
```
[![Figure: bar chart displaying the chances.](https://i.stack.imgur.com/PHtfP.png)](https://i.stack.imgur.com/PHtfP.png)
(Despite the presentation of results by distinct scores, do not forget that the answers depend on how many players currently have each score.)
---
**One can also simulate the game to estimate the chances.** This is perfectly straightforward; the details appear in the code at the end.
In a simulation of 10,000 independent continuations, the outcomes were these:
```
2 7 8 9 10 11 12 13 15
Win 0 5e-04 0.0015 0.0041 0.0095 0.0261 0.0544 0.1018 0.3282
Tie 0 1e-03 0.0026 0.0063 0.0120 0.0250 0.0391 0.0660 0.1150
```
The agreement is good, suggesting the original calculations are correct.
It may be worth remarking that (a) the sum of winning chances cannot exceed $1$ and indeed will never equal $1$ whenever there is any chance of a tie; and (b) the sum of all winning chances plus tying chances will never be less than $1$ and will always *exceed* $1$ whenever there is a chance of any three-way (or more-way) tie.
A better solution would be to weight ties inversely by how many people are in each tie, assuming if the game is tied at the end, winning will be equally distributed among all those tied with the best score. This solution can be obtained using the same techniques, but is combinatorially more involved (it requires applying an inclusion-exclusion approach).
---
**Because such calculations likely need to be repeated after each round, here to help with that is the `R` code.** It is not efficiently written, because it makes repeated calls to compute $F$ (with `pbinom`) and $f$ (with `dbinom`). These calls can all be made once and stored in arrays, thereby speeding up the calculation. This will not change the asymptotic behavior of the algorithm and would be useful only for large numbers of players or games with many coin flips to go.
```
#
# Inputs.
#
x <- c(2,10,12,8,11,13,9,12,8,9,9,11,15,9,12,10,11,7,11,8,8,11,10,9) # Current scores
n <- 17 - 7 # Number of rounds left
n.flip <- 3 # Flips per round
p <- 1/2 # Chance of success per flip
#
# Derived quantities.
#
n.players <- length(x) # Number of players
m <- n.flip * n # Number of flips to go
z <- 0:m # Possible outcomes for any player
prob <- dbinom(z, m, p) # Their chances
#
# Compute individual chances of wins and ties.
#
scores <- sort(unique(x))
chances <- sapply(scores, function(score) {
j <- min(which(x == score))
y1 <- sapply(0:m, function(k) {
exp(sum(pbinom(x[j] + k - x[(1:n.players)[-j]], m, p, log.p=TRUE)))
})
y <- sapply(0:m, function(k) {
exp(sum(pbinom(x[j] + k-1 - x[(1:n.players)[-j]], m, p, log.p=TRUE)))
})
c(Win=sum(prob * y), Tie=sum(prob * (y1-y)))
})
#
# Check with a simulation. It will do a few thousand iterations per second.
#
set.seed(17)
sim <- replicate(1e4, {
Z <- rbinom(n.players, m, p) # The future results
final <- x + Z # The final scores
scores <- table(final) # The unique final scores
k <- length(scores)
if (scores[k]==1) {
Win <- final == max(final) # Tally who wins
Tie <- rep(0, n.players)
} else {
Tie <- final == max(final) # Tally who ties
Win <- rep(0, n.players)
}
rbind(Win, Tie)
})
sim <- apply(sim, 1:2, mean) # Average over the iterations
#
# Display the results.
#
colnames(chances) <- paste(scores)
scores <- sort(unique(x))
sim <- sapply(scores, function(score) sim[, min(which(x==score))])
colnames(sim) <- paste(sort(unique(x)))
print(round(sim, 4))
print(round(chances, 4))
``` |
I've got a question concerning a negative binomial regression: Suppose that you have the following commands:
```
require(MASS)
attach(cars)
mod.NB<-glm.nb(dist~speed)
summary(mod.NB)
detach(cars)
```
(Note that cars is a dataset which is available in R, and I don't really care if this model makes sense.)
What I'd like to know is: How can I interpret the variable `theta` (as returned at the bottom of a call to `summary`). Is this the shape parameter of the negbin distribution and is it possible to interpret it as a measure of skewness? | Yes, `theta` is the shape parameter of the negative binomial distribution, and no, you cannot really interpret it as a measure of skewness. More precisely:
* skewness will depend on the value of `theta`, but also on the mean
* there is no value of `theta` that will guarantee you lack of skew
If I did not mess it up, in the `mu`/`theta` parametrization used in negative binomial regression, the skewness is
$$
{\rm Skew}(NB) = \frac{\theta+2\mu}{\sqrt{\theta\mu(\theta+\mu)}}
= \frac{1 + 2\frac{\mu}{\theta}}{\sqrt{\mu(1+\frac{\mu}{\theta})}}
$$
In this context, $\theta$ is usually interpreted as a measure of overdispersion with respect to the Poisson distribution. The variance of the negative binomial is $\mu + \mu^2/\theta$, so $\theta$ really controls the excess variability compared to Poisson (which would be $\mu$), and not the skew. |
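A quick numerical sketch of this point (the particular values of `theta` and `mu` below are arbitrary): simulate negative binomial draws in the `mu`/`theta` parametrization with `theta` held fixed and compare the sample skewness.

```
set.seed(1)
theta <- 2
skew <- function(x) mean((x - mean(x))^3) / sd(x)^3  # sample skewness

x_small_mu <- rnbinom(1e5, size = theta, mu = 0.1)
x_large_mu <- rnbinom(1e5, size = theta, mu = 50)

skew(x_small_mu)  # strongly right-skewed when mu is small
skew(x_large_mu)  # much closer to 2/sqrt(theta) when mu is large
```

Both samples use the same `theta`, yet their skewness differs markedly, which is why `theta` is best read as an overdispersion (shape) parameter rather than a skewness parameter.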
Say there's a public encryption scheme whose public key is $p\_k$ and secret key is $s\_k$. Prover $P$ wants to convince verifier $V$ that he knows $s\_k$. The protocol is:
1. $V$ uniformly generates $m$ and sends $c = Enc\_{p\_k}(m)$ to $P$
2. $P$, receiving $c$, sends $m' = Dec\_{s\_k}(c)$ to $V$
3. $V$ checks whether $m = m'$. If so, accept. Else, reject.
Completeness and soundness are obvious. Intuitively, decryption shouldn't leak information about the secret key $s\_k$ if this scheme is CCA-secure. But I just haven't come up with a proper simulator to argue this. If $V$ cheats, it's hard to get the correct plaintext without knowledge of $s\_k$. If the simulator just guesses, the probability of success is so low that exponentially many rounds would be required. So the question is:
Is this protocol really zero knowledge? If so, how to construct the simulator? | First, I think the CCA-security is not enough, and the encryption scheme must be at least IND-CCA2-secure.
Second, I believe the protocol is not zero-knowledge, at least in the auxiliary-input model. The reasoning is as follows: Let z be the auxiliary input to the malicious verifier V\*. It's safe to assume that $z = Enc\_{p\_k}(m)$, where m is unknown to V\*. Then, V\* can deviate from the original protocol: Instead of choosing a random m and computing c (as prescribed in the protocol), she lets c = z, and sends c to the prover. This way, V\* exploits P as a "decryption oracle," and obtains knowledge beyond what she could possibly compute.
One piece of advice: Read the paper [Towards Practical Public Key Systems Secure Against Chosen Ciphertext Attacks](http://www.springerlink.com/content/pecu04f221rn4p0r/), especially Section 4. It suggests a protocol similar to yours; one which has never been realized. It also proposes the now-famous "knowledge-of-exponent" assumption (KEA), one which was later used to construct a non-black-box ZK protocol. (The reason why you can't find a simulator, beyond the line of reasoning above, is that you try to imagine a black-box simulator. That's impossible by the result of [Goldreich and Krawczyk](http://dx.doi.org/10.1137/S0097539791220688)). For more info on the use of the KEA in constructing non-black-box ZK, see [[Hada and Tanaka](http://eprint.iacr.org/1999/009.ps.gz)] and the more recent [[Bellare and Palacio](http://cseweb.ucsd.edu/~mihir/papers/eka.ps)].
**--Edit--** @cyker: I'm working on a similar issue as (presumably) you're doing: The (right) definition of simulators. A partial result of mine, closely related to the protocol you offered, can be found at <http://eprint.iacr.org/2010/150>. The latest results are submitted but not published online yet. |
I am trying out a multiclass classification setting with 3 classes. The class distribution is skewed with most of the data falling in 1 of the 3 classes. (class labels being 1,2,3, with 67.28% of the data falling in class label 1, 11.99% data in class 2, and remaining in class 3)
I am training a multiclass classifier on this dataset and I am getting the following performance:
```
Precision Recall F1-Score
Micro Average 0.731 0.731 0.731
Macro Average 0.679 0.529 0.565
```
I am not sure why all Micro average performances are equal and also Macro average performances are low compared to Micro average. | That's how it should be. I had the same result in my research. It seemed weird at first, but micro-averaged precision and recall must be equal for a multi-class, single-label classifier. This is because if you consider a misclassification in which an instance of class c1 is predicted as class c2 (where c1 and c2 are two different classes), that misclassification is a false positive (fp) with respect to c2 and a false negative (fn) with respect to c1. If you sum the fn and fp over all classes, you get the same number, because each misclassification is counted once as an fp (for the predicted class) and once as an fn (for the true class). With equal totals of fp and fn, micro-averaged precision and recall have the same value, and so does the micro-averaged F1, which also coincides with the overall accuracy.
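To see the mechanics on a small example, here is an R sketch with a made-up 3-class confusion matrix showing that the micro-averaged precision and recall coincide:

```
# Rows = true class, columns = predicted class (made-up counts)
cm <- matrix(c(50,  3,  2,
                4, 10,  1,
                5,  2,  8), nrow = 3, byrow = TRUE)

tp <- diag(cm)
fp <- colSums(cm) - tp  # errors assigned *to* each class
fn <- rowSums(cm) - tp  # errors made *on* each class

sum(fp) == sum(fn)                       # TRUE: every error is one fp and one fn
micro_precision <- sum(tp) / (sum(tp) + sum(fp))
micro_recall    <- sum(tp) / (sum(tp) + sum(fn))
c(micro_precision, micro_recall)         # identical, and equal to overall accuracy
```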
These are the linear regression assumptions:
>
> Linearity: The relationship between X and the mean of Y is linear.
> Homoscedasticity: The variance of residual is the same for any value of X.
> Independence: Observations are independent of each other.
> Normality: For any fixed value of X, Y is normally distributed.
>
>
>
Do they apply to Lasso Regression, too? | I will be a contrarian and say that most assumptions do not apply to LASSO regression.
In the classical linear model, those assumptions are used to show that the OLS estimator is the minimum-variance linear unbiased estimator (Gauss-Markov theorem) and to have correct t-stats, F-stats, and confidence intervals.
In LASSO regression, those are less important. LASSO gives a biased estimator, so any of the Gauss-Markov business for the minimum-variance linear unbiased estimator no longer applies; indeed, LASSO is not even a linear estimator of the coefficients, so Gauss-Markov doubly does not apply. Then the hypothesis testing and confidence intervals would not be of primary concern for a LASSO regression. Indeed, it does not even seem agreed upon how such inference would be performed.
(Linearity still matters, but that’s because OLS and LASSO both estimate the same parameters, which are the coefficients on the features.)
Overall, the typical assumptions are assumed in an OLS linear regression because of nice properties of later inferences that are not particularly important for a situation where LASSO regression would be applied. |
I want to know how to input a self-defined distance in R, in hierarchical clustering analysis. R implements only some default distance metrics, for example "Euclidean", "Manhattan" etc. Suppose I want to input a self-defined distance '1-cos(x-y)'. Then what should I do?
Writing a function is obviously a solution. But, it will be quite complicated, and also difficult to write. Please help me. I am unable to write the code. | hclust() takes a distance matrix, which you can construct yourself, doing the calculations in R or reading them in from elsewhere. as.dist() can be used to convert an arbitrary matrix into a 'dist' object, which is a convenient representation of a distance matrix that hclust() understands. Obviously whether your own distances make any sense is another question, but it's easy to try out.
If you want to apply an arbitrary function to all pairs of X and Y to get a matrix, have a look at outer().
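For example, here is a minimal sketch for the distance 1-cos(x-y) on a made-up univariate data vector (the data values and the linkage method are only for illustration):

```
x <- c(0.1, 0.5, 2.9, 3.1, 6.2)                    # made-up observations
D <- outer(x, x, function(a, b) 1 - cos(a - b))    # all pairwise "distances"
d <- as.dist(D)                                    # convert the matrix to a 'dist' object
hc <- hclust(d, method = "average")                # hierarchical clustering
plot(hc)                                           # dendrogram
```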
I'm following this [example](http://scikit-learn.org/stable/modules/multiclass.html#multioutput-classification) on the scikit-learn website to perform a multioutput classification with a Random Forest model.
```
from sklearn.datasets import make_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import shuffle
import numpy as np
X, y1 = make_classification(n_samples=5, n_features=5, n_informative=2, n_classes=2, random_state=1)
y2 = shuffle(y1, random_state=1)
Y = np.vstack((y1, y2)).T
forest = RandomForestClassifier(n_estimators=10, random_state=1)
multi_target_forest = MultiOutputClassifier(forest, n_jobs=-1)
multi_target_forest.fit(X, Y).predict(X)
print(multi_target_forest.predict_proba(X))
```
From this `predict_proba` I get a 2 5x2 arrays:
```
[array([[ 0.8, 0.2],
[ 0.4, 0.6],
[ 0.8, 0.2],
[ 0.9, 0.1],
[ 0.4, 0.6]]), array([[ 0.6, 0.4],
[ 0.1, 0.9],
[ 0.2, 0.8],
[ 0.9, 0.1],
[ 0.9, 0.1]])]
```
I was really expecting a `n_sample` by `n_classes` matrix. I'm struggling to understand how this relates to the probability of the classes present.
The [docs](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html#sklearn.multioutput.MultiOutputClassifier.predict_proba) for `predict_proba` states:
>
> array of shape = [n\_samples, n\_classes], or a list of n\_outputs such arrays if n\_outputs > 1.
>
>
> The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes\_.
>
>
>
I'm guessing I have the latter in the description, but I'm still struggling to understand how this relates to my class probabilities.
Furthermore, when I attempt to access the `classes_` attribute for the `forest` model I get an `AttributeError` and this attribute does not exist on the `MultiOutputClassifier`. How can I relate the classes to the output?
```
print(forest.classes_)
AttributeError: 'RandomForestClassifier' object has no attribute 'classes_'
``` | In the `MultiOutputClassifier`, you're treating the two outputs as separate classification tasks; from the docs you linked:
>
> This strategy consists of fitting one classifier per target.
>
>
>
So the two arrays in the resulting list represent each of the two classifiers / dependent variables. The arrays then are the binary classification outputs (columns that are probability of class 0, probability of class 1) that @chrisckwong821 mentioned, but one for each problem.
In other words, the return value of `predict_proba` will be a list whose length is equal to the width of your `y`, i.e. `n_outputs`, in your case 2. Your quote from the `predict_proba` documentation references `n_outputs`, which is introduced in the documentation for `fit`:
>
> `fit(self, X, y[, sample_weight])`
>
>
> `y` : (sparse) array-like, shape `(n_samples, n_outputs)`
>
>
> |
I have a large sequence of vectors of length N. I need some unsupervised learning algorithm to divide these vectors into M segments.
For example:
![enter image description here](https://i.stack.imgur.com/O1hyC.png)
K-means is not suitable, because it puts similar elements from different locations into a single cluster.
Update:
The real data looks like this:
![enter image description here](https://i.stack.imgur.com/VyVCU.png)
Here, I see 3 clusters: `[0..50], [50..200], [200..250]`
Update 2:
I used modified k-means and got this acceptable result:
![enter image description here](https://i.stack.imgur.com/WBlZn.png)
Borders of clusters: `[0, 38, 195, 246]` | Please see my comment above and this is my answer according to what I understood from your question:
As you correctly stated, you do not need *Clustering* but *Segmentation*. Indeed you are looking for *Change Points* in your time series. The answer really depends on the complexity of your data. If the data is as simple as the above example, you can use the differences between consecutive values, which overshoot at change points, and set a threshold to detect those points, like below:
![enter image description here](https://i.stack.imgur.com/6BKAl.png)
As you see, for instance a threshold of 20 (i.e. $dx<-20$ or $dx>20$) will detect the points. Of course, for real data you need to investigate more to find a suitable threshold.
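A minimal R sketch of this thresholding idea (the toy signal and the threshold value are only illustrative):

```
set.seed(1)
y <- c(rep(0, 50), rep(60, 150), rep(10, 50)) + rnorm(250)  # toy signal with 3 segments
dy <- diff(y)                                # first differences overshoot at changes
threshold <- 20
change_points <- which(abs(dy) > threshold)  # indices where a new segment starts
change_points                                # roughly 50 and 200 here
segment_borders <- c(0, change_points, length(y))
```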
Pre-processing
--------------
Please note that there is a trade-off between the accurate location of the change points and the accurate number of segments, i.e. if you use the original data you'll find the exact change points, but the whole method is too sensitive to noise; if you smooth your signals first you may not find the exact change locations, but the noise effect will be much less, as shown in the figures below:
![enter image description here](https://i.stack.imgur.com/ceDxi.png)
![enter image description here](https://i.stack.imgur.com/PLIpr.png)
Conclusion
----------
My suggestion is to smooth your signals first and go for a simple clustering method (e.g. using [GMM](http://www.autonlab.org/tutorials/gmm14.pdf)s) to find an accurate estimate of the number of segments in your signals. Given this information you can start finding change points, constrained by the number of segments you found in the previous part.
I hope it all helped :)
Good Luck!
UPDATE
------
Luckily your data is pretty straightforward and clean. I strongly recommend dimensionality reduction algorithms (e.g. simple [PCA](http://sebastianraschka.com/Articles/2014_pca_step_by_step.html)). I guess it reveals the internal structure of your clusters. Once you apply PCA to the data, you can use k-means much more easily and accurately.
A Serious(!) Solution
---------------------
According to your data I see that the generative distributions of the different segments are different, which is a great chance for you to segment your time series. See [this (original](http://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2012_KhaleghiRMP12.pdf), [archive](https://web.archive.org/web/20170709232827/https://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2012_KhaleghiRMP12.pdf), [other source](http://www.jmlr.org/proceedings/papers/v22/khaleghi12/khaleghi12.pdf)) which is probably the best and most state-of-the-art solution to your problem. The main idea behind this paper is that if different segments of a time series are generated by different underlying distributions, you can find those distributions, set them as ground truth for your clustering approach, and find clusters.
For example, assume a long video in which during the first 10 minutes somebody is biking, during the second 10 minutes he is running, and during the third he is sitting. You can cluster these three different segments (activities) using this approach.
Are there undecidable properties of linear bounded automata (avoiding the empty set language trick)? What about for a deterministic finite automaton? (put aside intractability).
I would like to get an example (if possible) of an undecidable problem that is defined *without using Turing machines* explicitly.
Is Turing completeness of a model necessary to support uncomputable problems? | It is not clear what you are asking in the later part of the question mainly because "a problem about a machine model" is not defined.
>
> I would like to get an example(if possible) of undecidable problem without needing Turing Machine
>
>
>
Let $\{M\_i\}$ be a class of machines and let's use $i$ as the code of $M\_i$. We can also interpret $i$ as the code of the $i$th TM and then ask: given $M\_i$, does the $i$th TM halt? This problem about the $M\_i$s is undecidable.
A language is just a set of strings, what interpretation you assign to the strings has no effect on the decidability of the language. Unless you formally define what you mean by *a machine model* and *a problem about those machines* your later questions cannot be answered.
>
> Is Turing complete the minimal machinery to support an undecidable problem?
>
>
>
Again, the point I mentioned above applies. A more reasonable question would be: do all undecidability proofs go through something similar to the undecidability of the halting problem for TMs? (The answer is: there are other ways.)
Another possible question is: what is the smallest subset of TMs for which the halting problem is undecidable? Obviously such a class should contain machines which do not halt (otherwise the problem is trivially decidable). We can easily create artificial subsets of TMs where the halting problem is not decidable without being able to compute anything useful. A more interesting question is about large decidable sets of TMs for which halting is decidable.
Here is another point: as soon as you have even a very small ability to manipulate bits (e.g. a polynomial-size $\mathsf{CNF}$), you can create a machine $N$ with three inputs, $e$, $x$, and $c$, such that it outputs 1 iff $c$ is a halting accepting computation of TM $M\_e$ on input $x$. Then you can ask problems like: is there a $c$ s.t. $N(e,x,c)$ is 1? This is an undecidable problem.
I have seen the post [Bayesian vs frequentist interpretations of probability](https://stats.stackexchange.com/questions/31867/bayesian-vs-frequentist-interpretations-of-probability/503079#503079) and [others like it](https://stats.stackexchange.com/questions/173056/how-exactly-do-bayesians-define-or-interpret-probability) but this does not address the question I am posing. These other posts provide interpretations related to prior and posterior probabilities, $\pi(\theta)$ and $\pi(\theta|\boldsymbol{x})$, not $P(X=x|\theta=c)$. **I am not interested in the likelihood as a function of the parameter and the observed data, I am interested in the interpretation of the probability distribution of unrealized data points.**
For example, let $X\_1,...,X\_n\sim Bernoulli(\theta)$ be the result of $n$ coin tosses and $\theta\sim Beta(a,b)$ so that $\pi(\theta|\boldsymbol{x})$ is the pdf of a $Beta(a+\sum x,b + n - \sum x)$.
How do Bayesians interpret $\theta=c$? $\theta$ of course is treated as an unrealized or unobservable realization of a random variable, but that still does not define or interpret the probability of heads. $\pi(\theta)$ is typically considered as the prior belief of the experimenter regarding $\theta$, but what is $\theta=c$? That is, how do we interpret a single value in the support of $\pi(\theta)$? Is it a long-run probability? Is it a belief? How does this influence our interpretation of the prior and posterior?
For instance, if $\theta=c$ and equivalently $P(X=1|\theta=c)=c$ is my belief that the coin will land heads, then $\pi(\theta)$ is my belief about my belief, and in some sense so too is the prior predictive distribution $P(X=1)=\int\theta\pi(\theta)d\theta=\frac{a}{a+b}$. To say "if $\theta=c$ is known" is to say that I know my own beliefs. To say "if $\theta$ is unknown" is to say I only have a belief about my beliefs. **How do we justify interpreting beliefs about beliefs as applicable to the coin under investigation?**
If $\theta=c$ and equivalently $P(X=1|\theta=c)=c$ is the unknown fixed true long-run probability for the coin under investigation: **How do we justify blending two interpretations of probability in Bayes theorem as if they are equivalent? How does Bayes theorem not imply there is only one type of probability? How are we able to apply posterior probability statements to the unknown fixed true $\theta=c$ under investigation?**
The answer must address these specific questions. While references are much appreciated, the answers to these questions must be provided. I have provided four **Options** or proposals in my own solution below as an answer, with the challenges of interpreting $P(X=x|\theta=c)$ as a belief or as a long-run frequency. Please identify which **Option** in my answer most closely maps to your answer, and provide suggestions for improving my answer.
I am not writing $P(X=x|\theta=c)$ to be contemptuous. I am writing it to be explicit since $P(X=x|Y=y)$ is not the same thing as $P(X=x|Y)$. One might instead be inclined to write in terms of a sample from the prior and use an index of realizations of $\theta$. However, I do not want to present this in terms of a finite sample from the prior.
More generally, how do Bayesians interpret $P(X=x|\theta=c)$ or $P(X\le x|\theta=c)$ for any probability model and does this interpretation pose any challenges when interpreting $P(\theta=s|\boldsymbol{x})$ or $P(\theta\le s|\boldsymbol{x})$?
I've seen a few other posts tackle questions about Bayesian ***posterior probability***, but the solutions aren't very satisfying and usually only consider a superficial interpretation, e.g. *coherent representations of information*.
Related threads:
[Examples of Bayesian and frequentist approaches giving different results](https://stats.stackexchange.com/questions/43471/examples-of-bayesian-and-frequentist-approach-giving-different-answers/506653#506653)
[Bayesian vs frequentist interpretations of probability](https://stats.stackexchange.com/questions/31867/bayesian-vs-frequentist-interpretations-of-probability/503079#503079)
**UPDATE:**
I received several answers. It appears that a belief interpretation for $P(X=x|\theta=c)$ is the most appropriate under the Bayesian paradigm, with $\theta$ as the limiting proportion of heads (which is not a probability) and $\pi(\theta)$ representing belief about $\theta$. I have amended Option 1 in my answer to accurately reflect *two* different belief interpretations for $P(X=x|\theta=c)$. I have also suggested how Bayes theorem can produce reasonable point and interval estimates for $\theta$ despite these shortcomings regarding interpretation. | *I have posted a related (but broader) question and answer [here](https://stats.stackexchange.com/questions/539950/) which may shed some more light on this matter, giving the full context of the model setup for a Bayesian IID model.*
You can find a good primer on the Bayesian interpretation of these types of models in [Bernardo and Smith (1994)](https://onlinelibrary.wiley.com/doi/book/10.1002/9780470316870), and you can find a more detailed discussion of these particular interpretive issues in [O'Neill (2009)](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1751-5823.2008.00059.x). A starting point for the operational meaning of the parameter $\theta$ is obtained from the [strong law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers#Strong_law), which in this context says that:
$$\mathbb{P} \Bigg( \lim\_{n \rightarrow \infty} \frac{1}{n} \sum\_{i=1}^n X\_i = \theta \Bigg) = 1.$$
This gets us part-way to a full interpretation of the parameter, since it shows almost sure equivalence with the Cesàro limit of the observable sequence. Unfortunately, the Cesàro limit in this probability statement does not always exist (though it exists almost surely within the IID model). Consequently, using the approach set out in [O'Neill (2009)](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1751-5823.2008.00059.x), you can consider $\theta$ to be the Banach limit of the sequence $X\_1,X\_2,X\_3$, which always exists and is equivalent to the Cesàro limit when the latter exists. So, we have the following useful parameter interpretation as an operationally defined function of the observable sequence.
>
> **Definition:** The parameter $\theta$ is the Banach limit of the sequence $\mathbf{X} = (X\_1,X\_2,X\_3,...)$.
>
>
>
(Alternative definitions that define the parameter by reference to an underlying sigma-field can also be used; these are essentially just different ways to do the same thing.) This interpretation means that the parameter is a function of the observable sequence, so once that sequence is given the parameter is fixed. Consequently, it is not accurate to say that $\theta$ is "unrealised" --- if the sequence is well-defined then $\theta$ must have a value, albeit one that is unobserved (unless we observe the whole sequence). The sampling probability of interest is then given by the representation theorem of de Finetti.
>
> **Representation theorem (adaptation of de Finetti):** If $\mathbf{X}$ is an exchangeable sequence of binary values (and with $\theta$ defined as above), it follows that the elements of $\mathbf{X}|\theta$ are independent with sampling distribution $X\_i|\theta \sim \text{IID Bern}(\theta)$ so that for all $k \in \mathbb{N}$ we have:
> $$\mathbb{P}(\mathbf{X}\_k=\mathbf{x}\_k | \theta = c) = \prod\_{i=1}^k c^{x\_i} (1-c)^{1-x\_i}.$$
> This particular version of the theorem is adapted from O'Neill (2009), which is itself a minor re-framing of de Finetti's famous representation theorem.
>
>
>
Now, within this IID model, the specific probability $\mathbb{P}(X\_i=1|\theta=c) = c$ is just the sampling probability of a positive outcome for the value $X\_i$. This represents the probability of a single positive indicator conditional on the Banach limit of the sequence of indicator random variables being equal to $c$.
Since this is an area of interest to you, I strongly recommend you read [O'Neill (2009)](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1751-5823.2008.00059.x) to see the broader approach used here and how it is contrasted with the frequentist approach. That paper asks some similar questions to what you are asking here, so I think it might assist you in understanding how these things can be framed in an operational manner within the Bayesian paradigm.
>
> How do we justify blending two interpretations of probability in Bayes theorem as if they are equivalent?
>
>
>
I presume here that you are referring to the fact that there are certain limiting correspondences analogous to the "frequentist interpretation" of probability at play in this situation. Bayesians generally take an epistemic interpretation of the meaning of probability (what Bernardo and Smith call the "subjective interpretation"). Consequently, all probability statements are interpreted as beliefs about uncertainty on the part of the analyst. Nevertheless, Bayesians also accept that the law-of-large-numbers (LLN) is valid and applies to their models under appropriate conditions, so it may be the case that the epistemic probability of an event is equivalent to the limiting frequency of a sequence.
In the present case, the definition of the parameter $\theta$ is the Banach limit of the sequence of observable values, so it necessarily corresponds to a limiting frequency. Probability statements about $\theta$ are therefore also probability statements about a limiting frequency for the observable sequence of values. There is no contradiction in this. |
What is the best way to get a good introduction to the theory of distributed systems? Are there any books or references, which topics should be covered first, and what are the prerequisites to start learning about this topic? | Roger Wattenhofer's *[Principles of Distributed Computing](http://disco.ethz.ch/lectures/podc_allstars/)* lecture collection is also a good place to start.
It is freely available online, it assumes no prior knowledge on the area, and the material is very well up-to-date — it even covers some results that were presented at conferences a couple of months ago. |
I might be missing something obvious but I can't find references about the complexity of counting *matchings* (not perfect matchings) in bipartite graphs. Here is the formal problem:
* Input: a bipartite graph $G = (U, V, E)$ with $E \subseteq U \times V$
* Output: the number of matchings of $G$, where a [matching](https://en.wikipedia.org/wiki/Matching_(graph_theory)) is a subset $F \subseteq E$ such that there is no $v \in U \sqcup V$ that occurs in two edges of $F$.
**What is the complexity of this problem?** Is it #P-hard?
It is well-known that counting *perfect* matchings on bipartite graphs is #P-hard, and it is known that counting matchings of *arbitrary* graphs (or even planar 3-regular graphs) is #P-hard by [this paper](http://www.sciencedirect.com/science/article/pii/S0304397507004653), but I didn't find anything about counting non-perfect matchings on bipartite graphs. | The problem of counting such "imperfect" matchings in bipartite graphs is #P-complete.
This has been proved by Les Valiant himself, on page 415 of the paper
>
> Leslie G. Valiant
>
> The Complexity of Enumeration and Reliability Problems
>
> SIAM J. Comput., 8(3), 410–421
>
>
> |
I am so confused between concurrency vs parallelism in multi-core processors.
If a **concurrent process** has many threads and if the **processor is multi-core** then can each thread run on multiple cores? If the answer is YES then multiple threads are running in parallel but the process is concurrent. How is it possible and where is the concept of concurrency applied?
But if your **answer is no**, then why do we even need the **concurrency concept**? Why can't all processes be based on the **parallelism concept**?
If I assume that **multiple threads of the concurrent process run in parallel**.
Then which is better performance-wise concurrency in multi-core vs parallelism in multi-core and why? | I agree the usage of the terms *concurrent* and *parallel* in computing is confusing. Part of the confusion is because the English word *concurrent* means: "at the same time", but the usage in computing is slightly different.
In computing we say two threads have the *potential* to run concurrently if there are no dependencies between them. When we say two threads *are* running concurrently, we *might* mean that they are running in parallel on two different physical cpu cores. But we also might mean this in the virtual sense of [multitasking](https://en.wikipedia.org/wiki/Computer_multitasking). For example a single cpu core might be running both threads using preemptive multitasking to switch back and forth between them every 1/100th of a second. So over a timespan of an entire second it *appears* the two threads are running simultaneously, while in fact only one of them is running at a time and we're just switching back and forth between them very quickly.
I try to use the word *concurrency* to refer only to *potential* concurrency.
I try to use the word *parallel* when I want to indicate that two (potentially) concurrent threads are actually running simultaneously on different cpu cores. And I try to use the term *multitasked* or *interleaved* when I want to indicate that two (potentially) concurrent threads are sharing the same cpu core over a long period of time, with the cpu core switching back and forth between the two threads. |
I am taking a graduate course in Applied Statistics that uses the following textbook (to give you a feel for the level of the material being covered): [Statistical Concepts and Methods](http://amzn.com/0471072044), by G. K. Bhattacharyya and R. A. Johnson.
The Professor requires us to use SAS for the homeworks.
My question is that: is there a Java library(ies), that can be used instead of SAS for problems typically seen in such classes.
I am currently trying to make do with [Apache Math Commons](http://commons.apache.org/math/) and though I am impressed with the library (it's ease of use and understandability) it seems to lack even simple things such as the ability to draw histograms (thinking of combining it with a charting library).
I have looked at Colt, but my initial interest died down pretty quickly.
Would appreciate any input -- and I've looked at similar questions on Stackoverflow but have not found anything compelling.
NOTE: I am aware of R, SciPy and Octave and java libraries that make calls to them -- I am looking for a Java native library or set of libraries that can together provide the features I'm looking for.
NOTE: The topics covered in such a class typically include: one-samle and two-sample tests and confidence intervals for means and medians, descriptive statistics, goodness-of-fit tests, one- and two-way ANOVA, simultaneous inference, testing variances, regression analysis, and categorical data analysis. | When I am forced to use java for basic statistics, apache commons math is the way to go. For plots, I use and recommend [JFreeChart](http://www.jfree.org/jfreechart/). The latter is widely spread, so stackoverflow even has a [populated tag for it](https://stackoverflow.com/questions/tagged/jfreechart?sort=votes&pagesize=50).
**Edit**
If one looks for a suite, then maybe [Deducer](http://www.deducer.org/pmwiki/pmwiki.php?n=Main.DeducerManual) is an option. The GUI is based on JGR, while the statistical parts are computed by calls to R. It seems to be extendable both [via R and java](http://www.deducer.org/pmwiki/pmwiki.php?n=Main.Development). One could e.g. skip the calls to the Rengine and call referenced java libraries instead. But I admit, I did not try it yet.
As far as I have understood the OP, the optimum would be something like [Rapidminer](http://en.wikipedia.org/wiki/RapidMiner) **for Statistics**, since Rapidminer is a pure java framework which supports GUI access (including visualizations), usage as library and custom plugin development. To the best of my knowledge, something like that for statistics does not exist. I do **not recommend** Rapidminer **for that particular task**, because to the best of my knowledge it only includes the most basic statistical tests. The visualizations have been extended lately, but I cannot estimate how customizable they are now. |
Are the following two sets equal? One is the set of regular expressions over an alphabet, and the other is the set of all strings which can be generated by using the symbols of that alphabet (Σ\*)? | >
> Are [...] the set of regular expression[s] over [$\Sigma$] [...] and the set of all strings [in $\Sigma^\*$] equal?
>
>
>
Since the *types* of objects don't even match, clearly not!
Going back to the definitions, regular expressions have a certain syntax. They contain symbols from the fixed alphabet $\Sigma$, but also operators *not* in $\Sigma$ like $+$, $\mid$, $^\*$, and others. Their *interpretations* are formal languages over $\Sigma$. That is, if $R$ is a regular expression over $\Sigma$ then $L(R) \subseteq \Sigma^\*$. |
this is the arithmetic series:
$a/b,2a/b,3a/b,...,ba/b$
The new series:
$log(a/b),log(2a/b),log(3a/b),...,log(ba/b)$
I want the complexity of sum of the latter.
p.s: sorry about formatting.
---
edit 1: I realized that I need to provide more info:
$a,b \in \mathbb{N} $
$a>b$
a and b are not constants.
and the answer is something similar to :
$O(b\*log(a))$ | $f(a,b)=\sum\_{k=1}^b \log(k\*\frac{a}{b})=\log\prod\_{k=1}^b \left(k\*\frac{a}{b}\right)=\log \frac{b!\*a^b}{b^b}=\log(b!)+b\log\frac{a}{b}=b\log a +[\log(b!)-b\log(b)]$
by Stirling's formula, $\log(b!)=b\log b -b +\mathcal{O}(\log{b})$
Therefore $f(a,b)=b\log a +o(b\log a)$ so $f(a,b)\sim b\log a$
NB: all asymptotics ($\sim$, $\mathcal{O}$ and $o$) are taken as $a,b\to\infty$ with $b<a$.
For a more precise evaluation than just an equivalent, replace Stirling's approximation (<https://en.wikipedia.org/wiki/Stirling%27s_approximation>) with Stirling's bounds (same wikipedia article). |
Let $L\_\epsilon$ be the language of all $2$-CNF formulas $\varphi$, such that at least $(\frac{1}{2}+\epsilon)$ of $\varphi$'s clauses can be satisfied.
I need to prove that there exists $\epsilon'$ s.t $L\_\epsilon$ is $\mathsf{NP}$-hard for any $\epsilon<\epsilon'$.
We know that $\text{Max}2\text{Sat}$ can be approximated to $\frac{55}{56}$ percent of the clauses from a $\text{Max}3\text{Sat}$ reduction. How should I solve this one? | In his famous paper, [Håstad](http://www.nada.kth.se/~johanh/optimalinap.ps) shows that it is NP-hard to approximate MAX2SAT better than $21/22$. This likely means that it is NP-hard to distinguish instances which are $\leq \alpha$ satisfiable and instances which are $\geq (22/21) \alpha$ satisfiable, for some $\alpha \geq 1/2$. Now imagine padding an instance so that it becomes a $p$-fraction of a new instance, the rest of which is exactly $1/2$-satisfiable (say it consists of groups of clauses of the form $a \land \lnot a$). The numbers now become $1/2 + p (\alpha - 1/2)$ and $1/2 + p((22/21)\alpha - 1/2)$. The latter number can be made as close to $1/2$ as we want.
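To make the padding step explicit (a small worked choice of parameters, not part of the original answer): given a target $\epsilon>0$, choose the fraction $p$ of the original instance so that
$$\frac{\epsilon}{(22/21)\alpha - 1/2} \le p < \frac{\epsilon}{\alpha - 1/2}.$$
Such a $p \in (0,1]$ exists for all sufficiently small $\epsilon$, because the left-hand bound is strictly smaller than the right-hand one. Then instances that were at most $\alpha$-satisfiable become less than $(1/2+\epsilon)$-satisfiable after padding, while instances that were at least $(22/21)\alpha$-satisfiable remain at least $(1/2+\epsilon)$-satisfiable, so deciding $L\_\epsilon$ would distinguish the two cases, which is NP-hard.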
In our programming language concepts course, our instructor claimed that it's okay for a final state to lead to another state in a finite state diagram.
But this seems to be a fundamentally contradictory concept. Because a final state by definition is one that terminates transitions, i.e., that once you reach it, there's nothing else left to do.
And yet he presented a slide such as this one, where final states are represented by two circles... How is it possible for B, D, E, and H to be final states when they're so clearly not?
[![enter image description here](https://i.stack.imgur.com/wW597.gif)](https://i.stack.imgur.com/wW597.gif) | Indeed, it is confusing! To solve your problem, call them "accepting" states instead of "final" states. Because that is what they really are, just a marker that tells us that at this moment the string processed belongs to the language. |
In Strassen's algorithm, we calculate the time complexity based on n being the number of rows of the square matrix. Why don't we take n to be the total number of entries in the matrices (so if we were multiplying two 2x2 matrices, we would have n = 8 for the four entries in each matrix)?
Then, using the naïve method of multiplying matrices, we would end up with only n multiplications and n/2 additions.
For instance, multiplying [1 2, 3 4] by [5 6, 7 8] yields [1\*5+2\*7 1\*6+2\*8, 3\*5+4\*7 3\*6+4\*8]. Here, n = 8 and we are doing n = 8 multiplications and n/2 = 4 additions.
So even a naïve multiplication algorithm would yield a time complexity of O(n).
Of course, this reasoning is wrong because the time complexity cannot be linear but I don't understand why.
I would appreciate any input. Thank you! | For n = 100, the naive algorithm takes 1,000,000 multiplications and almost as many additions. If we let n = number of rows / columns, then it takes $n^3$ multiplications and $n^3 - n^2$ additions. If we let $m = 2n^2$, that is the total number of elements in both matrices, then it is $m^{1.5} / 2^{1.5}$ multiplications and $m^{1.5} / 2^{1.5} - m/2$ additions. Both numbers are $O(m^{1.5})$ and not $O(m)$.
But the reason why we argue in terms of the number of rows and columns, and not the problem size, is that the number of rows and columns usually follows naturally from the problem we try to solve. And in the case of the Strassen algorithm, which recursively divides a matrix into smaller sub-matrices, it's easier to reason that you need 7 matrix multiplications of half the size. The algorithm is based on the number of rows and columns and not on the number of elements. And lastly, if the matrices don't have identical sizes, the number of calculations can easily be found from the numbers of rows and columns, but the number of elements leaves a lot of room for how many operations are needed, depending on the shapes of the matrices.
I'm a freshmen studying computer science and I already know that I want to go into academia with focus of theoretical comp sci. I already read some of papers referenced in [this question](https://cstheory.stackexchange.com/questions/1168/what-papers-should-everyone-read) and [this question](https://cstheory.stackexchange.com/questions/1562/why-go-to-theoretical-computer-science-research) convinced me further.
What should I be doing **now**, as an undergrad, to get involved in the field? What can I do to prepare for research in the field? | As a freshman undergraduate, your best bet is to express this interest to your professors in the CS department, who can help you out (providing this help a big part of their job!). Most of them would, I would expect, be delighted to help out an undergraduate who was interested in the same things they are. At the very least, they can give you good advice about what classes to take at your institution, and advice that is customized to your situation. |
For a 2-place real function $H$, *H-volume* of $[x\_1,x\_2]\times[y\_1,y\_2]$ is $H(x\_2,y\_2)-H(x\_2,y\_1)-H(x\_1,y\_2)+H(x\_1,y\_1)$. What is really the intuition of the *H-volume*? | The H-Volume is the volume contained by the rectangle $[x\_1,x\_2] \times [y\_1,y\_2]$ of a 3-dimensional function $H(x,y)$. To visualize this, see the Figure [![H-Volume of the $\Pi$ copula](https://i.stack.imgur.com/MZcl3.png)](https://i.stack.imgur.com/MZcl3.png)
which is the contour plot of the independence copula (which is simply a 3-dimensional function with some special properties that make it a copula function).
The H-Volume is the volume contained within the box labeled $R3$. However, remember that the copula value $H(u,v)$ is itself defined as the H-volume of the box $[0,u] \times [0,v]$. Thus, $H(x\_2,y\_2)$ in reference to the figure would be the volume contained by $R1+R2+R3+R4$. To get the region of interest, which is just $[x\_1,x\_2] \times [y\_1,y\_2]$, we must subtract out $R2$ and $R4$. However, by subtracting out $R2$ and $R4$, we have also subtracted out $R1$ twice. We thus add $R1$ back into the equation (recall that $R1$ is included when computing $H(x\_2,y\_2)$).
To think about it in 3-D terms, see the Figure [![H-Volume of the $\Pi$ copula 3-D Visualization](https://i.stack.imgur.com/69bQf.png)](https://i.stack.imgur.com/69bQf.png). The H-Volume of this 3-D function, which happens to be the independence copula density, is the volume enclosed under the blue shaded area, where the points are given by the rectangle $[x\_1,x\_2] \times [y\_1,y\_2]$. |
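As a concrete check, take the independence copula $H(x,y)=xy$. Its H-volume over $[x\_1,x\_2]\times[y\_1,y\_2]$ is
$$x\_2y\_2-x\_2y\_1-x\_1y\_2+x\_1y\_1=(x\_2-x\_1)(y\_2-y\_1),$$
which is exactly the probability that a pair of independent Uniform$(0,1)$ variables falls inside that rectangle. In general, the H-volume is the probability mass that the joint distribution described by $H$ assigns to the rectangle.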
I obtained a poor discrimination (AUROC) and a good calibration (according to Hosmer-Lemeshow) in a logistic regression model. How can I address this situation? | Clearly, your explanatory variables don't explain the response very well - at least in the model you are using. You could try adding interaction terms, and/or use b-splines of the explanatory variables if they are continuous and their relationship to the response may be nonlinear.
**Background.** I am writing some code for semi-automated grading, using peer grading as part of the grading process. Students are given pairs of essays at a time, and the students have a slider to choose which is better and how much better it is. e.g., the slider might look something like this:
`A---X-B`
Based on the results of the peer grading, essays are ranked and the teacher will then grade the top X% and bottom X% and scores for all essays will be automatically calculated based on this. I have already come up with methods for doing this ranking/scoring process; that part works well.
**My question.** How should I select which pairs of essays to give to students?
Simulations suggest we need an essay to be peer-graded at least 3 times, to get an accurate ranking. Thus, each essay should appear in at least 3 of the pairs that are presented for peer grading.
We can think of this as a graph problem. Think of the essays as nodes. Each edge represents a pair of essays that are presented during the peer grading process. The accuracy results above suggest that the degree of each node (or of most nodes) should be at least 3. What sort of graph should I use? How should I generate the graph to be used during peer grading?
One challenge is that if you have clusters in the graph, this will skew the peer-gradings. For example, we wouldn't want to have high-quality essays peer-graded mostly against high-quality essays, because that would skew the results of the peer grading.
What would you recommend?
I think this problem could be modelled with a undirected graph using something like the following:
* Start by taking the node with the least degree and link it with the next least
* Continue until your average degree is at least 3
* Maximise node connectivity
* Minimise number of cliques
Is this a good approach? If not, what would you recommend instead? | There are two parts to this: (a) selecting a graph (*experimental design*) to determine which pairs of essays the students will evaluate in the peer grading process, and (b) ranking all the essays, based upon the students' peer grades, to determine which ones the teacher should grade. I will suggest some methods for each.
Choosing a graph
----------------
**Problem statement.** The first step is to generate a graph. In other words, you need to select which pairs of essays to show to the students, during the peer grading exercise.
**Suggested solution.** For this task, I suggest that you generate a random graph $G$, selected uniformly at random from the set of all 3-regular (simple) graphs.
**Justification and details.** It is known that a random $d$-regular graph is a good expander. In fact, random regular graphs have an asymptotically optimal expansion factor. Also, because the graph is random, this should eliminate the risk of skewing the grading. By selecting a graph uniformly at random, you are ensuring that your approach is equally fair to all students. I suspect that a uniformly random 3-regular graph will be optimal for your purposes.
This raises the question: how do we select a 3-regular (simple) graph on $n$ vertices, uniformly at random?
Fortunately, there are known algorithms for doing this. Basically, you do the following:
1. Create $3n$ points. You can think of this as 3 copies of each of the $n$ vertices. Generate, uniformly at random, a random perfect matching on these $3n$ points. (In other words, repeat the following procedure until all $3n$ points are paired off: select any unpaired point, and pair it with another point chosen uniformly at random from the set of unpaired points.)
2. For each two points that are matched by the matching, draw an edge between the corresponding vertices (that they are a copy of). This gives you a graph on $n$ vertices.
3. Next, test if the resulting graph is simple (i.e., it has no self-loops and no repeated edges). If it is not simple, discard the graph and go back to step 1. If it is simple, you are done; output this graph.
It is known that this procedure generates a uniform distribution on the set of 3-regular (simple) graphs. Also, it is known that at step 3 you have a constant probability of accepting the resulting graph, so on average the algorithm will do $O(1)$ trials -- so this is pretty efficient (e.g., polynomial running time).
I have seen this approach credited to Bollobas, Bender, and Canfield. The approach is also summarized briefly [on Wikipedia](https://en.wikipedia.org/wiki/Random_regular_graph#Algorithms_for_random_regular_graphs). You can also find a discussion [on this blog post](https://egtheory.wordpress.com/2012/03/29/random-regular-graphs/).
Technically speaking, this requires that the number $n$ be even (otherwise there is no 3-regular graph on $n$ vertices). However, this is easy to deal with. For instance, if $n$ is odd, you can randomly choose one essay, set it aside, generate a random 3-regular graph on the remaining essays, then add 3 more edges from the set-aside essay to 3 randomly chosen other essays. (This means that there will be 3 essays that are actually graded 4 times, but that shouldn't do any harm.)
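To make the pairing procedure concrete, here is a rough Python sketch of it (my own illustration, not code from the references above; in practice you might instead reach for a library routine such as networkx's `random_regular_graph`):

```
import random

def random_3_regular_graph(n):
    """Sample a simple 3-regular graph on vertices 0..n-1 uniformly at random
    (n must be even), using the pairing model with rejection."""
    assert n % 2 == 0
    while True:
        points = [v for v in range(n) for _ in range(3)]  # 3 copies of each vertex
        random.shuffle(points)                            # induces a uniform perfect matching
        edges = {tuple(sorted(points[i:i + 2])) for i in range(0, 3 * n, 2)}
        no_loops = all(u != v for u, v in edges)
        no_multi = len(edges) == 3 * n // 2               # repeated edges collapse in the set
        if no_loops and no_multi:                         # otherwise reject and retry
            return edges
```

Because the acceptance probability in the rejection step is bounded away from zero for degree 3, the loop terminates after $O(1)$ trials on average.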
Ranking all the essays
----------------------
**Problem statement.** OK, so now you have a graph, and you have presented these pairs of essays (as indicated by the edges in the graph) to the students for them to grade during the peer grading exercise. You have the results of each comparison of essays. Now your task is to infer a linear ranking on all of the essays, to help you determine which ones to have the teacher evaluate.
**Solution.** I suggest you use the [Bradley-Terry model](https://en.wikipedia.org/wiki/Pairwise_comparison). It is a mathematical approach that solves exactly this problem. It was designed for ranking players in some sport, based upon the results of matches between some pairs of the players. It assumes that each player has an (unknown) strength, which can be quantified as a real number, and the probability that Alice beats Bob is determined by some smooth function of the difference of their strengths. Then, given the pairwise win/loss records, it estimates the strength of each player.
This should be perfect for you. You can treat each essay as a player. Each comparison between two essays (during the peer grading process) is like the result of a match between them. The Bradley-Terry model will allow you to take all of that data, and infer a *strength* for each essay, where higher strengths correspond to better essays. Now you can use those strengths to rank-order all of the essays.
**Details and discussion.** In fact, the Bradley-Terry model is even better than what you asked for. You asked for a linear ranking, but the Bradley-Terry model actually gives a (real-number) rating to each essay. This means you know not only whether essay $i$ is stronger than essay $j$, but a rough estimate of *how* much stronger it is. For instance, you could use this to inform your selection of which essays to rank.
There are alternative ways to infer ratings or rankings for all the essays, given the data you have. For instance, the Elo method is another. I summarize several of them [in my answer to a different question](https://cs.stackexchange.com/a/16776/755); read that answer for more details.
One other comment: The Bradley-Terry model assumes that the result of each comparison between two players is a win or a loss (i.e., a binary result). However, it sounds like you will actually have more detailed data: your slider will give a rough estimate of how much better the peer grader rated one essay than another. The simplest approach would be to just map each slider to a binary result. However, if you really want, you might be able to use all of the data, by using a more sophisticated analysis. The Bradley-Terry model involves doing logistic regression. If you generalize that to use [ordered logit](https://en.wikipedia.org/wiki/Ordered_logit), I bet that you could take advantage of the extra information you have from each slider, given that the results from the sliders are not binary but are one of several possibilities.
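If you want to see what the computation looks like, here is a minimal maximum-likelihood sketch in Python (illustrative only; the function name and the tiny ridge penalty for identifiability are my own choices, and dedicated packages exist for this in both R and Python):

```
import numpy as np
from scipy.optimize import minimize

def bradley_terry_strengths(n_essays, comparisons):
    """comparisons: list of (winner, loser) index pairs from the peer gradings.
    Returns maximum-likelihood strength estimates, centred to sum to zero."""
    comparisons = np.asarray(comparisons)
    winners, losers = comparisons[:, 0], comparisons[:, 1]

    def neg_log_lik(s):
        d = s[winners] - s[losers]
        # log P(winner beats loser) = -log(1 + exp(-d)); small ridge keeps it identifiable
        return np.sum(np.log1p(np.exp(-d))) + 1e-4 * np.sum(s ** 2)

    res = minimize(neg_log_lik, np.zeros(n_essays), method="L-BFGS-B")
    return res.x - res.x.mean()

# toy usage: 4 essays, essay 0 wins most of its comparisons
comps = [(0, 1), (0, 2), (1, 2), (0, 3), (2, 3), (1, 3)]
print(bradley_terry_strengths(4, comps))
```

Sorting the returned strengths gives the linear ranking; the strengths themselves tell you roughly how far apart two essays are.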
Efficient use of the teacher
----------------------------
You suggest having the teacher manually grade the top X% and bottom X% of all of the essays (using the ranking inferred from the results of the peer-grading). This could work, but I suspect it is not the most efficient use of the teacher's limited time. Instead, I'd like to suggest an alternate approach.
I suggest that you have the teacher grade a subset of the essays, with the subset carefully selected to try to provide the best possible calibration for all of the essays that weren't graded by the teacher. For this, I think it might help if you selected a sample of essays that cover the range of possible answers (so for every essay, there is some teacher-graded essay that is not too far away from it). For this, I can think of two approaches you could consider trying:
* **Clustering.** Take the ratings that are produced by the Bradley-Terry model. This is a set of $n$ real numbers, one real number per essay. Now cluster them. Suppose you want to have the teacher grade $k$ essays. One approach would be to use $k$-means clustering (on these one-dimensional data points) to cluster the essays into $k$ clusters, and then randomly select one essay from each cluster for the teacher to grade -- or have the teacher grade the "cluster head" of each cluster.
* **Furthest-point first.** An alternative is to try to select a subset of $k$ essays that are as different from each other as possible. The "furthest-point first" (FPF) algorithm is a clean approach for this. Assume that you have some distance function $d(e\_i,e\_j)$ that lets you quantify the distance between two essays $e\_i$ and $e\_j$: a small distance means that the essays are similar, a larger distance means they are dissimilar. Given a set $S$ of essays, let $d(e,S) = \min\_{e' \in S} d(e,e')$ be the distance from $e$ to the nearest essay in $S$. The furthest-point first algorithm computes a list of $k$ essays, $e\_1,e\_2,\dots,e\_k$, as follows: $e\_{i+1}$ is the essay that maximizes $d(e,\{e\_1,e\_2,\dots,e\_i\})$ (out of all essays $e$ such that $e \notin \{e\_1,e\_2,\dots,e\_i\}$). This algorithms generates a set of $k$ essays that are as dissimilar from each other as possible -- which means that each of the remaining essays is pretty similar to at least one of those $k$. Therefore, it would be reasonable to have the teacher grade the $k$ essays selected by the FPF algorithm.
I suspect either of these approaches might provide more accurate scores than having the teacher grade the top X% and bottom X% of essays -- since the very best and worst essays probably are not representative of the mass of essays in the middle.
In both approaches, you could use a more sophisticated distance function that takes into account not just the strength estimates based upon peer grading but also other factors derived from the essays. The simplest possible distance function would take into account only the result of the Bradley-Terry model, i.e., $d(e\_1,e\_2) = (s(e\_1)-s(e\_2))^2$ where $s(e)$ is the strength of essay $e$ as estimated by the Bradley-Terry model based upon the results of the peer grading. However, you can do something more sophisticated. For instance, you could compute the normalized Levenshtein edit distance between essay $e\_1$ and $e\_2$ (treating them as text strings, computing the edit distance, and dividing by the length of the larger of the two) and use that as another factor in the distance function. You could also compute feature vectors using a bag-of-words model on the words in the essays, and use the L2 distance between these feature vectors (with features normalized using tf-idf) as another factor in the distance function. You might use a distance function that is a weighted average of the difference in strengths (based upon the Bradley-Terry estimates), the normalized edit distance, and anything else that seems helpful. Such a more sophisticated distance function might do a better job of helping the clustering algorithm select which are the best $k$ essays to have the teacher grade.
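To make the furthest-point-first idea concrete, here is a rough Python sketch that uses the simplest distance, the absolute difference in Bradley-Terry strengths (the function and the toy strengths below are purely illustrative):

```
import numpy as np

def furthest_point_first(strengths, k):
    """Greedy FPF selection on one-dimensional strength estimates.
    Returns indices of k essays that are as spread out as possible."""
    strengths = np.asarray(strengths, dtype=float)
    chosen = [int(np.argmax(strengths))]              # any seed works; start with the strongest
    dist_to_chosen = np.abs(strengths - strengths[chosen[0]])
    while len(chosen) < k:
        nxt = int(np.argmax(dist_to_chosen))          # essay furthest from everything chosen so far
        chosen.append(nxt)
        dist_to_chosen = np.minimum(dist_to_chosen, np.abs(strengths - strengths[nxt]))
    return chosen

# toy usage with made-up strengths for 10 essays
print(furthest_point_first([0.1, 0.2, 0.3, 1.5, 1.6, 1.7, 3.0, 3.1, 3.2, 5.0], 3))
```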
I was trying to come up with a formula for the number of swaps used in selection sort. So we know that selection sort gives the minimum number of swaps to sort an array.
The formula I came up with: given an unsorted array and its sorted (descending or ascending) order, we find the number of elements that differ from the sorted array. Subtracting 1 from this number gives the number of swaps.
For example,
Let the array be
[3, 4, 2, 9, 1]
Using selection sort for descending order:
[9,4,2,3,1] --- [9,4,3,2,1], which gives a total of 2 swaps
My logic:
Descending array is [9,4,3,2,1]. So three elements are in incorrect position which are 9, 3 and 2. So, 3-1 = 2 swaps.
Let us consider ascending order:
Selection sort:
[1,4,2,9,3]--[1,2,4,9,3]--[1,2,3,9,4]--[1,2,3,4,9] which gives a total of 4 swaps.
My logic:
Ascending array is [1,2,3,4,9]. So all five elements are in incorrect positions compared to the sorted array, which gives a total swap count of 5 - 1 = 4.
But my logic seems to be incorrect when tested on HackerRank. Could you give me an example where this logic fails? Thanks :) | Consider the following array:
`[5,4,3,2,1]`
Now for `ascending` order, `four` elements are in incorrect positions, `i.e. 5, 4, 2 and 1`
So according to your logic,
`No of swaps = No. of elements at incorrect position - 1` therefore `No. of swaps = 4-1 i.e. 3`
Now, according to `Selection sort`,
`[5,4,3,2,1] Original Array`
`1st Pass: [1,4,3,2,5] i.e. 1 swap`
`2nd Pass: [1,2,3,4,5] i.e. 2 swaps`
We are done, with `Only 2 swaps not 3 swaps`.
Similarly for `Descending` order,
Now, according to `Selection sort`,
`[1,2,3,4,5] Original Array`
`1st Pass: [5,2,3,4,1] i.e. 1 swap`
`2nd Pass: [5,4,3,2,1] i.e. 2 swaps`
We are done, with `Only 2 swaps not 3 swaps`.
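If you want to check other inputs quickly, here is a small Python sketch (mine, just for illustration) that counts the swaps selection sort actually makes:

```
def selection_sort_swaps(a, descending=False):
    """Count the swaps selection sort performs (swapping only when needed)."""
    a = list(a)
    swaps = 0
    for i in range(len(a)):
        best = i
        for j in range(i + 1, len(a)):
            better = a[j] > a[best] if descending else a[j] < a[best]
            if better:
                best = j
        if best != i:
            a[i], a[best] = a[best], a[i]
            swaps += 1
    return swaps

print(selection_sort_swaps([5, 4, 3, 2, 1]))                   # 2, not 3
print(selection_sort_swaps([1, 2, 3, 4, 5], descending=True))  # 2, not 3
```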
Hope this helps ! |
I have a random variable $Y = \frac{e^{X}}{1 + e^{X}}$ and I know $X \sim N(\mu, \sigma^2)$.
Is there a way to compute $\mathbb{E}(Y)$? I have tried to work out the integral, but haven't made much progress. Is it even possible? | As mentioned already in the question comments and answer by @Martijn there doesn't appear to be an analytical solution for $E(Y)$ apart from the special case where $\mu = 0$ which gives $E(Y) = 0.5$.
Additionally, by [Jensen's inequality](https://en.wikipedia.org/wiki/Jensen%27s_inequality) we have that $E(Y) = E(f(X)) < f(E(X))$ if $\mu > 0$ and, conversely, that $E(Y) = E(f(X)) > f(E(X))$ if $\mu < 0$. This is because $f(x) = \frac{e^x}{1 + e^x}$ is convex when $x < 0$ and concave when $x > 0$, and most of the normal density mass will lie in one of those regions depending on the sign of $\mu$.
There are many ways to approximate $E(Y)$; I have detailed a few that I am familiar with and included some R code at the end.
Sampling
========
This is quite easy to understand/implement:
$$
E(Y) = \int\_{-\infty}^{\infty} f(x) \, \mathcal{N}(x \mid \mu, \sigma^2) \, dx \approx \frac{1}{n} \sum\_{i = 1}^{n} f(x\_i)
$$
where we draw samples $x\_1, \ldots, x\_n$ from $\mathcal{N}(\mu, \sigma^2)$.
Numerical integration
=====================
This includes many methods of approximating the integral above - in the code I used R's [integrate](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/integrate.html) function which uses adaptive quadrature.
Unscented transform
===================
See for example [The Unscented Kalman Filter for Nonlinear Estimation](https://www.seas.harvard.edu/courses/cs281/papers/unscented.pdf) by Eric A. Wan and Rudolph van der Merwe which describes:
>
> The unscented transformation (UT) is a method for calculating
> the statistics of a random variable which undergoes
> a nonlinear transformation
>
>
>
The method involves calculating a small number of "sigma points" which are then transformed by $f$ and a weighted mean is taken. This is in contrast to randomly sampling many points, transforming them with $f$ and taking the mean.
This method is much more computationally efficient than random sampling. Unfortunately I couldn't find an R implementation online so haven't included it in the code below.
Code
====
The following code creates data with different values of $\mu$ and fixed $\sigma$. It outputs `f_mu` which is $f(E(X))$, and approximations of $E(Y) = E(f(X))$ via `sampling` and `integration`.
```
integrate_approx <- function(mu, sigma) {
  f <- function(x) {
    plogis(x) * dnorm(x, mu, sigma)
  }
  int <- integrate(f, lower = -Inf, upper = Inf)
  int$value
}

sampling_approx <- function(mu, sigma, n = 1e6) {
  x <- rnorm(n, mu, sigma)
  mean(plogis(x))
}

mu <- seq(-2.0, 2.0, by = 0.5)

data <- data.frame(mu = mu,
                   sigma = 3.14,
                   f_mu = plogis(mu),
                   sampling = NA,
                   integration = NA)

for (i in seq_len(nrow(data))) {
  mu <- data$mu[i]
  sigma <- data$sigma[i]
  data$sampling[i] <- sampling_approx(mu, sigma)
  data$integration[i] <- integrate_approx(mu, sigma)
}
```
output:
```
mu sigma f_mu sampling integration
1 -2.0 3.14 0.1192029 0.2891102 0.2892540
2 -1.5 3.14 0.1824255 0.3382486 0.3384099
3 -1.0 3.14 0.2689414 0.3902008 0.3905315
4 -0.5 3.14 0.3775407 0.4450018 0.4447307
5 0.0 3.14 0.5000000 0.4999657 0.5000000
6 0.5 3.14 0.6224593 0.5553955 0.5552693
7 1.0 3.14 0.7310586 0.6088106 0.6094685
8 1.5 3.14 0.8175745 0.6613919 0.6615901
9 2.0 3.14 0.8807971 0.7105594 0.7107460
```
EDIT
====
I actually found an easy-to-use unscented transform in the Python package [filterpy](https://filterpy.readthedocs.io/en/latest/) (although it is actually quite quick to implement from scratch):
```
import filterpy.kalman as fp
import numpy as np
import pandas as pd
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

m = 9
n = 1
z = 1_000_000
alpha = 1e-3
beta = 2.0
kappa = 0.0

means = np.linspace(-2.0, 2.0, m)
sigma = 3.14

points = fp.MerweScaledSigmaPoints(n, alpha, beta, kappa)

ut = np.empty_like(means)
sampling = np.empty_like(means)

for i, mean in enumerate(means):
    sigmas = points.sigma_points(mean, sigma**2)
    trans_sigmas = sigmoid(sigmas)
    ut[i], _ = fp.unscented_transform(trans_sigmas, points.Wm, points.Wc)

    x = np.random.normal(mean, sigma, z)
    sampling[i] = np.mean(sigmoid(x))

print(pd.DataFrame({"mu": means,
                    "sigma": sigma,
                    "ut": ut,
                    "sampling": sampling}))
```
which outputs:
```
mu sigma ut sampling
0 -2.0 3.14 0.513402 0.288771
1 -1.5 3.14 0.649426 0.338220
2 -1.0 3.14 0.716851 0.390582
3 -0.5 3.14 0.661284 0.444856
4 0.0 3.14 0.500000 0.500382
5 0.5 3.14 0.338716 0.555246
6 1.0 3.14 0.283149 0.609282
7 1.5 3.14 0.350574 0.662106
8 2.0 3.14 0.486598 0.710284
```
So the unscented transform seems to perform quite poorly for these values of $\mu$ and $\sigma$. This is maybe not surprising since the unscented transform attempts to find the best normal approximation to $Y = f(X)$ and in this case it is far from normal:
```
import matplotlib.pyplot as plt
x = np.random.normal(means[0], sigma, z)
plt.hist(sigmoid(x), bins=50)
plt.title("mu = {}, sigma = {}".format(means[0], sigma))
plt.xlabel("f(x)")
plt.show()
```
[![histogram](https://i.stack.imgur.com/PIw6A.png)](https://i.stack.imgur.com/PIw6A.png)
For smaller values of $\sigma$ it seems to be OK. |
First post here, glad I found this exchange. I am in my Operating Systems course at university learning process scheduling. In my homework assignment we have to fill out a simulated scheduling chart along with the corresponding Gantt chart. The arrival time of the first process is at time 0, and its service time is 5 seconds. The second process doesn't arrive until time 7. The time quantum is 6. How do I fill out the Gantt chart if nothing is in the queue? | Your first process runs from time 0 to time 5, since its service time is only 5 seconds (less than the quantum of 6). The CPU is then idle, so leave the Gantt chart blank between times 5 and 7, and the second process, arriving at time 7, is processed next, I think.
I wish to find the posterior of a joint distribution of 4 parameters whose prior and likelihoods are known, but I do not understand how to accept and reject samples, in any other case other than single variable cases used in Metropolis-Hastings.
I am a mathematics undergrad, and I would be really grateful if you could direct me to some good literature.
Thanking you in advance.
PS: This is for a problem in Operations Research (Reliability Theory). If you require a more specific description of the problem statement, please let me know. | The question you may ask first is what defines an "important feature".
Random forest is a supervised learning algorithm: you need to specify a label first, and then the algorithm will tell you which features are more important **with respect to the given label**. In other words, specifying a different label will give different results for variable importance.
Without using the label, an algorithm such as PCA will treat a variable that has **large variance** as important, which is another good starting point. This is intuitive because a variable with large variance usually carries more information, while a variable with zero variance means every observation is the same, and therefore that feature is less useful.
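As a rough illustration of the first point, here is a small Python sketch using scikit-learn (the synthetic data set is just for demonstration): the same features get very different importances once you change the label.

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# toy data: 5 features, only the first 2 are informative for the original label
X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(rf.feature_importances_)   # importance is always *with respect to this label*

# change the label and the ranking changes too
y2 = (X[:, 4] > 0).astype(int)   # a label driven purely by feature 4
rf2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y2)
print(rf2.feature_importances_)
```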
I am looking for a book *(English only)* that I can treat as a reference text *(more colloquially as a bible)* about probability and is as *complete* - with respect to an undergraduate/graduate education in Mathematics - as possible. What I mean by that is that the book should contain and rigorously address the following topics:
* Measure Theory (As a mathematical foundation for probability)
+ It is of course fine if this theory is addressed with an emphasis on probability and not only for the sake of mathematical measure theory, although the latter would be great too.
* Introduction to Probability, i.e. the most common theory a student is exposed to when taking a first course in theoretic Probability. For example: distributions, expected value, modes of convergence, Borel Cantelli Lemmas, LLN, CLT, Gaussian Random Vectors
* More advanced topics such as: Conditional Expectation (defined through sigma-Algebras), Martingales, Markov Processes, Brownian Motion
I want it to be one book so I can carry a physical copy of it with me and work through the material in my spare time.
**Examples:**
1. The book by Jean-Francois Le Gall which can be found here: <https://www.math.u-psud.fr/~jflegall/IPPA2.pdf> but (unfortunately for me) is written in French.
2. Rick Durrett's book on Probability which can be found here <https://services.math.duke.edu/~rtd/PTE/pte.html> - the [critique available](http://rads.stackoverflow.com/amzn/click/0521765390) for this book seems a bit mixed; I am uncertain how to weigh that.
I am well aware that it's not easy to meet all of the above criteria simultaneously, but I would be grateful for any recommendation. | [Foundations of Modern Probability](http://rads.stackoverflow.com/amzn/click/0387953132) by Olav Kallenberg meets all your criteria. It is quite concise and mathematically rigorous and as one reviewer puts it "without any non-mathematical distractions". |
This was an exam question for my course and I am struggling to actually answer it in a way that is not fluff.
Here is my current answer:
*CFGs describe how non-terminal symbols are converted into terminal symbols via a parser. However, a scanner defines what those terminal symbols convert to in terms of lexical tokens. CFGs are grammatical descriptions of a language instead of simply defining what tokens should be scanned from an input string.*
**What is the correct way to answer this?** | You don't use CFGs for lexical analysis because token classes can typically be described by regular expressions and recognized by finite automata, which are simpler and faster than context-free parsers. It's a question of efficiency.
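To see the point concretely, a toy scanner only needs regular expressions for its token classes; no grammar or parser is involved at this level (the Python below is just an illustration, not part of any particular compiler):

```
import re

# Token definitions are regular expressions -- no context-free grammar needed here.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join("(?P<%s>%s)" % (name, pattern) for name, pattern in TOKEN_SPEC))

def tokenize(text):
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("x1 = 3 + y")))
# [('IDENT', 'x1'), ('OP', '='), ('NUMBER', '3'), ('OP', '+'), ('IDENT', 'y')]
```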
Suppose there is an algorithm whose running time is $O(n^2)$. Suppose we apply a preprocessing step to the algorithm in $O(n)$ time so that it reduces the input size to $O(\sqrt{n})$ but doesn't affect the true answer of the algorithm. It is just for optimization. Now is it true to say that the running time of the algorithm is $O(n)$ when preprocessing is applied? Because $O(n^2)$ becomes $O((\sqrt{n})^2)$, which is $O(n)$. | Indisputably, the total time is the $O(n)$ preprocessing plus the $O(n^2)$ algorithm run on the reduced input of size $c\sqrt n$:
$$O(n)+O((c\sqrt n)^2)=O(n).$$ |
To explore how the `LASSO` regression works, I wrote a small piece of code that should optimize `LASSO` regression by picking the best alpha parameter.
I cannot figure out why the `LASSO` regression is giving me such unstable results for the alpha parameter after cross validation.
Here is my Python code:
```
from sklearn.linear_model import Lasso
from sklearn.cross_validation import KFold
from matplotlib import pyplot as plt
# generate some sparse data to play with
import numpy as np
import pandas as pd
from scipy.stats import norm
from scipy.stats import uniform
### generate your own data here
n = 1000
x1x2corr = 1.1
x1x3corr = 1.0
x1 = range(n) + norm.rvs(0, 1, n) + 50
x2 = map(lambda aval: aval*x1x2corr, x1) + norm.rvs(0, 2, n) + 500
y = x1 + x2 #+ norm.rvs(0,10, n)
Xdf = pd.DataFrame()
Xdf['x1'] = x1
Xdf['x2'] = x2
X = Xdf.as_matrix()
# Split data in train set and test set
n_samples = X.shape[0]
X_train, y_train = X[:n_samples / 2], y[:n_samples / 2]
X_test, y_test = X[n_samples / 2:], y[n_samples / 2:]
kf = KFold(X_train.shape[0], n_folds = 10, )
alphas = np.logspace(-16, 8, num = 1000, base = 2)
e_alphas = list()
e_alphas_r = list() # holds average r2 error
for alpha in alphas:
    lasso = Lasso(alpha=alpha, tol=0.004)
    err = list()
    err_2 = list()
    for tr_idx, tt_idx in kf:
        X_tr, X_tt = X_train[tr_idx], X_test[tt_idx]
        y_tr, y_tt = y_train[tr_idx], y_test[tt_idx]
        lasso.fit(X_tr, y_tr)
        y_hat = lasso.predict(X_tt)
        # returns the coefficient of determination (R^2 value)
        err_2.append(lasso.score(X_tt, y_tt))
        # returns MSE
        err.append(np.average((y_hat - y_tt)**2))
    e_alphas.append(np.average(err))
    e_alphas_r.append(np.average(err_2))
## print out the alpha that gives the minimum error
print 'the minimum value of error is ', e_alphas[e_alphas.index(min(e_alphas))]
print ' the minimizer is ', alphas[e_alphas.index(min(e_alphas))]
## <<< plotting alphas against error >>>
plt.figsize = (15, 15)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(alphas, e_alphas, 'b-')
ax.plot(alphas, e_alphas_r, 'g--')
ax.set_ylim(min(e_alphas),max(e_alphas))
ax.set_xlim(min(alphas),max(alphas))
ax.set_xlabel("alpha")
plt.show()
```
If you run this code repeatedly, it gives wildly different results for alpha:
```
>>>
the minimum value of error is 3.99254192539
the minimizer is 1.52587890625e-05
>>> ================================ RESTART ================================
>>>
the minimum value of error is 4.07412455842
the minimizer is 6.45622425334
>>> ================================ RESTART ================================
>>>
the minimum value of error is 4.25898253597
the minimizer is 1.52587890625e-05
>>> ================================ RESTART ================================
>>>
the minimum value of error is 3.79392968781
the minimizer is 28.8971008254
>>>
```
Why is the alpha value not converging properly? I know that my data is synthetic, but the distribution is the same. Also, the variation is very small in `x1` and `x2`.
what could be causing this to be so unstable?
The same thing written in R gives different results - it always returns the highest possible value for alpha as the "optimal\_alpha".
-------------------------------------------------------------------------------------------------------------------------------------
I also wrote this in R, which gives me a slightly different answer, and I don't know why.
```
library(glmnet)
library(lars)
library(pracma)
set.seed(1)
k = 2 # number of features selected
n = 1000
x1x2corr = 1.1
x1 = seq(n) + rnorm(n, 0, 1) + 50
x2 = x1*x1x2corr + rnorm(n, 0, 2) + 500
y = x1 + x2
filter_out_label <- function(col) {col!="y"}
alphas = logspace(-5, 6, 100)
for (alpha in alphas){
  k = 10
  optimal_alpha = NULL
  folds <- cut(seq(1, nrow(df)), breaks=k, labels=FALSE)
  total_mse = 0
  min_mse = 10000000

  for(i in 1:k){
    # Segement your data by fold using the which() function
    testIndexes <- which(folds==i, arr.ind=TRUE)
    testData <- df[testIndexes, ]
    trainData <- df[-testIndexes, ]

    fit <- lars(as.matrix(trainData[Filter(filter_out_label, names(df))]),
                trainData$y,
                type="lasso")
    # predict
    y_preds <- predict(fit, as.matrix(testData[Filter(filter_out_label, names(df))]),
                       s=alpha, type="fit", mode="lambda")$fit # default mode="step"

    y_true = testData$y
    residuals = (y_true - y_preds)
    mse=sum(residuals^2)
    total_mse = total_mse + mse
  }
  if (total_mse < min_mse){
    min_mse = total_mse
    optimal_alpha = alpha
  }
}
print(paste("the optimal alpha is ", optimal_alpha))
```
The output from the R code above is:
```
> source('~.....')
[1] "the optimal alpha is 1e+06"
```
In fact, no matter what I set for the line "`alphas = logspace(-5, 6, 100)`", I always get back the highest value for alpha.
I guess there are actually two different questions here:
1. Why is the alpha value so unstable for the version written in Python?
2. Why does the version written in R give me a different result? (I realize that the `logspace` function is different from `R` to `python`, but the version written in `R` always gives me the largest value of `alpha` for the optimal alpha value, whereas the python version does not).
It would be great to know these things... | I don't know python very well, but I did find one problem with your R code.
You have the 2 lines:
```
residuals = sum(y_true - y_preds)
mse=residuals^2
```
This sums the residuals and then squares the sum, which is very different from squaring the residuals and then summing them (which the Python code appears to do correctly). I suspect that this may be a big part of the difference between the R code and the Python code. Fix the R code and run it again to see if it behaves more like the Python code.
I would also suggest that, instead of just saving the "best" alpha and the corresponding MSE, you store all of them and plot the relationship. It could be that for your setup there is a region that is quite flat, so that the difference between the MSE at different points is not very big. If this is the case, then very minor changes to the data (even the order in the cross-validation) can change which point, among many that are essentially the same, gives the minimum. A flat region around the optimum will often lead to what you are seeing, and a plot of all the alpha values against the corresponding MSE values could be enlightening.
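To make the "store all of them and plot the relationship" suggestion concrete, here is a rough Python sketch (using scikit-learn's `LassoCV` instead of the hand-rolled loop; the synthetic data roughly mirrors the question's setup) that plots the mean cross-validated MSE over the whole alpha grid, so a flat region around the minimum becomes visible:

```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LassoCV

# synthetic data similar to the question's setup
rng = np.random.RandomState(0)
n = 1000
x1 = np.arange(n) + rng.normal(0, 1, n) + 50
x2 = 1.1 * x1 + rng.normal(0, 2, n) + 500
X = np.column_stack([x1, x2])
y = x1 + x2

alphas = np.logspace(-16, 8, num=200, base=2)
model = LassoCV(alphas=alphas, cv=10).fit(X, y)

# mse_path_ has one row per alpha and one column per fold
mean_mse = model.mse_path_.mean(axis=1)
plt.plot(model.alphas_, mean_mse)
plt.xscale("log")
plt.xlabel("alpha")
plt.ylabel("mean CV MSE")
plt.title("chosen alpha = {:.4g}".format(model.alpha_))
plt.show()
```

If the curve is nearly flat over several orders of magnitude, the "optimal" alpha will jump around from run to run even though the fitted models are essentially equivalent.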
I was thinking about how nature can efficiently compute ridiculous (i.e. NP) problems with ease. For example, a quantum system requires a $2^n$ element vector to represent the state, where $n$ is just the number of particles. Nature doesn't need any extra time despite the exponential nature of "solving" this $n$-particle system.
This may not be a wholly valid assumption, but the action principle in physics makes me think that nature always wants to do things the easiest way. If that's not true, then this question is probably moot.
If we found that nature was NOT capable of solving some problems efficiently, does this mean we are doomed in terms of being able to solve NP problems in polynomial time? Are the laws of physics a strong enough weapon for tackling P vs. NP? Is the converse of the first question/assertion also true (if nature can do it, then there must be a way for us to as well)? | Here are five remarks that might be helpful to you:
1. The current belief is that, despite the exponentiality of the wavefunction, quantum mechanics will *not* let us solve NP-complete problems in polynomial time (though it famously *does* let us solve certain "special" NP problems, like factoring and discrete logarithms). The basic difficulty is that, even if a solution to an NP problem is "somewhere" in the wavefunction, that isn't useful if a measurement will only reveal that solution with exponentially-small probability. To get a *useful* quantum algorithm, you need to use quantum interference to make the correct answer observed with high probability, and it's only known how to get an exponential speedup that way (compared to the best-known classical algorithm) for a few special problems like factoring.
2. The action principle doesn't imply that Nature has any magical minimization powers. The easiest way to see that is that any physical law formulated in terms of the action principle, can *also* be formulated in terms of the ordinary time-evolution of a state, without reference to anything being minimized.
3. If P=NP, then certainly NP-complete problems can be solved in polynomial time in the physical universe, since universal Turing computers exist (you're using one now). However, the converse direction is far from obvious! For example, even if you assume P≠NP, it's still *logically* possible (if very unlikely) that quantum computers could solve NP-complete problems in polynomial time.
4. The mere assumption that there are *some* problems that we can't solve efficiently, certainly doesn't imply that NP-complete problems have to be among those problems! (Maybe it will turn out that quantum gravity lets us solve NP-complete problems in linear time, but the *PSPACE*-complete problems still take exponential time... :-D )
5. For whatever it's worth, my money is firmly on the conjecture not only that P≠NP, but also that NP-complete problems are intractable in the physical universe---using quantum computers, analog computers, "black hole computers," or any other resource. For more about my reasons why, you might enjoy my old survey article [NP-complete Problems and Physical Reality](http://www.scottaaronson.com/papers/npcomplete.pdf) |
I am asked to prove that every Euclidean model satisfies $\diamond \diamond \diamond \varphi \to \diamond \diamond \varphi$. How can this be done? I don't see how it could even be true.
Now, suppose $w\vDash\Diamond\Diamond\Diamond\phi$, then we know that $w\mathbin R v$ for some world $v$ such that $v\vDash \Diamond\Diamond\phi$. This, in turn, means that $v\mathrel R u$ for some world $u$ such that $u\vDash \Diamond\phi$. By the above argument we see that $v\mathbin R v$, since $v$ is reachable from the world $w$. Thus we have $v\mathrel R v$ and $v\mathrel R u$, which implies that $u\mathrel R v$ by the Euclidean property. Furthermore, we have $u\mathrel R u$ as well, since $u$ is reachable from $v$.
The final step is to see that because $u\vDash \Diamond \phi$, there is some $z$ such that $u\mathrel R z$ and $z\vDash\phi$. By the same reasoning as before, we can find out that not only $u\mathrel R z$, but also $v\mathrel R z$. This gives us that $w\mathrel R v\mathrel R z$, and thus $w\vDash \Diamond\Diamond\phi$.
---
As a sidenote, the way to see Euclidean frames, is as a bunch of "clusters" of worlds, where a cluster is a group of worlds that are completely connected with each other (i.e. each world in the cluster can reach every other world in the cluster), with (optionally) some "outside" worlds that reach any number (including 0) of the worlds inside one (and only one) cluster, but are not reachable from any other world.
An example in a picture, with the "clusters" in dots connected with red arrows, and the "outside" worlds those dots with blue arrows leaving them:
[![enter image description here](https://i.stack.imgur.com/FJAN2.png)](https://i.stack.imgur.com/FJAN2.png) |
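If you want an empirical sanity check of the validity, here is a small Python sketch (brute-force model checking over randomly generated Euclidean frames and valuations; entirely illustrative) that searches for counterexamples to $\Diamond\Diamond\Diamond\phi \to \Diamond\Diamond\phi$:

```
import itertools
import random

def euclidean_closure(pairs, worlds):
    """Close a relation under the Euclidean property: if xRy and xRz then yRz."""
    R = set(pairs)
    changed = True
    while changed:
        changed = False
        for x, y, z in itertools.product(worlds, repeat=3):
            if (x, y) in R and (x, z) in R and (y, z) not in R:
                R.add((y, z))
                changed = True
    return R

def diamond(R, worlds, S):
    """Worlds with at least one successor in S (the semantics of the diamond)."""
    return {w for w in worlds if any((w, v) in R for v in S)}

random.seed(0)
worlds = list(range(5))
for _ in range(500):
    seed = {(random.choice(worlds), random.choice(worlds)) for _ in range(4)}
    R = euclidean_closure(seed, worlds)
    phi = {w for w in worlds if random.random() < 0.5}  # a random valuation of phi
    ddd = diamond(R, worlds, diamond(R, worlds, diamond(R, worlds, phi)))
    dd = diamond(R, worlds, diamond(R, worlds, phi))
    assert ddd <= dd, "counterexample found"
print("no counterexamples in 500 random Euclidean models")
```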
### Background and Empirical Example
I have two studies; I ran an experiment (Study 1) and then replicated it (Study 2). In Study 1, I found an interaction between two variables; in Study 2, this interaction was in the same direction but not significant. Here is the summary for Study 1's model:
```
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.75882 0.26368 21.840 < 2e-16 ***
condSuppression -1.69598 0.34549 -4.909 1.94e-06 ***
prej -0.01981 0.08474 -0.234 0.81542
condSuppression:prej 0.36342 0.11513 3.157 0.00185 **
```
And Study 2's model:
```
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.24493 0.24459 21.444 <2e-16 ***
prej 0.13817 0.07984 1.731 0.0851 .
condSuppression -0.59510 0.34168 -1.742 0.0831 .
prej:condSuppression 0.13588 0.11889 1.143 0.2545
```
Instead of saying, "I guess I don't have anything, because I 'failed to replicate,'" what I did was combine the two data sets, create a dummy variable for which study the data came from, and then run the interaction again after controlling for the study dummy variable. This interaction was significant even after controlling for it, and I found that this two-way interaction between condition and dislike/prej was not qualified by a three-way interaction with the study dummy variable.
### Introducing Bayesian Analysis
I had someone suggest that this is a great opportunity to use Bayesian analysis: In Study 2, I have information from Study 1 that I can use as prior information! In this way, Study 2 is doing a Bayesian updating from the frequentist, ordinary least squares results in Study 1. So, I go back and re-analyze the Study 2 model, now using informative priors on the coefficients: All the coefficients had a normal prior where the mean was the estimate in Study 1 and the standard deviation was the standard error in Study 1.
This is a summary of the result:
```
Estimates:
mean sd 2.5% 25% 50% 75% 97.5%
(Intercept) 5.63 0.17 5.30 5.52 5.63 5.74 5.96
condSuppression -1.20 0.20 -1.60 -1.34 -1.21 -1.07 -0.80
prej 0.02 0.05 -0.08 -0.01 0.02 0.05 0.11
condSuppression:prej 0.34 0.06 0.21 0.30 0.34 0.38 0.46
sigma 1.14 0.06 1.03 1.10 1.13 1.17 1.26
mean_PPD 5.49 0.11 5.27 5.41 5.49 5.56 5.72
log-posterior -316.40 1.63 -320.25 -317.25 -316.03 -315.23 -314.29
```
It looks like now we have pretty solid evidence for an interaction from the Study 2 analysis. This agrees with what I did when I simply stacked the data on top of one another and ran the model with study number as a dummy-variable.
### Counterfactual: What If I Ran Study 2 First?
That got me thinking: What if I had run Study 2 first and then used the data from Study 1 to update my beliefs on Study 2? I did the same thing as above, but in reverse: I re-analyzed the Study 1 data using the frequentist, ordinary least squares coefficient estimates and standard deviations from Study 2 as prior means and standard deviations for my analysis of Study 1 data. The summary results were:
```
Estimates:
mean sd 2.5% 25% 50% 75% 97.5%
(Intercept) 5.35 0.17 5.01 5.23 5.35 5.46 5.69
condSuppression -1.09 0.20 -1.47 -1.22 -1.09 -0.96 -0.69
prej 0.11 0.05 0.01 0.08 0.11 0.14 0.21
condSuppression:prej 0.17 0.06 0.05 0.13 0.17 0.21 0.28
sigma 1.10 0.06 0.99 1.06 1.09 1.13 1.21
mean_PPD 5.33 0.11 5.11 5.25 5.33 5.40 5.54
log-posterior -303.89 1.61 -307.96 -304.67 -303.53 -302.74 -301.83
```
Again, we see evidence for an interaction; however, this might not necessarily have been the case. Note that the point estimates from the two Bayesian analyses aren't even inside one another's 95% credible intervals; the two credible intervals have more non-overlap than overlap.
### What Is The Bayesian Justification For Time Precedence?
My question is thus: What is the justification that Bayesians have for respecting the chronology of how the data were collected and analyzed? I get results from Study 1 and use them as informative priors in Study 2 so that I use Study 2 to "update" my beliefs. But if we assume that the results I get are randomly taken from a distribution with a true population effect... then why do I privilege the results from Study 1? What is the justification for using Study 1 results as priors for Study 2 instead of taking Study 2 results as priors for Study 1? Does the order in which I collected the data and ran the analyses really matter? It does not seem like it should to me, so what is the Bayesian justification for this? Why should I believe the point estimate is closer to .34 than it is to .17 just because I ran Study 1 first?
---
### Responding to Kodiologist's Answer
Kodiologist remarked:
>
> The second of these points to an important departure you have made from Bayesian convention. You didn't set a prior first and then fit both models in Bayesian fashion. You fit one model in a non-Bayesian fashion and then used that for priors for the other model. If you used the conventional approach, you wouldn't see the dependence on order that you saw here.
>
>
>
To address this, I fit the models for Study 1 and Study 2 where all regression coefficients had a prior of $\text{N}(0, 5)$. The `cond` variable was a dummy variable for experimental condition, coded 0 or 1; the `prej` variable, as well as the outcome, were both measured on 7-point scales ranging from 1 to 7. Thus, I think it is a fair choice of prior. Just by how the data are scaled, it would be very, very rare to see coefficients much larger than what that prior suggests.
The mean estimates and standard deviation of those estimates are about the same as in the OLS regression. Study 1:
```
Estimates:
mean sd 2.5% 25% 50% 75% 97.5%
(Intercept) 5.756 0.270 5.236 5.573 5.751 5.940 6.289
condSuppression -1.694 0.357 -2.403 -1.925 -1.688 -1.452 -0.986
prej -0.019 0.087 -0.191 -0.079 -0.017 0.040 0.150
condSuppression:prej 0.363 0.119 0.132 0.282 0.360 0.442 0.601
sigma 1.091 0.057 0.987 1.054 1.088 1.126 1.213
mean_PPD 5.332 0.108 5.121 5.259 5.332 5.406 5.542
log-posterior -304.764 1.589 -308.532 -305.551 -304.463 -303.595 -302.625
```
And Study 2:
```
Estimates:
mean sd 2.5% 25% 50% 75% 97.5%
(Intercept) 5.249 0.243 4.783 5.082 5.246 5.417 5.715
condSuppression -0.599 0.342 -1.272 -0.823 -0.599 -0.374 0.098
prej 0.137 0.079 -0.021 0.084 0.138 0.192 0.287
condSuppression:prej 0.135 0.120 -0.099 0.055 0.136 0.214 0.366
sigma 1.132 0.056 1.034 1.092 1.128 1.169 1.253
mean_PPD 5.470 0.114 5.248 5.392 5.471 5.548 5.687
log-posterior -316.699 1.583 -320.626 -317.454 -316.342 -315.561 -314.651
```
Since these means and standard deviations are more or less the same as the OLS estimates, the order effect above still occurs. If I plug the posterior summary statistics from Study 1 into the priors when analyzing Study 2, I observe a different final posterior than when analyzing Study 2 first and then using those posterior summary statistics as priors for analyzing Study 1.
Even when I use the Bayesian means and standard deviations for the regression coefficients as priors instead of the frequentist estimates, I would still observe the same order effect. So the question remains: What is the Bayesian justification for privileging the study that came first? | First I should point out that:
1. In your significance-testing approach, you followed up a negative result with a different model that gave you another chance to get a positive result. Such a strategy increases your project-wise type-I error rate. Significance-testing requires choosing your analytic strategy in advance for the $p$-values to be correct.
2. You're putting a lot of faith in the results of Study 1 by translating your findings from that sample so directly into priors. Remember, a prior is not just a reflection of past findings. It needs to encode the entirety of your preexisting beliefs, including your beliefs before the earlier findings. If you admit that Study 1 involved sampling error as well as other kinds of less tractable uncertainty, such as model uncertainty, you should be using a more conservative prior.
The second of these points to an important departure you have made from Bayesian convention. You didn't set a prior first and then fit both models in Bayesian fashion. You fit one model in a non-Bayesian fashion and then used that for priors for the other model. If you used the conventional approach, you wouldn't see the dependence on order that you saw here. |
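To see why the order stops mattering once a single prior is fixed in advance, here is a toy Python sketch (a conjugate normal-mean model with known variance, not your regression; the numbers are made up purely for illustration) showing that updating on Study 1 and then Study 2 gives the same posterior as the reverse order:

```
import numpy as np

def update(prior_mean, prior_var, data, noise_var):
    """Conjugate update for the mean of a normal with known noise variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / noise_var)
    return post_mean, post_var

rng = np.random.RandomState(1)
study1 = rng.normal(0.35, 1.0, size=100)   # stand-ins for the two data sets
study2 = rng.normal(0.15, 1.0, size=100)

prior = (0.0, 25.0)                        # one prior, fixed before seeing either study
a = update(*update(*prior, study1, 1.0), study2, 1.0)   # Study 1 first, then Study 2
b = update(*update(*prior, study2, 1.0), study1, 1.0)   # Study 2 first, then Study 1
print(a)   # the two orderings agree (up to floating-point rounding)
print(b)
```

The asymmetry you observed comes entirely from replacing that fixed prior with whatever the first analysis happened to produce.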