input | target
---|---|
>
> Design a linear time algorithm for finding the least positive integer missing from an unsorted array. Changes in the array are allowed.
>
>
> For example, for the array -10,-1,2,3,6,30, the answer is 1.
>
>
>
I thought about finding the maximum and minimum numbers in the array (this is $\Theta(n)$), then running a loop from the minimum number to the maximum number and, in every iteration, checking whether the current number is in the array, but this would take $\Theta(n^2)$. | Hint (if extra memory is allowed): It is enough to determine which of the elements $1,\ldots,n$ are in the input array. Using an auxiliary array, you can accomplish this in $O(n)$. |
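A minimal Python sketch of the auxiliary-array idea from the hint above (the function name is just illustrative):

```python
def least_missing_positive(a):
    # The answer is always in 1..n+1, so only values in that range matter.
    n = len(a)
    seen = [False] * (n + 2)       # auxiliary array; O(n) extra memory
    for x in a:
        if 1 <= x <= n:
            seen[x] = True
    for i in range(1, n + 2):      # the first unmarked value is the answer
        if not seen[i]:
            return i

print(least_missing_positive([-10, -1, 2, 3, 6, 30]))  # 1
```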
I was reading a few notes on Proof by Restriction and I am confused:
A Valid Proof by Restriction is the following:
Directed Hamiltonian Cycle (DHC) is NP-complete because, if we look only at *instances* of DHC where, for $G=(V,E)$, $(u,v)\in E \leftrightarrow (v,u) \in E$, then it reduces to the (undirected) Hamiltonian Cycle problem, which we know is NP-complete.
A *wrong* proof is the following:
>
> **Subset Sum problem**
>
> INSTANCE: Integers $a\_1, a\_2,…,a\_n$ and integer B.
>
>
> QUESTION: Is there a sequence of 0’s and 1’s, $x\_1, x\_2,…,x\_n$ such that:
> $$\sum\_{i=1}^n a\_ix\_i \leq B$$
>
>
>
is a special case of
>
> **Real Subset Problem**
> INSTANCE: Integers $a\_1, a\_2,…,a\_n$ and integer B.
>
>
> QUESTION: Is there a sequence of real numbers $x\_1, x\_2,…,x\_n$ such that:
> $$\sum\_{i=1}^n a\_ix\_i \leq B$$
>
>
>
so it is NP Complete.
My notes say that this proof is wrong since it restricts the question and not the instances, but I don't seem to understand the difference.
Further, I can't really understand how Proof by Restriction works; for all I know I could be restricting an NP Complete problem to a trivial case which can be solved in Polynomial time. | Think about the set of all possible instances of DHC. A subset of these instances are those where, for every directed edge $(u, v)$, there is always a matching directed edge $(v, u)$. (In general, this doesn't have to be the case, but it CAN be the case, which is why this is a valid restriction.)
Now think about the set of all possible instances of SubsetSum. For each such instance, you're supposed to answer with a set of 0/1-valued $x\_i$. By your first definition, there are NO valid answers that include a real number in the $x\_i$. So, when you suddenly allow real-valued solutions in the second version of SubsetSum, you're **relaxing** the problem, not restricting it. (You're giving yourself more leeway by allowing more possible solutions.) |
This is actually a problem that our professor gave us, and I'm clueless of how to answer this. I browsed through various sources, but none were helpful regarding this question.
The question is,
>
> In the definition of the semantics of logic, P implies Q is defined to be
> true under the assignment in which both P and Q are false. Although this is
> rather unusual at first glance, explain what issue would arise in the logic
> if it were defined differently.
>
>
>
Any helpful answer is highly appreciated. | This is a soft question, IMO: it assumes there's common ground about what "should be true" independent of definitions, but it doesn't clarify what those assumptions are. If I were to break it down, I'd say it's looking for a particular bit of reasoning (e.g. a proof or an inference rule) to hold and then asking you to show that that reasoning would be invalid if we defined the semantics of implication differently. In essence it's asking whether there exists a proof that would be invalid under this new model.
Consider $\neg (Q \vee \neg Q) \to \neg (Q \vee \neg Q)$, which is provable and should be valid in all models (no matter what we assign to $Q$, that is). As a general rule of reasoning it should always hold that $P \to P$, no matter what. I posit this is sufficiently fundamental common ground for "what should be true". So if we find this reasoning invalid, we have our soft contradiction. We can construct instances of $P$ for which $P$ is certainly false, as shown above. So despite the very reasonable proof of $\neg (Q \vee \neg Q) \to \neg (Q \vee \neg Q)$, that formula would actually be false under the model... in fact, its negation would be valid in the model! |
Suppose $x$ is a random binary variable with values $\{0, 1\}$, and $E[x] > 0.5$. Is it true that, for a random sample $S$ of $x$, $P[\mu\_S(x) > 0.5] > 0.5$? In other words, if the expectation of a binary variable is above 0.5, is the mean of a random sample also more likely to be above 0.5?
This is a justification of kNN. We take the sample mean, round it, and output it as the answer for a given point. If the sample mean is not representative of the expectation, then what is the point of kNN? | Assuming your random sample consists of $n$ independent $X\_i$'s, their sum $s(x)=\sum X\_i$ is binomial with parameters $(n,p=E[X])$, and $P(\mu\_s(x)>0.5)=P(s(x)>0.5n)$.
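A quick way to check such cases numerically (a sketch assuming SciPy is available); it reproduces the counterexample given next:

```python
from scipy.stats import binom

def prob_sample_mean_above_half(n, p):
    # P(mean of n i.i.d. Bernoulli(p) draws > 0.5) = P(S > n/2), S ~ Binomial(n, p)
    return binom.sf(n / 2, n, p)

print(prob_sample_mean_above_half(4, 0.51))    # ~0.328, i.e. below 0.5
print(prob_sample_mean_above_half(101, 0.51))  # above 0.5 for this larger n
```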
Here, let $n=4,p=0.51$, then $P(s(x)>2)\approx 0.328 <0.5$. So, although it is intuitive, it's not correct for all $n,E[X]$. |
I'm trying to find examples of languages that don't seem regular, but are. A reference to where such examples may be found is also appreciated.
So far I've found two. One is $L\_1=\{a^ku\,\,|\,\,u\in \{a,b\}^∗$ and $u$ contains at least $k$ $a$'s, for $k\geq 1\}$, from [this post](https://cs.stackexchange.com/questions/8991/is-this-language-regular-or-not), and the other is $L\_2 = \{uww^rv\,\,|\,\, u,w,v\in\{a,b\}^+\}$, which is an exercise (exercise 19 from section 4.3) in [An Introduction to Formal Languages and Automata](https://vdoc.pub/documents/an-introduction-to-formal-languages-and-automata-2frnr178t2e0) by Peter Linz.
I suppose the aspect of seeming to be regular depends on your familiarity with the topic, but, for me, I would have said that those languages were not regular at a first glance. The trick seems to be to write a simple language in more complicated terms, like using $ww^R$, which reminds us of the irregular language of even length palindromes.
I'm not looking for extremely complicated ways of expressing a regular language, just some examples where the definition of the language seems to rely on concepts that usually make a language irregular, but are then "absorbed" by the other terms in the definition. | My favorite example of this, which is often used as a difficult/tricky exercise, is the language:
$$L=\{w\in \{0,1\}^\*:w \text{ has an equal number of } 01\text{ and }10\}$$
This has the strong flavor of the non-regular "same number of $0$ and $1$", but the alternation of $0$ and $1$ makes it regular nonetheless. |
I'm in the process of preparing to teach
an introductory course on data science using the R programming language.
My audience is undergraduate students majoring in business subjects.
A typical business undergrad does not have any computer programming experience,
but has taken a few classes which use Excel.
Personally, I am very comfortable with R (or other programming languages)
because I majored in computer science.
However, I have the feeling that many of my students
will feel wary of learning a programming language
because it may seem difficult to them.
I do have some familiarity with Excel,
and it is my belief that while Excel can be useful for simple data science,
it is necessary for students to learn
a serious programming language for data science (e.g., R or Python).
How do I convince myself and the students
that Excel is insufficient
for a serious business student studying data science,
and that it is necessary for them to learn some programming?
### Edited in response to comment
Here are some of the topics that I will be covering:
* Data processing and data cleaning
* How to manipulate a data table,
e.g., select a subset of rows (filter),
add new variables (mutate),
sort rows by columns
* SQL joins using the [dplyr](https://cran.r-project.org/web/packages/dplyr/dplyr.pdf) package
* How to draw plots (scatter plots, bar plots, histograms etc.)
using the [ggplot2](https://cran.r-project.org/web/packages/ggplot2/index.html) package
* How to estimate and interpret statistical models such as
linear regression, logistic regression,
classification trees, and k-nearest neighbors
Because I don't know Excel very well,
I don't know whether all of these tasks can be done easily in Excel. | Excel and Data Science - sounds really strange to me. Maybe Excel and 'Data Analysis'.
Anyway, I think a good compromise between Excel and R is KNIME (<http://www.knime.org/knime-analytics-platform>). It's free on the desktop and much easier to get started with. You can import/export to Excel but also use R, Python or Java if the ~1,000 nodes are missing some functionality that you need. Since the workflows are created visually, it's also much easier to show them to someone who doesn't know any programming languages - which is quite an advantage in some companies. |
I want to better understand the R packages [`Lars`](http://www.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf) and [`Glmnet`](http://www.stanford.edu/~hastie/Papers/glmnet.pdf), which are used to solve the Lasso problem:
$$\min\_{(\beta\_0, \beta) \in \mathbb{R}^{p+1}} \left[\frac{1}{2N}\sum\_{i=1}^{N}(y\_i-\beta\_0-x\_i^T\beta)^2 + \lambda||\beta ||\_{l\_{1}} \right]$$
(for $p$ Variables and $N$ samples, see [www.stanford.edu/~hastie/Papers/glmnet.pdf](http://www.stanford.edu/~hastie/Papers/glmnet.pdf) on page 3)
Therefore, I applied them both on the same toy dataset. Unfortunately, the two methods do not give the same solutions for the same data input. Does anybody have an idea where the difference comes from?
I obtained the results as follows: After generating some data (8 samples, 12 features, Toeplitz design, everything centered), I computed the whole Lasso path using Lars. Then, I ran Glmnet using the sequence of lambdas computed by Lars (multiplied by 0.5) and hoped to obtain the same solution, but I did not.
One can see that the solutions are similar. But how can I explain the differences? Please find my code below. There is a related question here: [GLMNET or LARS for computing LASSO solutions?](https://stats.stackexchange.com/questions/7057/glmnet-or-lars-for-computing-lasso-solutions) , but it does not contain the answer to my question.
Setup:
```
# Load packages.
library(lars)
library(glmnet)
library(MASS)
# Set parameters.
nb.features <- 12
nb.samples <- 8
nb.relevant.indices <- 3
snr <- 1
nb.lambdas <- 10
# Create data, not really important.
sigma <- matrix(0, nb.features, nb.features)
for (i in (1:nb.features)) {
for (j in (1:nb.features)) {
sigma[i, j] <- 0.99 ^ (abs(i - j))
}
}
x <- mvrnorm(n=nb.samples, rep(0, nb.features), sigma, tol=1e-6, empirical=FALSE)
relevant.indices <- sample(1:nb.features, nb.relevant.indices, replace=FALSE)
x <- scale(x)
beta <- rep(0, times=nb.features)
beta[relevant.indices] <- runif(nb.relevant.indices, 0, 1)
epsilon <- matrix(rnorm(nb.samples),nb.samples, 1)
simulated.snr <-(norm(x %*% beta, type="F")) / (norm(epsilon, type="F"))
epsilon <- epsilon * (simulated.snr / snr)
y <- x %*% beta + epsilon
y <- scale(y)
```
lars:
```
la <- lars(x, y, intercept=TRUE, max.steps=1000, use.Gram=FALSE)
co.lars <- as.matrix(coef(la, mode="lambda"))
print(round(co.lars, 4))
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
# [1,] 0.0000 0 0 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
# [2,] 0.0000 0 0 0.0000 0.0000 0.1735 0.0000 0.0000 0.0000 0.0000
# [3,] 0.0000 0 0 0.2503 0.0000 0.4238 0.0000 0.0000 0.0000 0.0000
# [4,] 0.0000 0 0 0.1383 0.0000 0.7578 0.0000 0.0000 0.0000 0.0000
# [5,] -0.1175 0 0 0.2532 0.0000 0.8506 0.0000 0.0000 0.0000 0.0000
# [6,] -0.3502 0 0 0.2676 0.3068 0.9935 0.0000 0.0000 0.0000 0.0000
# [7,] -0.4579 0 0 0.6270 0.0000 0.9436 0.0000 0.0000 0.0000 0.0000
# [8,] -0.7848 0 0 0.9970 0.0000 0.9856 0.0000 0.0000 0.0000 0.0000
# [9,] -0.3175 0 0 0.0000 0.0000 3.4488 0.0000 0.0000 -2.1714 0.0000
# [10,] -0.4842 0 0 0.0000 0.0000 4.7731 0.0000 0.0000 -3.4102 0.0000
# [11,] -0.4685 0 0 0.0000 0.0000 4.7958 0.0000 0.1191 -3.6243 0.0000
# [12,] -0.4364 0 0 0.0000 0.0000 5.0424 0.0000 0.3007 -4.0694 -0.4903
# [13,] -0.4373 0 0 0.0000 0.0000 5.0535 0.0000 0.3213 -4.1012 -0.4996
# [14,] -0.4525 0 0 0.0000 0.0000 5.6876 -1.5467 1.5095 -4.7207 0.0000
# [15,] -0.4593 0 0 0.0000 0.0000 5.7355 -1.6242 1.5684 -4.7440 0.0000
# [16,] -0.4490 0 0 0.0000 0.0000 5.8601 -1.8485 1.7767 -4.9291 0.0000
# [,11] [,12]
# [1,] 0.0000 0.0000
# [2,] 0.0000 0.0000
# [3,] 0.0000 0.0000
# [4,] -0.2279 0.0000
# [5,] -0.3266 0.0000
# [6,] -0.5791 0.0000
# [7,] -0.6724 0.2001
# [8,] -1.0207 0.4462
# [9,] -0.4912 0.1635
# [10,] -0.5562 0.2958
# [11,] -0.5267 0.3274
# [12,] 0.0000 0.2858
# [13,] 0.0000 0.2964
# [14,] 0.0000 0.1570
# [15,] 0.0000 0.1571
```
glmnet with lambda=(lambda\_lars / 2):
```
glm2 <- glmnet(x, y, family="gaussian", lambda=(0.5 * la$lambda), thresh=1e-16)
co.glm2 <- as.matrix(t(coef(glm2, mode="lambda")))
print(round(co.glm2, 4))
# (Intercept) V1 V2 V3 V4 V5 V6 V7 V8 V9
# s0 0 0.0000 0 0 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
# s1 0 0.0000 0 0 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
# s2 0 0.0000 0 0 0.2385 0.0000 0.4120 0.0000 0.0000 0.0000
# s3 0 0.0000 0 0 0.2441 0.0000 0.4176 0.0000 0.0000 0.0000
# s4 0 0.0000 0 0 0.2466 0.0000 0.4200 0.0000 0.0000 0.0000
# s5 0 0.0000 0 0 0.2275 0.0000 0.4919 0.0000 0.0000 0.0000
# s6 0 0.0000 0 0 0.1868 0.0000 0.6132 0.0000 0.0000 0.0000
# s7 0 -0.2651 0 0 0.2623 0.1946 0.9413 0.0000 0.0000 0.0000
# s8 0 -0.6609 0 0 0.7328 0.0000 1.6384 0.0000 0.0000 -0.5755
# s9 0 -0.4633 0 0 0.0000 0.0000 4.6069 0.0000 0.0000 -3.2547
# s10 0 -0.4819 0 0 0.0000 0.0000 4.7546 0.0000 0.0000 -3.3929
# s11 0 -0.4767 0 0 0.0000 0.0000 4.7839 0.0000 0.0567 -3.5122
# s12 0 -0.4715 0 0 0.0000 0.0000 4.7915 0.0000 0.0965 -3.5836
# s13 0 -0.4510 0 0 0.0000 0.0000 5.6237 -1.3909 1.3898 -4.6583
# s14 0 -0.4552 0 0 0.0000 0.0000 5.7064 -1.5771 1.5326 -4.7298
# V10 V11 V12
# s0 0.0000 0.0000 0.0000
# s1 0.0000 0.0000 0.0000
# s2 0.0000 0.0000 0.0000
# s3 0.0000 0.0000 0.0000
# s4 0.0000 0.0000 0.0000
# s5 0.0000 -0.0464 0.0000
# s6 0.0000 -0.1293 0.0000
# s7 0.0000 -0.4868 0.0000
# s8 0.0000 -0.8803 0.3712
# s9 0.0000 -0.5481 0.2792
# s10 0.0000 -0.5553 0.2939
# s11 0.0000 -0.5422 0.3108
# s12 0.0000 -0.5323 0.3214
# s13 -0.0503 0.0000 0.1711
# s14 0.0000 0.0000 0.1571
``` | Finally, we were able to produce the same solution with both methods! The first issue is that glmnet solves the lasso problem as stated in the question, but lars has a slightly different normalization in the objective function: it replaces $\frac{1}{2N}$ by $\frac{1}{2}$. Second, both methods normalize the data differently, so the normalization must be switched off when calling the methods.
To reproduce that, and see that the same solutions for the lasso problem can be computed using lars and glmnet, the following lines in the code above must be changed:
```
la <- lars(x, y, intercept=TRUE, max.steps=1000, use.Gram=FALSE)
```
to
```
la <- lars(x, y, intercept=TRUE, normalize=FALSE, max.steps=1000, use.Gram=FALSE)
```
and
```
glm2 <- glmnet(x, y, family="gaussian", lambda=(0.5 * la$lambda), thresh=1e-16)
```
to
```
glm2 <- glmnet(x, y, family="gaussian", lambda=1/nb.samples*la$lambda, standardize=FALSE, thresh=1e-16)
``` |
From [Wikipedia](http://en.wikipedia.org/wiki/Randomized_algorithm) about randomized algorithms
>
> One has to distinguish between **algorithms** that use the random
> input to reduce the expected running time or memory usage, but always
> terminate with a correct result in a bounded amount of time, and
> **probabilistic algorithms**, which, depending on the random input, have a chance of producing an incorrect result (Monte Carlo
> algorithms) or fail to produce a result (Las Vegas algorithms) either
> by signalling a failure or failing to terminate.
>
>
>
1. I was wondering: how do the first kind of **algorithms** "use the random
input to reduce the expected running time or memory usage, but always
terminate with a correct result in a bounded amount of time"?
2. What are the differences between it and Las Vegas algorithms, which may
fail to produce a result?
3. If I understand correctly, probabilistic algorithms and randomized algorithms are not the same concept. Probabilistic algorithms are just one
kind of randomized algorithms, and the other kind is those that use the
random input to reduce the expected running time or memory usage,
but always terminate with a correct result in a bounded amount of
time? | 1. An example of such an algorithm is randomized Quick Sort, where you randomly permute the list or randomly pick the pivot value, then use Quick Sort as normal (a short sketch follows after this list). Quick Sort has a worst-case running time of $O(n^{2})$, but on a random list it has an expected running time of $O(n\log n)$, so the randomized version always terminates after $O(n^{2})$ steps, and we can expect it to terminate after $O(n\log n)$ steps, always with a correct answer.
2. This gives a subset of Las Vegas algorithms. Las Vegas algorithms also allow for the possibility that (with low probability) they may not terminate at all - not just terminate with a little bit more time.
3. These in turn are really just a type of Monte Carlo algorithm, where the answer can be incorrect (with low probability), which is at least conceptually different to maybe not answering.
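A minimal sketch of the randomized Quick Sort mentioned in point 1 (not in-place, purely for clarity):

```python
import random

def randomized_quicksort(a):
    # Expected O(n log n) comparisons on *every* input because the pivot is
    # random; the worst case is still O(n^2), and the answer is always correct.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```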
There's a whole bunch of detail I've left out, of course; you might want to look up the complexity classes ZPP, RP and BPP, which formalise these ideas. |
Let $L$ a regular language and define the *[subsequence](https://en.wikipedia.org/wiki/Subsequence) closure* of $L$ as
$\qquad \displaystyle S(L) = \{ w \mid \exists w' \in L.\ w \text{ subsequence of } w'\}$.
The problem I want to solve is to find for such subsequences $w \in S(L)$ which letters can be inserted into them so that the result is also in $S(L)$. Formally:
>
> Given $w\_1\dots w\_n \in S(L)$, output all pairs $(i,a) \in \{0,\dots,n\} \times \Sigma$ for which $w\_1 \dots w\_{i} a w\_{i+1} \dots w\_n \in S(L)$.
>
>
>
Consider, for instance, the language $\{ab, abc, abcc\}$. The string $b$ is in $S(L)$ and inserting $a$ at the beginning -- corresponding to $(0,a)$ -- yields $ab \in S(L)$. On the other hand, the string $cb$ is not in $S(L)$; there is no way to convert it to a language string by insertion.
Using this language, if the input string is $b$ the possible insertions I am looking for are $(0,a)$ and $(1,c)$ at the end. If the input string is $bc$ the possible insertions are $(0,a), (1,c)$ and $(2,c)$.
The use of this algorithm is in a user interface: the user builds strings belonging to the language starting from an empty string and adding one character at a time in different positions. At each step the UI prompts the user with all the possible valid letters in all the possible insertion positions.
I have a working naive algorithm that involves a lot of back-tracking, and it is way too slow even in relatively simple cases. I was wondering if there is something better, or -- failing that -- if there are any available studies of this problem. | There are plenty of algorithms for finding the min-cut of an undirected graph. [Karger's algorithm](http://en.wikipedia.org/wiki/Karger%27s_algorithm) is a simple yet effective randomized algorithm.
In short, the algorithm works by selecting edges uniformly at random and contracting them with self-loops removed. The process halts when there are two nodes remaining, and the two nodes represent a cut. To increase the probability of success,
the randomized algorithm is run several times. While doing the runs, one keeps track of the smallest cut found so far.
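A rough Python sketch of the contraction procedure described above (not from the original answer): it assumes the graph is given as a plain list of undirected edge pairs, uses the equivalent random-permutation formulation of the contraction step, and `num_trials` is illustrative (on the order of $n^2 \log n$ trials gives a high-probability guarantee):

```python
import random

def karger_min_cut(edges, num_trials=100):
    # Monte Carlo: each trial contracts edges in a uniformly random order
    # until two super-nodes remain; keep the smallest cut seen so far.
    nodes = {u for e in edges for u in e}
    best = None
    for _ in range(num_trials):
        parent = {v: v for v in nodes}

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]      # path halving
                v = parent[v]
            return v

        remaining = len(nodes)
        for u, v in random.sample(edges, len(edges)):
            if remaining == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:                           # self-loops are skipped
                parent[ru] = rv
                remaining -= 1
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = cut if best is None else min(best, cut)
    return best

# two triangles joined by a single bridge edge: the min cut is 1
g = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(karger_min_cut(g))
```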
See the Wikipedia entry for more details. For perhaps a better introduction, check out the first chapter of Probability and Computing: Randomized Algorithms and Probabilistic Analysis by Michael Mitzenmacher and Eli Upfal. |
In the Floyd-Warshall algorithm we have:
Let $d\_{ij}^{(k)}$ be the weight of a shortest path from vertex $i$ to $j$ for which all intermediate vertices are in the set $\{1, 2, \cdots, k\}$ then
\begin{align\*}
&d\_{ij}^{(k)}= \begin{cases}
w\_{ij} & \text{ if } k = 0 \\
\min\{d\_{ij}^{(k-1)}, d\_{ik}^{(k-1)} + d\_{kj}^{(k-1)}\} & \text{ if } k > 0
\end{cases}\\
\end{align\*}
In fact it considers whether $k$ is an intermediate vertex in the shortest path from $i$ to $j$ or not. If $k$ is an intermediate vertex, it selects $d\_{ik}^{(k-1)} + d\_{kj}^{(k-1)}$ because it decomposes the shortest path into $i \stackrel{p\_1}{\leadsto} k \stackrel{p\_2}{\leadsto} j$; otherwise it selects $d\_{ij}^{(k-1)}$, since $k$ is not an intermediate vertex and so has no effect on the shortest path.
**My problem is**: for a given shortest path between $i$ and $j$, whether $k$ is an intermediate vertex or not is determined by the structure of the graph, not by our decision. So we have no freedom to select or not select $k$: if $k$ is an intermediate vertex we must choose $d\_{ik}^{(k-1)} + d\_{kj}^{(k-1)}$, and if not we must choose $d\_{ij}^{(k-1)}$. But when the formula takes the $\min$ of the two numbers, it sounds as if it has the option to select either of them, while based on the structure of the graph there is no option for us. I believe the formula must be
\begin{align\*}
&d\_{ij}^{(k)}= \begin{cases}
w\_{ij} & \text{ if } k = 0 \\
d\_{ij}^{(k-1)} & \text{ if } k > 0 \text{ and } k \notin \text{ intermediate}(p)\\
d\_{ik}^{(k-1)} + d\_{kj}^{(k-1)} & \text{ if } k > 0 \text{ and } k \in \text{ intermediate}(p)
\end{cases}\\
\end{align\*} | In fact the algorithm *determines* whether the vertex $k$ is "intermediate" on the path from $i$ to $j$. If indeed $d\_{ik}^{(k-1)} + d\_{kj}^{(k-1)} < d\_{ij}^{(k-1)} $ during the computation we know that (up to the first $k$ vertices) the vertex $k$ is needed to obtain a shorter path between $i$ and $j$.
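A compact sketch of the recurrence in Python (0-indexed vertices, adjacency-matrix input with `float('inf')` for missing edges) makes the point visible: the $\min$ is exactly the test that *determines* whether $k$ helps.

```python
def floyd_warshall(w):
    n = len(w)
    d = [row[:] for row in w]      # d^(0) = w
    for k in range(n):             # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                # the min is not a free choice: it *checks* whether going
                # through k gives a shorter i -> j path, and keeps the better value
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Updating `d` in place instead of keeping separate $d^{(k-1)}$ and $d^{(k)}$ matrices is the usual space optimization and gives the same result.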
So, in my opinion, the algorithm takes into account the structure of the graph! |
Typically, efficient algorithms have a polynomial runtime and an exponentially-large solution space. This means that the problem must be easy in two senses: first, the problem can be solved in a polynomial number of steps, and second, the solution space must be very structured because the runtime is only polylogarithmic in the number of possible solutions.
However, sometimes these two notions diverge, and a problem is easy only in the first sense. For instance, a common technique in approximation algorithms and parameterized complexity is (roughly) to prove that the solution space can actually be restricted to a much smaller size than the naive definition and then use brute-force to find the best answer in this restricted space. If we can *a priori* restrict ourselves to, say, n^3 possible answers, but we still need to check each one, then in some sense such problems are still "hard" in that there's no better algorithm than brute-force.
Conversely, if we have a problem with a doubly-exponential number of possible answers, but we can solve it in only exponential time, then I'd like to say that such a problem is "easy" ("structured" may be a better word) since runtime is only log of the solution space size.
Does anyone know of any papers that consider something like hardness based on the gap between an efficient algorithm and brute-force or hardness relative to the size of the solution space? | One problem with formalizing the question is that the phrase "solution space for Problem A" is not well-defined. The definition of a solution space needs a *verifier* algorithm which, given an instance and a candidate solution, verifies whether or not the solution is correct. Then, the solution space of an instance wrt to a verifier is the set of candidate solutions that make the verifier output "correct".
For example, take the problem SAT0: given a Boolean formula, is it satisfied by the all-zeroes assignment? This problem is trivially in polynomial time, but its solution space can vary wildly, depending on which verifier you use. If your verifier ignores the candidate solution and just checks if all-zeroes works on the instance, then the "solution space" for any SAT0 instance on that verifier is trivial: it is all possible solutions. If your verifier checks to see if the candidate solution is a satisfying assignment, then the solution space of a SAT0 instance can actually be quite complex, arguably as complex as any SAT instance's solution space.
That said, the problem of "avoiding brute-force search" can be formalized in the following way (as seen in the paper "Improving exhaustive search implies superpolynomial lower bounds"). You are given a verifier algorithm that runs in time $t(n,k)$, on instances of size $n$ and candidate solutions of $k$ bits. The question is, *on arbitrary instances of size $n$*, can we determine if there is a correct solution (wrt this verifier) with at most $k$ bits, in much less than $O(2^k t(n,k))$ time?
Note $O(2^k t(n,k))$ is the cost of trying all strings of length up to k, and running the verifier. So the above can be seen as asking whether we can improve on brute-force search for the given verifier. The area of "exact algorithms for NP-hard problems" can be seen as a long-term effort to study the difficulty of improving on brute-force search for certain very natural verifiers: e.g. the question of finding better-than-$2^n$ algorithms for SAT is the question of whether we can always improve over brute-force search for the verifier that checks if the given candidate solution is a satisfying assignment to the given SAT instance.
The paper shows some interesting consequences of improving on brute-force search for some problems. Even improving on brute-force search for "polynomial-size solution spaces" would have interesting consequences. |
Let's say I have two 1-dimensional arrays, $a\_1$ and $a\_2$. Each contains 100 data points. $a\_1$ is the actual data, and $a\_2$ is the model prediction. In this case, the $R^2$ value would be:
$$
R^2 = 1 - \frac{SS\_{res}}{SS\_{tot}} \quad\quad\quad\quad\quad\ \ \quad\quad(1).
$$
In the meantime, this would be equal to the square value of the correlation coefficient,
$$
R^2 = (\text{Correlation Coefficient})^2 \quad (2).
$$
Now if I swap the two: $a\_2$ is the actual data, and $a\_1$ is the model prediction. From equation $(2)$, because correlation coefficient does not care which comes first, the $R^2$ value would be the same. However, from equation $(1)$, $SS\_{tot}=\sum\_i(y\_i - \bar y )^2$, the $R^2$ value will change, because the $SS\_{tot}$ has changed if we switch $y$ from $a\_1$ to $a\_2$; in the meantime, $SS\_{res}=\sum\_i(y\_i -f\_i)^2$ does not change.
My question is: **How can these contradict each other?**
**Edit**:
1. I was wondering: will the relationship in Eq. (2) still stand if it is not a simple linear regression, i.e., if the relationship between IV and DV is not linear (could be exponential / log)?
2. Will this relationship still stand if the sum of the prediction errors does not equal zero? | It is true that $SS\_{tot}$ will change ... but you forgot the fact that the regression sum of squares will change as well. So let's consider the simple regression model and denote the Correlation Coefficient as $r\_{xy}^2=\dfrac{S\_{xy}^2}{S\_{xx}S\_{yy}}$, where I used the sub-index $xy$ to emphasize the fact that $x$ is the independent variable and $y$ is the dependent variable. Obviously, $r\_{xy}^2$ is unchanged if you swap $x$ with $y$. We can easily show that $SSR\_{xy}=S\_{yy}(R\_{xy}^2)$, where $SSR\_{xy}$ is the regression sum of squares and $S\_{yy}$ is the total sum of squares when $x$ is the independent and $y$ is the dependent variable. Therefore: $$R\_{xy}^2=\dfrac{SSR\_{xy}}{S\_{yy}}=\dfrac{S\_{yy}-SSE\_{xy}}{S\_{yy}},$$ where $SSE\_{xy}$ is the corresponding residual sum of squares when $x$ is the independent and $y$ is the dependent variable. Note that in this case we have $SSE\_{xy}=S\_{yy}-b^2\_{xy}S\_{xx}$ with $b\_{xy}=\dfrac{S\_{xy}}{S\_{xx}}$ (see e.g. Eq. (34)-(41) [here](http://mathworld.wolfram.com/CorrelationCoefficient.html)). Therefore: $$R\_{xy}^2=\dfrac{S\_{yy}-SSE\_{xy}}{S\_{yy}}=\dfrac{b^2\_{xy}S\_{xx}}{S\_{yy}}=\dfrac{S^2\_{xy}}{S\_{xx}S\_{yy}}.$$ Clearly the last expression is symmetric with respect to $x$ and $y$. In other words: $$R\_{xy}^2=R\_{yx}^2.$$ To summarize, when you swap $x$ with $y$ in the simple regression model, both the numerator and the denominator of $R\_{xy}^2=\dfrac{SSR\_{xy}}{S\_{yy}}$ change in a way that leaves $R\_{xy}^2=R\_{yx}^2.$ |
Assume I have one variable X that I experimentally manipulate, and then measure the corresponding values obtained for another variable, Y. Assume also that the two variables are both measured along the same scale (units).
Why is only regression - but not correlation - an appropriate tool to quantify the effect of X on Y? | Since [you can estimate](https://stats.stackexchange.com/questions/32464/how-does-the-correlation-coefficient-differ-from-regression-slope) the slope of a simple linear regression using the correlation coefficient,
$$ \hat {\beta} = {\rm cor}(Y\_i, X\_i) \cdot \frac{ {\rm SD}(Y\_i) }{ {\rm SD}(X\_i) } $$
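A quick numerical check of this identity (a sketch assuming NumPy; the simulated data are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.5 * x + rng.normal(size=200)

r = np.corrcoef(x, y)[0, 1]
slope_from_r = r * y.std() / x.std()   # cor(Y, X) * SD(Y) / SD(X)
slope_ols = np.polyfit(x, y, 1)[0]     # ordinary least-squares slope
print(slope_from_r, slope_ols)         # agree up to floating-point error
```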
It is *not true* that there are cases when regression could be appropriate where correlation is not. The only such case where the statement could make sense is if you are talking about multivariate relations to account for, but still, you can use [partial correlation](https://stats.stackexchange.com/questions/174022/how-could-i-get-a-correlation-value-that-accounts-for-gender/174025#174025) as well in such cases.
As noted by *whuber*, regression is a much more sophisticated model that gives you more information than correlation alone, but the difference is not about appropriateness; it is about their utility and the fact that regression provides additional information. |
Suppose that $X$ and $Y$ are two random variables that are defined on the same support. Furthermore, suppose that $H(X | Y) = \log n$ for some $n$. I am now interested in how much the term $H(X | Y, X \neq Y)$ may differ from $H(X | Y)$.
Suppose that $X|Y=y$ is uniformly distributed (for every y). Intuitively, as $H(X | Y) = \log n$, for a typical element $y$ in the support of $X$ and $Y$, we also have $H(X | Y=y) = \log n$. Thinking about a uniform distribution, X takes on at least n different values, and, therefore, we should expect $H(X | Y, X \neq Y) \ge \log(n-1)$ as the conditioning on $X \neq Y$ eliminates only a single potential element.
Can such a result be obtained in full generality?
---
Thanks very much for your answers. I am new to stackexchange so I don't know yet where and how to respond to suggested answers.
The fact that you can bound $H(X|Y) - H(X|Y, X \neq Y) \le \log(\frac{n}{n-1})$ is very interesting, as you can compute:
$\log(\frac{n}{n-1}) = \log( \frac{1}{1-\frac{1}{n}} ) = - \log(1-\frac{1}{n}) = -\Theta(\log(e^{-\frac{1}{n}})) = \Theta(\frac{1}{n})$. This result matches the intuition very well as if the entropy $H(X|Y) = \log(n)$ then knowing that $H(X|Y, X \neq Y)$ essentially eliminates only one possible choice for $X$. This is somehow quantified by the $\Theta(\frac{1}{n})$. | The AUTOMATA workshop series focuses on cellular automata:
<http://www.eng.u-hyogo.ac.jp/eecs/eecs12/automata2014/> |
I have a data frame with 7 columns that hold numerical and integer values, where some columns, even though numerical, are binary (e.g. a dummy variable for sex; $0=\text{male}$, $1=\text{female}$).
I was asked to check if my data frame is normally distributed and, if not, to normalize it. I found that there are two ways to check: either by visualization or by testing. However, I tried both and didn't get the outcome I want! | Welcome to CV!
There are several issues with your suggested approach:
* Contrary to what the name suggests, normalization will not turn an arbitrarily distributed variable into a normally distributed one.
* Neither can normality testing tell you that your data are normally distributed (only whether there is a significant deviation from normality).
* Finally, data need rarely be normally distributed. It is also unlikely any of your data truly are normally distributed in the first place. You mentioned an integer variable, this can't be exactly normal, because the normal distribution is continuous, from $-\infty$ to $+\infty$. The same goes for the binary variable. Rather, it is common for models to assume the *conditional* distribution of the outcome variable to be *approximately* normally distributed.
As to what approach is best, you may want to have a look [here](https://stats.stackexchange.com/q/2492/176202) for starters. |
Is TCP reliable because it uses three-way handshaking, or is it due to other reasons, such as congestion control, flow control and error control in TCP? Why do we call TCP reliable? | TCP is said to be reliable because when an application receives data, it knows that this is exactly the data that the sender sent. No part of the data was lost, reordered, etc.
Error control is the part of TCP that deals with ensuring the reliability of connections. For example, if the sender sends two packets A and B and the receiver received only B, the TCP implementation on the receiver side will wait a little in case A took longer than B to arrive. If A finally arrives, the receiver will release A to the application before B. If A doesn't arrive after a while, the receiver will transmit a request to the sender to retransmit A. (I am simplifying somewhat, see the specification for details.)
This is not related to congestion and flow control. They help the performance of the network, but they aren't needed or helpful to make a connection reliable. |
Using Python, I need to code a recursive function with one input and no global integers that calculates the number of options to get $x$ as $a\*1+b\*2+c\*3$.
Say $x=3$, there are four options: $\lbrace (1,1,1),(1,2),(2,1),(3)\rbrace$. | Recursion is a pretty bad choice here, but here is the recursion you could use:
$$
f(n) = \begin{cases} 0, & n < 0, \\ 1, & n = 0, \\ f(n-1) + f(n-2) + f(n-3), & n > 0.\end{cases}
$$
For example,
$$
\begin{align\*}
f(-2) &= 0, \\
f(-1) &= 0, \\
f(0) &= 1, \\
f(1) &= f(0) + f(-1) + f(-2) = 1, \\
f(2) &= f(1) + f(0) + f(-1) = 2, \\
f(3) &= f(2) + f(1) + f(0) = 4.
\end{align\*}
$$
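A direct Python transcription of this recurrence, together with the constant-space loop described in the next paragraph (function names are illustrative):

```python
def f_recursive(n):
    # literal transcription of the recurrence; exponential time
    if n < 0:
        return 0
    if n == 0:
        return 1
    return f_recursive(n - 1) + f_recursive(n - 2) + f_recursive(n - 3)

def f_iterative(n):
    # dynamic programming: constant space, linear time
    a, b, c = 1, 0, 0          # f(0), f(-1), f(-2)
    for _ in range(n):
        a, b, c = a + b + c, a, b
    return a

print(f_recursive(3), f_iterative(3))   # 4 4
```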
The dynamic programming approach implied by this example is a much better idea; it can be implemented in constant space and linear time, whereas the recursion will take linear space and exponential time. You could also use matrix powering to compute $f$:
$$
f(n) =
\begin{bmatrix} 1 & 0 & 0 \end{bmatrix}
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 1 \end{bmatrix}^n
\begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}.
$$
The generating function is
$$
\sum\_{n=0}^\infty f(n) x^n = \frac{1}{1-x-x^2-x^3}.
$$
Finally, you can also find an explicit solution:
$$
\begin{align\*}
f(n) &= \mathrm{round}(Cx^n), \\
C &= \frac{1}{3} + \frac{\sqrt[3]{847+33\sqrt{33}}}{66} + \frac{4}{3\sqrt[3]{847+33\sqrt{33}}}, \\
x &=
\frac{1 + \sqrt[3]{19-3\sqrt{33}} + \sqrt[3]{19+3\sqrt{33}}}{3}.
\end{align\*}
$$
This explicit solution isn't very helpful since you need a lot of precision, but it does give you the correct asymptotics; note $C \approx 0.6184199223$ and $x \approx 1.839286756$. |
Diallel Analysis using the Griffing and Hayman approach is very common in plant breeding and genetics. I'm wondering if someone can share a worked R example of Diallel Analysis. Is there any good reference book which covers worked examples? Thanks
References:
Griffing B (1956) Concept of general and specific combining ability in relation to diallel crossing systems. Aust J Biol Sci 9:463-493 [[pdf](http://www.publish.csiro.au/?act=view_file&file_id=BI9560463.pdf)]
Hayman BI (1954) The analysis of variance of diallel tables. Biometrics 10:235-244 [[JSTOR](http://www.jstor.org/stable/3001877)]
Hayman BI (1954) The theory and analysis of diallel crosses. Genetics 39:789-809 [[pdf](http://www.genetics.org/content/39/6/789.full.pdf)] | There is a beta package, plantbreeding, which can do diallel analysis.
<https://r-forge.r-project.org/projects/plantbreeding/>
They have a blog:
<http://rplantbreeding.blogspot.com/>
The following is example from this package:
```
require(plantbreeding)
data(fulldial)
out <-diallele1(dataframe = fulldial, male = "MALE", female = "FEMALE",
progeny = "TRT", replication = "REP", yvar = "YIELD" )
print(out)
out$anvout # analysis of variance
out$anova.mod1 # analysis of variance for GCA and SCA effects
out$components.model1 # model1 GCA, SCA and reciprocal components
out$gca.effmat # GCA effects
out$sca.effmat # SCA effect matrix
out$reciprocal.effmat # reciprocal effect matrix
out$varcompare # SE for comparisions
out$anovadf.mod2 # ANOVA for model 2
out$varcomp.model2 # variance components for model 2
``` |
I am a mathematician interested in set theory, ordinal theory, infinite combinatorics and general topology.
Are there any applications for these subjects in computer science?
I have looked a bit, and found a lot of applications (of course) for finite graph theory, finite topology, low dimensional topology, geometric topology etc.
However, I am looking for applications of the infinite objects of these subjects, i.e. infinite trees ([Aronszajn trees](https://en.wikipedia.org/wiki/Aronszajn_tree) for example), infinite topology etc.
Any ideas?
Thank you!! | One major application of topology in semantics is the topological approach to computability.
The basic idea of the topology of computability comes from the observation that termination and nontermination are not symmetric. It is possible to observe whether a black-box program terminates (simply wait long enough), but it's not possible to observe whether it doesn't terminate (since you can never be certain you have not waited long enough to see it terminate). This corresponds to equipping the two-point set $\{HALT, LOOP\}$ with the Sierpinski topology, where $\emptyset$, $\{HALT\}$, and $\{HALT, LOOP\}$ are the open sets. So then we can basically get pretty far equating "open set" with "computable property". One surprise of this approach to traditional topologists is the central role that non-Hausdorff spaces play. This is because you can basically make the following identifications
$$
\begin{matrix}
\mathbf{Computability} & \mathbf{Topology}\\
\mbox{Type} & \mbox{Space} \\
\mbox{Computable function} & \mbox{Continuous function} \\
\mbox{Decidable set} & \mbox{Clopen set} \\
\mbox{Semi-decidable set} & \mbox{Open set} \\
\mbox{Set with semidecidable complement} & \mbox{Closed set} \\
\mbox{Set with decidable equality} & \mbox{Discrete space} \\
\mbox{Set with semidecidable equality} & \mbox{Hausdorff space} \\
\mbox{Exhaustively searchable set} & \mbox{Compact space} \\
\end{matrix}
$$
Two good surveys of these ideas are MB Smyth's *Topology* in the *Handbook of Logic in Computer Science* and Martin Escardo's [*Synthetic topology of data types and classical spaces*](http://www.cs.bham.ac.uk/~mhe/papers/entcs87.pdf).
Topological methods also play an important role in the semantics of concurrency, but I know much less about that. |
On page 231 of The Elements of Statistical Learning AIC is defined as follows in (7.30)
>
> Given a set of models $f\_\alpha(x)$ indexed by a tuning parameter
> $\alpha$, denote by $\overline{err}(\alpha)$ and $d(\alpha)$ the
> training error and number of parameters for each model. Then for a set of
> models we define
>
>
> $$AIC(\alpha) = \overline{err}(\alpha) + 2 \cdot \frac{d(\alpha)}{N}\hat{\sigma\_\epsilon}^2$$
>
>
>
Where $\overline{err}$, the training error, is $\frac{1}{N}\sum\_{i=1}^NL(y\_i,\hat{f}(x\_i))$.
On the same page it is stated (7.29) that
>
> For the logistic regression model, using the binomial log-likelihood,
> we have
>
>
> $$AIC = -\frac{2}{N} \cdot \text{loglik} + 2 \cdot \frac{d}{N}$$
>
>
>
where "$\text{loglik}$" is the maximised log-likelihood.
The book also mentions that $\hat{\sigma\_\epsilon}^2$ is an estimate of the noise variance, obtained from the mean-squared error of a low-bias model.
It is not clear to me how the first equation leads to the second in the case of logistic regression? In particular what happens to the $\hat{\sigma\_\epsilon}^2$ term?
**Edit**
I found in a later example in the book (on page 241) the authors use AIC in an example and say
>
> For misclassification error we used
> $\hat{\sigma\_{\epsilon}}^2=[N/(N-d)] \cdot \overline{err}(\alpha)$ for
> the least restrictive model ...
>
>
>
This doesn't answer my question as it doesn't link the two aforementioned expressions of AIC, but it does seem to indicate that $\hat{\sigma\_{\epsilon}}^2$ is not simply set to $1$ as stated in Demetri's answer. | I imagine they made an approximation. $\sigma^2\_\epsilon$ is the residual variance of the outcome conditioned on the variables $x\_i$. When the outcome is binary, as in logistic regression, $\sigma^2<1$.
When we compare models with AIC, only the absolute differences between models matter, so using the approximation $\sigma^2=1$ for all models isn't so offensive. Let me demonstrate
$$\Delta AIC = AIC\_1 - AIC\_2 = \dfrac{-2}{N}(\text{loglik}\_1 - \text{loglik}\_2) + \dfrac{2}{N}(d\_1 - d\_2) $$
Because we made the assumption that $\sigma^2\_\epsilon$ was the same for each model (namely, it was 1), it would factor out of the difference between models' effective number of parameters. Setting $\sigma^2\_\epsilon=1$ isn't arbitrary: it is an upper bound on the variance of a binary variable. The least upper bound would be 0.25, and it isn't quite clear to me why that wasn't chosen, but again the choice of $\sigma^2\_\epsilon$ seems only to affect the AIC values and not the differences between model AIC, which is what we're really after. |
There is a popular proof for the undecidability of the PCP (Post correspondence problem), which is outlined here:
<https://en.wikipedia.org/wiki/Post_correspondence_problem>
I'll assume whoever will answer the question will be familiar with this proof.
---
I have seen this proof elsewhere and this kind of proof always mentions that if the $TM \ M$ halts we can solve the instance of the PCP. So far so good.
Now I was thinking about the case when the $TM \ M$ does not halt on input $w$.
Then our total number of tuples/pairs ($\small{(a\_i,b\_i)}$, which get passed on to the PCP) should be countably infinite.
How can we even try to solve the PCP at this point? Or do we implicitly think "That's impossible!" and say "There is no solution!"?
This part confuses me very much, because for the case where the TM halts we construct such a complex method, and it seems "cheap" to just throw in the towel for the case when it does not halt.
I hope I could make my thoughts understandable, without much formality.
Any help is appreciated. | An instance of PCP consists of a finite list of tiles. The proof that PCP is undecidable consists of a computable reduction $f$ with the following properties:
* The input to $f$ is a Turing machine $M$ and a valid input $w$ to $M$.
* The output of $f$ is an instance of PCP (i.e., a finite list of tiles).
* The PCP instance $f(M,w)$ is a Yes instance iff $M$ halts on $w$.
A further property of the reduction is that the number of tiles depends only on $M$, though the contents of one of the tiles depends on $w$.
Let us denote the tiles by $(a\_1,b\_1),\ldots,(a\_n,b\_n)$.
When $M$ halts on $w$, there exists a number $N$ and a sequence $i\_1,\ldots,i\_N \in \{1,\ldots,n\}$ such that $$ a\_{i\_1} a\_{i\_2} \ldots a\_{i\_N} = b\_{i\_1} b\_{i\_2} \ldots b\_{i\_N}. $$
When $M$ does not halt on $w$, no such $N$ exists. However, there does exist an *infinite* sequence $i\_1,i\_2,\ldots$ such that $$ a\_{i\_1} a\_{i\_2} \ldots = b\_{i\_1} b\_{i\_2} \ldots, $$
where both sides are *infinite* words. Perhaps this is what you meant by "total number of tiles". The actual number of tiles is always finite, and independent of the input. The number of instances of tiles required to "solve" the PCP instance could be finite or infinite; but we only consider finite solutions as valid. |
I am puzzled by something I found using Linear Discriminant Analysis. Here is the problem: I first ran the Discriminant Analysis using 20 or so independent variables to predict 5 segments. Among the outputs, I asked for the Predicted Segments, which are the same as the original segments for around 80% of the cases. Then I ran the Discriminant Analysis again with the same independent variables, but now trying to predict the Predicted Segments. I was expecting to get a 100% correct classification rate, but that did not happen and I am not sure why. It seems to me that if the Discriminant Analysis cannot predict its own predicted segments with 100% accuracy, then somehow it is not an optimal procedure, since a rule exists that will achieve 100% accuracy. Am I missing something?
Note - This situation seems to be similar to that in Linear Regression Analysis. If you fit the model $y = a + bX + \text{error}$ and use the estimated equation with the same data you will get $\hat{y}$ [$= \hat{a} + \hat{b}X$]. Now if you estimate the model $\hat{y} = \hat{a} + \hat{b}X + \text{error}$, you will find the same $\hat{a}$ and $\hat{b}$ as before, no error, and R2 = 100% (perfect fit). I though this would also happen with Linear Discriminant Analysis, but it does not.
Note 2 - I ran this test with Discriminant Analysis in SPSS. | For optimisation, you don't need to perform a grid search; a Nelder-Mead simplex (fminsearch in MATLAB) approach is just as effective and generally much faster, especially if you have a lot of hyper-parameters to tune. Alternatively you can use gradient descent optimisation - if your implementation doesn't provide gradient information, you can always estimate it by finite differences (as fminunc in MATLAB does).
The Span bound is a good criterion to optimise, as it is fast, but good old cross-validation is hard to beat (but use a continuous statistic such as the squared hinge loss).
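For concreteness, here is a rough sketch of that kind of search in Python rather than MATLAB, swapping `fminsearch` for SciPy's Nelder-Mead and using plain cross-validated accuracy as the objective instead of the span bound or squared hinge loss; the dataset, starting values and parameterisation are purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def cv_loss(theta):
    # optimise log-parameters so that C and gamma stay positive
    C, gamma = np.exp(theta)
    return 1.0 - cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

res = minimize(cv_loss, x0=np.log([1.0, 0.1]), method="Nelder-Mead")
print(np.exp(res.x))   # tuned (C, gamma)
```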
HTH
n.b. nu needs to lie in [0,1], however this is not a problem, just re-parameterise as theta = logit(nu), and then optimise theta instead of nu. You can then use more or less any numerical optimisation technique you like, e.g. Nelder-Mead simplex, gradient descent, local search, genetic algorithms... |
I think I'm not understanding it, but $\eta$-conversion looks to me as a $\beta$-conversion that does nothing, a special case of $\beta$-conversion where the result is just the term in the lambda abstraction because there is nothing to do, kind of a pointless $\beta$-conversion.
So maybe $\eta$-conversion is something really deep and different from this, but, if it is, I don't get it, and I hope you can help me with it.
(Thank you and sorry, I know this is part of the very basics in lambda calculus) | **Update [2011-09-20]:** I expanded the paragraph about $\eta$-expansion and extensionality. Thanks to Anton Salikhmetov for pointing out a good reference.
$\eta$-conversion $(\lambda x . f x) = f$ is a special case of $\beta$-conversion *only* in the special case when $f$ is itself an abstraction, e.g., if $f = \lambda y . y y$ then $$(\lambda x . f x) = (\lambda x . (\lambda y . y y) x) =\_\beta (\lambda x . x x) =\_\alpha f.$$ But what if $f$ is a variable, or an application which does not reduce to an abstraction?
In a way $\eta$-rule is like a special kind of extensionality, but we have to be a bit careful about how that is stated. We can state extensionality as:
1. for all $\lambda$-terms $M$ and $N$, if $M x = N x$ then $M = N$, or
2. for all $f, g$ if $\forall x . f x = g x$ then $f = g$.
The first one is a meta-statement about the terms of the $\lambda$-calculus. In it $x$ appears as a formal variable, i.e., it is part of the $\lambda$-calculus. It can be proved from $\beta\eta$-rules, see for example Theorem 2.1.29 in *"Lambda Calculus: its Syntax and Semantics"* by Barendregt (1985). It can be understood as a statement about all the *definable* functions, i.e., those which are denotations of $\lambda$-terms.
The second statement is how mathematicians usually understand mathematical statements. The theory of $\lambda$-calculus describes a certain kind of structures, let us call them "*$\lambda$-models*". A $\lambda$-model might be uncountable, so there is no guarantee that every element of it corresponds to a $\lambda$-term (just like there are more real numbers than there are expressions describing reals). Extensionality then says: if we take any two things $f$ and $g$ in a $\lambda$-model, if $f x = g x$ for all $x$ in the model, then $f = g$. Now even if the model satisfies the $\eta$-rule, it need not satisfy extensionality in this sense. (Reference needed here, and I think we need to be careful how equality is interpreted.)
There are several ways in which we can motivate $\beta$- and $\eta$-conversions. I will randomly pick the category-theoretic one, disguised as $\lambda$-calculus, and someone else can explain other reasons.
Let us consider the typed $\lambda$-calculus (because it is less confusing, but more or less the same reasoning works for the untyped $\lambda$-calculus). One of the basic laws that should hold is the exponential law $$C^{A \times B} \cong (C^B)^A.$$ (I am using the notations $A \to B$ and $B^A$ interchangeably, picking whichever seems to look better.) What do the isomorphisms $i : C^{A \times B} \to (C^B)^A$ and $j : (C^B)^A \to C^{A \times B}$ look like, written in $\lambda$-calculus? Presumably they would be $$i = \lambda f : C^{A \times B} . \lambda a : A . \lambda b : B . f \langle a, b \rangle$$ and $$j = \lambda g : (C^B)^A . \lambda p : A \times B . g (\pi\_1 p) (\pi\_2 p).$$
A short calculation with a couple of $\beta$-reductions (including the $\beta$-reductions $\pi\_1 \langle a, b \rangle = a$ and $\pi\_2 \langle a, b \rangle = b$ for products) tells us that, for every $g : (C^B)^A$ we have $$i (j g) = \lambda a : A . \lambda b : B . g a b.$$
Since $i$ and $j$ are inverses of each other, we expect $i (j g) = g$, but to actually prove this we need to use $\eta$-reduction twice: $$i(j g) = (\lambda a : A . \lambda b : B . g a b) =\_\eta (\lambda a : A . g a) =\_\eta g.$$
So this is one reason for having $\eta$-reductions. Exercise: which $\eta$-rule is needed to show that $j (i f) = f$? |
I know that knowledge about relationships between things can be represented using ontologies and stored in some sort of file or database system.
Can a network of procedural knowledge also be created in this way? Such that complex algorithms can be defined and stored efficiently, translated into other languages and forms (such as finite state machines or machine language), [changed](http://en.wikipedia.org/wiki/Belief_revision), and form the basis for other AI axioms?
i.e. [Procedural Reasoning Systems](http://en.wikipedia.org/wiki/Procedural_Reasoning_System) -- how would a Knowledge Area (KA) be represented as a cognitive primitive in a computer system? | Yes, there has been. Moshe Vardi recently gave a survey talk at [BIRS Theoretical Foundations of Applied SAT Solving workshop](http://www.birs.ca/events/2014/5-day-workshops/14w5101):
* Moshe Vardi, [Phase transitions and computational complexity](http://www.birs.ca/events/2014/5-day-workshops/14w5101/videos/watch/201401231116-Vardi.mp4), 2014.
(Moshe presents the graph of their experiment a bit after minute 14:30 in his talk linked above.)
Let $\rho$ denote the clause ratio.
As the value of $\rho$ increases beyond the threshold
the problem becomes easier for existing SAT solvers,
but not as easy as it was before reaching the threshold.
There is a very steep increase in difficulty as we approach the threshold from below.
After the threshold the problem becomes easier compared to the threshold but
the decrease in difficulty is much less steep.
Let $T\_\rho(n)$ denote the difficulty of the problem w.r.t. to $n$
(in their experiment $T\_\rho(n)$ is the median running-time of [GRASP](http://vlsicad.eecs.umich.edu/BK/Slots/cache/sat.inesc.pt/~jpms/grasp/) on random 3SAT instances with the clause ratio $\rho$).
Moshe suggests that $T\_\rho(n)$ changes as follows:
* $\rho \ll$ the threshold: $T\_\rho(n)$ is polynomial in $n$,
* $\rho$ is near the threshold: $T\_\rho(n)$ is exponential in $n$,
* $\rho \gg$ the threshold: $T\_\rho(n)$ remains exponential in $n$ but
the exponent decreases as $\rho$ increases. |
How does using 1 - correlation as the distance influence the determination of the number of clusters when doing kmeans?
Is it still valid to use the classical indices (Dunn, Davies-Bouldin...)? | First. It is odd to use $1-r$ distance with K-means clustering, which internally operates with Euclidean distance. You could easily turn *r* into a true Euclidean *d* with the formula derived from the *cosine theorem*: $\sqrt{2(1-r)}$.
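A small numerical check of that conversion (a sketch assuming NumPy); note that it holds exactly once each vector is centered and scaled to unit norm:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.6 * x + rng.normal(size=50)

def unit_centered(v):
    vc = v - v.mean()
    return vc / np.linalg.norm(vc)

r = np.corrcoef(x, y)[0, 1]
d_euclid = np.linalg.norm(unit_centered(x) - unit_centered(y))
print(d_euclid, np.sqrt(2 * (1 - r)))   # identical up to rounding
```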
Second. I wonder how you manage to input distance matrix into K-means clustering procedure. Does R allow it? (I don't use R, and the K-means programs I know require casewise data as input.) Note: it is possible to create raw casewise data out of euclidean distance matrix.
Third. There is a great number of internal "clustering criterions" (over 100 I believe) helpful to decide what cluster solution is "better". They differ in assumptions. Some (like cophenetic correlation or Silhouette Statistic) are very general and can be used with any distance or similarity measure. Some (like Calinski-Harabasz or Davies-Bouldin) imply euclidean distance (or at least metric one) in order to have geometrically sensible meaning. I haven't heard of Dunn's index you mention.
P.S. Reading Wikipedia page on Dunn index suggests that this index is of general type. |
I am trying to build a predictive model for a binary classification problem. I have 200,000 features and 100 samples. I want to reduce the # of features and not over-fit the model, all while being constrained with a very small sample size.
This is currently what I'm doing:
```
from sklearn.feature_selection import RFECV
from sklearn.cross_validation import train_test_split
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
import numpy as np
# remove mean and scale to unit variance
scaler = StandardScaler()
scaler.fit(features)
features = scaler.transform(features)
# split our data set into training, and testing
xTrain, xTest, yTrain, yTest = train_test_split(features, classes, test_size=0.30)
# create classifier to use with recursive feature elimination
svc = SVC(kernel="linear", class_weight = 'balanced')
# run recursive feature elimination with cross-validation
rfecv = RFECV(estimator=svc, step=1, cv=4,
scoring = 'roc_auc') # pick features using roc_auc score because we have an imbalance of classes
newTrain = rfecv.fit_transform(xTrain, yTrain)
# test model
svc.fit(newTrain, yTrain)
svc.predict(rfecv.transform(xTest))  # apply the same feature selection to the test set
```
I believe that I'm getting overly-optimistic classification accuracy, likely due to model over-fitting.
How can I test whether I am over-fitting my model? What would be the most optimal way to feature select and generate a predictive model using such a small sample size (and large # of features)? | Let $Y$ be the binary response variable and $X$ the vector of predictors with density $f$ (which would either be continuous, discrete or a combination of both). Note that
$$
\frac{P(Y = 1 \mid X = x)}{P(Y = 0 \mid X = x)} = \frac{P(Y = 1) f\_{X \mid Y=1}(x)}{P(Y = 0) f\_{X \mid Y=0}(x)}
$$
and so
$$
\log \left ( \frac{P(Y = 1 \mid X = x)}{P(Y = 0 \mid X = x)} \right ) = \log \left ( \frac{P(Y = 1)}{P(Y = 0)} \right ) + \log \left ( \frac{f\_{X \mid Y=1}(x)}{f\_{X \mid Y=0}(x)} \right ) .
$$
This means that under a logistic regression model the logarithm of the prior odds of the event $\{ Y = 1 \}$ appears as an additive constant in the conditional log odds. What you might consider then is an intercept adjustment where you subtract off the logit of the empirical odds and add the logit of the prior odds. But, assuming that the prior probability is accurate, this isn't expected to have much of an effect on the model. This type of adjustment is made primarily after some sampling procedure that artificially alters the proportion of events in the data. |
I want to make a quantitative statement like "There is a 90% chance that this $X$-$Y$-data follows a linear model (with some noise added on top)". I can't find this kind of statement discussed in standard statistics textbooks, such as James *et al.*'s "An Introduction to Statistical Learning" (asking as a physicist with rudimentary statistics knowledge).
To be more precise: I'm assuming that some data is generated from $Y = f(X) + \epsilon$, where $f(X)$ is some exact relationship, e.g. the linear model $f(X) = \beta\_0 + \beta\_1 X$, and $\epsilon$ is noise drawn from a normal distribution with some unknown standard deviation $\sigma$. I want to calculate the probability that some proposed $\hat f(X)$ matches the actual $f(X)$.
I can do a least-squares fit to determine the estimate $\hat f(X)$. Now, if the model is correct ($\hat f(X) = f(X)$), then the residuals of the fit should exactly correspond to $\epsilon$. At the very least, if the data fits the model, there should be no correlation between the residuals and $X$. To be more quantitative, though, I would want to check that the residuals are in fact from a normal distribution with unkown $\sigma$ (although the residual standard error, RSE, will be an estimate for $\sigma$, so I could also assume that $\sigma$ is actually known). Isn't there some way to calculate a p-value for whether some given values (the residuals) are from a given distribution (normal distribution with RSE as the standard deviation)?
I'm not looking for the $R^2$ statistic, which will tell me how linear the data is, but also take into account the noise (larger $\sigma$ will lower the $R^2$ value). In my case, I don't care how noisy the data is, as long as it's normally distributed around the fit $\hat f(X)$. | For better and deeper understanding of the **Residual Connection** concept, you may want to also read this paper: [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385). This is the same paper that is also referenced by "*Attention Is All You Need*" paper when explaining encoder element in the *Transformers* architecture. |
Probabilities of a random variable's observations are in the range $[0,1]$, whereas log probabilities transform them to the log scale. What then is the corresponding range of log probabilities, i.e. what does a probability of 0 become, and is it the minimum of the range, and what does a probability of 1 become, and is this the maximum of the log probability range? What is the intuition behind this being of any practical use compared to $[0,1]$?
I know that log probabilities allow for stable numerical computations such as summation, but besides arithmetic, how does this transformation make applications any better compared to the case where raw probabilities are used instead? A comparative example for a continuous random variable before and after logging would be good. | I would like to add that taking the log of a probability or probability density can often simplify certain computations, such as calculating the gradient of the density given some of its parameters. This is in particular when the density belongs to the exponential family, which often contain fewer special function calls after being logged than before. This makes taking the derivative by hand simpler (as product rules become simpler sum rules), and also can lead to more stable numerical derivative calculations such as finite differencing.
As an illustration, let's take the Poisson with probability function $e^{-\lambda}\frac{\lambda^{x}}{x!}$. Even though $x$ is discrete, this function is smooth with respect to $\lambda$, and becomes $\log f\_x= -\lambda + x\log(\lambda) - \log(x!)$, for a derivative with respect to $\lambda$ of simply $\frac{\partial \log f\_x}{\partial \lambda} = -1 + \frac{x}{\lambda}$, which involves two simple operations. Contrast that with $\frac{\partial f\_x}{\partial \lambda} = \frac{e^{-\lambda} (x-\lambda) \lambda^{x-1}}{x!}$, which involves natural exponentiation, real exponentiation, computation of a factorial, and, worst of all, division by a factorial. This both involves more computation time and less computation stability, even in this simple example. The result is compounded for more complex probability functions, as well as when observing an i.i.d. sample of random variables, since these are added in log space while multiplied in probability space (again, complicating derivative calculation, as well as introducing more of the floating point error mentioned in the other answer).
These gradient expressions are used in both analytic and numerical computation of Maximum a Posteriori ($\ell\_0$ Bayes) and Maximum Likelihood Estimators. It's also used in the numerical solution of Method of Moments estimating equations, often via Newton's method, which involves Hessian computations, or second derivatives. Here the difference between logged and unlogged complexity can be huge. And finally, it is used to show the equivalence between least squares and maximum likelihood with a Gaussian error structure. |
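As a small sketch of this point (the sample values and $\lambda$ below are arbitrary), the simple analytic score of the Poisson log-likelihood can be checked against a finite-difference derivative:
```
# Analytic derivative of the Poisson log-likelihood, -n + sum(x)/lambda,
# versus a central finite difference of the log-likelihood itself.
import math

def poisson_loglik(lam, xs):
    return sum(-lam + x * math.log(lam) - math.lgamma(x + 1) for x in xs)

def poisson_score(lam, xs):
    return -len(xs) + sum(xs) / lam

xs, lam, h = [2, 0, 3, 1, 4], 1.7, 1e-6
numeric = (poisson_loglik(lam + h, xs) - poisson_loglik(lam - h, xs)) / (2 * h)
print(poisson_score(lam, xs), numeric)  # the two values should agree closely
```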
If $X $ and $Y$ are independent random variables such that $X+Y$ has the same distribution as $X$ then is it always true that $P(Y=0)=1\ ?$
[This is actually a fact that a researcher used (without proof) while giving a lecture on his new paper that I was attending.] | The variance argument isn't hard to make more general:
Consider the characteristic function of the sum $\phi\_{X+Y}(t) = \phi\_X(t) \phi\_Y(t)$
But since $X$ and $X+Y$ have the same distribution $\phi\_{X+Y}(t) = \phi\_X(t)$.
Hence $\phi\_Y(t) = \phi\_{X+Y}(t) /\phi\_X(t) = 1$ for every $t$ in a neighbourhood of $0$ on which $\phi\_X(t) \neq 0$ (such a neighbourhood exists because $\phi\_X$ is continuous with $\phi\_X(0)=1$), and a characteristic function equal to $1$ on a neighbourhood of $0$ must be identically $1$.
This is the characteristic function of a degenerate distribution with all its mass at $0$. |
I am currently studying the **[Erdős-Rényi](https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model)** model, the $G(n,p)$ model to be specific.
It's easy to understand that all graphs with $m$ edges have generating probability $p^{m}(1-p)^{n(n-1)/2 - m}$, where $p$ is the probability that there is an edge between a pair of vertices.
What I am not able to understand is the following:
Almost every graph in $G(n, 2\ln(n)/n)$ is connected. In other words, the probability that a graph on $n$ vertices with edge probability $p = 2\ln(n)/n$ is connected tends to $1$ as $n$ tends to infinity.
Any idea how to get this? | If a graph is not connected then there must be a set $S$ of vertices not connected to its complement. Furthermore, we can assume that $|S| \leq n/2$. For a given set $S$, the probability that $S$ is not connected to its complement is
$$
(1-p)^{|S|(n-|S|)} \leq e^{-p|S|(n-|S|)}.
$$
The union bound shows that the probability that some such $S$ exists is at most
$$
\epsilon := \sum\_{k=1}^{n/2} \binom{n}{k} e^{-pk(n-k)} =
\underbrace{\sum\_{k=1}^{n/4} \binom{n}{k} e^{-pk(n-k)}}\_{\epsilon\_1} +
\underbrace{\sum\_{k=n/4+1}^{n/2} \binom{n}{k} e^{-pk(n-k)}}\_{\epsilon\_2}.
$$
When $k \leq n/4$, we have $n-k \geq (3/4)n$ and so $$e^{-p(n-k)} \leq e^{-(3/4)pn} = \frac{1}{n^{3/2}}.$$
Using the bound $\binom{n}{k} \leq n^k$, we get
$$
\epsilon\_1 \leq \sum\_{k=1}^{n/4} n^k \cdot n^{-(3/2)k} =
\sum\_{k=1}^{n/4} \frac{1}{n^{k/2}}.
$$
We can estimate this by considering separately $k=1$, $k=2$, and $k \geq 3$:
$$
\epsilon\_1 \leq \frac{1}{\sqrt{n}} + \frac{1}{n} + \sum\_{k=3}^{n/4} \frac{1}{n^{3/2}} \leq \frac{1}{\sqrt{n}} + \frac{1}{n} + \frac{n}{4} \cdot \frac{1}{n^{3/2}} = O\left(\frac{1}{\sqrt{n}}\right).
$$
When $n/4 < k \leq n/2$, we have $n-k \geq n/2$ and so $e^{-p(n-k)} \leq e^{-pn/2} = 1/n$. Using the sharper upper bound $\binom{n}{k} \leq (\frac{en}{k})^k$, we get
$$
\epsilon\_2 \leq \sum\_{k=n/4+1}^{n/2} \left(\frac{en}{k}\right)^k \cdot \frac{1}{n^k} = \sum\_{k=n/4+1}^{n/2} \left(\frac{e}{k}\right)^k.
$$
The summands $(e/k)^k$ are decreasing, and so
$$
\epsilon\_2 \leq \frac{n}{4} \left(\frac{4e}{n}\right)^{n/4} = e \left(\frac{4e}{n}\right)^{n/4-1} = O\left(\frac{1}{\sqrt{n}}\right),
$$
since when $n \geq 6$ the exponent $n/4-1$ is at least $1/2$ (and $4e/n \leq 1$ once $n \geq 11$).
Putting everything together, we get that the probability that the graph is not connected is at most
$$
\epsilon = O\left(\frac{1}{\sqrt{n}}\right).
$$
Using more refined arguments, one can show that when $p = \frac{\ln n + c}{n}$ for constant $c$, the probability that $G(n,p)$ is connected tends to $e^{-e^{-c}}$. Moreover, in a certain precise sense, the only obstacle to connectivity (whp) is isolated vertices. |
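For what it's worth, the claim is also easy to check empirically; here is a small simulation sketch, assuming `networkx` is available (the parameters are arbitrary):
```
# Estimate P(G(n, 2 ln n / n) is connected) by Monte Carlo for a few values of n.
import math
import networkx as nx

def connectivity_rate(n, trials=200, seed=0):
    p = 2 * math.log(n) / n
    hits = 0
    for t in range(trials):
        G = nx.gnp_random_graph(n, p, seed=seed + t)
        hits += nx.is_connected(G)
    return hits / trials

for n in (50, 200, 800):
    print(n, connectivity_rate(n))  # the rate should approach 1 as n grows
```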
How can sentence similarity be used for sentiment analysis? I know we have Sentence-BERT, which can use cosine similarity to measure the distance between two vectors (the sentence embeddings), but has anyone used it for sentiment analysis specifically? | If I want to go from A to B, that doesn't get faster by taking a bus. But if I want to take 50 people from A to B, that actually will be done much faster.
A simple example: How do you make six boiled eggs? And how do you make 120 boiled eggs if you have a stove with four burners? That will involve some pipelining as well.
Let's say you have n uniform random variables from 0 to 1. The distribution of the average of these variables approaches normal with increasing n according to the central limit theorem. What if however, instead of all the variables being random, one of them was guaranteed to be 0, and one of them was guaranteed to be 1. This would arise in the following case: Let's say you have n=7 randomly generated numbers from 0 to 1 and they are, from smallest to largest, [.1419 .1576 .4854 .8003 .9572 .9649 .9706]. If you were to subtract the smallest number from all of the numbers and then divide all the numbers by the new maximum you would end up with [0 .0189 .4145 .7945 .9838 .9931 1]. In this way you have a set of n numbers where n-2 of them are random and the other two are guaranteed to be 0 and 1. I would like to know whether the central limit theorem still applies to numbers generated in this way. By visual inspection using MATLAB, it actually appears to approach normal quicker than when the numbers are all random, but I would like a mathematical reason as to why, especially considering that the central limit theorem states that all the numbers must be random. | Denote $X\_i, i=1,...,n$ the $U(0,1)$ independent RVs. The transformation described by the OP is (using the usual notation for order statistics),
$$Z\_i = \frac {X\_i-X\_{n,(1)}}{X\_{n,(n)}-X\_{n,(1)}} = R\_n^{-1}\cdot (X\_i-X\_{n,(1)})$$
where the double index in the minimum and maximum order statistic serve to remind us that they are functions of $n$. $R\_n$ is the range of the untransformed sample.
We want to consider
$$\frac 1n \sum\_{i=1}^nZ\_i \equiv \bar Z\_n = R\_n^{-1}\frac 1n \sum\_{i=1}^nX\_i - R\_n^{-1}X\_{n,(1)}$$
We have that
$$R\_n^{-1} \xrightarrow{p} 1,\;\;\; \frac 1n \sum\_{i=1}^nX\_i \xrightarrow{p} \frac 12,\;\; X\_{n,(1)}\xrightarrow{p} 0$$
So in all, applying Slutsky's lemma,
$$\bar Z\_n \xrightarrow{p} \frac 12 = \text{plim} \frac 1n \sum\_{i=1}^nX\_i \equiv \text{plim}\bar X\_n$$
So the sample average of the transformed sample is also a consistent estimator of the common expected value of the $X$'s. Note that $\text{Var}(\bar X\_n) = \frac 1{12n}$
Then, consider the manipulation
$$\sqrt{12n}\left(\bar Z\_n - \frac 12\right) = \\R\_n^{-1}\cdot \sqrt{12n}\left(\bar X\_n - \frac 12 \right) +\sqrt{12n}\left(\frac 12R\_n^{-1} -\frac 12\right) -\sqrt{12n}R\_n^{-1}X\_{n,(1)}$$
We examine each of the three components in turn:
**A)** By the **CLT** we have that $\sqrt{12n}\left(\bar X\_n - \frac 12 \right) \xrightarrow{d}\mathcal N (0,1)$. Since also $R\_n^{-1} \xrightarrow{p} 1$, then by Slutsky the first term converges in distribution to $\mathcal N (0,1)$.
**B)** We can write
$$\sqrt{12n}\left(\frac 12R\_n^{-1} -\frac 12\right) = \sqrt{3}\left(\frac {n(1-R\_n)}{\sqrt nR\_n}\right)$$
In [Dasgupta 2008](http://www.springer.com/mathematics/probability/book/978-0-387-75970-8) ch. 8 p. 108 Example 8.12, one can find for the sample range from an i.i.d. sample of $U(0,1)$ uniforms that $n(1-R\_n) \xrightarrow{d} \frac 12 \chi^2(4)$. So the numerator above converges in distribution while the denominator goes to infinity, and hence the whole term goes to zero.
**C)** We know that the minimum order statistic from a sample of non-negative random variables, needs to be scaled by $n$ in order to converge in distribution ([see this post](https://stats.stackexchange.com/questions/102691/limiting-distribution-of-the-first-order-statistic-of-a-general-distribution/102731#102731)). In other words convergence is "fast", and scaling the third term only by $\sqrt n$ doesn't cut it. Therefore we have that $\sqrt{12n}R\_n^{-1}X\_{n,(1)} \rightarrow0$.
So, we conclude that
$$\sqrt{12n}\left(\bar Z\_n - \frac 12\right) \xrightarrow{d} \mathcal N(0,1)$$
as does $\bar X\_n$, for the same shifting and scaling. |
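A quick simulation sketch of this conclusion (numpy only; the sample size and number of replications are arbitrary):
```
# Compare sqrt(12 n) * (Zbar - 1/2) for the transformed sample with N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 20000
X = rng.uniform(size=(reps, n))
mins = X.min(axis=1, keepdims=True)
maxs = X.max(axis=1, keepdims=True)
Z = (X - mins) / (maxs - mins)                  # the transformation from the question
T = np.sqrt(12 * n) * (Z.mean(axis=1) - 0.5)
print(T.mean(), T.std())                        # should be close to 0 and 1
```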
I have a set of solution nodes generated over a polar grid. I would like to convert / interpolate these solution nodes onto a Cartesian grid:
![polar-to-cart](https://i.imgur.com/6jI9yWX.png)
That is, using the image above, for each node in the Cartesian grid I would interpolate a value from the closest existing nodes (red).
Currently, my approach is to generate a kd-tree for the original solution nodes, then use a nearest-neighbor search to obtain the three closest nodes. I then use barycentric interpolation to obtain a value from these three points. The problem, however, is that my polar grid is much finer along the radial direction than it is in the azimuthal direction, which means that my nearest-neighbor search almost always selects points from the same radial. This has the result of creating "striations" in my new solution, instead of smoothly interpolating along the azimuthal direction (i.e., the results look no different than if I had simply mapped the nearest point to the "interpolated" point).
Unfortunately, I don't know how to achieve a better sampling without sacrificing the kd-tree and losing a lot of the speed improvements. Am I being thick-headed and missing an obvious solution? Or does anyone know a better way to approach this problem? | *Since this question was reopened and made more explicit, I would like to convert my comment into an answer. Now the OP wants to understand*
>
> why and when polynomial algorithms became of interest.
>
>
>
I especially focus on the sub-question:
>
> When did people realize the role and importance of efficient versus non-efficient algorithms?
>
>
>
---
Because algorithms, in their general sense, have existed since ancient times, it is hard to identify who first singled out polynomial algorithms for praise (and when and why). However, there is a famous person who explicitly advocated polynomial algorithms: Jack Edmonds, in the paper [Paths, Trees, and Flowers; 1965](http://www.disco.ethz.ch/lectures/fs12/seminar/paper/Tobias/2.pdf).
In Introduction, the author claims
>
> We describe an ***efficient algorithm*** for finding in a given graph a matching of maximum cardinality.
>
>
>
Then, in the second section, titled "Digression", the author writes:
>
> An explanation is due on the use of the words ***"efficient algorithm"***.
>
>
>
Then come the explanations:
>
> There is an obvious finite algorithm, but that algorithm increases in difficulty ***exponentially*** with the size of the graph. It is by no means obvious whether or not there exists an algorithm whose difficulty increases only ***algebraically*** with the size of the graph.
>
>
> When the measure of problem-size is reasonable and when the sizes assume values
> arbitrarily large, an asymptotic estimate of $\ldots$ the order of difficulty of an algorithm is theoretically important.
>
>
> For practical purposes ***the difference between algebraic and exponential order is often more crucial than the difference between finite and non-finite***.
>
>
> However, if only to motivate the search for good, practical algorithms, ***it is important to realize that it is mathematically sensible even to question their existence***. For one thing the task can then be described in terms of concrete conjectures.
>
>
>
---
***ADDED:*** I have just happened to find a third-party confirmation that it was Jack Edmonds who originally advocated polynomial algorithms.
The following is quoted from Section 2.18.1 of the book "Applied Combinatorics (second edition)" by Fred Roberts and Barry Tesman.
>
> A generally accepted principle is that an algorithm is *good* if it is polynomial. This idea is originally due to Edmonds [1965].
>
>
> |
Bayes' rule is given by:
$$P(\theta|X) = \frac{P(X|\theta)P(\theta)}{P(X)}$$
Where $X$ are observations and $\theta$ is some model parameter. I would like to use an alternate notation to more strongly differentiate between the prior $P(\theta)$ and posterior $P(\theta|X)$ distributions. Is it appropriate to write:
$$P(\theta\_\text{post}|X) = \frac{P(X|\theta\_\text{prior})P(\theta\_\text{prior})}{P(X)}$$
Can it be said that the posterior and prior describe the distributions of two different random variables namely $\theta\_\text{post}$ and $\theta\_\text{prior}$ respectively? Or are the prior and posterior different distributions of the same random variable $\theta$? So perhaps we should write:
$$P\_\text{post}(\theta|X) = \frac{P(X|\theta)P\_\text{prior}(\theta)}{P(X)}$$
Equally how should one denote the prior distribution?
$$\theta\_\text{prior} \sim N(0,1)$$
Or:
$$P\_\text{prior}(\theta) \sim N(0,1)$$ | $\theta$ is the same random variable in both the posterior and the prior. The difference is that in the posterior you are conditioning on the data. It's your understanding of the values $\theta$ can take after you've considered the data, whereas the prior is your understanding before considering the data. The $|X$ already distinguishes the two, so you don't need subscripts like $\theta\_{post}$ and $\theta\_{prior}$! They would be both redundant and confusing.
I also strongly suggest you *do not* drop the $|X$ part as in $P\_{post}(\theta)$. In Bayesian statistics, it's very important to remember what information you're conditioning on. Leaving the $|X$ in there will emphasize this.
Everyone knows computing speed has drastically increased since their invention, and it looks set to continue. But one thing is puzzling me: if you ran an electrical current through a material today, it would travel at the same speed as if you did it with the same material 50 years ago.
With that in mind, how is it computers have become faster? What main area of processor design is it that has given these incredible speed increases?
I thought maybe it could be one or more of the following:
* Smaller processors (less distance for the current to travel, but it just seems to me like you'd only be able to make marginal gains here).
* Better materials | When computers can do more computations per unit of time, they are seen as being faster. Each computation may not be done any faster than before, but there are more computations being done. A good analogy would be the number of steps that a runner takes. If a runner behaved according to Moore's law, the runner would be able to take twice as many steps every two years. In essence, the runner would be covering twice the distance in the same amount of time the runner did two years ago. Distance divided by time equals speed. 2 X Distance equals 2 X Speed. |
Is it possible to extract data points from moving average data?
In other words, if a set of data only has simple moving averages of the previous 30 points, is it possible to extract the original data points?
If so, how? | I try to put what whuber said into an answer. Let's say you have a large vector $\mathbf x$ with $n=2000$ entries. If you compute a moving average with a window of length $\ell=30$, you can write this as a vector matrix multiplication $\mathbf y = A\mathbf x$ of the vector $\mathbf x$ with the matrix
$$A=\frac{1}{30}\left(\begin{array}{cccccc}
1 & ... & 1 & 0 & ... & 0\\
0 & 1 & ... & 1 & 0 & ...\\
\vdots & & \ddots & & & \vdots\\
0 & ... & 1 & ... & 1 & 0\\
0 & ... & 0 & 1 & ... & 1
\end{array}\right)$$
which has $30$ ones which are shifted through as you advance through the rows until the $30$ ones hit the end of the matrix. Here the averaged vector $\mathbf y$ has 1970 dimensions. The matrix has $1970$ rows and $2000$ columns. Therefore, it is not invertible.
If you are not familiar with matrices, think about it as a linear equation system: you are searching for variables $x\_1,...,x\_{2000}$ such that the average over the first thirty yields $y\_1$, the average over the second thirty yields $y\_2$ and so on.
The problem with the equation system (and the matrix) is that it has more unknowns than equations. Therefore, you cannot uniquely identify your unknowns $x\_1,...,x\_n$. The intuitive reason is that you lose dimensions while averaging, because the first thirty dimensions of $\mathbf x$ don't get a corresponding element in $\mathbf y$, since you cannot shift the averaging window outside of $\mathbf x$.
One way to make $A$ or, equivalently the equation system, solvable is to come up with $30$ more equations (or $30$ more rows for $A$) that provide additional information (are linearly independent to all other rows of $A$).
Another, maybe easier, way is to use the pseudoinverse $A^\dagger$ of $A$. This generates a vector $\mathbf z = A^\dagger\mathbf y$ which has the same dimension as $\mathbf x$ and which has the property that it minimizes the quadratic distance between $\mathbf y$ and $A\mathbf z$ (see [wikipedia](http://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse)).
This seems to work quite well. Here is an example where I drew $2000$ examples from a Gaussian distribution, added five, averaged them, and reconstructed the $\mathbf x$ via the pseudoinverse.
![reconstruction of original signal from moving average using the pseudoinverse](https://i.stack.imgur.com/ewbco.png)
Many numerical programs offer pseudo-inverses (e.g. Matlab, numpy in python, etc.).
Here would be the python code to generate the signals from my example:
```
from numpy import *
from numpy.linalg import *
from matplotlib.pyplot import *
# get A and its inverse
A = (tril(ones((2000,2000)),-1) - tril(ones((2000,2000)),-31))/30.
A = A[30:,:]
pA = pinv(A) #pseudo inverse
# get x
x = random.randn(2000) + 5
y = dot(A,x)
# reconstruct
x2 = dot(pA,y)
plot(x,label='original x')
plot(y,label='averaged x')
plot(x2,label='reconstructed x')
legend()
show()
```
Hope that helps. |
Training would be bad if training data is not sufficient. Techniques like SMOTE or ADASYN can be used for oversampling. For image data, we can blur or change the angle to generate more samples from the same image.
My question is: how do you generate fake time series data? | ATM I know of [TSimulus](https://rts-gen.readthedocs.io/en/latest/index.html) and [TimeSynth](https://github.com/TimeSynth/TimeSynth) to generate data programatically in a controlled manner (instead of generating random data).
TSimulus allows to generate data via various [generators](https://rts-gen.readthedocs.io/en/latest/generators.html).
TimeSynth is capable of generating **signal types**
```
Harmonic functions(sin, cos or custom functions)
Gaussian processes with different kernels
Constant
Squared exponential
Exponential
Rational quadratic
Linear
Matern
Periodic
Pseudoperiodic signals
Autoregressive(p) process
Continuous autoregressive process (CAR)
Nonlinear Autoregressive Moving Average model (NARMA)
```
and **noise types**
```
White noise
Red noise
```
If you are looking for a graphical way to generate data [TimeSeriesMaker](https://github.com/mbonvini/TimeSeriesMaker) is the only tool able to do this. |
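If you only need something quick without those libraries, here is a minimal numpy sketch in the same spirit (a harmonic component plus AR(1) "red" noise; all parameters are arbitrary):
```
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(1000)
signal = 2.0 * np.sin(2 * np.pi * t / 50)        # periodic component
noise = np.zeros(len(t))
for i in range(1, len(t)):
    noise[i] = 0.8 * noise[i - 1] + rng.normal(scale=0.5)   # AR(1) "red" noise
series = signal + noise                          # the synthetic time series
```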
I am trying to get the amortized analysis for a complicated algorithm. I am wondering whether there are textbooks or illustrative examples that could serve as inspiration of techniques in amortized analysis. | You may be interested in the classic papers by Robert Tarjan and others:
* ["The Amortized Computational Complexity"](http://www.cs.princeton.edu/courses/archive/spr06/cos423/Handouts/Amortized.pdf) by Robert Tarjan on a survey of amortized analysis of several algorithms and data structures.
* ["Amortized Efficiency Of List Update and Paging Rules"](https://www.cs.cmu.edu/~sleator/papers/amortized-efficiency.pdf) by Daniel Sleator and Robert Tarjan on self-organizing lists.
* ["Self-Adjusting Binary Search Trees"](https://www.cs.cmu.edu/~sleator/papers/self-adjusting.pdf) by Daniel Sleator and Robert Tarjan on splay trees.
* ["Efficiency of a Good But Not Linear Set Union Algorithm"](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.437.8198&rep=rep1&type=pdf) by Robert Tarjan on disjoint-set data structures.
* ["Fibonacci Heaps and Their Uses in Improved Network
Optimization Algorithms"](http://bioinfo.ict.ac.cn/~dbu/AlgorithmCourses/Lectures/Fibonacci-Heap-Tarjan.pdf) by Michael Fredman and Robert Tarjan on Fibonacci heaps.
The textbook [CLRS: Introduction to Algorithms; 3rd edition](https://mitpress.mit.edu/books/introduction-algorithms-third-edition) also contains chapters on Fibonacci heaps (Chapter 19) and Disjoint-set data structures (Chapter 21).
I also find the lecture note ["Amortized Analysis Explained"](https://www.cs.princeton.edu/~fiebrink/423/AmortizedAnalysisExplained_Fiebrink.pdf) by Rebecca Fiebrink at Princeton University very helpful. It contains basic examples, in-depth examples, and some more involved examples. |
I often read statements such as "the run-time of algorithm X is polynomial **in m and n**". Indeed, when a problem has two parameters, it is possible that the run-time of an algorithm is polynomial in one parameter but not the other one, so if this is not the case, it is important to emphasize that it is polynomial in both parameters.
However, I have never read a statement such as "the problem is NP-hard **in m and n**". A problem is always claimed to be NP-hard, period. My first question is: why? Apparently the same rationale is true here too: a problem may be NP-hard in one parameter, but at the same time, it may have an algorithm with run-time polynomial in the other parameter.
My second question: what is an accurate description of a problem with two parameters: $m$ and $n$, that (1) for a fixed $m$, can be solved in time polynomial in $n$, (2) for a fixed $n$, it is NP-hard (with respect to $m$)? | >
> A problem is always claimed to be NP-hard, period.
>
>
>
Indeed, a problem's definition already contains a specification of its parameters. (See the entries in [Richard Karp's seminal collection of NP-complete problems](https://onlinelibrary.wiley.com/doi/abs/10.1002/net.1975.5.1.45) for several examples.) Usually, there is no need to make explicit reference to the parameters *per se*, as they are "automatically scaled" by the input length; for your two parameters $m$ and $n$, for instance, the input could simply be encoded as $0^m1^n$. Hence, for abstract problem instances, it usually suffices to refer strictly to the input length, which is indeed how time complexity is defined (i.e., as the number of computation steps as a function of the input length).
Studying how different requirements on the parameters impact the problem's complexity is the object of study of [parameterized complexity theory](https://en.wikipedia.org/wiki/Parameterized_complexity). For example, the set of SAT formulas with $k$ many variables, $k$ being a fixed parameter (i.e., constant), is decidable in polynomial time (in the formula's length); in parameterized complexity terms, this means SAT is *fixed-parameter tractable*. It is only as the number of variables is also allowed to scale (in the input length) that SAT becomes NP-hard. |
I understand that an NP-hard problem is a problem X such that any problem in NP can be reduced to X in polynomial time.
Does there exist a problem that is hard to solve but problems in NP cannot be reduced to it in polynomial time i.e. it does not satisfy the definition of NP-hard but is strictly not in NP? If not, what is the proof that if I can efficiently solve a problem that is strictly not in NP, then I can efficiently solve every problem in NP?
I suppose an analogous question can also be asked for P and NP. If a problem is in NP, then can every problem in P be reduced in polynomial time to it?
Note: The [linked question](https://cs.stackexchange.com/questions/9063/np-hard-problems-that-are-not-in-np-but-decidable) asks the converse problem - does there exist a problem outside NP that cannot be reduced to a problem in NP. Indeed there are, for example the "(Non-)equivalence of two regular expressions" problem. However, from my understanding, every problem in NP can be reduced to this problem in polynomial time. Please correct me if I have misunderstood. | >
> I suppose an analogous question can also be asked for P and NP. If a problem is in NP, then can every problem in P be reduced in polynomial time to it?
>
>
>
No. There is a stupidly simple argument here: the empty language (i.e., a problem with no yes-instances) is in NP, but no problem in P can be reduced to it.
>
> Does there exist a problem that is hard to solve but problems in NP cannot be reduced to it in polynomial time i.e. it does not satisfy the definition of NP-hard but is strictly not in NP?
>
>
>
Yes (conditional on some complexity theory assumptions). Let's take some stupidly hard problem $L$. To keep this answer as simple as possible we will assume that $L$ is undecidable, but this is not necessary; a language obtained by applying the Time Hierarchy Theorem to the Ackermann function would easily suffice. Assume that instances of $L$ can be padded, i.e., adding extra zeroes at the end does not change whether an instance is a yes- or no-instance. Starting with an undecidable language $L'$, $L$ could be obtained by taking every string from $L'$ and appending a $1$ followed by an arbitrary number of zeroes. Clearly $L$ is still undecidable.
Now consider the subset of $L$ containing only those strings whose length can be expressed as a tower of powers of two (i.e., $1, 2, 2^2, 2^{2^2}, 2^{2^{2^2}}, \ldots$). We split this subset into three languages $L\_0, L\_1, L\_2$, depending on the height of the tower of powers of two. $L\_0$ contains those strings where the tower has a height which is a multiple of $3$, $L\_1$ contains strings whose height is $1$ modulo $3$, $L\_2$ those strings whose height is $2$ modulo $3$.
Neither $L\_0, L\_1$ or $L\_2$ are in $NP$. If they were, we could decide $L$ by padding instances to make their length the appropriate tower of powers of two, and then solving them using the $NP$ algorithm. Obviously the padding can increase the length of the instance exponentially, but if $L$ is sufficiently hard (e.g., undecidable) this does not matter.
Now, to answer the original question, can every problem in $NP$ be reduced in polynomial time to $L\_0, L\_1$ and $L\_2$ (all of which are outside $NP$)? If we take some problem instance of a problem in $NP$, then at least one of the reductions (to $L\_0, L\_1$ or $L\_2$) will result in an exponentially smaller instance. This is because the reduction, being polynomial, cannot increase the size of the instance too much, so (due to the large gaps in instance sizes) must - for at least one of $L\_0, L\_1$ or $L\_2$ - give a much smaller instance as output.
Intuitively, this sounds very unlikely. It would mean we could take an arbitrary problem in $NP$ and in polynomial time output an exponentially smaller instance (albeit of a much harder problem).
Formally, this would mean that $NP\subseteq P/poly$, a consequence which is regarded as unlikely since it would imply the collapse of the polynomial hierarchy. |
```
for (i = 2; i < n; i = i * i) {
for (j = 1; j < i / 2; j = j + 1) {
sum = sum + 1;
}
}
```
I know that the outer loop can run for a maximum of $n^2$ times and the inner loop will run for $\frac{n^2}{4}$ times. | The second loop runs in $O(i)$. The first loop goes over the powers $2^{2^0}, 2^{2^1}, 2^{2^2}, \ldots$, until reaching $n$. So the overall running time is
$$
O(2^{2^0} + 2^{2^1} + 2^{2^2} + \cdots + 2^{2^m}),
$$
where $m$ is the maximal integer such that $2^{2^m} < n$. We can bound
$$
2^{2^0} + 2^{2^1} + 2^{2^2} + \cdots + 2^{2^m} \leq 2^1 + 2^2 + 2^3 + \cdots + 2^{2^m} \leq 2^{2^m+1} < 2n.
$$
Therefore the overall running time is $O(n)$.
In fact, the same analysis gives the optimal bound $\Theta(2^{2^{\lfloor \log\_2 \log\_2 (n-1) \rfloor}})$, with a bit more work. |
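A small empirical sanity check of this bound, counting the inner-loop iterations directly and comparing with the tight bound above (illustrative only):
```
import math

def total_iterations(n):
    count, i = 0, 2
    while i < n:
        count += max(i // 2 - 1, 0)   # inner loop runs for j = 1, ..., i//2 - 1
        i = i * i
    return count

for n in (10, 100, 10_000, 10**8):
    tight = 2 ** (2 ** math.floor(math.log2(math.log2(n - 1))))
    # the count stays below n and within a small constant factor of `tight`
    print(n, total_iterations(n), tight)
```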
I want to calculate the variance of the maximum likelihood estimator of a Rayleigh distribution using $N$ observations.
The density probability function of this distribution is :
$$
f(\sigma,y\_i) = \frac{y\_i}{\sigma^2} e^{-\frac{y\_i^2}{2\sigma^2}}
$$
I also know that the mean is $\sigma \sqrt{\frac{\pi}{2}}$, its variance is $\frac{4 - \pi}{2}\sigma^2$ and its raw moments are $E[Y\_i^k] = \sigma^k 2^{\frac{k}{2}}\Gamma(1+\frac{k}{2})$. Knowing this, I was able to calculate the maximum likelihood estimator $\hat{\sigma}^{2,ML} = \frac{\sum\_{i=1}^{N} y\_i^2}{2N}$
I calculated the mean of this estimator : $m\_{\hat{\sigma}^{2,ML}} = E[\frac{\sum\_{i=1}^{N} y\_i^2}{2N}] = \frac{2N \sigma^2}{2N} = \sigma^2$ knowing that $E[y\_i^2] = \sigma^2 2 \Gamma(2) = 2\sigma^2$.
For the variance, however, I do not see how to do it. I have tried to do as follows:
$$
Var(Z) = E[Z^2] - E[Z]^2 = E[(\frac{\sum\_{i=1}^{N} y\_i^2}{2N})^2] - E[\frac{\sum\_{i=1}^{N} y\_i^2}{2N}]^2 = \frac{1}{4N^2} E[(\sum\_{i=1}^{N}y\_i^2)^2] - \sigma^4
$$
**My problem is** that I do not know how to calculate $E[(\sum\_{i=1}^{N}y\_i^2)^2]$. Could someone give me a hint? | What about just using a GLM with Poisson error structure and log link? But the idea about the binomial may be more powerful.
I'm attempting to prove a problem is NPc, but I'm not sure which one would be optimal to use,
The problem is:
There are $n$ boars to be caged, and $m$ cages which each cage being able to hold $k$ boars.
Any boar can be put in any cage, but certain pairs of boars can't be put together in the same cage.
I'm thinking of SAT or Knapsack to reduce to this problem, but not sure which. If anyone can lead me to the right direction or get me started I'd really appreciate it, thanks | Apply the distributive law on $x\land(1\lor y)$ and see what you get. |
In general setting of gradient descent algorithm, we have $x\_{n+1} = x\_{n} - \eta \* gradient\_{x\_n}$ where $x\_n$ is the current point, $\eta$ is the step size and $gradient\_{x\_n}$ is the gradient evaluated at $x\_n$.
I have seen in some algorithm, people uses **normalized gradient** instead of **gradient**. I wanted to know what is the difference in using **normalized gradient** and simply **gradient**. | In a gradient descent algorithm, the algorithm proceeds by finding a direction along which you can find the optimal solution. The optimal direction turns out to be the gradient. However, since we are only interested in the direction and not necessarily how far we move along that direction, we are usually not interested in the magnitude of the gradient. Thereby, normalized gradient is good enough for our purposes and we let $\eta$ dictate how far we want to move in the computed direction. However, if you use unnormalized gradient descent, then at any point, the distance you move in the optimal direction is dictated by the magnitude of the gradient (in essence dictated by the surface of the objective function i.e a point on a steep surface will have high magnitude whereas a point on the fairly flat surface will have low magnitude).
From the above, you might have realized that normalization of gradient is an added controlling power that you get (whether it is useful or not is something upto your specific application). What I mean by the above is:
1] If you want to ensure that your algorithm moves in fixed step sizes in every iteration, then you might want to use normalized gradient descent with fixed $\eta$.
2] If you want to ensure that your algorithm moves in step sizes which is dictated precisely by you, then again you may want to use normalized gradient descent with your specific function for step size encoded into $\eta$.
3] If you want to let the magnitude of the gradient dictate the step size, then you will use unnormalized gradient descent.
There are several other variants, e.g., you can let the magnitude of the gradient decide the step size but put a cap on it, and so on.
Now, step size clearly has influence on the speed of convergence and stability. Which of the above step sizes works best depends purely on your application (i.e objective function). In certain cases, the relationship between speed of convergence, stability and step size can be analyzed. This relationship then may give a hint as to whether you would want to go with normalized or unnormalized gradient descent.
To summarize, there is no difference between normalized and unnormalized gradient descent (as far as the theory behind the algorithm goes). However, it has practical impact on the speed of convergence and stability. The choice of one over the other is purely based on the application/objective at hand. |
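To make the two update rules concrete, here is a minimal sketch on a toy quadratic objective (the objective, step size, and iteration count are arbitrary illustrations, not a recommendation):
```
# f(x) = 0.5 * x^T A x; its gradient is A x.
import numpy as np

A = np.diag([1.0, 100.0])                 # a badly scaled quadratic bowl
grad = lambda x: A @ x

def gradient_descent(x0, eta, steps, normalized):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        if normalized:
            g = g / np.linalg.norm(g)     # keep only the direction
        x = x - eta * g
    return x

# unnormalized: step length varies with the local gradient magnitude
print(gradient_descent([1.0, 1.0], eta=0.009, steps=200, normalized=False))
# normalized: every step has length exactly eta
print(gradient_descent([1.0, 1.0], eta=0.009, steps=200, normalized=True))
```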
These two seem very similar and have almost an identical structure. What's the difference? What are the time complexities for different operations of each? | Both [binary search trees](http://www.wikipedia.org/wiki/Binary_search_trees) and [binary heaps](http://wikipedia.org/wiki/binary_heap) are tree-based data structures.
Heaps require the nodes to have a priority over their children. In a max heap, each node's children must be less than itself. This is the opposite for a min heap.
Max Heap:
>
> ![Binary Max Heap](https://i.stack.imgur.com/ld3It.png)
>
>
>
Binary search trees (BST) maintain an ordering invariant: for every node, all keys in its left subtree are smaller and all keys in its right subtree are larger, so an in-order traversal visits the keys in sorted order. The tree **must** maintain this sorted order, unlike heaps.
Binary Search Tree:
>
> ![Binary Search Tree](https://i.stack.imgur.com/1REW1.png)
>
>
>
BST have average of $O(\log n)$ for insertion, deletion, and search.
Binary Heaps have average $O(1)$ for findMin/findMax and $O(\log n)$ for insertion and deletion. |
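A tiny illustration of the heap side, using Python's built-in `heapq` (a binary min-heap stored in a list):
```
import heapq

items = [7, 2, 9, 4, 1]
heap = []
for v in items:
    heapq.heappush(heap, v)   # O(log n) insertion, maintains the heap property
print(heap[0])                # O(1) findMin -> 1
print(heapq.heappop(heap))    # O(log n) deleteMin -> 1
print(sorted(items))          # the order an in-order BST traversal would give
```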
I have multiple sets, e.g.,
$$\{1, 2\}, \{2, 3, 4\}, \{1, 4\}$$
Each variable $1, 2, 3, 4$ is binary. I need to represent the following condition without additional variables
$$(1 \land 2) \lor (2 \land 3 \land 4) \lor (1 \land 4)$$
Basically, I need to enforce that either $1$ and $2$ are 1s, or $2$ and $3$ and $4$ are ones, or $1$ and $4$ are 1s, but not all of them. Constants can be added as needed, as well as equalities, inequalities, etc. How can I do that? | You are looking for a four-dimensional convex polytope that contains all of the points $(x\_1,x\_2,x\_3,x\_4)$ that satisfy your condition, and but not any point $(x\_1,x\_2,x\_3,x\_4)$ that doesn't satisfy your condition. (That's because any system of linear inequalities forms a convex polytope.)
It's not quite clear to me from your question about what your condition is, so I'll try answering your question with both plausible interpretations I see.
Interpretation #1:
------------------
In the first possible interpretation, the boolean formula is $(x\_1 \land x\_2) \lor (x\_2 \land x\_3 \land x\_4) \lor (x\_1 \land x\_4)$. This means the polytope needs to contain all of the following seven points:
$$(0,1,1,1), (1,0,0,1), (1,0,1,1), (1,1,0,0), (1,1,0,1), (1,1,1,0), (1,1,1,1)$$
and not any of the other nine points where the formula is false.
There are various ways to check whether there exists such a polytope. One way is to enumerate all possible inequalities $c\_1 x\_1 + c\_2 x\_2 + c\_3 x\_3 + c\_4 x\_4 + c\_5 \ge 0$ and look for combinations $(c\_1,\dots,c\_5)$ such that this inequality is true for all of the seven points listed above. Then, we check whether a system of constraints obtained by using all of those inequalities has the desired property (in particular, each of the other nine points violates at least one of these inequalities). You can then minimize the set of constructed inequalities, to remove redundancies.
Here is one such system of constraints that works:
$$\begin{align\*}
x\_2+x\_4-1 &\ge 0\\
x\_1+x\_3-1&\ge 0\\
x\_1+x\_2-x\_3+x\_4-1 &\ge 0
\end{align\*}$$
Another way to have obtained this system of constraints would be to convert the DNF formula you listed to CNF, and then translate each "OR" constraint into a single inequality using the techniques from [Express boolean logic operations in zero-one integer linear programming (ILP)](https://cs.stackexchange.com/q/12102/755). The formula $(x\_1 \land x\_2) \lor (x\_2 \land x\_3 \land x\_4) \lor (x\_1 \land x\_4)$ is equivalent to $(x\_1 \lor x\_2) \land (x\_1 \lor x\_3) \land (x\_1 \lor x\_4) \land (x\_2 \lor x\_4)$. That then yields the following system of constraints, which also works:
$$\begin{align\*}
x\_1+x\_2-1 &\ge 0\\
x\_1+x\_3-1 &\ge 0\\
x\_1+x\_4-1 &\ge 0\\
x\_2+x\_4-1 &\ge 0
\end{align\*}$$
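As a quick sanity check, here is a brute-force sketch over all $2^4$ assignments confirming that this system accepts exactly the points where the original formula is true:
```
from itertools import product

def formula(x1, x2, x3, x4):
    return (x1 and x2) or (x2 and x3 and x4) or (x1 and x4)

def system(x1, x2, x3, x4):
    return (x1 + x2 >= 1) and (x1 + x3 >= 1) and (x1 + x4 >= 1) and (x2 + x4 >= 1)

# True: the constraints and the DNF formula agree on every 0/1 assignment
print(all(bool(formula(*p)) == system(*p) for p in product((0, 1), repeat=4)))
```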
Interpretation #2:
------------------
In the second plausible interpretation, you want the boolean formula
$$(x\_1 \land x\_2 \land \neg x\_3 \land \neg x\_4) \lor (\neg x\_1 \land x\_2 \land x\_3 \land x\_4) \lor (x\_1 \land \neg x\_2 \land \neg x\_3 \land x\_4).$$
Thus, you want the polytope to contain only the three points
$$(1,1,0,0),(0,1,1,1),(1,0,0,1).$$
We can apply the same methods to get a system of inequalities for this formula. Here is one solution that works:
$$\begin{align\*}
-x\_1-x\_3+1 &\ge 0\\
-x\_2+x\_3-x\_4+1 &\ge 0\\
x\_2+x\_4-1 &\ge 0\\
x\_1+x\_3-1 &\ge 0\\
x\_1+x\_2-x\_3+x\_4-1 &\ge 0
\end{align\*}$$ |
Let $G=(V,E)$ be an undirected graph. The **independent node degree** $d^i(v)$ of a node $v$ is the maximum size of a set of independent neighbors of $v$. Denote by $\Delta^i(G) = \max \{d^i(v) \mid v \in V\}$. Obviously $\Delta^i(G) \le \alpha(G)$, where $\alpha(G)$ is the independence number of $G$.
Examples: $\Delta^i(G)=1$ for complete graphs. $\Delta^i(G)\le 5$ for unit disc graphs. $\Delta^i(G)=n-1$ for a star graph with $n$ nodes.
My questions:
* What natural graph classes have $\Delta^i(G)\in O(1)$?
* What is known about the distribution of $d^i(v)$ in random graphs?
* For dense graphs the expected value of the independent node degree should be small. Are there any quantitative results? E.g. relating $m/n$ with $\Delta^i(G)$? | MinCC Graph Motif is NP-hard when the graph is a path (even APX-hard). Given a graph with colors on the vertices and a set of colors, find a subgraph matching the set of colors and minimizing the number of connected comp. See Complexity issues in vertex-colored graph pattern matching, JDA 2011. |
I am looking for a concise introductory text on algorithms with a high ratio $$\frac{\mbox{theory covered}}{\mbox{total number of pages}}.$$ It should begin at the beginning but then progress quickly without spending too much time on real world examples, elementary proof techniques, etc. As a research mathematician I have a solid background in mathematics which I happily employ to understand formalisms and condensed proofs, for example.
Do there exist such texts? Any recommendations? | **Algorithm Design by Kleinberg Tardos** This book helps develop a concrete understanding of how to design good algorithms and talk of their correctness and efficiency. (I studied this in my first year at college, very much readable)
For an online copy/lecture notes/reference, (as suggested by Suresh Venkat) go with **Jeff Erikson's [lecture notes](http://www.cs.uiuc.edu/~jeffe/teaching/algorithms/)**. They are really awesome! |
I've got test Travelling Salesman Problem data with known optimal solutions. It's in the form of a set of 2D points. Specifically, this is the TSPLIB format; sources are [here](http://www.math.uwaterloo.ca/tsp/world/wi29.tsp) and [here](http://www.math.uwaterloo.ca/tsp/world/wilog.html).
I started a simple test with "Western Sahara - 29 Cities" (wi29) and found that my algorithm rapidly found a few particular solutions better than the proposed optimum.
---
I checked one of them manually and didn't find an error. So, I guess, here are three options.
1. I did a mistake.
2. Wrong optimum.
3. Different problems were solved.
---
1 and 2. My solution tour is:
>
> 17>18>19>15>22>23>21>29>28>26>20>25>27>
> 24>16>14>13>9>7>3>4>8>12>10>11>6>2>1>5
>
>
>
*(will list my checking calculations if requested)*
Rounded length: 26040.76.
Optimal reference value: 27603.
3. I can't find a particular task descriptions and especially rounding policy for the TSPLib examples optimums. This is important, because they're looking rounded or discretized in another manner, but simple result rounding isn't looks like it. | In the TSPLIB norm, the travel cost between each pair of cities is the Euclidean distance between the points rounded to the nearest integer (not the distance rounded to two decimal places). |
Is there any data structure that maintains a collection of sets (over a finite ground set) supporting the following operations? Any sublinear running time will be appreciated.
1. Init an empty set.
2. Add an element to a set.
3. Given two set, report whether they intersect. | There are data structures that allow you to do this in less than linear time, even for worst-case inputs. See <http://research.microsoft.com/pubs/173795/vldb11intersection.pdf> (and the papers references in there).
If your two sets S and T have a large intersection and you have a dictionary for S, looking up elements of T in random order should quickly give you a common element. The most difficult case is when the intersection size is 0 or 1. |
This question is something I've wondered about for a while.
When people describe the P vs. NP problem, they often compare the class NP to creativity. They note that composing a Mozart-quality symphony (analogous to an NP task) seems much harder than verifying that an already-composed symphony is Mozart-quality (which is analogous to a P task).
But is NP really the "creativity class?" Aren't there plenty of other candidates? There's an old saying: "A poem is never finished, only abandoned." I'm no poet, but to me, this is reminiscent of the idea of something for which there is no definite right answer that can be verified quickly...it reminds me more of coNP and problems such as TAUTOLOGY than NP or SAT. I guess what I'm getting at is that it's easy to verify when a poem is "wrong" and needs to be improved, but difficult to verify when a poem is "correct" or "finished."
Indeed, NP reminds me more of logic and left-brained thinking than creativity. Proofs, engineering problems, Sudoku puzzles, and other stereotypically "left-brained problems" are more NP and easy to verify from a quality standpoint than than poetry or music.
So, my question is: Which complexity class most precisely captures the totality of what human beings can accomplish with their minds? I've always wondered idly (and without any scientific evidence to support my speculation) if perhaps the left-brain isn't an approximate SAT-solver, and the right-brain isn't an approximate TAUTOLOGY-solver. Perhaps the mind is set up to solve PH problems...or perhaps it can even solve PSPACE problems.
I've offered my thoughts above; I'm curious as to whether anyone can offer any better insights into this. To state my question succinctly: I am asking which complexity class should be associated with what the human mind can accomplish, and for evidence or an argument supporting your viewpoint. Or, if my question is ill-posed and it doesn't make sense to compare humans and complexity classes, why is this the case?
Thanks.
**Update**: I've left everything but the title intact above, but here's the question that I really meant to ask: Which complexity class is associated with what the human mind can accomplish *quickly*? What is "polynomial human time," if you will? Obviously, a human can simulate a Turing machine given infinite time and resources.
I suspect that the answer is either PH or PSPACE, but I can't really articulate an intelligent, coherent argument for *why* this is the case.
Note also: I am mainly interested in what humans can approximate or "do most of the time." Obviously, no human can solve hard instances of SAT. If the mind is an approximate *X*-solver, and *X* is complete for class *C*, that's important. | I don't claim this is a complete answer, but here are some thoughts that are hopefully along the lines of what you're looking for.
NP roughly corresponds to "puzzles" (viz. the NP-completeness of Sudoku, Minesweeper, Free Cell, etc., when these puzzles are suitably generalized to allow $n \to \infty$). PSPACE corresponds to "2-player games" (viz. the PSPACE-completeness of chess, go, etc.). This is not news.
People generally seem to do alright with finite instances of NP-complete puzzles, and yet find them non-trivial enough to be entertaining. The finite instances of PSPACE-complete games that we play are considered some of the more difficult intellectual tasks of this type. This at least suggests that PSPACE is "hitting the upper limits" of our abilities. (Yet our opponents in these PSPACE-complete games are generally other people. Even when the opponents are computers, the computers aren't perfect opponents. This heads towards the question of the power of interactive proofs when the players are computationally limited. There is also the technicality that some generalizations of these games are EXP-complete instead of PSPACE-complete.)
To an extent, the problem sizes that arise in actual puzzles/games have been calibrated to our abilities. 4x4 Sudoku would be too easy, hence boring. 16x16 Sudoku would take too much time (not more than the lifetime of the universe, but more than people are generally willing to sit to solve a Sudoku puzzle). 9x9 seems to be the "Goldilocks" size for people solving Sudoku. Similarly, playing Free Cell with a deck of 4 suits of 13 cards each and 4 free cells seems to be about the right difficulty to be solvable yet challenging for most people. (On the other hand, one of the smartest people I know is able to solve Free Cell games as though she were just counting natural numbers "1,2,3,4,...") Similarly for the size of Go and Chess boards.
Have you ever tried to compute a 6x6 permanent by hand?
I suppose the point is that if you take *natural* problems in classes significantly above PSPACE (or EXP), then the only finite instances that people are capable of solving seem to be so small as to be uninteresting. Part of the reason "natural" is necessary here is that one can take a natural problem, then "unnaturally" modify all instances of size $< 10^{10}$ so that for all instances a human would ever try, the problem becomes totally intractable, regardless of its asymptotic complexity.
Conversely, for problems in EXP, any problem size below the "heel of the exponential" has a chance of being solvable by most people in reasonable amounts of time.
As to the rest of PH, there aren't many (any?) natural games people play with a fixed number of rounds. This is also somehow related to the fact that we don't know of many natural problems complete for levels of PH above the third.
As mentioned by Serge, FPT has a role to play here, but (I think) mostly in the fact that some problems naturally have more than one "input size" associated with them. |
My question is similar to [this splitting question](https://cs.stackexchange.com/questions/14713/algorithm-for-splitting-array-into-subarrays-with-sums-close-to-the-target-value), but my objective function is different.
Looking for an algorithm to split array of $n$ positive (integer) numbers into $N$ contiguous non-empty subarrays ($N<n$) of approximately same sums:
$$
\min\_{\text{splits}} \,(\max\_j S\_j - \min\_j S\_j),
$$
where $S\_j$ is sum of numbers in $j$-th subarray $(j=1,\ldots,N)$.
E.g., best splitting of $100, 1, 1, 103, 90$ into three subarrays is $100,1,1|103|90$.
Typically $n\approx~10^6-10^8$, $N\approx 10-100$.
I suspect it would be some greedy approach... | This is a polynomial time algorithm:
Let $S^\*\_j$ be the sum of the elements in the $j$-th subarray in an optimal solution and guess $m^\* = \min\_j S^\*\_j$ (there are only polynomially many choices of $m^\*$).
Given a way to split an array into $k$ contiguous subarrays of sums $S\_1, \dots, S\_k$, define the *cost* of such a subdivision as:
$$
\begin{cases}
\max\_i S\_i & \mbox{if } \min\_i S\_i \ge m^\* \\
+\infty & \mbox{ otherwise.}
\end{cases}
$$
Let $OPT\_{m^\*}[i][k]$ be the cost to split the array consisting of the first $i$ input elements into $k$ contiguous (non-empty) subarrays, and let $x\_i$ be the $i$-th input element.
Then, for $i,k \ge 1$,
$$
OPT\_{m^\*}[i][k] = \min\_{j=0, \ldots, i-1}
\begin{cases}
\displaystyle\max\, \left\{ OPT\_{m^\*}[j][k-1], \sum\_{h=j+1}^i x\_h \right\} & \mbox{if } \displaystyle\sum\_{h=j+1}^i x\_h \ge m^\* \\[10pt]
+\infty & \mbox{otherwise}
\end{cases}.
$$
Where $OPT\_{m^\*}[0][0] = 0$ and $OPT\_{m^\*}[i][0] = OPT\_{m^\*}[0][k] = +\infty$ for all $i,k > 0$.
The measure of an optimal solution to the original problem will be $OPT\_{m^\*}[n][N] - m^\*$ and you can reconstruct where to split the input array using standard techniques (e.g., by inspecting the dynamic programming table in reverse order or by storing the value of $j$ chosen for each entry $OPT\_{m^\*}[i][k]$). |
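Here is a small brute-force sketch of this dynamic program (0-indexed Python; it tries every contiguous-segment sum as a candidate $m^\*$, so it only illustrates the recurrence on tiny inputs and will not scale to $n \approx 10^6$):
```
from itertools import accumulate

def split_min_range(x, N):
    n = len(x)
    prefix = [0] + list(accumulate(x))            # prefix[i] = x[0] + ... + x[i-1]
    seg = lambda j, i: prefix[i] - prefix[j]      # sum of the subarray x[j..i-1]
    candidates = {seg(j, i) for j in range(n) for i in range(j + 1, n + 1)}
    INF = float("inf")
    best = INF
    for m_star in candidates:
        # OPT[i][k]: minimal possible maximum part sum when the first i elements
        # are split into k non-empty parts, each of sum >= m_star (INF if impossible)
        OPT = [[INF] * (N + 1) for _ in range(n + 1)]
        OPT[0][0] = 0
        for i in range(1, n + 1):
            for k in range(1, N + 1):
                for j in range(i):
                    s = seg(j, i)
                    if s >= m_star and OPT[j][k - 1] < INF:
                        OPT[i][k] = min(OPT[i][k], max(OPT[j][k - 1], s))
        if OPT[n][N] < INF:
            best = min(best, OPT[n][N] - m_star)
    return best

print(split_min_range([100, 1, 1, 103, 90], 3))   # 13, i.e. 100,1,1 | 103 | 90
```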
Miller and Chapman (2001) argue that it is absolutely inappropriate to control for non-independent covariates that are related to both the independent and dependent variables in an observational (non-randomized) study - even though this is routinely done in the social sciences. How problematic is it to do so? How is the best way to deal with this problem? If you routinely control for non-independent covariates in an observational study in your own research, how do you justify it? Finally, is this a fight worth picking when arguing methodology with ones colleagues (i.e., does it really matter)?
Thanks
------
Miller, G. A., & Chapman, J. P. (2001). Misunderstanding analysis of covariance. Journal
of Abnormal Psychology, 110, 40-48. - <http://mres.gmu.edu/pmwiki/uploads/Main/ancova.pdf> | It is as problematic as the degree of correlation.
The irony is that you wouldn't bother controlling if there weren't some expected correlation with one of the variables. And, if you expect your independent variable to affect your dependent variable, then it's necessarily somewhat correlated with both. However, if it's highly correlated, then perhaps you shouldn't be controlling for it, since it's tantamount to controlling out the actual independent or dependent variable.
I have trained a model for **spam classification** -
This is my code -
```py
X_train, X_test, y_train, y_test = train_test_split(data['text'], data['label'], test_size = 0.4, random_state = 1)
cv = CountVectorizer()
cv.fit(X_train)
cv.fit(X_test)
X_train = cv.transform(X_train)
X_test = cv.transform(X_test)
model = LogisticRegression(solver='lbfgs')
model.fit(X_train, y_train)
```
After that I also completed testing; it gives me an accuracy of about **97%**.
Now I want to predict whether a new SMS/email is spam or not. What I am doing is -
```py
new = 'Hey there you got a sale here on website'
new = cleanText(new)
cv.fit([new])
new = cv.transform([new])
model.predict(new)
```
It gives me an error
>
> **ValueError**: X has 4 features per sample; expecting 4331
>
>
>
Please tell me where I am going wrong? | If you really just want to guess the sign, you should just build a new target : 0 if the sign is negative 1 if the sign is positive... That would fit with your binary classification approach and the metrics you want to use. |
I am currently running some mixed effect linear models.
I am using the package "lme4" in R.
My models take the form:
```
model <- lmer(response ~ predictor1 + predictor2 + (1 | random_effect))
```
Before running my models, I checked for possible multicollinearity between predictors.
I did this by:
Make a dataframe of the predictors
```
dummy_df <- data.frame(predictor1, predictor2)
```
Use the "cor" function to calculate Pearson correlation between predictors.
```
correl_dummy_df <- round(cor(dummy_df, use = "pair"), 2)
```
If "correl\_dummy\_df" was greater than 0.80, then I decided that predictor1 and predictor2 were too highly correlated and they were not included in my models.
In doing some reading, there would appear more objective ways to check for multicollinearity.
Does anyone have any advice on this?
The "Variance Inflation Factor (VIF)" seems like one valid method.
VIF can be calculated using the function "corvif" in the AED package (non-cran). The package can be found at <http://www.highstat.com/book2.htm>. The package supports the following book:
Zuur, A. F., Ieno, E. N., Walker, N., Saveliev, A. A. & Smith, G. M. 2009. Mixed effects models and extensions in ecology with R, 1st edition. Springer, New York.
Looks like a general rule of thumb is that if VIF is > 5, then multicollinearity is high between predictors.
Is using VIF more robust than simple Pearson correlation?
**Update**
I found an interesting blog at:
<http://hlplab.wordpress.com/2011/02/24/diagnosing-collinearity-in-lme4/>
The blogger provides some useful code to calculate VIF for models from the lme4 package.
I've tested the code and it works great. In my subsequent analysis, I've found that multicollinearity was not an issue for my models (all VIF values < 3). This was interesting, given that I had previously found high Pearson correlation between some predictors. | An update, since I found this question useful but can't add comments -
The code from Zuur *et al.* (2009) is also available via the supplementary material to a subsequent (and very useful) publication of theirs in the journal *Methods in Ecology and Evolution*.
The paper - **A protocol for data exploration to avoid common statistical problems** - provides useful advice and a much needed reference for justifying VIF thresholds (they recommend a threshold of 3). The paper is here: <http://onlinelibrary.wiley.com/doi/10.1111/j.2041-210X.2009.00001.x/full> and the R code is in the supplementary materials tab (.zip download).
**A quick guide**: to extract variance inflation factors (VIF) run their *HighStatLib.r* code and use the function `corvif`. The function requires a data frame with just the predictors (so, for example, `df = data.frame(Dataset[,2:4])` if your data are stored in *Dataset* with the predictors in columns 2 to 4. |
An econometrician told me that I shouldn't keep adding new variables to the model even if I have reason to believe they're relevant to the response variable, as it "reduces the efficiency of the other parameters". That is, even if you have 40 variables that "should" be related to the response, you should draw the line somewhere and include a subset of them.
**Question**: What are some good *references* that deal with this? | Probably the best regarded book on the model building process per se, is Frank Harrell's [*Regression Modeling Strategies*](http://rads.stackoverflow.com/amzn/click/0387952322), but the issues involved can be stated simply: For every additional covariate that is included in a model, you will lose 1 degree of freedom. If a factor with $k$ levels is included, you will lose $k-1$ degrees of freedom. This will decrease your [statistical power](http://en.wikipedia.org/wiki/Statistical_power) (the ability to differentiate the slope of the relationship between that covariate and the response from 0). Another way of putting that fact is that your confidence intervals around your beta estimate will be wider / sample parameters will vary more widely from their true values. If the covariates are orthogonal to each other and you have enough data, the impact is likely to be very small. Real-world (observational rather than experimental) data is never orthogonal, though, so there will be [multicollinearity](http://en.wikipedia.org/wiki/Multicollinearity), and multicollinearity can cause your beta estimates to fluctuate quite widely. (There are many threads on CV that explore multicollinearity, so if you aren't terribly familiar with it, you can read some of them by clicking on [multicollinearity](/questions/tagged/multicollinearity "show questions tagged 'multicollinearity'").) |
The question is very simple: why, when we try to fit a model to our data, linear or non-linear, do we usually try to minimize the sum of the squares of errors to obtain our estimator for the model parameter? Why not choose some other objective function to minimize? I understand that, for technical reasons, the quadratic function is nicer than some other functions, e.g., sum of absolute deviation. But this is still not a very convincing answer. Other than this technical reason, why in particular are people in favor of this 'Euclidean type' of distance function? Is there a specific meaning or interpretation for that?
The logic behind my thinking is the following:
When you have a dataset, you first set up your model by making a set of functional or distributional assumptions (say, some moment condition but not the entire distribution). In your model, there are some parameters (assume it is a parametric model); then you need to find a way to consistently estimate these parameters and, hopefully, your estimator will have low variance and some other nice properties. Whether you minimize the SSE or LAD or some other objective function, I think these are just different methods to get a consistent estimator. Following this logic, I thought the reason people use least squares must be that 1) it produces a consistent estimator of the model, or 2) something else that I don't know.
In econometrics, we know that in linear regression model, if you assume the error terms have 0 mean conditioning on the predictors and homoscedasticity and errors are uncorrelated with each other, then minimizing the sum of square error will give you a CONSISTENT estimator of your model parameters and by the Gauss-Markov theorem, this estimator is BLUE. So this would suggest that if you choose to minimize some other objective function that is not the SSE, then there is no guarantee that you will get a consistent estimator of your model parameter. Is my understanding correct? If it is correct, then minimizing SSE rather than some other objective function can be justified by consistency, which is acceptable, in fact, better than saying the quadratic function is nicer.
In practice, I actually saw many cases where people directly minimize the sum of square errors without first clearly specifying the complete model, e.g., the distributional assumptions (moment assumptions) on the error term. Then it seems to me that the user of this method just wants to see how closely the data fit the 'model' (I use quotation marks since the model assumptions are probably incomplete) in terms of the square distance function.
A related question (also related to this website) is: why, when we try to compare different models using cross-validation, do we again use the SSE as the judgment criterion? i.e., choose the model that has the least SSE? Why not another criterion? | You asked a statistics question, and I hope that my control system engineer answer is a stab at it from enough of a different direction to be enlightening.
Here is a "canonical" information-flow form for control system engineering:
![enter image description here](https://i.stack.imgur.com/daQ2B.png)
The "r" is for reference value. It is summed with an "F" transform of the output "y" to produce an error "e". This error is the input for a controller, transformed by the control transfer function "C" into a control input for the plant "P". It is meant to be general enough to apply to arbitrary plants. The "plant" could be a car engine for cruise control, or the angle of input of an inverse-pendulum.
Let's say you have a plant with a known transfer function with phenomenology suitable to the following discussion, a current state, and a desired end state. ([table 2.1 pp68](http://www.crcpress.com/product/isbn/9780849308925)) There are an infinite number of unique paths that the system, with different inputs, could traverse to get from the initial to final state. The textbook controls engineer "optimal approaches" include time optimal ([shortest time/bang-bang](http://yima.csl.illinois.edu/psfile/ECE553/Lectures17-19.pdf)), distance optimal (shortest path), force optimal (lowest maximum input magnitude), and [energy optimal](http://www.scholarpedia.org/article/Optimal_control#Minimum_energy_control_of_a_double_integrator_with_terminal_constraint) (minimum total energy input).
Just like there are an infinite number of paths, there are an infinite number of "optimals" - each of which selects one of those paths. If you pick one path and say it is best then you are implicitly picking a "measure of goodness" or "measure of optimality".
In my personal opinion, I think folks like L-2 norm (aka energy optimal, aka least squared error) because it is simple, easy to explain, easy to execute, has the property of doing more work against bigger errors than smaller ones, and leaves with zero bias. Consider h-infinity norms where the variance is minimized and bias is constrained but not zero. They can be quite useful, but they are more complex to describe, and more complex to code.
I think the L2-norm, aka the energy-minimizing optimal path, aka least squared error fit, is easy and in a lazy sense fits the heuristic that "bigger errors are more bad, and smaller errors are less bad". There are literally an infinite number of algorithmic ways to formulate this, but squared error is one of the most convenient. It requires only algebra, so more people can understand it. It works in the (popular) polynomial space. Energy-optimal is consistent with much of the physics that comprise our perceived world, so it "feels familiar". It is decently fast to compute and not too horrible on memory.
If I get more time I would like to put pictures, codes, or bibliographic references. |
$f(n) = \log n^2$; $g(n) = \log n + 5 \Rightarrow f(n) = \Theta(g(n))$
I think we can prove this for omega, but how can we prove it for big O?
because if we simplify it to $\log n + \log n \le \log n + 5 \Rightarrow \log n \le 5$, which is not true! | You might be looking for a [dictionary](https://en.wikipedia.org/wiki/Dictionary_(data%20structure)). Common implementations are [hash tables](https://en.wikipedia.org/wiki/Hash_table) and [binary search trees](https://en.wikipedia.org/wiki/Binary_search_tree).
So momentum based gradient descent works as follows:
$v=\beta m-\eta g$
where $m$ is the previous weight update, and $g$ is the current gradient with respect to the parameters $p$, $\eta$ is the learning rate, and $\beta$ is a constant.
$p\_{new} = p + v = p + \beta m - \eta g$
and Nesterov's accelerated gradient descent works as follows:
$p\_{new} = p + \beta v - \eta g$
which is equivalent to:
$p\_{new} = p + \beta (\beta m - \eta g ) - \eta g$
or
$p\_{new} = p + \beta^2 m - (1 + \beta) \eta g$
source: <https://github.com/fchollet/keras/blob/master/keras/optimizers.py>
So to me it seems Nesterov's accelerated gradient descent just gives more weight to the $\eta g$ term over the previous weight change term $m$ (compared to plain old momentum). Is this interpretation correct? | I don't think so.
There's a good description of Nesterov Momentum (aka Nesterov Accelerated Gradient) properties in, for example, [Sutskever, Martens et al."On the importance of initialization and momentum in deep learning" 2013](http://proceedings.mlr.press/v28/sutskever13.pdf).
The main difference is that in classical momentum you first correct your velocity and then make a big step according to that velocity (and then repeat), but in Nesterov momentum you first make a step in the velocity direction and then correct the velocity vector based on the new location (then repeat).
i.e. Classical momentum:
```
vW(t+1) = momentum.*Vw(t) - scaling .* gradient_F( W(t) )
W(t+1) = W(t) + vW(t+1)
```
While Nesterov momentum is this:
```
vW(t+1) = momentum.*Vw(t) - scaling .* gradient_F( W(t) + momentum.*vW(t) )
W(t+1) = W(t) + vW(t+1)
```
Actually, this makes a huge difference in practice... |
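For concreteness, here is a minimal NumPy transcription of those two update rules (a sketch with function names of my own; `grad_f` stands for any function that returns the gradient at a point):
```py
import numpy as np

def classical_momentum_step(w, v, grad_f, lr=0.01, momentum=0.9):
    # correct the velocity using the gradient at the current point, then step
    v_new = momentum * v - lr * grad_f(w)
    return w + v_new, v_new

def nesterov_momentum_step(w, v, grad_f, lr=0.01, momentum=0.9):
    # first look ahead in the velocity direction, then correct the velocity
    # using the gradient evaluated at that look-ahead point
    v_new = momentum * v - lr * grad_f(w + momentum * v)
    return w + v_new, v_new

# toy quadratic f(w) = 0.5 * ||w||^2, whose gradient is simply w
grad_f = lambda w: w
w, v = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(100):
    w, v = nesterov_momentum_step(w, v, grad_f)
```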
In distributed computing, the consensus problem seems to be one of the central topics which has attracted intensive research. In particular, the paper "Impossibility of Distributed Consensus with One Faulty Process" received the [2001 PODC Influential Paper Award](http://www.podc.org/influential/2001-influential-paper/).
So why is the consensus problem so important? What can we achieve with consensus both in theory and in practice?
Any references or expositions would be really helpful. | The paper you mention is important for 2 reasons:
1. It shows that there is no **asynchronous** deterministic consensus algorithm that tolerates even a single crash fault. Note that in the **synchronous** setting, there is a deterministic algorithm that terminates in $f+1$ rounds when $\le f$ processes crash.
2. It introduces *bivalence* and *univalence* of configurations (\*), which are used in many lower bounds and impossibility proofs later on.
**Applications**
One important application of the consensus problem is the election of a coordinator or leader in a fault-tolerant environment for initiating some global action. A consensus algorithm allows you to do this on-the-fly, without fixing a "supernode" in advance (which would introduce a single point of failure).
Another application is maintaining consistency in a distributed network: Suppose that you have different sensor nodes monitoring the same environment. In the case where some of these sensor nodes crash (or even start sending corrupted data due to a hardware fault), a consensus protocol ensures robustness against such faults.
---
(\*) A run of a distributed algorithm is a sequence of configurations. A configuration is a vector of the local states of the processes. Each process executes a deterministic state machine. Any correct consensus algorithm must eventually reach a configuration where every process has decided (irrevocably) on the same input value. A configuration $C$ is $1$-*valent* if, no matter what the adversary does, all possible extensions of $C$ lead to a decision value of $1$. Analogously, we can define $0$-*valency*. A configuration $C$ is *bivalent* if both decisions are reachable from $C$ (which one of the two is reached depends on the adversary). Clearly, no process can have decided in a bivalent configuration $C$, as otherwise we get a contradiction to agreement! So if we can construct an infinite sequence of such bivalent configurations, we have shown that there is no consensus algorithm in this setting. |
What is the funniest TCS-related published work you know?
Please include only those that are intended to be funny. Works which are explicitly crafted to be intelligently humorous (rather than, say, a published collection of short jokes regarding complexity theory) are preferred. Works with humorous (actually humorous, not just cute) titles are also accepted.
Please only one work per answer so the "best" ones can bubble to the top. | "Busy beavers gone wild" by Grégory Lafitte, EPTCS 1, 2009, pp. 123-129
[arXiv:0906.3257v1](http://arxiv.org/abs/0906.3257v1) |
In teaching Intro. Algorithms to undergrads, one of the most difficult tasks is to motivate why they need to know how to *prove* things about algorithms. (For many students, at least in many US universities, they may see some basic proofs in Discrete Mathematics, but Algorithms is often the first time they have to use proofs to reason about algorithms.) While I think I can do a decent job motivating this at an abstract level, concrete examples are much more convincing, hence my question:
>
> I'm looking for concrete examples that show the value of proofs in algorithms.
>
>
>
A couple questions already address this, and if you have answers to those, please add them there:
* [Examples of algorithms and proofs that seem correct, but aren't](https://cstheory.stackexchange.com/q/40478/129)
* [How to fool the “try some test cases” heuristic: Algorithms that appear correct, but are actually incorrect](https://cs.stackexchange.com/q/29475/20219)
However, I think there are other ways proofs can be valuable beyond the above. E.g., by working through the proof you can gain an insight that leads to a more efficient algorithm, or to an algorithm for a related problem, or to an idea for developing a better test suite to test your implementation, etc.
Some criteria:
* Answers here ideally shouldn't fit one of the questions above (as I said, please add an answer there if you have it)
* Should be presentable to Intro. Algorithms students. While the proof itself need not be presentable, the *story* should be presentable (e.g. the statement of the problems involved, some idea of how the proof led to something that just "eyeballing correctness" would not have). If you are unsure about this point, go ahead and add it as an answer and I'll do my best to see if it can be presented to this audience.
* Should be as natural as possible. Artificial examples are kinda okay, but real examples are much more motivating to students.
* Note that I am talking here about abstract proofs for abstract algorithms, *not* formal (/automated/etc) proofs of algorithm implementations. As a litmus test, the ideas involved should not be language-specific.
(While this question is not exactly research-level, I think it might get more answers here than on cs.SE, as I suspect the answers will come from historical examples in the theory of algorithms that expert researchers on this site are more likely to be familiar with.) | Here is a natural problem from graph theory where the proof and the algorithm are closely intertwined. In my view, one can discover this algorithm only via thinking about the proof and the algorithm "in parallel." The task is this:
**Input:** An undirected graph.
**Task:** Find a subgraph with maximum edge-connectivity.
***Note:*** What makes the task non-trivial is that a subgraph can have larger edge-connectivity than the whole graph. Since there are exponentially many subgraphs, it would take exponential time to check all of them to find the most connected one, even though we can compute the edge-connectivity of any graph in polynomial time.
Below is a part of a lecture note about solving this problem. It shows that the task can be solved in polynomial time, using a subroutine that finds a minimum cut (which is well know to be doable in polynomial time). For short, edge-connectivity is just called connectivity, and is denoted by $\lambda(G)$.
[![Finding a subgraph with maximum edge connectivity](https://i.stack.imgur.com/azaAv.png)](https://i.stack.imgur.com/azaAv.png) |
Example of function f(x) such that it is true that `f(x) = Ω(g(x))` but that it is not true that `f(x) = ω(g(x))` | Pick $f(x) = g(x) = x$. Then $f(x) \in \Omega(g(x))$ and $f(x) \not \in \omega(g(x))$. |
I am trying to pre-process a small dataset. I don't understand why I am not supposed to do the thing I explained below:
For example, say we have an attribute that describes the temperature of the weather with a set of 3 nominal values: hot, mild and cold. I understand these definitions may have been derived from numerical values during summarisation.
But why would we summarise such values that are on a scale, and lose the scale in the process?
Would it not help to have the algorithm (any classification algorithm) realise that the difference between hot and cold is twice the difference between hot and mild, by representing hot, mild and cold as the integers 1, 2 and 3 respectively? | Likelihood and probability are two very different concepts:
One talks about probabilities when the distribution is **already** known and one wants to know how probable an event is.
Likelihood, on the other hand, is usually much more experimental. It is used when, given some results, one wants to know how likely it is that those results fit a specific distribution.
In other words, probability has to do with uncertainty on events while likelihood has to do with uncertainty on distributions.
Here is a great video from stat quest explaining the difference between the two concepts: <https://youtu.be/pYxNSUDSFH4> |
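As a toy illustration of the distinction (a coin-flip sketch of my own, using SciPy):
```py
from scipy.stats import binom

# Probability: the distribution is fixed (10 flips of a fair coin);
# how probable is the event "exactly 7 heads"?
print(binom.pmf(7, n=10, p=0.5))          # ~0.117

# Likelihood: the data are fixed (7 heads out of 10 flips);
# how likely do different candidate values of p make those data?
for p in (0.5, 0.7, 0.9):
    print(p, binom.pmf(7, n=10, p=p))     # largest near p = 0.7
```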
I wish to draw integers from 1 to some specific $N$ by rolling some number of fair six-sided dice (d6). A good answer will explain why its method produces *uniform* and *independent* integers.
As an illustrative example, it would be helpful to explain how a solution works for the case of $N=150$.
Furthermore, I wish for the procedure to be as efficient as possible: roll the least number of d6 on average for each number generated.
Conversions from [senary](https://en.wikipedia.org/wiki/Senary) to decimal are permissible.
---
[This question was inspired by this Meta thread](https://stats.meta.stackexchange.com/questions/5652/would-this-dice-question-be-on-topic-if-it-were-posted-or-migrated-here). | The set $\Omega(d,n)$ of distinct identifiable outcomes in $n$ independent rolls of a die with $d=6$ faces has $d^n$ elements. When the die is fair, that means each outcome of one roll has probability $1/d$ and independence means each of these outcomes will therefore have probability $(1/d)^n:$ that is, they have a uniform distribution $\mathbb{P}\_{d,n}.$
Suppose you have devised some procedure $t$ that either determines $m$ outcomes of a $c (=150)$-sided die--that is, an element of $\Omega(c,m)$--or else reports failure (which means you will have to repeat it in order to obtain an outcome). That is,
$$t:\Omega(d,n)\to\Omega(c,m)\cup\{\text{Failure}\}.$$
Let $F$ be the probability $t$ results in failure and note that $F$ is some integral multiple of $d^{-n},$ say
$$F = \Pr(t(\omega)=\text{Failure}) = N\_F\, d^{-n}.$$
(For future reference, note that the expected number of times $t$ must be invoked before not failing is $1/(1-F).$)
The requirement that these outcomes in $\Omega(c,m)$ be uniform and independent *conditional* on $t$ not reporting failure means that $t$ preserves probability in the sense that for every event $\mathcal{A}\subset\Omega(c,m),$
$$\frac{\mathbb{P}\_{d,n}\left(t^{\*}\mathcal{A}\right)}{1-F}= \mathbb{P}\_{c,m}\left(\mathcal{A}\right) \tag{1}$$
where
$$t^{\*}\left(\mathcal A\right) = \{\omega\in\Omega\mid t(\omega)\in\mathcal{A}\}$$
is the set of die rolls that the procedure $t$ assigns to the event $\mathcal A.$
Consider an atomic event $\mathcal A = \{\eta\}\subset\Omega(c,m)$, which must have probability $c^{-m}.$ Let $t^{\*}\left(\mathcal A\right)$ (the dice rolls associated with $\eta$) have $N\_\eta$ elements. $(1)$ becomes
$$\frac{N\_\eta d^{-n}}{1 - N\_F d^{-n}} = \frac{\mathbb{P}\_{d,n}\left(t^{\*}\mathcal{A}\right)}{1-F}= \mathbb{P}\_{c,m}\left(\mathcal{A}\right) = c^{-m}.\tag{2}$$
**It is immediate that the $N\_\eta$ are all equal to some integer $N.$** It remains only to find the most efficient procedures $t.$ The expected number of non-failures *per roll of the $c$ sided die* is
$$\frac{1}{m}\left(1 - F\right).$$
**There are two immediate and obvious implications.** One is that if we can keep $F$ small as $m$ grows large, then the effect of reporting a failure is asymptotically zero. The other is that for any given $m$ (the number of rolls of the $c$-sided die to simulate), we want to make $F$ as small as possible.
Let's take a closer look at $(2)$ by clearing the denominators:
$$N c^m = d^n - N\_F \gt 0.$$
This makes it obvious that in a given context (determined by $c,d,n,m$), $F$ is made as small as possible by making $d^n-N\_F$ equal the largest multiple of $c^m$ that is less than or equal to $d^n.$ We may write this in terms of the greatest integer function (or "floor") $\lfloor\*\rfloor$ as
$$N = \bigg\lfloor \frac{d^n}{c^m} \bigg\rfloor.$$
Finally, it is clear that $N$ ought to be as small as possible for highest efficiency, because it measures *redundancy* in $t$. Specifically, the expected number of rolls of the $d$-sided die needed to produce one roll of the $c$-sided die is
$$N \times \frac{n}{m} \times \frac{1}{1-F}.$$
**Thus, our search for high-efficiency procedures ought to focus on the cases where $d^n$ is equal to, or just barely greater than, some power $c^m.$**
The analysis ends by showing that for given $d$ and $c,$ there is a sequence of multiples $(n,m)$ for which this approach approximates perfect efficiency. This amounts to finding $(n,m)$ for which $d^n/c^m \ge 1$ approaches $N=1$ in the limit (automatically guaranteeing $F\to 0$). One such sequence is obtained by taking $n=1,2,3,\ldots$ and determining
$$m = \bigg\lfloor \frac{n\log d}{\log c} \bigg\rfloor.\tag{3}$$
The proof is straightforward.
This all means that when we are willing to roll the original $d$-sided die a sufficiently large number of times $n,$ we can expect to simulate nearly $\log d / \log c = \log\_c d$ outcomes of a $c$-sided die per roll. Equivalently,
>
> It is possible to simulate a large number $m$ of independent rolls of a $c$-sided die using a fair $d$-sided die using an average of $\log(c)/\log(d) + \epsilon = \log\_d(c) + \epsilon$ rolls per outcome where $\epsilon$ can be made arbitrarily small by choosing $m$ sufficiently large.
>
>
>
---
### Examples and algorithms
In the question, $d=6$ and $c=150,$ whence
$$\log\_d(c) = \frac{\log(c)}{\log(d)} \approx 2.796489.$$
Thus, **the best possible procedure will require, on average, at least $2.796489$ rolls of a `d6` to simulate each `d150` outcome.**
The analysis shows how to do this. We don't need to resort to number theory to carry it out: we can just tabulate the powers $d^n=6^n$ and the powers $c^m=150^m$ and compare them to find where $c^m \le d^n$ are close. This brute force calculation gives $(n,m)$ pairs
$$(n,m) \in \{(3,1), (14,5), \ldots\}$$
for instance, corresponding to the numbers
$$(6^n, 150^m) \in \{(216,150), (78364164096,75937500000), \ldots\}.$$
In the first case $t$ would associate $216-150=66$ of the outcomes of three rolls of a `d6` to Failure and the other $150$ outcomes would each be associated with a single outcome of a `d150`.
In the second case $t$ would associate $78364164096-75937500000$ of the outcomes of 14 rolls of a `d6` to Failure -- about 3.1% of them all -- and otherwise would output a sequence of 5 outcomes of a `d150`.
**A simple algorithm to implement $t$** labels the faces of the $d$-sided die with the numerals $0,1,\ldots, d-1$ and the faces of the $c$-sided die with the numerals $0,1,\ldots, c-1.$ The $n$ rolls of the first die are interpreted as an $n$-digit number in base $d.$ This is converted to a number in base $c.$ If it has at most $m$ digits, the sequence of the last $m$ digits is the output. Otherwise, $t$ returns Failure by invoking itself recursively.
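For concreteness, here is a minimal Python sketch of that algorithm for the $(n,m) = (3,1)$ case (function and variable names are my own; the $c$ outcomes are reported as $1, \ldots, c$):
```py
import random

def simulate_c_sided(c=150, m=1, n=3, d=6):
    """Roll n fair d-sided dice, read them as an n-digit base-d number, and
    return m base-c digits (i.e. m rolls of a c-sided die), or None on Failure."""
    value = 0
    for _ in range(n):                      # faces labelled 0, ..., d-1
        value = value * d + random.randrange(d)
    if value >= c ** m:                     # more than m base-c digits: Failure
        return None
    digits = []
    for _ in range(m):                      # extract m base-c digits
        digits.append(value % c)
        value //= c
    return [digit + 1 for digit in reversed(digits)]   # report faces as 1..c

outcome = None
while outcome is None:                      # on Failure, simply try again
    outcome = simulate_c_sided()
print(outcome)                              # e.g. [117]
```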
**For much longer sequences, you can find suitable pairs** $(n,m)$ by considering every other convergent $n/m$ of the continued fraction expansion of $x=\log(c)/\log(d).$ The theory of continued fractions shows that these convergents alternate between being less than $x$ and greater than it (assuming $x$ is not already rational). Choose those that are less than $x.$
In the question, the first few such convergents are
$$3, 14/5, 165/59, 797/285, 4301/1538, 89043/31841, 279235/99852, 29036139/10383070 \ldots.$$
In the last case, a sequence of 29,036,139 rolls of a `d6` will produce a sequence of 10,383,070 rolls of a `d150` with a failure rate less than $2\times 10^{-8},$ for an efficiency of $2.79649$--indistinguishable from the asymptotic limit. |
Note: much more basic than: [Bitcoin and preventing double spending in decentralized digital currencies](https://cstheory.stackexchange.com/questions/7145/bitcoin-and-preventing-double-spending-in-decentralized-digital-currencies)
I'm trying to understand the concern that bitcoin's chain of blocks technique tries to solve (see <http://www.bitcoin.org/bitcoin.pdf>). Warning: not a crypto-dude, be tolerant.
I understand that basically we're worried about "double spending". I'm able to grasp this idea when talking about discrete, indivisible coins. E.g. there are only 3 coins in the world, which cant be divided to cents, and we need to track the history of all 3, and to make sure that each payer really had the coin he claimed to have while paying with it.
While I'm able to see how this concept extends to millions of coins, I can't get how it extends to fractions of coins. What does a "double spending" attack scenario actually look like? How do the honest nodes scrutinize a nominated new block after some node proposes it as the new head node? Please provide me with some illustrative examples... | The correct answer was given to me here: <http://forum.bitcoin.org/index.php?topic=27979.msg352498#msg352498>.
The gist: the entire value of the output contributes to the subsequent transaction, so if you wish to transfer a smaller amount, you must add an additional output on the second transaction to send the remainder back to yourself.
Note that the accounting is done for transactions outputs and inputs. The block concept is not necessary for the verification process and is only used for technical convenience. |
**Problem:** **3SUM**
**Input:** Three lists A, B and C of integers and an integer k. Each list contains $n$ numbers.
**Task:** Decide whether there exists a tuple (a, b, c) ∈ A × B × C such that a + b + c = k.
**Question:** Is it possible to solve **3SUM** in $O(n^2)$ time using constant space? Prove or disprove it. | An $O(n^2)$ algorithm (with $O(1)$ space) is as follows:
* Sort $A$, $B$, and $C$ individually in $O(n \log n)$.
* For each $a \in A$:
+ Search a pair of $b \in B$ and $c \in C$ such that $b + c = k - a$. This can be done in $O(n)$ by traversing $B$ from the smallest to the largest and $C$ from the largest to the smallest. (Tip: Comparing $b + c$ with $k-a$ each time.) |
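A short Python sketch of this scheme (names are my own; `list.sort()` sorts in place, although Python's sort may use some auxiliary workspace internally):
```py
def three_sum(A, B, C, k):
    # Sort B and C in place in O(n log n), then for each a in A do an O(n)
    # two-pointer scan: B from smallest to largest, C from largest to smallest.
    B.sort()
    C.sort()
    for a in A:
        target = k - a
        i, j = 0, len(C) - 1
        while i < len(B) and j >= 0:
            s = B[i] + C[j]
            if s == target:
                return True
            if s < target:
                i += 1          # need a larger sum
            else:
                j -= 1          # need a smaller sum
    return False

print(three_sum([1, 4], [2, 7], [3, 5], 6))    # True  (1 + 2 + 3)
print(three_sum([1, 4], [2, 7], [3, 5], 100))  # False
```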
I wonder how a particular process gets CPU time or resources whenever it is
required to execute some instructions. When a process is idle or waiting for input, it's not occupying the CPU; but if it suddenly turns out that there is an instruction that requires CPU time, who tells that to the CPU? | Typically it is an operating system process that takes responsibility for "waking up" other processes. Most commonly, this is the so-called process scheduler, which is a key component of every operating system.
For example, if a user process requires some data to be read from disk before continuing its computation, it may be put into a waiting state by the scheduler until the data are available. The scheduler gives the CPU to other processes that are runnable (not waiting), then when the data are available the user process becomes runnable again. |
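As a toy sketch of that mechanism (not real OS code, just a simulation of a run queue and a wait set, with names of my own):
```py
from collections import deque

runnable = deque(["A", "B"])        # processes ready to use the CPU
waiting = {"C": "disk read"}        # processes blocked, waiting for an event

def io_completed(pid):
    # the OS "wakes up" a blocked process once the event it waits for occurs
    waiting.pop(pid, None)
    runnable.append(pid)

def schedule_one_slice():
    # the scheduler hands the CPU to the next runnable process (round robin)
    if runnable:
        pid = runnable.popleft()
        print("CPU ->", pid)
        runnable.append(pid)

schedule_one_slice()                # CPU -> A
schedule_one_slice()                # CPU -> B
io_completed("C")                   # the disk data arrived: C is runnable again
schedule_one_slice()                # CPU -> A (C now waits its turn in the queue)
```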
I have a hypothetical question: suppose there exists an algorithm that solves an NP-Complete problem in polynomial time, but requires the computation
of values that grow exponentially big (or small).
For example, suppose POLY-3SAT solves 3-SAT in N^17 time; however, to do so it must compute/evaluate a number C whose value grows as N^1000 (or 1/N^1000).
What would this imply? Does computing/evaluating an exponentially large value automatically place the algorithm in EXP-SPACE (or some other complexity class)? It seems like this would be a different complexity class than just ordinary P. | Complexity classes are classes of problems, not classes of algorithms. For example, the existence of Dijkstra's algorithm means that the problem of computing shortest paths in graphs is in **P**, but it doesn't make sense to say that "Dijkstra's algorithm is in **P**".
Exponentially large values can be stored in binary in a polynomial number of bits, and we can compute on such numbers using a polynomial number of operations. This notwithstanding, it seems that you're trying to get a polynomial-time algorithm that somehow does exponentially much work, which is an oxymoron.
I am a university level student of Computer Science having a great passion to study Mathematics. I have a firm belief that Computer Science or Theoretical Computer Science is a direct branch of Mathematics and Logic and also of the opinion that a Computer Science degree has always to be Math oriented as a matter of fact. Please correct me if I am wrong.
Frankly, I feel that there isn't a lot of difference between the two subjects, as every *"computation"* involves *"calculation"*, although every *"calculation"* might not be a *"computation"*. Again, please provide substantial information and evidence, and do update me if I am mistaken here. Thank you | In one phrase, I'd say the distinguishing concern is concern with **computational complexity**.
In math, you're just concerned with possibility and correctness; in TCS, you're worried not just about that, but also the computational *difficulty* of the problem, in terms of time complexity, approximability, space complexity, I/O complexity, and the like.
You may be able to find an odd exception somewhere just like with any rule, but it seems to me that this is pretty accurate overall. |
After doing some research on the topic, I have noticed a surprising deficit of inference packages and libraries that rely on message-passing or optimization methods for Python and R.
To the best of my knowledge, these methods are extremely useful. For example, for a Bayes Network (directed, acyclic) belief-propagation alone should be able to give exact answers. However, most inference software that is available online (e.g. STAN, BUGS, PyMC) rely on MCMC methods.
In the Python case, to the best of my knowledge, neither PyMC, scikit-learn or statsmodels include variational inference algorithms such as belief propagation, message-passing methods or any of their variants.
Why is that? Are these methods less used in practice because they are seen as not as powerful or generic as their MCMC counterparts? Or is it simply a matter of a lack of manpower and time? | Have you looked at [Edward](http://edwardlib.org/)? The [Inference API](http://edwardlib.org/api/inferences) supports among other things Variational inference:
* Black box variational inference
* Stochastic variational inference
* Variational auto-encoders
* Inclusive KL divergence: `KL(p∥q)` |
This post is inspired by the one in MO: [Examples of common false beliefs in mathematics](https://mathoverflow.net/questions/23478/examples-of-common-false-beliefs-in-mathematics).
Since the site is designed for answering research-level questions, examples like *$\mathsf{NP}$ stands for non-polynomial time* should not be on the list. Meanwhile, we do want some examples that may not be hard but, without thinking through the details, look reasonable as well. We want the examples to be educational, of the kind that usually appears when studying the subject for the first time.
>
> What are some (non-trivial) examples of common false beliefs in theoretical computer science, that appear to people who are studying in this area?
>
>
>
To be precise, we want examples different from [surprising results](https://cstheory.stackexchange.com/questions/276/surprising-results-in-complexity-not-on-the-complexity-blog-list) and [counterintuitive results](https://cstheory.stackexchange.com/questions/2802/are-there-any-counterintuitive-results-in-theoretical-computer-science) in TCS; these kinds of results are surprising to many people, but they are TRUE. Here we are asking for surprising examples that people may think are true at first glance, but after deeper thought the fault within is exposed.
---
As an example of proper answers on the list, this one comes from the field of algorithms and graph-theory:
For an $n$-node graph $G$, a $k$-edge separator $S$ is a subset of edges of size $k$, where the nodes of $G \setminus S$ can be partitioned into two non-adjacent parts, each consisting of at most $3n/4$ nodes. We have the following "lemma":
>
> A tree has a 1-edge separator.
>
>
>
Right? | This is one is common to computational geometry, but endemic elsewhere: **Algorithms for the real RAM can be transferred to the integer RAM (for integer restrictions of the problem) with no loss of efficiency.** A canonical example is the claim “Gaussian elimination runs in $O(n^3)$ time.” In fact, careless elimination orders can produce integers with [exponentially many bits](http://portal.acm.org/citation.cfm?id=258740).
Even worse, but still unfortunately common: **Algorithms for the real RAM *with floor function* can be transferred to the integer RAM with no loss of efficiency.** In fact, a real-RAM+floor can solve [any problem in PSPACE or in #P in a polynomial number of steps](http://www.computational-geometry.org/mailing-lists/compgeom-announce/2003-December/000852.html). |
In R's `survival::coxph` function, can I mix a covariate representing proportions (in the range 0.0-0.5) with an integer covariate (in the range 1-15), or should I transform the first one also to integers (0-50)? | If perOOgivenP has too many zeroes, it may cause the coefficient estimates to come out as NA. That can be a big problem. Even if you do get fitted values, they may not be accurate. You need to show more information about this variable.
It is hard to say whether this residual plot is biased or not. One obvious thing you can do is run lm(residuals ~ c(1:n)), where n is the length of residuals.
The coefficients of the fitted line should be close to 0. Also, I suggest using abline(h=0) for the residual plot.
What is the relationship between the number of clauses and the difficulty of a 3-SAT problem? | So, this all depends on what you mean by relationship.
In terms of traditional time complexity, all known algorithms solving 3SAT are, at best, exponential. To be precise, this means that if $f(n)$ is the function mapping an input size $n$ to the *maximum* time the algorithm takes on any input of size $n$, then $f$ is in $\Theta(b^n)$ for some $b > 1$.
However, in practice, adding clauses can speed things up, particularly if they are the *right* clauses. This is how, for example, [Conflict Driven Clause Learning](https://en.wikipedia.org/wiki/Conflict-Driven_Clause_Learning) works. When backtracking search fails, CDCL "learns" a new clause which must be true (or else the search wouldn't have failed). It can then restart its search. The additional clauses add constraints that, when [unit propagation](https://en.wikipedia.org/wiki/Unit_propagation) is performed (i.e. when $p$ holds, we remove all $\lnot p$ occurrences from clauses), can prune out large portions of the search space. |
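As a toy illustration of that unit propagation step (clauses encoded as lists of signed integers, an encoding I'm assuming for the sketch; this is not a full CDCL solver):
```py
def unit_propagate(clauses, literal):
    # Assume `literal` (e.g. 2 for p2, -2 for not-p2) is true: drop every
    # clause containing it, and delete its negation from the remaining clauses.
    simplified = []
    for clause in clauses:
        if literal in clause:
            continue                                  # clause already satisfied
        simplified.append([l for l in clause if l != -literal])
    return simplified

# (p1 or not-p2) and (p2 or p3) and (not-p1 or p2 or not-p3), given that p2 holds
print(unit_propagate([[1, -2], [2, 3], [-1, 2, -3]], 2))   # [[1]]
```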
>
> Minimum number of bits required to represent $(+32)\_{base10}$ and $(-32)\_{base10}$ in signed two's complement form?
>
>
>
My attempt:
32 = 0100000 ( 1st zero - sign bit as positive)
So to represent +32 we need 7 bits
-32 = 1100000 (1st bit 1 - sign bit as negative)
So to represent -32 we need 7 bits
>
> But the answer is given as 6 bits to store -32 and 7 bits to store +32 (the positive case I understood; for the negative case, in my opinion it should be 7 bits). His reason: a single 1 bit is enough to represent a negative number. I am confused. Please clarify here
>
>
>
Also i have following Questions:-
>
> Can we say number of bits required to represent a negative number is strictly less than( or less than equal to) number of bits required to represent that corresponding positive number?
>
>
> how can we generalise the minimum number of bits required to represent a given positive and negative number, say +N and -N, in signed magnitude representation, signed 1's complement notation and signed two's complement notation?
>
>
> | The representations of the numbers as 6-bit two's complement binaries are
\begin{align}
+32\_{10} &= 10\,0000\_2\\
-32\_{10} &= \overline{10\,0000}\_2 + 1\_2\\
&= 01\,1111\_2 + 1\_2\\
&= 10\,0000\_2
\end{align}
which would give the same representation for both numbers. That means you cannot use 6-bits numbers to represent both values using two's complement.
For 7-bit binaries it works, however:
\begin{align}
+32\_{10} &= 010\,0000\_2\\
-32\_{10} &= \overline{010\,0000}\_2 + 1\_2\\
&= 101\,1111\_2 + 1\_2\\
&= 110\,0000\_2
\end{align}
As you can see, the most significant bit represents the sign of the number.
---
Another perspective on your question: What is the biggest positive/smallest negative number you can store in an $n$-bit two's complement binary? Then the answer is $2^{(n-1)}-1$ for the biggest positive and $-2^{(n-1)}$ for the smallest negative number.
That means that a 6-bit binary can hold values $-32, \ldots, 31$ and a 7-bit binary values $-64, \ldots, 63$. The number $-32$ then fits into the range of 6-bit binaries, while $+32$ doesn't. You'd need one more bit for the latter. |
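That rule is easy to check with a small helper (a sketch of my own):
```py
def fits_twos_complement(x, n_bits):
    # an n-bit two's complement word holds the range [-2^(n-1), 2^(n-1) - 1]
    return -(2 ** (n_bits - 1)) <= x <= 2 ** (n_bits - 1) - 1

print(fits_twos_complement(-32, 6))   # True:  6 bits suffice for -32
print(fits_twos_complement(+32, 6))   # False: +32 needs 7 bits
print(fits_twos_complement(+32, 7))   # True
```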
Suppose you design a computer with a `ten-stage` Pipeline to execute one instruction, with each stage taking `5nsec`
A)how long will it take to execute a program that has `30 sequential` instructions.
**Answer: 195nsec**
B) how long will it take to execute a program that has `30 sequential instructions` in a `non-pipeline computer`
**Answer: 1500nsec**
Can someone please help me understand how these answers were achieved? | A JIT (Just-In-Time) compiler compiles code at run-time, i.e. as the program is running. Therefore the cost of compilation is part of the execution time of the program, and so should be minimized.
The opposite of this is an ahead-of-time (AOT) compiler which is basically synonymous with "batch compiler". This converts the source code to machine code and then just the machine code is distributed. Therefore, the compiler can be very slow as it doesn't impact the execution time of the resulting program.
Nowadays, when people say "compiler" they typically mean an AOT compiler. Indeed, the term "AOT compiler" only really started becoming popular relatively recently when people started making AOT compilers for JIT compiled languages, particularly JavaScript. Many of these languages, e.g. C#, compile to an intermediate language for a VM which is then JIT compiled to machine code at run-time. The term "AOT compiler" has the connotation that the source code will be compiled directly to machine code, so no form of JIT compilation is required at run-time.
"Batch compiler" is a bit of an archaic term at this point. The real contrast to a batch compiler when the term was popular was an [incremental compiler](https://en.wikipedia.org/wiki/Incremental_compiler). Incremental compilation is often associated with languages like Lisp where you had a REPL and you could interactively request the language implementation to compile a specific function. If a function was executed whose compilation had not been requested before, it would typically be interpreted. A batch compiler, by contrast, compiled all functions at once, i.e. in a batch. |
I am trying to find an appropriate plot to visualize the observations in this table of means and standard deviations of recall scores:
\begin{array} {c|c c|c c|}
& \text{Control} & & \text{Experimental} & \\
& \text{Mean} & \text{SD} &\text{Mean} &\text{SD} \\
\hline
\text{Recall} & 37 & 8 & 21 & 6 \\
\hline
\end{array}
What is the best way to do that? Is a bar chart a good way to do it? How can I illustrate the standard deviation in that case? | Standard deviation on bar graphs can be illustrated by including [error bars](https://en.wikipedia.org/wiki/Error_bar) in them.
The visualization ([source](https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/submissions/27387/versions/4/screenshot.png)) below is an example of such a visualization:
[![enter image description here](https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/submissions/27387/versions/4/screenshot.png)](https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/submissions/27387/versions/4/screenshot.png)
---
From a discussion in the comments below, having only the error whiskers instead of the *error bars* setup seems a better way to visualize such data. So, the graph can look somewhat like this:
[![enter image description here](https://i.stack.imgur.com/mgkVy.jpg)](https://i.stack.imgur.com/mgkVy.jpg) |
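For reference, a minimal matplotlib sketch that reproduces the table above with SD error bars (the styling choices are mine):
```py
import matplotlib.pyplot as plt

groups = ["Control", "Experimental"]
means = [37, 21]   # recall means from the table
sds = [8, 6]       # standard deviations from the table

fig, ax = plt.subplots()
ax.bar(groups, means, yerr=sds, capsize=8, color="lightgrey", edgecolor="black")
ax.set_ylabel("Recall score")
ax.set_title("Mean recall per group (error bars show one SD)")
plt.show()
```
For the whisker-only variant discussed above, `ax.errorbar(groups, means, yerr=sds, fmt='o', capsize=8)` plots the means as points with SD whiskers instead of bars.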
In the paper [The Random Oracle Hypothesis Is False](http://dx.doi.org/10.1016/S0022-0000%2805%2980084-4), the authors (Chang, Chor, Goldreich, Hartmanis, Håstad, Ranjan, and Rohatgi) discuss the implications of the **random-oracle hypothesis**. They argue that we know very little about separations between complexity classes, and most results involve either using *reasonable* assumptions, or the random-oracle hypothesis. The most important and widely believed assumption is that PH does not collapse. In their words:
>
> In one approach, we assume as a working hypothesis that PH has infinitely many levels. Thus, any assumption which would imply that PH is finite is deemed incorrect. For example, [Karp and Lipton](http://dx.doi.org/10.1145/800141.804678) showed that if NP ⊆ P/poly, then PH collapses to $\Sigma^P\_2$. So, we believe that SAT does not have polynomial sized circuits. Similarly, we believe that the Turing-complete and many-one complete sets for NP are not sparse, because [Mahaney](http://dx.doi.org/10.1016/0022-0000%2882%2990002-2) showed that these conditions would collapse PH. One can even [show that](http://dx.doi.org/10.1137/0204037) for any k ≥ 0, $P^{\mathrm{SAT}[k]} = P^{\mathrm{SAT}[k+1]}$ implies that PH is finite. Hence, we believe that $P^{\mathrm{SAT}[k]} \ne P^{\mathrm{SAT}[k+1]}$ for all k ≥ 0. Thus, if the polynomial hierarchy is indeed infinite, we can describe many aspects of the computational complexity of NP.
>
>
>
Apart from the assumption about PH not collapsing, there have been many other complexity assumptions. For instance:
1. [Yao](http://dx.doi.org/10.1109/SFCS.1982.95) deems the following assumption plausible:
$RP \subseteq \bigcap\limits\_{\epsilon > 0} DTIME(2^{n^\epsilon})$.
2. [Nisan and Wigderson](http://dx.doi.org/10.1016/S0022-0000%2805%2980043-1) make several assumptions related to derandomization.
---
The main idea of this question is what its title says: To be an anthology of complexity-theoretic assumptions. It would be great if the following conventions were adhered to (whenever possible):
1. The assumption itself;
2. The first paper in which the assumption is made;
3. Interesting results in which the assumption is used;
4. If the assumption has ever been refuted / proved, or whether its plausibility has ever been discussed.
`This post is meant to be a community wiki; if an assumption is already cited, please edit the post and add new information rather than making a new post.`
---
**Edit (10/31/2011):** Some cryptographic assumptions and information about them are listed in the following websites:
1. Wiki of [Cryptographic Primitives and Hard Problems in Cryptography](http://www.ecrypt.eu.org/wiki/index.php/Main_Page).
2. Helger Lipmaa's [Cryptographic assumptions and hard problems](http://www.cs.ut.ee/~lipmaa/crypto/link/public/assumptions.php). | * **Assumption:** [Exponential time hypothesis](http://en.wikipedia.org/wiki/Exponential_time_hypothesis).
* **First cited in:** While being folklore, it was first formalized in the following paper: Russell Impagliazzo and Ramamohan Paturi. 1999. [The Complexity of k-SAT](http://dx.doi.org/10.1109%2FCCC.1999.766282). In *Proceedings of the Fourteenth Annual IEEE Conference on Computational Complexity* (*COCO '99*). IEEE Computer Society, Washington, DC, USA, 237-240.
* **Use(s):** It assumes that no NP-complete problem can be decided in sub-exponential time, and therefore implies that P ≠ NP.
* **Status:** Open. |
I want to model plant traits as a function of environmental variables. For example, tree height as a function of fire frequency. I'm doing this to test the effects of fires on plant traits (and not to predict traits at different scenarios).
Between 15-20 individuals were measured per site, in 8 sites - so I have 15-20 values of height in each site, and one fire frequency value per site.
I started by doing Spearman correlations as a preliminary approach, using the mean height per site, but I would like to use an approach where I can use all the height values, so that all information is used.
I have seen [this previous question](https://stats.stackexchange.com/questions/163134/correlation-between-repeated-not-time-measures-and-not-repeated-measures) where it is advised to try hierarchical models, using the response variable as a group-level predictor.
In [this other question](https://stats.stackexchange.com/questions/479784/is-it-possible-to-test-for-strongest-predictor-if-all-variables-are-identical-w) it is mentioned the use of environmental variables as fixed effects, and site as a random effect - I suppose here site is the grouping variable.
So are both ways correct? Or should I always introduce "site" in my model to group observations? | As pointed out by Robert, *Site* is a grouping factor in your study. To formulate the appropriate model, you will however need to determine whether you can treat *Site* as a *fixed* or *random* grouping factor in your modelling.
**Site as a fixed grouping factor**
If you were to repeat your study again, would you select the exact same 8 sites as before because these sites are the only ones you are interested in? If yes, you should treat *Site* as a *fixed* grouping factor. That means that you could formulate your models as *linear regression models* using the lm() function of R:
```
# effect of fire_frequency on tree_height is assumed to be
# the same across all 8 sites
m1 <- lm(tree_height ~ fire_frequency + Site, data = yourdata)
# effect of fire_frequency on tree_height is assumed to be
# different across sites
m2 <- lm(tree_height ~ fire_frequency*Site, data = yourdata)
```
**Site as a random grouping factor**
If you were to repeat your study again, would you select the exact same 8 sites as before because these sites are the only ones you are interested in? If no, you should treat *Site* as a *random* grouping factor, since the 8 sites were selected to be representative of a larger set of sites you are truly interested in (ideally, they would have been selected at random from that larger set of sites). That means that you could formulate your model as a *linear mixed effects regression model* using the lmer() function of R:
```
library(lme4)
m <- lmer(tree_height ~ fire_frequency + (1|Site), data = yourdata)
```
A third possibility would be to use a GEE model - especially since your fire\_frequency variable is a site-level predictor and GEE models can offer a more natural interpretation of its effect than mixed effects models. |
The type theory that I have seen is all developed over lambda calculus, which is an inherently functional language.
Nevertheless, in practice imperative languages have type systems. Are there differences in the type **theory** for imperative vs functional languages? | You might want to look at chapters 13 and 19 of [Types and Programing Languages](https://www.cis.upenn.edu/~bcpierce/tapl/) that handle types for constructs with side-effects, and a more general imperative language (Java) respectively.
As Andrej says, the type system community is aware of imperative languages (indeed, I know of very few completely pure systems!), and theory has developed accordingly. |
I wish to understand why P is a subset of PSIZE, that is, why a polynomial-time language has a polynomial-sized circuit. I read many proofs like [this one here on page 2-3](http://www.stanford.edu/~rrwill/week3.pdf), but all the proofs use the same technique used in the Cook-Levin theorem to convert the computation of M on an n-bit input x into a polynomial-sized circuit.
What I don't understand is that the resulting circuit is dependent on the input x, because what is being converted into a circuit is the computation of M on the specific input x. By definition of PSIZE, the same circuit must work for all the inputs of a fixed length, and thus is not dependent on one specific input.
So how does the process of creating a poly-sized circuit family for a poly-time deterministic Turing machine work, exactly? | Cook-Levin gives one circuit for all inputs of a given size. So although the circuit depends on the input $x$, it only depends on the size of the input. So given TM $M$ with running time $t$, and a number $n$ (the size of the input), Cook-Levin gives a circuit $C\_n$ of size roughly $t^2$ that solves the problem on all inputs of size $n$. The circuit $C\_n$ does not depend on what the bits of the input for $M$ are; however, we do need to know the number of input bits, as that is going to be the number of input bits for the circuit.
Let's assume that we have built an universal quantum computer.
Except for security-related issues (cryptography, privacy, ...) which current real world problems can benefit from using it?
I am interested in both:
* problems that are currently unsolvable for practical input sizes,
* problems that can currently be solved, but where a significant speedup would greatly improve their usability. | Efficiently simulating quantum mechanics.
What is "Fibonacci" about the [Fibonacci LFSR](https://en.wikipedia.org/wiki/Linear-feedback_shift_register#Fibonacci_LFSRs)?
If I read right, Fibonacci LFSR means that it depends on its last two states, but from the example in Wikipedia it doesn't look like two states are taken into consideration (i.e. XORing the taps in the current state, shifting and inputting the left bit...).
What am I missing? | It depends where the logarithm is. If it is just a factor, then it doesn't make a difference, because big-O or $\theta$ allows you to multiply by any constant.
If you take $O(2^{\log n})$ then the base is important. In base 2 you would have just $O(n)$, in base 10 it's about $O(n^{0.3010})$. |
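A quick numeric check of that claim in Python ($2^{\log\_2 n} = n$, while $2^{\log\_{10} n} = n^{\log\_{10} 2} \approx n^{0.3010}$):
```py
import math

n = 10 ** 6
print(2 ** math.log2(n))                        # ~1e6, i.e. O(n)
print(2 ** math.log10(n), n ** math.log10(2))   # both ~64, i.e. O(n^0.3010)
```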
If I am interested in the causal effects of the change in a variable ($E$) on
some outcome ($O$), **how would I represent that in a [directed acyclic graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph) (DAG)?**
Suppose $\Delta E\_2 = E\_2 - E\_1$, where $E\_1$ & $E\_2$ happen at times 1 & 2, would a correct DAG be:
1. Assuming that $\Delta E\_2$ is simply captured by all levels of $E\_1$ and $E\_2$ (\*a la\* the same way interaction effects are so captured)?
![](https://i.stack.imgur.com/nhl92.png "DAG 1: E_1 and E_2 both cause O")
2. Assuming that $\Delta E\_2$ is a causally distinct variable from $E\_1$ and $E\_2$, but requiring the presence of those variables?
![](https://i.stack.imgur.com/tRaYW.png "DAG 2: E_1, E_2, & Delta E all cause O")
3. Assuming that $\Delta E\_2$ is independent of $E\_1$ & $E\_2$ and the latter are not necessary to represent the effects of $\Delta E\_2$?
![](https://i.stack.imgur.com/9LcAH.png "DAG 3: Delta E causes O")
4. Something else?
**NOTE:** "[DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph#Causal_structures)" ***does not*** mean "any old kind of causal or correlational graph," but is a tightly prescribed formalism representing causal beliefs.
My motivation is that I am trying to think about DAG representation of dynamic models like the generalized error correction model:
$$\Delta O\_t = \beta\_{0} + \beta\_{\text{c}}\left(O\_{t-1} - E\_{t-1}\right) + \beta\_{\Delta E}\Delta E\_{t} + \beta\_E E\_{t-1} + \varepsilon\_t$$
Of course, the raw parameter estimate get transformed to interpret model as below, so perhaps DAGing the above model would be even messier?
Short-run instantaneous effect of change in $E$ on $\Delta O$: $\beta\_{\Delta E}$
Short-run lagged effect of level of $E$ on $\Delta O$: $\beta\_{E} - \beta\_{\text{c}} - \beta\_{\Delta E}$
Long-run equilibrium effect of lagged $E$ on $\Delta O$: $\frac{\beta\_{\text{c}} - \beta\_{E}}{\beta\_{\text{c}}}$ | **The solution is to think functionally.**
The value of $\Delta E\_{2} = f(E\_{1},E\_{2})$, more specifically $\Delta E\_{2} = E\_{2} - E\_{1}$. Therefore difference variables may be represented in DAGs by option 4, "something else" (this DAG assumes $E\_{1}$ and $E\_{2}$ directly cause $O$ in addition to their difference):
![](https://i.stack.imgur.com/qezn7.png "DAG of difference variable with direct effects of parent variables")
If $E\_{1}$ & $E\_{2}$ do not have direct effects on $O$, $\Delta E\_{2}$ still remains a function of its parents:
![](https://i.stack.imgur.com/N9iiX.png "DAG of a difference variable without direct effects of parent variables")
If we rewrite the single lag generalized error correction model thus ($Q\_{t-1}$ for 'eQuilibrium term', where $Q\_{t-1} = O\_{t-1} - E\_{t-1}$):
$$\Delta O\_t = \beta\_{0} + \beta\_{\text{c}}\left(Q\_{t-1}\right) + \beta\_{\Delta E}\Delta E\_{t} + \beta\_E E\_{t-1} + \varepsilon\_t$$
Then the DAG underlying the model for $\Delta O\_{t}$ (ignoring its descendants at $t+1$) is:
![](https://i.stack.imgur.com/Uo6Ta.png "Segment of aDAG at time t for a generalized error correction model")
The effects of $E$ on $\Delta O\_{t}$ from the model thus enter from equilibrium term $Q\_{t-1}$, from $E\_{t-1}$ and from change term $\Delta E\_{t}$. Other causes of $O\_{t-1}$, $O\_{t}$, $E\_{t-1}$ and $E\_{t}$ (e.g., unmodeled variables, random inputs) are left implicit.
The portion of this answer corresponding to the first two DAGs is courtesy of personal communication with Miguel Hernán. |
Suppose you're meeting with programmers who have taken some professional programming courses (or are self-taught) but didn't study university-level math.
In order to show them the beauty of TCS, I'd like to gather some nice results/open questions coming from TCS which can easily be explained.
A good candidate for this purpose (IMHO) will be showing that the halting problem is not decidable. Another will be showing a lower bound on the running time of comparison based sorting (although that's a bit pushing it from what I expect them to understand).
I can also use the ideas from [Explain P = NP problem to 10 year old](https://cstheory.stackexchange.com/questions/5188/explain-p-np-problem-to-10-year-old), assuming some of them are unfamiliar with it.
So, questions has to be:
(0. Beautiful)
1. Explainable with (at most) high school math.
2. (preferably) not trivial enough to be shown in professional programming courses (for C++/Java/Web/etc.). | I think that - *independently from the P vs NP question* - the [Cook-Levin theorem](http://en.wikipedia.org/wiki/Cook%27s_theorem) (and the related notion of NP-completeness) is another very good candidate; if you have an (efficient) solver for SAT then you have an (efficient) solver for any problem in NP .... and you can end up with something astonishing at least for me:
* solving $a x\_1^2 + b x\_2 + c = 0$ over non-negative integer variables;
* solving a Sudoku;
* finding a Hamiltonian path in a graph;
* solving a subset sum instance;
* and many other (real life) problems ...
are in some sense "equivalent problems"; so if your boss asks you to create a program for packing boxes into a container ... you can give him a Minesweeper solver ... :-) |
I have a database table of data transfers between different nodes. This is a huge database (with nearly 40 million transfers). One of the attributes is the number of bytes transferred (nbytes), which ranges from 0 bytes to 2 terabytes. I would like to cluster nbytes such that, given k clusters, some x1 transfers belong to cluster k1, x2 transfers to k2, etc.
From the terminology that I used you might have guessed what I was going with: K-means. This is 1-d data, since nbytes is the only feature I care about. When I was searching for different methods for this, I saw that EM was mentioned a couple of times, along with a non-clustering approach. I would like to know your views on how to approach this problem (specifically whether to cluster or not to cluster).
Thanks! | For one-dimensional data, don't use cluster analysis.
Cluster analysis is usually a multivariate technique. Or let me put it the other way around: for one-dimensional data -- which is completely ordered -- there are much better techniques. Using k-means and similar techniques here is a total waste, unless you put in enough effort to actually optimize them for the 1-d case.
Just to give you an example: for k-means it is common to use k random objects as initial seeds. For one-dimensional data, it's fairly easy to do better by just using the appropriate quantiles (1/2k, 3/2k, 5/2k, etc.) after sorting the data *once*, and then optimizing from this starting point. However, 2D data cannot be sorted completely. And in a grid, there will likely be empty cells.
I also wouldn't call it clustering; I would call it finding **intervals**. What you really want to do is to optimize the interval borders. If you do k-means, it will test, for each object, whether it should be moved to another cluster. That does not make sense in 1D: only the objects at the interval borders need to be checked. That obviously is much faster, as there are only ~2k objects there. If the border objects do not prefer other intervals, the more central objects will not either.
You may want to look into techniques such as **[Jenks Natural Breaks optimization](https://en.wikipedia.org/wiki/Jenks_natural_breaks_optimization "Wikipedia article")**, for example.
Or you can do a **[kernel density estimation](https://en.wikipedia.org/wiki/Kernel_density_estimation "Wikipedia article")** and look for local minima of the density to split there. The nice thing is that you do not need to specify k for this!
See [this answer](https://stackoverflow.com/a/35151947/1060350) for an example how to do this in Python (green markers are the cluster modes; red markers a points where the data is cut; the y axis is a log-likelihood of the density):
[![KDE with Python](https://i.stack.imgur.com/inGb2.png)](https://i.stack.imgur.com/inGb2.png)
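For concreteness, here is a minimal sketch of this KDE-based splitting in Python (my own code, not the linked answer; the synthetic byte counts and all names are illustrative). Because transfer sizes are heavily skewed, the density is estimated on a log scale:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema

# Synthetic 1-d "nbytes" data with three size regimes (illustrative).
rng = np.random.default_rng(0)
nbytes = np.concatenate([rng.normal(1e3, 2e2, 500),
                         rng.normal(1e6, 2e5, 500),
                         rng.normal(1e9, 2e8, 500)])
logx = np.log10(np.clip(nbytes, 1, None))    # work on a log scale for skewed sizes

kde = gaussian_kde(logx)
grid = np.linspace(logx.min(), logx.max(), 1000)
density = kde(grid)

minima = argrelextrema(density, np.less)[0]  # indices of local density minima
cuts = grid[minima]                          # interval borders, in log10(bytes)
labels = np.digitize(logx, cuts)             # interval index for every transfer
print("cut points (log10 bytes):", cuts)
```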
P.S. please use the search function. Here are some questions on 1-d data clustering that you missed:
* [Clustering 1D data](https://stats.stackexchange.com/questions/13781/clustering-1d-data)
* <https://stackoverflow.com/questions/7869609/cluster-one-dimensional-data-optimally>
* <https://stackoverflow.com/questions/11513484/1d-number-array-clustering> |
EDIT: I've now asked a [similar question about the difference between categories and sets.](https://cs.stackexchange.com/questions/91357/what-exactly-is-the-semantic-difference-between-category-and-set)
Every time I read about type theory (which admittedly is rather informal), I can't really understand how it differs from set theory, **concretely**.
I understand that there is a conceptual difference between saying "x belongs to a set X" and "x is of type X", because intuitively, a set is just a collection of objects, while a type has certain "properties". Nevertheless, sets are often defined according to properties as well, and if they are, then I am having trouble understanding how this distinction matters in any way.
So in the most **concrete** way possible, what exactly does it **imply** **about $x$** to say that it is of type $T$, compared to saying that it is an element in the set $S$?
(You may pick any type and set that makes the comparison most clarifying). | In practice, claiming that $x$ being of type $T$ *usually* is used to describe *syntax*, while claiming that $x$ is in set $S$ is *usually* used to indicate a *semantic* property. I will give some examples to clarify this difference in *usage* of types and sets. For the difference in what types and sets actually *are*, I refer to [Andrej Bauer's answer](https://cs.stackexchange.com/a/91345/).
An example
----------
To clarify this distinction, I will use the example given in [Herman Geuvers' lecture notes](http://www.cs.ru.nl/~herman/onderwijs/provingwithCA/paper-lncs.pdf). First, we look at an example of inhabiting a type:
$$3+(7\*8)^5:\mathrm{Nat},$$
and an example of being member of a set:
$$3\in \{n\in\mathbb{N}\mid \forall x,y,z\in\mathbb{N}^+ (x^n+y^n\neq z^n)\}$$
The main difference here is that to test whether the first expression is a natural number, we don't have to compute some semantic meaning, we merely have to 'read off' the fact that all literals are of type Nat and that all operators are closed on the type Nat.
However, for the second example, involving the set, we have to determine the semantic meaning of $3$ in the context of that set. For this particular set, this is quite hard: the membership of $3$ in this set is equivalent to proving Fermat's last theorem! Do note that, as stated in the notes, the distinction between syntax and semantics cannot always be drawn that clearly (and you might even argue that *this example* is unclear, as Programmer2134 mentions in the comments).
Algorithms vs Proofs
--------------------
To summarize, types are often used for 'simple' claims about the syntax of some expression, such that membership of a type can be checked by an *algorithm*, while to test membership of a set we would usually require a *proof*.
To see why this distinction is useful, consider a compiler of a typed programming language. If this compiler has to create a formal proof to 'check types', the compiler is asked to do an almost impossible task (automated theorem proving is, in general, hard). If on the other hand the compiler can simply run an (efficient) algorithm to check the types, then it can realistically perform the task.
A motivation for a strict(er) interpretation
--------------------------------------------
There are multiple interpretations of the semantic meaning of sets and types. While under the distinction made here extensional types and types with undecidable type-checking (such as those used in NuPRL, as mentioned in the comments) would not be 'types', others are of course free to call them such (just as free as they are to call them something else, as long as their definitions fit).
However, we (Herman Geuvers and I) prefer not to throw this interpretation out of the window, for which I (not Herman, although he might agree) have the following motivation:
First of all, the intention of this interpretation isn't *that far* from that of Andrej Bauer. The intention of a syntax is usually to describe how to construct something and having an algorithm to actually construct it is generally useful. Furthermore, the features of a set are usually only needed when we want a semantic description, for which undecidability is allowed.
So, the advantage of our more stricter description is to keep the separation *simpler*, to get a distinction more directly related to common practical usage. This works well, as long as you don't need or want to loosen your usage, as you would for, e.g. NuPRL. |
I have run a linear discriminant analysis for the simple 2 categorical group case using the MASS package lda() function in R. With priors fixed at 0.5 and unequal n for the response variable of each group, the output basically provides the group means and the LD1 (first linear discriminant coefficient) value. There is no automatic output of the cutoff (decision boundary) value estimated that is later used to classify new values of the response variable into the different groups. I have tried various unsuccessful approaches to extract this value. It is obvious that in the simple 2 group case the value will be close to the mean of the 2 group means and that the LD1 value is involved (perhaps grand mean \* LD1?). I am probably missing (misunderstanding?) the obvious and would appreciate being educated in this matter. Thanks. Regards, BJ | This is an expanded attempt at the question (an exegetical expansion of @StasK's answer), focusing on **proportions**.
**Standard Error:**
The [standard error (**SE**) of the *sampling distribution a proportion $p$*](http://www.jerrydallal.com/lhsp/psd.htm) is defined as:
$\text{SE}\_p=\sqrt{\frac{p\,(1-p)}{n}}$. This can be contrasted to the [standard deviation (**SD**) of the *sampling distribution* of a proportion $\pi$](http://onlinestatbook.com/2/estimation/proportion_ci.html): $\sigma\_p=\sqrt{\frac{\pi\,(1-\pi)}{n}}$.
**Confidence Interval:**
The [confidence interval](https://en.wikipedia.org/wiki/Confidence_interval#Confidence_intervals_for_proportions_and_related_quantities) estimates the population parameter $\pi$ based on the sampling distribution and the central limit theorem (CLT), which allows a normal approximation. Hence, given an SE and a proportion, the $95\%$ confidence interval will be calculated as:
$$p\,\pm\,Z\_{\alpha/2}\,\text{SE}$$
Given that $Z\_{\alpha/2}=Z\_{0.975}=1.959964\sim1.96$, the CI will be:
$$p\,\pm\,1.96\,\sqrt{\frac{p\,(1-p)}{n}}$$.
This raises a question about using the normal distribution even though we really don't know the population SD: when estimating confidence intervals for means, if the SE is used in lieu of the SD, the $t$ distribution is typically felt to be a better choice due to its fatter tails. However, in the case of a proportion, there is only one parameter, $p$, being estimated, since the formula for the [Bernoulli variance](https://en.wikipedia.org/wiki/Bernoulli_distribution) is entirely dependent on $p$, as $p\,(1-p)$. This is very nicely explained [here](https://www.quora.com/In-hypothesis-testing-why-doesnt-the-sample-proportion-have-a-students-t-distribution-like-the-sample-mean).
**Margin of Error:**
The [margin of error](https://en.wikipedia.org/wiki/Margin_of_error#Explanation) is simply the "radius" (or half the width) of a confidence interval for a particular statistic, in this case the sample proportion:
$\text{ME}\_{\text{@ 95% CI}}=1.96\,\sqrt{\frac{p\,(1-p)}{n}}$.
Graphically,
[![enter image description here](https://i.stack.imgur.com/4Prbn.png)](https://i.stack.imgur.com/4Prbn.png) |
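As a quick numerical illustration of the formulas above (my own sketch, not part of the original answer), assuming a sample of $n = 500$ with 320 'successes':

```python
import math

n, successes = 500, 320          # illustrative numbers
p = successes / n                # sample proportion
z = 1.959964                     # z_{alpha/2} for a 95% interval

se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
me = z * se                      # margin of error = half-width of the CI
lo, hi = p - me, p + me          # normal-approximation (Wald) 95% CI

print(f"p = {p:.3f}, SE = {se:.4f}, ME = {me:.4f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```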
If your logistic regression fit has coefficients with the following attributes, do you check whether the values of `Pr(Z>|z|)` are smaller than 0.95 to determine whether that variable is needed at a 5% level of significance?
ie. If `Pr(>|z|)` is 0.964, this variable is not needed at 5% significance.
[![enter image description here](https://i.stack.imgur.com/6UTBa.png)](https://i.stack.imgur.com/6UTBa.png) | Firstly, the p-value given for the Z-statistic would have to be interpreted as how likely it is that a result as extreme or more extreme than that observed would have occured under the null hypothesis. I.e. 0.96 would in principle mean that the data are providing very little evidence that the variable is needed (while small values such as, say, $p\leq 0.05$ would provide evidence for the likely relevance of the variable, as pointed out by others already). However, a lack of clear evidence that the variable is needed in the model to explain the this particular data set would not imply evidence that the variable is not needed. That would require a difference approach and with a very larege standard error one would not normally be able to say that the variable does not have an effect. Also, it is a very bad idea to decide which variables are to be included in a model based on p-values and then fitting the model with or without them as if no model selection had occurred.
Secondly, as also pointed out by others, when you get this huge a coefficient (corresponding to an odds ratio of $e^{-14.29}$) and standard error from logistic regression, you typically have some problem, e.g. the algorithm did not converge or there is complete separation in the data. If your model really did include only an intercept, then perhaps there are no events at all, i.e. no record had an outcome? If so, then a standard logistic regression may not be able to tell you a lot. There are some alternatives for such sparse data situations (e.g. a Bayesian analysis including the available prior information).
I have the problem:
>
> Show that there exists a real number for which no program exists
> that runs infinitely long and writes that number's decimal
> digits.
>
>
>
I suppose it can be solved by reducing it to the Halting problem, but I have no idea how to do so.
I would also appreciate links to similar problems for further practice. | It's actually much simpler. There's only a countable number of algorithms. Yet there are uncountably many real numbers. So if you try to pair them up, some real numbers will be left hanging. |
This is **not** a class assignment.
It so happened that 4 team members in my group of 18 share the same birth month, let's say June. What are the chances that this could happen? I'm trying to present this as a probability problem in our team meeting.
Here is my attempt:
* All possible outcomes: $12^{18}$
* 4 people chosen among 18: $^{18}C\_4$
* The common month can be chosen in $^{12}C\_1$ ways
So the probability of 4 people out of 18 sharing the same birth month is $\frac{^{18}C\_4 \cdot {}^{12}C\_1}{12^{18}}$ = a very, very small number.
Questions:
1. Is this right way to solve this problem?
2. What the probability that there is **exactly** 4 people sharing a birth month?
3. What the probability that there is **at least** 4 people (4 or more people) sharing a birth month?
Please note: I know that not all months are equally likely, but for simplicity let's assume all months have an equal chance. | >
> It so happened that 4 team members in my group of 18 happened to share the same birth month. Let's say June. What are the chances that this could happen? I'm trying to present this as a probability problem in our team meeting.
>
>
>
There are several other good answers here on the mathematics of computing probabilities in these "birthday problems". One point to note is that birthdays are not uniformly distributed over calendar days, so the uniformity assumption that is used in most analyses slightly underestimates the true probability of clusters like this. However, setting that issue aside, I would like to get a bit "meta" on you here and encourage you to think about this problem a little differently, as one that involves a great deal of "confirmation bias".
Confirmation bias occurs in this context because you are more likely to take note of an outcome and seek a probabilistic analysis of that outcome if it is unusual (i.e., low probability). To put it another way, think of all the previous times in your life when you were in a room with people, learned their birth months, and the results were not unusual. In those cases, I imagine that you did not bother to come on CV.SE and ask a question about it. So the fact that you are here asking this question is an important conditioning event, one that would only happen if you observed something sufficiently unusual to warrant the question. In view of this, the conditional probability of the result you observed, conditional on your presence asking this question, is quite high --- much higher than the analysis in the other answers would suggest.
To examine this situation more formally, consider the following events:
$$\begin{matrix}
\mathcal{A}(x,y) & & & \text{Seeing } x \text{ people with the same birth month out of } y \text{ random people}, \\[6pt]
\mathcal{B} & & & \text{Deciding the observed outcome warrants probabilistic investigation}.
\end{matrix}$$
Most of the answers here are telling you how to estimate $\mathbb{P}(\mathcal{A}(4,18))$, but the actual probability at play here is the conditional probability $\mathbb{P}(\mathcal{A}(4,18) | \mathcal{B})$, which is **much**, **much** higher (and cannot really be computed here).
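That said, if you do want a quick estimate of $\mathbb{P}(\mathcal{A}(4,18))$ itself under the question's equal-months simplification, a small Monte Carlo simulation (my own sketch, not part of the original answer) gives both the 'exactly 4' and 'at least 4' versions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_sim = 18, 100_000

# Draw birth months uniformly at random (the simplifying assumption in the question).
months = rng.integers(0, 12, size=(n_sim, n_people))
# For each simulated team, count how many people fall in the most crowded month.
counts = np.apply_along_axis(np.bincount, 1, months, minlength=12)
max_shared = counts.max(axis=1)

print("P(most crowded month has exactly 4) ~", np.mean(max_shared == 4))
print("P(some month has at least 4)        ~", np.mean(max_shared >= 4))
```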
I would like to know the meaning of an autocorrelation graph of a sine wave. When the time lag is 0, the autocorrelation should give the highest value of 1, since a copy of the signal is completely correlated with itself. By this logic, after a time equal to the period of the signal, the correlation should again be maximum, since the shifted signal is again the signal itself. However, when I plot the correlation in Python, I get a function that keeps increasing, which goes against my intuition that the correlation function should be periodic. Could anyone please explain why the autocorrelation shows this trend in its graphical form?
```
import matplotlib.pyplot as plt
import numpy as np
time = np.arange(0, 10, 0.1);
y = np.sin(time)
result = np.correlate(y, y, mode='full')
plt.plot(result[:int(result.size/2 )])
plt.show()
```
[![enter image description here](https://i.stack.imgur.com/QE8le.png)](https://i.stack.imgur.com/QE8le.png) | It's too late to answer the OP's question, but I write this in the hope of helping someone who might be looking for an answer. Your understanding of the autocorrelation function is correct: it should be maximal when the signal is in most agreement with itself, which is at lag 0. The result that you see is because you plot only half the values of the autocorrelation. Also, in your plot, the x-axis is not correctly labelled. Usually the autocorrelation is plotted against the 'time delay' or lag between the signals, not the index of the result starting from 0 as you have shown in your plot. If t1 and t2 are the time lengths of the two signals, then the time delay between the signals ranges from -t1 to t2. If the x-axis is correctly labelled, the values of the autocorrelation given by `np.correlate()` will make a lot of sense.
```
import matplotlib.pyplot as plt
import numpy as np
time = np.arange(0, 10, 0.1);
y = np.sin(time)
result = np.correlate(y, y, mode='full')
lags = np.arange(-time[-1],time[-1]+0.1,0.1) #adding 0.1 to include the last instant of time also
plt.figure()
plt.plot(lags,result)
plt.xlabel('Lag')
plt.ylabel('autocorrelation')
plt.show()
```
[![AutoCorrelation of a sine wave](https://i.stack.imgur.com/61ZFF.png)](https://i.stack.imgur.com/61ZFF.png)
Now it can be seen that the autocorrelation is maximal at a lag of 0, and the next peak of the autocorrelation function occurs at the time period of the signal, just as expected in the question. The value of the second correlation peak is smaller because of the fixed length of the signal (= 10 s): shifting the signal by its own time period (approx. 6.28 s) leaves a portion of the signal without matching values, simply because the function is not defined outside the range [0, 10).
The outputs from `np.correlate()` or `scipy.signal.correlate()` do not make this very clear; indeed, the first intuition would be to plot the correlation against its index. The documentation and function outputs of the equivalent MATLAB function `xcorr` make this clearer; it even gives the lags at which the correlation values are computed as one of its outputs.
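One small addition (mine, not part of the original answer): if you also want the zero-lag value to equal exactly 1, as anticipated in the question, divide the raw output of `np.correlate` by its value at zero lag (the signal's energy), which for `mode='full'` sits at the middle index:

```python
import matplotlib.pyplot as plt
import numpy as np

time = np.arange(0, 10, 0.1)
y = np.sin(time)
result = np.correlate(y, y, mode='full')
result = result / result[result.size // 2]   # lag 0 is at the middle index for mode='full'
lags = np.arange(-time[-1], time[-1] + 0.1, 0.1)

plt.plot(lags, result)
plt.xlabel('Lag')
plt.ylabel('Normalized autocorrelation')
plt.show()
```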
Hope this helps. |
I have a pandas data frame with some related columns; let's say one column shows housing sale prices and another shows the total area of each house. If I look for outliers alone, it shows really expensive houses, but some of them make sense because they have huge areas; what I want to find is cheap houses with huge areas.
I can spot them by eye if I plot scatterplot and filter them out manually, but is there any way to do this without visual analysis? | SAT score is discrete, but it is not a count. Not all discrete variables need to be counts. SAT score is an interval variable. Typically, SAT score is treated as a continuous variable in regression models. |
I'm trying to really internalize the way backpropagation works. I made up different networks with increasing complexity and wrote out the formulas for them.
However, I have some difficulties with the matrix notation. I hope someone can help me.
My network has 2 input, 3 hidden and 2 output neurons.
[![enter image description here](https://i.stack.imgur.com/bt6V5.jpg)](https://i.stack.imgur.com/bt6V5.jpg)
I wrote up the matrices: [![enter image description here](https://i.stack.imgur.com/QlBeP.jpg)](https://i.stack.imgur.com/QlBeP.jpg)
The loss is MSE: $ L = \frac {1}{2} \sum (y\_{hat} - y\_{true})^2$
The derivative of the Loss with respect to the weight matrix $W^{(3)}$ should have the same dimensions like $W^{(3)}$ to update each entry with (stochastic) gradient descent.
$\frac {\partial L}{\partial W^{(3)}} = \frac {\partial L}{\partial a^{(3)}} \frac {\partial a^{(3)}}{\partial z^{(3)}} \frac {\partial z^{(3)}}{\partial W^{(3)}} = (a^{(3)} - y) \odot a^{(3)} \odot (1 - a^{(3)}) a^{(2)T}$
**First question**: is it correct to transpose $a^{(2)}$, since otherwise the dimensions would not work out?
Now for the second weight matrix, where **I cannot figure out what is wrong with the dimensions**:
$\frac {\partial L}{\partial W^{(2)}} = \frac {\partial L}{\partial a^{(3)}} \frac {\partial a^{(3)}}{\partial z^{(3)}} \frac {\partial z^{(3)}}{\partial a^{(2)}} \frac {\partial a^{(2)}}{\partial z^{(2)}} \frac {\partial z^{(2)}}{\partial W^{(2)}} = (a^{(3)} - y) \odot a^{(3)} \odot (1 - a^{(3)}) W^{(3)} (1,1,1)^T a^{(1)T}$
I get **2x1 2x3 3x1 1x2**...
I wrote just $(1,1,1)$ assuming that the the $z = (z\_1, z\_2, z\_3)$ are greater than 0. | For your first question, yes, transposing $a^{(2)}$ will do the job since for each entry of the matrix $w\_{ij}^{(3)}$, the derivative includes the multiplier $a\_{i}^{(2)}$. So, $a\_{1}^{(2)}$ will be in the first column, $a\_{2}^{(2)}$ will be in the second column and so on. This is directly achieved by multiplying $\delta$ (the first three terms) by ${a\_2^{(2)}}^T$Note that in your indexing, $w\_{ij}$ denotes $j$-th row and $i$-th column.
Second one is a bit tricky. First of all, you're using [denominator layout](https://en.wikipedia.org/wiki/Matrix_calculus#/Denominator-layout_notation), so a vector of size $m$ divided by another vector of size $n$ has derivative of size $n\times m$. Typically, numerator layout is more common.
### It's all about Layouts
Let's say we have a scalar loss $L$, and two vectors $a,z$ which have dimensions $m,n$ respectively. In $\frac{\partial L}{\partial z}=\frac{\partial L}{\partial a}\frac{\partial a}{\partial z}$, according to denominator layout, first one produces $m\times 1$ vector, and second one produces $n\times m$ matrix, so dimensions mismatch. If it was numerator layout, we'd have $1\times m$ times $m\times n$, and get $1\times n$ gradient, still consistent with the layout definition.
This is why you should append **to the left** as you move forward in denominator layout (because we're actually transposing a matrix multiplication in changing the layout: $(AB)^T=B^TA^T$):
$$\underbrace{\frac{\partial L}{\partial z}}\_{n\times 1}=\underbrace{\frac{\partial a}{\partial z}}\_{n\times m}\underbrace{\frac{\partial L}{\partial a}}\_{m\times 1}$$
So,
$$\frac {\partial L}{\partial W\_{ij}^{(2)}} = \underbrace{\frac {\partial z^{(2)}}{\partial W\_{ij}^{(2)}}}\_{1\times 3} \underbrace{\frac {\partial a^{(2)}}{\partial z^{(2)}}}\_{3\times 3} \underbrace{\frac {\partial z^{(3)}}{\partial a^{(2)}}}\_{3\times 2} \underbrace{\frac {\partial L}{\partial z^{(3)}}}\_{2\times 1}$$
And everything matches. I've changed two more things:
* Some of these calculations can be merged and optimised using element-wise multiplications, e.g. the term $\frac {\partial a^{(2)}}{\partial z^{(2)}}$ produces a 3x3 output, but it's a diagonal matrix. This is actually what you've done while calculating gradients in the last layer.
* I've used $W\_{ij}$ because it's easier to think. $\frac{\partial z}{\partial W}$ is a 3D tensor, since numerator is a vector and denominator is a matrix. After finding the expressions for each $W\_{ij}$ and placing them into the gradient matrix one by one according to **denominator** layout, you can take out common multiplications and write the final formula. |
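To tie the shapes together, here is a minimal numpy sketch (my own, not part of the original exchange) of one backward pass for this 2-3-2 network with MSE loss, assuming a ReLU hidden layer (matching the $(z>0)$ indicator in the question) and a sigmoid output layer (matching the $a \odot (1-a)$ factor); every gradient ends up with the same shape as its weight matrix.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
a1 = rng.normal(size=(2, 1))               # input a^(1): 2x1
y  = rng.normal(size=(2, 1))               # target: 2x1
W2 = rng.normal(size=(3, 2))               # W^(2): 3x2
W3 = rng.normal(size=(2, 3))               # W^(3): 2x3

# Forward pass
z2 = W2 @ a1
a2 = np.maximum(z2, 0.0)                   # hidden layer, ReLU: 3x1
z3 = W3 @ a2
a3 = sigmoid(z3)                           # output layer, sigmoid: 2x1

# Backward pass: the diagonal Jacobians collapse into element-wise products.
delta3 = (a3 - y) * a3 * (1 - a3)          # dL/dz3: 2x1
dW3 = delta3 @ a2.T                        # 2x3, same shape as W3
delta2 = (W3.T @ delta3) * (z2 > 0)        # dL/dz2: 3x1 (ReLU derivative as a 0/1 mask)
dW2 = delta2 @ a1.T                        # 3x2, same shape as W2

print(dW3.shape, dW2.shape)                # (2, 3) (3, 2)
```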
I wish to train a model that detects the breed of a dog based on video input. I have a dataset containing 10 classes with 30 videos in each class. The problem is that for each of these videos, the dog is not present throughout the course of the video. The following are examples of 2 videos from the dataset:
Video 1: Video of backyard (first 5 seconds) --> Dog appears (15 seconds) --> Video of surrounding buildings (3 seconds)
Video 2: Video of grass (first 8 seconds) --> Dog appears (3 seconds) --> Video of nearby people (4 seconds)
I presume that my CNN would detect redundant features and hence give incorrect outputs if I trained my model on the videos as is. Hence, do I need to manually trim each of the 300 videos to show only the part where the dog appears, or is there an easier way to approach this problem? | One option is to create a hierarchical system. The first stage could be a model that detects the presence of "dog" / "not dog". The second stage could then, if a dog is present, predict the specific breed.
Many Convolutional Neural Networks (CNNs) are designed for images. Thus, classification of the video would happen frame-by-frame.
Since this is a relatively common task, find a pretrained CNN and see how well it performs on your specific data. If the performance is not acceptable, then label your data and fine-tune the model.
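As a rough sketch of the frame-by-frame idea (my own assumptions throughout: the video path is hypothetical, torchvision >= 0.13 is assumed, and in practice you would fine-tune the classifier on your 10 breeds rather than rely on raw ImageNet classes): run a pretrained image classifier over the decoded frames and aggregate its per-frame predictions.

```python
import cv2
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT        # ImageNet weights (many dog breeds among the classes)
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                # the matching resize/crop/normalize pipeline

cap = cv2.VideoCapture("dog_video.mp4")          # hypothetical file name
frame_probs = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    x = preprocess(Image.fromarray(rgb)).unsqueeze(0)
    with torch.no_grad():
        frame_probs.append(torch.softmax(model(x), dim=1)[0])
cap.release()

if frame_probs:
    avg = torch.stack(frame_probs).mean(dim=0)   # aggregate predictions over all frames
    print("top ImageNet class index:", int(avg.argmax()))
```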
I am new to machine learning and I got this task at my university. I have a dataset with over 100 columns and two target variables: $target1$ is categorical, i.e. $0$ or $1$, and $target2$ is continuous, i.e. values in the range $0$ to $100$.
How can I predict this type of problem?
I tried using multi-output prediction from sklearn with a random forest as the estimator, and it predicts nicely for the continuous target variable but not for the categorical target variable.
1. The first model would predict whether it's Target 1 or Target 2 by looking at the 100 columns.
2. The second model would then look at the 100 columns, plus the output of model 1, and predict 0 or 1 in the case of target 1, or 0-100 in the case of target 2.
I do not think you can achieve the result with just one single model.
If you need more information, I could elaborate on it. But this should give you a starting point.
Here is an explanation of the 2 above points:
1. Train a classifier with all the data points you have, with labels Target 1 / Target 2. For this you could use any family of classifiers, but you need to be very careful in the evaluation: if this model performs poorly, you will have a problem, as its classification will affect your next model. You also need to check whether the distribution between target 1 and target 2 is appropriate before using a model to classify them.
2. Once the classifier is done, you can then use regression with all the input features + the class of the entry (target 1 or 2); a minimal sketch of this two-stage idea is given below.
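A minimal scikit-learn sketch of the two-stage idea (my own reading of the steps above; the synthetic data and all names are illustrative): fit a classifier for the binary target, then feed its predicted probability in as an extra feature for the regressor on the continuous target.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative synthetic data: 100 feature columns, a 0/1 target and a 0-100 target.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 100))
target1 = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)
target2 = np.clip(50 + 10 * X[:, 1] + 5 * target1 + rng.normal(size=1000), 0, 100)

X_tr, X_te, y1_tr, y1_te, y2_tr, y2_te = train_test_split(
    X, target1, target2, test_size=0.2, random_state=0)

# Stage 1: classifier for the categorical target.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y1_tr)

# Stage 2: regressor for the continuous target, with stage 1's output as an extra column.
# (In practice, out-of-fold predictions would be safer here to limit leakage.)
X_tr2 = np.column_stack([X_tr, clf.predict_proba(X_tr)[:, 1]])
X_te2 = np.column_stack([X_te, clf.predict_proba(X_te)[:, 1]])
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr2, y2_tr)

print("classifier accuracy:", clf.score(X_te, y1_te))
print("regressor R^2      :", reg.score(X_te2, y2_te))
```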
Many times I have come across informal warnings against "data snooping" (here's [one amusing example](http://goo.gl/mVVNV4)), and I think I have an intuitive idea of roughly what that means, and why it may be a problem.
On the other hand, "exploratory data analysis" seems to be a perfectly respectable procedure in statistics, at least judging by the fact that a [book](http://books.google.com/books/about/Exploratory_Data_Analysis.html?id=UT9dAAAAIAAJ) with that title is still reverentially cited as a classic.
In my line of work I often come across what looks to me like rampant "data snooping", or perhaps it would be better described as "data [torture](http://en.wikipedia.org/wiki/Ronald_Coase#Quotations)", though those doing it seem to see the same activity as entirely reasonable and unproblematic "exploration".
Here's the typical scenario: costly experiment gets carried out (without much thought given to the subsequent analysis), the original researchers cannot readily discern a "story" in the gathered data, someone gets brought in to apply some "statistical wizardry", and who, after *slicing and dicing* the data every which way, finally manages to extract some publishable "story" from it.
Of course, there's usually some "validation" thrown in the final report/paper to show that the statistical analysis is on the up-and-up, but the blatant publish-at-all-cost attitude behind it all leaves me doubtful.
Unfortunately, my limited understanding of the do's and don'ts of data analysis keeps me from going beyond such vague doubts, so my conservative response is to basically disregard such findings.
My hope is that not only a better understanding of the distinction between exploration and snooping/torturing, but also, and more importantly, a better grasp of principles and techniques for detecting when that line has been crossed, will allow me to evaluate such findings in a way that reasonably accounts for a less-than-optimal analytic procedure, and thus be able to go beyond my current rather simple-minded response of blanket disbelief.
---
EDIT: Thank you all for the very interesting comments and answers. Judging by their content, I think I may have not explained my question well enough. I hope this update will clarify matters.
My question here concerns not so much what *I* should do to avoid torturing *my* data (although this is a question that also interests me), but rather: how should I regard (or evaluate) results that *I know for a fact* have been arrived at through such "data torture"?
The situation gets more interesting in those (much rarer) cases in which, in addition, I am in the position to voice an opinion on such "findings" before they get submitted for publication.
At this point the *most* I can do is say something like "I don't know how much credence I can give to these findings, given what I know about the assumptions and procedures that went into getting them." *This is too vague to be worth even saying.* Wanting to go beyond such vagueness was the motivation for my post.
To be fair, my doubts here are based on more than seemingly questionable statistical methods. In fact, I see the latter more as consequence of the deeper problem: a combination of a cavalier attitude towards experimental design coupled with a categorical commitment to publishing the results as they stand (i.e. without any further experiments). Of course, follow-up projects are always envisioned, but it is simply ***out-of-the-question*** that not a single paper will come out of, say, "a refrigerator filled with 100,000 samples."
Statistics comes into the picture only as a means towards fulfilling this supreme objective. The only justification for latching onto the statistics (secondary as they are in the whole scenario) is that a frontal challenge to the assumption of "publication-at-all-cost" is simply pointless.
In fact, I can think of only one effective response in such situations: to propose some statistical test (not requiring additional experimentation) that truly tests the quality of the analysis. But I just don't have the chops in statistics for it. My hope (naive in retrospect) was to find out what I could study that may enable me to come up with such tests...
As I write this it dawns on me that, if it doesn't already exist, the world could use a new sub-branch of statistics devoted to techniques for detecting and exposing "data torture". (Of course, I don't mean getting carried away by the "torture" metaphor: the issue is not "data torture" per se, but the spurious "findings" it can lead to.) | There is a distinction which sometimes doesn't get enough attention, namely **hypothesis generation vs. hypothesis testing**, or exploratory analysis vs. hypothesis testing. You are allowed all the dirty tricks in the world to come up with your idea / hypothesis. But when you later test it, you must ruthlessly kill your darlings.
I'm a biologist working with high-throughput data all the time, and yes, I do this "slicing and dicing" quite often. In most cases the experiment performed was not carefully designed; or maybe those who planned it did not account for all possible results. Or the general attitude when planning was "let's see what's in there". We end up with expensive, valuable and in themselves *interesting* data sets that I then turn around and around to come up with a story.
But then, it is only a story (possibly a bedtime one). After you have selected a couple of interesting angles -- and here is the crucial point -- you must test them not only with independent data sets or independent samples, but preferably with an independent *approach*, an independent experimental system.
The importance of this last thing -- an independent experimental setup, not only an independent set of measurements or samples -- is often underestimated. However, when we test 30,000 variables for significant differences, it often happens that similar (but different) samples from the same cohort, analysed with the same method, will not reject the hypothesis we based on the previous set. But then we turn to another type of experiment and another cohort, and our findings turn out to be the result of a methodological bias, or are limited in their applicability.
That is why we often need several papers by several independent researchers to really accept a hypothesis or a model.
So I think such data torturing is fine, as long as you keep this distinction in mind and remember what you are doing, and at what stage of the scientific process you are. You can use moon phases or redefine 2+2 as long as you have *independent* validation of the data. To put it in a picture:
![enter image description here](https://i.stack.imgur.com/sQNcc.jpg)
Unfortunately, there are those who order a microarray to round out a paper after several experiments have been done and no story emerged, with the hope that the high-throughput analysis shows something. Or they are confused about the whole hypothesis testing vs. generation thing.
So I am new to all this. I was wondering: in pandas, can I convert my column values into numbers?
I'll try and give an example to explain what I mean
So say, for example, I have a column called 'animals'; in this column I have six different animals, but I want to convert them to numerical values, as simple as 1, 2, 3, 4, 5, 6 for each of the different animals. How would I go about doing this? | Well, one way I like to handle this problem (which is a common problem, at least in daily job life) is to convert each possibility in the column into a binary column. Let me elaborate a bit. Let's say you have your column **animals** with 3 possibilities: dog, cat, and horse. You *explode* your column into 3 different columns: colDog, colCat and colHorse. And you fill your new columns based on the value of the column **animals**. For example: if you have dog in the first row, you put 1 in the column colDog, etc.
The problem with handling categorical data as numerical values instead of binary columns is that you create an artificial order between your values. If dog is 1, cat is 2 and horse is 3, then horse will have more impact than cat and dog in many models, when you probably just want to represent your categories. A sketch of both options is shown below.
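Here is a minimal pandas sketch of both options (my own code; the animal names are illustrative): plain integer codes if you really want 1-6, and one binary column per animal if you want to avoid the artificial ordering.

```python
import pandas as pd

df = pd.DataFrame({"animals": ["dog", "cat", "horse", "dog", "snake", "cow", "fish"]})

# Option 1: simple integer codes (imposes an arbitrary order, as discussed above).
df["animals_code"] = df["animals"].astype("category").cat.codes + 1   # 1..k instead of 0..k-1

# Option 2: one binary column per animal (one-hot / dummy encoding).
dummies = pd.get_dummies(df["animals"], prefix="col")
df = pd.concat([df, dummies], axis=1)

print(df.head())
```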
My favourite theorem in complexity theory is the Time hierarchy theorem. However, this was done in 1965.
I wanted to know, then, if there is anything similar for quantum computing.
Also, if not, which people/groups are working in this direction? | The answer is no. We don't even have a time hierarchy theorem for bounded-error probabilistic polynomial time (i.e., BPTIME). The deterministic and non-deterministic time hierarchy theorems rely on a diagonalization argument, which does not seem to work for semantic classes. This is why we don't have strong hierarchy theorems for semantic classes.
The best result I'm aware of is a hierarchy theorem for BPTIME with 1 bit of advice: [Fortnow, L.; Santhanam, R. (2004). Hierarchy Theorems for Probabilistic Polynomial Time](http://www.cs.uchicago.edu/~fortnow/papers/probhier.ps).
I don't know of any groups working on a quantum time hierarchy theorem. I would guess that this is because it seems like the BPTIME hierarchy problem is easier, so researchers would attack that problem instead.
(Somewhat related questions: [Is there a syntactic characterization for BPP, BQP, or QMA?](https://mathoverflow.net/questions/35236/is-there-a-syntactic-characterization-for-bpp-bqp-or-qma) on MathOverflow and [Semantic vs. Syntactic Complexity Classes](https://cstheory.stackexchange.com/questions/1233/semantic-vs-syntactic-complexity-classes) on cstheory.) |
Let's say I've got a function $f$ that takes a single number and returns a number. And I have another function $\mathrm{verify}f$ that takes the input I gave to $f$ and the number returned by $f$, and returns true if that output is what $f$ would have returned for that input.
If you know the input and output of $f$ and the verification function, then it is possible to work out what the implementation of $f$ is.
What I'd like to know is whether there exists some function such that, even if you know the input, the output, and the verification function, it's impossible to work out what $f$ does.
It feels like these functions must not exist – but I don't know that for sure. | There are examples that are vaguely of this sort in cryptography, if you are willing to allow $f$ to also depend upon a key that is kept secret.
If you choose $f$ right, here is what can be achieved. Given many pairs $(x\_i,y\_i)$ where $y\_i=f(k,x\_i)$, it can potentially be hard (computationally infeasible) to recover $k$, for someone who has no prior information on $k$. Depending upon your situation, this might or might not be a suitable solution.
Note: This does not contradict the impossibility results in the other answers.
You might also be interested in software obfuscation. Do some reading on the topic, and you'll probably be able to learn more. More detailed questions about obfuscation are probably more likely to reach a knowledgeable audience on Security.SE or Crypto.SE. |
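As a toy illustration of this keyed-function idea using only the Python standard library (my own example, purely to make the setup concrete, not a security recommendation): take $f(k, x)$ to be HMAC-SHA256 of $x$ under a secret key $k$. Note that the verifier below simply recomputes $f$ with the key, so it is only available to key holders; the point illustrated is just that a pile of input/output pairs gives an outsider no practical way to recover $k$ or to predict $f$ on fresh inputs.

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)                      # the secret k

def f(k: bytes, x: int) -> str:
    """A keyed function: HMAC-SHA256 of the decimal input under key k."""
    return hmac.new(k, str(x).encode(), hashlib.sha256).hexdigest()

def verify_f(k: bytes, x: int, y: str) -> bool:
    """Recomputes f under the key; only a key holder can run this check."""
    return hmac.compare_digest(f(k, x), y)

pairs = [(x, f(key, x)) for x in range(10)]        # many observed (input, output) pairs
print(all(verify_f(key, x, y) for x, y in pairs))  # True
# Without `key`, the pairs above do not help an attacker recover k or compute f on new inputs.
```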