If I have 39% of students at a school that exhibit a specific, objective, measurable behavior, can I extrapolate this and say that any student at that school has a 39% chance of exhibiting that behavior?
As others have mentioned already, there's no clear separation between statistics, machine learning, artificial intelligence and so on, so take any definition with a grain of salt. Logistic regression is probably more often labeled as statistics rather than machine learning, while neural networks are typically labeled as machine learning (even though neural networks are often just a collection of logistic regression models). In my opinion, machine learning studies methods that can somehow learn from data, typically by constructing a model in some shape or form. Logistic regression, like SVM, neural networks, random forests and many other techniques, does learn from data when *constructing* the model.

> If I understood correctly, in a Machine Learning algorithm, the model has to learn from its experience

That is not really how machine learning is usually defined. Not all machine learning methods yield models which dynamically adapt to new data (this subfield is called *online learning*).

> What is the difference between logistic regression with the normal regression in term of "learning"?

Many regression methods are also classified as machine learning (e.g. SVM).
1. Valid programs for NP imply every solution is a valid answer.
2. NP not equals #P implies not all solutions are answers.
3. Therefore, Validity implies NP=#P.

NP is the problem class for finding single verifiable solutions. #P is the related problem class of counting solutions. If the proof is invalid, where is the flaw? By my reasoning, the proof is a three-variable, three-clause 2cnf boolean expression, one of the smallest possible uniquely solvable boolean formulas, requiring three inferences to resolve.

My best counting benchmark (4cnf 4-coloring, degree 6 graph) took eleven weeks: C4D6N66c.cnf + #P 472,406,068,323,174 retros 76865745357 infers 66385 billion. Send to pehoushek1 at gmail for the single-file C++ program, bob, for #sat, dimacs forms. The three-thousand-line bob program can solve millions of small formulas in a single run, but can be exponential on large formulas. bob also solves sat, unsat, and qbfs, in roughly the same order of magnitude of time as #P, computing nearly two trillion inferences per day.

My main publication in the general area is Introduction to Qspace (Satisfiability 2002), containing a short proof of the theorem #P=#Q: the number of satisfying assignments to a boolean formula equals the number of valid quantifications of the formula. bob uses #P=#Q to solve qbfs, indicating coNP=NP=#P=#Q=PSpace=Exp. Garey and Johnson is the main reference.
You have some fundamental misunderstanding of what a language $L$ being in $\mathsf{NP} \cap \mathsf{coNP}$ means. You'd need to show that there exist two machines: $M$ which is an $\mathsf{NP}$ machine, and $M'$, which is a $\mathsf{coNP}$ machine, such that $L$ is decided by $M$, and $L$ is also decided by $M'$. Each one should be able to solve the problem on its own, without being given the other one as an oracle, like you do. Your construction on the other hand requires a linear number of calls to *both* $\mathsf{NP}$ and $\mathsf{coNP}$. In other words you have an alternating Turing machine with linear number of alternations. It's easy to see that such a machine can solve TQBF and therefore any problem in $\mathsf{PSPACE}$. Since the functional equivalent $\mathsf{FPSPACE}$ of $\mathsf{PSPACE}$ contains $\mathsf{\#P}$ and $\mathsf{FPSPACE}$ reduces easily (in the Turing reduction sense) to $\mathsf{PSPACE}$, your statement is not at all surprising. In fact your machine is more powerful than needed for solving $\mathsf{\#P}$ problems.
I have GPS collar data that I have used to extract values to points through ArcGIS Pro on MODIS landcover data. From this, I have a record of the number of times the animal has been in each landcover class. My data looks like this:

| Landcover | Selection Frequency |
| --- | --- |
| Evergreen | 105 |
| WoodySavanna | 327 |
| Savanna | 30 |
| Grassland | 2 |
| Croplands | 2 |

It looks like there is a pattern of selection there, so I am wondering what statistical test I can run to identify whether there is a significant difference between the selection frequencies of the landcover classes. Ideally I would be running this in `R`.
Your problem can be rephrased in terms of testing if the cell probabilities of a Multinomial distribution follow a given pattern. In particular, given the sample $(X\_1,\ldots,X\_5)\sim \text{Mn}(466,\theta\_1,\ldots,\theta\_5)$ the problem is to test $$H\_0: \theta\_1=\cdots=\theta\_5=1/5$$ against $$H\_1:\theta\_i\neq\theta\_j\, \text{for at least one pair } i,j, \text{with } i\neq j.$$ Note that $466$ is the sum of cell counts. There are several ways to implement this test, and in `R` the simplest way is perhaps this:

```
counts = c(105, 327, 30, 2, 2)
expected = rep(1/5, 5)
chisq.test(counts, p = expected)
```
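If you prefer not to rely on the asymptotic chi-squared approximation, `chisq.test` can also compute a Monte Carlo p-value. This variant is my addition rather than something required here (the expected counts of $466/5$ per cell are large), but it is a cheap robustness check:

```
counts = c(105, 327, 30, 2, 2)
expected = rep(1/5, 5)

# Monte Carlo p-value: the observed statistic is compared against B tables
# simulated under the null instead of the chi-squared reference distribution
chisq.test(counts, p = expected, simulate.p.value = TRUE, B = 10000)
```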
Although exponential separations between bounded-error quantum query complexity ($Q(f)$) and deterministic query complexity ($D(f)$) or bounded-error randomized query complexity ($R(f)$) are known, they only apply to certain partial functions. If the partial functions have some [special structures](http://arxiv.org/abs/0911.0996v1) then they are also polynomially related with $D(f) = O(Q(f)^9)$. However, I am mostly concerned about total functions. In a [classic paper](http://portal.acm.org/citation.cfm?id=502097) it was shown that $D(f)$ is bounded by $O(Q(f)^6)$ for total functions, $O(Q(f)^4)$ for monotone total functions, and $O(Q(f)^2)$ for symmetric total functions. However, no greater than quadratic separations are known for these sorts of functions (this separation is achieved by $OR$ for example). As far as I understand, most people conjecture that for total functions we have $D(f) = O(Q(f)^2)$. Under what conditions has this conjecture been proven (apart from symmetric functions)? What are the best current bounds on decision-tree complexity in terms of quantum query complexity for total functions?
As far as I know, the general bounds you state are essentially the best known. Changing the model slightly, Midrijanis has [shown](http://arxiv.org/abs/quant-ph/0403168) the bound that $D(f) = O(Q\_E(f)^3)$, where $Q\_E(f)$ is the *exact* quantum query complexity of $f$; there are also tighter bounds known in terms of one-sided error (see Section 6 of [this paper](http://arxiv.org/abs/quant-ph/0501142)). In terms of more specific, but still general, classes of functions, there is a paper of Barnum and Saks which [shows](http://arxiv.org/abs/quant-ph/0201007) that all read-once functions on $n$ variables have quantum query complexity $\Omega(\sqrt{n})$. Although this progress has been limited, there has been considerable progress in lower bounding the quantum query complexity of *specific* functions; see [this review](http://arxiv.org/abs/quant-ph/0509153) for details (or e.g. the more recent [paper](http://arxiv.org/abs/0904.2759) of Reichardt, which proves that the most general version of the ''adversary'' bound characterises quantum query complexity).
I've heard/seen in several places that you can transform the data set into something that is normal-distributed by taking the logarithm of each sample, calculate the confidence interval for the transformed data, and transform the confidence interval back using the inverse operation (e.g. raise 10 to the power of the lower and upper bounds, respectively, for $\log\_{10}$). However, I'm a bit suspicious of this method, simply because it doesn't work for the mean itself: $10^{\operatorname{mean}(\log\_{10}(X))} \ne \operatorname{mean}(X)$ What is the correct way to do this? If it doesn't work for the mean itself, how can it possibly work for the confidence interval for the mean?
There are several ways for calculating confidence intervals for the mean of a lognormal distribution. I am going to present two methods: Bootstrap and Profile likelihood. I will also present a discussion on the Jeffreys prior.

**Bootstrap**

*For the MLE*

In this case, the MLE of $(\mu,\sigma)$ for a sample $(x\_1,...,x\_n)$ [are](https://stats.stackexchange.com/questions/32025/parameters-uncertainty-for-small-sample-size/32028#32028) $$\hat\mu= \dfrac{1}{n}\sum\_{j=1}^n\log(x\_j);\,\,\,\hat\sigma^2=\dfrac{1}{n}\sum\_{j=1}^n(\log(x\_j)-\hat\mu)^2.$$ Then, the MLE of the mean is $\hat\delta=\exp(\hat\mu+\hat\sigma^2/2)$. By resampling we can obtain a [bootstrap sample](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) of $\hat\delta$ and, using this, we can calculate [several bootstrap](https://projecteuclid.org/journals/statistical-science/volume-1/issue-1/Bootstrap-Methods-for-Standard-Errors-Confidence-Intervals-and-Other-Measures/10.1214/ss/1177013815.full) confidence intervals. The following `R` code shows how to obtain these.

```
rm(list=ls())
library(boot)
set.seed(1)

# Simulated data
data0 = exp(rnorm(100))

# Statistic (MLE)
mle = function(dat){
  m = mean(log(dat))
  s = mean((log(dat)-m)^2)
  return(exp(m+s/2))
}

# Bootstrap
boots.out = boot(data=data0, statistic=function(d, ind){mle(d[ind])}, R = 10000)
plot(density(boots.out$t))

# 4 types of Bootstrap confidence intervals
boot.ci(boots.out, conf = 0.95, type = "all")
```

*For the sample mean*

Now, consider the estimator $\tilde{\delta}=\bar{x}$ instead of the MLE. Other types of estimators might be considered as well.

```
rm(list=ls())
library(boot)
set.seed(1)

# Simulated data
data0 = exp(rnorm(100))

# Statistic (sample mean)
samp.mean = function(dat) return(mean(dat))

# Bootstrap
boots.out = boot(data=data0, statistic=function(d, ind){samp.mean(d[ind])}, R = 10000)
plot(density(boots.out$t))

# 4 types of Bootstrap confidence intervals
boot.ci(boots.out, conf = 0.95, type = "all")
```

**Profile likelihood**

For the definition of likelihood and profile likelihood functions, [see](https://stats.stackexchange.com/questions/28671/what-is-the-exact-definition-of-profile-likelihood/28673#28673). Using the invariance property of the likelihood we can reparameterise as follows $(\mu,\sigma)\rightarrow(\delta,\sigma)$, where $\delta=\exp(\mu+\sigma^2/2)$ and then calculate numerically the profile likelihood of $\delta$. $$R\_p(\delta)=\dfrac{\sup\_{\sigma}{\mathcal L}(\delta,\sigma)}{\sup\_{\delta,\sigma}{\mathcal L}(\delta,\sigma)}.$$ This function takes values in $(0,1]$; an interval of level $0.147$ [has an approximate](https://rads.stackoverflow.com/amzn/click/com/0387961836) confidence of $95\%$. We are going to use this property for constructing a confidence interval for $\delta$. The following `R` code shows how to obtain this interval.
```
set.seed(1)

# Simulated data
data0 = exp(rnorm(100))

# Log likelihood
ll = function(mu,sigma) return( sum(log(dlnorm(data0,mu,sigma))))

# Profile likelihood
Rp = function(delta){
  temp = function(sigma) return( sum(log(dlnorm(data0,log(delta)-0.5*sigma^2,sigma)) ))
  max = exp(optimize(temp,c(0.25,1.5),maximum=TRUE)$objective -ll(mean(log(data0)),sqrt(mean((log(data0)-mean(log(data0)))^2))))
  return(max)
}

vec = seq(1.2,2.5,0.001)
rvec = lapply(vec,Rp)
plot(vec,rvec,type="l")

# Profile confidence intervals
tr = function(delta) return(Rp(delta)-0.147)
c(uniroot(tr,c(1.2,1.6))$root,uniroot(tr,c(2,2.3))$root)
```

**$\star$ Bayesian**

In this section, an alternative algorithm, based on Metropolis-Hastings sampling and the use of the Jeffreys prior, for calculating a credibility interval for $\delta$ is presented. Recall that the [Jeffreys prior](https://en.wikipedia.org/wiki/Jeffreys_prior) for $(\mu,\sigma)$ in a lognormal model is $$\pi(\mu,\sigma)\propto \sigma^{-2},$$ and that this prior is invariant under reparameterisations. This prior is improper, but the posterior of the parameters is proper if the sample size $n\geq 2$. The following `R` code shows how to obtain a 95% credibility interval using this Bayesian model.

```
library(mcmc)
set.seed(1)

# Simulated data
data0 = exp(rnorm(100))

# Log posterior
lp = function(par){
  if(par[2]>0) return( sum(log(dlnorm(data0,par[1],par[2]))) - 2*log(par[2]))
  else return(-Inf)
}

# Metropolis-Hastings
NMH = 260000
out = metrop(lp, scale = 0.175, initial = c(0.1,0.8), nbatch = NMH)

# Acceptance rate
out$acc

deltap = exp( out$batch[,1][seq(10000,NMH,25)] + 0.5*(out$batch[,2][seq(10000,NMH,25)])^2 )
plot(density(deltap))

# 95% credibility interval
c(quantile(deltap,0.025),quantile(deltap,0.975))
```

Note that the intervals obtained with these different methods are very similar.
I am working on my school data mining project. Within the preprocessing stage I need to remove outliers from my data set, which is positively skewed (see the description below). I have an idea to remove all values which are larger than mean + 3 x standard deviation, but I am not sure this is a suitable technique for my case because the data set is not normally distributed. What technique should I use?

```
  var     n    mean      sd  median trimmed     mad  min     max   range skew kurtosis   se
1   1 41019 1668.99 1107.08 1453.68 1524.22 1026.05 10.9 5920.74 5909.84 1.18     1.33 5.47
```
Bottom line is that the decision to remove data from your dataset is a subject-matter decision, not a statistical decision. The statistics help you to identify outliers given what you believe about the dataset. A very readable applied treatment of outliers is given in * B. Iglewicz and D. C. Hoaglin, [How to Detect and Handle Outliers](http://rads.stackoverflow.com/amzn/click/087389247X) (Milwaukee: ASQC Press) 1993. A more advanced and detailed treatment is given in * V. Barnett and T. Lewis, [Outliers in Statistical Data](http://rads.stackoverflow.com/amzn/click/0471930946) (New York: John Wiley and Sons) 1994.
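As a concrete starting point (my addition, not something the references above spell out here): the Iglewicz and Hoaglin book is the usual reference for the MAD-based "modified Z-score", a flagging rule that is less distorted by skew than mean + 3 SD. A minimal `R` sketch, assuming your variable sits in a vector `x`:

```
# Modified Z-score from Iglewicz & Hoaglin: M_i = 0.6745 * (x_i - median) / MAD,
# with |M_i| > 3.5 a commonly used flagging threshold
set.seed(1)
x <- rlnorm(41019, meanlog = 7.2, sdlog = 0.6)   # stand-in for your skewed variable
M <- 0.6745 * (x - median(x)) / mad(x, constant = 1)
sum(abs(M) > 3.5)                                # how many observations would be flagged
```

Even then, as stressed above, whether a flagged value is actually removed remains a subject-matter decision, not something the statistic decides for you.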
I am trying to test the null $E[X] = 0$, against the local alternative $E[X] > 0$, for a random variable $X$, subject to mild to medium skew and kurtosis of the random variable. Following suggestions by Wilcox in 'Introduction to Robust Estimation and Hypothesis Testing', I have looked at tests based on the trimmed mean, the median, as well as the M-estimator of location (Wilcox' "one-step" procedure). These robust tests do outperform the standard t-test, in terms of power, when testing with a distribution that is non-skewed, but leptokurtotic. However, when testing with a distribution that is skewed, these one-sided tests are either far too liberal or far too conservative under the null hypothesis, depending on whether the distribution is left- or right-skewed, respectively. For example, with 1000 observations, the test based on the median will actually reject ~40% of the time, at the nominal 5% level. The reason for this is obvious: for skewed distributions, the median and the mean are rather different. However, in my application, I really need to test the mean, not the median, not the trimmed mean. **Is there a more robust version of the t-test that actually tests for the mean, but is impervious to skew and kurtosis?** Ideally the procedure would work well in the no-skew, high-kurtosis case as well. The 'one-step' test is almost good enough, with the 'bend' parameter set relatively high, but it is less powerful than the trimmed mean tests when there is no skew, and has some troubles maintaining the nominal level of rejects under skew. **background:** the reason I really care about the mean, and not the median, is that the test would be used in a financial application. For example, if you wanted to test whether a portfolio had positive expected log returns, the mean is actually appropriate because if you invest in the portfolio, you will experience all the returns (which is the mean times the number of samples), instead of $n$ duplicates of the median. That is, I really care about the sum of $n$ draws from the R.V. $X$.
Why are you looking at non-parametric tests? Are the assumptions of the t-test violated? Namely, ordinal or non-normal data, or unequal variances? Of course, if your sample is large enough you can justify the parametric t-test with its greater power despite the lack of normality in the sample. Likewise, if your concern is unequal variances, there are corrections to the parametric test that yield accurate p-values (the Welch correction). Otherwise, comparing your results to the t-test is not a good way to go about this, because the t-test results are biased when the assumptions are not met. The Mann-Whitney U is an appropriate non-parametric alternative, if that's what you really need. You only lose power if you are using the non-parametric test when you could justifiably use the t-test (because the assumptions are met). And, just for some more background, go here: [Student's t Test for Independent Samples](http://www.jerrydallal.com/LHSP/STUDENT.HTM).
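To make the two alternatives mentioned above concrete (this sketch is my addition, with simulated two-sample data rather than anything from the question): in `R`, `t.test` applies the Welch correction by default, and the Mann-Whitney U test is available as `wilcox.test`.

```
set.seed(1)
x <- rnorm(50, mean = 0.3, sd = 1)
y <- rnorm(50, mean = 0.0, sd = 2)   # deliberately unequal variances

# Welch correction is the default (var.equal = FALSE)
t.test(x, y)

# Classical equal-variance t-test, for comparison
t.test(x, y, var.equal = TRUE)

# Mann-Whitney U test (called the Wilcoxon rank-sum test in R)
wilcox.test(x, y)
```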
Consider a multiset of $n$ integers, where each integer is between $1$ and $3 M$. The sum of all integers is $3 S$. There are three bins. The capacity of each bin is $C = S + M$. Is there a polynomial-time algorithm to decide whether all integers can be packed into the bins? To explain why this specific case is special, consider some variants of the problem:

* When $C$ is not fixed, the problem is NP-hard, since it is a special case of the bin-packing problem.
* When $C = S$, the problem is NP-hard, since it is equivalent to 3-way number partitioning.
* When $C \geq S + 2 M$, the problem is easy since the answer is always "yes". *Proof:* put the items in an arbitrary order into the bins as long as there is room. Suppose by contradiction that not all integers are packed; let $x$ be some remaining integer. Then the sum in each bin is larger than $C-x$, so the sum in all bins is more than $3 C - 3 x \geq 3 S + 6 M - 3 x$. Since $x \leq 3 M$, this sum is larger than $3 S + 2 x - 3 x = 3 S - x$. But then the sum of all integers plus $x$ is larger than $3 S$, a contradiction.

The case $C = S+M$ is between these extremes: it is larger than $S$, for which the problem is hard, but smaller than $S+2M$, for which the answer is always yes. Is this case easy or hard?
Yes, there is a polynomial time solution. It is not very pretty, so simplifications or alternative approaches are welcome.

TL;DR: the exact weights are rarely important, so we can round them up to a multiple of $K := \lceil \frac{3M}{n^3} \rceil$ or something similar (there are only $\approx n^3$ rounded item weights). Then, we can do dynamic programming that keeps track of rounded weights for two bins and the *exact* weight for the third bin. If we somehow got a false negative result, then in *all* solutions *two* bins are almost full at the same time. Such cases turn out to be highly structured (all items have weights that are either close to $0$ or close to $3M$, the total weight of the former is small, and there are exactly $3\ell + 2$ items of the latter type for some $\ell$) and can be solved by an algorithm that greedily moves items between bins.

Only the case when $M > n^4$ is interesting; otherwise simple dynamic programming will do the job. Similarly, assume that $n > 10^9$ to make things like $Kn^2$ negligible when compared to $M$. Do not worry, the actual precise bounds are much better; I just chose these so as not to think about the details too much.

As mentioned above, the first step is to try finding a solution with "discretised" weights. Pick $K := \lceil \frac{3M}{n^3} \rceil$. For each item, its *discrete weight* is its weight rounded *up* to a multiple of $K$. For each bin, its *discrete size* is its size, rounded *down* to the nearest multiple of $K$. Items *discretely fit* into a bin if the sum of their discrete weights does not exceed the discrete size of the bin. Clearly, if items discretely fit into a bin, then they do fit in the bin for real. The opposite implication is not true, but only because of rounding issues; if several items fit into a bin, but not discretely, then their total weight is at least $(S + M) - (n+1)K$ (at most $n$ items, each causes a rounding error of at most $K$; rounding of the bin size also causes a rounding error of at most $K$). Let us say that any bin with sum of weights at least $(S + M) - (n+1)K$ is *almost full*. Unless the bin is almost full, discrete fitting and real fitting are the same thing.

Consider any solution (by *solution*, I mean a correct distribution of items to bins). There can be at most two almost full bins; otherwise the total weight of items is at least $3(S + M) - 3(n+1)K = 3S + (3M - 3(n+1)K) > 3S$ (as $K \approx \frac{3M}{n^3}$). We can find all solutions with at most *one* almost full bin by simple dynamic programming, which answers the following question: suppose that we distributed some prefix of items to bins and we know the sums of *discrete* item weights in the second and the third bins (there are only $n \cdot \lceil \frac{3M}{K} \rceil \approx n^4$ possible discrete weight sums for each bin); what is the minimal possible total *real* weight of items in the first bin in this case? We will not miss the solutions where there is at most one almost full bin (because we can track the weight in *one bin* precisely).

The only remaining problem is the case when *all* optimal solutions have *exactly two* almost full bins. Let us study the structure of the solutions in this case (this is not an algorithmic part, but rather only a mathematical setup; so, we are not interested in algorithmic aspects for the following few paragraphs). Suppose that the first bin is filled up to $S + M - \varepsilon\_1$ and the second is filled up to $S + M - \varepsilon\_2$, where $\max(\varepsilon\_1, \varepsilon\_2) \leqslant (n+1)K$.
Then, the third bin is filled up to $3S - (S + M - \varepsilon\_1) - (S + M - \varepsilon\_2) = S - 2M + \varepsilon\_1 + \varepsilon\_2$. Let us try to move some items from the first bin to the third (in order to make them both *not* almost full). Why cannot we do that? Because all items in the first bin are either *small* (have weight at most $(n+1)K$) and the first bin will not stop being almost full, or *big* (have weight at least $3M - 3K(n+1)$), so the third bin will become almost full (or, maybe, there is not even enough space for the item). About the latter part: if the weight of an item is less than $3M - 3K(n+1)$, then the third bin will be filled up to less than \begin{equation\*} S - 2M + \varepsilon\_1 + \varepsilon\_2 + (3M - 3K(n+1)) = S + M - (3K(n+1) - \varepsilon\_1 - \varepsilon\_2) \leqslant S + M - K(n+1), \end{equation\*} because $\max(\varepsilon\_1, \varepsilon\_2) \leqslant K(n+1)$.

Now, let us move all small items from the first bin to the third bin one-by-one. Then, either the first one stops being almost full at some point (while the third one is still not almost full; it still had almost $3M$ of space left before all operations), or we move them all. Similarly, move all small items from the second bin to the third bin. Now, *all* items in the first and the second bins are big, but they are still almost full. As before, denote the space left in the first and the second bins by $\varepsilon\_1$ and $\varepsilon\_2$. I claim that no items in the third bin have weight in the range $[10K(n+1), 3M - 10K(n+1)]$. Why? Because we can take any such item and swap its place with a big item from the first bin. Then, the total weight of items in the first bin will decrease by at least $(3M - 3K(n+1)) - (3M - 10K(n+1)) = 7K(n+1)$, hence it will not be almost full anymore. On the other hand, the total weight of items in the third bin was $S - 2M + \varepsilon\_1 + \varepsilon\_2 \leqslant S - 2M + 2K(n+1)$ and will increase by at most $3M - 10K(n+1)$. So, the third bin will be filled up to at most $S + M - 8K(n+1)$ and still will not be almost full.

In the end, there are only items of weight at most $10K(n+1)$ (now, forget about the previous terminology and call *such* items *small*) and items of weight at least $3M - 10K(n+1)$ (call them *big*). As per gnasher729's answer, small items can be easily distributed to the bins in the end, because $10K(n+1) < 3M/2$. Unfortunately, we cannot *ignore* small items, because they affect $3S$ (the total weight of items) and, therefore, the sizes of the bins. Fortunately, we know from the above manipulations that we can move all small items to the third (*not* almost full) bin. Hence, we are left with the following problem: there are $m$ big items with weights $3M - v\_1$, $3M - v\_2$, $\ldots$, $3M - v\_m$, where $v\_1 \leqslant v\_2 \leqslant \ldots \leqslant v\_m \leqslant 10K(n+1)$. There are also $n - m$ small items with total weight $R = 3S - (3Mm - V)$, where $V = v\_1 + \ldots + v\_m$. The exact weights of the small items are not important. There is barely any difference between the weights of big items, because $10K(n+1)$ is negligible when compared to $M$. Moreover, the total weight of small items is also negligible.
Because otherwise we could do something similar to the way we proved that all items are either big or small: we can pick small items (remember that they are all in the third bin) until their total weight exceeds $20K(n+1)$ for the first time (at such a moment, their total weight will exceed $20K(n+1)$ by at most the weight of a single small item; in other words, it will be in the range $[20K(n+1), 30K(n+1)]$) and then swap all these items with a single big item from the first or the second bin. Hence, $R < 20K(n+1)$. Therefore, "from a bird's view", we have $m$ items of weight $3M$ and $n - m$ items of weight $0$.

I claim that $m \bmod 3 = 2$. Indeed, consider the other cases. When $m \bmod 3 = 0$, we can place arbitrary $m/3$ big items in each bin. Distribute small items arbitrarily as well. Then, the total weight of items in each bin will not exceed $3M \cdot (m/3) + R < Mm + 20K(n+1)$. On the other hand, \begin{equation\*} S + M - K(n+1) \geqslant (3M - 10K(n+1)) \cdot (m/3) + M - K(n+1) = Mm + (M - K(n+1) - 10Km(n+1) / 3), \end{equation\*} which is greater than $Mm + 20K(n+1)$, because $Km(n+1) \leqslant Kn(n+1)$ is negligible when compared to $M$. Hence, in this case, there is a solution with *no* almost full bins. Contradiction.

When $m \bmod 3 = 1$, or $m = 3\ell + 1$, there is a bin with $\ell + 1$ big items in any solution. Hence, their total weight is at least $(\ell + 1)(3M - 10K(n+1))$. On the other hand, the size of a bin is $S + M \leqslant (R + (3\ell + 1) \cdot 3M) / 3 + M = R/3 + (3\ell + 2)/3 \cdot 3M$. Again, the multiplier before $3M$ is the deciding factor here, and $\ell + 1 > (3\ell + 2)/3$. Hence, there is no solution in this case, contradiction.

Finally, $m \bmod 3 = 2$, or $m = 3\ell + 2$, corresponds to the case when two bins have $\ell + 1$ big items in them and one has $\ell$. This case is actually nontrivial: not only are there inevitably exactly two almost full bins, but we also cannot distribute big items without considering their weights, like in the case $m \bmod 3 = 0$. The remaining text deals with this case.

The good thing is that we can distribute all small items and the $\ell$ heaviest big items to the third bin, and this is optimal. Intuitively, some bin *has* to contain at most $\ell$ big items, and this leaves a lot of free space regardless of which big items are chosen. Formally, $\ell$ big items and all small items have total weight at most $3M\ell + R$, but $S + M \geqslant (3M - 10K(n+1)) \cdot (3\ell+2)/3 + M = 3(\ell + 1)M - \textrm{something small}$. Hence, there is *almost* $3M$ of free space left, even if we choose the $\ell$ heaviest items to place in the bin (because the number of big items is more important than the actual items involved). On the other hand, the other two bins have little free space left, because the total weight of items in each of them is $3M \cdot (\ell + 1) - \textrm{something small}$. Hence, we need to actually compare small numbers and cannot get away with rough bounds, as we did in the first algorithmic part and in the arguments above.

Recall that we place the $\ell$ heaviest big items (or the $\ell$ items with the smallest values of $v\_i$, which is the same) in the third bin. Hence, let $L$ be the multiset of the $\ell$ smallest values of $v\_i$ and $U$ be the multiset of the $2(\ell + 1)$ largest values of $v\_i$. Moreover, denote $u\_i = v\_{i+\ell}$ for $i := 1, 2, \ldots, 2(\ell + 1)$, so $U = \{u\_1, u\_2, \ldots, u\_{2(\ell + 1)}\}$ (here, $u\_i$ is non-strictly increasing).
Then, we want to distribute items of weights $3M - u\_i$ for $i \in [1, 2(\ell + 1)]$ into two bins of size \begin{equation\*} S + M = (3M \cdot (3\ell + 2) - \mathrm{sum}(U) - \mathrm{sum}(L) + R)/3 + M = 3M \cdot (\ell + 1) - (\mathrm{sum}(U) + \mathrm{sum}(L) - R)/3, \end{equation\*} where by $\mathrm{sum}(X)$ I mean the sum of elements in the multiset $X$. Because we distribute exactly $\ell + 1$ items to both bins, we can focus on the space we save when compared to $3M(\ell + 1)$. That is, we need to find a submultiset $X$ of $U$ with size $\ell + 1$, such that $\mathrm{sum}(X) \geqslant (\mathrm{sum}(U) + \mathrm{sum}(L) - R)/3$ and $\mathrm{sum}(U \setminus X) \geqslant (\mathrm{sum}(U) + \mathrm{sum}(L) - R)/3$. Here, $X$ represents the items that go to the first bin and $U \setminus X$ represents the items that go to the second bin. In other words, we need to find $X$ with \begin{equation\*}\mathrm{sum}(X) \in [(\mathrm{sum}(U) + \mathrm{sum}(L) - R)/3, \mathrm{sum}(U) - (\mathrm{sum}(U) + \mathrm{sum}(L) - R)/3] = [(\mathrm{sum}(U) + \mathrm{sum}(L) - R)/3, (2 \cdot \mathrm{sum}(U) - \mathrm{sum}(L) + R)/3].\end{equation\*} In the end, we need $\mathrm{sum}(X)$ to get in the *gap* of length \begin{equation\*}(2 \cdot \mathrm{sum}(U) - \mathrm{sum}(L) + R)/3 - (\mathrm{sum}(U) + \mathrm{sum}(L) - R)/3 + 1 = 1 + (2R + \mathrm{sum}(U) - 2\mathrm{sum}(L))/3.\end{equation\*} This gap always has positive length, because $\mathrm{sum}(U) \geqslant 2\mathrm{sum}(L)$: $\min(U) \geqslant \max(L)$ by definition of $U$ and $L$, and there are $2(\ell + 1)$ elements in $U$, but only $\ell$ elements in $L$. Moreover, the gap is centered at $\mathrm{sum}(U)/2$ (both by the formulas and by its "physical" meaning, which must be symmetric with respect to swapping the first and the second bin).

Now, we can do the following: start from some submultiset $X$ such that $\mathrm{sum}(X) \leqslant \mathrm{sum}(U) / 2$ and gradually change it to its complement $U \setminus X$ (for which $\mathrm{sum}(U \setminus X) \geqslant \mathrm{sum}(U) / 2$) by replacing its elements with elements of its complement. Then, either at some moment $\mathrm{sum}(\textrm{current subset})$ will get in the gap, leading to a valid way to place the remaining $2(\ell + 1)$ items in the first and the second bins, or we will somehow "*jump over*" the whole gap in a single move, meaning that the difference between two elements of $U$ is somehow greater than the length of the gap. Intuitively, the length of the gap is usually quite large. Hence, jumping over the gap is an extremely rare situation.

Not all ways to change $X$ into $U \setminus X$ work similarly well, so let us choose some fixed $X$ and some fixed way to transform $X$ into $U \setminus X$. Specifically, split all $2(\ell + 1)$ elements of $U$ into $\ell + 1$ pairs. One of the pairs is $u\_{2\ell + 1} \leftrightarrow u\_{2\ell + 2}$ (we pair up the two largest elements); all other pairs are $u\_{i} \leftrightarrow u\_{i + \ell}$ (we pair up elements at distance $\ell$; of course, $i \in [1, \ell]$). Initially, $X$ contains the smaller element in each pair. If $\mathrm{sum}(X)$ is *already* in the gap, we won and do not have to do anything. Otherwise, replace the elements of $X$ one-by-one by the elements that they are paired up with, as long as we can do that *without jumping over* the gap.
In the end, we are left with at least one pair that makes us jump over the whole gap (as mentioned before, we cannot reach $U \setminus X$ without either falling in the gap or jumping over it, because the gap is centered at $\mathrm{sum}(U)/2$). I claim that only the pair $u\_{2\ell + 1} \leftrightarrow u\_{2\ell + 2}$ can lead to such a big jump. Assume the contrary: suppose that, for some $i \in [1, \ell]$, the jump $\Delta := u\_{i + \ell} - u\_i$ is more than $1 + (2R + \mathrm{sum}(U) - 2\mathrm{sum}(L))/3$ (the length of the gap, as shown before). Then, $u\_{i + \ell} = u\_i + \Delta \geqslant \max(L) + \Delta$. Because $i + \ell \leqslant 2\ell$, $u\_{2\ell+2} \geqslant u\_{2\ell+1} \geqslant u\_{i + \ell} \geqslant \max(L) + \Delta \geqslant \Delta$. In the end, \begin{equation\*} \mathrm{sum}(U) \geqslant u\_{2\ell + 1} + u\_{2\ell + 2} + (2\ell) \cdot \max(L) + (u\_{i + \ell} - \max(L)) \geqslant 2\ell \cdot \max(L) + 3\Delta \geqslant 2 \cdot \mathrm{sum}(L) + 3\Delta\end{equation\*} (compared to "doubled $L$", $U$ has two extra elements, which are both at least $\Delta$, and an element that is at least $\Delta + \textrm{the corresponding element of "doubled $L$"}$). Hence, the length of the gap is \begin{equation\*}1 + (2R + \mathrm{sum}(U) - 2\mathrm{sum}(L))/3 > 0 + (0 + 3\Delta)/3 = \Delta,\end{equation\*} contradiction.

Therefore, the only way to "jump over" the gap is to replace $u\_{2\ell + 1}$ with $u\_{2\ell + 2}$. Let us do that immediately (as the first operation, while $X$ still contains $u\_1$, $u\_2$, $\ldots$, $u\_{\ell}$). Did we jump over the gap? If we did not, we will never jump over it, so we will successfully find a solution. If we did, then the largest element of $U$ is *so large* that even putting it in the same set with the $\ell$ smallest elements of $U$ already makes the sum of the corresponding set too large. In this case, there is no solution, because the largest element of $U$ has to go *somewhere* (either to $X$, or to $U \setminus X$).
I have a multiple sequence alignment that I'm using to construct a phylogenetic tree. Usually, phylogenetic trees are constructed under the assumption that the input sequences are all from the present day -- thus, it forces them all to be leaf nodes. However, my problem is a bit different. I have genome sequences collected at various dates from about 1960 to the present. Because of the extremely rapid rate of evolution (since I'm dealing with viruses), its likely that many of the older sequences are actually ancestors of the more recent ones. Is there any kind of phylogenetic structure that can represent this kind of relationship? Or can I deduce this from a standard phylogenetic tree?
I agree with Glen\_b. In regression problems, the main focus is on the parameters and not on the independent variable or predictor, $x$. And then one can decide whether one wants to linearise the problem employing simple transformations or proceed with it as such.

Linear problems: count the number of parameters in your problem and check whether all of them have power 1. For example, $y = ax + bx^2 + cx^3 + d x^{2/3} + e/x + f x^{-4/7}$. This function is nonlinear in $x$. But for regression problems, the nonlinearity in $x$ is not an issue. One has to check whether the parameters are linear or nonlinear. In this case, $a$, $b$, $c$,.. $f$ all have power 1. So, they are linear.

Remark that, in $y = \exp(ax)$, $a$ looks like it has power 1, but when expanded, $\exp(ax) = 1 + ax/1! + (ax)^2/2! + \dots$. You can clearly see that it is a nonlinear parameter, since $a$ appears with powers greater than 1. But this problem can be linearised by invoking a logarithmic transformation. That is, a nonlinear regression problem is converted to a linear regression problem.

Similarly, $y = a / (1+b \exp(cx))$ is a logistic function. It has three parameters, namely $a$, $b$ and $c$. The parameters $b$ and $c$ have power more than 1, and when expanded they multiply with each other, bringing nonlinearity. So, they are not linear. But they can also be linearised using a proper substitution, by first setting $(a/y)-1 = Y$ and then invoking a logarithmic function on both sides to linearise.

Now suppose $y = a\_1 / (1+b\_1\exp(c\_1x)) + a\_2 / (1+b\_2\exp(c\_2x))$. This is once again nonlinear with respect to the parameters. But it cannot be linearised. One needs to use nonlinear regression.

In principle, using a linear strategy to solve a nonlinear regression problem is not a good idea. So, tackle linear problems (when all the parameters have power 1) using linear regression and adopt nonlinear regression if your parameters are nonlinear.

In your case, substitute the weighting function back into the main function. The parameter $\beta\_0$ would be the only parameter with power 1. All the other parameters are nonlinear ($\beta\_1$ eventually multiplies with $\theta\_1$ and $\theta\_2$, which are themselves nonlinear parameters, making it also nonlinear). Therefore, it is a nonlinear regression problem. Adopt a nonlinear least squares technique to solve it. Choose initial values cleverly and use a multistart approach to find the global minimum. This video will be helpful (though it does not talk about a global solution): <http://www.youtube.com/watch?v=3Fd4ukzkxps>

Using the GRG nonlinear solver in an Excel spreadsheet (install the solver toolpack by going to Options - Add-Ins - Excel Add-Ins and then choosing Solver Add-In) and invoking the multistart in the options list by prescribing intervals to the parameters and demanding the constraint precision and the convergence to be small, a global solution can be obtained. If you are using Matlab, use the global optimisation toolbox. It has multistart and globalsearch options. Certain codes for a global solution are available [here](http://www.mathworks.de/de/products/global-optimization/code-examples.html) and [here](http://www.mathworks.de/de/help/gads/how-globalsearch-and-multistart-work.html). If you are using Mathematica, look [here](http://mathworld.wolfram.com/GlobalOptimization.html). If you are using R, try [here](http://cran.r-project.org/web/views/Optimization.html).
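As a concrete illustration of the last point (a sketch I am adding, using simulated data rather than your actual model), a logistic-type curve can be fitted by nonlinear least squares in `R` with `nls`; the self-starting model `SSlogis` picks reasonable initial values automatically, and for harder problems you would try several starting points yourself (a simple multistart).

```
set.seed(1)
# Simulated data from a logistic curve y = a / (1 + b * exp(c * x)) plus noise
x <- seq(-5, 5, length.out = 100)
y <- 10 / (1 + 2 * exp(-1.2 * x)) + rnorm(100, sd = 0.3)
dat <- data.frame(x, y)

# Nonlinear least squares with a self-starting logistic model;
# SSlogis parameterises the curve as Asym / (1 + exp((xmid - x) / scal))
fit <- nls(y ~ SSlogis(x, Asym, xmid, scal), data = dat)
summary(fit)
```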
In a hierarchical model, we have $$p(x\_1, \dots, x\_N, z\_1, \dots, z\_N, \beta) = p(\beta) \prod\_{i=1}^N p(z\_i | \beta) p(x\_i | z\_i) $$ In such models, we have $x\_i \perp x\_j | \beta, i \neq j$. However, in general, we do not have $x\_i \perp x\_j$, while we usually assume that data points $x\_{1:N}$ are drawn i.i.d. from some model. Is this somewhat contradictory?
One reason is the arbitrary nature of the 20% cut in the classes. It is telling the model that 19% is more different from 21% than 21% is from 23%, say. I think someone else on here may be able to clarify this or formalise it better. Another aspect is that you will not need the softmax filter at the end, so it is slightly computationally easier. Whether this helps with backprop in some way I do not know.
Several optimization problems that are known to be NP-hard on general graphs are trivially solvable in polynomial time (some even in linear time) when the input graph is a tree. Examples include minimum vertex cover, maximum independent set, subgraph isomorphism. Name some natural optimization problems that remain NP-hard on trees.
Graph Motif is an NP-complete problem on trees of maximum degree three: Fellows, Fertin, Hermelin and Vialette, [Sharp Tractability Borderlines for Finding Connected Motifs in Vertex-Colored Graphs](https://doi.org/10.1007/978-3-540-73420-8_31), Lecture Notes in Computer Science, 2007, Volume 4596/2007, 340-351.
One often sees assertions that these are "necessary" for modern military and consumer computing applications. I presume that without these rare metals, the devices could be manufactured, but would be larger and/or more expensive. Is this true?
> Can I write a Finite Automaton to do anything a Full-Featured General Purpose Computer can do?

Your computer *is* a finite automaton. Its state can be uniquely described by the content of its registers, RAM, SSDs, etc. Every input (including the system clock) causes a transition from one state to the next, in a deterministic manner. Conceivably, you could enumerate all the possible states, and draw the arcs between them, forming a deterministic finite automaton (DFA). There are maaaaany states, but they are finite.

However, this only works because the infinite memory requirement of Turing machines (TMs) is often ignored (exactly because it would never be achievable, so it's of little pragmatic use). Pushdown automata (PDAs) and TMs can be thought to have two separate components.

1. A "decision making" part, that decides how one state should transition to the next, analogous to a CPU.
   * In a PDA, this is the automaton
   * In a TM, it's the rule set by which symbols are evaluated
2. They also have a "working memory", whose job is to contain most of the state of the system, analogous to RAM.
   * In a PDA, this is the push-down stack
   * In a TM, it's the tape

DFAs have both of these components rolled into one. The automaton is responsible both for all decision making, and for all state-keeping. As a consequence, the state can never be bigger than the DFA. There is an asymmetry here: PDAs and TMs are granted use of infinite storage (bottomless push-down stacks, infinite tapes), whereas DFAs are not, by definition. If a DFA were given the same affordance for infinite state-keeping (by allowing it to have infinitely many states), it would no longer be a deterministic *finite* automaton. It would be a deterministic infinite automaton!

Interestingly, infinite-state automata are not only Turing Complete, but they're actually *more powerful* than Turing Machines. With infinitely many states allowed, *any* language can be expressed as an automaton with one start node, one accepting state, and one non-accepting state. For every string in the language, an arc is made to the accepting state. For every other string, an arc is made to the non-accepting state.
Suppose I have a graph $G$ with $M(G)$ the (unknown) set of perfect matchings of $G$. Suppose this set is non-empty, then how difficult is it to sample uniformly at random from $M(G)$? What if I am okay with a distribution that is close to uniform, but not quite uniform, then is there an efficient algorithm?
There is a classical paper of [Jerrum and Sinclair (1989)](http://epubs.siam.org/doi/abs/10.1137/0218077) on sampling perfect matchings from dense graphs. Another classical paper of [Jerrum, Sinclair and Vigoda (2004; pdf)](http://www.cc.gatech.edu/~vigoda/Permanent.pdf) discusses sampling perfect matchings from bipartite graphs. Both these papers use rapidly mixing Markov chains, and so the samples are only almost uniform. I imagine that exactly uniform sampling is difficult.
There are many techniques for visualizing high-dimensional datasets, such as t-SNE, isomap, PCA, supervised PCA, etc. And we go through the motions of projecting the data down to a 2D or 3D space, so we have a "pretty picture". Some of these embedding (manifold learning) methods are described [here](http://scikit-learn.org/stable/modules/manifold.html). [![enter image description here](https://i.stack.imgur.com/H3FBv.png)](https://i.stack.imgur.com/H3FBv.png) **But is this "pretty picture" actually meaningful? What possible insights can someone grab by trying to visualize this embedded space?** I ask because the projection down to this embedded space is usually meaningless. For example, if you project your data down to principal components generated by PCA, those principal components (eigenvectors) don't correspond to features in the dataset; they're their own feature space. Similarly, t-SNE projects your data down to a space where items are near each other if they minimize some KL divergence. This isn't the original feature space anymore. (Correct me if I'm wrong, but I don't even think there is a large effort by the ML community to use t-SNE to aid classification; that's a different problem than data visualization though.) I'm just very confused as to why people make such a big deal about some of these visualizations.
Taking a slightly different approach than the other great answers here, the "pretty picture" is worth a thousand words. Ultimately, you will need to convey your findings to someone who is not as statistically literate, or who simply does not have the time, interest, or whatever, to grasp the full situation. That doesn't mean we cannot help the person to understand, at least a general concept or a piece of the reality. This is what books like Freakonomics do - there's little to no math, no data sets, and yet the findings are still presented. From the arts, look at [Marshal Ney at Retreat in Russia](https://en.wikipedia.org/wiki/Adolphe_Yvon#/media/File:Adolphe_Yvon_%281817-1893%29_-_Marshall_Ney_at_retreat_in_Russia.jpg). This massive oversimplification of the Napoleonic wars nevertheless conveys great meaning and allows people with even the most ignorant knowledge of the war to understand the brutality, the climate, the landscape, the death, and decorum that permeated the invasion of Russia. Ultimately the charts are simply communication, and for better or worse, human communication is often times focused on conflation, simplification, and brevity.
I'll preface my question with the fact that I'm just learning about linear regression, so I may be thinking about this wrong. I have a set of data. In this set I have one dependent variable and about 10 independent variables, and the data set is growing regularly. It's rows of data in a database, with 10 columns of independent variables and one column for the dependent variable. You can see my previous question for an example of what I'm trying to do: [Variables importance: who can do the most pushups?](https://stats.stackexchange.com/questions/13673/variables-importance-who-can-do-the-most-pushups) The output of a linear regression is a formula, right? Now I want to write a python script (I could use R also but I'd greatly prefer python) to take this data as input and output the linear regression formula. Is there a python method to do this? Do I need to run a regression comparing each independent variable with the dependent variable one at a time? Or is there a python method to feed in the data with all 10 independent variables and come out with a formula?
Model 1 is nested in model 2, but neither model 1 nor model 2 is nested in model 3 because of the transformation on your x variable. Decisions about types of models/shapes of relationships should depend more on understanding the science that produced the data and the questions to be answered than on canned formulas.
In the famous [Dunning-Kruger paper](https://www.sciencedirect.com/science/article/abs/pii/B9780123855220000056) ([ungated](http://www.area-c54.it/public/dunning%20-%20kruger%20effect.pdf)) perceived and actual scores are plotted, with quartile on the x axis and percentile on the y axis. The message is the perceived scores, and the actual scores are shown as a line for reference. In figures 1 and 4 the actual score line is straight, as one would expect as we have the rank score for the same value on both axis. However figure 3 has the 2nd quartile point significantly lower than the identity line, and in figure 2 the 2nd and 3rd quartiles are slightly lower. What is this describing? ![Figures 1-4 from Dunning-Kruger paper](https://i.imgur.com/ykc5y9C.png)
I agree this is odd, but I suppose it could be caused by tied scores showing up in particular quartiles. Suppose we have 100 subjects that get scores from 1 to 100 in 1-point intervals, except that everybody who would have had a score between 26 and 50 instead gets a score of 30. In this case, the average score percentile among the second quartile is either 26 or 50, depending on how you handle endpoints in the percentile calculation. For a dataset with only distinct scores, the average percentile of the second quartile must be 37.5, but it would be possible to find a different average percentile within a quartile that contains tied scores. Not sure if this is the case, or if it's just an error in plotting - even Figures 1 and 4 don't look exactly the same in "actual test score" values (the second quartile in Fig 1 is near 35, but it's closer to 40 in Fig 4).
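To make the tied-score explanation concrete, here is a small simulation I am adding; it follows the hypothetical 100-subject example above, and the exact averages it produces (25 and 50 here, rather than 26 and 50) depend on the percentile convention used:

```
# 100 subjects: scores 1..25, then 25 tied scores of 30 (replacing 26..50), then 51..100
scores <- c(1:25, rep(30, 25), 51:100)

# Percentile of each score under two common conventions
pct_below    <- sapply(scores, function(s) 100 * mean(scores <  s))  # % strictly below
pct_at_below <- sapply(scores, function(s) 100 * mean(scores <= s))  # % at or below

second_quartile <- order(scores)[26:50]     # ranks 26..50, i.e. all the tied 30s
mean(pct_below[second_quartile])            # 25
mean(pct_at_below[second_quartile])         # 50
```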
The usual demonstration of the halting problem's undecidability involves positing an adversarial machine (call it $A\_0$) that runs the decider machine (call it $D\_0$) on itself and performs the opposite of the answer it gets. But it would be possible to construct a machine $D\_1$ that checked for the exact source code of $A\_0$ and output the correct answer. Of course then another machine $A\_1$ that runs $D\_1$ instead of $D\_0$ could also be constructed. And so on to any finite $n$. So it seems like any given adversarial machine can be thwarted by another larger-indexed decider machine. So it appears that it does not directly follow from that demonstration that there is any single machine for which no decider machine can be constructed. The proposition that there is no single machine that can decide for all cases still holds of course, but I'm interested in the slightly different question of whether there exists a machine that no decider machine can correctly identify whether it halts. Is the answer to that question known? I can imagine a possible answer that any given machine must either halt or not, so one of the trivial decider programs that always says the same answer would be correct. But it seems to me that there is a sensible notion of "non-trivially deciding" that would exclude examples like that. But maybe the fact I'm currently unable to describe that notion precisely indicates I'm wrong about that?

Edit: I think I now have a way of describing a notion of "non-trivially deciding", although now that name does not fit as well. First we need to change the problem slightly. In this version the decider machines output one of $halts$, $continues$, or $unknown$, indicating that the machine halts, does not halt, or the decider machine does not "know", respectively. So we can call decider machines that are correct in all of the cases that they output $halts$ or $continues$ "correct" or "honest" decider machines. So now my question is: is there a machine that no single honest decider machine would identify correctly? By "identify correctly" I mean the decider outputs either $halts$ or $continues$ and that the output correctly corresponds to the machine under examination's behaviour. By the definition of honest, if the machine under examination halts and the decider outputs $continues$, or the machine under examination does not halt and the decider outputs $halts$, then the decider is not honest. So this definition excludes the decider machines that always output the same answer, including the one that always outputs $unknown$, by my definition of "identify correctly".

Edit 2: To elaborate on my notion of a decider machine correctly identifying a machine's behaviour, we can break the definition into two parts, gaining some more vocabulary in the process. First we have the criterion that the decider outputs $halts$ or $continues$ for that machine. We can call that criterion the identification criterion, and we can say of decider machines that they identify a machine if and only if they output one of $halts$ or $continues$ on that machine. Second we have the criterion that the decider's output corresponds to the machine under examination's behaviour. So if the examined machine halts and the decider outputs $halts$, the decider is correct about that machine. Similarly, if the examined machine does not halt, a decider that outputs $continues$ would be correct. It seems useful to include outputting $unknown$ as "technically correct".
So the full rule would be: a decider is correct about a given examined machine if one of the following is true:

* The examined machine halts and the decider outputs $halts$ or $unknown$
* The examined machine does not halt and the decider outputs $continues$ or $unknown$

We can call this the "correctness" criterion, and say that a decider is correct about a given machine if and only if the above condition is true. We can put the vocabulary back together and say that a decider correctly identifies a given machine if they identify that machine and they are correct about that machine. So now we can state that the always-$unknown$ decider does not correctly identify every machine since, while it is always correct about every machine, it does not identify any machines!
Suppose you've come up with a machine $P\_0$ which you claim decides the halting problem. I create a $Q\_0$ that makes it malfunction somehow (your $P\_0$ either ends up diverging or gives the wrong answer). The proof of the Halting Problem shows that I can always create such a $Q\_0$.

"Aha!" you say. "But now I can create $P\_1$, which is exactly like $P\_0$—except that it checks whether its input is $Q\_0$ and gives a hardcoded answer if so."

Sure, you can do this. But then I can just create a $Q\_1$ in exactly the same way that I created $Q\_0$. This is always possible—that's how the proof works. You can show by induction that you can always create a $P\_n$ that has $n$ hardcoded special cases, protecting it against $n$ different $Q$s. In fact, for any "undecidable program" $Q$ I create, you can create a special $P$ that can decide it:

```
DoesItHalt(X):
    if X = Q, return [hardcode whichever answer is correct]
    else, return true
```

Therefore there is no "universally halting-undecidable program" $Q\_\omega$. But I can also create a new $Q$ that defeats any $P$ you make, since you can have as many special cases as you like, but that number must be finite. Therefore there is no "universal halting-decider program" $P\_\omega$ either (which is the whole point of Turing's proof).
I'm looking for empirical papers which investigate whether a user can benefit from Q&A sites like Stack Overflow. I welcome any papers related to this topic, e.g.:

* an experiment, investigating if a specific task can be executed faster,
* an analysis, investigating if a user understands the solutions on Q&A sites or if he just does copy&paste without thinking about it,
* a comparative analysis of the code quality of users with access to Q&A sites in contrast to users without internet access (but just offline documentation of APIs).
There was a presentation on ICSE 2011, at the New Ideas and Emerging Results track, entitled ["How do programmers ask and answer questions on the web?"](https://ieeexplore.ieee.org/document/6032523). They only had initial results, but they sounded very interesting and promising. Maybe you could contact the authors if you need more info (they're from the Dept. of Comput. Science, University of Victoria, Victoria, BC, Canada). Here is the full reference: Treude, C.; Barzilay, O.; Storey, M. **How do programmers ask and answer questions on the web?**. In Proc. of the 33rd International Conference on Software Engineering (ICSE), 2011. UPDATE: This paper was just mentioned at the [Stack Exchange blog](https://stackoverflow.blog/2014/01/23/stack-exchange-cc-data-now-hosted-by-the-internet-archive/?cb=1): Chris Parnin, Christoph Treude, Lars Grammel, Margaret-Anne Storey. **Crowd Documentation: Exploring the Coverage and the Dynamics of API Discussions on Stack Overflow**. Georgia Institute of Technology, Tech. Rep, 2012. [[PDF]](http://www.cc.gatech.edu/%7Evector/papers/CrowdDoc-GIT-CS-12-05.pdf)
Let's say we have 10 people, each with a list of favorite books. For a given person X, I would like to find a special subset of X's books liked only by X, i.e. there is no other person that likes all of the books in X's special subset. I think of this special subset as a unique "fingerprint" for X. I would appreciate suggestions on an approach for finding such sets. (While this reads like a homework problem, it is related to a problem in my biology research that I am trying to solve.)
I assume you want the fingerprint to be as small as possible. Then this is the [Hitting Set](http://en.wikipedia.org/wiki/Hitting_set) problem: For each other person, make a list of all books liked by X but not by this person. Then, the goal is to select at least one book from each list. The problem is NP-hard, so you can't expect to find an algorithm that always solves it optimally in polynomial time. The greedy algorithm has a bad theoretical worst-case bound, but often works quite decently in practice. If you want to solve it optimally, an Integer Linear Programming solver should be able to solve instances of up to 1000 or maybe 10000 books. If you give more details on the size and structure of your instances, we could suggest other approaches.
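Here is a minimal `R` sketch of the greedy heuristic just described (the data are made up for illustration): build, for every other person, the list of books liked by X but not by them, then repeatedly pick the book of X's that hits the most still-uncovered lists.

```
# Favourite books per person (toy data); "X" is the person to fingerprint
likes <- list(
  X  = c("A", "B", "C", "D"),
  P1 = c("A", "B", "E"),
  P2 = c("B", "C"),
  P3 = c("D", "E", "F")
)

# For each other person, the books liked by X but not by them
lists <- lapply(likes[-1], function(b) setdiff(likes$X, b))
if (any(lengths(lists) == 0))
  stop("Someone likes all of X's books, so no fingerprint exists")

# Greedy hitting set: repeatedly take the book covering the most remaining lists
fingerprint <- character(0)
while (length(lists) > 0) {
  counts <- table(unlist(lists))
  best <- names(which.max(counts))
  fingerprint <- c(fingerprint, best)
  lists <- Filter(function(l) !(best %in% l), lists)
}
fingerprint
```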
I came across this problem: *Show that in every infinite computably enumerable set, there exists an infinite decidable set*. As an attempt to solve the problem, I could only think of a proof by construction. Without loss of generality, let the alphabet be $\Sigma=\{0,1\}$. The construction goes as follows, suppose that $TM$ $M$ recognizes an infinite computably enumerable set $L$, we can form a decidable language $D \subseteq L$ as follows: > > 1) lexicographically enumerate all input words $w$ in $\Sigma^\*$ and repeatedly perform steps $a-b:$ > > > > > > > a) at the $k$th step, run $M$ on input words $\{w\_0,w\_1,...,w\_k\}$ for $k$ steps, where words $\{w\_0,w\_1,...,w\_k\}$ are lexicographically ordered > > > > > > b) if $M$ accepts an input word $w \in \{w\_0,w\_1,...,w\_k\}$ from step 2, include $w$ in a temporary language $D\_{temp}$ > > > > > > > > > 2) partition the words in $D\_{temp}$ into two, those starting with $0$ are $\in D$, while those starting with $1$ are $\not \in D$, along with all other words $\Sigma^\* - D\_{temp}$. > > > Step (1) does not loop in a word $w$ since each run of step (1) is limited for a finite number of $k$ steps. Is this construction okay ? Or did I miss something ...
Here is one possible approach. Since $L$ is c.e., there is some enumerator that outputs a list of the words in $L$: $w\_1,w\_2,\ldots$. Let $D$ consist of all words $w\_i$ which are longer than all words appearing before them. I claim that $D$ is infinite. If not, let $w\_m$ be the last word in $D$. Then no later word is longer than all of its predecessors, so every word in the enumeration has length at most $|w\_m|$. However, this contradicts the infinitude of $L$. Furthermore, $D$ is decidable. Given a word $w$, run the enumerator for $L$, and halt as soon as one of the following events happens: * If $w\_i = w$, output Yes. * If $w\_i \neq w$ and $|w\_i| \ge |w|$, output No. Since $L$ is infinite, some word of length at least $|w|$ eventually appears, so this algorithm always stops. Note the $\ge$ in the second condition: once a different word at least as long as $w$ has appeared, no later occurrence of $w$ can be longer than everything before it, so $w \notin D$; and if the first event is $w\_i = w$, then all earlier words are strictly shorter, so $w$ is indeed in $D$.
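A small sketch of that decision procedure in Python, assuming `enumerate_L()` is a hypothetical generator that yields the words of $L$ in the enumerator's order:

```
def in_D(w, enumerate_L):
    """Decide membership in D, assuming enumerate_L() yields the
    (infinitely many) words of L in the enumerator's order."""
    for wi in enumerate_L():
        if wi == w:
            return True   # w appears before any other word of length >= |w|: it is a record
        if len(wi) >= len(w):
            return False  # w can no longer be longer than everything before it
    # unreachable when L is infinite: some word of length >= |w| must eventually appear
```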
I want to determine the minimum and maximum number of leaves of a complete tree(not necessarily a binary tree) of height $h$. I already know how to find minimum($h+1$) and maximum($2^{h+1}-1$) number of **nodes** from the height, but what about *leaves*? Is there a way to determine them knowing nothing but height of the tree?
Could you make a FA that accepts all and only even-length strings? What do you know about the intersection of two regular languages?
I have satellite data that provides radiance, which I use to compute the flux (using surface and cloud info). Using a regression method, I can develop a mathematical model directly relating radiance and flux, which can then be used to predict the flux for new radiance values. Is it possible to do the same using decision trees or regression trees? In a regression there is a mathematical equation connecting the dependent and independent variables. Using decision trees, how could you develop such a model?
In algorithmic modeling, as opposed to parametric modeling, there is no explicit equation relating input and output variables (see this [paper](http://projecteuclid.org/euclid.ss/1009213726) by Breiman). The assumption is that the phenomenon being modeled is complex and unknown, and rather than imposing a formal model (which comes with a suite of assumptions and limitations), the algorithms directly learn the links between predictors and predictand from the data. In the case of a single tree, this is not so much of an issue because the tree explains its predictions in a very visual and intuitive manner, but with ensembles of trees (Random Forests, Boosting), interpretability is definitely traded off for accuracy.
Since functions of independent random variables are independent, can I say that: 1. if $X$ and $Y$ are independent then $X$ and $1/Y$ are independent. 2. If $X$ is normal, then $X$ is independent of its square (a chi-square variate). I am confused about whether the functions need to be one-to-one.
Yes to (1) and no to (2). Let me explain. 1. The reasoning is from the transformation theorem. This is it generally. Assume you have two original random variables $X\_1$ and $X\_2$, along with their joint density $f\_{X\_1,X\_2}(x\_1,x\_2)$. The transformation theorem gives you the joint density of two new random variables $Y\_1 = g\_1(X\_1,X\_2)$ and $Y\_2 = g\_2(X\_1,X\_2)$. Assume that these $g\_i$s are smooth enough to possess the derivatives I write down: $$ g\_{Y\_1,Y\_2}(y\_1,y\_2) = f\_{X\_1,X\_2}(x\_1[y\_1,y\_2],x\_2[y\_1,y\_2])|\det(J)| $$ where $J = \left( \begin{array}{cc} \frac{\partial x\_1}{\partial y\_1} & \frac{\partial x\_1}{\partial y\_2} \\ \frac{\partial x\_2}{\partial y\_1} & \frac{\partial x\_2}{\partial y\_2} \end{array} \right). $ 2. Now assume that $X\_1$ and $X\_2$ start off to be independent. This is your case you're dealing with. That means $f\_{X\_1,X\_2}(x\_1,x\_2) = f\_{X\_1}(x\_1)f\_{X\_2}(x\_2)$. Also, if you only make $Y\_1$ a function of $X\_1$ and only make $Y\_2$ a function of $X\_2$, then $J$ is diagonal, right? Now plug that stuff into the general new density and you'll see why $Y\_1$ and $Y\_2$ are still independent: \begin{align\*} g\_{Y\_1,Y\_2}(y\_1,y\_2) &= f\_{X\_1,X\_2}(x\_1[y\_1,y\_2],x\_2[y\_1,y\_2])|\det(J)| \\ &= f\_{X\_1}(x\_1[y\_1,y\_2])f\_{X\_2}(x\_2[y\_1,y\_2]) |\frac{\partial x\_1}{\partial y\_1}\frac{\partial x\_2}{\partial y\_2}| \\ &= f\_{X\_1}(x\_1[y\_1,y\_2])|\frac{\partial x\_1}{\partial y\_1}| f\_{X\_2}(x\_2[y\_1,y\_2]) |\frac{\partial x\_2}{\partial y\_2}| \end{align\*} Still factors. Hence $Y\_1$ is independent of $Y\_2$. 3. The functions $g\_i$ don't need to be one-to-one, but you'd have to use the more beefed up version of the transformation to justify it. Same idea would apply, though. Edit: @Whuber linked to a good thread that shows one of his answers demonstrating the same result using sigma fields, which is much more elegant and more generally applicable. His version always works, as long as the transformations are measurable, while mine only works for continuous random variables and certain types of transformations. Regarding your second example where you ask about a random variable $X$ and it's square $X^2$: "[s]ince[sic] functions of independent random variables are independent"...neither of these answers will apply. Also you need to qualify what type of functions you're talking about. With the way I understood your question, your second point seemed to be trying to come up with a sort of counterexample to help you better understand your situation. This is why I didn't really address it. But the reason it's false is because $p(X^2|X=x)$ is discrete with all of its mass on $x^2$, while the marginal $p(X^2)$ is continuous (chi-square). So they're obviously very different.
I understand the purpose of Convolutional filters (or kernels). I visualize them as learnable feature extractors. E.g. Extract vertical edges or horizontal edges, etc. [![enter image description here](https://i.stack.imgur.com/Pgj2y.png)](https://i.stack.imgur.com/Pgj2y.png) Could somebody kindly explain to me the intuition behind stacking 2 or more consecutive convolution filters? Why couldn't the 2 filters be merged into 1? **Picture from Andrew Ng's video** [![enter image description here](https://i.stack.imgur.com/SfUH4.png)](https://i.stack.imgur.com/SfUH4.png) **Video URL** [Here](https://youtu.be/bXJx7y51cl0?t=6m15s) is the link to Andrew Ng's lecture on Machine learning as a reference
Basically, you *can* have multiple convolutional modules in one layer. It is called [grouping](http://colah.github.io/posts/2014-12-Groups-Convolution/) and was introduced in `AlexNet`. The inputs are the same in this case and the outputs of all convolutional modules should be concatenated after passing the input. I quote from the link the benefit of grouping in conv nets. > > Group convolutions provide elegant language for talking about lots of situations involving probability ... Group convolutions naturally extend convolutional neural networks, with everything fitting together extremely nicely. Since convolutional neural networks are one of the most powerful tools in machine learning right now, that’s pretty interesting... > > > About stacking, the provided answers will suffice.
Let's assume that we have built an universal quantum computer. Except for security-related issues (cryptography, privacy, ...) which current real world problems can benefit from using it? I am interested in both: * problems currently unsolvable for a practical entry, * problems which currently are being resolved, but a significant speedup would greatly improve their usability.
Efficiently simulating quantum mechanics.
> > A box contains $5$ white and $2$ black balls. A coin with unknown $P(Head)=p$ is tossed once. If it lands **HEADS** then a white ball is added, else a black ball is added to the box. Then a ball is selected at random from the box. Given that the ball drawn is **WHITE**, find the Maximum Likelihood Estimator of $p$. > > > I find this problem quite confusing, really. It seems to be pretty straightforward and hence I am shocked by the substandard quality, else I am making some serious error. My attempt is as follows: > > $P(White)=P(White|Head)P(Head)+P(White|Tail)P(Tail)=\dfrac{6}{8}.p+\dfrac{5}{8}(1-p)=\dfrac{p}{8}+\dfrac{5}{8}$ > > > This is actually my likelihood of $p$ given the sample (my sample is **WHITE** ball). So this is maximized for $\hat{p}=1$. So $1$ (????) is the MLE for $p$. It is a constant estimator. > > > This is kind of weird. Any suggestion/correction/explanation is welcome.
The difference between **subtracting the mean** and **dividing by the mean** is the difference between subtraction and division; presumably you are not really asking about the mathematics. There is no mystery here, as it's no more than a statistical analogue of * Bill is 5 cm taller than Betty (subtraction) * Bill is twice the weight of his son Bob (division) with the difference that the mean is used as a reference level, rather than another value. We should emphasise that * (Bill $-$ Betty) or (value $-$ mean) preserves units of measurement while * (Bill / Bob) or (value / mean) is independent of units of measurement. and that subtraction of the mean is always possible, while division by the mean usually only makes sense if the mean is guaranteed to be positive (or more widely that no two values have different signs and the mean cannot be zero). Taking it further, (value $-$ mean) / SD is scaling by the standard deviation and so again produces a measure independent of units of measurement, and also of the variability of the variable. It's always possible so long as the SD is positive, a restriction that does not bite. (If the SD were zero then every value is the same, and detailed summary is easy without any of these devices.) This kind of rescaling is often called **standardization**, although it is also true that that term too is overloaded. Note that subtraction of the mean (with or without division by SD) is just a shift of origin (and, with the SD, of scale), so distribution plots and time series plots (which you ask about) look just the same before and after; the numeric axis labels will differ, but the shape is preserved. The choice is usually substantive rather than strictly statistical, so that it is a question of which kind of adjustment is a helpful simplification, or indeed whether that is so. I'll add that your question points up in reverse a point often made on this forum: asking about **normalization** is futile unless a precise definition is offered; in fact, there are even more meanings in use than those you mentioned. The OP's context of space-time data is immaterial here; the principles apply regardless of whether you have temporal, spatial or spatial-temporal data.
I do not understand the difference between the `fit` and `fit_transform` methods in scikit-learn. Can anybody explain simply why we might need to transform data? What does it mean, fitting a model on training data and transforming to test data? Does it mean, for example, converting categorical variables into numbers in training and transforming the new feature set onto test data?
These methods are used for dataset transformations in scikit-learn. Let us take the example of scaling values in a dataset: Here the `fit` method, when applied to the training dataset, learns the model parameters (for example, mean and standard deviation). We then need to apply the `transform` method on the training dataset to get the transformed (scaled) training dataset. We could also perform both of these steps in one go by applying `fit_transform` on the training dataset. Then why do we need 2 separate methods - `fit` and `transform`? In practice, we need to have separate training and testing datasets, and that is where having separate `fit` and `transform` methods helps. We apply `fit` on the training dataset and use the `transform` method on both - the training dataset and the test dataset. Thus the training, as well as the test dataset, are then transformed (scaled) using the model parameters that were learned by applying the `fit` method to the training dataset. Example code (assuming `X_train` and `X_test` are already defined):

```
from sklearn import preprocessing

# learn the scaling parameters (mean and standard deviation) from the training data only
scaler = preprocessing.StandardScaler().fit(X_train)

# apply the same learned parameters to both datasets
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```
Suppose we connect the points of $V = \mathbb{Z}^2$ using the set of undirected edges $E$ such that either $(i, j)$ is connected to $(i + 1, j + 1)$, or $(i + 1, j)$ is connected to $(i, j + 1)$, independently and uniformly at random for all $i, j$. (Inspired by the title and cover of [this book](http://10print.org/).) What is the probability that this graph has an infinitely large connected component? Similarly, consider $\mathbb{R}^2 \setminus G$, the complement of the planar embedding of the graph. What is the probability that the complement has an infinite connected component? Clearly, if all the diagonals point the same way, both the graph and its complement have an infinite component. How about a uniform random graph of the above kind?
Hmm, well, here's one first attempt. Let's observe two important things: 1. If this graph has an infinitely large connected component, by König's infinity lemma, it has an infinite simple path. 2. The event that the graph has an infinite simple path is independent of each individual choice of edge orientation (and thus of every finite set of edge choices). Therefore it is a tail event and by Kolmogorov's zero-one law the probability is either zero or one. So, is it zero or one? It's not immediately clear, though we can make a guess, since by the "infinite monkeys with typewriters" theorem, this graph contains simple paths of arbitrarily large length with probability one. Of course, more is needed to rigorously prove that it actually has an *infinite* path with probability one.
I understand this is a slightly vague question, but there are results for P vs. NP, such as the question cannot be easily resolved using oracles. Are there any results like this which have been shown for P vs. NP but have not been shown for P vs PSPACE, so that there is hope that certain proof techniques might resolve P vs PSPACE even though they cannot resolve P vs NP? And are there any non-trivial results that say that if P = PSPACE then there are implications that do not necessarily hold under P = NP? Or anything else non-trivial in the literature that suggests it's easier to prove P != PSPACE than it is to prove P != NP?
This doesn't really answer your question, but there is a result that under a restricted form of time travel (yes, time travel), it holds that $P=PSPACE$. I'll remark that the result is nontrivial, given the restrictions on the model. See [this explanation by Scott Aaronson.](http://www.scottaaronson.com/democritus/lec19.html)
*I know why cases 1 and 2 are wrong because our language can have different numbers of 0's and 1's. But I'm not sure how case 3 can be proved wrong for our language.* Exercise 1.30: Describe the error in the following “proof” that $0^{∗}1^{∗}$ is not a regular language. (An error must exist because $0^{∗}1^{∗}$ is regular.) > > The proof is by contradiction. Assume > that $0^{∗}1^{∗}$ is regular. Let p be the pumping length for $0^{∗}1^{∗}$ given by the pumping > lemma. Choose s to be the string $0^{p}1^{p}$. You know that s is a member of $0^{∗}1^{∗}$, but > Example 1.73 shows that s cannot be pumped. Thus you have a contradiction. So > $0^{∗}1^{∗}$ is not regular. > > > Example 1.73: > > Let B be the language $\{0^{n}1^{n}|n ≥ 0\}$. We use the pumping lemma to prove that > B is not regular. The proof is by contradiction. > Assume to the contrary that B is regular. Let p be the pumping length given > by the pumping lemma. Choose s to be the string $0^{p}1^{p}$. Because s is a member > of B and s has length more than p, the pumping lemma guarantees that s can be > split into three pieces, s = xyz, where for any i ≥ 0 the string $xy^{i}z$ is in B. We > consider three cases to show that this result is impossible. > > > 1. The string y consists only of 0s. In this case, the string xyyz has more 0s > than 1s and so is not a member of B, violating condition 1 of the pumping > lemma. This case is a contradiction. > 2. The string y consists only of 1s. This case also gives a contradiction. > 3. The string y consists of both 0s and 1s. In this case, the string xyyz may > have the same number of 0s and 1s, but they will be out of order with some > 1s before 0s. Hence it is not a member of B, which is a contradiction. > > > > > Thus a contradiction is unavoidable if we make the assumption that B is regular, > so B is not regular. Note that we can simplify this argument by applying > condition 3 of the pumping lemma to eliminate cases 2 and 3. > In this example, finding the string s was easy because any string in B of > length p or more would work. > > > *This question is from the book 'Introduction to the Theory of Computations' by Michael Sipser, exercise 1.30*
Edit: This proof is insufficient, as pointed out in the comments. I am assuming that when the author says "covers the full height of the tree", it means that the node that is put at the root of the tree will be exchanged with one of its children until it reaches a leaf of the tree. Since the heap is a [complete tree](https://en.wikipedia.org/wiki/Binary_tree#Types_of_binary_trees), its leaves are all on the last two levels of the tree. So even if the full height of the tree is not covered, the difference from the full height will be at most 1 (like in the second case of your example), which is why it does not change the asymptotic complexity.
Just wondering whether anyone could define expected counts in regards to a chi square test.
Expected counts are the projected frequencies in each cell if the null hypothesis is true (i.e., no association between the variables). Given the following 2x2 table of outcome (O) and exposure (E) as an example, a, b, c, and d are all observed counts: ![enter image description here](https://i.stack.imgur.com/H59e8.png) The expected count for each cell is the product of the corresponding row and column totals divided by the sample size. For example, the expected count for O+E+ would be: $$\frac{(a+b) \times (a+c)}{a+b+c+d}$$ (see the red arrows for the meaning of "corresponding") The expected counts are then contrasted with the observed counts, cell by cell. The bigger the difference, the higher the resulting statistic, which is the $\chi^2$.
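As a quick illustration, here is how the expected counts could be computed in Python with NumPy for an arbitrary observed table (the numbers below are made up):

```
import numpy as np

# observed 2x2 table: rows = outcome (O+, O-), columns = exposure (E+, E-)
observed = np.array([[20, 30],
                     [15, 35]])

row_totals = observed.sum(axis=1)   # a+b, c+d
col_totals = observed.sum(axis=0)   # a+c, b+d
n = observed.sum()                  # a+b+c+d

# expected count in each cell = row total * column total / sample size
expected = np.outer(row_totals, col_totals) / n
print(expected)

# the chi-square statistic contrasts observed and expected cell by cell
chi2 = ((observed - expected) ** 2 / expected).sum()
print(chi2)
```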
How do you check if two algorithms (say, Merge sort and Naïve sort) return the same result for any input, when the set of all inputs is infinite? **Update:** Thank you [Ben](https://cs.stackexchange.com/a/2062/1638) for describing how this is impossible to do algorithmically in the general case. [Dave's answer](https://cs.stackexchange.com/a/2063/1638) is a great summary of both algorithmic and manual (subject to human wit and metaphor) methods that don't always work, but are quite effective.
In contrast to what the nay-sayers say, there are many effective techniques for doing this. * Bisimulation is one approach. See, for example, Gordon's paper on [Coinduction and Functional Programming](http://research.microsoft.com/pubs/68298/fp94.ps.gz). * Another approach is to use operational theories of program equivalence, such as the work of [Pitts](http://www.cs.tau.ac.il/~nachumd/formal/exam/pitts.pdf). * A third approach is to [verify](http://en.wikipedia.org/wiki/Formal_verification) that both programs satisfy the same functional specification. There are thousands of papers on this approach. * A fourth approach is to show that one program can be rewritten into the other using sound [program transformations](http://www.program-transformation.org/). Of course none of these methods is complete due to undecidability, but volumes and volumes of work have been produced to address the problem.
I currently have a system that has `{f(a) = b, f(f(x)) = x}` (part of an [exam question](http://folk.uio.no/peterol/INF3230V13/Oppgaver/midterm06.pdf) - look at page 5 - exercise 1). To start off with proving non-confluency, I am thinking along these lines: ``` f(f(x)) and f(a) can be unified by using {a -> f(x)}. ``` Then we can rewrite: ``` f(f(x)) = x [eq.1] f(f(x)) = b [eq.2] ``` The above two cannot be reduced any further, and do not have any common ancestor or successor. Therefore the system is not confluent. To make this confluent, we can add a third equation to the system: ``` x = b ``` This way, the equation will both be confluent and terminate. Another alternative would be: ``` f(x) = b ``` Is there anything I have missed? Or is this pretty much the gist of it?
I do not think $f(f(x))$ and $f(a)$ can be unified. You can not map constant $a$ to the term $f(x)$. My example would be $f(f(a)) = f(b)$ while otherwise $f(f(a)) = a$. It seems the equation $x=b$ maps all terms to $b$. That is too much. I would add $f(b) = a$. This leaves two classes of terms, those equivalent to $a = f(b) = f(f(a)) = \dots$ and those equivalent to $b = f(a) = f(f(b)) = \dots$ Here, $a=a$ and $b=b$ and never the twain shall meet ...
My question is simple: does anybody know where I can find a proof that MAX CLIQUE is NP-hard? **Remarks:** MAX CLIQUE is the decision problem defined as follows: Given a graph $G$ and $k>0$, does the graph $G$ contain a clique of size $k$? I have been looking for a formal proof that MAX CLIQUE is **NP-Hard**, but I haven't found one. The book Computers and Intractability (Michael Garey), in its final chapters, briefly mentions that the full proof is in the PhD thesis of Ernet Legget: <http://dl.acm.org/citation.cfm?id=907661> . I couldn't find a copy on the internet nor any survey with the proof.
There is probably a proof in Karp's original paper. Here is a simple reduction from SAT. Given an instance $\varphi$ of SAT with $m$ clauses, construct an instance of MAX-CLIQUE as follows. For each clause $C$ and each literal $\ell$ appearing in $C$, there is a vertex $(C,\ell)$. Two vertices $(C\_1,\ell\_1),(C\_2,\ell\_2)$ are connected if $C\_1 \neq C\_2$ and $\ell\_1 \neq \lnot \ell\_2$. This has a clique of size $m$ iff $\varphi$ is satisfiable (exercise).
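For anyone who wants to play with this reduction on toy instances, here is a small Python sketch. Clauses are lists of non-zero integers in the usual DIMACS-style convention (a negative integer is a negated variable); the brute-force clique search at the end is only there to illustrate the "iff", not to be efficient:

```
from itertools import combinations, product

def reduction_graph(clauses):
    """One vertex per (clause index, literal) pair; edges join vertices from
    different clauses whose literals are not complementary."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = {
        (u, v)
        for u, v in combinations(vertices, 2)
        if u[0] != v[0] and u[1] != -v[1]
    }
    return vertices, edges

def satisfiable_via_clique(clauses):
    """phi (with m clauses) is satisfiable iff the graph has a clique of size m,
    i.e. a pairwise-adjacent choice of one vertex per clause."""
    _, edges = reduction_graph(clauses)
    def adjacent(u, v):
        return (u, v) in edges or (v, u) in edges
    per_clause = [[(i, lit) for lit in clause] for i, clause in enumerate(clauses)]
    return any(
        all(adjacent(u, v) for u, v in combinations(choice, 2))
        for choice in product(*per_clause)
    )

print(satisfiable_via_clique([[1, 2], [-1, 2]]))  # True: e.g. x2 = True
print(satisfiable_via_clique([[1], [-1]]))        # False: x1 and (not x1)
```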
What exactly is the problem with having a non-invertible covariance matrix? Why is getting the inverse of this matrix so important? This problem is often encountered when doing regression work on samples, but even in the context of sampling, I do not see how this becomes a problem.
Let's predict income with two highly positively correlated variables: Years of work experience and number of carrots eaten in one's lifetime. Let's ignore omitted variable bias issues. Also, let's say years of work experience has a much greater impact on income than carrots eaten. Your beta parameter estimates would be unbiased, but the standard errors of the parameter estimates would be greater than if the predictors were not correlated. Collinearity does not violate any assumptions of GLMs (unless there is perfect collinearity). Collinearity is fundamentally a data problem. In small datasets, you might not have enough data to estimate beta coefficients. In large datasets, you likely will. Either way, you can interpret the beta parameters and the standard errors just as if collinearity were not an issue. Just be aware that some of your parameter estimates might not be significant. In the event your parameter estimates are not significant, get more data. Dropping a variable that should be in your model ensures your estimates are biased. For example, if you were to drop the years of experience variables, the carrots eaten variables would become positively biased due to "absorbing" the impact of the dropped variable. To answer the shared variance question, here is a fun test you can do in a statistical program of your choice: * Make two highly correlated variables (x1 and x2) * Add an error term (normally distributed, zero mean) * Create y by adding x1 to the error term. (i.e. The actual beta values of x1 and x2 are 1 and 0 respectively) * Regress y on x1 and x2 with a large data set. Although there is a very large shared variance between x1 and x2, only x1 has a ceteris paribus, marginal effect relationship to y. In contrast, holding x1 constant and changing x2 does nothing to the expected value of y, so the shared variance is irrelevant.
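Here is one way the suggested experiment might look. The answer says "a statistical program of your choice"; this sketch uses Python with NumPy and statsmodels, and the sample size, noise level, and correlation strength are arbitrary choices for illustration:

```
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100_000

# two highly correlated predictors
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)   # corr(x1, x2) is close to 1

# y depends only on x1: the true betas are 1 for x1 and 0 for x2
y = x1 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

print(fit.params)  # roughly [0, 1, 0]: only x1 has a marginal effect on y
print(fit.bse)     # standard errors inflated by the collinearity, but estimates unbiased
```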
As I teach myself dynamic programming, I have learned about the coin change problem. In particular, this site: <https://www.geeksforgeeks.org/dynamic-programming-set-7-coin-change/> provides great insight about it. Specifically, the following implementation of a tabulated-DP-based solution for this problem is presented:

```
def count(S, m, n):
    # If n is 0 then there is 1
    # solution (do not include any coin)
    if n == 0:
        return 1
    # If n is less than 0 then no
    # solution exists
    if n < 0:
        return 0
    # If there are no coins and n
    # is greater than 0, then no
    # solution exists
    if m <= 0 and n >= 1:
        return 0
    # count is sum of solutions (i)
    # including S[m-1] (ii) excluding S[m-1]
    return count(S, m - 1, n) + count(S, m, n - S[m-1])
```

However, this only counts the number of possible solutions. **Question**: How can I actually save these solutions for post-processing? **Previous research**: In this very helpful video: <https://www.youtube.com/watch?v=ENyox7kNKeY> they explain how to use an array of parent pointers to generate the actual solutions; however, I am having issues with implementing this approach with the previous tabulated solution. Any hint?
You can't. You can't squeeze blood out of a stone. There's no such thing as a free lunch. You can't get something from nothing. If you want labelled images, you will need to label them yourself, or find some existing data set that comes with labels. There are no shortcuts. Yes, this is tedious and labor-intensive. This is one of the less-well-known and less-glamorous aspects of working on machine learning: in practical projects, we often spend the majority of our time (or more!) just assembling data sets, and only a small fraction on the actual learning algorithms themselves. I know you could do various shortcuts, like trying to write a quick program to label them. But if that program makes mistakes, you'll just be training your machine learning algorithm to make the same mistakes -- so that's not actually helpful. If you want to reduce the amount of labelling effort, there *are* ways to reduce the effort -- but still they will require a significant amount of manual labelling. Nothing comes for free. For instance, you can use [active learning algorithms](https://en.wikipedia.org/wiki/Active_learning_(machine_learning)) to identify which instances to label. A simple example of that is to manually label a few hundred images as your initial training set, train a classifier, apply it to all remaining images, pick out the 20 images that the classifier is least confident on, manually label those 20 images, add them to the training set, and repeat. (This is an application of uncertainty sampling.) There are other, more sophisticated methods out there as well. Another plausible approach is to somehow cluster the images, then manually label a few images from each cluster. This has issues, though, as you'll need some reasonable way to cluster the images, and that might be a non-trivial task. (One possible approach for clustering is to take some existing, pre-trained ImageNet classifier -- e.g., VGG, Inception, ResNet, etc. -- throw away the last layer or last two layers, and use the output before those layers as the input to some clustering algorithm like k-means. Doing k-means clustering directly on the raw image probably won't work well, but if you do it on the vector of activation values at some deep layer near the end of a good pre-trained classifier, then you might get better results.) Finally, we often use data augmentation to make the classifier more robust. For instance, for each image in the training set, we might generate 10 more copies, where each copy is rotated or translated or cropped by a random amount, and add that to the training set. This tends to help make the classifier more robust to changes in orientation or pose -- but it doesn't give you something for nothing. You still need a large training set that contains many different kinds of tables.
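For concreteness, the uncertainty-sampling loop described above might look roughly like this in Python with scikit-learn. Everything here is a sketch: the features are assumed to already be numeric vectors (e.g. from a pre-trained feature extractor), `LogisticRegression` is just a stand-in classifier, and `ask_human_for_labels` is a placeholder for the manual labelling step, not a real function anywhere:

```
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_labeled, y_labeled, X_pool, ask_human_for_labels,
                         batch_size=20, rounds=10):
    """Uncertainty sampling: repeatedly label the examples the model is least sure about."""
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

        # confidence = probability of the predicted class; low confidence = most informative
        probs = clf.predict_proba(X_pool)
        confidence = probs.max(axis=1)
        query_idx = np.argsort(confidence)[:batch_size]

        # manual step: a human labels the selected examples
        new_labels = ask_human_for_labels(X_pool[query_idx])

        # move the newly labelled examples from the pool into the training set
        X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
        y_labeled = np.concatenate([y_labeled, new_labels])
        X_pool = np.delete(X_pool, query_idx, axis=0)
    return clf
```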
Given a sentence like the one below, and the data structure (a dictionary of lists): `{'cat': ['feline', 'kitten']}`

> A feline was recently spotted in the neighborhood protecting her little kitten

How would I efficiently process this text to convert the synonyms of the word cat into the word cat itself, so that the output looks like this:

> A cat was recently spotted in the neighborhood protecting her little cat

I would also like to ask whether my data structure is appropriate and efficient for this task.
You can't. At least not always. That's not a well-defined mapping. The notion of synonyms is subjective and a word might have many synonyms it can be mapped into. Even humans can't do that task (they won't always agree what the sentence should be mapped to), so it's not reasonable to expect computers to do it, either. Perhaps you want to compare two sentences to determine whether they are saying the same thing. In full generality, that is probably AI-complete (i.e., beyond our state of knowledge right now). You can probably get a pretty good approximation by using existing pre-trained encoders that map text (such as a sentence) to an embedding, followed by some distance measure or similarity measure on those embeddings.
I am not sure I see it. From what I understand, edges and vertices are complements for each other and it is quite surprising that this difference exists. Is there a good / quick / easy way to see that in fact finding a Hamiltonian path should be much harder than finding a Euler path?
Maybe the following perspective helps: When you are trying to construct an Eulerian path, you can proceed almost greedily. You just start the path somewhere and then try to walk as long as possible. If you detect a cycle, you delete its edges (but record that this cycle was constructed). In this way you decompose the graph into cycles, which can easily be combined into an Eulerian tour. The point is, none of your decisions about "how to walk across the graph" can actually be wrong. You will always succeed and never get stuck. The situation with Hamiltonian paths is different. Again, you might want to construct a path by walking along edges of the graph. But this time you can really make *bad* decisions. This means you cannot continue the path, even though not all vertices have been visited. What you can do is back-track. That means you revert some of your old decisions and continue along a different path. Essentially all algorithms that are known for the general problem rely in some way or other on back-tracking, or on trying out a large set of solutions. This, however, is characteristic of NP-complete problems. So the (simplified) bottom line: an Eulerian path requires no back-tracking, but a Hamiltonian path does. (Notice that it might be that P=NP, in which case a *clever* Hamiltonian path algorithm would exist.)
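To make the "no wrong decisions" point concrete, here is a sketch of Hierholzer's algorithm in Python for an Eulerian circuit. It assumes the input multigraph is connected with all vertex degrees even; it never backtracks, it just walks and splices the cycles it finds together:

```
from collections import defaultdict

def eulerian_circuit(edges, start):
    """edges: list of (u, v) pairs of an undirected multigraph that is connected
    and has all vertex degrees even. Returns a vertex sequence using every edge once."""
    adj = defaultdict(list)                    # vertex -> list of (neighbour, edge id)
    for eid, (u, v) in enumerate(edges):
        adj[u].append((v, eid))
        adj[v].append((u, eid))

    used = [False] * len(edges)
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:  # drop edges already walked
            adj[v].pop()
        if adj[v]:                             # walk any unused edge: no choice can be wrong
            u, eid = adj[v].pop()
            used[eid] = True
            stack.append(u)
        else:                                  # dead end: this closes a cycle, splice it in
            circuit.append(stack.pop())
    return circuit[::-1]

# a small example: a "bow tie" of two triangles sharing vertex 0
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
print(eulerian_circuit(edges, 0))  # [0, 4, 3, 0, 2, 1, 0] - each edge used exactly once
```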
I am interested in changing the null hypotheses using `glm()` in R. For example: ``` x = rbinom(100, 1, .7) summary(glm(x ~ 1, family = "binomial")) ``` tests the hypothesis that $p = 0.5$. What if I want to change the null to $p$ = some arbitrary value, within `glm()`? I know this can be done also with `prop.test()` and `chisq.test()`, but I'd like to explore the idea of using `glm()` to test all hypotheses relating to categorical data.
Look at confidence interval for parameters of your GLM: ``` > set.seed(1) > x = rbinom(100, 1, .7) > model<-glm(x ~ 1, family = "binomial") > confint(model) Waiting for profiling to be done... 2.5 % 97.5 % 0.3426412 1.1862042 ``` This is a confidence interval for log-odds. For $p=0.5$ we have $\log(odds) = \log \frac{p}{1-p} = \log 1 = 0$. So testing hypothesis that $p=0.5$ is equivalent to checking if confidence interval contains 0. This one does not, so hypothesis is rejected. Now, for any arbitrary $p$, you can compute log-odds and check if it is inside confidence interval.
I have a data set with lots of zeros that looks like this: ``` set.seed(1) x <- c(rlnorm(100),rep(0,50)) hist(x,probability=TRUE,breaks = 25) ``` I would like to draw a line for its density, but the `density()` function uses a moving window that calculates negative values of x. ``` lines(density(x), col = 'grey') ``` There is a `density(... from, to)` arguments, but these seem to only truncate the calculation, not alter the window so that the density at 0 is consistent with the data as can be seen by the following plot : ``` lines(density(x, from = 0), col = 'black') ``` (if the interpolation was changed, I would expect that the black line would have higher density at 0 than the grey line) Are there alternatives to this function that would provide a better calculation of the density at zero? ![enter image description here](https://i.stack.imgur.com/1IQTU.png)
I'd agree with Rob Hyndman that you need to deal with the zeroes separately. There are a few methods of dealing with kernel density estimation of a variable with bounded support, including 'reflection', 'renormalisation' and 'linear combination'. These don't appear to have been implemented in R's `density` function, but are available in [Benn Jann's `kdens` package for Stata](http://fmwww.bc.edu/RePEc/bocode/k/kdens.pdf).
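As a rough illustration of the idea (not of the `kdens` implementation), one can treat the zeroes as a separate point mass and apply the reflection method to the positive part. A sketch in Python with SciPy, on simulated data in the spirit of the question:

```
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
x = np.concatenate([rng.lognormal(size=100), np.zeros(50)])

p_zero = np.mean(x == 0)          # point mass at zero, handled separately
pos = x[x > 0]

# reflection: estimate the density of the positive part on data mirrored about 0,
# then fold it back onto [0, inf) by doubling
kde = gaussian_kde(np.concatenate([pos, -pos]))
grid = np.linspace(0, pos.max(), 200)
dens_pos = 2 * kde(grid)          # boundary-corrected density of the positive part

# density of the continuous component of the mixture; the mass p_zero sits exactly at 0
dens = (1 - p_zero) * dens_pos
```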
I did lot of research on internet but couldn't get my answer. I want to know what is the difference between the word size and CPU architecture? For eg.- I read that CPU of 32-bit architecture can address 2^32 memory locations. Now if we consider that one address points to a single byte, then it means we can have memory size of 2^32 Bytes. We can call it byte addressable memory. Now, I also read that word size is the size of register in a CPU, i.e., it is the minimum no. of bits on which a CPU works upon. So, help me understand here what is the word size of this computer organization? Is it 32-bit or 1 Byte (as memory is byte addressable)? What if one address could point 2 Bytes of data instead of 1 Byte? What would be word size then? Consider the following link. [GATE CS Questions on computer organisation](https://www.geeksforgeeks.org/computer-organization-problem-solving-instruction-format/) Question no. 3 from this link is as follows- > > A machine has a 32-bit architecture, with 1-word long instructions. It has 64 registers, each of which is 32 bits long. It needs to support 45 instructions, which have an immediate operand in addition to two register operands. Assuming that the immediate operand is an unsigned integer, the maximum value of the immediate operand is > > > Now, the very first line of the solution says- > > As machine has 32-bit architecture, therefore, 1 word = 32 bits = instruction size > > > How can we say that since CPU is 32-bit, word size is also 32 bit. Doesn't 32-bit architecture just mean that the no. of addresses in the physical memory is 2^32? And also doesn't the word size mean the size of memory each of those 2^32 addresses would point to?
It's simple - 32 bit word size, or 32 bit architecture, means exactly what the person using the expression means. No more, no less. You can't draw any conclusions from it. For example, there's an Intel 32 bit architecture that can address 64 GB of physical RAM, and no Intel 64 bit architecture that can address anything near 2^64 bytes of physical or virtual memory, while 32 bit PowerPC supported 2^53 bytes of virtual memory. 64 bit POWER has 32 bit instructions. 32 bit Intel has instructions from 8 bit to over 100 bit. 32 bit ARM is what it is, and 64 bit ARM is what it is. Same with 32 and 64 bit x86, except they are different. And so on. Don't make any assumptions; they will all be somewhere between wrong and not quite right. If you want to pass an exam, learn the answers that the exam expects (sad but true).
You can have data in wide format or in long format. This is quite an important thing, as the useable methods are different, depending on the format. I know you have to work with `melt()` and `cast()` from the reshape package, but there seems some things that I don't get. Can someone give me a short overview how you do this?
There are several resources on Hadley Wickham's [website](http://had.co.nz/reshape/) for the package (now called `reshape2`), including a link to a [paper](http://www.jstatsoft.org/v21/i12) on the package in the Journal of Statistical Software. Here is a brief example from the paper: ``` > require(reshape2) Loading required package: reshape2 > data(smiths) > smiths subject time age weight height 1 John Smith 1 33 90 1.87 2 Mary Smith 1 NA NA 1.54 ``` We note that the data are in the wide form. To go to the long form, we make the `smiths` data frame *molten*: ``` > melt(smiths) Using subject as id variables subject variable value 1 John Smith time 1.00 2 Mary Smith time 1.00 3 John Smith age 33.00 4 Mary Smith age NA 5 John Smith weight 90.00 6 Mary Smith weight NA 7 John Smith height 1.87 8 Mary Smith height 1.54 ``` Notice how `melt()` chose one of the variables as the id, but we can state explicitly which to use via argument `'id'`: ``` > melt(smiths, id = "subject") subject variable value 1 John Smith time 1.00 2 Mary Smith time 1.00 3 John Smith age 33.00 4 Mary Smith age NA 5 John Smith weight 90.00 6 Mary Smith weight NA 7 John Smith height 1.87 8 Mary Smith height 1.54 ``` Here is another example from `?cast`: ``` #Air quality example names(airquality) <- tolower(names(airquality)) aqm <- melt(airquality, id=c("month", "day"), na.rm=TRUE) ``` If we store the molten data frame, we can *cast* into other forms. In the new version of `reshape` (called `reshape2`) there are functions `acast()` and `dcast()` returning an array-like (array, matrix, vector) result or a data frame respectively. These functions also take an aggregating function (eg `mean()`) to provide summaries of data in molten form. For example, following on from the Air Quality example above, we can generate, in wide form, monthly mean values for the variables in the data set: ``` > dcast(aqm, month ~ variable, mean) month ozone solar.r wind temp 1 5 23.61538 181.2963 11.622581 65.54839 2 6 29.44444 190.1667 10.266667 79.10000 3 7 59.11538 216.4839 8.941935 83.90323 4 8 59.96154 171.8571 8.793548 83.96774 5 9 31.44828 167.4333 10.180000 76.90000 ``` There are really only two main functions in `reshape2`: `melt()` and the `acast()` and `dcast()` pairing. Look at the examples in the help pages for these two functions, see Hadley's website (link above) and look at the paper I mentioned. That should get you started. You might also look into Hadley's [`plyr` package](http://had.co.nz/plyr/) which does similar things to `reshape2` but is designed to do a whole lot more besides.
Let's say I want to create a line of best fit to approximate the relationship between years of golf experience, and average golf score. If I have only 4 data points, my line of best fit will have a lot of noise. Is there an equation I can use to say how good the line of best fit is based on the number of data points are used to create it? I guess we could say quality = theNumberOfDataPoints, but it doesn't seem like a linear relationship to me... Is it maybe the square root of the number of data points?
You mention your line of best fit, so you are thinking graphically. You could also show the "quality" graphically. In that case I would suggest plotting, along with the line of best fit itself, the upper and lower bounds of: * the 95 % confidence interval (of the mean DV for different values of IV), and * the 95 % prediction interval (of DVs predicted by the model from different values of IV). There are some examples here: <http://www.medcalc.org/manual/scatter_diagram_regression_line.php> ...and here is a simple one for data just like yours, with * 95 % confidence interval bounds in red * 95 % prediction interval bounds in orange ![graph with 95 % confidence interval bounds and 95 % prediction interval bounds plotted](https://i.stack.imgur.com/5FZ8N.png) R code: ( ``` a <- c(5,10,15,20) score <- c(95,82,75,69) plot(a,score) model.lm <- lm(score ~ a) abline(model.lm,col="grey30") frame = data.frame(a,score) newx <- seq(0,25) prdConf <- predict(model.lm, newdata=data.frame(a=newx), interval = c("confidence"), level = 0.95, type="response") prdPred <- predict(model.lm, newdata=data.frame(a=newx), interval = c("prediction"), level = 0.95, type="response") lines(newx,prdConf[,2],col="red",lty=2) lines(newx,prdConf[,3],col="red",lty=2) lines(newx,prdPred[,2],col="orange",lty=2) lines(newx,prdPred[,3],col="orange",lty=2) # with help from [https://stat.ethz.ch/pipermail/r-help/2007-November/146285.html][3] ``` ) In my view plots like this should be actually be standard practice (especially the 95 % prediction interval) since it communicates the predictions made by the model so clearly, but I have only seen it rarely.
Recently I found an interesting algorithm book entitled ['Explaining Algorithms Using Metaphors' (Google books)](http://books.google.com.hk/books?id=jfBGAAAAQBAJ&pg=PA1&lpg=PA1&dq=explaining+algorithms+using+metaphors&source=bl&ots=WzN9sRRw6e&sig=LZiwzBgOxMK_iXdijfTjQoX6ZFY&hl=en&sa=X&ei=TPBVVPLQO-TYmgW0t4KYBw&ved=0CEEQ6AEwBQ#v=onepage&q=explaining%20algorithms%20using%20metaphors&f=false) by Michal Forišek and Monika Steinová. "Good" metaphors help people understand and even visualize the abstract concepts and ideas behind algorithms. For example, > > One well-known exposition of the shortest path using the balls-and-strings model > looks as follows: To find the shortest path between $s$ and $t$, one just grabs the corresponding two balls and tries to pull them apart. > > > My question: > > I would like to see as many metaphors as possible for computer science algorithms/concepts/ideas. > > Do you know any? Do you have your own ones? > > >
B-Trees, N-ary Trees, Autocracy and Democracy <http://rkvsraman.blogspot.in/2008/08/b-trees-n-ary-trees-autocracy-and.html>
I encountered below statement by *Alan M. Turing* [here](https://justinmeiners.github.io/lc3-vm/index.html#1:5): > > "The view that machines cannot give rise to surprises is due, I > believe, to a fallacy to which philosophers and mathematicians are > particularly subject. This is the assumption that as soon as a fact is > presented to a mind all consequences of that fact spring into the mind > simultaneously with it. It is a very useful assumption under many > circumstances, but one too easily forgets that it is false." > > > I am not a native English speaker. Could anyone explain it in plain English?
> > Mathematicians and philosophers often assume that machines (and here, he probably means "computers") cannot surprise us. This is because they assume that once we learn some fact, we immediately understand every consequence of this fact. This is often a useful assumption, but it's easy to forget that it's false. > > > He's saying that systems with simple, finite descriptions (e.g., Turing machines) can exhibit very complicated behaviour and that this surprises some people. We can easily understand the concept of Turing machines but then we realise that they have complicated consequences, such as the undecidability of the halting problem and so on. The technical term here is that "knowledge is not closed under deduction". That is, we can know some fact $A$, but not know $B$, even though $A$ implies $B$. Honestly, though, I'm not sure that Turing's argument is very good. Perhaps I have the benefit of writing nearly 70 years after Turing, and my understanding is that the typical mathematician knows much more about mathematical logic than they did in Turing's time. But it seems to me that mathematicians are mostly quite familiar with the idea of simple systems having complex behaviour. For example, every mathematician knows the definition of a [group](https://en.wikipedia.org/wiki/Group_(mathematics)), which consists of just four simple axioms. But nobody – today or then – would think, "Aha. I know the four axioms, therefore I know every fact about groups." Similarly, Peano's axioms give a very short description of the natural numbers but nobody who reads them thinks "Right, I know every theorem about the natural numbers, now. Let's move on to something else."
I want to generate a bivariate Gaussian dataset. The dataset includes a total of 800 results drawn randomly from four two-dimensional Gaussian classes with means $(-3,0)'$, $(0,0)'$, $(3,0)'$, and $(6,0)'$, all with the same variance-covariance matrix $$\Sigma = \pmatrix{0.5 & 0.05 \\ 0.05 &0.5}.$$ How can I do that in MATLAB? I'm not expert in MATLAB.
First, the means. ``` mu = 3*floor(rand(1,800)*4) - 3; % first dimension means mu = [mu; zeros(1,800)]; % add second dimension ``` Given info on [multivariate normal random deviate generation](http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Drawing_values_from_the_distribution), [Cholesky factorization](http://www.mathworks.com/help/techdoc/ref/chol.html), and [MATLAB's builtin normal random number generator](http://www.mathworks.com/help/techdoc/ref/randn.html), you'll be able to understand the code below. It generates a 2-by-800 matrix, each column of which is sampled from the mixture distribution you specified in the question. ``` SIGMA = [.5 .05; .05 .5]; A = chol(SIGMA); randvec = mu + A'*randn(2,800); ``` I have not tested this code properly -- I don't have a local copy of MATLAB. I tried it in an [Octave web interface](http://lavica.fesb.hr/octave/octave-on-line_en.php) and it seemed to work.
Let's say we are running an A/B testing and each data point has a binary response. We would like to test whether the ratio of true are different between A and B. (e.g. ask a yes/no question to both group A and group B and would like to test if there is difference in the ratio of "Yes" between the two groups) I understand I can apply z-test if we can approximate the distribution of the number of true data (modeled as binomial distribution) as normal distribution, but there are cases that we can not approximate the binomial distribution as normal distribution. So my question is, is there any statistical test available for given two binomial distributions $A \sim \mathrm{Bin}(n, p\_a)$ and $B \sim \mathrm{Bin}(m, p\_b)$ where $n$ and $m$ are the sample size of A and B to test if $p\_a$ and $p\_b$ are different without approximation to normal/Poisson distribution?
You can just use a [Fisher Exact Test](https://en.wikipedia.org/wiki/Fisher%27s_exact_test). Let us know if you have trouble following what it does. Not super related, but if you're thinking of difference of binomials, it's nice to convince yourself that if $p\_1 \neq p\_2$, then the difference is not itself a binomial! I think that's kinda fun to think about.
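If it helps, Fisher's exact test is available off the shelf; for example, in Python with SciPy (the counts below are made up):

```
from scipy.stats import fisher_exact

# 2x2 table of counts: rows are groups A and B, columns are yes / no
successes_a, n_a = 45, 100
successes_b, n_b = 30, 90
table = [[successes_a, n_a - successes_a],
         [successes_b, n_b - successes_b]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)
```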
A machine outputs either a 0 or a 1 each second. We denote this output at time $t$ as $b\_t$. The probability that it outputs 1 is $p\_t$ at time $t$. How do we go about studying the change in $p\_t$ in $t$? This problem depends on the window in which you restrict your analysis. If your window is just 1 second, the probability changes very often but if the window encompasses the whole set of data you have, then you can not talk of any change. But determining the window demands knowledge of how $p\_t$ changes. I would like to be pointed to references relating to this problem. Thank you.
You can perform the Bayesian updating of the parameters as you would with an unconstrained $\lambda$, then adjust the posterior to reflect the constraints by limiting the range of $\lambda$ to $[0.2,0.7]$ and renormalizing appropriately. In your case, as you realized, you'd have to be able to integrate the posterior Beta distribution to find the normalization constant. If, by good fortune or planning, you've picked a prior on $\lambda$ with integer parameters $a$ and $b$ (as you have done in the question), then the posterior will have integer parameters $a'$ and $b'$, and there is a closed form solution for the incomplete beta function: $$I\_x(a,b) = {1\over \text{B}(a,b)}\int\_0^x p^{a-1}(1-p)^{b-1}\,\text{d}p=\sum\_{j=a}^{a+b-1}{a+b-1\choose j}x^j(1-x)^{a+b-1-j}$$ ... admittedly awkward if $a$ and/or $b$ are large. A check of the formula, using R:

```
Ix <- function(x,a,b) {
  res <- 0
  m <- a + b - 1
  for (j in a:m) {
    res <- res + choose(m, j)*x^j*(1-x)^(m-j)
  }
  res
}
```

and comparison with the cumulative Beta distribution in the same language:

```
> Ix(0.4,3,5)
[1] 0.580096
> pbeta(0.4,3,5)
[1] 0.580096
```

In your case, the normalization of the posterior would involve dividing by $I\_{0.7}(a',b')-I\_{0.2}(a',b')$. If, on the other hand, you don't have integer parameters, you will be forced to resort to using an infinite series expansion or numerical integration; the latter will probably work well enough given the smoothness of the functions. Of course, languages such as R and Python / scikit have functions that will evaluate the CDF of the Beta distribution for you, so you can avoid the whole issue by simply using the available canned routines to find the normalization constant.
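As the last paragraph suggests, the canned routines make this a one-liner. For instance, a sketch in Python with SciPy, using made-up posterior parameters $a' = 3$, $b' = 5$:

```
from scipy.stats import beta

a_post, b_post = 3, 5   # posterior Beta parameters after the usual update

# normalization constant of the posterior truncated to [0.2, 0.7]
Z = beta.cdf(0.7, a_post, b_post) - beta.cdf(0.2, a_post, b_post)

# truncated posterior density, zero outside [0.2, 0.7]
def truncated_posterior(lmbda):
    return beta.pdf(lmbda, a_post, b_post) / Z if 0.2 <= lmbda <= 0.7 else 0.0
```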
I always read in books that when we do classification or machine learning tasks it's always better to normalize the features so to make them in one range like 0-1. Today I used weka to play with Iris dataset. First I just built a J48 classifier without normalizing the values, and the it made perfect performance. However when I normalized all the features to be in the range 0-1, the classifier made so much mistakes. Why is that? Shouldn't normalization be used always?
If your attributes already have a meaningful and **comparable scale** then normalization can destroy important information. Take, e.g., data coming from a physical experiment. Coordinates are measured in x, y, z, and each axis is in millimeters. Since the experiment is performed on a flat dish, x and y vary over the range 0-100 (i.e. 10 centimeters), but the z axis only varies from 0-10 (i.e. a 1 cm high box). Normalizing such data will greatly emphasize the z axis, which most likely is not supported by a physical interpretation of the results. Key point of the story: understanding your data is essential. Normalization is a hotfix if you don't understand the scales of your data.
I took a class once on Computability and Logic. The material included a correlation between complexity / computability classes (R, RE, co-RE, P, NP, Logspace, ...) and Logics (Predicate calculus, first order logic, ...). The correlation included several results in one fields, that were obtained using techniques from the other field. It was conjectured that P != NP could be attacked as a problem in Logic (by projecting the problem from the domain of complexity classes to logics). Is there a good summary of these techniques and results?
It's possible that you're asking about results in finite model theory (such as the characterization of P and NP in terms of various fragments of logic). The recent attempted proof of P != NP initially made heavy use of such concepts, and some good references (taken from the [wiki](http://michaelnielsen.org/polymath1/index.php?title=Deolalikar_P_vs_NP_paper)) are * [Erich Gradel's review of FMT and descriptive complexity](http://www.logic.rwth-aachen.de/pub/graedel/FMTbook-Chapter3.pdf) * [Ron Fagin's article on descriptive complexity](http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=38D2317557C5EF25F3ED6D0D3CBE6E69?doi=10.1.1.28.6917&rep=rep1&type=pdf)
I'd like to know whether the solution proposed below is valid/acceptable and any justification available. We have two biological conditions, and for each condition we measured 3 time series, so at each time point we have up to 3 data points per condition. For a priori reasons we believe the time series follow a Gaussian mixture model. **We'd like to test for a significant difference in means between conditions at a small number of time points** (e.g. t = 14, 28, and 38). The plot below is an example, but we have thousands of similar comparisons, and most are not so clear. What's the best way to do this? [![the data](https://i.stack.imgur.com/GylbY.png)](https://i.stack.imgur.com/GylbY.png) **A first approach** was to compute a t-test at the time points of interest (after transforming the data into something approximately normal). However this gave us at most 6 data points for each comparison, threw out a ton of data, and it didn't use our assumption about the Gaussian mixture model. **Here is the proposed solution.** Fit a Gaussian mixture model to both conditions, calculate confidence intervals around the fitted curve, and use these confidence intervals to assess significance at given times. That is, we would evaluate whether the fitted models differ at the time points of interest. (See below.) I could see a problem with this approach if the Gaussian mixture models don't describe the raw data, but in general they do very well (mean R^2 is around 0.9). So, given that Gaussian mixture models seem to be a good description of our data, **is this a valid/acceptable solution? If so, how would we carry it out?** [![enter image description here](https://i.stack.imgur.com/7SmHP.png)](https://i.stack.imgur.com/7SmHP.png) Edit: I asked a similar question [here](https://stats.stackexchange.com/questions/198371/test-for-difference-between-small-number-of-gaussian-like-time-series) and linked to a different, also similar question where [Rob Hyndman suggested using a parametric bootstrap](https://stats.stackexchange.com/a/3617/90427). Unfortunately I'm really not familiar with parametric bootstrapping, so I'm not sure whether it would be appropriate here. [This explanation of parametric boostrap](https://stats.stackexchange.com/a/54855/90427) says it involves "positing a model on the statistic you want to estimate" (in our case the means?), and estimating those parameters "by repeated sampling of the ecdf". I believe the ecdf is just our data. However I'm not clear how to go from there (bootstrapped distributions of parameters for the Gaussian mixture models) to answering whether there's a significant difference at a specific time point. My understanding is that parametric bootstrapping directly compares the model parameters, i.e. Gaussian parameters in our case. But we're not interested in comparing model parameters. Instead we want to compare the model value, e.g. y\_condition1(t=38) vs y\_condition2(t=38). So I'm not sure parametric bootstrapping is the way to go.
The approach you suggest is to fit a model to sampled data from each condition, then use some measure of the models' variability to test whether they're different at a particular set of time points. This seems similar to some established regression techniques, which involve assuming you know the basic functional form, then using that form to do bootstrapping. These methods are the parametric bootstrap and bootstrapping residuals (more about this in a second). So, I think your general idea makes sense; the question is how to implement it. After fitting models to the time series, the question is how to estimate variability at the time points of interest. A bootstrapping approach could work. But, it won't be possible to use the simple bootstrap (i.e. to resample the data points) because the data are correlated in time. That's why Rob Hyndman suggested the parametric bootstrap. In that approach, you'd fit a model to each time series, repeatedly simulate new data from the model, then run statistics on the simulated data. The model in this case wouldn't be a simple curve, but a generative model (i.e. it would have to give a probability distribution from which you could sample new points). Here's a paper using that approach. They use Gaussian process regression to model the time series and do parametric bootstrapping. Their method might work well for your data. You'd use the same model fitting and bootstrap procedure, but the thing you'd test would be the equality of the mean at particular time points. > > [Kirk and Stumpf (2009)](http://bioinformatics.oxfordjournals.org/content/25/10/1300.full). Gaussian process regression bootstrapping: exploring the effects of uncertainty in time course data > > > Another possibility along similar lines is to resample the residuals. The procedure would look like this: Let's say the sampled time series is $\{x\_1, ..., x\_n\}$ for the first condition and $\{y\_1, ..., y\_n\}$ for the second condition. Say we're interested in the differences at time points $t\_1, t\_2, t\_3$. 1. Fit a model to each sampled time series (e.g. weighted sum of Gaussian basis functions, as you're currently using). The model should fit the data well and represent your beliefs about its temporal structure. 2. Evaluate the models at each sampled time point. Call these fitted values $\hat{x}\_i$ and $\hat{y}\_i$ for each time point $i$. 3. Compute the residuals at each time point. $a\_i = x\_i - \hat{x}\_i, b\_i = y\_i - \hat{y}\_i$ 4. Generate a synthetic version of each time series by resampling the residuals: * For each condition at each time point, randomly draw a residual (from all time points, with replacemement), and add it to the fitted value (at that time point). * For condition 1 at time point $i$, the synthetic time series is $x^\*\_i = \hat{x}\_i + a\_j$, where $j$ is a randomly chosen integer from $1$ to $n$. * For condition 2 at time point $i$, the synthetic time series is $y^\*\_i = \hat{y}\_i + b\_k$, where $k$ is a separate, randomly chosen integer from $1$ to $n$. 5. Fit a new model to each of the synthetic time series. * The models should have the same functional form as used to fit the original data. * The model fit to $x^\*$ is $f\_{(i)}(t)$. The model fit to $y^\*$ is $g\_{(i)}(t)$ * The subscript $i$ denotes the current bootstrap sample (i.e. the number of times we've run through the loop). * The models are continuous functions of time, so they can be evaluated at any time point $t$. 6. 
Evaluate the new models (fit to the synthetic data) at the time points of interest, where we want to calculate the differences. That is, calculate $f\_{(i)}(t)$ and $g\_{(i)}(t)$ for $t \in \{t\_1, t\_2, t\_3\}$. Record these values. 7. Repeat steps 4-6 many times (e.g. 10,000). Each iteration will produce a single bootstrap sample. 8. We now have a set of bootstrapped function values at each of the time points of interest. That is: $f\_{(i)}(t)$ and $g\_{(i)}(t)$ for $t \in \{t\_1, t\_2, t\_3\}$ and for $i \in \{1, ..., 10000\}$ 9. For each time point of interest $t$, run a statistical test comparing the bootstrapped values $f\_{(i)}(t)$ vs. $g\_{(i)}(t)$. I.e. test the null hypothesis that there's no difference. Or, even better, calculate a confidence interval on the difference, since it's more informative. Resampling the residuals relies on the assumption that the residuals are identically distributed, so it would be good to check that this is true. This condition could be violated, for example, if the variance [changes over time](https://en.wikipedia.org/wiki/Heteroscedasticity). Possibly of interest: [This chapter](http://www.sagepub.com/sites/default/files/upm-binaries/21122_Chapter_21.pdf) describes bootstrapping residuals. [These notes](http://www.stat.cmu.edu/~cshalizi/uADA/13/lectures/which-bootstrap-when.pdf) briefly compare the simple bootstrap, parametric bootstrap, and bootstrapping the residuals.
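In case it helps to see the loop spelled out, here is a minimal Python/NumPy sketch of steps 1-9. `fit_curve` is a placeholder for whatever routine you already use to fit the weighted sum of Gaussians (it should return a function of time), the polynomial fit in the demo is only a stand-in so the snippet runs end to end, and the 95% percentile interval at the end is one simple way to do step 9:

```python
import numpy as np

def bootstrap_difference(t, x, y, fit_curve, t_query, n_boot=10000, seed=0):
    """Residual bootstrap for the difference between two fitted curves.

    t         : 1-D array of sampled time points
    x, y      : observed series for condition 1 and condition 2 at times t
    fit_curve : callable (t, values) -> callable(t_new); placeholder for your
                Gaussian-mixture curve fit
    t_query   : time points at which to compare the two curves
    """
    rng = np.random.default_rng(seed)
    n = len(t)

    # Steps 1-3: fit each series, evaluate the fits, compute residuals
    f_hat, g_hat = fit_curve(t, x), fit_curve(t, y)
    a, b = x - f_hat(t), y - g_hat(t)

    diffs = np.empty((n_boot, len(t_query)))
    for i in range(n_boot):
        # Step 4: synthetic series = fitted values + resampled residuals
        x_star = f_hat(t) + rng.choice(a, size=n, replace=True)
        y_star = g_hat(t) + rng.choice(b, size=n, replace=True)
        # Steps 5-6: refit and evaluate at the time points of interest
        f_i, g_i = fit_curve(t, x_star), fit_curve(t, y_star)
        diffs[i] = f_i(t_query) - g_i(t_query)

    # Step 9: percentile confidence interval on the difference at each t_query
    return np.percentile(diffs, [2.5, 97.5], axis=0)

# Tiny runnable demo with a polynomial standing in for the Gaussian-mixture fit
rng = np.random.default_rng(1)
t = np.arange(0, 48, 2.0)
x = np.exp(-(t - 20) ** 2 / 50) + 0.05 * rng.standard_normal(t.size)
y = 1.2 * np.exp(-(t - 24) ** 2 / 60) + 0.05 * rng.standard_normal(t.size)
poly_fit = lambda tt, vv: np.poly1d(np.polyfit(tt, vv, deg=6))
print(bootstrap_difference(t, x, y, poly_fit, np.array([14, 28, 38]), n_boot=2000))
```

If the residual spread clearly changes over time (the heteroscedasticity caveat above), one possible adjustment is to resample residuals only within neighbouring time windows rather than across the whole series.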
What is the difference between the statements: **converges in probability** and **converges in distribution** I am looking for a more intuitive explanation and *simple formulas* rather than abstract mathematical definitions. (I have no knowledge of measure theory or other abstract mathematical concepts). I didn't find my answer in this topic: [Convergence in probability and distribution](https://stats.stackexchange.com/questions/266908/convergence-in-probability-and-distribution)
Trying my best to use only simple formulas and be intuitive. Consider a sequence of random variable $X\_1, X\_2, \ldots $, and another random variable $X$. **Convergence in distribution**. For each $n$, let us take realizations (generate many samples) from $X\_n$, and make a histogram. So we now have a sequence of histograms, one for each $X\_n$. If the histograms in the sequence look more and more like the histogram of $X$ as $n$ progresses, we say that $X\_n$ converges to $X$ in distribution (denoted by $X\_n \stackrel{d}{\rightarrow} X$). **Convergence in probability.** To explain convergence in probability, consider the following procedure. For each $n$, * repeat many times: + Jointly sample $(x\_n, x)$ from $(X\_n, X)$. + Find the absolute difference $d\_n = |x\_n - x|$. * We have got many values of $d\_n$. Make a histogram out of these. Call this histogram $H\_n$. If the histograms $H\_1, H\_2, \ldots $ become skinnier, and concentrate more and more around 0, as $n$ progresses, we say that $X\_n$ converges to $X$ in probability (denoted by $X\_n \stackrel{p}{\rightarrow} X$). **Discussion** Convergence in distribution only cares about the histogram (or the **distribution**) of $X\_n$ relative to that of $X$. As long as the histograms look more and more like the histogram of $X$, you can claim a convergence in distribution. By contrast, convergence in probability cares also about the realizations of $X\_n$ and $X$ (hence the distance $d\_n$ in the above procedure to check this). > > $(\dagger)$ Convergence in probability requires that the probability that the values drawn > from $X\_n$ and $X$ match (i.e., low $d\_n = |x\_n - x|$) gets higher and higher as > $n$ progresses. > > > This is a stronger condition compared to the convergence in distribution. Obviously, if the values drawn match, the histograms also match. This is why convergence in probability implies convergence in distribution. The converse is not necessarily true, as can be seen in Example 1. **Example 1.** Let $Z \sim \mathcal{N}(0,1)$ (the standard normal random variable). Define $X\_n := Z$ and $X := -Z$. It follows that $X\_n \stackrel{d}{\rightarrow} X$ because we get the same histogram (i.e., the bell-shaped histogram of the standard normal) for each $X\_n$ and $X$. Now, it turns out that $X\_n$ **does not** converge in probability to $X$. This is because $(\dagger)$ does not hold. The reason is that realizations of $X\_n$ and $X$ will never match, no matter how large $n$ is. Yet, they share the same histogram! **Example 2.** Let $Y, Z \sim \mathcal{N}(0,1)$. Define $X\_n := Z + Y/n$, and $X := Z$. It turns out that $X\_n \stackrel{p}{\rightarrow} X$ (and hence $X\_n \stackrel{d}{\rightarrow} X$). Read the text at $(\dagger)$ again. In this case, the values drawn from $X\_n$ and $X$ are almost the same, with $x\_n$ corrupted by the added noise, drawn from $Y/n$. However, since the noise is weaker and weaker as $n$ progresses, eventually $Y/n=0$, and we get the convergence in probability. **Example 3.** Let $Y, Z \sim \mathcal{N}(0,1)$. Define $X\_n := Z + n Y$, and $X := Z$. We see that $X\_n$ does not converge in distribution. Hence, it does not converge in probability.
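If it helps to see the two procedures in action, here is a small simulation sketch (my own illustration, not part of the definitions). It prints the average of $d\_n = |x\_n - x|$ for Examples 1 and 2: the averages stay put in Example 1 but shrink with $n$ in Example 2, even though the histograms of $X\_n$ match that of $X$ in both cases.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100_000  # number of joint draws per n

for n in (1, 10, 100):
    z = rng.standard_normal(m)
    y = rng.standard_normal(m)

    # Example 1: X_n = Z, X = -Z  (same histogram, but draws never match)
    d1 = np.abs(z - (-z))
    # Example 2: X_n = Z + Y/n, X = Z  (differences shrink as n grows)
    d2 = np.abs((z + y / n) - z)

    print(n, round(d1.mean(), 3), round(d2.mean(), 3))
```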
In the paper called [Deep Learning and the Information Bottleneck Principle](https://arxiv.org/abs/1503.02406) the authors state in section II A) the following: > > Single neurons classify only linearly separable inputs, as they can > implement only hyperplanes in their input space $u = wh+b$. > Hyperplanes can optimally classify data when the inputs are > conditioanlly indepenent. > > > To show this, they derive the following. Using Bayes theorem, they get: $p(y|x) = \frac{1}{1 + exp(-log\frac{p(x|y)}{p(x|y')} -log\frac{p(y)}{p(y')})} $ (1) Where $x$ is the input, $y$ is the class and $y'$ is the predicted class (I assume, $y'$ not defined). Continuing on, they state that: $\frac{p(x|y)}{p(x|y')} = \prod^N\_{j=1}[\frac{p(x\_j|y)}{p(x\_j|y')}]^{np(x\_j)} $ (2) Where $N$ is the input dimension and $n$ I'm not sure (again, both are undefined). Considering a sigmoidal neuron, with the sigmoid activation function $\sigma(u) = \frac{1}{1+exp(-u)}$ and preactivation $u$, after inserting (2) into (1) we get the optimal weight values $w\_j = log\frac{p(x\_j|y)}{p(x\_j|y')}$ and $b=log\frac{p(y)}{p(y')}$, when the input values $h\_j=np(x\_j)$. Now on to my questions. I understand how inserting (2) into (1) leads to the optimal weight and input values $w,b,h$. What I do not understand however, is the following: 1. How is (1) derived using Bayes theorem? 2. How is (2) derived? What is $n$? What is the meaning of it? I assume it has something to do with conditional independence 3. Even if the dimensions of x are conditionally independent, how can one state that it is equal to to its scaled probability? (i.e how can you state $h\_j=np(x\_j)$?) EDIT: The variable $y$ is a binary class variable. From this I assume that $y'$ is the "other" class. This would solve question 1. Do you agree?
This is a model setup where the authors are using a special form of Bayes theorem that applies when you have a binary variable of interest. They first derive this special form of Bayes theorem as Equation (1), and then they show that the condition in Equation (2) leads them to the linear form specified for their network. It is important to note that the latter equation *is not derived* from previous conditions --- rather, it is a *condition* for the linear form they are using for their network. --- **Deriving the first equation:** Equation (1) in the paper is just a form of Bayes theorem that frames the conditional probability of interest in terms of the [standard logistic (sigmoid) function](https://en.wikipedia.org/wiki/Logistic_function) operating on functions of the likelihood and prior. Taking $y$ and $y'$ to be the two binary outcomes of the random variable $Y$, and applying Bayes theorem, gives: $$\begin{equation} \begin{aligned} p(y|\mathbf{x}) = \frac{p(y,\mathbf{x})}{p(\mathbf{x})} &= \frac{p(\mathbf{x}|y) p(y)}{p(\mathbf{x}|y) p(y)+p(\mathbf{x}|y') p(y')} \\[6pt] &= \frac{1}{1+ p(\mathbf{x}|y') p(y')/p(\mathbf{x}|y) p(y)} \\[6pt] &= \frac{1}{1+ \exp \Big( \log \Big( \tfrac{p(\mathbf{x}|y') p(y')}{p(\mathbf{x}|y) p(y)} \Big) \Big)} \\[6pt] &= \frac{1}{1+ \exp \Big( - \log \tfrac{p(\mathbf{x}|y)}{p(\mathbf{x}|y')} - \log \tfrac{p(y)}{p(y')} \Big)} \\[6pt] &= \text{logistic} \Bigg( \log \frac{p(\mathbf{x}|y)}{p(\mathbf{x}|y')} + \log \frac{p(y)}{p(y')} \Bigg). \\[6pt] \end{aligned} \end{equation}$$ **Using Equation (2) as a condition for the linear form of the network:** As stated above, this equation is not something that is derived from previous results. Rather, it is a sufficient *condition* that leads to the linear form that the authors use in their model --- i.e., the authors are saying that *if* this equation holds, then certain subsequent results follow. Letting the input vector $\mathbf{x} = (x\_1,...,x\_N)$ have length $N$, if Equation (2) holds, then taking logarithms of both sides gives: $$\begin{equation} \begin{aligned} \log \frac{p(\mathbf{x}|y)}{p(\mathbf{x}|y')} &= \log \prod\_{i=1}^N \Big[ \frac{p(x\_i|y)}{p(x\_i|y')} \Big]^{n p (x\_i)} \\[6pt] &= \sum\_{i=1}^N n p (x\_i) \log \Big[ \frac{p(x\_i|y)}{p(x\_i|y')} \Big] \\[6pt] &= \sum\_{i=1}^N h\_i w\_i. \\[6pt] \end{aligned} \end{equation}$$ Under this condition, we therefore obtain the posterior form: $$\begin{equation} \begin{aligned} p(y|\mathbf{x}) &= \text{logistic} \Bigg( \log \frac{p(\mathbf{x}|y)}{p(\mathbf{x}|y')} + \log \frac{p(y)}{p(y')} \Bigg) \\[6pt] &= \text{logistic} \Bigg( \sum\_{i=1}^N h\_i w\_i + b \Bigg), \\[6pt] \end{aligned} \end{equation}$$ which is the form that the authors are using in their network. This is the model form postulated by the authors in the background section, prior to specifying Equations (1)-(2). The paper does not define what $n$ is in this model setup, but as you point out, the answer by Prof Tishby says that this is the test sample size. In regard to your third question, it appears that the requirement of Equation (2) means that the values in $\mathbf{x}$ are **not** conditionally independent given $y$.
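As a quick numerical sanity check on the first derivation, the identity can be verified with arbitrary made-up likelihoods and priors (nothing below comes from the paper):

```python
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

# Arbitrary likelihoods and priors for the two classes y and y'
p_x_given_y, p_x_given_yp = 0.3, 0.05
p_y, p_yp = 0.4, 0.6

# Bayes theorem computed directly
direct = p_x_given_y * p_y / (p_x_given_y * p_y + p_x_given_yp * p_yp)
# Bayes theorem via the logistic form derived above
via_logistic = logistic(np.log(p_x_given_y / p_x_given_yp) + np.log(p_y / p_yp))

assert np.isclose(direct, via_logistic)  # both give p(y|x)
print(direct, via_logistic)
```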
In soft-margin SVM, once we solve the dual-form optimization problem (using quadratic programming, which is guaranteed to converge on a global optimum because the problem is convex), we get a vector $\alpha$ of coefficients that we can use to recover the hyperplane parameters $\beta$ and $\beta\_0$. The additional constraint in soft-margin (compared to hard-margin) is $0 \leq \alpha\_i \leq C$ for some positive constant $C$. We say that: 1. If $\alpha\_i = 0$, then the corresponding $x\_i$ is outside the margin 2. If $\alpha\_i = C$, then the corresponding $x\_i$ is on the wrong side of the margin 3. If $0 \lt \alpha\_i \lt C$, then the corresponding $x\_i$ is on the margin We recover $\beta$ through the KKT condition $\beta = \sum\_{i}{\alpha\_i y\_i x\_i}$, but we recover $\beta\_0$ by selecting any $x\_i$ whose corresponding $\alpha\_i$ satisfies the third condition above and solving $y\_i(\beta^T x\_i + \beta\_0)=1$ (we should get the same value for $\beta\_0$ using any such point satisfying the third condition above). Here's my question: given an optimal $\alpha$, is it possible for the third condition to not be satisfied for *any* $\alpha\_i$? For example, I actually have a dataset that's linearly separable for which I'm trying to train a soft-margin SVM with $C=0.5$, but I'm finding that every value in $\alpha$ is either $0$ or $0.5$ and that there are no values in between, hence I cannot recover $\beta\_0$. Am I doing something wrong if my $\alpha$ doesn't satisfy the third condition above for any $i$? My intuition tells me that there should always be at least two points on the margin because the primal optimization problem maximizes the size of the margin; but the $\alpha$ for my dataset is contradicting my intuition. --- If it helps, I'm doing this in MATLAB with the following (pseudo)code: ``` X = 2 by n matrix of linearly separable data points y = n by 1 matrix of class labels (-1 or +1) X_hat = X with each column signed to its label H = X_hat' * X_hat; f = -1 * ones(n, 1); Aeq = y'; beq = 0; lb = zeros(n, 1); ub = 0.5 * ones(n, 1); alpha = quadprog(H, f, [], [], Aeq, beq, lb, ub); % alpha is always 0 or 0.5 ```
No, it's not guaranteed, but if the solution has no alpha in that open interval, then we have a "degenerate" SVM training problem, for which the optimal w=0, and we always predict the majority class. This is shown in Rifkin et al.’s “[A Note on Support Vector Machine Degeneracy](ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-1661.pdf)”, an MIT AI Lab Technical Report. Does your problem have these characteristics? It seems unlikely this would happen with naturally occurring data.
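Not part of Rifkin et al.'s note, but a practical aside on the numerical side of your question: quadratic-programming solvers rarely return alphas that are exactly 0 or exactly C, so it is worth testing the open-interval condition with a small tolerance before concluding that there are no margin points; if there really are none, you can then check whether the degenerate case (w = 0) applies. A NumPy sketch, assuming `X` is an (n, d) data matrix, `y` the labels in {-1, +1} and `alpha` the dual solution from your solver:

```python
import numpy as np

def inspect_solution(alpha, X, y, C, tol=1e-8):
    """Recover beta, and beta0 when margin points exist; flag the degenerate case."""
    beta = (alpha * y) @ X                    # KKT: beta = sum_i alpha_i y_i x_i
    on_margin = (alpha > tol) & (alpha < C - tol)

    if on_margin.any():
        # Average over all margin points for numerical stability:
        # y_i (beta.x_i + beta0) = 1  =>  beta0 = y_i - beta.x_i  (since y_i in {-1,+1})
        beta0 = np.mean(y[on_margin] - X[on_margin] @ beta)
        return beta, beta0
    # No alpha strictly inside (0, C): check for the degenerate case (w ~ 0)
    print("no margin points; ||beta|| =", np.linalg.norm(beta))
    return beta, None
```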
This problem is from the book [1]. In case of being closed as a duplication of that in [2], I first make a defense: * The accepted answer at [2] is still in dispute. * The proof given by `@eh9` is based on Kruskal's algorithm. * I am seeking for a proof independent of any MST algorithms. --- > > ***Problem:*** Let $T$ be an MST of graph $G$. Given a connected subgraph $H$ of $G$, show that $T \cap H$ is contained in some MST of $H$. > > > --- My partial trial is *by contradiction*: > > Suppose that $T \cap H$ is not contained in any MST of $H$. That is to say, for any MST of $H$ (denoted $MST\_{H}$), there exists an edge $e$ such that $e \in T \cap H$, and however, $e \notin MST\_{H}$. > > Now we can add $e$ to $MST\_{H}$ to get $MST\_{H} + {e}$ which contains a cycle (denoted $C$). > > > * Because $MST\_{H}$ is a minimum spanning tree of $H$ and $e$ is not in $MST\_{H}$, we have that every other edge $e'$ than $e$ in the cycle $C$ has weight no greater than that of $e$ (i.e., $\forall e' \in C, e' \neq e. w(e') \le w(e)$). > * There exists at lease one edge (denoted $e''$) in $C$ other than $e$ which is not in $T$. Otherwise, $T$ contains the cycle $C$. > > > Now we have $w(e'') \le w(e)$ and $e \in T \land e'' \notin T$, $\ldots$ > > > I failed to continue... --- 1. [Algorithms, Chapter 5: Greedy algorithms](http://cseweb.ucsd.edu/users/dasgupta/book/index.html) 2. ["Minimum Spanning tree subgraph"@StackOverflow](https://stackoverflow.com/questions/13127446/minimum-spanning-tree-subgraph)
***Flaws in the accepted answer:*** When I re-read the accepted answer given by @Mahmoud A. today, I find flaws in it. Consider the figures for an example: ![mst-subgraph](https://i.stack.imgur.com/5Rskg.png) In this example, $e = CE$. In figure (5), the edge $CD$ in the cycle created by adding edge $e = CE$ into $T\_H$ is not in $T\_G$ shown in figure (2), but we have $w(CD)<w(e=CE)$. So the claim "For every other edge $e' \notin T\_G$ in the cycle we have $w(e′)=w(e)$" does not hold. ***Note:*** In the following, I fix the flaws. I leave the answer given by @Mahmoud A. as accepted because it is original and insightful. ***Correction:*** While there is an edge $e \in T\_G \cap H$ such that $e \notin T\_H$, do: 1. Add $e$ to $T\_H$ to create a cycle $C$. * 1.1 Because $T\_H$ is an MST of $H$, for every other edge $e' \in C$ we have $w(e') \le w(e)$. Otherwise, for any edge $e' \in C$ such that $w(e')>w(e)$, $T\_H + \{ e \} - \{ e' \}$ has smaller weight than $T\_H$. * 1.2 Let $e = (u,v)$. Remove $e$ from $T\_G$. Then vertices $u$ and $v$ are separated into two different connected components, denoted by $U \ni u$ and $V \ni v$, of $T\_G$. Because $T\_H$ is an MST of $H$ (which includes vertices $u$ and $v$) and $e \notin T\_H$, there must be an edge $e'' \in C$ which connects the two components $U$ and $V$ again. We claim that $w(e'') = w(e)$. In 1.1, we have proved that $w(e'') \le w(e)$ (replacing $e'$ there by $e''$). If $w(e'') < w(e)$, then $T\_{G} - \{ e \} + \{ e'' \}$ has smaller weight than $T\_G$. * 1.3 According to 1.1 and 1.2 above, there is an edge $e'' \in C$ such that $e'' \neq e \land w(e'') = w(e)$. Moreover, $e'' \notin T\_G$, because $e''$ connects $U$ and $V$ while $e$ is the only edge of $T\_G$ between $U$ and $V$. We replace $e''$ with $e$ in $T\_H$ to obtain $T'\_H = T\_H - \{ e'' \} + \{ e \} \in MST\_H$. Since $e'' \notin T\_{G} \cap H$ and $e \in T\_{G} \cap H$, $T'\_H$ contains one more common edge with $T\_{G} \cap H$ than $T\_H$ does. 2. Rename $T'\_H$ to $T\_H$. EndWhile. After the loop we have $T\_{G} \cap H \subseteq T\_H$.
What is the probability of drawing 4 aces from a standard deck of 52 cards. Is it: $$ \frac{1}{52} \times \frac{1}{51} \times \frac{1}{50} \times \frac{1}{49} \times 4! $$ or do I simply say it is: $$ \frac{4}{52} = \frac{1}{13} $$
Do you mean getting 4 aces in a row when drawing them one by one from a full deck without replacement? If it is the case, then it is simply multiplication of successive probabilities: $\frac{4}{52}$ \* $\frac{3}{51}$ \* $\frac{2}{50}$ \* $\frac{1}{49}$ = 4! \* $\frac{48!}{52!}$ = 3.6938e-006.
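If you want to double-check the arithmetic, here is a small Python sketch that computes the product exactly and runs a quick simulation (the event is rare, so the simulated estimate is noisy):

```python
from fractions import Fraction
import random

# Exact probability of drawing the 4 aces in the first 4 cards (in any order)
p = Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50) * Fraction(1, 49)
print(p, float(p))  # 1/270725, approximately 3.6938e-06

# Quick simulation check; with ~3.7 expected hits per million draws this is rough
deck = ["A"] * 4 + ["x"] * 48
trials = 1_000_000
hits = sum(all(c == "A" for c in random.sample(deck, 4)) for _ in range(trials))
print(hits / trials)
```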
How would you explain why the Fast Fourier Transform is faster than the Discrete Fourier Transform, if you had to give a presentation about it for the general (non-mathematical) public?
FFT algorithms are faster ways of doing the DFT. It is a family of algorithms and not a single algorithm. How it becomes faster can be explained based on the heart of the algorithm: **Divide and Conquer**. So rather than working with one big signal, we divide our signal into smaller ones and perform the DFT of these smaller signals. At the end we combine all the smaller DFTs to get the actual DFT of the big signal. This gives a great benefit asymptotically, so for large values of N we save a lot!! We shall see how much in a while. Let us try to understand it with a little bit of high school maths: DFT computation for an N-point signal takes N^2 multiplications (this is clear from the basic definition: you have N points, and for each of the points you have to multiply by N complex sinusoids). Now if we divide our N-point signal into two signals S1 and S2 of length N/2, and then perform the DFT of these smaller signals, we will be doing (N/2)^2 multiplications for each of S1 and S2. So the total number of multiplications performed in calculating the DFTs of S1 and S2 will be 2 \* (N/2)^2. Then to get the actual DFT of the N-point signal we need to combine these components. So we can see that we actually need approximately (see Note below) 2 \* (N/2)^2 = (N^2)/2 multiplications, which is *half* of what the N-point signal needs directly. This reduction by half is a result of dividing the signal once. But if we continue dividing the signals again and again, we end up with an amount of computation which is much less than N^2. In fact the overall number of computations using this method comes out to be of the order of N\*log(N). To get an idea of how fast this is compared to the direct DFT, consider a computer which can execute 1 multiplication operation in 1 nanosecond. Assume that we have a signal with a total of N = 10^9 points. The total number of nanoseconds spent evaluating the DFT by the direct method (i.e. not using the FFT) on the given computer = N^2 = 10^18 nanoseconds, which is approximately 31 years. Using the FFT we shall spend N\*log2(N) = (10^9)\*log2(10^9) = (10^9)\*9\*log2(10) = (10^9) \* 9 \* 3.32, which is approximately 30 \* (10^9) nanoseconds, or approximately 30 seconds!! **So using the direct DFT you will spend 31 years, while using the divide and conquer approach as in the FFT you will get the same output in only 30 seconds!! Now it's up to you which method you prefer :-).** The bottom line is that a lot of problems in nature can be solved much more easily if they are small in size. The total effort of solving many small-sized problems is often much smaller than that of solving one big problem, especially when the problem size is very big, as in the example explained above. This is the basic idea on which FFT algorithms are based. **Note:** I said 'approximately' because it turns out that we cannot add them directly; indeed we need a multiplication factor, actually a phase factor, before combining them. But this is still much less than the number of multiplications we need to do in calculating the DFTs of S1 and S2, so we can neglect this additional multiplication for large values of N. **Divide and Conquer: an analogy in the military** The concept has been used historically in the military. For example [Sun Tzu](http://en.wikiquote.org/wiki/Sun_Tzu#The_Art_of_War) (c.
6th century BCE), a Chinese general, military strategist, and author of The Art of War, writes: It is the rule in war, 1- if ten times the enemy's strength, surround them; 2- if five times, attack them; 3- if double, be able to divide them; 4- if equal, engage them; 5- if fewer, be able to evade them; 6- if weaker, be able to avoid them. **Sun Tzu** ![enter image description here](https://i.stack.imgur.com/uxkuW.jpg) FFT works on rule no. 3. Rules 1 and 2 tell us that if your computer is powerful enough, then you will probably not be interested in adopting any 'Divide and Conquer' approach, because the gain in doing so will be very small. Rules 4 to 6 tell us that when your computer is too weak, forget about conquering the problem by dividing the problem size; you will have to look for some other way. One such way could be [Randomization](http://en.wikipedia.org/wiki/Randomized_algorithm).
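If you want to see the divide and conquer step in code, here is a minimal radix-2 Cooley-Tukey sketch in Python (assuming the length N is a power of two, and written for clarity rather than speed); the `twiddle` term is the phase factor mentioned in the Note above:

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # DFT of the two N/2-point sub-signals
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # phase factor
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

print(fft([1, 2, 3, 4]))  # [10, -2+2j, -2, -2-2j]
```

Counting the work in this recursion (two half-size calls plus N/2 twiddle multiplications per level, over log2(N) levels) is exactly what gives the N\*log2(N) figure used in the 31-years-versus-30-seconds comparison above.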
Currently, I have trained my model through 5-fold cross validation with a very small sample (n=100). I used the whole data set to train and got quite low performance in terms of accuracy, a bit higher than 70%. However, if I feed the data that was used for training back into the trained model to validate it, I get higher accuracy (80%). So, my question is: is it okay to say that I have verified my trained model using the training set and got 80% accuracy? Or do I have to stick with the 70% accuracy that I received from 5-fold cross validation?
Typically you would take the best model parameters found by cross validation, re-fit the model on the entire portion of the data set aside for training, and treat that re-fitted model as your new 'best' model to evaluate against a held-out test set. Accuracy measured on the same data the model was trained on (your 80%) is optimistically biased, so the cross-validated figure is the one to report.
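A minimal scikit-learn sketch of that workflow, in case it is useful; the synthetic data stands in for your n=100 sample, and with so little data you may prefer to skip the held-out test set and simply report the cross-validated score:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder data; replace with your own 100-sample data set
X, y = make_classification(n_samples=100, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# 5-fold CV on the training portion to pick hyperparameters;
# best_score_ is the honest (cross-validated) performance estimate to report.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      {"max_depth": [2, 4, None]}, cv=5)
search.fit(X_train, y_train)
print("cross-validated accuracy:", search.best_score_)

# GridSearchCV refits on all training data with the best parameters,
# so we can score that refitted model once on the untouched test set.
print("test accuracy:", search.score(X_test, y_test))
```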
Assume we want to define a context free grammar of say a programming language, where on each line everything after the character # until the end of line is considered a comment and should be ignored. How to express that in a context free grammar?
Without multi-line statements, it's rather simple; assuming `line` is the non-terminal for a well-formed line, change its right-hand-side appearances to ``` line (# .*)? ``` (borrowing regular expression syntax for brevity). Otherwise, if you explicitly handle line breaks, replace occurrences of the line-break token `CRLF` (or so) similarly by ``` (# .*)? CRLF ``` If line breaks remain implicit, one option you have left is to make it part of the lexer; don't only skip whitespace, but also `(# .*)? [\r\n]`. You can also create a token `COMMENT` for that and place it in all the places of the grammar where it may appear. This strategy doesn't work for nested multi-line comments, though, which are a common problem in older languages.
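If you take the lexer route, the comment handling can be a single rule. Here is a small Python sketch with a made-up token set, where `COMMENT` (everything from `#` to the end of the line) is dropped together with whitespace before the parser ever sees it:

```python
import re

TOKEN_SPEC = [
    ("COMMENT", r"\#[^\n]*"),      # '#' up to (not including) the newline
    ("NEWLINE", r"\n"),            # kept, in case the grammar cares about line ends
    ("SKIP",    r"[ \t]+"),
    ("NAME",    r"[A-Za-z_]\w*"),
    ("NUMBER",  r"\d+"),
    ("OP",      r"[^\s\w]"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokens(source):
    for m in TOKEN_RE.finditer(source):
        kind = m.lastgroup
        if kind in ("COMMENT", "SKIP"):   # comments and whitespace vanish here
            continue
        yield kind, m.group()

print(list(tokens("x = 1  # everything here is ignored\ny = 2")))
```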
I have a set of players. They play against each other (pairwise). Pairs of players are chosen randomly. In any game, one player wins and another one loses. The players play with each other a limited number of games (some players play more games, some less). So, I have data (who wins against whom and how many times). Now I assume that every player has a ranking that determines the probability of winning. **I want to check if this assumption is actually truth.** Of course, I can use the [Elo rating system](http://en.wikipedia.org/wiki/Elo_rating_system) or the [PageRank algorithm](http://en.wikipedia.org/wiki/PageRank) to a calculate rating for every player. But by calculating ratings I do not prove that they (ratings) actually exist or that they mean anything. In other words, I want to have a way to prove (or to check) that players do have different strengths. How can I do it? **ADDED** To be more specific, I have 8 players and only 18 games. So, there a lot of pairs of players who did not play against each other and there a lot of pairs that played only once with each other. As a consequence, I cannot estimate the probability of a win for a given pair of players. I also see, for example, that there is a player who won 6 times in 6 games. But maybe it is just a coincidence.
**You need a probability model.** The idea behind a ranking system is that a single number adequately characterizes a player's ability. We might call this number their "strength" (because "rank" already means something specific in statistics). We would predict that player A will beat player B when strength(A) exceeds strength(B). But this statement is too weak because (a) it is not quantitative and (b) it does not account for the possibility of a weaker player occasionally beating a stronger player. We can overcome both problems by *supposing the probability that A beats B depends only on the difference in their strengths.* If this is so, then we can re-express all the strengths if necessary so that *the difference in strengths equals the log odds of a win.* Specifically, this model is $$\mathrm{logit}(\Pr(A \text{ beats } B)) = \lambda\_A - \lambda\_B$$ where, by definition, $\mathrm{logit}(p) = \log(p) - \log(1-p)$ is the log odds and I have written $\lambda\_A$ for player A's strength, etc. This model has as many parameters as players (but there is one less degree of freedom, because it can only identify *relative* strengths, so we would fix one of the parameters at an arbitrary value). It is a kind of [generalized linear model](http://en.wikipedia.org/wiki/Generalized_linear_model) (in the Binomial family, with logit link). The parameters can be estimated by [Maximum Likelihood](http://en.wikipedia.org/wiki/Maximum_likelihood). The same theory provides a means to erect confidence intervals around the parameter estimates and to test hypotheses (such as whether the strongest player, according to the estimates, is significantly stronger than the estimated weakest player). Specifically, the likelihood of a set of games is the product $$\prod\_{\text{all games}}{\frac{\exp(\lambda\_{\text{winner}} - \lambda\_{\text{loser}})}{1 + \exp(\lambda\_{\text{winner}} - \lambda\_{\text{loser}})}}.$$ After fixing the value of one of the $\lambda$, the estimates of the others are the values that maximize this likelihood. Thus, varying any of the estimates reduces the likelihood from its maximum. If it is reduced too much, it is not consistent with the data. In this fashion we can find confidence intervals for all the parameters: they are the limits in which varying the estimates does not overly decrease the log likelihood. General hypotheses can similarly be tested: a hypothesis constrains the strengths (such as by supposing they are all equal), this constraint limits how large the likelihood can get, and if this restricted maximum falls too far short of the actual maximum, the hypothesis is rejected. --- In this particular problem there are 18 games and 7 free parameters. In general that is too many parameters: there is so much flexibility that the parameters can be quite freely varied without changing the maximum likelihood much. Thus, applying the ML machinery is likely to prove the obvious, which is that there likely are not enough data to have confidence in the strength estimates.
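To make the estimation concrete: the model above is the Bradley-Terry model, and its maximum-likelihood fit takes only a few lines with a generic optimizer. Here is a hedged sketch in Python/SciPy; the small `games` list of (winner, loser) index pairs is a placeholder for your own 8-player, 18-game record, and player 0's strength is fixed at 0 to absorb the free degree of freedom mentioned above.

```python
import numpy as np
from scipy.optimize import minimize

n_players = 4
games = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 1)]   # placeholder (winner, loser) pairs

def neg_log_likelihood(free_strengths):
    lam = np.concatenate(([0.0], free_strengths))   # fix lambda_0 = 0
    diffs = np.array([lam[w] - lam[l] for w, l in games])
    # -log of prod exp(diff)/(1+exp(diff)) = sum log(1 + exp(-diff))
    return np.sum(np.log1p(np.exp(-diffs)))

fit = minimize(neg_log_likelihood, x0=np.zeros(n_players - 1))
strengths = np.concatenate(([0.0], fit.x))

# Likelihood-ratio statistic for the hypothesis "all strengths are equal"
lr = 2 * (neg_log_likelihood(np.zeros(n_players - 1)) - fit.fun)
print(strengths.round(2), "LR statistic:", round(lr, 2))
```

As the answer warns, with only 18 games the likelihood surface is flat, so expect the likelihood-ratio statistic to be unimpressive and the intervals around the strengths to be very wide.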
It is known that the temporal logics LTL,CTL,CTL\* can be translated/embedded into the $\mu$-calculus. In other words, the (modal) $\mu$-calculus subsumes these logics, (i.e. it is more expressive.) Could you please explain/point me to papers/books that elaborate on this matter. In particular, are there concrete fairness, liveness, etc. properties not expressible in the temporal logics but in the $\mu$-calculus?
The $\mu$-calculus is strictly more expressive than LTL, CTL and CTL\*. This is a consequence of a few different results. The first step is to show that the $\mu$-calculus is as expressive as temporal logics. The main idea for encoding these logics comes from recognizing temporal properties as fixed points. At a very informal level, least fixed points allow you to express properties of a finitary nature and greatest fixed points apply to infinitary properties. For example, *eventually $\varphi$* in LTL defines that there is an instant in the finite future at which $\varphi$ is true, while *always $\varphi$* states that $\varphi$ is true at an infinite number of future time-steps. In terms of fixed points the eventually property would be expressed using a least fixed point and the always property using a greatest fixed point. Following such an intuition temporal operators can be encoded as fixed point operators. The next step is to show that the $\mu$-calculus is more expressive. The main idea is alternation depth. Fixed points alternate if a least fixed point influences the greatest fixed point, and vice-versa. The alternation depth of a $\mu$-calculus formula counts the number of alternations that occur in it. The operators in CTL can be encoded by $\mu$-calculus formulae with alternation depth $1$. The operators in CTL\* and LTL can be encoded by $\mu$-calculus formulae with alternation depth at most $2$. However, the alternation hierarchy of the $\mu$-calculus is strict, which means that increasing alternation depth in a formula allows you to express strictly more properties. This is why people say the $\mu$-calculus is more expressive than these temporal logics. Some references: 1. The initial arguments that the $\mu$-calculus subsumes several logics appears in [Modalities for Model Checking:Branching Time Logic Strikes Back](ftp://ftp.cs.utexas.edu/pub/techreports/tr85-21.pdf), Emerson and Lei, 1985. 2. The translation of CTL into the $\mu$-calculus is straightforward. You can find it in the book on [*Model Checking*](http://rads.stackoverflow.com/amzn/click/0262032708) by Clarke, Grumberg and Peled. You can also find it in [*Model Checking and the $mu$-calculus*](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.916) by Emerson or in Ken McMillan's [dissertation](http://www.kenmcmil.com/thesis.html). 3. The translation of CTL\* into the $\mu$-calculus is involved. Rather than the original, indirect translation, I suggest the paper of Mads Dam on [Translating CTL\* into the modal mu-calculus](http://www.lfcs.inf.ed.ac.uk/reports/90/ECS-LFCS-90-123/). 4. There is a simpler translation of LTL into what is called the linear-time $\mu$-calculus, in which the modalities operate over traces and not states. See [Axiomatising Linear Time Mu-calculus](http://www.lfcs.inf.ed.ac.uk/reports/95/ECS-LFCS-95-324/) by Roope Kaivola. 5. The alternation hierarchy is studied in [The modal mu-calculus alternation hierarchy is strict](http://homepages.inf.ed.ac.uk/jcb/Research/althi-preprint.ps.gz) by Julian Bradfield and in [A hierarchy theorem for the $\mu$-calculus](http://link.springer.com/chapter/10.1007/3-540-61440-0_119) by Giacomo Lenzi. All this is about expressivity not about utility. In practice, people don't usually specify properties as $\mu$-calculus expressions because they might find temporal logics easier to work with. The industrial specification languages differ from both temporal logics and the $\mu$-calculus in their syntax and their expressive power.
I love doing TCS in my spare time. Lately I have been trying to do some research as a hobby. I'm looking for some extra input from people who do this full-time: * Do you think it is possible to do this "just for fun"? I have no intention to ever get a PhD. * What resources would you recommend?
I could be off-base, but in my view some of the better topics to focus on if you only wish to pursue problems as an amateur are in discrete mathematics: combinatorics, graph theory, and even combinatorial geometry. This is because the problems in these realms are quite accessible and easy to state and ponder without too much background. That doesn't mean you can solve them without background: that will take a lot more time. But it's a good place to start. Also, what might limit you is access to literature: papers, books etc if you don't have access to a university library - in that case, working on problems that are more "current" means that you'll be more likely to find papers off researcher websites. It's possible that the days of Fermat-style amateur mathematicians are over, but I really doubt it. I've known people who started doing research as a side hobby and enjoyed it so much they are now full-time researchers. And even if you don't, you'll at least enjoy yourself. As Alessandro points out in comments, this website is a great resource for you to use as well.
Would someone please clarify for me the difference between direct and random access? Specifically, why does this Wikipedia article on [Direct Access Storage Devices](http://en.wikipedia.org/wiki/Direct_access_storage_device) distinguish between the two: > > The direct access capability, occasionally and incorrectly called > random access (although that term survives when referring to memory or RAM), > > > whereas this article on [random access](http://en.wikipedia.org/wiki/Random_access) doesn't: > > In computer science, random access (sometimes called direct access) is > the ability to access an element at an arbitrary position in a > sequence in equal time, independent of sequence size. > > >
The Wikipedia article about direct access storage devices contrasts two access methods to information stored on physical devices: sequential access and direct access. Sequential access is exemplified by tape drives used for backups. Direct access, which is synonymous with random access, is much more common. The difference between sequential access and direct or random access is quantitative rather than qualitative. Magnetic hard drives are considered random access although data access involves moving the read-and-write heads, which could be quite slow compared to other types of memory. In contrast, modern RAM often provides truly direct access, in the sense that accessing two locations in memory has the same cost whatever the difference in address (disregarding difference between the logical model in which RAM is byte- or word-addressable, and the physical model in which data is transferred in larger cache lines).
In research articles (e.g. [this one's abstract](http://ttic.uchicago.edu/~cjulia/papers/crossing-STOC.pdf)), I often see the complexity of an algorithm expressed using a somewhat ambiguous notation: * $\log^2(n) = ?$ * $\log(n)^2 = ?$ * $(\log n)^2 = (\log(n)) \times (\log(n))$ * $\log(n^2) = \log(n \times n)$ * $\log \log n = \log(\log(n))$ The last three cases are easy enough to understand, but what about the first two? The operator precedence is unclear. Do they mean $\log(n \times n)$, or do they mean $(\log(n)) \times (\log(n))$, or do they mean $\log(\log(n))$? Citing a reliable source would be a plus, so that I can confidently add this information to [Wikipedia](https://en.wikipedia.org/wiki/Time_complexity#Table_of_common_time_complexities "Time complexity — Table of common time complexities"). Note I'm not asking about the meaning of big-O or about $O(\log n)$ complexity. I'm interested in what the $^2$ means (square the $n$, square the result of $\log(n)$, or $\log^2$ = $\log \cdot \log$).
Regarding the operator precedence, as specified in the other answers: ``` log²(n) : possibly ambiguous most commonly 'log(n) * log(n)' but possible that some writers intend 'log(log(n))' log(n)² : ambiguous, log(n) * log(n) or 2log(n) ``` --- Now, you asked about their meaning in the context of asymptotic behaviour and, specifically, Big-O notation. Below follows a note regarding seeing research articles state that the time complexity of an algorithm is `log(n²)`, which is, in the context of Big-O notation, somewhat of a misuse of the notation. First note that ``` log(n²) = 2log(n) ``` If some arbitrary function `f(n)` is in `O(log(n²)) = O(2log(n))`, then we can show that `f(n)` is in `O(log(n))`. From the definition of Big-O, we can state the following ``` f(n) is in O(g(n)) => |f(n)| ≤ k*|g(n)|, for some constant k>0 (+) for n sufficiently large (say, n>N) ``` See e.g. [this reference covering Big-O notation](https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation). Now, if `f(n)` is in `O(2log(n))`, then there exists some set of positive constants `k` and `N` such that the following holds ``` |f(n)| ≤ k*|2log(n)|, for n>N <=> |f(n)| ≤ 2k*|log(n)|, for n>N (++) ``` Hence, we can just choose a constant `k2=2k` and it follows from `(++)` that `f(n)` is in `O(log(n))`. **Key point:** The purpose of this explanation is that you should *never* see an article describe the asymptotic behaviour of an algorithm as `O(log(n²))`, as this is just a redundant way of saying that the algorithm is in `O(log(n))`. --- For explanations regarding what the different `O(log(n))` variations mean with regard to an algorithm's behaviour, see e.g. the links provided by other answers, as well as the numerous threads on SO covering these subjects (e.g. [time complexity of binary search](https://stackoverflow.com/questions/8185079/how-to-calculate-binary-search-complexity), and so on).
Counting the order of automorphism group of a graph is polynomial-time equivalent to graph isomorphism problem. But if we just want to know if the order is greater than 1, what is the complexity of this problem? Graphs which have no automorphism except for the trivial one, that is identity permutation, are called asymmetric graphs. So the problem is to check if the graph is asymmetric.
An old approach (but it is still used in some applications) is the [N-version programming](http://en.wikipedia.org/wiki/N-version_programming) From Wikipedia: **N-version programming** (**NVP**), also known as *multiversion programming*, is a method or process in software engineering where multiple functionally equivalent programs are independently generated from the same initial specifications. The concept of N-version programming was introduced in 1977 by Liming Chen and Algirdas Avizienis with the central conjecture that the "independence of programming efforts will greatly reduce the probability of identical software faults occurring in two or more versions of the program".The aim of NVP is to improve the reliability of software operation by building in fault tolerance or redundancy. .... See for example: "[Challenges in Building Fault - Tolerant Flight Control System for a Civil Aircraft](http://www.iaeng.org/IJCS/issues_v35/issue_4/IJCS_35_4_07.pdf)"
Given a collection of $n$ numbers, $S$, the question is to decide whether all the elements of $S$ are distinct from each other. If they are distinct from each other (no two of them are the same), print "Yes". Otherwise print "No". I know the worst case time complexity of this question is $\Theta \left ( n\log n \right )$. Of course it is based on comparison among elements. But I can't figure out how to prove this. Perhaps using the decision tree?
The lower bound $\Omega(n\log n)$ is a classical result of Ben-Or, [Lower bounds for algebraic computation trees](http://pdf.aminer.org/000/212/028/lower_bounds_for_algebraic_computation_trees_with_integer_inputs.pdf). The problem itself is known as *element distinctness*, and other lower bounds are proved by reduction to this fundamental lower bound. Ben-Or's lower bound is proved in the *algebraic decision tree* model, a generalization of the comparison model in which each decision can be made on the basis of evaluating the sign of an arbitrary polynomial in the inputs (rather than just the polynomials $a\_i - a\_j$). In order to get a meaningful lower bound, we only allow polynomials of constant degree (the lower bound works for any fixed constant): otherwise we can detect whether all elements are unique by evaluating the polynomial $$ \prod\_{i<j} (a\_i - a\_j). $$ (The actual model considered by Ben-Or, *algebraic computation trees*, is even more general: he considers "programs" with statements of the form $x = y \pm z$, $x = yz$, $x = y/z$, $x = \sqrt{y}$ and $x = \alpha y$ for real $\alpha$, as well as conditional statements which depend on conditions of the form $x = 0$, $x > 0$ or $x \geq 0$. Each operation and conditional statement takes unit time, and the lower bound is on the worst-case running time.) Here is a sketch of the proof in the case where decisions are made based on the sign of *linear* polynomials, following [Jeff Erickson's notes](http://www.cs.uiuc.edu/~jeffe/teaching/497/06-algebraic-tree.pdf). For each node $v$ in the algebraic decision tree, let $R(v)$ be the set of $(a\_1,\ldots,a\_n)$ reaching that node. It is not hard to check that $R(v)$ is connected. For each permutation $\pi \in S\_n$, the set $R\_\pi$ of tuples $(a\_1,\ldots,a\_n)$ ordered according to $\pi$ is connected, but any two $R\_\alpha,R\_\beta$ for $\alpha \neq \beta$ are separated by some region in which the elements are not distinct. The lower bound $\Omega(n\log n)$ immediately follows. If the allowable conditions have higher degree, then it is no longer true that $R(v)$ is connected; but a deep theorem of Thom and Milner shows that if the decision tree is shallow then each $R(v)$ is composed of "not too many" connected components, and we get the same lower bound. (The same proof idea works for the algebraic computation trees considered by Ben-Or.) The latest word on the subject (as of 1998) is work of [Grigoriev, Karpinski, Meyer auf der Heide and Smolensky](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.46.3204&rep=rep1&type=pdf), extending the lower bound to randomized algebraic decision trees, and work of [Grigoriev](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.36.4541&rep=rep1&type=pdf) extending it to randomized algebraic computation trees. Their proof avoids the use of the deep result of Thom and Milnor mentioned above, and so gives an alternative "elementary" proof of Ben-Or's result.
I came across an odd problem when writing an interpreter that (should) hooks to external programs/functions: Functions in 'C' and 'C++' can't hook [variadic functions](https://en.wikipedia.org/wiki/Variadic_function), e.g. I can't make a function that calls 'printf' with the exact same arguments that it got, and instead has to call an alternate version that take a variadic object. This is very problematic since I want to be able to make an object that hold an anonymous hook. So, I thought that this was weird since [Forth](http://en.wikipedia.org/wiki/Forth_%28programming_language%29), [JavaScript](http://en.wikipedia.org/wiki/JavaScript), and perhaps a plethora of other languages can do this very easily without having to resort to assembly language/machine code. Since other languages can do this so easily, does that mean that the class of problems that each programming language can solve actually varies by language, even though these languages are all [Turing complete](https://en.wikipedia.org/wiki/Turing_completeness)?
Turing complete languages can compute the same set of functions $\mathbb{N}^k \rightarrow \mathbb{N}$, which is the set of general recursive partial functions. That's it. This says nothing about the language features. A Turing Machine has very limited compositional features. The untyped $\lambda$-calculus is far more compositional, but lacks many features commonly found in modern languages. Turing completeness tells nothing about having types, built in arrays/integers/dictionaries, input/output capabilities, network access, multithreading, dynamic allocation, ... Just because Java does not have feature X (say, macros, higher-rank types, or dependent types), it does not suddenly stop being Turing complete. Turing completeness and language expressiveness are two different notions.
You open a code editor, define a syntax with lambdas, a few primitives. Then you invent some nice computation rules, some cool typing rules, and write a corresponding interpreter and "type checker". Congratulations! You just built a [proof assistant](https://en.wikipedia.org/wiki/Proof_assistant)! Now you can formalize all of mathematics on it! Or can you? After all, what makes the thing you built a valid "proof assistant"? What constitutes a "type checker"? As far as I'm concerned, you could have actually implemented Tetris, claimed that blocks are terms, their shapes are types, and the game loop is the computation rule. Who has authority to argue you're wrong? When it comes to implementing a "programming language", the definition is much more clear. Just invent a syntax, a set of reduction rules and prove your thing Turing-complete. Then it is as good as a programming language is expected to be. When it comes to implementing a proof assistant, I have no idea. Sure, there are common things I see present in proof assistants, so, intuitively, I know what they are. They have lambdas and the corresponding dependent function space, they have inductive datatypes and their introductions and eliminations, and they must be consistent, which mean at least some type is uninhabited. But what if my "proof assistant" has no lambdas nor datatypes? It could have for example just combinators. I don't have a precise definition of what makes a "type theory" a "type theory". **What, formally, makes a program a valid proof assistant, in the same sense that Agda/Coq are? What is a precise and complete definition of a type theory? And what one must do to "justify" his syntax, reduction and typing rules?** I'm making this specifically because, while I feel like I can, given enough time, implement something like [A Cosmology of Datatypes](https://pages.lip6.fr/Pierre-Evariste.Dagand/stuffs/thesis-2011-phd/thesis.pdf), I do not feel the freedom to make any change, even minimal, to whatever the author did, because I do not know what justifies those things he did.
I would expect a proof assistant to provide: 1. a syntax to express some mathematical *statements* 2. a syntax to express a *proof* of one of those statements 3. a computational process to "check" that a proof of a statement is indeed valid (returns success or failure, or maybe does not return at all) Finally, for such a proof assistant to be trustworthy, it should come with a justification of the fact that when its checking process says that a proof of a statement is valid, then indeed this statement is valid in the usual mathematical sense -- there exists a mathematical proof that mathematicians will accept.
Each prior restricts the sample space, so in theory, your probability becomes more and more accurate. However, if our sample space becomes too small, the tradeoff between the accurate sample and the confidence we gain from having more data points becomes apparent. Example: Assume I want to know the odds that I will contract COVID. I ask myself what is $P(\text{getting covid} | \text{20 years old})$. Let's say I get $10\%$. Then I stack another prior to get even more accurate data. I ask myself what is $P(\text{getting covid} | \text{20 years old} \land \text{is not obese})$. Great. Even more accurate probability. However, what if I ask myself about $P(\text{getting COVID} | \text{I have a tattoo of my name on my arm})$? Let's say there are $2$ people in the world, me and my friend, who have a tattoo of my name on his/her arm. Let's say my friend got COVID. This gives me $1$ in $2$ odds of getting COVID, which doesn't make sense. A couple of ideas: maybe Bayes Theorem should only be used as a tool, maybe the priors have to be relevant to the outcome, maybe there's a threshold below which our sample is not significant. Not sure.
**You are not asking about having too many priors, but about conditioning.** The prior in Bayes theorem is this part $$ p(A|B) = \frac{p(B|A) ~\overbrace{p(A)}^\text{prior}}{p(B)} $$ while you seem to be asking about conditioning $p(B|\cdot)$ on many different variables. First, notice that conditional probabilities $p(B|A)$, $p(B|C)$, $p(B|C,A)$, $p(B|D,A)$, etc. tell you about *different* scenarios. The probability that a CrossValidated.com user named Neel Sandell gets COVID is a different thing than the probability that any random person gets it, or that an MD working on a COVID ward gets it. Each time you condition, you restrict the space, that is correct. When you ask about the probability given that someone has blond hair, the answer would be relevant only to the blond-haired people, not people in general. So you asked about a specific scenario; your question was restricted, not the answer. This has nothing to do with the priors, it is just the bare fact that if you ask different questions, you get different answers, and if you ask specific questions, you get specific answers that may not be relevant to the general problem.
What's your favorite examples where information theory is used to prove a neat combinatorial statement in a simple way ? Some examples I can think of are related to lower bounds for locally decodable codes, e.g., in [this](http://www.wisdom.weizmann.ac.il/~oded/p_2q-ldc.html) paper: suppose that for a bunch of binary strings $x\_1,...,x\_m$ of length $n$ it holds that for every $i$, for $k\_i$ different pairs {$j\_1,j\_2$}, $$e\_i = x\_{j\_1} \oplus x\_{j\_2}.$$ Then m is at least exponential in n, where the exponent depends linearly on the average ratio of $k\_i/m$. Another (related) example is some isoperimetric inequalities on the Boolean cube (feel free to elaborate on this in your answers). Do you have more nice examples? Preferably, short and easy to explain.
There is basically an entire course devoted to this question: <https://catalyst.uw.edu/workspace/anuprao/15415/86751> The course is still ongoing. So not all notes are available as of writing this. Also, some examples from the course were already mentioned.
Consider the task of sorting a list x of size n by repeatedly querying an oracle. The oracle draws, without replacement, a random pair of indices (i, j), with i != j, and returns (i, j) if x[i] < x[j] and (j, i) if x[i] > x[j]. How many times, on average, do we need to query the oracle to sort the list? How is that average affected if the draw is made with replacement? Looking for a formula but an asymptotic estimate or an order would also be great.
This answer gives exact formulas for the expected number of steps, with and without replacement. To be clear, we interpret OP's problem as detailed in [OP's Python gist](https://gist.github.com/murbard/5c711d8efe114c1ecb2152579937064c): each step of the process makes one random comparison, and the process stops as soon as the comparisons made are sufficient to determine the total order. **Lemma 1.** *(i) **With replacement** the expected number of comparisons is $${n \choose 2} H\_{n-1},$$ where $H\_k = \sum\_{i=1}^k \frac 1 i \approx 0.5+\ln k$ is the $k$th harmonic number.* *(ii) **Without replacement**, the expected number of comparisons is* $$n^2/2 - n + 3/2 - 1/n.$$ *Proof.* Assume WLOG that the set of numbers in $x$ is $[n]$. Let random variable $T$ be the total number of steps. As observed in @user3257842's answer, a necessary and sufficient condition for the full order to be known is that, for all $i\in [n-1]$, the positions holding elements $i$ and $i+1$ have been compared. Call such positions "neighbors". *Proof of Part (i).* For $m \in [n-1]$, let random variable $T\_m$ be the number of steps such that, at the start of the step, the number of neighbors that have not yet been compared is $m$. Then the total number of steps is $T=T\_{n-1} + T\_{n-2} + \cdots + T\_1$. By linearity of expectation, $E[T] = \sum\_{m=1}^{n-1} E[T\_m]$. To finish, we calculate $E[T\_m]$ for any $m\in [n-1]$. Consider any $m\in [n-1]$. Consider any iteration that starts with $m$ neighbors not yet compared. Given this, $m$ of the $n\choose 2$ possible comparisons would compare one not-yet compared pair of neighbors, while the remaining comparisons would not (so would leave the number of uncompared neighbors unchanged). So the number of not-yet-compared neighbors either stays the same, or, with probability $m/{n\choose 2}$, reduces by 1. It follows that $E[T\_m]$, the expected number of steps that start with $m$ uncompared neighbors, is ${n\choose 2}/m$. So $E[T] = \sum\_{m=1}^{n-1} {n\choose 2}/m = {n\choose 2} H\_{n-1}$, proving Part (i). *Proof of Part (ii).* View the process as follows: a random permutation of the $n\choose 2$ pairs is chosen, and then the algorithm checks each pair (in the chosen order) just until it has compared all neighboring pairs. That is, we want the expected position of the last neighboring pair in the random ordering of pairs. There are $n-1$ neighboring pairs and ${n\choose 2} - (n-1) = {n-1\choose 2}$ non-neighboring pairs. By a standard calculation, the expected position of the last neighboring pair is ${n \choose 2} - {n-1\choose 2}/n = n^2/2 - n + 3/2 - 1/n$. $~~~\Box$ Here is some intuition about the last "standard calculation". Suppose you choose a $k$-subset $S$ uniformly at random from $[m+k]$. What's the expectation of $\max S$? The $k$ chosen elements divide the $m$ unchosen elements into $k+1$ intervals of total size $m$. By a symmetry argument the expected number of elements in each of these intervals is the same, so each interval has size $m/(k+1)$ in expectation. In particular, the expected size of the last interval is $m/(k+1)$, so the expectation of the largest element is $m+k - m/(k+1)$. In our case $m+k = {n \choose 2}$ and $k=n-1$.
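Both formulas are easy to spot-check by simulation. The sketch below follows the interpretation above (WLOG the list is already sorted, so the neighboring pairs are exactly $(i, i+1)$, and the process stops once every such pair has been compared); the trial count is arbitrary.

```python
import random
from itertools import combinations
from math import comb

def steps_without_replacement(n, rng):
    pairs = list(combinations(range(n), 2))
    rng.shuffle(pairs)
    needed = {(i, i + 1) for i in range(n - 1)}   # neighboring pairs still uncompared
    for t, p in enumerate(pairs, 1):
        needed.discard(p)
        if not needed:
            return t

def steps_with_replacement(n, rng):
    pairs = list(combinations(range(n), 2))
    needed = {(i, i + 1) for i in range(n - 1)}
    t = 0
    while needed:
        t += 1
        needed.discard(rng.choice(pairs))
    return t

rng = random.Random(0)
n, trials = 6, 20000
print(sum(steps_without_replacement(n, rng) for _ in range(trials)) / trials,
      n**2 / 2 - n + 1.5 - 1 / n)                       # Lemma 1 (ii)
H = sum(1 / i for i in range(1, n))                     # H_{n-1}
print(sum(steps_with_replacement(n, rng) for _ in range(trials)) / trials,
      comb(n, 2) * H)                                   # Lemma 1 (i)
```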
Multi-arm bandits work well in situations where you have choices and you are not sure which one will maximize your well-being. You can use the algorithm for some real-life situations. As an example, learning can be a good field: > > If a kid is learning carpentry and he is bad at it, the algorithm will tell him/her that he/she should probably move on. If he/she is good at it, the algorithm will tell him/her to continue to learn that field. > > > Dating is also a good field: > > You're a man putting a lot of 'effort' into pursuing a lady. However, your efforts are definitely unwelcome. The algorithm should "slightly" (or strongly) nudge you to move on. > > > What other real-life situations can we use the multi-arm bandit algorithm for? PS: If the question is too broad, please leave a comment. If there is a consensus, I'll remove my question.
They can be used in a biomedical treatment / research design setting. For example, I believe [q-learning](https://en.wikipedia.org/wiki/Q-learning) algorithms are used in Sequential, Multiple Assignment, Randomized Trial ([SMART trials](https://methodology.psu.edu/ra/adap-inter)). Loosely, the idea is that the treatment regime adapts optimally to the progress the patient is making. It is clear how this might be best for an individual patient, but it can also be more efficient in randomized clinical trials.
Let's say we have two independent 1-dimensional random variables $X$ and $Y$ and we want to estimate $\mathbb{P}(Y>X)$. Say we take a sample of size $N$ of $X$ and a sample of $Y$ of size $M$. Is the following approach reasonable? 1. I draw 100 bootstrap samples of $X$. I take the mean of each bootstrap sample to build my belief of $\mathbb{E}[X]$ which I denote $p\left(\mathbb{E}[X]\right)$. 2. I draw 100 bootstrap samples of $Y$. I take the mean of each bootstrap sample to build my belief of $\mathbb{E}[Y]$ which I denote $p\left(\mathbb{E}[Y]\right)$. I estimate $\mathbb{P}(Y>X)$ as $\int p\left(\mathbb{E}[Y]\right) - p\left(\mathbb{E}[X]\right) \;\mathbb{I}\left[{p(\mathbb{E}[Y]) > p(\mathbb{E}[X])}\right]$
No, it doesn't make sense. The expectation of $X$ and $Y$ has very little to do with the $P(Y>X)$. Also, the notation $p(E[Y])$ doesn't make sense because $E[Y]$ is just a number, i.e. there is no probability distribution associated with it. Here is what you want $$ \begin{array}{rl} P(Y>X) = E[I(Y>X)] \approx \frac{1}{M}\sum\_{m=1}^M I\left(Y^{(m)}>X^{(m)}\right). \end{array} $$ where $Y^{(m)},X^{(m)}$ are samples from the joint distribution of $Y$ and $X$ and $I()$ is the *indicator function* that is 1 when the statement within the parentheses is true and 0 otherwise. If $Y$ and $X$ are independent, you can sample $Y$ and $X$ independently from their marginal distributions.
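Here is a minimal NumPy sketch of that estimator; the normal marginals for $X$ and $Y$ are made-up stand-ins so the Monte Carlo estimate can be compared with the exact answer.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
M = 1_000_000

# Stand-in marginals: X ~ N(0, 1), Y ~ N(0.5, 2^2), independent
x = rng.normal(0.0, 1.0, size=M)
y = rng.normal(0.5, 2.0, size=M)

estimate = np.mean(y > x)                                # (1/M) * sum of indicators
exact = 1 - norm.cdf(0, loc=0.5, scale=np.sqrt(1 + 4))    # P(Y - X > 0)
print(estimate, exact)
```

With real data and no known distributions, one simple option is to replace the `rng.normal` draws with independent resamples of your observed samples of $X$ and $Y$, which estimates the same indicator expectation from the empirical distributions.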
Almost all ML notebooks out there have a section where they select the best features to use in the model. Why is this step always there ? How bad can it be to keep a variable that is not correlated with the response variable ? If you are really unlucky then yes a feature that is positively correlated with your response in your training set could in fact be negatively correlated with it in the real world. But then, it's not even sure that one will be able to catch it with a feature selection routine. My assumption is that it used to be a necessary step when computing resources were scarce, but with today's resources it is basically irrelevant. What is your view ? Can you give a real world example where it would harm the model to keep all training features ?
You are right. If someone is using regularization correctly and doing hyperparameter tuning to avoid overfitting, then it should not be a problem theoretically (i.e. multi-collinearity will not reduce model performance). However, it may matter in a number of practical circumstances. Here are two examples: 1. You want to limit the amount of data you need to store in a database for a model that you run frequently, and it can be expensive, both storage-wise and computation-wise, to keep variables that don't contribute to model performance. So although computing resources are not 'scarce', they are still monetarily expensive, and carrying extra variables when there is a way to limit them is also a time sink. 2. For interpretation's sake, it is easier to understand the model if you limit the number of variables, especially if you need to show stakeholders (if you work as a data scientist) and explain model performance.
I am reading the book Computer Organization and Design. It compares instructions on memory and instructions on registers but doesn't say anything about the speed of instructions when the source operand(s) is/are constants. In that case, which will be faster: register operands or immediate (constant) operands?
It depends. =========== Like AProgrammer said, it depends on the processor. We are in an age where there are many [limiting factors](https://electronics.stackexchange.com/questions/122050/what-limits-cpu-speed) based on physics in CPU construction. This means that distance traveled for an instruction and heat generated by a gate cause latency. In theory, this means that for a pipeline where the bottleneck is the decoding stage, this matters. With immediate operands, you would not need to travel to the register to grab the values, which is additional clock cycles and distance traveled. This would decrease latency, and thus increase speed. However, in real world applications, this is very likely NOT the bottleneck, and so there will be little to no increase ([if mandatory register access stages exist](https://en.wikipedia.org/wiki/Classic_RISC_pipeline)) in speed.
[![enter image description here](https://i.stack.imgur.com/ROkCE.png)](https://i.stack.imgur.com/ROkCE.png) I am learning about Dijkstra's Algorithm and the A\* Algorithm and I have manually worked through the graph attached here, which is supposed (I think) to demonstrate that the A\* Algorithm is faster than the Dijkstra one. The sequence in which I chose the nodes is indicated by the circled numbers. I can see that the process is fairly simple but there are plenty of possibilities for human error, so please bear with me. My course book says that the A\* will be quicker. I can't see why this is true. Have I worked through the graph incorrectly? It seems to me that both methods require that all nodes must be visited so where is the increase in efficiency? I have read what I can on Stack Overflow about this so please don't point me at earlier posts unless there is a very good reason.
I don't quite understand the attached image and what you are trying to do in it. What I can say is that Dijkstra's and the A\* algorithm are pathfinding methods: **Dijkstra's algorithm** is an exploration method that lets you find the shortest path to any vertex of the graph. The idea is to always expand the cheapest (closest) node of the queue. You may run this algorithm to find the shortest path from a specific node $s$ to a specific node $t$. But you can also use it to determine global properties of the distances from $s$, or, for instance, to build a tree that lets you backtrack the path from $s$ to any vertex. **The A\* algorithm** is basically Dijkstra's method with an additional heuristic used to sort the nodes of the queue. On distance problems, this heuristic is generally based on the Euclidean distance from the node to the goal. It favors the exploration of the nodes that are more likely to go in the right direction. So this specialization of Dijkstra's algorithm is only used to go from a specific node $s$ to a specific node $t$. It is generally much faster, but it only guarantees finding the optimal path when the heuristic is admissible (it never overestimates the remaining distance). I suggest you play with this [**pathfinding simulation tool**](https://qiao.github.io/PathFinding.js/visual/)
I have a data set that consists of 717 observations (rows) which are described by 33 variables (columns). The data are standardized by z-scoring all the variables. No two variables are linearly dependent ($r=1$). I've also removed all the variables with very low variance (less than $0.1$). The figure below shows the corresponding correlation matrix (in absolute values). When I'm trying to run factor analysis using `factoran` in Matlab as follows: ``` [Loadings1,specVar1,T,stats] = factoran(Z2,1); ``` I receive the following error: ``` The data X must have a covariance matrix that is positive definite. ``` Could you please tell me where is the problem? Is it due to low mutual dependency among the used variables? In addition, what can I do about it? --- My correlation matrix: ![enter image description here](https://i.stack.imgur.com/gZ2sI.png)
Let's define the correlation matrix by $C$. Since it is positive semi-definite, but not positive definite, its spectral decomposition looks something like $$C = Q D Q^{-1},$$ where the columns of $Q$ consist of orthonormal eigenvectors of $C$ and $$D = \begin{pmatrix}\lambda\_1 & 0 & \cdots & \cdots &\cdots & \cdots& 0\\ 0 & \lambda\_2 & \ddots & && &\vdots \\ \vdots & \ddots &\ddots & \ddots && &\vdots \\ \vdots & &\ddots &\lambda\_n &\ddots &&\vdots \\ \vdots & & & \ddots &0 & \ddots& \vdots \\ \vdots & & & &\ddots & \ddots& 0\\ 0 & \cdots &\cdots & \cdots &\cdots & 0& 0\end{pmatrix}$$ is a diagonal matrix containing the eigenvalues corresponding to the eigenvectors in $Q$. Some of those are $0$. Moreover, $n$ is the rank of $C$. A simple way to restore positive definiteness is setting the $0$-eigenvalues to some value that is numerically non-zero, e.g. $$\lambda\_{n+1}, \lambda\_{n+2},... = 10^{-15}.$$ Hence, set $$\tilde{C} = Q \tilde{D} Q^{-1},$$ where $$\tilde{D} = \begin{pmatrix}\lambda\_1 & 0 & \cdots & \cdots &\cdots & \cdots& 0\\ 0 & \lambda\_2 & \ddots & && &\vdots \\ \vdots & \ddots &\ddots & \ddots && &\vdots \\ \vdots & &\ddots &\lambda\_n &\ddots &&\vdots \\ \vdots & & & \ddots &10^{-15} & \ddots& \vdots \\ \vdots & & & &\ddots & \ddots& 0\\ 0 & \cdots &\cdots & \cdots &\cdots & 0& 10^{-15}\end{pmatrix}$$ Then, perform the factor analysis for $\tilde{C}$. In Matlab, one can obtain $Q,D$ using the command: ``` [Q,D] = eig(C) ``` Constructing $\tilde{C}$ is then just simple matrix manipulations. Remark: It would be hard to tell how this influences the factor analysis though; hence, one should probably be careful with this method. Moreover, even though $C$ is a correlation matrix, $\tilde{C}$ may well not be. Hence, another normalisation of the entries might be necessary.
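For what it's worth, here is the same eigenvalue-clipping idea as a short NumPy sketch (the function name and the final re-normalization step are my own additions, the latter addressing the remark above):

```python
import numpy as np

def clip_to_positive_definite(C, eps=1e-15):
    """Raise zero (or negative) eigenvalues of a symmetric matrix to eps."""
    eigvals, Q = np.linalg.eigh(C)          # eigh: C is symmetric
    C_tilde = Q @ np.diag(np.clip(eigvals, eps, None)) @ Q.T
    # Re-normalize to unit diagonal so the result is again a correlation matrix.
    d = np.sqrt(np.diag(C_tilde))
    return C_tilde / np.outer(d, d)
```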
An asymptotic polynomial-time approximation scheme (Asymptotic PTAS) is defined as follows: > > A minimization problem $\Pi$ has an Asymptotic PTAS if for all $\epsilon$ there exist an algorithm $A\_\epsilon$ and $N\_\epsilon > 0$ such that $A\_\epsilon(I) \leq (1+\epsilon)OPT(I)$ whenever $OPT(I) \geq N\_\epsilon$. > > > I have two questions: 1. Is Asymptotic PTAS contained in APX? 2. If a problem $\Pi$ is APX-Hard, does it imply that $\Pi$ has no Asymptotic PTAS (assuming $P \neq NP$)? I know both are true if we replace Asymptotic PTAS with PTAS (these are standard results by now). But I can't figure out how the definitions of Asymptotic PTAS and APX answer the questions above.
It's not unitary, so it's impossible because all quantum transformations have to be unitary. Consider the states $$ \frac{3}{5} |0\rangle + \frac{4}{5} |1\rangle \quad \mathrm{and} \quad \frac{3}{5} |0\rangle - \frac{4}{5} |1\rangle. $$ These get taken to $$ \frac{1}{\sqrt{2}}\left(e^{3/5 i} |0\rangle + e^{4/5 i} |1\rangle\right) \quad \mbox{and}\quad \frac{1}{\sqrt{2}}\left( e^{3/5 i} |0\rangle + e^{-4/5 i} |1\rangle \right). $$ The inner product between the first pair of states is $-\frac{7}{25} = -0.28$, while the inner product between the second pair of states is $\frac{1}{2}(1+e^{8/5 i} )\approx 0.4854 +0.4998 i.$ Since unitary operations preserve inner products, this is not unitary.
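A two-line numerical check of those inner products (NumPy; `np.vdot` conjugates its first argument):

```python
import numpy as np

a, b = np.array([3/5, 4/5]), np.array([3/5, -4/5])
fa = np.array([np.exp(3j/5), np.exp(4j/5)]) / np.sqrt(2)
fb = np.array([np.exp(3j/5), np.exp(-4j/5)]) / np.sqrt(2)

print(np.vdot(b, a))    # -0.28
print(np.vdot(fb, fa))  # ≈ 0.4854 + 0.4998j -- magnitude changed, so not unitary
```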
Roughly, [AlphaGo](https://www.nature.com/articles/nature16961.pdf)/[AlphaGo Zero](https://www.nature.com/articles/nature24270.pdf)'s algorithm is as follows: > > 1. Using a policy network, generate a distribution of move probabilities (intuitively, capturing how good those moves are based on a first approximation). > 2. Pick a move based on those probabilities, and imagine the board position that results from the move. Then use the value network to evaluate the resulting position. > 3. Repeat this by using MCTS (Monte Carlo tree search) $n$ times, which results in "augmented" move probabilities. (These probabilities represent the ratio of wins that resulted from the $n$ samples of the MCTS.) > > > **My question is:** * Why does AlphaZero use *both a policy network and a value network*? It could also have used only a policy net with MCTS, or only a value net with MCTS. Is there a principled understanding of why this works well? (apart from "it turns out to work well in experiments"). * Why does AlphaZero use both the value function AND an "upper confidence bound" to guide its search?
First a small note: AlphaGo Zero actually does only use a single network, it no longer has separate value and policy *networks* like the original AlphaGo did. That single network does still have separate *value* and *policy* *heads* though (two separate outputs)... so I suppose you can intuitively still view it as having two networks (which happen to share a large number of parameters between them). --- With that technicality out of the way, it is first important to have a good understanding of what the different roles are that are fulfilled by **learning** (regardless of whether it is a policy or a value network) on one hand, and **search** (MCTS) on the other hand: * The networks (both value and policy) obtained through **learning** can informally be viewed as providing a strong level of **intuition** to the program. These components look at a current state, and instantly make a "snap decision" for an action to play (in the case of the policy network) or a "snap evaluation" to estimate the value of a state (in the case of the value network). They can be trained to perform at an admirable level on their own already, but are always going to be constrained in some sense. When not combined with an element of search (like MCTS), they do not perform any additional "reasoning" or "thinking". * The search component (MCTS) provides "reasoning" / "thinking" / "deliberation" to the program. It does not make any instant "snap" decisions, but happily uses as much thinking time as it can get to continue reasoning and gradually improve the decision it would make during the specific turn it is "thinking" about. It really focuses on that one particular turn, rather than the learning components, which focus on the complete game simultaneously. Given an infinite amount of thinking time, it would play optimally. --- > > Why does AlphaZero use both a policy network and a value network? It could also have used only a policy net with MCTS, or only a value net with MCTS. Is there a principled understanding of why this works well? (apart from "it turns out to work well in experiments"). > > > Note that: * Given a state $s$, the value network gives an estimate of the value $V(s)$ of that state $s$. * Given a state $s$, the policy network gives a recommendation of an action $a = \pi(s)$ to take (or a probability distribution $\pi(s, a)$ over the different possible actions $a$). These are quite different kinds of outputs, and this means there are different parts in the MCTS algorithm where they can be used: * The output $V(s)$ of a value network is an evaluation "in hindsight". You first have to actually reach a state $s$, and then the value network can be used to evaluate it. This is useful, for example, if you want to terminate a Play-out / rollout of MCTS early (before it has been rolled out all the way to a truly terminal state), and return a reasonable evaluation for that rollout. However, it can **not** be used (efficiently) for action selection during a rollout for example; if you want to use a $V(s')$ estimate to determine which action to pick in a rollout, you have to first generate **all** possible successor states $s'$ for **all** possible actions, **evaluate all of them**, and then you can finally pick the best action accordingly. This is computationally expensive. * The output $\pi(s, a)$ of a policy network is a "proactive action selection".
You can use it, for example, during a Play-out/rollout to immediately select an action for a current state $s$, no need to first generate and evaluate all possible successors $s'$. However, when you wish to terminate a rollout, evaluate the resulting state, and backpropagate that evaluation... the policy network does not provide the information required to do that, it doesn't give you a state-value estimate. In practice, it is indeed desirable to be able to do both of the things that were described above as being easy to do for one network and difficult for the other. It is important to use a strong "intuition" (obtained through learning) for action selection in MCTS, because this leads to more realistic rollouts and therefore also often better evaluations. This cannot be done using a value network (well, it can in the Selection phase / tree traversal of MCTS, but not once a node is reached that hasn't been fully expanded). It is also desirable to terminate rollouts early and backpropagate high-quality evaluations, rather than rolling them out all the way (which takes more time and introduces more variance due to greater likelihood of selecting unrealistic actions along the trajectory). This functionality can only be provided by a value network. --- > > Why does AlphaZero use both the value function AND an "upper confidence bound" to guide its search? > > > When the search process starts, when we have not yet run through a large number of MCTS simulations, the evaluations based purely on MCTS itself (which are typically used in the UCB equation) are unreliable. So, initially it is useful to have a strong "intuition" for value estimates, as provided by the value network. As the number of MCTS simulations performed increases (in particular as it tends to infinity, but of course also earlier than that in practice), we expect the MCTS-based evaluations to gradually become more reliable, eventually even potentially becoming more accurate than the learned value network. Note that the value network was trained to provide reasonable value estimates instantly for **any game state**. The MCTS search process dedicates all of its time just to computing reliable value estimates for **the current game state**, it is allowed to "specialize" to the current game state. So, as time moves on, we'll want to start relying a bit more on the MCTS-based evaluations, and a bit less on the initial "intuition" from the value network.
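To make the last point concrete, here is a sketch of the PUCT-style selection rule from the AlphaGo Zero paper, where the policy prior, the backed-up value estimates and the visit counts are combined (the attribute names are my own labels for the per-edge statistics):

```python
import math

def select_child(children, c_puct=1.5):
    """Pick the edge maximizing Q + U: Q is the mean backed-up value,
    U is an exploration bonus scaled by the policy network's prior."""
    total_visits = sum(c.visits for c in children)

    def score(c):
        q = c.value_sum / c.visits if c.visits > 0 else 0.0
        u = c_puct * c.prior * math.sqrt(total_visits) / (1 + c.visits)
        return q + u

    return max(children, key=score)
```

Early on the visit counts are small, so the prior (and the value network's estimates being backed up into `value_sum`) dominate; as visits accumulate, the $Q$ term takes over, which is exactly the shift from "intuition" to search-based evaluations described above.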
I have a dataset. There are lots of missing values. For some columns, the missing value was replaced with -999, but in other columns, the missing value was marked as 'NA'. Why would we use -999 to replace the missing value?
Such values are for databases. Most databases long ago, and many today, allocated a fixed number of digits for integer-valued data. A number like -999 is the smallest that can be stored in four characters, -9999 in five characters, and so on. (It should go without saying that--by definition--a numeric field cannot store alphanumeric characters such as "NA". *Some* numeric code has to be used to represent missing or invalid data.) Why use the most negative number that can be stored to signify a missing value? *Because if you mistakenly treat it as a valid number, you want the results to be dramatically incorrect.* The further your codes for missing values get from being realistic, the safer you are, because hugely wrong input usually screws up the output. (Robust statistical methods are notable exceptions!) How could such a mistake happen? This occurs all the time when data are exchanged between systems. A system that assumes -9999 represents a missing value will blithely output that value when you write the data out in most formats, such as CSV. The system that reads that CSV file might not "know" (or not be "told") to treat such values as missing. Another reason is that good statistical data and computing platforms recognize many different kinds of missing values: NaNs, truly missing values, overflows, underflows, non-responses, etc, etc. By devoting the most negative possible values (such as -9999, -9998, -9997, etc) to these, you make it easy to query out all missing values from any table or array. Yet another is that such values usually show up in graphical displays as extreme outliers. Of all the values you could choose to stand out in a graphic, the most negative possible one stands the greatest chance of being far from your data. --- There are useful implications and generalizations: * A good value to use for missing data in floating-point fields is the most negative valid number, approximately $-1.8 \times 10^{308}$ for double-precision floats. (Imagine the effect that would have on any average!) On the same principle, many old programs, which used single-precision floats, used somewhat arbitrary large numbers such as 1E+30 for missing values. * Adopt a standard rule of this type to make it easy to invent NoData codes in new circumstances (when you are designing your own database software). * Design your software and systems to fail dramatically if they fail at all. The worst bugs are those that are intermittent, random, or tiny, because they can go undetected and be difficult to hunt down.
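On the receiving end, most tools let you declare such sentinel codes when importing, so they become proper missing values instead of leaking into computations. A pandas sketch (the file name and list of codes are hypothetical):

```python
import pandas as pd

# Convert the database's sentinel codes to NaN on import.
df = pd.read_csv("survey.csv", na_values=[-999, -9998, -9999, "NA"])
print(df.isna().sum())  # missing-value count per column
```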
Suppose a **build** max-heap operation runs bubble down over a heap. How does its amortized cost equal $O(n)$?
Note that here we don't have an arbitrary list of operations; we are talking about a single operation. So referring to it as amortized analysis can be confusing. Amortized analysis is usually used for an arbitrary list of operations, not a particular list of operations. We can use the methods for amortized analysis but AFAIK it is not common to call the result an amortized analysis. Schulz's answer is correct and uses the aggregate method. It is the usual proof of the fact and you can find it in textbooks like CLRS, chapter 6. If you want to use the potential method for proving the bound, then remember that we want to get $O(1)$ for each heapify operation (to get $O(n)$ for the build heap operation). The real cost for each heapify is $O(h)$. We have to pay for the rest from the potential function. If we charge $c$ units for amortized cost then we can see that the analysis will work (we will determine $c$ later). For the nodes in the bottom level (1) we will have to pay $n/2$. So we save potential $$cn/2-n/2$$ For the next level (2) we will pay $2n/4$ so we have potential $$c(n/2 + n/4) - (n/2 + 2n/4)$$ For the next level (3) we will pay $3n/8$ so we have potential $$c(n/2 + n/4 + n/8) - (n/2 + 2n/4 + 3n/8)$$ For the $k$th level we will pay $kn/2^k$ so we have potential $$cn\Sigma\_{i=1}^k \frac{1}{2^i} - n\Sigma\_{i=1}^k \frac{i}{2^i}$$ Note that $\Sigma\_{i=1}^k \frac{i}{2^i} \leq d$ for some constant $d$ independent of $n$ and $k$ and $\Sigma\_{i=1}^k \frac{1}{2^i} \geq \frac{1}{2}$. If we put these in the formula we get that the potential will always be at least $\frac{c}{2}n - dn$. So if we choose $c$ to be any constant larger than $2d$ it will always be positive. The exact value of $c$ is not really important. To cast this as an accounting method we need to specify which part of the data structure the potential is being assigned to. In this case the potential needs to be assigned to the levels of the heap (if we want to assign the potential to nodes then it will be more complicated since it is difficult to tell which nodes will be used later when heapify is applied to nodes on higher levels).
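If it helps to see the $O(n)$ total cost empirically, here is a small sketch that counts the comparisons made by a standard bottom-up build-heap (my own instrumentation, not from any textbook):

```python
import random

def build_max_heap(a):
    """Bottom-up build-heap; returns the total number of comparisons."""
    n, comparisons = len(a), 0

    def sift_down(i):
        nonlocal comparisons
        while True:
            left, right, largest = 2 * i + 1, 2 * i + 2, i
            if left < n:
                comparisons += 1
                if a[left] > a[largest]:
                    largest = left
            if right < n:
                comparisons += 1
                if a[right] > a[largest]:
                    largest = right
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    for i in range(n // 2 - 1, -1, -1):
        sift_down(i)
    return comparisons

for n in (10_000, 20_000, 40_000):
    print(n, build_max_heap([random.random() for _ in range(n)]) / n)
# The comparisons-per-element ratio stays bounded by a small constant,
# i.e. the total work grows linearly rather than like n log n.
```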
I am just wondering what we can infer from a graph with x-axis as the actual and y axis as the predicted data? ![signups](https://i.stack.imgur.com/dwlOW.png)
Scatter plots of Actual vs Predicted are one of the richest forms of data visualization. You can tell pretty much everything from them. Ideally, all your points should be close to a regressed diagonal line. So, if the Actual is 5, your predicted should be reasonably close to 5 too. If the Actual is 30, your predicted should also be reasonably close to 30. So, just draw such a diagonal line within your graph and check out where the points lie. If your model had a high R Square, all the points would be close to this diagonal line. The lower the R Square, the weaker the goodness of fit of your model, and the more foggy or dispersed your points are (away from this diagonal line). You will see that your model seems to have three subsections of performance. The first one is where Actuals have values between 0 and 10. Within this zone, your model does not seem too bad. The second one is when Actuals are between 10 and 20; within this zone your model is essentially random. There is virtually no relationship between your model's predicted values and Actuals. The third zone is for Actuals >20. Within this zone, your model consistently and greatly underestimates the Actual values. From this scatter plot, you can tell other issues related to your model. The residuals are heteroskedastic. This means the variance of the error is not constant across various levels of your dependent variable. As a result, the standard errors of your regression coefficients are unreliable and may be understated. In turn, this means that the statistical significance of your independent variables may be overstated. In other words, they may not be statistically significant. Because of the heteroskedastic issue, you actually can't tell. Although you can't be sure from this scatter plot, it appears likely that your residuals are autocorrelated. If your dependent variable is a time series that grows over time, they definitely are. You can see that between 10 and 20 the vast majority of your residuals are positive. And, >20 they are all negative. If your independent variable is indeed a time series that grows over time it has a Unit Root issue, meaning it is trending ever upward and is nonstationary. You have to transform it to build a robust model.
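If you want to draw that diagonal reference line yourself, here is a minimal matplotlib sketch (with made-up data standing in for your actual/predicted values):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
y_true = rng.uniform(0, 30, 200)                   # stand-in for Actuals
y_pred = y_true + rng.normal(0, 1 + 0.3 * y_true)  # heteroskedastic errors

plt.scatter(y_true, y_pred, s=10, alpha=0.6)
lims = [min(y_true.min(), y_pred.min()), max(y_true.max(), y_pred.max())]
plt.plot(lims, lims, "r--", label="perfect prediction (y = x)")
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.legend()
plt.show()
```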
From my basic understanding of Adaptive Control, I understand that it uses the error and the velocity of the error to approximate the error in the solution space of a problem, thus allowing for guaranteed convergence under certain conditions and rapid adaptation to changing conditions. I've accumulated this knowledge basically from anecdotal evidence and from this [video](http://1.usa.gov/1tg2n2I). From my basic understanding of a Kalman Filter, it takes into account the error from past measurements to estimate the current state with greater accuracy. From my flawed perspective, they seem almost identical, but what's the difference between the two? I've heard anecdotally that they are duals of each other, but that the Kalman Filter is only for linear systems? Is this close to the truth?
These are two fundamentally different parts of control theory, although the way that they work is kind of similar. In a Kalman Filter you *assume* a model for your system and a model for your error and the filter estimates the dynamic states of the model, which change as a function of time. On the other hand, with adaptive control you assume a model, but define some parameters of the model that are unknown. Usually these are assumed to be fixed in time or changing slowly with respect to your system dynamics. The adaptive part then estimates these parameters, and the controller acts accordingly to control the states. You can actually use a Kalman Filter *and* adaptive control at the same time, each one achieving a different control result for your system.
As I understand it, microcode translates an instruction into microinstructions, and the CPU has a unit that stores all possible microinstructions. These microinstructions can be changed, because they are loaded from the motherboard every time after boot. Can the instruction set be changed? I am sorry if this is a strange question; if I have misunderstood something, please correct me. Thanks, and again I am sorry if my English grammar is bad. (Google Translate)
That depends completely on the CPU. Some CPUs don't use microcode at all. Some CPUs use a mixture of hardcoded instructions and microcoded instructions. This mixture can be anywhere between almost all hardcoded and only a few microcoded instructions to almost all microcoded and only a few hardcoded instructions. For CPUs that have microcode, the microcode could be either modifiable or not modifiable. If all instructions are microcoded (there are no hardcoded instructions) and the microcode is stored and loaded in such a way that you can load a different one at will, then it would *theoretically* be possible to load microcode that implements a completely different instruction set. However, the feasibility of that still depends on the internal design of the CPU. An extreme example would be the Transmeta Crusoe and Efficeon CPUs. These were 128 bit VLIW designs internally, and in order to execute x86 code, they had a software layer called *Code Morphing Software*, which was essentially an interpreter and JIT compiler from x86 to the internal VLIW code. The Code Morphing Software was running entirely in the standard RAM. This makes it sound like it should have been *very* easy to just load a different version of the Code Morphing Software and turn the CPU into an ARM or PowerPC or SPARC or MIPS CPU, right? Well, not according to Linus Torvalds, who worked for Transmeta at the time. He was asked this exact question, and his answer was that the internals of the CPU are so specifically optimized for interpreting x86 code that writing a CMS implementation for, say, MIPS would be no different (and no more performant) than simply writing a MIPS emulator for x86 … which already exist anyway. On the other hand, the CPUs of the Xerox Alto, Dandelion, Dolphin, and Dorado workstations were *explicitly* designed to be microcoded for different use cases. They would run completely different microcode depending on whether they were running Interlisp, Smalltalk, or the Star system. For example, the Smalltalk system implemented the most important instructions of their VM in microcode, so that in some sense, the CPU could directly execute (parts of) Smalltalk byte code. You could say that when running the Smalltalk system, Smalltalk byte code was the ISA!
Say we are studying Twitter hashtags over time. We monitor how popular they are day to day. Some hashtags may be volatile (i.e. "lunch", "Celtics", "Friday"). Their popularity rises and falls frequently. Some hashtags may be in the process of becoming unpopular (i.e. "Gulf oil spill", "Transformers 2", "Christine O'Donnell"). Is there a mathematical model that can distinguish between a hashtag that has temporarily fallen in popularity but is likely to go up in popularity later and a hashtag that is sinking and likely to stay sunk in popularity? thanks
As far as I can tell from my reading, a common method for determining a time series' trend is to smooth the series, perhaps in an iterated fashion, as in: [A Pakistan SBP paper](http://www.sbp.org.pk/departments/stats/sam.pdf). In the Seasonal Adjustment Methodology section, it describes how X-12 ARIMA does it, though they also use a seasonal factor which perhaps you could also use or perhaps you could simply ignore. Other links might include [A Bank of England web page](http://www.bankofengland.co.uk/mfsd/iadb/notesiadb/seasonal_adjustment.htm) and [A US Census Bureau paper](http://www.census.gov/ts/papers/jbes98.pdf) (pages 8-12).
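As a rough illustration of the smoothing idea (ignoring the seasonal-factor machinery of X-12 ARIMA), one could iterate a moving average and look at the recent slope of the resulting trend; a persistently negative slope suggests a hashtag that is sinking rather than merely oscillating. A sketch with simulated daily counts:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
t = np.arange(365)
counts = 100 * np.exp(-t / 120) + 10 * np.abs(np.sin(np.pi * t / 7)) + rng.poisson(5, t.size)
series = pd.Series(counts, index=pd.date_range("2010-01-01", periods=t.size, freq="D"))

# Iterated moving average as a crude trend estimate.
trend = series.rolling(28, center=True).mean().rolling(28, center=True).mean()
print("average daily change of the trend over the last two months:",
      trend.diff().dropna().tail(60).mean())
```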
The rules of the question state that: > > 0. Only one element is different. > 1. The rest are all the same. > 2. The array A has size 8. > 3. I need to find the different element and remove it (hashing cannot be used). > > > I have not developed the code, but I have come up with an algorithm. There are two cases: one where the different element's value is smaller and one where it is greater. Suppose I take case 2. Here I compare the sum of the first 3 elements and the next 3 elements. If both are the same, I compare A[6] and A[7]. If they are different, suppose the first 3 elements have the greater sum; then I compare A[0] and A[1]. If they are the same, A[3] is the unequal element; otherwise either A[0] or A[1] is greater, according to their value. Now my question is: what if the array size is $N > 8$? What will be the algorithm or code for that?
* Compare the first two elements (1 comparison). * If they are equal, find the different element among the remaining ones ($n-2$ comparisons). * If they are different, compare the third element to the first ($1$ comparison). If they differ, the different element is the first; otherwise the second. So you conclude in either $n-1$ comparisons (with probability $1-\frac2n$) or in $2$ (with probability $\frac2n$). The expectation is $n-3+\frac6n$, assuming the locations to be equiprobable. --- **Based on Nathaniel's comment, there is a better way:** * find the first pair of distinct elements (at most $m:=\lfloor\frac n2\rfloor$ comparisons); * conclude with an extra comparison. More precisely, * when you find a heterogeneous pair, perform a majority vote with a value from another pair. * if $n$ is even, you only need to process the first $m-1$ pairs (if they are all homogeneous, the last one is not). Worst case $m = \frac n2$ comparisons; expected case $\frac{1+2+\cdots+(m-1)}m+1 = \frac n4+\frac 12$ comparisons. * if $n$ is odd, you need to process $m$ pairs; if none is heterogeneous, the different element is the last one. Worst case $m+1 = \frac{n+3}2$ comparisons; expected case $\frac{(n-1)(\frac{1+2+\cdots+m}m+1)+m}n = \frac n4 +\frac32-\frac 7{4n}$ comparisons. Note that in the worst case it is impossible to beat $m$ comparisons, as you must read all values before you can conclude.
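A small sketch of the pairing strategy in code (it does not bother with the even-$n$ refinement of skipping the last pair, so it uses at most $m+1$ comparisons):

```python
def find_odd_index(a):
    """Index of the single element that differs from all others (len(a) >= 3)."""
    n, comparisons = len(a), 0
    for i in range(0, n - 1, 2):            # scan pairs (a[0],a[1]), (a[2],a[3]), ...
        comparisons += 1
        if a[i] != a[i + 1]:
            # Heterogeneous pair: majority vote against an element of another pair.
            other = a[i + 2] if i + 2 < n else a[i - 1]
            comparisons += 1
            return (i if a[i] != other else i + 1), comparisons
    return n - 1, comparisons               # all scanned pairs were homogeneous

print(find_odd_index([7, 7, 7, 3, 7, 7, 7, 7]))  # (3, 3)
```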
I am a software developer, but I come from a non-CS background, so maybe this is the wrong question to ask, but I do not get why **logic gates**/**boolean logic** behave the way they do. Why, for example:

```
1 AND 1 = 1 // true AND true
1 OR 0 = 1  // true OR false
0 AND 1 = 0 // false AND true
```

And so on. Is it like that purely by definition, as with the [material conditional](https://en.wikipedia.org/wiki/Material_conditional), or is there a logical/intuitive explanation for these results? I have searched Google and looked at the Wikipedia page on logic gates for an explanation of 'why', but I can only find 'how'. I would appreciate any answers or resources.
I think the questioner has it backwards. If we have a logical function such that

```
A | B | result
---+---+-------
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
```

then we **decide** to call that function *and*, because it is obvious that the result is 1 only when A *and* B are both 1. Similarly for *or*, *exclusive-or*, etc. There are 16 possible logical functions of 2 operands -- it's easy to list them all, since they are just the 16 possible ways of filling in the 'result' column above. Some have obvious names. The field probably dates from George Boole. As far as circuits are concerned, any of the 16 could in principle be built, but some are more useful than others.
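You can enumerate all 16 of them in a few lines; each possible 'result' column is one function, and only a handful have conventional names (a quick sketch):

```python
from itertools import product

rows = list(product((0, 1), repeat=2))   # input rows (A, B) = 00, 01, 10, 11

named = {(0, 0, 0, 1): "AND", (0, 1, 1, 1): "OR", (0, 1, 1, 0): "XOR",
         (1, 1, 1, 0): "NAND", (1, 0, 0, 0): "NOR", (1, 1, 0, 1): "A IMPLIES B"}

for column in product((0, 1), repeat=4):  # every possible result column
    print(dict(zip(rows, column)), named.get(column, ""))
```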
I'm looking at the effect that defeat- and entrapment-inducing conditions have on subjective ratings of defeat and entrapment at three different time points (among other things). However, the subjective ratings are not normally distributed. I've tried several transformations and the square-root transformation seems to work best. However, there are still some aspects of the data that have not normalized. This non-normality manifests itself as negative skewness in the high-entrapment, high-defeat condition at the time point at which I expected the highest defeat and entrapment ratings. Consequently, I think it could be argued that this skew is due to the experimental manipulation. Would it be acceptable to run ANOVAs on these data despite the lack of normality, given the manipulations? Or would non-parametric tests be more appropriate? If so, is there a non-parametric equivalent of a 4x3 mixed ANOVA?
It's the residuals that should be normally distributed, not the marginal distribution of your response variable. I would try using transformations, do the ANOVA, and check the residuals. If they look noticeably non-normal regardless of what transformation you use, I would switch to a non-parametric test such as the Friedman test.
When analyzing the asymptotic running time of an algorithm where the tightest lower bound and upper bound are not the same, is it bad to denote the running time in theta notation? If an algorithm has a running time of $\Theta(n)$, is it safe to assume that the upper and lower bound are the same?
> > When analyzing the asymptotic running time of an algorithm where the tightest lower bound and upper bound are not the same, is it bad to denote the running time in theta notation? > > > Do you mean to say that we can only prove an $\Omega(f\_1)$ and an $O(f\_2)$ bound, respectively, on *the same cost function* (such as "worst-case running time")? Then yes: you can only use $\Theta(f\_1) = \Theta(f\_2)$ to combine the bounds if, well, $f\_1 \in \Theta(f\_2)$. If you mean to say that best case and worst case have different growth rates, then that's a [different issue](https://cs.stackexchange.com/questions/23068/how-do-o-and-%ce%a9-relate-to-worst-and-best-case): "lower bound" and "best case" as well as "upper bound" and "worst case" are *not* the same concepts! You can have $\Theta$-bounds for either case, but they still don't combine in a meaningful way since they are talking about different cost functions. > > Also, if an algorithm has a running time of $\Theta(n)$, is it safe to assume that the upper and lower bound are the same? > > > That's not a "safe assumption", that's *the definition* of $\Theta$. I suggest you revisit the definitions of the different Landau symbols and the basics on how to use them in algorithm analysis. Maybe start reading [at our reference question](https://cs.stackexchange.com/a/61/98).
I have to find a negative cycle in a directed weighted graph. I know how the Bellman-Ford algorithm works, and that it tells me if there is a reachable negative cycle. But it does not explicitly name it. How can I get the actual path $v\_1, v\_2, \ldots, v\_k, v\_1$ of the cycle? After applying the standard algorithm we have already done $n-1$ iterations and no further improvement should be possible. If we can still lower the distance to a node, a negative cycle exists. My idea is: since we know the edge that can still improve the path and we know the predecessor of each node, we can trace our way back from that edge until we meet it again. Now we should have our cycle. Sadly, I did not find any paper that tells me if this is correct. So, does it actually work like that? **Edit:** This example proves that my idea is wrong. Given the following graph, we run Bellman-Ford from node $1$. ![enter image description here](https://i.stack.imgur.com/pNLTw.png) We process edges in the order $a, b, c, d$. After $n-1$ iterations we get **node distances:** $1: -5$ $2: -30$ $3: -15$ and **parent table:** $1$ has parent $3$ $2$ has parent $3$ $3$ has parent $2$ Now, doing the $n$th iteration we see that the distance of node $1$ can still be improved using edge $a$. So we know that a negative cycle exists and $a$ is part of it. But, by tracing our way back through the parent table, we get stuck in another negative cycle $c, d$ and never meet $a$ again. How can we solve this problem?
You are right for the most part. Just one more addition: when you walk back along the predecessor chain trying to find the cycle, you stop when you either reach the starting vertex $v\_1$ or any other vertex that has already been seen in the predecessor chain so far. Basically, you stop and output the vertices as soon as you detect a cycle while going backwards through the predecessors. As for papers, a simple Google search yields [Xiuzhen Huang: *Negative-Weight Cycle Algorithms*](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.1981&rep=rep1&type=pdf). As a bonus, they also list another algorithm for finding negative weight cycles that are *not* reachable from the source vertex $s$.
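A sketch of that procedure in code (this version uses the common variant of running $n$ full rounds and walking back from the last vertex that was relaxed, stopping as soon as a vertex repeats; the example graph is made up, not the one from the question):

```python
def find_negative_cycle(n, edges, source=0):
    """edges: list of (u, v, w).  Returns a list of vertices on a negative
    cycle reachable from `source`, or None if there is no such cycle."""
    INF = float("inf")
    dist, pred = [INF] * n, [None] * n
    dist[source] = 0

    last_relaxed = None
    for _ in range(n):                      # n rounds, one more than usual
        last_relaxed = None
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v], pred[v] = dist[u] + w, u
                last_relaxed = v
    if last_relaxed is None:                # nothing improved in round n
        return None

    # Walk back along predecessors until some vertex repeats; that vertex
    # closes a (negative) cycle of the predecessor graph.
    seen, x = set(), last_relaxed
    while x not in seen:
        seen.add(x)
        x = pred[x]
    cycle, cur = [x], pred[x]
    while cur != x:
        cycle.append(cur)
        cur = pred[cur]
    cycle.reverse()
    return cycle

edges = [(0, 1, 1), (1, 2, -1), (2, 3, -1), (3, 1, -1)]
print(find_negative_cycle(4, edges))        # e.g. [2, 3, 1], i.e. 2 -> 3 -> 1 -> 2
```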
DNS Spoofing and DNS Hijacking are both basically types of attacks where the request is redirected to some malicious website or some other web page. But I did not find a concrete difference between them. Can someone please explain the primary difference between DNS Spoofing and DNS Hijacking?
You are missing a major part of the definition of $CoNP$: you need a proof that your machine $M$ runs in nondeterministic polynomial time for all "yes" inputs. Because you have simply taken the NP SAT algorithm and flipped it, it runs in nondeterministic polynomial time for all "no" inputs. But we have no such guarantees for the "yes" inputs. The real answer is, nobody knows whether SAT is in $CoNP$. If it were, we would know that $CoNP = NP$, but this is still an open problem. So, nobody can point to a specific reason that you *can't* build a machine that decides the complement of SAT in nondeterministic polynomial time. It's just that nobody has been able to do it yet, or to prove that it's impossible.
I've got a regression problem where a model is required to predict a value in the range [0, 1]. I've tried to look at the distribution of the data and and it seems that there are more examples with a low value label ([0, 0.2]) than higher value labels ([0.2, 1]). When I try to train the model using the MAE metric, the model converges to a state where it has a very low loss, but it seems that the model has converged to a state in which it predicts a low value on many of the high value label examples. So my assumption was that the data is imbalanced and I should try to weight the loss of the examples depending on their label. **Question:** what is the best way to weight the loss in this configuration? Should I weight each example by the value of its label using some function f(x) , where f(x) is low when x is low and high when x is high? *Or* should I split the label values into bins ([0, 0.1), [0.1, 0.2) ... [0.9, 1]) and weight each bin (similarly to categorical loss weight)?
> > I'm having trouble understanding the use of Vector in machine learning > to represent a group of features. > > > **In short**, I would say that a "feature vector" is just a convenient way to speak about a set of features. **Indeed,** for each label 'y' (to be predicted), you need a set of values 'X'. And a very convenient way of representing this is to put the values in a vector, so that when you consider multiple labels, you end up with a matrix, containing one row per label and one column per feature. In an abstract way, you can definitely think of those vectors as belonging to a multi-dimensional space, but (usually) not a Euclidean one. Hence all the math applies; only the interpretation differs! Hope that helps you.
I have N elements and need to find the maximum of these elements. At each time tick, exactly one of the N elements is updated and I need to determine the new max element (more specifically, the index of the max element). How can I do this by accessing as little "state" as possible? The naive approach is to store all N elements and in each tick read the N elements and determine the max element. Here the amount of "state" accessed corresponds to N elements. Is it possible to determine the max element by maintaining lesser state -- specifically state that grows sub-linearly as N increases? Essentially I do not want to make a pass on the entire set of N elements. Assume N is a fixed number. It seems like this problem has the flavor of data streaming algorithms like the count(-min) sketch. We have N buckets -- items being streamed in update the count of one of these N buckets -- and wish to perform some query on these N buckets with sub-linear space usage. I understand how sketches can be used to query the value of the Nth bucket, but do not understand if it can be extended to obtain an estimate for the max value (and specifically, the index of the max element). The heavy hitters problem outputs items above a certain frequency, but I am interested only in the item with the maximum frequency. Am I over-thinking this problem? Is there a far simpler solution?
In the streaming model, any constant approximation of the maximum requires $\Omega(N)$ space. Showing this for a small enough constant approximation follows by an easy reduction from the disjointness problem in two-party communication complexity. This is now very standard, but I will describe it below anyways. Recall that in the disjointness problem Alice is given a binary string $x \in \{0, 1\}^N$, and Bob is given a string $y \in \{0, 1\}^N$, and they want to decide whether $x$ and $y$ have disjoint support. It is a classic result that this task requires that Alice and Bob exchange at least $\Omega(N)$ bits, even if they share random bits and only want to succeed with probability $2/3$ over the choice of randomness. Assume there is a randomized streaming algorithm $A$ with space complexity $S$, which with probability $2/3$ computes a number $a$ such that $\frac{a}{F\_\infty} \in (\frac{2}{3}, \frac{4}{3})$, where $F\_\infty$ is the true maximum frequency of the $N$ elements. This gives a protocol for the disjointness problem with $S$ bits of communication, and therefore $S = \Omega(N)$. In the reduction, Alice simulates the algorithm $A$ and feeds it a stream which increases the frequency of element $i$ by $1$ for each $i$ such that $x\_i = 1$. Then Alice takes the memory state of $A$ (which has only $S$ bits) and sends it to Bob, who then simulates $A$ starting from the state he received from Alice and feeds it a stream which increases the frequency of $i$ by $1$ for each $i$ s.t. $y\_i = 1$. Then Bob decides that $x$ and $y$ have disjoint support if the output $a$ of $A$ is at most $4/3$. For more information, see [Alon, Matias, and Szegedy's beautiful paper](http://www.tau.ac.il/~nogaa/PDFS/amsz4.pdf), which introduced the streaming model and proved the first lower bounds.
It is impossible to write a programming language that allows all machines that halt on all inputs and no others. However, it seems to be easy to define such a programming language for any standard complexity class. In particular, we can define a language in which we can express all efficient computations and only efficient computations. For instance, for something like $P$: take your favorite programming language, and after you write your program (corresponding to Turing Machine $M'$), add three values to the header: an integer $c$, an integer $k$, and a default output $d$. When the program is compiled, output a Turing machine $M$ that, given input $x$ of size $n$, runs $M'$ on $x$ for $c n^k$ steps. If $M'$ does not halt before the $c n^k$ steps are up, output the default output $d$. Unless I am mistaken, this programming language will allow us to express all computations in $P$ and nothing more. However, this proposed language is inherently non-interesting. My question: are there programming languages that capture subsets of computable functions (such as all efficiently computable functions) in a non-trivial way? If there are not, is there a reason for this?
One language attempting to express only polynomial time computations is the [soft lambda calculus](http://www-lipn.univ-paris13.fr/~baillot/Publications/MESPAPIERS/fossacs072.pdf). Its type system is rooted in linear logic. A recent [thesis](http://edoc.ub.uni-muenchen.de/9910/1/Schimanski_Stefan.pdf) addresses polynomial time calculi, and provides a good summary of recent developments based on this approach. Martin Hofmann has been working on the topic for quite some time. An older list of relevant papers can be found [here](http://www.dcs.ed.ac.uk/home/resbnd/); many of his [papers](http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/h/Hofmann:Martin.html) continue in this direction. Other work takes the approach of verifying that the program uses a certain amount of resources, using [Dependent Types](http://www.cs.st-andrews.ac.uk/~eb/drafts/icfp09.pdf) or [Typed Assembly Language](http://www.cs.cornell.edu/talc/papers/resource_bound/res.pdf). Yet other approaches are based on [resource bounded formal calculi](http://www.di.unito.it/~dezani/papers/asian03.pdf), such as variants of the ambient calculus. These approaches have the property that well-typed programs satisfy some pre-specified resource bounds. The resource bound could be time or space, and generally can depend upon the size of the inputs. Early work in this area is on strongly normalising calculi, meaning that all well-typed programs halt. [System F](http://en.wikipedia.org/wiki/System_F), aka the polymorphic lambda calculus, is strongly normalising. It has no fixed point operator, but is nonetheless quite expressive, though I don't think it is known what complexity class it corresponds to. By definition, any strongly normalising calculus expresses some class of terminating computations. The programming language [Charity](http://en.wikipedia.org/wiki/Charity_(programming_language)) is a quite expressive functional language that halts on all inputs. I don't know what complexity class it can express, but the Ackermann function can be written in Charity.
A perfect estimator would be accurate (unbiased) and precise (good estimation even with small samples). I never really thought about the question of precision, only about accuracy (as I did in [Estimator of $\frac{\sigma^2}{\mu (1 - \mu)}$ when sampling without replacement](https://stats.stackexchange.com/questions/298309/estimator-of-frac-sigma2-mu-1-mu-when-sampling-without-replacement) for example). Are there cases where an unbiased estimator is less precise (and therefore possibly "less good") than a biased estimator? If yes, I would love a simple example proving mathematically that the less accurate estimator is so much more precise that it could be considered better.
Yes there are plenty of cases; you're beating around the bush that is the topic of [Bias-Variance tradeoff](https://en.wikipedia.org/wiki/Bias_of_an_estimator#Bias.2C_variance_and_mean_squared_error) (in particular, the graphic to the right is a good visualization). As for a mathematical example, I am pulling the following example from the excellent *Statistical Inference* by Casella and Berger to show that a biased estimator has lower Mean Squared Error and thus is considered better. Let $X\_1, ..., X\_n$ be i.i.d. n$(\mu, \sigma^2)$ (i.e. Gaussian with mean $\mu$ and variance $\sigma^2$ in their notation). We will compare two estimators of $\sigma^2$: the first, unbiased, estimator is $$\hat{\sigma}\_{unbiased}^2 := \frac{1}{n-1}\sum\_{i=1}^{n} (X\_i - \bar{X})^2$$ usually called $S^2$, the canonical sample variance, and the second is $$\hat{\sigma}\_{biased}^2 := \frac{1}{n}\sum\_{i=1}^{n} (X\_i - \bar{X})^2 = \frac{n-1}{n}\hat{\sigma}\_{unbiased}^2$$ which is the Maximum Likelihood estimate of $\sigma^2$. First, the MSE of the unbiased estimator: $$\begin{align} \text{MSE}(\hat{\sigma}^2\_{unbiased}) &= \text{Var} \ \hat{\sigma}^2\_{unbiased} + \text{Bias}(\hat{\sigma}^2\_{unbiased})^2 \\ &= \frac{2\sigma^4}{n-1}\end{align}$$ The MSE of the biased, maximum likelihood estimate of $\sigma^2$ is: $$\begin{align}\text{MSE}(\hat{\sigma}\_{biased}^2) &= \text{Var}\ \hat{\sigma}\_{biased}^2 + \text{Bias}(\hat{\sigma}\_{biased}^2)^2\\ &=\text{Var}\left(\frac{n-1}{n}\hat{\sigma}^2\_{unbiased}\right) + \left(\text{E}\hat{\sigma}\_{biased}^2 - \sigma^2\right)^2 \\ &=\left(\frac{n-1}{n}\right)^2\text{Var} \ \hat{\sigma}^2\_{unbiased} \, + \left(\text{E}\left(\frac{n-1}{n}\hat{\sigma}^2\_{unbiased}\right) - \sigma^2\right)^2\\ &= \frac{2(n-1)\sigma^4}{n^2} + \left(\frac{n-1}{n}\sigma^2 - \sigma^2\right)^2\\ &= \left(\frac{2n-1}{n^2}\right)\sigma^4\end{align}$$ Hence, $$\text{MSE}(\hat{\sigma}\_{biased}^2) = \frac{2n-1}{n^2}\sigma^4 < \frac{2}{n-1}\sigma^4 = \text{MSE}(\hat{\sigma}\_{unbiased}^2)$$
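A quick simulation confirming those two MSE formulas (NumPy; sample size and $\sigma^2$ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, n, reps = 4.0, 10, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2_unbiased = x.var(axis=1, ddof=1)   # divide by n - 1
s2_mle = x.var(axis=1, ddof=0)        # divide by n (biased)

mse = lambda est: np.mean((est - sigma2) ** 2)
print(mse(s2_unbiased), 2 * sigma2**2 / (n - 1))     # ≈ 3.56
print(mse(s2_mle), (2 * n - 1) * sigma2**2 / n**2)   # ≈ 3.04
```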
In [compressed sensing](http://en.wikipedia.org/wiki/Compressed_sensing), the goal is to find linear compression schemes for huge input signals that are known to have a sparse representation, so that the input signal can be recovered efficiently from the compression (the "sketch"). More formally, the standard setup is that there is a signal vector $x \in \mathbb{R}^n$ for which $\|x\|\_0 < k$, and the compressed representation equals $Ax$ where $A$ is a $R$-by-$n$ real matrix where we want $R \ll n$. The magic of compressed sensing is that one can explicitly construct $A$ such that it allows fast (near-linear time) exact recovery of any $k$-sparse $x$ with $R$ as small as $O(k n^{o(1)})$. I might not have the parameters best known but this is the general idea. My question is: are there similar phenomena in other settings? What I mean is that the input signal could come from some "low complexity family" according to a measure of complexity that is not necessarily sparsity. We then want compression and decompression algorithms, not necessarily linear maps, that are efficient and correct. Are such results known in a different context? What would your guess be for a more "general" theory of compressed sensing? (Of course, in applications of compressed sensing, linearity and sparsity are important issues. The question I ask here is more "philosophical".)
There is manifold-based compressed sensing, in which the sparsity condition is replaced by the condition that the data lie on a low-dimensional submanifold of the natural space of signals. Note that sparsity can be phrased as lying on a particular manifold (in fact, a secant variety). See, for example [this paper](http://people.ee.duke.edu/~lcarin/MFA10.pdf) and the references in its introduction. (I admittedly do not know if this paper is representative of the area -- I am more familiar with the related topic of manifold-based classifiers a la [Niyogi-Smale-Weinberger](http://people.cs.uchicago.edu/~niyogi/papersps/noise.pdf).)
In a previous question, I inquired about fitting distributions to some non-Gaussian empirical data. It was suggested to me offline that I might try assuming the data are Gaussian and fit a Kalman filter first. Then, depending on the errors, decide whether it is worth developing something fancier. That makes sense. So, with a nice set of time series data, I need to estimate **several** variables for a Kalman filter to run. (Sure, there is probably an R package somewhere, but I want to actually **learn** how to do this myself.)
Max Welling has a nice [tutorial](http://www.cs.ucl.ac.uk/staff/S.Prince/4C75/WellingKalmanFilter.pdf) that describes all of the Kalman Filtering and Smoothing equations as well as parameter estimation. This may be a good place to start.
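To get a feel for the filtering part before worrying about parameter estimation, here is a minimal scalar Kalman filter sketch with hand-picked noise variances $Q$ and $R$ (estimating those from the data, e.g. by maximum likelihood/EM, is what the tutorial's parameter-estimation section covers):

```python
import numpy as np

def kalman_1d(y, A=1.0, C=1.0, Q=0.1, R=1.0, x0=0.0, P0=1.0):
    """x_t = A x_{t-1} + w,  y_t = C x_t + v,  w ~ N(0, Q), v ~ N(0, R)."""
    x, P, means = x0, P0, []
    for obs in y:
        x_pred, P_pred = A * x, A * P * A + Q          # predict
        K = P_pred * C / (C * P_pred * C + R)          # Kalman gain
        x = x_pred + K * (obs - C * x_pred)            # update
        P = (1 - K * C) * P_pred
        means.append(x)
    return np.array(means)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, np.sqrt(0.1), 100))    # latent random walk
y = truth + rng.normal(0, 1.0, 100)                    # noisy observations
m = kalman_1d(y)
print(np.mean((m - truth) ** 2), np.mean((y - truth) ** 2))  # filtered MSE < raw MSE
```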
I am interested in fitting a Poisson/negative binomial distribution to estimate the number of times a phenomenon happens within a period, let's just say 10 years. I can count the events from monthly reports, but unfortunately, there are reports missing. So for one sample, I might have 120 observational slots, but for some others I might have only 30. The event can still happen when it is not observed; it just goes unrecorded. The pattern of missing slots is random (i.e. not correlated between samples), and it can result in anything from a nearly complete observational record to a very decimated one. How can I cope with this?
Sounds like your problem is one where you have a fixed rate at which the event happens (after adjusting for covariates), but one unit you observe for, say, 10 years while another for only 1 year. This is a fairly standard problem for this type of model, and an "offset" or "exposure" term is designed for exactly that problem. Once you know those keywords, it is easy to find more about this in any textbook dealing with count data. As a textbook I like <http://www.stata.com/bookstore/regression-models-categorical-dependent-variables>
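For example, with a log-linked Poisson model the log of the observation time enters as an offset, so the coefficients describe the per-month rate regardless of how long each unit was observed. A sketch with made-up data (statsmodels exposes this directly through the `exposure` argument):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = rng.integers(30, 121, size=200)        # observed months per unit
x = rng.normal(size=200)
rate = np.exp(-2 + 0.5 * x)                     # true events per month
events = rng.poisson(rate * months)

X = sm.add_constant(x)
fit = sm.GLM(events, X, family=sm.families.Poisson(), exposure=months).fit()
print(fit.params)   # close to (-2, 0.5), on the per-month rate scale
```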
Given two integers $M$ and $N$, the recursive modulo is $M \bmod (M \bmod (M \bmod \cdots (M \bmod N)\cdots))$, iterated until the result is 0. What is its time complexity? I guess that it's $O(\log M)$ but I can't prove it.
Extending on @gnasher729's answer, I tried to find, given $N\in \mathbb{N}$, what is the smallest $M$ such that $N$ iterations of modulo are necessary before getting $0$ (and even finding if such an $M$ existed). What we want to find is the smallest $M$ such that $\forall k\in \{1, …, N\}$, $M \equiv -1 \mod k$. Denote $f(N)$ such an $M$. A few lines in Python after that, I found the following values: * $f(2) = 1$; * $f(3) = 5$; * $f(4) = 11$; * $f(5) = f(6) = 59$; * $f(7) = 419$; * $f(8) = 839$; * $f(9) = f(10) = 2519$; * $f(11) = f(12) = 27719$; * $f(13) = f(14) = f(15) = 360359$. Given those observations, I conjectured that the following algorithm could compute $f(N)$: * $f(2) = 1$; * if $N > 2$, then + either $N = p^{\alpha}$ with $p$ prime, and $f(N) = p\times (f(N-1) + 1) - 1$; + or $f(N) = f(N-1)$. After a bit of search, I found out that I just rediscovered a formula to compute [$\text{lcm}(1, 2, …, N) - 1$](https://math.stackexchange.com/questions/659799/lcm-of-first-n-natural-numbers). Given the definition of the least common multiple, it is clear that $f(N) \equiv -1 \mod k$ for all $k\in \{1, …, N\}$. $f(N)$ is indeed the smallest such $M$: assume $M<\text{lcm}(1, 2, …, N) - 1$. Then there exists $k\in \{1, …, N\}$ such that $k \not\mid M + 1$. That means that $M\not\equiv -1\mod k$.
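A brute-force check of this characterization for small $N$ (with the convention that the step producing $0$ is counted, which matches the $f(N)$ values above):

```python
from math import gcd
from functools import reduce

def iterations(M, N):
    """Number of modulo steps until the result is 0."""
    count = 0
    while N != 0:
        N = M % N
        count += 1
    return count

def lcm_upto(N):
    return reduce(lambda a, b: a * b // gcd(a, b), range(1, N + 1), 1)

for N in range(2, 10):
    f = lcm_upto(N) - 1
    assert iterations(f, N) == N                           # f(N) needs N steps
    assert all(iterations(M, N) < N for M in range(1, f))  # no smaller M does
print("checked N = 2..9")
```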
I am trying to fit a zero-inflated Poisson GAM to my count data, and I want a log link. `ziP()` from the `mgcv` package does not support the log link. What can I do?
If you mean you want a log link for the Poisson part then the model is actually parameterised in terms of $\log(\mu\_i)$ where $\mu\_i = E(y\_i)$. In other words the link is implied, but you need to take care of any backtransformation; predictions and fitted values will be on the log scale, even if you use `predict(..., type = 'response')` (double check I have this right, as it's a while since I used this and I'm going off the help page; run `predict()` with `type = 'link'` and `type = 'response'`, and note if they are the same.) The other option in *mgcv* is to use `ziplss()`, but it again is coded in terms of the log Poisson response and the logit of the probability of presence, such that both links are `'identity'`.