This question is inspired by a comment Jukka Suomela made on [another question](https://cstheory.stackexchange.com/q/9310/30). > > What are examples of infinitely large but locally finite computation problems (and algorithms)? > > > In other words, what are examples of computations that halt in finite time, in which each Turing Machine reads and processes only finite data, but altogether the computation solves an infinite-size problem if there are infinitely-many Turing machines networked together?
Just to give some ideas of what is possible (but somewhat non-trivial), here is one example: a distributed algorithm that finds a **maximal edge packing** on a bounded-degree graph.

Problem definition
------------------

Given a simple undirected graph $G = (V,E)$, an *edge packing* (or fractional matching) associates a weight $w(e)$ with each edge $e \in E$ such that for each node $v \in V$, the total weight of edges incident to $v$ is at most $1$. A node is *saturated* if the total weight of incident edges is equal to $1$. An edge packing is *maximal* if all edges have at least one saturated endpoint (i.e., none of the weights can be greedily extended).

Observe that a maximal matching $M \subseteq E$ defines a maximal edge packing (set $w(e) = 1$ iff $e \in M$); hence it is easy to solve in a classical centralised setting (assuming $G$ is finite).

Edge packings actually have some applications, at least if one defines an application in the usual TCS sense: the set of saturated nodes forms a $2$-approximation of a minimum vertex cover (of course this only makes sense when $G$ is finite).

Model of computation
--------------------

We will assume that there is a global constant $\Delta$ such that the degree of any $v \in V$ is at most $\Delta$.

To keep this as close as possible to the spirit of the original question, let us define the model of computation as follows. We assume that each node $v \in V$ is a Turing machine, and an edge $\{u,v\} \in E$ is a communication channel between $u$ and $v$. The input tape of $v$ encodes the degree $\deg(v)$ of $v$. For each $v \in V$, the edges incident to $v$ are labelled (in an arbitrary order) with integers $1,2,\dotsc,\deg(v)$; these are called *local edge labels* (the label of $\{u,v\} \in E$ can be different for $u$ and $v$). The machine has instructions with which it can send and receive messages through each of these edges; a machine can address its neighbours by using the local edge labels.

We require that the machines compute a valid edge packing $w$ for $G$. More precisely, each $v \in V$ has to print on its output tape an encoding of $w(e)$ for each edge $e$ incident to $v$, ordered by the local edge labels, and then halt.

We say that a distributed algorithm $A$ finds a maximal edge packing in time $T$ if the following holds for any graph $G$ of maximum degree $\Delta$, and for any local edge labelling of $G$: if we replace each node of $G$ with an identical copy of the Turing machine $A$ and start the machines, then after $T$ steps all machines have printed a valid (globally consistent) solution and halted.

Infinities
----------

Now all of the above makes perfect sense even if the set of nodes $V$ is countably infinite. The problem formulation and the model of computation do not have any references to $|V|$, directly or indirectly. The length of the input for each Turing machine is bounded by a constant.

What is known
-------------

The problem can be solved in finite time even if $G$ is infinite. The problem is non-trivial in the sense that some communication is necessary. Moreover, the running time depends on $\Delta$. However, for any fixed $\Delta$, the problem can be solved in constant time regardless of the size of $G$; in particular, the problem is solvable on infinitely large graphs.

I have not checked what the best known running time is in the model defined above (which is *not* the usual model used in the field).
Nevertheless, a running time that is polynomial in $\Delta$ should be fairly easy to achieve, and I think a running time that is sublinear in $\Delta$ is impossible.
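To make the problem definition above concrete, here is a minimal Python sketch (my own illustration, not part of the distributed model) that checks whether a given weight assignment on a finite graph is a maximal edge packing; the function name and the graph representation are assumptions chosen for the example.

```
# Check the two conditions from the problem definition on a finite graph:
# (1) edge packing: total weight at each node is at most 1;
# (2) maximality: every edge has at least one saturated endpoint.
def is_maximal_edge_packing(edges, w, tol=1e-9):
    """edges: iterable of frozensets {u, v}; w: dict mapping edge -> weight."""
    load = {}
    for e in edges:
        for v in e:
            load[v] = load.get(v, 0.0) + w[e]
    if any(load[v] > 1 + tol for v in load):
        return False                      # not an edge packing
    saturated = {v for v in load if abs(load[v] - 1) <= tol}
    return all(any(v in saturated for v in e) for e in edges)

# A maximal matching on the path 1-2-3-4 gives a maximal edge packing.
E = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
w = {E[0]: 1.0, E[1]: 0.0, E[2]: 1.0}
print(is_maximal_edge_packing(E, w))      # True
```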
It seems that lots of people (including me) like to do exploratory data analysis in Excel. Some limitations, such as the number of rows allowed in a spreadsheet, are a pain but in most cases don't make it impossible to use Excel to play around with data. [A paper by McCullough and Heiser](http://www.pages.drexel.edu/~bdm25/excel2007.pdf), however, practically screams that you will get your results all wrong -- and probably burn in hell as well -- if you try to use Excel. Is this paper correct or is it biased? The authors do sound like they hate Microsoft.
Use the right tool for the right job and exploit the strengths of the tools you are familiar with. In Excel's case there are some salient issues:

* Please don't use a spreadsheet to manage data, even if your data will fit into one. You're just asking for trouble, terrible trouble. There is virtually no protection against typographical errors, wholesale mixing up of data, truncating data values, etc., etc.
* Many of the statistical functions indeed are broken. The t distribution is one of them.
* The default graphics are awful.
* It is missing some fundamental statistical graphics, especially boxplots and histograms.
* The random number generator is a joke (but despite that is still effective for educational purposes).
* Avoid the high-level functions and most of the add-ins; they're c\*\*p. But this is just a general principle of safe computing: if you're not sure what a function is doing, don't use it. Stick to the low-level ones (which include arithmetic functions, ranking, exp, ln, trig functions, and--within limits--the normal distribution functions). *Never* use an add-in that produces a graphic: it's going to be terrible. (NB: it's dead easy to create your own probability plots from scratch. They'll be correct and highly customizable.)

In its favor, though, are the following:

* Its basic numerical calculations are as accurate as double precision floats can be. They include some useful ones, such as log gamma.
* It's quite easy to wrap a control around input boxes in a spreadsheet, making it possible to create dynamic simulations easily.
* If you need to share a calculation with non-statistical people, most will have some comfort with a spreadsheet and none at all with statistical software, no matter how cheap it may be.
* It's easy to write effective numerical macros, including porting old Fortran code, which is quite close to VBA. Moreover, the execution of VBA is reasonably fast. (For example, I have code that accurately computes non-central t distributions from scratch and three different implementations of Fast Fourier Transforms.)
* It supports some effective simulation and Monte-Carlo add-ons like Crystal Ball and @Risk. (They use their own RNGs, by the way--I checked.)
* The immediacy of interacting directly with (a small set of) data is unparalleled: it's better than any stats package, Mathematica, etc. When used as a giant calculator with loads of storage, a spreadsheet really comes into its own.
* *Good* EDA, using robust and resistant methods, is not easy, but after you have done it once, you can set it up again quickly. With Excel you can effectively reproduce *all* the calculations (although only some of the plots) in Tukey's EDA book, including median polish of n-way tables (although it's a bit cumbersome).

In direct answer to the original question, there is a bias in that paper: it focuses on the material that Excel is weakest at and that a competent statistician is least likely to use. That's not a criticism of the paper, though, because warnings like this need to be broadcast.
I am very new to machine learning and in my first project have stumbled across a lot of issues which I really want to get through. I'm using logistic regression with R's `glmnet` package and alpha = 0 for ridge regression. I'm actually using ridge regression since lasso deleted all my variables and gave a very low area under the curve (0.52), but with ridge there isn't much of a difference (0.61).

My dependent variable/output is the probability of a click, based on whether there is a click or not in historical data. The independent variables are state, city, device, user age, user gender, IP carrier, keyword, mobile manufacturer, ad template, browser version, browser family, OS version and OS family. Of these, for prediction I'm using state, device, user age, user gender, IP carrier, browser version, browser family, OS version and OS family; I am not using keyword or template since we want to reject a user request before deep diving in our system and selecting a keyword or template. I am not using city because there are too many of them, or mobile manufacturer because there are too few. **Is that okay or should I be using the rejected variables?**

To start, I create a sparse matrix from my variables which are mapped against the column of clicks that have yes or no values. After training the model, I save the coefficients and intercept. These are used for new incoming requests via the formula for logistic regression:

$$p = \frac{1}{1 + e^{-(a + \sum\_{i} k\_i x\_i)}}$$

where $a$ is the intercept, $k\_i$ is the $i$th coefficient and $x\_i$ is the $i$th variable value. **Is my approach correct so far?**

A simple GLM in R (that is, without regularized regression, right?) gave me 0.56 AUC. With regularization I get 0.61, but there is no distinct threshold where we could say that above 0.xx it's mostly ones and below it most zeros are covered; actually, the maximum probability that a click didn't happen is almost always greater than the maximum probability that a click happened. **So basically what should I do?**

I have read that stochastic gradient descent is an effective technique for logit models, so how do I implement stochastic gradient descent in R? If it's not straightforward, is there a way to implement this system in Python? Is SGD implemented after generating a regularized logistic regression model or is it a different process altogether? Also there is an algorithm called follow the regularized leader (FTRL) that is used in click-through rate prediction. Is there a sample code and use of FTRL that I could go through?
Stochastic gradient descent is a method of setting the parameters of the regressor; since the objective for logistic regression is convex (it has only one optimum), local optima are not an issue, and SGD is generally only needed to improve convergence speed with masses of training data. What your numbers suggest to me is that your features are not adequate to separate the classes. Consider adding extra features if you can think of any that are useful. You might also consider interactions and quadratic features in your original feature space.
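To illustrate the last suggestion, here is a hedged Python sketch (scikit-learn based; the data is synthetic and all names are placeholders) that adds interaction and quadratic features before fitting an L2-regularized logistic regression. If SGD training is really wanted, scikit-learn's `SGDClassifier` could be swapped in for the estimator.

```
# Sketch: expand the feature space with interactions and quadratic terms,
# then fit a ridge-penalized logistic regression and check the AUC.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # stand-in for your encoded features
y = (X[:, 0] * X[:, 1] + X[:, 2] ** 2 + rng.normal(size=1000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

poly = PolynomialFeatures(degree=2, interaction_only=False, include_bias=False)
X_tr2, X_te2 = poly.fit_transform(X_tr), poly.transform(X_te)

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X_tr2, y_tr)
print("AUC with quadratic/interaction features:",
      roc_auc_score(y_te, clf.predict_proba(X_te2)[:, 1]))
```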
Here are two variations on the definition of NP. They (almost certainly) define distinct complexity classes, but my question is: are there natural examples of problems that fit into these classes? (My threshold for what counts as natural here is a bit lower than usual.)

Class 1 (a superclass of NP): Problems with polynomial-size witnesses that take superpolynomial but subexponential time to verify. For concreteness, let's say time $n^{O(\log n)}$. This is equivalent to the class of languages recognized by nondeterministic machines that take time $n^{O(\log n)}$ but can only make poly(n) nondeterministic guesses.

> Are there natural problems in class 1 that are not known/thought to be either in $NP$ or in $DTIME(n^{O(\log n)})$?

Class 1 is a class of languages, as usual. Class 2, on the other hand, is a class of relational problems:

Class 2: A binary relation R = {(x,y)} is in this class if

1. There is a polynomial p such that (x,y) in R implies |y| is at most p(|x|).
2. There is a poly(|x|)-time algorithm A such that, for all inputs x, if there is a y such that (x,y) is in R, then (x,A(x)) is in R, and if there is no such y, then A(x) rejects.
3. For any poly(|x|)-time algorithm B, there are infinitely many pairs (x,w) such that B(x,w) differs from R(x,w) (here I am using R to denote its own characteristic function).

In other words, for all instances, some witness is easy to find if there is one. And yet not all witnesses are easily verifiable. (Note that if R is in class 2, then the projection of R onto its first factor is simply in P. This is what I meant by saying that class 2 is a class of relational problems.)

> Are there natural relational problems in class 2?
For Class 2, one somewhat silly example is R(p, a) = {p is an integer polynomial, a is in the range of p, and |a| = O(poly(|p|))}. R is in Class 2 but undecidable.
I'll phrase my question using an intuitive and rather extreme example: **Is the expected compression ratio (using zip compression) of a children's book higher than that of a novel written for adults?** I read somewhere that specifically the compression ratio for zip compression can be considered an indicator for the information (as interpreted by a human being) contained in a text. Can't find this article anymore though. I am not sure how to attack this question. Of course no compression algorithm can grasp the meaning of verbal content. So what would a zip compression ratio reflect when applied to a text? Is it just symbol patterns - like word repetitions will lead to higher ratio - so basically it would just reflect the vocabulary? **Update:** Another way to put my question would be whether there is a correlation which goes beyond repetition of words / restricted vocabulary. --- Tangentially related: [Relation of Word Order and Compression Ratio and Degree of Structure](http://www.joyofdata.de/blog/relation-of-word-order-and-compression-ratio/)
I'd say this sounds highly likely. Suppose you take a fairly large sample of children's literature and a similarly-sized (in characters) sample of adult literature. It seems entirely reasonable to suspect that there is a greater variety of words in the adult literature, and that these words may be longer or may rely on more unusual diphthongs than words used in children's literature. This may further imply that children's literature has more whitespace and possibly more punctuation than adult literature. Taken together, this seems to point towards children's literature being much more homogenous at the scale of 5-10 characters when compared to adult literature, so compression techniques that can take advantage of homogeneity at the scale of up to at least a dozen or so characters should be able to compress children's text more efficiently than they could adult literature. Of course, this makes some assumptions about what is considered children's literature and what is considered adult literature. What do you consider "Gulliver's travels", for instance? My discussion above assumes that we're considering books that are clearly for young children and books that are clearly for adults; compare "Goodnight, Moon" to "1984", for instance.
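If you want to check this empirically, below is a quick sketch of the measurement the question has in mind: the DEFLATE compression ratio (the algorithm behind zip), via Python's `zlib`. The sample strings are placeholders and should be replaced by comparable-length excerpts of real children's and adult text.

```
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size divided by raw size; smaller means more compressible."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

# Placeholder texts: a repetitive, small-vocabulary sample vs. a more varied one.
childlike = "the cat sat on the mat and the dog sat on the log. " * 50
adultlike = ("The committee's deliberations, protracted and occasionally "
             "acrimonious, yielded no consensus on the appropriate remedy. ") * 25
print(compression_ratio(childlike))
print(compression_ratio(adultlike))
```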
I see examples of LSTM sequence to sequence generation models which use start and end tokens for each sequence. I would like to understand: when making predictions with this model on an arbitrary sequence, is it required to include start and end tokens in it?
If I understand you correctly, you would loop over a string character by character and compare whether there is a match at the same position in some other string. Drawing [from this post](https://stackoverflow.com/a/4588633/9524424), you find that:

```
import distance
distance.levenshtein("0123456789", "1234567890")
distance.hamming("0123456789", "1234567890")
```

The **[Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance) is 2**, while the **[Hamming distance](https://en.wikipedia.org/wiki/Hamming_distance) is 10** (and so would be your loop approach, if I understand you correctly). So in case Hamming is okay for your task, you can use the loop approach. Please see the links for more details regarding the distances.
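For completeness, here is a dependency-free sketch of the loop approach, which for equal-length strings is exactly the Hamming distance; the function name is my own.

```
def hamming(a: str, b: str) -> int:
    """Count positions where two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("Hamming distance is only defined for equal-length strings")
    return sum(c1 != c2 for c1, c2 in zip(a, b))

print(hamming("0123456789", "1234567890"))  # 10
```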
It is well-known that in general, the order of universal and existential quantifiers cannot be reversed. In other words, for a general logical formula $\phi(\cdot,\cdot)$, $(\forall x)(\exists y) \phi(x,y) \quad \not\Leftrightarrow \quad (\exists y)(\forall x) \phi(x,y)$ On the other hand, we know the right-hand side is more restrictive than the left-hand side; that is, $(\exists y)(\forall x) \phi(x,y) \Rightarrow (\forall x)(\exists y) \phi(x,y)$. This question focuses on techniques to derive $(\forall x)(\exists y) \phi(x,y) \Rightarrow (\exists y)(\forall x) \phi(x,y)$, whenever it holds for $\phi(\cdot,\cdot)$. **Diagonalization** is one such technique. I first see this use of diagonalization in the paper [Relativizations of the $\mathcal{P} \overset{?}{=} \mathcal{NP}$ Question](http://dx.doi.org/10.1137/0204037) (see also the [short note by Katz](http://www.cs.umd.edu/~jkatz/complexity/relativization.pdf)). In that paper, the authors first prove that: > > For any deterministic, polynomial-time oracle machine M, there exists a language B such that $L\_B \ne L(M^B)$. > > > They then reverse the order of the quantifiers (using **diagonalization**), to prove that: > > There exists a language B such that for all deterministic, poly-time M we have $L\_B \ne L(M^B)$. > > > This technique is used in other papers, such as [[CGH]](http://eprint.iacr.org/1998/011.pdf) and [[AH]](http://linkinghub.elsevier.com/retrieve/pii/089054019190024V). I found another technique in the proof of Theorem 6.3 of [[IR]](http://portal.acm.org/citation.cfm?id=73007.73012). It uses a combination of **measure theory** and **pigeon-hole principle** to reverse the order of quantifiers. > > I want to know what other techniques are used in computer science, to reverse the order of universal and existential quantifiers? > > >
To me, the "canonical" proof of the Karp-Lipton theorem (that $NP \subseteq P/poly \Longrightarrow \Pi\_2 P = \Sigma\_2 P$) has this flavor. But here it is not the actual theorem statement in which quantifiers get reversed, but rather the "quantifiers" get reversed within the model of alternating computation, using the assumption that $NP$ has small circuits. You want to simulate a computation of the form $(\forall y)(\exists z)R(x,y,z)$ where $R$ is a polynomial-time predicate. You can do this by guessing a small circuit $C$ for (say) satisfiability, modifying $C$ so that it checks itself and produces a satisfying assignment when its input is satisfiable. Then for all $y$, create a SAT instance $S(x,y)$ that's equivalent to $(\exists z)R(x,y,z)$ and solve it. So you've produced an equivalent computation of the form $(\exists C)(\forall y)[S(x,y)$ is satisfiable according to $C]$.
The problem is informally defined as follows: There is a pipe with holes in it, located at discrete positions. We have tubes that are used to cover the holes. Tubes come in a fixed radius. A tube of radius 1 placed at position 3, say, covers the holes at positions 2, 3, and 4. The puzzle is about providing positions for placing the tubes so that the smallest possible number of tubes is used. Note that tubes are allowed to overlap each other. For example, the tube radius is 1 and the holes are located at {2, 4, 6, 10}. Tubes need to be placed at positions 3 (covering 2 and 4), 6 and 10. Obviously this seems to be a covering problem, yet I wasn't able to find a specific, studied problem equivalent to this one. Any ideas on how to efficiently approach it?

**EDIT**: please consider an additional condition, which requires a tube's center to be placed exactly on top of a hole (for whatever strange reason). This should change the solution considerably.
First an observation: This problem could be stated with different sets of rules. If you try to find a solution that minimises the cost, and using a tube has an associated positive cost independent of the position of the tube and the holes covered by it, then you can always find an optimal solution where each tube is positioned such that the leftmost hole it covers is at the left edge of the tube, and is not covered by any other tube. Why is that? Because given any optimal solution where this is not the case, we can move one tube to the right until the leftmost hole it covers is not covered by any other tube (if that is not possible, then the tube would be redundant), and then further to the right until the leftmost hole it covers is at its left edge. This shows immediately that Vince's algorithm is optimal for the case that all tubes have the same radius and the same cost, since it is always necessary to cover the leftmost hole that isn't covered yet, *and* we can cover it as described by Vince's algorithm, and still get an optimal solution. With different rules, the situation becomes more difficult. For example, we might have tubes of different sizes and cost. In that case, we can use dynamic programming - the best way to cover n holes is always to cover the first m holes at optimal cost, then add a tube covering the next n-m holes. If there are limited numbers of some tubes, then we still use dynamic programming, but we need to keep track of the optimal solutions with different usage of limited resources. For example, if we had unlimited numbers of tubes of size 1, and at most three tubes of size 5, we would separately calculate how to best cover the first n holes, using 0/1/2/3 tubes of size 5. PS. Another condition was added: The center of a tube must be exactly on top of a hole. So if I have holes at 3 and 5 but not at 4, I cannot cover them both with a tube of radius 1 but only with a tube of radius 2, centered at either 3 or 5. An optimal solution with all tubes the same size will still cover the leftmost uncovered hole with a tube positioned as far to the right as possible. So if the leftmost uncovered hole is at location h, and the tube has radius r, instead of always placing the center of the tube at (h + r) which may not be possible, you place it at the first of the locations (h+r, h+r-1, ..., h+1, h) which has a hole.
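Here is a hedged Python sketch of the greedy rule described above for equal-size tubes (always cover the leftmost uncovered hole, pushing the tube as far right as the rules allow), including the added centre-on-a-hole condition; all names are illustrative only.

```
def cover_holes(holes, r, center_on_hole=False):
    """Return tube centres covering all hole positions with tubes of radius r."""
    holes = sorted(set(holes))
    hole_set = set(holes)
    centers, i = [], 0
    while i < len(holes):
        h = holes[i]                       # leftmost uncovered hole
        if center_on_hole:
            # Added condition: the centre must sit on a hole; pick the
            # rightmost hole in [h, h + r] so the tube still covers h.
            c = max(p for p in hole_set if h <= p <= h + r)
        else:
            c = h + r                      # h sits at the tube's left edge
        centers.append(c)
        while i < len(holes) and holes[i] <= c + r:
            i += 1                         # skip all holes this tube covers
    return centers

print(cover_holes([2, 4, 6, 10], r=1))                       # [3, 7, 11]
print(cover_holes([2, 4, 6, 10], r=1, center_on_hole=True))  # [2, 4, 6, 10]
```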
Consider the following two arguments:

> "*For every non-deterministic TM M1 there exists an **equivalent** deterministic machine M2 recognizing the same language.*"
>
> "***Equivalence*** of two Turing Machines is undecidable"

These two arguments seem a bit contradictory to me. What should I conclude from them?
These arguments are not really related, on several levels. The first is that *existence* and *computability* are not the same thing. That is, even if there exists some object (e.g. a TM), it does not mean that *finding* (or computing) such an object can be done using a TM. However, there is a more basic difference between the two statements. The first says that for every TM $M\_1$, there exists some equivalent machine $M\_2$. The second statement is that deciding, given $M\_1$ and $M\_2$, whether these two specific machines are equivalent, is undecidable. Just to emphasize the difference, consider the following problem: given a TM $M\_1$, is there some TM $M\_2$, that is different from $M\_1$, which is equivalent to $M\_1$? This problem is trivial - the answer is always "yes" (e.g. just add a redundant state to $M\_1$).
In his book *Computational Complexity*, Papadimitriou defines **FNP** as follows: > > Suppose that $L$ is a language in **NP**. By Proposition 9.1, there is a polynomial-time decidable, polynomially balanced relation $R\_L$ such that for all strings $x$: There is a string $y$ with $R\_L(x,y)$ if and only if $x\in L$. The function problem associated with $L$, denoted $FL$, is the following computational problem: > > > Given $x$, find a string $y$ such that $R\_L(x,y)$ if such a string exists; if no such string exists, return "no". > > > The class of all function problems associated as above with language in **NP** is called **FNP**. **FP** is the subclass resulting if we only consider function problems in **FNP** that can be solved in polynomial time. > > > (...) > > > (...), we call a problem $R$ in **FNP** *total* if for every string $x$ there is at least one $y$ such that $R(x,y)$. The subclass of **FNP** containing all total function problems is denoted **TFNP**. > > > In a venn diagram in the chapter overview, Papadimitriou implies that **FP** $\subseteq$ **TFNP** $\subseteq$ **FNP**. I have a hard time understanding why exactly it holds that **FP** $\subseteq$ **TFNP** since problems in **FP** do not have to be total per se. To gain a better understanding, I've been plowing through literature to find a waterproof definition of **FP**,**FNP** and sorts, without success. In my very (humble) opinion, I think there is little (correct!) didactic material of these topics. For decision problems, the classes are sets of languages (i.e. sets of strings). What exactly are the classes for function problems? Are they sets of relations, languages, ... ? What is a solid definition?
Emil Jerabek's comment is a nice summary, but I wanted to point out that there are other classes with clearer definitions that capture more-or-less the same concept, and to clarify the relation between all these things. [Warning: while I believe I've gotten the definitions right, some of the things below reflect my personal preferences - I've tried to be clear about where that was.]

In the deterministic world, a function class is just a collection of functions (in the usual, mathematical sense of the word "function", that is, a map $\Sigma^\* \to \Sigma^\*$). Occasionally we want to allow "partial functions," whose output is "undefined" for certain inputs. (Equivalently, functions that are defined on a subset of $\Sigma^\*$ rather than all of it.) Unfortunately, there are two different definitions for $\mathsf{FP}$ floating around, and as far as I can tell they are not equivalent (though they are "morally" equivalent).

* $\mathsf{FP}$ (definition 1) is the class of functions that can be computed in polynomial time. Whenever you see $\mathsf{FP}$ and it's not in a context where people are talking about $\mathsf{FNP}, \mathsf{TFNP}$, this is the definition I assume.

In the nondeterministic world things get a little funny. There, it is convenient to allow "partial, multi-valued functions." It would be natural to also call such a thing a *binary relation*, that is, a subset of $\Sigma^\* \times \Sigma^\*$. But from the complexity point of view it is often philosophically and mentally useful to think of these things as "nondeterministic functions." I think many of these definitions are clarified by the following classes (whose definitions are completely standardized, if not very well-known):

* $\mathsf{NPMV}$: The class of "partial, multi-valued functions" computable by a nondeterministic machine in polynomial time. What this means is there is a poly-time nondeterministic machine, and on input $x$, on each nondeterministic branch it may choose to accept and make some output, or reject and make no output. The "multi-valued" output on input $x$ is then the set of all outputs on all nondeterministic branches when given $x$ as input. Note that this set can be empty, so as a "multi-valued function" this may only be partial. If we think of it in terms of binary relations, this corresponds to the relation $\{(x,y) : y \text{ is output by some branch of the computation on input } x\}$.
* $\mathsf{NPMV}\_t$: Total "functions" in $\mathsf{NPMV}$, that is, on every input $x$, at least one branch accepts (and therefore makes an output, by definition).
* $\mathsf{NPSV}$: Single-valued (potentially partial) functions in $\mathsf{NPMV}$. There is some flexibility here, however, in that multiple branches may accept, but if any branch accepts, then all accepting branches must be guaranteed to make the *same* output (so that it really is single-valued). However, it is still possible that no branch accepts, so the function is only a "partial function" (i.e. not defined on all of $\Sigma^\*$).
* $\mathsf{NPSV}\_t$: Single-valued total functions in $\mathsf{NPSV}$. These really are functions, in the usual sense of the word, $\Sigma^\* \to \Sigma^\*$. It is a not-too-hard exercise to see that $\mathsf{NPSV}\_t = \mathsf{FP}^{\mathsf{NP} \cap \mathsf{coNP}}$ (using Def 1 for FP above).
When we talk about potentially multi-valued functions, talking about containment of complexity classes isn't really useful any more: $\mathsf{NPMV} \not\subseteq \mathsf{NPSV}$ unconditionally, simply because $\mathsf{NPSV}$ doesn't contain any multi-valued "functions", but $\mathsf{NPMV}$ does. Instead, we talk about "c-containment", denoted $\subseteq\_c$. A (potentially partial, multi-valued) function $f$ refines a (potentially partial, multi-valued) function $g$ if: (1) for every input $x$ for which $g$ makes some output, so does $f$, and (2) the outputs of $f$ are always a subset of the outputs of $g$. The proper question is then whether every $\mathsf{NPMV}$ "function" has an $\mathsf{NPSV}$ refinement. If so, we write $\mathsf{NPMV} \subseteq\_c \mathsf{NPSV}$.

* $\mathsf{PF}$ (a little less standard) is the class of (potentially partial) functions computable in poly-time. That is, a function $f\colon D \to \Sigma^\*$ ($D \subseteq \Sigma^\*$) is in $\mathsf{PF}$ if there is a poly-time deterministic machine such that, on inputs $x \in D$ the machine outputs $f(x)$, and on inputs $x \notin D$ the machine makes no output (/rejects/says "no"/however you want to phrase it).

---

* $\mathsf{FNP}$ is a class of "function problems" (rather than a class of functions). I would also call $\mathsf{FNP}$ a "relational class", but really whatever words you use to describe it you need to clarify yourself afterwards, which is why I'm not particularly partial to this definition. To any binary relation $R \subseteq \Sigma^\* \times \Sigma^\*$ there is an associated "function problem." What is a function problem? I don't have a clean mathematical definition the way I do for language/function/relation; rather, it's defined by what a valid solution is: a valid solution to the function problem associated to $R$ is any (potentially partial) function $f$ such that if $(\exists y)[R(x,y)]$ then $f$ outputs any such $y$, and otherwise $f$ makes no output. $\mathsf{FNP}$ is the class of function problems associated to relations $R$ such that $R \in \mathsf{P}$ (when considered as a language of pairs) and $R$ is p-balanced. So $\mathsf{FNP}$ is not a class of functions, nor a class of languages, but a class of "function problems," where "problem" here is defined roughly in terms of what it means to solve it.
* $\mathsf{TFNP}$ is then the class of function problems in $\mathsf{FNP}$ - defined by a relation $R$ as above - such that $R$ is total, in the sense that for every $x$ there exists a $y$ such that $R(x,y)$.

In order to not have to write things like "If every $\mathsf{FNP}$ (resp., $\mathsf{TFNP}$) function problem has a solution in $\mathsf{PF}$ (resp., $\mathsf{FP}$ according to the above definition), then..." in this context one uses Definition 2 of $\mathsf{FP}$, which is:

* $\mathsf{FP}$ (definition 2) is the class of function problems in $\mathsf{FNP}$ which have a poly-time solution. One can assume that the solution (=function) here is total by picking a special string $y\_0$ that is not a valid $y$ for any $x$, and having the function output $y\_0$ when there would otherwise be no valid $y$. (If needed, we can modify the relation $R$ by prepending every $y$ with a 1, and then take $y\_0$ to be the string 0; this doesn't change the complexity of anything involved.)
---

Here's how these various definitions relate to one another: $\mathsf{FNP} \subseteq \mathsf{FP}$ (definition 2, which is what you should assume because it's in a context where it's being compared with $\mathsf{FNP}$) is equivalent to $\mathsf{NPMV} \subseteq\_c \mathsf{PF}$, and $\mathsf{TFNP} \subseteq \mathsf{FP}$ (def 2) is equivalent to $\mathsf{NPMV}\_t \subseteq\_c \mathsf{FP}$ (def 1).
I'm trying to write a framework to compare a set of labels such as (for a sample of 5 yes/no answers to a question) `[0, 1, 1, 1, 0]` to a series of features to determine correlation. For numerical non-sparse features, like "number of words" or "average word length", I know I can use a variance-covariance matrix and get a sense for whether or not "number of words" or "average word length" is an informative feature for a model to answer the question. I'd like to be able to do the same thing for term frequency (let's say using CountVectorizer in scikit-learn), but the resultant covariance matrix will be rather large and will only indicate whether or not that particular *term* is an informative feature. How do I get some kind of "collapsed" or "aggregate" measure of correlation? Is this even possible?
There are of course other choices to fill in for missing data. The median was already mentioned, and it may work better in certain cases. There may even be much better alternatives, which may be very specific to your problem. To find out whether this is the case, you must find out more about the *nature* of your missing data. When you understand in detail why data is missing, the probability of coming up with a good solution will be much higher. You might want to start your investigation of missing data by finding out whether you have *informative* or *non-informative* missings. The first category is produced by random data loss; in this case, the observations with missing values are no different from the ones with complete data. As for *informative* missing data, this one tells you something about your observation. A simple example is a customer record with a missing contract cancellation date meaning that this customer's contract has not been cancelled so far. You usually don't want to fill in informative missings with a mean or a median, but you may want to generate a separate feature from them. You may also find out that there are several kinds of missing data, being produced by different mechanisms. In this case, you might want to produce default values in different ways.
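As a concrete illustration of the last paragraphs, here is a small pandas sketch (the column names and values are made up) that treats an informative missing value as a feature of its own rather than imputing it, while still flagging imputed values elsewhere.

```
# Sketch: derive features from missing values instead of blindly imputing.
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "cancellation_date": pd.to_datetime(["2021-03-01", None, None, "2022-07-15"]),
    "monthly_fee": [29.0, 35.0, np.nan, 49.0],
})

# Informative missing: no cancellation date means the contract is still active.
df["is_active"] = df["cancellation_date"].isna().astype(int)

# (Possibly) non-informative missing: impute with the median, but keep a flag
# so a model can still see that the value was imputed.
df["fee_was_missing"] = df["monthly_fee"].isna().astype(int)
df["monthly_fee"] = df["monthly_fee"].fillna(df["monthly_fee"].median())
print(df)
```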
I'm trying to solve the recurrence $$T(n)=2T(\sqrt{n})+\log n$$ using the master theorem. Which case applies here?
The master theorem only applies to recurrences of the form $$T(n)=a\,T(n/b) + f(n)\,.$$ It says nothing about your recurrence. Our [reference question on solving recurrences](https://cs.stackexchange.com/q/2789/9550) gives details of alternative techniques.
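That said, a hedged sketch of the usual way to handle this particular recurrence is the substitution $n = 2^m$, after which the master theorem does apply to the transformed recurrence:

```
% Substitute n = 2^m (so m = \log_2 n) and set S(m) = T(2^m):
\[
  T(n) = 2\,T(\sqrt{n}) + \log n
  \quad\Longrightarrow\quad
  S(m) = 2\,S(m/2) + m .
\]
% The master theorem (a = b = 2, f(m) = m, the "n log n" case) now gives
\[
  S(m) = \Theta(m \log m)
  \quad\Longrightarrow\quad
  T(n) = \Theta(\log n \cdot \log\log n).
\]
```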
I want to learn Gibbs sampling for a Bayesian model. How can I sample the variable from the conditional distribution? ![a Bayesian model](https://i.stack.imgur.com/ca6T4.gif) In this example, arrow means dependent; for example, `Grade` depends on `Difficulty` and `Intelligence`. To use Gibbs sampling to calculate the joint distribution, first I set the `Difficulty` and `Intelligence` to (1,1). The next step is to sample `Grade` from the $\rm{P(Grade|Difficulty=1,Intelligence=1)}$, but how can I sample?
Since we are calculating the joint distribution, we'll assume that our initial sample is $ x = P(D=0,I=0,G=0,L=0,S=0) $. To calculate the next sample, we'll need to sample each variable from its conditional distribution.

1. $ P(D\mid G,I,S,L) $, from the conditional independencies in the Bayes net, simplifies to just sampling $ P(D)$. We sample and get the value $D=1$.
2. Similarly for $ I $, we sample and get the value $ I=1$.
3. Sampling from $ P(G\mid D,I,S,L) $, due to the conditional independencies encoded by the Bayes net, simplifies to $ P(G\mid D,I) $. Since we have already sampled $ D=1,I=1 $, we use those values and sample from $ P(G\mid D=1,I=1) $. In the CPD for Grade, we can choose one of the values from the last row (where $ D=1,I=1 $). We sample and get the value $ G=2 $ (the value 0.3).
4. $ P(L\mid I, G,D,S) $ simplifies to $ P(L\mid G) $. We sample from the second row of the Letter CPD, where $ G=2 $, and get $ L=1 $ (the value 0.6).
5. Similarly, sample $ P(S \mid I,L,G,D) $ by simplifying to $ P(S \mid I) $. We get $ S=1 $ (sampling from the second row of the CPD, where $ I=1 $).

And we'll have a new sample $ x': P(D=1,I=1,G=2,L=1,S=1) $.
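A minimal Python sketch of one such draw (step 3 above): sample Grade from the CPD row matching $D=1, I=1$. The CPD numbers below are placeholders I made up, apart from putting 0.3 in the $G=2$ column of the last row to mirror the example.

```
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CPD for Grade: P(G | D, I), rows indexed by (D, I),
# columns are the grade values g = 0, 1, 2 (each row sums to 1).
cpd_grade = {
    (0, 0): [0.30, 0.40, 0.30],
    (0, 1): [0.05, 0.25, 0.70],
    (1, 0): [0.50, 0.30, 0.20],
    (1, 1): [0.35, 0.35, 0.30],   # last row, used when D = 1 and I = 1
}

d, i = 1, 1                        # values sampled in steps 1 and 2
g = rng.choice([0, 1, 2], p=cpd_grade[(d, i)])
print("sampled Grade:", g)
```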
I had a question on the interaction depth parameter in gbm in R. This may be a noob question, for which I apologize, but how does the parameter, which I believe denotes the number of terminal nodes in a tree, basically indicate X-way interaction among the predictors? Just trying to understand how that works. Additionally, I get pretty different models if I have a dataset with say two different factor variables versus the same dataset except those two factor variables are combined into a single factor (e.g. X levels in factor 1, Y levels in factor 2, combined variable has X \* Y factors). The latter is significantly more predictive than the former. I had thought increasing interaction depth would pick this relationship up.
Both of the previous answers are wrong. The gbm package uses the `interaction.depth` parameter as the number of splits it has to perform on a tree (starting from a single node). As each split increases the total number of nodes by 3 and the number of terminal nodes by 2 (node $\to$ {left node, right node, NA node}), after $N$ splits the total number of nodes in the tree will be $3\*N+1$ and the number of terminal nodes $2\*N+1$. This can be verified by having a look at the output of the `pretty.gbm.tree` function. The behaviour is rather misleading, as the user indeed expects the depth to be the depth of the resulting tree. It is not.
Here are some ways to analyze the running time of an algorithm:

1) Worst-case analysis: Running time on the worst instance.
2) Average-case analysis: Expected running time on a random instance.
3) Amortized analysis: Average running time on the worst sequence of instances.
4) Smoothed analysis: Expected running time on the worst randomly perturbed instance.
5) Generic-case analysis: Running time on the worst of all but a small subset of instances.

My question: Is this a complete list?
I have two more for the list, which are somewhat similar.

1. Parameterized analysis expresses the running time as a function of two values instead of one, using some additional information about the input measured in what's called the "parameter". As an example take the Independent Set problem. The best running time for the general case is of the form $O(c^n n^{O(1)})$ for some constant $1 < c < 2$. If we now take as parameter the treewidth of the graph and represent it by the parameter $k$, a maximum independent set can be computed in $O(2^k n^{O(1)})$ time. Hence if the treewidth $k$ is small compared to the total size of the graph $n$, then this parameterized algorithm is much faster.
2. Output-sensitive analysis is a technique which is applied to construction problems, and also takes the size of the output into account in the run-time expression. A good example is the problem of determining the intersection points of a set of line segments in the plane. If I'm not mistaken you can compute the intersections in $O(n \log n + k)$ time, where $k$ is the number of intersections.
I have a dataset which I split into an 80% training and 20% validation set (38140 images for training, 9520 for validation). The model I train is a deep (~45-layer) convolutional neural network. I got the results below in the first epochs of training:

```
Epoch 1: train loss: 1041.52 - validation loss: 1045.89
Epoch 2: train loss: 750.78 - validation loss: 749.95
Epoch 3: train loss: 425.88 - validation loss: 423.35
Epoch 4: train loss: 320.29 - validation loss: 319.35
Epoch 5: train loss: 305.41 - validation loss: 305.07
```

As can be seen, after the first epoch the validation loss is slightly lower than the training loss. Is this something I should worry about, or is it an indicator of good convergence and generalization?
In your case the difference is tiny (< 1%), so I am quite sure that this is not a problem. The training set may contain more difficult images than the validation set, therefore giving a higher loss. I would interpret this example as having good generalization without overfitting, plus a little random variation between the training and validation sets. For more possible reasons, you can check [this](https://ai.stackexchange.com/a/4413/11515) excellent answer.
It was shown in the paper "Integer Programming with a Fixed Number of Variables" that integer programs with a constant number of constraints (or variables) are polynomially solvable. Does this hold for 0-1 programming?
I'm assuming that by "0-1 programming with a constant number of constraints" you mean the following problem: Maximize some linear function of (x\_1, x\_2, ..., x\_n) subject to the constraints that each x\_i is in {0,1} and a constant number of additional linear constraints. This problem is NP-complete even with 1 additional constraint since 0-1 knapsack can be written in this form.
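For concreteness, the reduction mentioned above can be written out: the standard 0-1 knapsack instance (values $v\_i$, weights $w\_i$, capacity $W$) is a 0-1 program with a single linear constraint.

```
\[
  \max \sum_{i=1}^{n} v_i x_i
  \quad\text{subject to}\quad
  \sum_{i=1}^{n} w_i x_i \le W,
  \qquad x_i \in \{0,1\} \text{ for all } i .
\]
```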
I have a dataset with 9 continuous independent variables. I'm trying to select amongst these variables to fit a model to a single percentage (dependent) variable, `Score`. Unfortunately, I know there will be serious collinearity between several of the variables. I've tried using the `stepAIC()` function in R for variable selection, but that method, oddly, seems sensitive to the order in which the variables are listed in the equation... Here's my R code (because it's percentage data, I use a logit transformation for Score):

```
library(MASS)
library(car)
data.tst = read.table("data.txt", header=T)
data.lm = lm(logit(Score) ~ Var1 + Var2 + Var3 + Var4 + Var5 + Var6 + Var7 + Var8 + Var9, data = data.tst)
step = stepAIC(data.lm, direction="both")
summary(step)
```

For some reason, I found that the variables listed at the beginning of the equation end up being selected by the `stepAIC()` function, and the outcome can be manipulated by listing, e.g., `Var9` first (following the tilde). What is a more effective (and less controversial) way of fitting a model here? I'm not actually dead-set on using linear regression: the only thing I want is to be able to understand which of the 9 variables is truly driving the variation in the `Score` variable. Preferably, this would be some method that takes the strong potential for collinearity in these 9 variables into account.
First off, a very good resource for this problem is T. Keith, *Multiple Regression and Beyond*. There is a lot of material in the book about path modeling and variable selection, and I think you will find exhaustive answers to your questions there. One way to address multicollinearity is to center the predictors, that is, subtract the mean of a series from each of its values. Ridge regression can also be used when data is highly collinear. Finally, sequential regression can help in understanding cause-effect relationships between the predictors, in conjunction with analyzing the time sequence of the predictor events. Do all 9 variables show collinearity? For diagnosis you can use the variance inflation factor (Cohen, 2003). A VIF value >= 10 indicates high collinearity and inflated standard errors. I understand you are more interested in the cause-effect relationship between predictors and outcomes. If not, multicollinearity is not considered a serious problem for prediction, as you can confirm by checking the MAE of out-of-sample data against models built by adding your predictors one at a time. If your predictors have marginal prediction power, you will find that the MAE decreases even in the presence of model multicollinearity.
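A small sketch of the VIF diagnostic mentioned above, using `statsmodels`; the file name `data.txt` and the `Var1`…`Var9` column names follow the question's setup and are otherwise assumptions.

```
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

data = pd.read_csv("data.txt", sep=r"\s+")
X = sm.add_constant(data[[f"Var{i}" for i in range(1, 10)]])

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.drop("const"))   # VIF >= 10 suggests problematic collinearity
```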
I have a lot of data and I want to do something which seems very simple. In this large set of data, I am interested in how much a specific element clumps together. Let's say my data is an ordered set like this: {A,C,B,D,A,Z,T,C...}. Let's say I want to know whether the A's tend to be found right next to each other, as opposed to being randomly (or more evenly) distributed throughout the set. This is the property I am calling "clumpiness". Now, is there some simple measurement of data "clumpiness"? That is, some statistic that will tell me how far from randomly distributed the As are? And if there isn't a simple way to do this, what would the hard way be, roughly? Any pointers greatly appreciated!
Exactly what you're describing has been codified into a procedure called the Runs Test. It's not complicated to master. You can find it in many sources on statistical tests, e.g., wikipedia or [the Nat'l Instit. of Standards and Technology](http://www.itl.nist.gov/div898/handbook/eda/section3/eda35d.htm) or [YouTube](http://www.youtube.com/watch?v=YWlod6Jdu-k).
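If you prefer to compute it yourself, below is a self-contained sketch of the (Wald-Wolfowitz) runs test applied to an "is it an A?" indicator; the normal approximation assumes both counts are reasonably large, and the example sequence is just an illustration.

```
import math

def runs_test(flags):
    """flags: sequence of 0/1. Returns (number of runs, z statistic)."""
    n1 = sum(flags)
    n2 = len(flags) - n1
    runs = 1 + sum(1 for a, b in zip(flags, flags[1:]) if a != b)
    n = n1 + n2
    mu = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n**2 * (n - 1))
    return runs, (runs - mu) / math.sqrt(var)

seq = list("ACBDAZTCAAABAC")
flags = [1 if s == "A" else 0 for s in seq]
runs, z = runs_test(flags)
print(runs, z)   # a strongly negative z would suggest the A's clump together
```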
I am used to working with PCA, tSNE, LLEs... They all do a great job projecting the data on a plane (or on linear subspaces of $\mathbb{R}^n$). Is there any other embedding technique that projects the data on non-linear spaces? Like a sphere, or any other manifold? The aim would be to specify a space and a distance (the example will be a sphere and the geodesic distance) to project onto. I am looking for any reference or open source project! For example, calling $x\_i$ the elements of the data set and $d$ the initial distance, $d\_g$ the geodesic distance on the sphere, an "MDS-like" formulation could be: $$ min\_{y} \sum\_{i,j}||d(x\_i,x\_j)-d\_g(y\_i,y\_j)||^2 $$ Is there any other way than brute force to solve this problem?
It might be possible to solve your example problem using a procedure similar to nonclassical metric MDS (using the stress criterion). Initialize the 'projected points' to lie on a sphere (more on this later). Then, use an optimization solver to find the projected points that minimize the objective function. There are a few differences compared to ordinary MDS. 1) Geodesic distances must be computed between projected points, rather than Euclidean distances. This is easy when the manifold is a sphere. 2) The optimization must obey the constraint that the projected points lie on a sphere. Fortunately, there are existing, open source solvers for performing optimization where the parameters are constrained to lie on a particular manifold. Since the parameters here are the projected points themselves, this is exactly what's needed. [Manopt](http://www.manopt.org/) is a package for Matlab, and [Pymanopt](https://pymanopt.github.io/) is for Python. They include spheres as one of the supported manifolds, and others are available. The quality of the final result will depend on the initialization. This is also the case for ordinary, nonclassical MDS, where a good initial configuration is often obtained using classical MDS (which can be solved efficiently as an eigenvalue problem). For 'spherical MDS', you could take the following approach for initialization. Perform ordinary MDS, isomap, or some other nonlinear dimensionality reduction technique to obtain coordinates in a Euclidean space. Then, map the resulting points onto the surface of a sphere using a suitable projection. For example, to project onto a 3-sphere, first perform ordinary dimensionality reduction to 2d. Map the resulting points onto a 3-sphere using something like a [stereographic projection](https://en.wikipedia.org/wiki/Stereographic_projection). If the original data lies on some manifold that's topologically equivalent to a sphere, then it might be more appropriate to perform initial dimensionality reduction to 3d (or do nothing if they're already in 3d), then normalize the vectors to pull them onto a sphere. Finally, run the optimization. As with ordinary, nonclassical MDS, multiple runs can be performed using different initial conditions, then the best result selected. It should be possible to generalize to other manifolds, and to other objective functions. For example, we could imagine converting the objective functions of other nonlinear dimensionality reduction algorithms to work on spheres or other manifolds.
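Below is a simplified, hedged Python sketch of that procedure (random initialization and a generic optimizer instead of the manifold-aware solvers, with the input dissimilarities rescaled to $[0, \pi]$ so they are comparable to great-circle distances); for anything serious, Manopt/Pymanopt as described above are the better route.

```
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

def spherical_mds(X, dim=3, seed=0):
    """Embed rows of X on the unit (dim-1)-sphere by minimising MDS-style stress."""
    D = squareform(pdist(X))
    D = D * (np.pi / D.max())                  # rescale to the geodesic range
    n = X.shape[0]
    rng = np.random.default_rng(seed)

    def stress(flat):
        Y = flat.reshape(n, dim)
        Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)   # pull points onto the sphere
        G = np.clip(Y @ Y.T, -1.0, 1.0)
        Dg = np.arccos(G)                      # geodesic (great-circle) distances
        return np.sum((D - Dg) ** 2) / 2.0

    res = minimize(stress, rng.normal(size=n * dim), method="L-BFGS-B")
    Y = res.x.reshape(n, dim)
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)

Y = spherical_mds(np.random.default_rng(1).normal(size=(50, 5)))  # 50 points on S^2
```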
In the field of economics (I think) we have ARIMA and GARCH for regularly spaced time series, and Poisson, Hawkes for modeling point processes, so how about attempts at modeling irregularly (unevenly) spaced time series -- are there (at least) any common practices? (If you have some knowledge of this topic you can also expand the corresponding [wiki article](http://en.wikipedia.org/wiki/Unevenly_spaced_time_series).)

I see an irregular time series simply as a series of pairs (value, time\_of\_event), so we have to model not only value-to-value dependencies but also the dependence between value and time\_of\_event, and the timestamps themselves.

Edit (about missing values and irregularly spaced time series): answer to @Lucas Reis's comment. If the gaps between measurements or realizations of the variable are spaced due to (for example) a Poisson process, there is not much room for this kind of regularization, but a simple procedure exists: `t(i)` is the i-th time index of variable x (the i-th time of realization of x); define the gaps between times of measurements as `g(i)=t(i)-t(i-1)`, then discretize `g(i)` using a constant `c`, `dg(i)=floor(g(i)/c)`, and create a new time series in which the number of blank values between old observations `i` and `i+1` from the original time series equals `dg(i)`. But the problem is that this procedure can easily produce a time series with a number of missing values much larger than the number of observations, so a reasonable estimation of the missing observations' values could be impossible, and a too-large `c` deletes the "time structure/time dependence etc." of the analysed problem (the extreme case is given by taking `c>=max(floor(g(i)/c))`, which simply collapses the irregularly spaced time series into a regularly spaced one).

Edit 2 (just for fun): imagine accounting for missing values in irregularly spaced time series, or even the case of a point process.
When I was looking for a way to measure the amount of fluctuation in irregularly sampled data I came across these two papers on exponential smoothing for irregular data by Cipra [[1](http://dml.cz/handle/10338.dmlcz/135858), [2](http://dml.cz/handle/10338.dmlcz/134655) ]. These build further on the smoothing techniques of Brown, Winters and Holt (see the Wikipedia-entry for [Exponential Smoothing](http://en.wikipedia.org/wiki/Exponential_smoothing)), and on another method by Wright (see paper for references). These methods do not assume much about the underlying process and also work for data that shows seasonal fluctuations. I don't know if any of it counts as a 'gold standard'. For my own purpose, I decided to use two way (single) exponential smoothing following Brown's method. I got the idea for two way smoothing reading the summary to a student paper (that I cannot find now).
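As a rough illustration (not exactly Brown's, Wright's or Cipra's formulation), one simple way to adapt single exponential smoothing to irregular time stamps is to let the smoothing weight grow with the gap since the previous observation, governed by a time constant `tau` that you have to choose:

```
import math

def irregular_ewma(times, values, tau):
    """Single exponential smoothing with gap-dependent smoothing weights."""
    smoothed = [values[0]]
    s, prev_t = values[0], times[0]
    for t, x in zip(times[1:], values[1:]):
        alpha = 1.0 - math.exp(-(t - prev_t) / tau)   # larger gap -> more weight on x
        s = alpha * x + (1.0 - alpha) * s
        smoothed.append(s)
        prev_t = t
    return smoothed

print(irregular_ewma([0.0, 0.5, 3.0, 3.1], [1.0, 2.0, 0.0, 4.0], tau=1.0))
```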
> How can you force a party to be honest (obey protocol rules)?

I have seen some mechanisms such as commitments, proofs, etc., but they simply do not seem to solve the whole problem. It seems to me that the structure of the protocol design together with such mechanisms must do the job. Does anyone have a good classification of that?

**Edit**

When designing secure protocols, if you force a party to be honest, the design would be much easier, though this enforcement has its own price. I have seen that when designing secure protocols, designers assume things which do not seem realistic to me, for instance assuming all parties are honest in the worst case, or assuming the honesty of the server which maintains user data. But when looking at the design of protocols in stricter models, you rarely see such assumptions (at least I haven't seen them - I mostly study protocols over the [UC framework of Canetti](http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=959888), which I think is not totally formalized yet). I was wondering, is there any good classification of the ways in which you can force a party to be honest, or is there any compiler which can convert the input protocol to one with honest parties?

Now I am going to explain why I think this merely does not do the job, though it may seem irrelevant. When designing protocols in the UC framework, which benefits from the ideal/real paradigm, every communication link in the ideal model is authenticated, which is not true in the real model. So protocol designers seek alternative methods to implement this channel, by means of a PKI assumption or a CRS (Common Reference String). But when designing authentication protocols, assuming authenticated channels is wrong. Suppose that we are designing an authentication protocol in the UC framework; there is an attack in which the adversary forges the identity of a party, but due to the assumption of authenticated links in the ideal model this attack is not captured in this framework! You may refer to [Modeling insider attacks on group key exchange protocols](http://portal.acm.org/citation.cfm?id=1102146). You may notice that Canetti in his seminal work [Universally composable notions of key exchange and secure channels](http://www.springerlink.com/index/BA16FGC4R6MTQ20F.pdf) mentions a previous notion of security called SK-security, which is enough to assure the security of authentication protocols. He somehow confesses (by stating that this is a matter of technicality) that the UC definition in this context is too restrictive, and provides a relaxed variant called the non-information oracle (which confused me, because I haven't seen this model of security anywhere; I cannot match this security pattern with any other security pattern - probably my lack of knowledge :D).

[As a side note, you can convert nearly any SK-secure protocol to a UC-secure one regardless of simulator time. For instance, you may just remove the signings of the messages and have the simulator simulate the whole interaction in a dummy way. See [Universally Composable Contributory Group Key Exchange](http://portal.acm.org/citation.cfm?id=1533057.1533079) for a proof! Now suppose this is a group key exchange protocol with polynomially many parties; what would be the efficiency of the simulator? This is the origin of [my other question in this forum](https://cstheory.stackexchange.com/questions/2999/simulator-efficiency-versus-algorithm-efficiency).]
Anyway, due to the lack of commitment in the plain model (over UC), I sought other means to make the protocol secure by just bypassing the need for that relaxation. This idea is very basic in my mind, and came to me after studying the latest commitment scheme of Canetti in the plain model: [Adaptive Hardness and Composable Security in the Plain Model from Standard Assumptions](http://www.cs.cornell.edu/~rafael/papers/ccacommit.pdf). BTW, I don't seek zero-knowledge proofs because, for a reason which I don't know, whenever someone has used one of them in a concurrent protocol (over the UC framework), others have called the protocol inefficient (maybe due to the rewinding of the simulator).
Alas, you can't force people to do what the protocol says they should do. Even well-meaning people who intended to follow the protocol occasionally make mistakes. There seem to be at least 3 approaches:

* crypto theory: assume "good" agents always follow the protocol, while "malicious" agents try to subvert the protocol. Design the crypto protocol such that good agents get what they need, while malicious agents get nothing.
* game theory: assume every agent looks out only for his own individual interest. Use mechanism design to maximize the total benefit to everyone.
* distributed fault-tolerant network: assume every agent makes an occasional mistake, and a few 'bot agents spew out many maliciously-crafted messages. Detect and isolate the 'bot nodes until they are fixed; use error detection and correction (EDAC) to fix the occasional mistake; use [convergent protocols](http://en.wikipedia.org/wiki/Convergence_%28routing%29) that eventually settle into a useful state no matter what initial mis-information is stored in the routing tables.

**mechanism design**

In game theory, designing a situation (such as setting up the rules of an auction) such that people who are selfishly looking out only for their own individual interests end up doing what the designer wants them to do is called ["mechanism design"](http://en.wikipedia.org/wiki/Mechanism_design). In particular, using [implementation theory](http://en.wikipedia.org/wiki/Implementation_theory), situations can be designed such that the final outcome maximizes the total benefit to everyone, avoiding poorly-designed situations such as the "tragedy of the commons" or "prisoner's dilemma" where things happen that are not in anyone's long-term interest. Some of the most robust such processes are designed to be [incentive compatible](http://en.wikipedia.org/wiki/Incentive_compatibility).

The game theory typically makes the simplifying assumption that all relevant agents are "rational". In game theory, "rational" means that an agent prefers some outcomes to other outcomes, is willing and able to change his actions in a way that he expects (given the information available to him) will result in a more preferred outcome (his own narrow self-interest), and he is smart enough to realize that other rational agents will act similarly to try to obtain the outcome that is most preferred out of all the possible outcomes that might result from that choice of action.

A designer may temporarily make the simplifying assumption that all people only act according to their own narrow self-interest. That assumption makes it easier to design a situation using implementation theory. However, after the design is finished, it doesn't matter whether people act according to their own narrow self-interest ( "[Homo economicus](http://en.wikipedia.org/wiki/Homo_economicus)" ), or whether they are altruistic and want to maximize the total net benefit to everyone -- in a properly designed situation, both kinds of people make exactly the same choices and the final outcome maximizes the total benefit to everyone.

**convergent protocols**

When designing a [routing protocol](http://en.wikipedia.org/wiki/Routing_protocol), each node in the network sends messages to its neighbors passing on information about what other nodes are reachable from that node. Alas, occasionally these messages have errors. Worse, sometimes a node is mis-configured and spews out many misleading and perhaps even maliciously-crafted messages.
Even though we humans know the message might be incorrect, we typically design the protocol so that a properly-functioning node trusts every message, stores the information in its routing table, and makes its decisions as if it believes that information to be entirely true. Once some human turns off a bad node (or disconnects it from the network), we typically design the protocol to rapidly pass good information to flush out the corrupt information, and so quickly converge on a useful state.

**combined approaches**

[Algorithmic mechanism design](http://en.wikipedia.org/wiki/Algorithmic_mechanism_design) seems to try to combine the fault-tolerant network approach and the game-theory mechanism approach.

@Yoichi Hirai and Aaron, thank you for pointing out some interesting attempts to combine game theory and cryptography.
I have two recursive algorithms to solve a particular problem. I have calculated their time complexities as $O(n^2\times\log n)$ and $O(n^{2.32})$. I need to find which algorithm is better in terms of time complexity. I tried plotting graphs, but the two functions seem to track each other closely.
$O(n^{2.32})$ is kind of unusual. Do you have a mathematical proof for this, or is it an estimate based on observations? In the latter case, you can't draw any conclusions from it.

Big-Oh is an upper bound, so first you need to check what the actual behaviour is. If an algorithm runs in $O(n)$, then it is correct (but not very useful) to say it runs in $O(n^{2.32})$, so check that first.

If both are tight upper bounds (so the algorithms are not $o(n^2 \log n)$ and not $o(n^{2.32})$, with a little o), then the second algorithm will be slower at least for some cases when $n$ is large. If the bounds were $\Theta$, this would hold for all large $n$.

In practice, if you want to decide which algorithm to use after you have implemented them both, you would measure their execution time on their typical inputs, and find the average time and the worst time. As long as the worst time is acceptable, you'd take whichever one takes less time on average for *your* inputs. For example, if you need to sort a million arrays of size 10 containing almost sorted data, the Big-O of the sorting algorithm is irrelevant; what counts is the average sorting time for arrays of size 10 that are almost sorted.

Quite often the execution time for small $n$ is "fast enough". In that case you will care about the cases where one or both of the algorithms are *not* "fast enough". So if the usual case is "fast enough", you worry more about the unusual, very large cases.

Another problem arises when the average time is low and the worst case should be very rare, but is very slow. In that case an adversary might give you inputs that run very slowly. For example, Quicksort is typically fast, but for every deterministic implementation, an adversary can prepare inputs of size $n$ that take $O(n^2)$ time. They will never happen in practice, only when created by an adversary.
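For intuition only (this says nothing about the constant factors in your actual implementations), here is a small Python sketch of mine that tabulates both bounds. Since $\log n$ grows more slowly than $n^{0.32}$, $n^2\log n$ is eventually the smaller of the two; with a base-2 logarithm the crossover happens around $n$ in the low thousands.

```
import math

# Compare f(n) = n^2 * log2(n) against g(n) = n^2.32.
# Their ratio f(n)/g(n) = log2(n) / n^0.32 tends to 0, so n^2 log n
# is asymptotically smaller, but only after a fairly large crossover point.
for n in [10, 10**2, 10**3, 10**4, 10**6, 10**9]:
    f = n**2 * math.log2(n)
    g = n**2.32
    print(f"n={n:>10}  n^2*log n = {f:.3e}  n^2.32 = {g:.3e}  ratio = {f/g:.3f}")
```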
I use mostly "Gaussian distribution" in my book, but someone just suggested I switch to "normal distribution". Any consensus on which term to use for beginners? Of course the [two terms are synonyms](https://stats.stackexchange.com/questions/55962/what-is-the-difference-between-a-normal-and-a-gaussian-distribution), so this is not a question about substance, but purely a matter of which term is more commonly used. And of course I use both terms. But which should be used mostly?
Even though I tend to say 'normal' more often (since that's what I was taught when first learning), I think "Gaussian" is a better choice, as long as students/readers are quite familiar with both terms: * The normal isn't particularly typical, so the name is itself misleading. It certainly plays an important role (not least because of the CLT), but observed data is much less often particularly near Gaussian than is sometimes suggested. * The word (and associated words like "normalize") has several meanings that can be relevant in statistics (consider "orthonormal basis" for example). If someone says "I normalized my sample" I can't tell for sure if they transformed to normality, computed z-scores, scaled the vector to unit length, to length $\sqrt{n}$, or a number of other possibilities. If we tended to call the distribution "Gaussian" at least the first option is eliminated and something more descriptive replaces it. * Gauss at least has a reasonable degree of claim to the distribution.
Based on the Wikipedia page for [a formal system](http://en.wikipedia.org/wiki/Formal_system), will all programming languages be contained within the following rules?

* **A finite set of symbols.** (This seems obvious, since the computer is a discrete machine with finite memory and therefore a finite number of ways to express a symbol.)
* **A grammar.**
* **A set of axioms.**
* **A set of inference rules.**

Are all *possible* languages constrained by these rules? Is there a notable proof?

EDIT: I've been somewhat convinced that my question may actually be: can programming languages be represented by something other than a formal system?
Technically, yes, because you can make your formal system have a single axiom that says “the sequence of symbols is in the set $S$” where $S$ is the set of programs in the programming language. So this question isn't very meaningful. The notion of formal system is so general that it isn't terribly interesting in itself. The point of using formal systems is to break down the definition of a language into easily-manageable parts. Formal systems lend themselves well to compositional definitions, where the meaning of a program is defined in terms of the meaning of its parts. Note that your approach only defines whether a sequence of symbols is valid, but the definition of a programming language needs more than this: you also need to specify the meaning of each program. This can again be done by a formal system where the inference rules define a [semantics](http://en.wikipedia.org/wiki/Formal_semantics_of_programming_languages) for source programs.
Is support vector machine with linear kernel the same as a soft margin classifier?
Many people have given way better answers than I possibly could, but there are two things I wanted to add.

1. The field, hypothesis, and type of data you are working with can heavily influence which philosophy you use. The hypothesis "The mass of a neutron is 1.001 times the mass of a proton" definitely has a true or false answer. A frequentist approach would be very well suited to testing this hypothesis. Compare that to "Competition drives populations into different areas." This is not always true, but it is true many times. It is completely valid to interpret a Bayesian test of this hypothesis as how often it is true or how significant this effect is.
2. I believe that you should write out how you are going to analyze the data before ever looking at it. Whenever you decide to deviate from this plan, add an explanation for why before you do the new tests. This is a way to help you identify biases before they influence your work. Plus, if you store this document with an independent review board, you are almost immune to accusations of p-hacking.
Been having some trouble trying to come up with a CFG for this language: all binary strings that contain at least two 1's and at most two 0's.

So far, I've come up with this:

S --> T | 0T | T0 | T0T | 00TT | TT00 | 0T0T | T0T0 | 0TT0 | T00T

T --> 1T | V

V --> 11Z | 1Z1 | Z11

Z --> 1Z | epsilon

I realize this is most likely incorrect/redundant, so any feedback would be extremely helpful. Thank you!
*Hint* If $L\_{i j}$ is the language with at least $i$ $1$s and at most $j$ $0$s $$\begin{align} L\_{2 2} &= 1 \, L\_{1 2} \, | \,?\\ ... \\ L\_{0 0} &= \,? \end{align}$$ --- Edited to answer the correct question.
I have recently been reading up on TOC, and had this thought, which does not seem to be answered explicitly anywhere. The way I have understood it, a system is Turing complete if it can simulate any Turing machine. But Turing machines are limited to their instructions. Does that mean not every TM is actually Turing complete? If I am not completely mistaken, only the universal TM would truly be Turing complete. Am I missing something?
Turing completeness is *not* a property of a single program or a single machine. It **does not make sense** to ask "is this machine/program/gadget Turing-complete?" Turing-completeness is a property of a **model of computation**, which is a mathematical structure that describes a particular way of performing computation. Some examples of models of computation:

1. The set of *all* Turing machines.
2. The set of *all* Turing machines with an oracle.
3. The set of *all* general recursive functions.
4. The set of *all* valid expressions in the $\lambda$-calculus.
5. The set of *all* deterministic finite automata.
6. The set of *all* push-down automata.

**Remark:** There are several possible precise mathematical definitions of what a model of computation actually is. We also need a precise definition of what it means to exhibit a simulation between models. Computability books usually skim over these notions and just show simulations on a case-by-case basis.

Anyhow, say that a model $M$ is **as capable** as model $N$ when $M$ can simulate $N$. (Notice that at all times we are talking about entire models, not single machines or programs.) Say that models are equivalent if each is as capable as the other. Finally, a model is Turing-complete if it is as capable as the Turing machine model.

In the above list, the model of Turing machines with oracles is as capable as the model of Turing machines. The models of Turing machines, general recursive functions, and $\lambda$-calculus are equally capable, whereas finite automata and push-down automata are not as capable as Turing machines.

**Supplemental:** There is a notion that applies to a single machine, namely that of a **universal Turing machine**. It is an important concept, and it plays a role in proofs of Turing-completeness, but a universal Turing machine isn't by itself "Turing-complete" – that's the wrong phrase to use.
> Let $X\_1, \cdots, X\_n$ be iid from a uniform distribution $U[-\theta, 2\theta]$ with $\theta \in \mathbb{R}^+$ unknown. Check if the minimal sufficient statistic of $\theta$ is complete.

I found that $$T(X) = \max \left(-X\_{(1)}, \frac{X\_{(n)}}{2} \right)$$ is minimal sufficient, but I am having trouble checking if it's complete.

My attempt: Since the uniform is a location distribution, using Basu's theorem, the ancillary statistic would be the range. Since the above minimal statistic is not independent of the ancillary statistic, it is not complete. Am I right?
I think you should stick to the definition of a complete statistic. For that, you need to find the distribution of $T$. For all $0<t<\theta$, the distribution function of $T$ is \begin{align} P\_{\theta}(T\le t)&=P\_{\theta}(-t\le X\_1,X\_2,\ldots,X\_n\le 2t) \\&=\left[P\_{\theta}(-t<X\_1<2t)\right]^n \\&=\left(\frac{t}{\theta}\right)^n \end{align} So $T$ has pdf $$f\_T(t)=\frac{nt^{n-1}}{\theta^n}\mathbf1\_{0<t<\theta}$$ In other words, $T$ is distributed exactly as $\max\_{1\le i\le n} Y\_i$ where $Y\_i$'s are i.i.d $U(0,\theta)$ variables. That $T$ is a complete statistic is a well-known fact, proved in detail [here](https://math.stackexchange.com/questions/699997/complete-statistic-uniform-distribution).
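If it helps to double-check the algebra, here is a small simulation sketch of mine (with an arbitrary $\theta$) comparing the empirical CDF of $T$ with the derived $(t/\theta)^n$:

```
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000

# Draw X_1,...,X_n ~ U[-theta, 2*theta] and compute T = max(-X_(1), X_(n)/2).
X = rng.uniform(-theta, 2 * theta, size=(reps, n))
T = np.maximum(-X.min(axis=1), X.max(axis=1) / 2)

# T should behave like the maximum of n i.i.d. U(0, theta) draws,
# whose CDF is (t/theta)^n; compare empirical and theoretical CDFs.
for t in [0.5, 1.0, 1.5, 1.9]:
    print(t, (T <= t).mean(), (t / theta) ** n)
```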
I recently learned about a principle of probabilistic reasoning called "[explaining away](https://doi.org/10.1109/34.204911)," and I am trying to grasp an intuition for it. Let me set up a scenario. Let $A$ be the event that an earthquake is occurring. Let $B$ be the event that the jolly green giant is strolling around town. Let $C$ be the event that the ground is shaking. Let $A \perp\!\!\!\perp B$. As you see, either $A$ or $B$ can cause $C$. If I use "explaining away" reasoning, then when $C$ occurs, one of $P(A)$ or $P(B)$ increases but the other decreases, since I don't need alternative reasons to explain why $C$ occurred. However, my current intuition tells me that both $P(A)$ and $P(B)$ should increase if $C$ occurs, since $C$ occurring makes it more likely that any of the causes for $C$ occurred. How do I reconcile my current intuition with the idea of explaining away? How do I use explaining away to justify that $A$ and $B$ are conditionally dependent on $C$?
**Clarification and notation**

> if C occurs, one of P(A) or P(B) increases, but the other decreases

This isn't correct. You have (implicitly and reasonably) assumed that A is (marginally) independent of B and also that A and B are the only causes of C. This implies that A and B are indeed *dependent conditional on C*, their joint effect. These facts are consistent because explaining away is about P(A | C), which is not the same distribution as P(A). The conditioning bar notation is important here.

> However, my current intuition tells me that both P(A) and P(B) should increase if C occurs since C occurring makes it more likely that any of the causes for C occurred.

You are making the 'inference from semi-controlled demolition' (see below for details). To begin with, you *already* believe that C indicates that either A *or* B happened, so you can't get any more certain that either A or B happened when you see C. But how about A *and* B given C? Well, this is possible, but less likely than either A and not B, or B and not A. That is the 'explaining away', and what you want the intuition for.

**Intuition**

Let's move to a continuous model so we can visualise things more easily and think about *correlation* as a particular form of non-independence. Assume that reading scores (A) and math scores (B) are independently distributed in the general population. Now assume that a school will admit (C) a student with a combined reading and math score over some threshold. (It doesn't matter what that threshold is as long as it's at least a bit selective.)

Here's a concrete example: Assume independent unit normally distributed reading and math scores and a sample of students, summarised below. When a student's reading and math score are together over the admission threshold (here 1.5) the student is shown as a red dot.

![explaining away as a collider relationship](https://i.stack.imgur.com/27W9h.png)

Because good math scores offset bad reading scores and vice versa, the population of admitted students will be such that reading and math are now dependent and negatively correlated (-0.65 here). This is also true in the non-admitted population (-0.19 here). So, when you meet a randomly chosen student and you hear about her high math score, then you should expect her to have gotten a lower reading score -- the math score 'explains away' her admission. Of course she *could* also have a high reading score -- this certainly happens in the plot -- but it's less likely. And none of this affects our earlier assumption of no correlation, negative or positive, between math and reading scores in the general population.

**Intuition check**

Moving back to a discrete example closer to your original, consider the best (and perhaps only) cartoon about 'explaining away'.

![semi-controlled demolition](https://i.stack.imgur.com/Jm8Ky.png)

The government plot is A, the terrorist plot is B, and treat the general destruction as C, ignoring the fact there are two towers. If it is clear why the audience are being quite rational when they doubt the speaker's theory, then you understand 'explaining away'.
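For anyone who wants to reproduce the admitted/not-admitted correlations numerically, here is a minimal simulation sketch of my own (arbitrary seed, so the exact numbers will differ slightly from the plot):

```
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Independent unit-normal reading and math scores in the general population.
reading = rng.standard_normal(n)
math_ = rng.standard_normal(n)
admitted = (reading + math_) > 1.5   # admission threshold on the combined score

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("overall     :", round(corr(reading, math_), 2))                        # ~ 0
print("admitted    :", round(corr(reading[admitted], math_[admitted]), 2))    # clearly negative
print("not admitted:", round(corr(reading[~admitted], math_[~admitted]), 2))  # mildly negative
```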
There are often differences between the results of skewness & kurtosis checks and of normality tests, and I always have doubts about whether it is better to choose parametric or nonparametric tests (I use SPSS). Sometimes histograms show whether a distribution looks normal or not, and I have noticed that most often skewness & kurtosis are the better pointers, but the last time I did an analysis it was different, and I really don't know what to do... I read that, e.g., when groups are of equal size, in choosing between Student's t-test and nonparametric tests it is better to choose the t-test even if the distributions aren't normal. Is that true?
You should maybe give more details about your application for us to be able to give specific advice. Yes, normal-based tests (for means, not for variances) are usually quite robust. But even slight differences from a normal distribution may destroy their optimality. So, if in doubt, you should use the nonparametric tests! A big advantage with normal-based theory is its larger flexibility. So, if you need this flexibility, you can combine the normal-theory tests with suitable transformations of the data (log, in case of skewed distributions, for instance).
I would like to get more understanding of deep learning. Browsing the web I find applications in speech recognition and hand-written digits. However, I would be interested to get some guidance on how to apply this in the classical setting:

* binary classifier
* numerical features (each sample is a numerical vector of $K$ entries, no 2D pixels or such).

I am doing my own experiments choosing learning rates, number of hidden neurons and so on, but I would be happy to see an application by somebody more experienced. The software that I use offers weight initialization using Restricted Boltzmann Machines (RBMs). I wonder whether this is useful in this context and whether the other special techniques that one encounters in the literature (convolutional NNs) are useful here too. Could anybody share a blog post, a paper or personal experience?
I used binary classification for sentiment analysis of texts. I converted sentences into vectors by choosing an appropriate vectorizer and classified them using a OneVsRest classifier. In another approach, my words were converted into vectors and I used a CNN-based approach to classify. Both gave comparable results when tested on my datasets. If you have vectors, there are already really good approaches available for binary classification which you can try. [On Binary Classification with Single–Layer Convolutional Neural Networks](http://arxiv.org/pdf/1509.03891v1.pdf) is a good read on classification using CNNs for starters. [This](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/) is one of the first blogs I read to gain more knowledge about this, and it doesn't require many prerequisites to understand (I am assuming you know the basics of convolutions and neural networks).
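If it helps to see the "classical setting" end to end, here is a minimal feed-forward sketch on synthetic numerical vectors (no convolutions, no RBM pre-training). It assumes `tf.keras` is available, and the architecture, synthetic data and hyperparameters are arbitrary placeholders rather than recommendations:

```
import numpy as np
import tensorflow as tf

# Synthetic stand-in for "K numerical features, binary label".
rng = np.random.default_rng(0)
K = 20
X = rng.standard_normal((5000, K)).astype("float32")
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(5000) > 0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(K,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```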
I am dealing with a linear system of equations that I am solving by OLS: $$ \mathbf{y} = \mathbf{X} \mathbf{p} + \mathbf{e} $$ where I have $n$ samples and $k$ parameters ($\mathbf{X}$ is an $n \times k$ matrix). I would like to work out the relationship between sample size ($n$) and the parameter uncertainties contained within their covariance matrix ($\mathbf{C\_p}$). I have established numerically (by simulating an OLS with different $n$) that the parameter variance decreases ~exponentially with $n$, but am seeking an analytical solution. Extensive googling hasn't got me there, and neither has my sub-par knowledge of linear algebra. Apologies if this is really basic, and thanks for the help!
If you have a linear model such as: $$ \mathbf{y} = \mathbf{X} \mathbf{p} + \mathbf{e}, $$ where $\mathbf{X}$ is known and $\mathbf{e}$ is zero mean with covariance $\mathbf{C\_e}$ (other than that, the pdf of $\mathbf{e}$ is arbitrary), then according to the Gauss-Markov theorem the Best Linear Unbiased Estimator (BLUE) of $\mathbf{p}$ is $$ \hat{\mathbf{p}} = (\mathbf{X}^\top\mathbf{C\_e}^{-1}\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{C\_e}^{-1}\mathbf{y}, $$ with covariance $$ \mathbf{C}\_{\hat{\mathbf{p}}} = (\mathbf{X}^\top\mathbf{C\_e}^{-1}\mathbf{X})^{-1}. $$

So if we want to know how the variance of $\hat{\mathbf{p}}$ depends on the sample number, we can make life simple and assume $p$ is a simple scalar, $\mathbf{X} = \mathbf{1}\_N$ ($\mathbf{1}\_N$ is a column vector of size $N$) and $\mathbf{C\_e} = \sigma^2\mathbf{I}$. This corresponds to the model $$ y\_n = p + e\_n $$ with $n = 1\dots N$ samples, where $e\_n$ is zero mean with variance $\sigma^2$. Then the covariance of $\hat{p}$ evaluates to $$ \mathbf{C}\_{\hat{\mathbf{p}}} = (\frac{1}{\sigma^2}\mathbf{1}\_N^\top\mathbf{I}\mathbf{1}\_N)^{-1} = \frac{\sigma^2}{N} = \text{var}(\hat{p}). $$

This can be extended to other models (other $\mathbf{X}$, $\mathbf{p}$ and $\mathbf{C}\_{\mathbf{e}}$), but the result holds that the variance decreases as $1/N$. So if your variance decreased exponentially in your numerical experiments, then something is wrong (assuming that you used the BLUE estimator). More can be found for example in: Kay, S. M. (1993). Fundamentals of Statistical Signal Processing: Estimation Theory. Englewood Cliffs, NJ: Prentice Hall (1st ed.).
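A quick simulation sketch of the scalar case above (arbitrary $\sigma$ and $p$), showing the $\sigma^2/N$ behaviour rather than anything exponential:

```
import numpy as np

rng = np.random.default_rng(0)
sigma, p_true = 2.0, 3.0

# Estimate p in y_n = p + e_n by OLS for growing N; with X = 1_N the OLS
# estimate is just the sample mean, and var(p_hat) should track sigma^2 / N.
for N in [10, 100, 1000, 10_000]:
    p_hat = np.array([
        np.mean(p_true + sigma * rng.standard_normal(N))
        for _ in range(5000)
    ])
    print(N, round(p_hat.var(), 5), round(sigma**2 / N, 5))
```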
Suppose I pose the question: "Given a program $P$ containing statement $X$, will $X$ be executed (given enough runs with all possible inputs)?" This strikes me as being a relative of the Halting Problem, but I just don't know the taxonomy here. What is the name of this problem, or does it not have a name because it is solved/uninteresting?
This is called [the *reachability problem*](https://en.wikipedia.org/wiki/Reachability_problem) -- is it possible for a given system to enter a given state? Techniques that attempt to answer this problem fall under *reachability analysis*, which is one of the main goals of (finite / symbolic) model checking.

---

As the other answer suggests, this is one of the many instances covered by Rice's theorem. Answering questions about the runtime behavior of programs from only their source code is almost always undecidable. Nonetheless, there are many attempts to solve these problems that can be fairly effective on "real world" code (e.g., [Microsoft's 2002 project SLAM](https://www.microsoft.com/en-us/research/project/slam/)), because human programmers tend to try to make their code easy to understand for other humans, which we hope is incidentally also helpful for mechanical analysis.
I am trying to understand the difference between bias-variance and overfitting-underfitting. If a model overfits the data, it means that it will not generalize well to new data because it over-learns the training dataset. I can also say that this overfitting model is biased towards the training examples. Now the question is, what is variance? Why does my overfitting model have high variance, when variance is not a property of the model?

P.S. If I become able to make sense of variance in terms of the model, I will be able to make sense of bias in terms of the model as well.
When we are building a machine learning model, we have to take two factors into account:

1. On one hand, we want the predictions given by the model to be as accurate as possible; this means that we want the model to have as little error as possible.
2. On the other hand, we know that the observations in our data set are likely to have errors: either measurement errors, or they are influenced by some aspect that we do not take into account in the model.

So with our model we want to achieve a compromise: we want it to be able to capture the general trend of the data, without being influenced by the specific errors of the set with which we train it. An example always helps:

[![enter image description here](https://i.stack.imgur.com/amg67.png)](https://i.stack.imgur.com/amg67.png)

1. If our model is too general, we will be in the situation of the left image, **underfitting**, and not really capturing the behavior of the data.
2. But if our model is not general enough, we will be in the situation of the middle image, **overfitting**, and we will be including in the model specific errors of the observations in the training set. That is, a change in our data set will produce a very large change in the model.
3. It is clear that the best model is the one in the image on the right, because it captures the general behavior (that curved trend) without being influenced by all the specific errors of the observations.

So finally, the variance in a model tells us how variable the model is. The larger the variance, the more abrupt changes, ups and downs we will see in the model's predictions. If the variance is very small, the model is very stable and under-fitted. But if it is too large, the model is too variable and over-fitted. We look for a middle ground between the two.
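To make "how variable the model is" concrete, here is a small simulation sketch of my own (a sine curve plus noise, polynomial fits of increasing degree, all choices arbitrary); the squared bias and the variance of the fitted curves move in opposite directions as the degree grows:

```
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(2 * np.pi * x)

# Fit polynomials of different degrees to many noisy resamples of the same curve
# and measure how much the fitted curve moves between resamples (the "variance").
x_grid = np.linspace(0, 1, 50)
for degree in [1, 4, 9]:
    fits = []
    for _ in range(200):
        x = rng.uniform(0, 1, 30)
        y = true_f(x) + 0.3 * rng.standard_normal(30)
        coeffs = np.polyfit(x, y, degree)
        fits.append(np.polyval(coeffs, x_grid))
    fits = np.array(fits)
    bias2 = np.mean((fits.mean(axis=0) - true_f(x_grid)) ** 2)
    var = fits.var(axis=0).mean()
    print(f"degree {degree:>2}: bias^2 = {bias2:.3f}, variance = {var:.3f}")
```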
I think maybe some formalism could exist for this task which would make it significantly easier. My problem is that I invented a reentrant algorithm for a task. It is relatively simple (its pure logic is around 10 lines of C), but constructing these 10 lines took me around two days. I am 99% sure that it is reentrant (which is not the same as thread-safe!), but the remaining 1% is already enough to disrupt my nights.

Of course I could start to do this in a naive way (using a formalized state space, initial conditions, elementary operations, end-conditions, etc.), but I think some type of formalism may exist which makes this significantly easier and shorter.

Proving non-reentrancy is much easier: simply show a state where the end-conditions aren't fulfilled. But of course I constructed the algorithm so that I can't find such a state. I have a strong impression that it is an algorithmically undecidable problem in the general case (probably it can be reduced to the halting problem), but my single case isn't general.

I ask for ideas which would make the proof easier. How are similar problems solved in most cases? For example, a non-trivial condition whose fulfillment would decide the question in either direction would already be a big help.
[Assembly language](https://en.wikipedia.org/wiki/Assembly_language) is a way to write instructions for the computer's [instruction set](https://en.wikipedia.org/wiki/Instruction_set), in a way that's slightly more understandable to human programmers. Different architectures have different instruction sets: the set of allowed instructions is different on each architecture. Therefore, you can't hope to have a write-once-run-everywhere assembly program. For instance, the set of instructions supported by x86 processors looks very different from the set of instructions supported by ARM processors. If you wrote an assembly program for an x86 processor, it'd have lots of instructions that are not supported on the ARM processor, and vice versa. The core reason to use assembly language is that it allows very low-level control over your program, and to take advantage of all of the instructions of the processor: by customizing the program to take advantage of features that are unique to the particular processor it will run on, sometimes you can speed up the program. The write-once-run-everywhere philosophy is fundamentally at odds with that.
I am currently studying mathematics. However, I don't think I want to become a professional mathematician in the future. I am thinking of applying my knowledge of mathematics to do research in artificial intelligence. However, I am not sure how many mathematics courses I should follow. (And which CS theory courses I should follow.)

From Quora, I learned that the subjects Linear Algebra, Statistics and Convex Optimization are most relevant for Machine Learning (see [this](https://www.quora.com/What-parts-of-mathematics-are-most-relevant-for-machine-learning) question). Someone else mentioned that learning Linear Algebra, Probability/Statistics, Calculus, Basic Algorithms and Logic is needed to study artificial intelligence (see [this](https://www.quora.com/What-skills-and-or-maths-are-needed-to-study-artificial-intelligence) question). I can learn about all of these subjects during my first 1.5 years of the mathematics Bachelor at our university.

I was wondering, though, if there are some upper-undergraduate or even graduate-level mathematics subjects that are useful or even needed to study artificial intelligence. What about ODEs, PDEs, Topology, Measure Theory, Linear Analysis, Fourier Analysis and Analysis on Manifolds?

One book that suggests that some quite advanced mathematics is useful in the study of artificial intelligence is *Pattern Theory: The Stochastic Analysis of Real-World Signals* by David Mumford and Agnes Desolneux (see [this](http://rads.stackoverflow.com/amzn/click/1568815794) page). It includes chapters on Markov Chains, Piecewise Gaussian Models, Gibbs Fields, Manifolds, Lie Groups and Lie Algebras and their applications to pattern theory. To what extent is this book useful in A.I. research?
Depends on your definition of advanced, and what sort of AI you want to study. Many problems in AI are provably intractable -- optimal solutions to POMDPs are provably NP-Complete, optimal solutions to DEC-POMDPs are provably NEXP-Complete, etc. So, absent some unexpected breakthrough in complexity theory, the more one knows about approximation algorithms and their theoretical underpinnings, the better. (In addition to the measure theory, etc., needed to truly understand the Bayesian probability that underlies the POMDP model.) Multi-agent artificial intelligence, in particular, intersects with game theory; so knowing game theory is helpful, which in turn depends on topology, measure theory, etc. And likewise, many problems in game theory are intractable. Some are even intractable under approximation, and even understanding when it is *possible* to usefully approximate takes a considerable amount of mathematics to work out. (I note that the game theorists have been having a pretty good run in the Nobel Economics field for the past few years, and that's heavily mathematical in nature. I predict that in twenty-odd years, today's algorithmic game theorists will be in about the same position.)
Here's a question about using a divide and conquer approach to find a majority element in an array. It's taken from *Algorithms* by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani, Question 2.23. I'm not convinced why this algorithm should work. [![enter image description here](https://i.stack.imgur.com/2XEiJ.png)](https://i.stack.imgur.com/2XEiJ.png) [![enter image description here](https://i.stack.imgur.com/tJhwL.png)](https://i.stack.imgur.com/tJhwL.png) It seems to give no direction on what to do with the element that is left unpaired when the array size is odd. Take for example $A = [1, 1, 1, 2, 3]$. Suppose we keep that unpaired singleton element at each step. For example, if we pair them up as $[1, 1], [1, 2], [3]$, then after keeping $1$, discarding $1, 2$, and keeping $3$, we're left with $1, 3$. Then we discard both since they're different, so we're left with no majority element. Suppose we don't keep that unpaired singleton at each step. If we pair them up as $[1, 2], [1, 3], [1]$, then we must discard all of them, so again we're left with no majority element. Where is the flaw? Can someone clarify whether this algorithm keeps or discards the unpaired singleton?
The algorithm as written assumes that $n$ is a power of 2. But you can adjust it to support odd-length arrays, while still completing in $O(n)$ time, as follows: Suppose that at some given point in time, the array has odd length. Let $x$ be the last value in the array. You can determine whether $x$ is the majority element in $O(n)$. If so, return it. If not, then after discarding the last value in the array, it still has the same majority element, but now the number of elements is even.
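Here is one way the adjusted algorithm could look in Python (a sketch of the idea described above, with hypothetical helper names, not the book's code):

```
def majority(arr):
    """Return the majority element of arr, or None if there is none."""
    def count(a, x):
        return sum(1 for v in a if v == x)

    def candidate(a):
        if len(a) == 1:
            return a[0]
        # Odd length: check the last element directly in O(len(a)), as described above.
        if len(a) % 2 == 1:
            x = a[-1]
            if count(a, x) > len(a) // 2:
                return x
            a = a[:-1]  # dropping x preserves the majority element, if any
        # Pair up elements; keep one copy of each equal pair, discard unequal pairs.
        kept = [a[i] for i in range(0, len(a), 2) if a[i] == a[i + 1]]
        if not kept:
            return None
        return candidate(kept)

    x = candidate(arr)
    if x is not None and count(arr, x) > len(arr) // 2:
        return x
    return None

print(majority([1, 1, 1, 2, 3]))   # 1
print(majority([1, 2, 1, 2]))      # None
```

The work per level is linear in the current array length and the array at least halves each level, so the total running time stays $O(n)$.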
I am working to analyze poverty rate using census data. I have a huge dataset. I want to extract the likelihood from this dataset in order to create patterns for energy consumption. What is the best approach to tackle this problem?
You could build a random forest where each of your classes is a group of items (i.e. "green apples with farmed strawberries, with 2% milk"). Then, based on the characteristics of the shopper or whatever your predictors are, you can provide a predicted probability of purchase for each group of items. I would use R's randomForest package (<https://cran.r-project.org/web/packages/randomForest/index.html>) to do this.
I'm having trouble convincing myself of the proof for the following theorem:

> $E\_{TM} = \{\langle M\rangle\mid M$ is a TM and $L(M) = \emptyset\}$ is undecidable.

I think I understand why we reduce $A\_{TM}$ to this problem, where $A\_{TM} = \{\langle M, w\rangle\mid M$ is a TM and $M$ accepts $w\}$, but here is where I get lost. We create a modified TM $M\_1$ such that:

> $M\_1=$ "On input $x$:
>
> 1. If $x\neq w$, reject.
> 2. If $x = w$, run $M$ on input $w$ and accept if $M$ does."

and consequently create a TM $S$ that decides $A\_{TM}$:

> $S=$ "On input $\langle M, w\rangle$, an encoding of a TM $M$ and a string $w$:
>
> 1. Use the description of $M$ and $w$ to construct the TM $M\_1$ just described.
> 2. Run $R$ on input $\langle M\_1\rangle$.
> 3. If $R$ accepts, reject; if $R$ rejects, accept."

My issue here lies with line 3 of TM $S$. If $R$ accepted, then we must have had $x = w$ and $L(M) = \emptyset$, and $S$ would have rejected. Likewise, if $R$ rejected, then we must have had $x\neq w$ (where $x$ could be any string but $w$). Yet $S$ would accept this, even though the 'goal' for $S$ was to accept string $w$. Can someone show me how to think about this correctly? It would be very much appreciated.
In answer to your question about line 3 of the description of $S$, let's first clarify things a bit: we assume there is a decider, $R$, for $E\_{TM}$ and we use this to build a decider, $S$, for $A\_{TM}$ (thus eventually establishing that such an $R$ is impossible). With that out of the way, your question is about what happens when $R$ accepts $\langle M\_1\rangle$:

1. Since $R$ was assumed to be a decider for $E\_{TM}$, $R$ accepts $\langle M\_1\rangle$ if and only if $L(M\_1)=\varnothing$.
2. From the way $M\_1$ was constructed, that can happen only if $M$ doesn't accept $w$ (since $M\_1$ rejects everything else),
3. So $R$ will accept $\langle M\_1\rangle$ only if $\langle M, w\rangle\notin A\_{TM}$.

Similarly,

1. $R$ will reject $\langle M\_1\rangle$ iff $L(M\_1)\ne\varnothing$,
2. And that can only happen when $M$ accepts $w$,
3. And hence $R$ will reject $\langle M\_1\rangle$ only if $\langle M, w\rangle\in A\_{TM}$.

Finally, since $S$ was designed to do the opposite of what $R$ does, we've constructed a decider for $A\_{TM}$, which of course is a contradiction, so $E\_{TM}$ must be undecidable.
I have a code that computes the accuracy, but now I would like to compute the F1 score. ``` accuracy_1 = tf.reduce_mean(tf.cast(tf.equal( tf.argmax(output_1, axis=-1), tf.argmax(y_1, axis=-1)), tf.float32), name="accuracy_1") accuracy_2 = tf.reduce_mean(tf.cast(tf.equal( tf.argmax(output_2, axis=-1), tf.argmax(y_2, axis=-1)), tf.float32), name="accuracy_2") ``` How can I compute F1 equivalent for the above code? I'm finding it difficult as I am very new to TensorFlow.
In tf 2.0+: ``` f1 = 2*(tf.compat.v1.metrics.recall(labels, predictions) * tf.compat.v1.metrics.precision(labels, predictions)) / ( tf.compat.v1.metrics.recall(labels, predictions) + tf.compat.v1.metrics.precision(labels, predictions)) ``` In previous versions you can use the contrib.metrics submodule (deprecated in 1.14): ``` tf.contrib.metrics.f1_score(labels, predictions) ```
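If you can get the labels and predictions out as plain arrays (eager execution or a `session.run`), a simpler route is to compute F1 outside TensorFlow. Here is a sketch with dummy stand-ins for the question's `y_1` and `output_1`, which are assumed to have been evaluated to NumPy arrays:

```
import numpy as np
from sklearn.metrics import f1_score

# Dummy stand-ins for the question's tensors, already evaluated to NumPy arrays:
# y_1 is one-hot ground truth, output_1 is the network's class scores.
y_1 = np.eye(3)[np.random.randint(0, 3, size=100)]
output_1 = np.random.rand(100, 3)

# Mirror the argmax comparison used for accuracy in the question.
y_true = np.argmax(y_1, axis=-1)
y_pred = np.argmax(output_1, axis=-1)
print(f1_score(y_true, y_pred, average="macro"))   # also: "micro", "weighted"
```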
I'm trying to model the data (not make predictions) and am NOT using lasso for this; I just want to know if my plan is somewhat reasonable here: I'm modelling a "yes/no" response variable, so I used logistic regression and stepwise AIC for variable selection. The results give me 13 parameters: 8 covariates with 5 interaction terms (several parameters are not significant on their own but have a significant interaction). When I instead used stepwise selection based on the BIC criterion, I only got two covariates and their interaction. Much simpler of course, but the deviance increased quite a bit. Since all the parameters in the small model were also in the large one, I considered the small one to be nested, so I did the deviance test (likelihood ratio test) and it gave me a p-value of nearly 0, indicating that the larger model is better. Am I doing it right?
Using variable selection procedures like minimizing AIC or BIC impacts the p-values of subsequent hypothesis tests (as well as impacting other things such as the bias of parameter estimates and standard errors). Testing a hypothesis based on two different model selection procedures doesn't have the nominal properties; finding statistical significance for a difference of the two models is neither surprising nor necessarily informative about the relative worth of the two models.

BIC and AIC are based on different assumptions about the situation; the two aren't consistent with each other -- if $n>7$ then using BIC to select a model will always penalize larger models more harshly than AIC. [If you use AIC to do one-variable-at-a-time stepwise selection, it's equivalent to doing ordinary stepwise model selection based on a significance level of 15.7%; BIC would correspond to reducing the significance level with larger sample size.]

If you're going to then use hypothesis testing to choose between them, you're essentially falling back on p-values for variable selection. AIC works out "better" one way, BIC a different way, and the likelihood ratio test a different way again. I don't think there's a good argument for the procedure you have adopted.
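For what it's worth, the 15.7% figure corresponds to the fact that dropping one parameter changes AIC by the deviance difference minus 2, so AIC prefers the larger model exactly when a 1-df likelihood-ratio statistic exceeds 2. A one-line check:

```
from scipy.stats import chi2

# AIC's implicit per-variable "significance level" for a 1-df comparison.
print(chi2.sf(2, df=1))   # ~0.157
```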
In relation to the following figure [![Figure 9.3 Kline 2016](https://i.stack.imgur.com/ovF7x.png)](https://i.stack.imgur.com/ovF7x.png) Kline (2016) writes on p194: > > The numerals (1) in Figure 9.3(b) that appear next to paths from the > factors to one of their indicators are scaling constants, or unit > loading identification (ULI) constraints. The specifications that > > > $A \rightarrow X\_1 = 1.0$ and $B \rightarrow X\_4 = 1.0$ > > > scale the factors in a metric related to that of the explained > (common) variance of the corresponding indicator, or reference > (marker) variable. > > > I understand from [this answer](https://stats.stackexchange.com/a/91787/9162) that the reason for having the reference variable is so we can determine the variance of the latent variable. However, it wasn't clear to me why this goal was not also relevant to exploratory factor analysis. Why don't we set $A \rightarrow X\_1 = 1.0$ in the EFA model? Kline, R. B. (2016). Principles and practice of structural equation modeling. Guilford Press.
Your test sample is a *subset* of your training sample: ``` x_train = x[0:2635] x_test = x[0:658] y_train = y[0:2635] y_test = y[0:658] ``` This means that you evaluate your model on a part of your training data, i.e., you are doing in-sample evaluation. In-sample accuracy is a notoriously poor indicator to out-of-sample accuracy, and maximizing in-sample accuracy can lead to overfitting. Therefore, one should always evaluate a model on a true holdout sample that is completely independent of the training data. Make sure your training and your testing data are disjoint, e.g., ``` x_train = x[659:2635] x_test = x[0:658] y_train = y[659:2635] y_test = y[0:658] ```
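As an aside, a less error-prone way to get disjoint (and shuffled) sets is scikit-learn's `train_test_split`. A sketch with stand-in arrays in place of the question's `x` and `y`:

```
import numpy as np
from sklearn.model_selection import train_test_split

x = np.arange(2635 * 3).reshape(2635, 3)   # stand-in for the question's features
y = np.arange(2635)                        # stand-in for the labels

# Shuffled, disjoint 80/20 train/test split instead of hand-written index slices.
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42
)
print(x_train.shape, x_test.shape)
```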
I've been following the ELBO derivations in the paper [Automatic Differentiation Variational Inference](http://www.cs.columbia.edu/%7Eblei/papers/KucukelbirTranRanganathGelmanBlei2017.pdf) and have a few questions.

With the model $p(x,\theta)$, they first transform $\theta$ so that it lies on the real coordinate plane. Let $\zeta = T(\theta)$ be the transformed variable. Once they do that transformation, they have that $p(x,\zeta) = p\big(x,T^{-1}(\zeta)\big)|\det J\_{T^{-1}}(\zeta)|$. Next, they apply a second transformation to standardize $\zeta$. Let $\eta = S\_\phi(\zeta) = L^{-1}\zeta-\mu$. Here, $L$ and $\mu$ come from the variational distribution $q(\zeta;\mu,L) = N(\zeta;\mu,L)$. $L$ is the upper triangular matrix from the Cholesky decomposition of the covariance matrix.

This is where my confusion arises. In the paper, the authors state "The Jacobian of elliptical standardization evaluates to one, because the Gaussian distribution is a member of the location-scale family: standardizing a Gaussian gives another Gaussian distribution." This means that $p(x,\eta) = p\bigg(x,T^{-1}(S^{-1}(\eta))\bigg)|\det J\_{T^{-1}}(S^{-1}(\eta))|$ and not $p\bigg(x,T^{-1}(S^{-1}(\eta))\bigg)|\det J\_{T^{-1}}(S^{-1}(\eta))||\det J\_{S^{-1}}(\eta)|$.

Why does the Jacobian evaluate to one? I'm not sure what "standardizing the Gaussian gives another Gaussian" has to do with the Jacobian being one. Doesn't $J\_{S^{-1}}(\eta) = L$, which means $|\det J\_{S^{-1}}(\eta)| = |\det(L)|$? They say this at the top of page ten, by the way.
The transformation of the formula from (5) in the paper to the formula at the bottom of page 9 is a consequence of the Law of the Unconscious Statistician (LOTUS) -- see (28) in [Monte Carlo Gradient Estimation in Machine Learning](https://jmlr.org/papers/volume21/19-346/19-346.pdf) for this form -- and is a special case of the reparameterization trick described there, applied to standardizing a Gaussian distribution. The proof of the LOTUS effectively involves showing that the Jacobian determinant ($|\det J\_{S^{-1}}|$) arising in the modified density for $\eta$ as a result of the change of variables *cancels out* with the Jacobian determinant arising in the integral corresponding to the expectation as a result of the same change of variables (i.e. we need to replace $d\zeta$ by $|\det J\_S| d\eta$). This doesn't happen in integrals in general, but happens in expectations because of the co-occurrence of the density and the differential. So I agree that the Jacobian is not one; however, the result specified is correct.
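To spell it out in my own notation (not quoted from the paper), write the standardization as $\eta = L^{-1}(\zeta-\mu)$, so that $\zeta = S^{-1}\_\phi(\eta) = L\eta+\mu$ and $J\_{S^{-1}} = L$. Then for any integrand $f$, a change of variables inside the expectation gives

\begin{align}
\mathbb{E}\_{q(\zeta;\mu,L)}\left[f(\zeta)\right]
&= \int f(\zeta)\, q(\zeta;\mu,L)\, d\zeta \\
&= \int f\big(S^{-1}\_\phi(\eta)\big)\, q\big(S^{-1}\_\phi(\eta);\mu,L\big)\, |\det L|\, d\eta \\
&= \int f\big(S^{-1}\_\phi(\eta)\big)\, N(\eta;0,I)\, d\eta,
\end{align}

because $q\big(S^{-1}\_\phi(\eta);\mu,L\big)\,|\det L| = N(\eta;0,I)$ when $q$ is Gaussian with mean $\mu$ and covariance $LL^\top$. So the Jacobian of $S^{-1}$ is not one; it cancels against the $|\det L|$ hidden in the Gaussian normalizing constant, which is the sense in which "standardizing a Gaussian gives another Gaussian" makes the factor disappear. The $|\det J\_{T^{-1}}|$ factor survives because it sits inside the model term $p$, not inside the variational density.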
I am wondering if there is a simple way of detecting outliers. For one of my projects, which was basically a correlation between the number of times respondents participate in physical activity in a week and the number of times they eat outside the home (fast food) in a week, I drew a scatterplot and literally removed the data points that were extreme. (The scatterplot showed a negative correlation.) This was based on value judgement (based on the scatterplot where these data points were clearly extreme). I did not do any statistical tests. I am just wondering if this is a sound way of dealing with outliers. I have data from 350 people so loss of (say) 20 data points is not a worry to me.
I have done a lot of research on outliers, particularly when I worked on energy data validation at Oak Ridge from 1978 to 1980. There are formal tests for univariate outliers for normal data (e.g. Grubbs' test and Dixon's ratio test). There are tests for multivariate outliers and time series. The book by Barnett and Lewis on "Outliers in Statistical Data" is the bible on outliers and covers just about everything.

When I was at Oak Ridge working on data validation we had large multivariate data sets. For univariate outliers, there is a direction for extremes (highly above the mean and highly below the mean). But for multivariate outliers there are many directions to look for outliers. Our philosophy was to consider what the intended use of the data is. If you are trying to estimate certain parameters such as a bivariate correlation or a regression coefficient, then you want to look in the direction that provides the greatest effect on the parameter of interest.

At that time I had read Mallows' unpublished paper on influence functions. The use of influence functions to detect outliers is covered in Gnanadesikan's multivariate analysis book. Of course, you can find it in Barnett and Lewis also. The influence function for a parameter is defined at points in the multivariate space of the observations and essentially measures the difference between the parameter estimate when the data point is included compared with when it is left out. You can do such estimates with each sample point, but usually you can derive a nice functional form for the influence function that gives insight and faster computation. For example, in my paper in the *American Journal of Mathematical and Management Science* in 1982, "The Influence Function and Its Application to Data Validation", I show the analytic formula for the influence function for bivariate correlation and that the contours of constant influence are hyperbolae. So the contours show the direction in the plane where the influence function increases the fastest. In my paper, I show how we applied the influence function for bivariate correlation with the FPC Form 4 data on generation and consumption of energy. There is a clear high positive correlation between the two, and we found a few outliers that were highly influential on the estimate of correlation. Further investigation showed that at least one of the points was in error and we were able to correct it.

But an important point that I always mention when discussing outliers is that automatic rejection is wrong. The outlier is not always an error and sometimes it provides important information about the data. Valid data should not be removed just because it doesn't conform with our theory of reality. Whether or not it is difficult to do, the reason why the outlier occurred should always be investigated.

I should mention that this is not the first time multivariate outliers have been discussed on this site. A search for outliers would probably lead to several questions where multivariate outliers have been discussed. I know that I have referenced my paper and these books before and given links to them. Also, when outlier rejection is discussed, many of us on this site have recommended against it, especially if it is done based solely on a statistical test. Peter Huber often mentions robust estimation as an alternative to outlier rejection.
The idea is that robust procedures will downweight the outliers reducing their effect on estimation without the heavy-handed step of rejecting them and using a non-robust estimator. The influence function was actually originally developed by Frank Hampel in his PhD dissertation in the early 1970s (1974 I think). His idea was actually to use influence functions to identify estimators that were not robust against outliers and to help develop robust estimators. [Here](https://stats.stackexchange.com/questions/28944/how-to-judge-if-a-datapoint-deviates-substantially-from-the-norm) is a link to a previous discussion on this topic where I mentioned some work of mine on detecting outliers in time series using influence functions.
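For readers who want to see the flavour of this without the analytic formula, here is a crude empirical (leave-one-out) version of the influence idea for a bivariate correlation, on synthetic data rather than the FPC Form 4 data:

```
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Correlated bivariate data (think: generation vs consumption) plus one gross error.
x = rng.standard_normal(n)
y = 0.9 * x + 0.3 * rng.standard_normal(n)
x[0], y[0] = 4.0, -4.0   # a single highly influential point

r_all = np.corrcoef(x, y)[0, 1]

# Empirical influence: how much does the correlation change when point i is left out?
influence = np.array([
    r_all - np.corrcoef(np.delete(x, i), np.delete(y, i))[0, 1]
    for i in range(n)
])
print("r with all points   :", round(r_all, 3))
print("most influential idx:", int(np.abs(influence).argmax()),
      "effect:", round(influence[np.abs(influence).argmax()], 3))
```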
I was reading about unit root tests when I started to get slightly confused about the setting of the null hypothesis versus the alternative hypothesis, and so I thought of asking the experts' opinion. In the augmented Dickey-Fuller test, the null hypothesis is that there IS a unit root. My confusion comes from the fact that I think the null hypothesis *should be* that there is NO unit root. Allow me to explain.

The reason why I think so (and I know that I am probably wrong, but I am hoping someone might point out my error) is more philosophical than mathematical. By this I mean: accepting the null hypothesis implies that what it says MIGHT be true (statistically), and rejecting the null hypothesis based on the observed data means that there is only a very small (tiny) chance (p-value) that the null hypothesis is true given the data, so it is very, very unlikely (and hence we reject the null hypothesis).

But if we accept the null hypothesis, and we transform the data (by differencing it) to get rid of the unit root, then we have acted on what MIGHT be true, and as a result we would be modeling a different time series. IF (and that is a big if) the null hypothesis were that there is no unit root, then after running (my hypothetical) unit root test, I would only transform the data if there were only a very small chance that the magnitude of the root is less than 1.

Thanks in advance for correcting my wrong thoughts.
The null hypothesis is "the differences, $y\_{t+1} - y\_t$, are stationary". You're suggesting switching it to the opposite, but one won't be able to carry out such a test, as very-close-to-stationary will look just like stationary. But what you are really saying is that one should only take differences and act as if they're stationary if there is good evidence that they are stationary. You might satisfy this concern by being less stringent about the conclusion of non-stationarity of the differences.
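To see the direction of the null in practice, here is a small sketch using the ADF test from `statsmodels` on a simulated random walk (unit root, so we expect to fail to reject) and a stationary AR(1) (where we expect to reject); the exact numbers depend on the seed:

```
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
e = rng.standard_normal(500)

random_walk = np.cumsum(e)              # has a unit root
ar1 = np.zeros(500)
for t in range(1, 500):                 # stationary AR(1) with phi = 0.5
    ar1[t] = 0.5 * ar1[t - 1] + e[t]

for name, series in [("random walk", random_walk), ("AR(1), phi=0.5", ar1)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name:>15}: ADF stat = {stat:.2f}, p-value = {pvalue:.3f}")
```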
This is a question related to [this one](https://cstheory.stackexchange.com/questions/2815/what-do-we-know-about-restricted-versions-of-the-halting-problem). I am posting it again in a much simpler form because, after a lot of discussion there, it felt like a totally different question.

The classical proof of the undecidability of the halting problem depends on demonstrating a contradiction when trying to apply a hypothetical HALT decider to itself. I think that this only denotes the impossibility of having a HALT decider that decides whether **itself** will halt or not, but doesn't give any information beyond that about the decidability of halting in any **other** cases. So the question is

> Is there a proof that the halting problem is undecidable that doesn't depend on showing that HALT cannot decide itself, nor depends on the diagonalization argument?

Small edit: I will commit to the original phrasing of the question, which is asking for a proof that doesn't depend on diagonalization at all (rather than just requiring it to not depend on a diagonalization that depends on HALT).
Yes, there are such proofs in computability theory (a.k.a. recursion theory). You can first show that the halting problem (the set $0'$) can be used to compute a set $G\subseteq\mathbb N$ that is *1-generic* meaning that in a sense each $\Sigma^0\_1$ fact about $G$ is decided by a finite prefix of $G$. Then it is easy to prove that such a set $G$ cannot be computable (i.e., decidable). We could replace 1-[generic](http://en.wikipedia.org/wiki/Forcing_%28recursion_theory%29) here by 1-random, i.e., [*Martin-Löf random*](http://en.wikipedia.org/wiki/Algorithmically_random_sequence), for the same effect. This uses the [Jockusch-Soare Low Basis Theorem](http://en.wikipedia.org/wiki/Low_basis_theorem). (Warning: one might consider just showing that $0'$ computes [Chaitin's $\Omega$](http://en.wikipedia.org/wiki/Chaitin%27s_Omega), which is 1-random, but here we have to be careful about whether the proof that $\Omega$ is 1-random relies on the halting problem being undecidable! Therefore it's safer to just use the Low Basis Theorem).
I know that trivially the OR function on $n$ variables $x\_1,\ldots, x\_n$ can be represented exactly by the polynomial $p(x\_1,\ldots,x\_n)$ as such: $p(x\_1,\ldots,x\_n) = 1-\prod\_{i = 1}^n\left(1-x\_i\right)$, which is of degree $n$. But how could I show, what seems obvious, that if $p$ is a polynomial that represents the OR function exactly (so $\forall x \in \{0,1\}^n : p(x) = \bigvee\_{i = 1}^n x\_i$), then $\deg(p) \ge n$?
Let $f\colon \{0,1\}^n \to \{0,1\}$ be a boolean function. If it has a polynomial representation $P$ then it has a multilinear polynomial representation $Q$ of degree $\deg Q \leq \deg P$: just replace any power $x\_i^k$, where $k \geq 2$, by $x\_i$. So we can restrict our attention to multilinear polynomials.

**Claim:** The polynomials $\{ \prod\_{i \in S} x\_i : S \subseteq [n] \}$, as functions $\{0,1\}^n \to \mathbb{R}$, form a basis for the space of all functions $\{0,1\}^n \to \mathbb{R}$.

**Proof:** We first show that the polynomials are linearly independent. Suppose that $f = \sum\_S c\_S \prod\_{i \in S} x\_i = 0$ for all $(x\_1,\ldots,x\_n) \in \{0,1\}^n$. We prove by (strong) induction on $|S|$ that $c\_S = 0$. Suppose that $c\_T = 0$ for all $|T| < k$, and let us be given a set $S$ of cardinality $k$. For all $T \subset S$ we know by induction that $c\_T = 0$, and so $0 = f(1\_S) = c\_S$, where $1\_S$ is the input which is $1$ on the coordinates of $S$. $~\qquad\square$

The claim shows that the multilinear representation of a function $f\colon \{0,1\}^n \to \{0,1\}$ is unique (indeed, $f$ doesn't even have to be $0/1$-valued). The unique multilinear representation of OR is $1-\prod\_i(1-x\_i)$, which has degree $n$.
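As a sanity check of the uniqueness claim for small $n$ (not a substitute for the proof), one can compute the multilinear coefficients of OR by Möbius inversion over subsets, $c\_S = \sum\_{T \subseteq S} (-1)^{|S|-|T|} f(1\_T)$, and confirm that the coefficient of $x\_1\cdots x\_n$ is nonzero:

```
from itertools import combinations

def multilinear_coeffs(f, n):
    # Moebius inversion over the subset lattice:
    # c_S = sum over T subset of S of (-1)^(|S|-|T|) * f(1_T).
    coeffs = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            c = 0
            for j in range(k + 1):
                for T in combinations(S, j):
                    x = [1 if i in T else 0 for i in range(n)]
                    c += (-1) ** (k - j) * f(x)
            coeffs[S] = c
    return coeffs

n = 4
OR = lambda x: int(any(x))
coeffs = multilinear_coeffs(OR, n)

print("degree:", max(len(S) for S, c in coeffs.items() if c != 0))   # n
print("coefficient of x_1*...*x_n:", coeffs[tuple(range(n))])        # (-1)^(n+1)
```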
For a given sufficiently strong formal axiomatic system $\mathsf{F}$ (like $\mathsf{PA}$ or $\mathsf{ZFC}$) and any given function $p^\*(x)$ that can be specified within the formal system $\mathsf{F}$, [Hutter's algorithm](http://www.hutter1.net/ai/pfastprg.htm) $M\_{\mathsf{F},p^\*}$ computes the function $p^\*(x)$ nearly as quickly as any (provably quick) algorithm $p$ provably computing $p^\*(x)$. More precisely > > Let $p$ be any algorithm, computing provably the same function as $p^∗$ with computation time provably bounded by the function $t\_p(\xi)$ for all $\xi$. $time\_{t\_p}(x)$ is the time needed to compute the time bound $t\_p(x)$. Then Hutter's algorithm $M\_{\mathsf{F},p^∗}$ computes $p^∗(x)$ in time > $$time\_{M\_{\mathsf{F},p^∗}}(x) ≤ 5·t\_p(x)+d\_p·time\_{t\_p}(x)+c\_p$$ > with constants $c\_p$ and $d\_p$ depending on $p$ but not on $x$. > > > Here "provably" means that a proof can be found in $\mathsf{F}$. The algorithm could be interpreted as a generalization and improvement of Levin search, but I wonder whether it is always an improvement. --- Can we use the formal system $F$ to construct a function $p^\*$ for which Hutter's algorithm $M\_{\mathsf{F},p^\*}$ performs badly? How about asking whether a short proof of $\bot$ (i.e. a contradiction) can be found in $\mathsf{F}$? A short proof would be a proof with fewer than $n$ symbols, and we would use $x$ to encode $n$ in binary form. The function $p^\*$ would just be a brute force search for the contradiction. I think this should give a problem in $NEXP$, but the optimal algorithm to compute $p^\*$ would have complexity $O(1)$, because the answer is always "no". But $M\_{\mathsf{F},p^\*}$ will take double exponential time in the length of $x$ to arrive at that conclusion, if I'm not mistaken. Now I wonder whether there are more interesting ways to make $M\_{\mathsf{F},p^\*}$ look bad, perhaps similar in spirit to Rice's theorem?
*Introduction to the Theory of Computation* by Michael Sipser is a relatively recent entry into this field. It was the required book for a class my friend was taking, and I asked him for the PDF so I could browse through at my leisure. I ended up reading almost the whole book, even the chapters on topics I was already very familiar with, just because the book is such a joy to read. It's written at an introductory level, which means less notation, more exposition, and more intuition. The motivation behind every idea and theorem is crystal clear. He precedes every proof with a "proof idea" section that lays out the path the proof is going to take without getting into the gory details. The book covers Automata Theory, Computability Theory, and Complexity Theory to a satisfactory depth for an undergraduate level. I've read many textbooks in computer science and math, and this is probably my favorite.
Is it a hard-wired CU or a microprogrammed one?
This is a known problem, called [Higher Order Unification](https://en.wikipedia.org/wiki/Unification_(computer_science)#Higher-order_unification). Unfortunately, this problem is undecidable in general.

There is a decidable fragment, known as Miller's Pattern Fragment. It's used extensively in, among other things, typechecking of dependently-typed programs with metavariables or pattern matching. This fragment is where unification variables are applied only to distinct bound program variables. [This paper](http://adam.gundry.co.uk/pub/pattern-unify/) provides a great tutorial on how higher order unification works, and walks through a (relatively) simple implementation of it. Unfortunately, it doesn't look like your function falls into this pattern fragment.

That said, what I'm seeing is pretty similar to function composition. Does the following function satisfy your property? $B = \lambda f\ g\ x\ \ldotp f\ (g\ x) $ We have:

* $B\ f\ (\lambda x . M)$
* $= B\ f\ (\lambda y . [y/x]M)$ by $\alpha$-equivalence
* $ = \lambda x . f\ ((\lambda y. [y/x]M) x) $
* $= \lambda x . f\ ([x/y][y/x]M)$
* $= \lambda x . f\ M$
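Not a proof, but a quick concrete check of the last identity with an arbitrary $f$ and $M$:

```
# Check B f (\x. M) = \x. f M at a few points, with f and M chosen arbitrarily.
B = lambda f: lambda g: lambda x: f(g(x))

f = lambda v: v + 1
g = lambda x: x * 2          # plays the role of \x. M with M = x*2

lhs = B(f)(g)                # B f (\x. M)
rhs = lambda x: f(x * 2)     # \x. f M

print(all(lhs(x) == rhs(x) for x in range(-5, 6)))   # True
```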
Most methods for [symbolic data analyis](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470090162.html) are currently implemented in the SODAS software. Are there any R packages for symbolic data except clamix and clusterSim?
I've just found out that the package [symbolicDA](http://keii.ae.jgora.pl/symbolicDA/index.html) is about to appear on CRAN in a few months' time.
Problem: > > Two people are to arrive at the same location some time between 3pm > and 4pm. They both arrive at random times within the hour and only > stay for ten minutes. Since they only go one time, what is the > probability that they will meet during that hour? > > > I have found the correct answer to this problem here (the second formula P2): <http://www.mathpages.com/home/kmath124/kmath124.htm> [![enter image description here](https://i.stack.imgur.com/kt8SO.gif)](https://i.stack.imgur.com/kt8SO.gif) $$ P2 = 1 - \frac{(1-w\_1)^2 + (1-w\_2)^2}{2} = w\_1 + w\_2 - \frac{w\_1^2}{2} - \frac{w\_2^2}{2} $$ Plugging in ($1/6$) for both $w\_1$ and $w\_2$ is $11/36$ which is the correct answer. But I don't understand how they came to that conclusion and what formula they derived it from. Any help in explaining this would be appreciated!
The side of the square is one, so its area $= 1 \* 1 = 1$. The side of the upper white right-angled triangle is $1 - w\_1$, so its area is $(1-w\_1)\*(1-w\_1)/2$. Similarly for the bottom triangle. So: the area of the gray region = square - two white triangles = formula $P2$. Finally note that each 'point' in the square is equally likely to occur, so that the area of the 'valid' points (the grey region) divided by the area of 'all' points (the white square) is the probability of a 'valid' event (i.e. the two persons meeting). Since the area of the square is 1, the result is still formula $P2$.
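For what it's worth, a quick Monte Carlo check of the $11/36$ answer (a sketch in Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.uniform(0, 1, n)          # arrival time of person 1 (hours after 3pm)
y = rng.uniform(0, 1, n)          # arrival time of person 2
w1 = w2 = 1 / 6                   # each waits 10 minutes = 1/6 hour

meet = np.abs(x - y) <= np.where(x < y, w1, w2)   # the earlier arriver's wait applies
print(meet.mean(), 11 / 36)       # both are about 0.3056
```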
One way to deal with the problem of collisions for a hash table is to have a linked list for each bucket. But then the lookup time is no longer constant. Why not use a hash set instead of a linked list? The lookup time would then be constant. For example, in C++ the hash table could be defined as: ``` unordered_map<int, unordered_set<int>> m; ```
> > But then the lookup time is no longer constant > > > Not worst-case constant -- which it *never* is for (basic) hashtables -- but it is still average-case constant, provided the usual assumptions on input distribution and hashing function. > > Why not use a hash set instead of a linked list? > > > And how do you implement that one? You have created a circular definition. > > The lookup time would then be constant. > > > Nope, see above. Your major confusion seems to be about what people mean when they say that hashtables have constant lookup time. Read [here](https://cs.stackexchange.com/questions/249/when-is-hash-table-lookup-o1?rq=1) on how this is true and false.
Genetic algorithms are one form of optimization method. Often stochastic gradient descent and its derivatives are the best choice for function optimization, but genetic algorithms are still sometimes used. For example, the antenna of [NASA's ST5 spacecraft](https://en.wikipedia.org/wiki/Space_Technology_5) was created with a genetic algorithm: [![ST5 antenna](https://i.stack.imgur.com/vfP5b.jpg)](https://i.stack.imgur.com/vfP5b.jpg) When are genetic optimization methods a better choice than more common gradient descent methods?
Genetic algorithms (GA) are a family of heuristics which are empirically good at providing a *decent* answer in many cases, although they are rarely the best option for a given domain. You mention derivative-based algorithms, but even in the absence of derivatives there are plenty of derivative-free optimization algorithms that perform way better than GAs. See [this](https://stats.stackexchange.com/questions/193306/optimization-when-cost-function-slow-to-evaluate/193391#193391) and [this](https://stats.stackexchange.com/questions/203542/most-suitable-optimizer-for-the-gaussain-process-likelihood-function/203555#203555) answer for some ideas. What many standard optimization algorithms have in common (even derivative-free methods) is the assumption that the underlying space is a smooth manifold (perhaps with a few discrete dimensions), and the function to optimize is *somewhat* well-behaved. However, not all functions are defined on a smooth manifold. Sometimes you want to optimize over a graph or other discrete structures (combinatorial optimization) -- here there are dedicated algorithms, but GAs would also work. The more you go towards functions defined over complex, discrete structures, the more GAs can be useful, especially if you can find a representation in which the genetic operators work at their best (which requires a lot of hand-tuning and domain knowledge). Of course, the future might lead to forget GAs altogether and develop methods to [map discrete spaces to continuous space](https://arxiv.org/abs/1610.02415), and use the optimization machinery we have on the continuous representation.
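To make the "functions over discrete structures" point concrete, here is a minimal GA sketch on the classic OneMax problem (maximise the number of 1-bits in a fixed-length string). This is my own toy example in Python, not tuned for anything:

```python
import random

random.seed(0)
L, POP, GENS = 40, 60, 80
fitness = lambda bits: sum(bits)                  # OneMax: count the 1-bits

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]

def pick():                                       # 3-way tournament selection
    return max(random.sample(pop, 3), key=fitness)

for _ in range(GENS):
    children = []
    while len(children) < POP:
        a, b = pick(), pick()
        cut = random.randrange(1, L)              # one-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < 1 / L) for bit in child]   # bit-flip mutation
        children.append(child)
    pop = children

print(max(map(fitness, pop)))                     # typically at or near the optimum of 40
```

On a smooth continuous objective, the derivative-free optimisers linked above would normally beat this kind of loop; the GA's appeal is that the representation and operators carry over to graphs, permutations and other discrete structures.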
What differences and relationships are between randomized algorithms and nondeterministic algorithms? From Wikipedia > > A **randomized algorithm** is an algorithm which employs a degree of > randomness as part of its logic. The algorithm typically uses > uniformly random bits as an auxiliary input to guide its behavior, in > the hope of achieving good performance in the "average case" over all > possible choices of random bits. Formally, the algorithm's performance > will be a random variable determined by the random bits; thus either > the running time, or the output (or both) are random variables. > > > A **nondeterministic algorithm** is an algorithm that can exhibit > different behaviors on different runs, as opposed to a deterministic > algorithm. There are several ways an algorithm may behave differently > from run to run. A **concurrent algorithm** can perform differently on > different runs due to a race condition. A **probabilistic algorithm**'s > behaviors depends on a random number generator. An algorithm that > solves a problem in nondeterministic polynomial time can run in > polynomial time or exponential time depending on the choices it makes > during execution. > > > Are randomized algorithms and probabilistic algorithms the same concept? If yes, are randomized algorithms just a kind of nondeterministic algorithms?
In short: non-determinism means to have multiple, equally valid choices of how to continue a computation. Randomisation means to use an external source of (random) bits to guide computation.

---

In order to understand nondeterminism, I suggest you look at finite automata (FA). For a *deterministic* FA (DFA), the transition function is, well, a function. Given the current state and the next input symbol, the next state is uniquely defined. A non-deterministic automaton (NFA), on the other hand, has a transition *relation*: given the current state and the next input symbol, there are multiple possible next states!

For example, consider this automaton for the language $(ab)^\*(ac)^\*$:

![NFA](https://i.stack.imgur.com/V4zJW.png)
[[source](https://github.com/akerbos/sesketches/blob/gh-pages/src/cs_5008_1.tikz)]

The automaton *guesses* which $a$ marks the border between $(ab)^\*$ and $(ac)^\*$; a deterministic automaton would have to postpone its decision until after having read the symbol after each $a$. The key point here is that acceptance is defined as "accept if *there is* an accepting run" for NFA. This existence criterion can be interpreted as "always guessing right", even though there is no actual guessing.

Note that there are no probabilities here, anywhere.

If you were to translate nondeterminism into programming languages, you would have statements that can cause jumps to different other statements *given the same state*. Such a thing does not exist, except maybe in esoteric programming languages designed to screw with your mind.

---

Randomisation is quite different. If we break it down, the automaton/program does not have multiple choices for continuing execution. Once the random bit(s) are drawn, the next statement is uniquely defined:

```
if ( rand() > 5 ) do_stuff();
else do_other_stuff();
```

In terms of finite automata, consider this:

![PFA](https://i.stack.imgur.com/2XWBA.png)
[[source](https://github.com/akerbos/sesketches/blob/gh-pages/src/cs_5008_2.tikz)]

Now every word has a probability, and the automaton defines a probability distribution over $\{a,b,c\}^\*$ (the difference between $1$ and the sum of the outgoing edges is the probability of terminating; words that cannot be accepted have probability $0$). We can view this as a deterministic automaton, given the sequence of random decisions (which models practice quite well, as we usually use no real random sources); this can be modelled as a DFA over $\Sigma \times \Pi$ where $\Pi$ is a sufficiently large alphabet used by the random source.

---

One final note: we can see that nondeterminism is a purely theoretical concept, it cannot be implemented! So why do we use it?

1. It often allows for smaller representations. You might know that there are NFA for which the smallest DFA is exponentially larger¹. Using the smaller ones is just a matter of simplifying automaton design and technical proofs.
2. Translation between models is often more straightforward if nondeterminism is allowed in the target model. Consider, for instance, converting regular expressions to DFA: the usual (and simple) way is to translate it to an NFA and determinise this one. I am not aware of a direct construction.
3. This may be an academic concern, but it is interesting that nondeterminism can increase the power of a device.
This is not the case for finite automata and Turing machines, arguably the most popular machine models, but for example deterministic pushdown automata, Büchi automata and top-down tree automata accept strictly fewer languages than their non-deterministic siblings².

---

1. See [this question on cstheory.SE](https://cstheory.stackexchange.com/q/12622/1546) for an example.
2. See [here](https://cstheory.stackexchange.com/q/9673/1546), [here](https://en.wikipedia.org/wiki/B%FCchi_automaton#Recognizable_languages) and [here (Proposition 1.6.2)](http://tata.gforge.inria.fr/), respectively.
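If it helps to see the "accept if *there is* an accepting run" semantics next to actual coin-flipping, here is a small sketch in Python with a hand-rolled NFA for $(ab)^\*(ac)^\*$ (my own encoding, not necessarily the automaton in the picture):

```python
import random

# On 'a' in state 0 the automaton has two choices: stay in the (ab)* part
# (go to state 1) or guess that the (ac)* part has started (go to state 2).
DELTA = {(0, "a"): {1, 2}, (1, "b"): {0}, (2, "c"): {3}, (3, "a"): {2}}
ACCEPT = {0, 3}

def nfa_accepts(word):
    """Nondeterministic semantics: accept iff SOME run accepts.
    Simulated deterministically by tracking the set of reachable states."""
    states = {0}
    for ch in word:
        states = {q for s in states for q in DELTA.get((s, ch), set())}
    return bool(states & ACCEPT)

def one_random_run(word):
    """A randomised machine instead: resolve each choice with a coin flip.
    A single run may well reject a word the NFA accepts."""
    state = 0
    for ch in word:
        options = DELTA.get((state, ch), set())
        if not options:
            return False
        state = random.choice(sorted(options))
    return state in ACCEPT

print(nfa_accepts("abac"))                                # True: some run accepts
print(sum(one_random_run("abac") for _ in range(1000)))   # only about a quarter of runs accept
```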
[Alan Turing](http://en.wikipedia.org/wiki/Alan_Turing), one of the pioneers of (theoretical) computer science, made many seminal scientific contributions to our field, including defining Turing machines, the Church-Turing thesis, undecidability, and the Turing test. However, his important discoveries are not limited to the ones I listed. In honor of his 100th Birthday, I thought it would be nice to ask for a more complete list of his important contributions to computer science, in order to have a better appreciation of his work. So, **what are Alan Turing's important/influential contributions to computer science?**
As mentioned in the question, Turing was central to defining algorithms and computability, thus he was one of the people who helped assemble the algorithmic lens. However, I think his biggest contribution was **viewing science through the algorithmic lens** and not just computation for the sake of computation.

During WW2 Turing used the idea of computation and electro-mechanical (as opposed to human) computers to help create the [Turing–Welchman bombe](http://en.wikipedia.org/wiki/Bombe) and other tools and formal techniques for doing crypto-analysis. He started the transformation of cryptology, the art form, into cryptography, the science, which Claude Shannon completed. **Alan Turing viewed cryptology through algorithmic lenses.**

In 1948, Turing followed his interest in the brain and created the [first learning artificial neural network](https://cogsci.stackexchange.com/q/1263/29). Unfortunately his manuscript was rejected by the director of the NPL and not published (until 1967). However, it predated both Hebbian learning (1949) and Rosenblatt's perceptrons (1957) that we typically associate with being the first neural networks. Turing foresaw the foundation of connectionism (still a huge paradigm in cognitive science) and computational neuroscience. **Alan Turing viewed the brain through algorithmic lenses.**

In 1950, Turing published his famous *Computing machinery and intelligence* and launched AI. This had a transformative effect on Psychology and Cognitive Science, which continue to view cognition as computation on internal representations. **Alan Turing viewed the mind through algorithmic lenses.**

Finally in 1952 (as @vzn mentioned) Turing published *The Chemical Basis of Morphogenesis*. This has become his most cited work. In it, he asked (and started to answer) the question: how does a spherically symmetric embryo develop into a non-spherically symmetric organism under the action of symmetry-preserving chemical diffusion of morphogens? His approach in this paper was very physics-y, but some of the approach did have an air of TCS: his paper made rigorous qualitative statements (valid for various constants and parameters) instead of quantitative statements based on specific (in some fields: potentially impossible to measure) constants and parameters. Shortly before his death, he was continuing this study by working on the basic ideas of what was to become artificial life simulations, and a more discrete and non-differential-equation treatment of biology. In a blog post [I speculate on how he would develop biology](http://egtheory.wordpress.com/2012/06/23/turing-biolog/) if he had more time. **Alan Turing started to view biology through algorithmic lenses.**

I think Turing's greatest (and often ignored) contribution to computer science was showing that we can glean great insight by viewing science through the algorithmic lens. I can only hope that we honour his genius by continuing his work.

---

### Related questions

* [Algorithmic lens in the social sciences](https://cstheory.stackexchange.com/q/6387/1037)
* [Modern treatments of Alan Turing's B-type neural networks](https://cogsci.stackexchange.com/q/1263/29)
* [Impact of Alan Turing's approach to morphogenesis](https://biology.stackexchange.com/q/2739/500)
This book was written in 1939. It's available on [archive.org](https://archive.org/details/MathematicsOfStatisticsPartI). Would you recommend it as an introduction to the mathematics of statistics for beginners?
What's your background? [Casella & Berger](http://rads.stackoverflow.com/amzn/click/0534243126) is by far my favourite book on mathematical statistics (there's a very cheap international edition). There's [Hogg & Craig](http://rads.stackoverflow.com/amzn/click/0321795431) too, which you can use as a second source since it's got a similar scope.
[![enter image description here](https://i.stack.imgur.com/9Q9iC.png)](https://i.stack.imgur.com/9Q9iC.png) That is the mean, variance and standard deviation. My question is how to get from step 4 to step 5, and similarly for the variance from step 6 to step 7.
These equations represent a particularly obscure way to make some important points that everybody ought to understand. I will therefore provide an indirect answer by highlighting the fundamentals (1-4 below), demonstrating them, and then applying them in what amounts to an equivalent proof. 1. **When you add a constant $a$ to all data $x\_i$, the mean of the new values is $a$ plus the mean of the old values.** This should be obvious, because adding $a$ to each of $n$ values adds $na$ to the sum. When the sum is divided by $n$ to get the mean, $na$ is divided by $n$ to show $na/n=a$ is added to the sum. 2. **When you multiply each $x\_i$ by a constant $b$, the mean of the new values is $b$ times the original mean.** This truly is obvious (it's a direct application of distributive and commutative laws of arithmetic). 3. **When you add a constant $a$ to all data, the variance is unchanged.** This is because the variance is the average of the squared residuals, $(x\_i-\bar x )^2$. By (1), $\bar x$ increases by $a$ and that exactly cancels the addition of $a$ to each $x\_i$, whence *the residuals are unchanged.* Consequently the variance is unchanged. 4. **When you multiply all data by a constant $b$, the variance is multiplied by $b^2$.** Since (3) tells us each $x\_i$ as well as their mean $\bar x$ are multiplied by $b$, the residuals $x\_i - \bar x$ are also multiplied by $b$. Consequently the squared residuals are multiplied by $b^2$ and so (exactly as in (2)) the mean squared residual is multiplied by $b^2$. The equations in the question attempt to demonstrate that the mean and variance of $z\_i$ are zero and one, respectively, when the $z\_i$ are formed by *standardizing* the data: that is, $-\bar x$ is first added to the data (giving the residuals) and those results are divided by the square root of the variance. Call the square root $s$, so the variance is $s^2$. **Here, then, is an alternative to the equations in the question:** By (1), the mean after the first step is $\bar x - \bar x = 0$. By (2), the mean remains zero upon division by the square root of the variance. (This should remind you of step "5" in the question.) By (3), the variance is unchanged after the first step. By (4), the variance $s^2$ is divided by the square of $s$ in the second step: but that just divides the variance by itself (step "7" in the question), giving $s^2/s^2=1$, *QED*.
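A quick numerical illustration of facts (1)-(4) and of the standardisation itself (a sketch in Python/NumPy with made-up data):

```python
import numpy as np

x = np.array([2.0, 5.0, 7.0, 10.0, 16.0])
a, b = 3.0, 2.0

# Facts (1)-(4): shifting adds to the mean, scaling multiplies mean and variance
print(np.mean(x + a), np.mean(x) + a)            # equal
print(np.var(x + a), np.var(x))                  # equal: shift leaves the variance alone
print(np.var(b * x), b**2 * np.var(x))           # equal: scaling multiplies it by b^2

# Standardising: subtract the mean, divide by the SD
z = (x - x.mean()) / x.std()
print(z.mean(), z.var())                         # 0 (up to rounding) and 1
```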
I would like to know how to alter/add NOTs when applying Duality principle. Suppose I have P = XY(X+Y) + NOT(Y), how to find its dual? My book says that while applying Duality Principle to a relation: * Change all 0s to 1s * Change all ORs to ANDs * Change all ANDs to ORs But it doesn't say anything about NOTs.
"From what I understand, there is not an algorithm to determine whether my guess is correct." It looks like your understanding is wrong. There is no algorithm that determines whether **any** given formal language grammar is ambiguous context-free. There is no algorithm that determines whether **any** given formal language grammar is inherently ambiguous context-free. However, it is possible to decide many particular grammars is (inherently) ambiguous context-free or not by some algorithms. For example, the trivial algorithm that just outputs "ambiguous context-free" determines the grammar with rules $S\to A|B$, $A\to c$, $B\to c$ as ambiguous context-free. For example, the trivial algorithm that always outputs "not inherently ambiguous context-free" determines all regular grammars as not inherently ambiguous context-free. As said by rici, your grammar is regular. There is [an algorithm](https://cstheory.stackexchange.com/questions/19102/ambiguity-in-regular-and-context-free-languages) that can decide any given regular grammar is ambiguous or not. (Immediately by definition, a regular language is context-free.). As demonstrated by Dmitri Urbanowicz, your grammar is, in fact, not ambiguous. As said by Dmitri Urbanowicz, your grammar is wrong since it cannot generate number 101. Here is a correct grammar, which can be shown to be non-ambiguous context-free. $\quad S \to BT|A$ $\quad T \to AT|A$ $\quad A \to B|0$ $\quad B \to 1|2|3|4|5|6|7|8|9$
I am curious about determining an approach to tackling a "suggested friends" algorithm.

[Facebook](http://facebook.com) has a feature in which it will recommend individuals to you whom it thinks you may be acquainted with. These users normally (excluding the edge cases in [which a user specifically recommends a friend](http://www.facebook.com/help/?faq=154758887925123#How-do-I-suggest-a-friend-to-someone?)) have a highly similar network to one's own. That is, the number of friends in common is high. I assume Twitter follows a similar path for their "Who To Follow" mechanism.

[Stephen Doyle (Igy)](https://stackoverflow.com/a/6851193/321505), a Facebook employee, suggested that the related newsfeed uses the [EdgeRank formula](http://www.quora.com/How-does-Facebook-calculate-weight-for-edges-in-the-EdgeRank-formula), which seems to indicate that more is valued than just friends, such as appearance in similar posts. Another user suggested the Google Rank system.

Facebook states their News Feed Optimization as $\sum u\_{e}w\_{e}d\_{e}$ where

$u\_{e}$ = affinity score between viewing user and edge creator

$w\_{e}$ = weight for this edge (create, comment, like, tag, etc)

$d\_{e}$ = time decay factor based on how long ago the edge was created

Summing these items is supposed to give an object's rank, which I assume, as Igy hinted, means something in a similar format is used for suggested friends. So I'm guessing that this is the way in which connections of all types are done in general via a rank system?
What you're looking for is a heuristic. No algorithm can say, given a graph of friends as the only input, whether two individuals not directly connected are friends or aren't; the friendship/acquaintance relation isn't guaranteed to be transitive (we can assume symmetry, but that might even be a stretch in real life). Any good heuristic will therefore need to be based on an understanding of how people interact, rather than some mathematical understanding of the nature of graphs of relations (although we will need to quantify the heuristic in these terms). Suggesting friends of friends with equal probability is a relatively cheap but inaccurate heuristic. For instance, my father has friends, but I wouldn't say I'm friends with any of them (although I'd probably say I'm a friend of my father's for the purposes of, e.g., a social network). Having a person at a relatively close distance doesn't necessarily make them a great candidate. Suggesting people to whom you have a great many extended connections also seems like a poor choice in general, because this will tend to lead to exponential growth of friends of people who pull ahead early on (the seven degrees of separation from Kevin Bacon game is an example of this). I suggest a circuit-based model. Assume that each link is a resistor of resistance $R$. Then the best candidate for a new friend might be the individual with the lowest equivalent resistance. Here's a poorly-executed ASCII graphics example: ``` _____ / \ a---c f | | / b d---e | \ | g h i ``` Say we want to find new friends for `a`. `a`'s current friends are `b`, `c`, and `f`. We evaluate the net equivalent resistance between `a` and each of `d`, `e`, `g`, `h`, and `i`: ``` pair resistance (a,d) 6/7 (a,e) 13/7 (a,g) 7/4 (a,h) 1/1 (a,i) inf ``` According to this heuristic, `d` is the best candidate friend, followed closely by `h`. `g` is the next best bet, followed closely by `e`. `i` can never be a candidate friend by this heuristic. Whether you find the results of this heuristic to be representative of real human social interactions is what's important. Computationally speaking, this would involve finding a subgraph containing all paths between two individuals (or, perhaps interestingly, some meaningfully selected truncation of this), then evaluating the equivalent resistance between the source and sink nodes. EDIT: So what's my social motivation for this? Well, this might be a rough model of how hard it is to get in touch with, and subsequently communicate possibly significant amounts of information through, intermediaries (friends). In CS terms (rather than physics terms), this might be construed as bandwidth between two nodes in a graph. Extensions of this system would be to allow different kinds of links between people with different weights (resistance, bandwidth, etc.) and proceed as above.
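If you want to play with this heuristic, the equivalent resistance can be computed directly from the graph Laplacian. The sketch below (my own, in Python/NumPy) uses a small hypothetical graph rather than trying to reproduce the ASCII example and its table exactly:

```python
import numpy as np

def effective_resistance(nodes, edges, s, t):
    """Equivalent resistance between s and t when every edge is a 1-ohm resistor,
    computed via the pseudoinverse of the graph Laplacian. Assumes s and t lie
    in the same connected component (disconnected pairs would be 'infinite')."""
    idx = {v: i for i, v in enumerate(nodes)}
    L = np.zeros((len(nodes), len(nodes)))
    for u, v in edges:
        i, j = idx[u], idx[v]
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    e = np.zeros(len(nodes))
    e[idx[s]], e[idx[t]] = 1.0, -1.0
    return float(e @ np.linalg.pinv(L) @ e)

# Hypothetical 4-person friendship graph: a ring a-b-c-d-a.
nodes, edges = list("abcd"), [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(effective_resistance(nodes, edges, "a", "c"))  # 1.0  (two parallel 2-ohm paths)
print(effective_resistance(nodes, edges, "a", "b"))  # 0.75 (1 ohm in parallel with 3 ohms)
```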
Given a string of values $AAAAAAAABC$, the Shannon Entropy in log base $2$ comes to $0.922$. From what I understand, in base $2$ the Shannon Entropy rounded up is the minimum number of bits in binary to represent a single one of the values. Taken from the introduction on this wikipedia page: <https://en.wikipedia.org/wiki/Entropy_%28information_theory%29> So, how can three values be represented by one bit? $A$ could be $1$, $B$ could be $0$; but how could you represent $C$? Thank you in advance.
The entropy you've calculated isn't really for the specific string but, rather, for a random source of symbols that generates $A$ with probability $\tfrac{8}{10}$, and $B$ and $C$ with probability $\tfrac1{10}$ each, with no correlation between successive symbols. The calculated entropy for this distribution, $0.922$ means that you can't represent strings generated from this distribution using less than $0.922$ bits per character, on average. It might be quite hard to develop a code that will achieve this rate.\* For example, Huffman coding would allocate codes $0$, $10$ and $11$ to $A$, $B$ and $C$, respectively, for an average of $1.2$ bits per character. That's quite far from the entropy, though still a good deal better than the naive encoding of two bits per character. Any attempt at a better coding will probably exploit the fact that even a run of ten consecutive $A$s is more likely (probability $0.107$) than a single $B$. --- \* Turns out that it isn't hard to get as close as you want – see the other answers!
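A quick check of both numbers (a sketch in Python):

```python
from math import log2

p = {"A": 0.8, "B": 0.1, "C": 0.1}

entropy = -sum(q * log2(q) for q in p.values())
print(entropy)                                  # about 0.922 bits per symbol

# Huffman code from the answer: A -> 0, B -> 10, C -> 11
code_len = {"A": 1, "B": 2, "C": 2}
print(sum(p[s] * code_len[s] for s in p))       # 1.2 bits per symbol on average
```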
I have a hierarchical model that I need to validate. My model is as follows: we have a collection of $\lambda\_i$ that we draw from $Gamma(\alpha,\beta)$. Then, we draw our data point $y\_i$ from $Poisson(\lambda\_i)$. I get a distribution of $\alpha,\beta,\{\lambda\_i\}$ via a Gibbs sampler combined with a Metropolis step. This part is fine. My question is: how do I validate such a model? I have my set of data, each point corresponding to one particular $\lambda\_i$. I'm not sure what statistical tests or other steps I should take to check it.
Posterior predictive checks, outlined in Gelman et al (1996), are an obvious starting point. Given how simple the model is, it probably makes sense to use graphical checks. Plot the histogram of $(y\_1,...,y\_n)$ against histograms of several posterior predictive replications $(y\_1^{rep},...,y\_n^{rep})$. If you spot a feature that doesn't fit, you can formalize things by defining an appropriate discrepancy statistic and computing the posterior predictive p-value of your model against that statistic.
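A minimal sketch of such a check in Python, assuming you already have posterior draws of the $\lambda\_i$ from your sampler (the arrays and the variance/mean discrepancy statistic below are placeholders, not prescriptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholders: S posterior draws of the n rates (shape (S, n)) from your Gibbs
# sampler, and the observed counts y. Replace these with your actual output/data.
S, n = 500, 50
lambda_draws = rng.gamma(2.0, 1.5, size=(S, n))
y = rng.poisson(rng.gamma(2.0, 1.5, size=n))

# One replicated data set per posterior draw.
y_rep = rng.poisson(lambda_draws)                     # shape (S, n)

# Pick a discrepancy statistic, e.g. the variance/mean ratio (overdispersion),
# and compute the posterior predictive p-value against it.
T = lambda data: data.var() / data.mean()
ppp = np.mean([T(y_rep[s]) >= T(y) for s in range(S)])
print("posterior predictive p-value:", ppp)           # values near 0 or 1 signal misfit
```

For the graphical version, plot the histogram of `y` next to histograms of a handful of rows of `y_rep` and look for systematic discrepancies.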
Can there be a computer without software (only hardware) which can produce meaningful output? * "Software" would be for example an operating system (whether in the level of "firmware" or not). * "Meaningful output" would be for example anything useful for the user, but a practical example might be the solution to any "mathematical exercise" (addition/subtraction/multiplication/division and so forth).
Despite your attempt at more precise specification, it seems to me there is still a problem with understanding what you call software. Even without requiring Turing power, I assume that your computer is a little bit more than a logical circuit that always computes the same result, and does manipulate some data, if only as input. Then, how do you distinguish data from software?

One important point of the theory of computing is precisely that there is no such distinction, thanks to Gödel numbering. For example, a computer C running a program P on input x can be seen as a "*raw computer*" (i.e., without software) running on an input which is a pair <P,x>. Conversely and more to the point, consider a would-be "*raw computer*" C that performs some useful computation on some data x to produce a result f(x). Now you can cut your data into a pair of 2 pieces x=<x1,x2>, and view x1 as a program run by your *raw computer* C on input x2, to produce f(<x1,x2>), i.e. f(x). So C is now a computer that uses software x1 to compute on input x2. This is related to techniques called partial evaluation.

Then, whether you have achieved an example of a useful *raw computer* depends only on the way you look at it. It's all in the eyes of the beholder.

I expect there are other ways to discuss this. For example, a purely hardware circuit could be represented in a hardware description language, and this linguistic representation could then be interpreted by a circuit description emulator. Then you might say that the circuit is hardware, while its linguistic description is software. But, again, where is the distinction? Software is always represented physically in computers, even though it is linguistic in nature, exactly like our circuitry, which is physical with a linguistic representation. And that applies as well to parts of the circuit. Is a given part of the circuit a piece of hardware, or just a physical representation of software written in a circuit description language ("language" is the important word)?
### background Several years ago, when I was an undergraduate, we were given a homework on amortized analysis. I was unable to solve one of the problems. I had asked it in [comp.theory](http://groups.google.com/group/comp.theory/browse_thread/thread/bb34b65ca22641f8/b2b6ecb44bc3e912), but no satisfactory result came up. I remember the course TA insisted on something he couldn't prove, and said he forgot the proof, and ... [you know what]. Today, I recalled the problem. I was still eager to know, so here it is... ### The Question > > Is it possible to implement **a stack** using **two queues**, so that both **PUSH** > and **POP** operations run in **amortized time O(1)**? If yes, could you tell > me how? > > > Note: The situation is quite easy if we want to implement **a queue** with **two stacks** (with corresponding operations **ENQUEUE** & **DEQUEUE**). Please observe the difference. PS: The above problem is not the homework itself. The homework did not require any lower bounds; just an implementation and the running time analysis.
I claim we have $\Theta(\sqrt{N})$ amortized cost per operation. Alex's algorithm gives the upper bound. To prove the lower bound I give a worst-case sequence of PUSH and POP moves. The worst case sequence consists of $N$ PUSH operations, followed by $\sqrt{N}$ PUSH operations and $\sqrt{N}$ POP operations, again followed by $\sqrt{N}$ PUSH operations and $\sqrt{N}$ POP operations, etc. That is: $ PUSH^N (PUSH^{\sqrt{N}} POP^{\sqrt{N}})^{\sqrt{N}} $ Consider the situation after the initial $N$ PUSH operations. No matter how the algorithm works, at least one of the queues must have at least $N/2$ entries in it. Now consider the task of dealing with the (first set of) $\sqrt{N}$ PUSH and POP operations. Any algorithmic tactic whatsoever must fall into one of two cases: In the first case, the algorithm will use both queues. The larger of these queues has at least $N/2$ entries in it, so we must incur a cost of at least $N/2$ queue operations in order to eventually retrieve even a single element we ENQUEUE and later need to DEQUEUE from this larger queue. In the second case, the algorithm does not use both queues. This reduces the problem to simulating a stack with a single queue. Even if this queue is initially empty, we can't do better than using the queue as a circular list with sequential access, and it appears straightforward that we must use at least $\sqrt{N}/2$ queue operations on average for each of the $2\sqrt{N}$ stack operations. In both cases, we required at least $N/2$ time (queue operations) in order to handle $2\sqrt{N}$ stack operations. Because we can repeat this process $\sqrt{N}$ times, we need $N\sqrt{N}/2$ time to process $3N$ stack operations in total, giving a lower bound of $\Omega(\sqrt{N})$ amortized time per operation.
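To make the single-queue case concrete, here is the standard rotation trick for simulating a stack with one queue (a sketch in Python; this is the naive construction, not the $\sqrt{N}$-amortized two-queue algorithm discussed above): PUSH costs a number of queue operations linear in the current size, POP costs one.

```python
from collections import deque

class QueueStack:
    """Stack simulated with ONE FIFO queue: after each PUSH the queue is rotated
    so the newest element sits at the front. PUSH does O(size) queue operations,
    POP does O(1)."""
    def __init__(self):
        self.q = deque()
    def push(self, x):
        self.q.append(x)                      # enqueue
        for _ in range(len(self.q) - 1):      # rotate the older elements behind it
            self.q.append(self.q.popleft())
    def pop(self):
        return self.q.popleft()               # dequeue

s = QueueStack()
for v in [1, 2, 3]:
    s.push(v)
print(s.pop(), s.pop(), s.pop())              # 3 2 1
```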
I'm busy with a supervised machine learning problem where I am predicting contract cancellation. Although a lengthy question, I do hope someone will take the time, as I'm convinced it will help others out there (I've just been unable to find ANY solutions that have helped me).

**I have the following two datasets:**

*1) "Modelling Dataset"*

Contains about 400k contracts (rows) with 300 features and a single label (0 = "Not Cancelled", 1 = "Cancelled"). Each row represents a single contract, and each contract is only represented once in the data. There are 350k "Not Cancelled" and 50k "Cancelled" cases. Features are all extracted as at a specific date for each contract. This date is referred to as the "Effective Date". For "Cancelled" contracts, the "Effective Date" is the date of cancellation. For "Not Cancelled" contracts, the "Effective Date" is a date, say, 6 months ago. This will be explained in a moment.

*2) "Live Dataset"*

Contains 300k contracts (rows) with the same list of 300 features. All these contracts are "Not Cancelled" of course, as we want to predict which of them will cancel. These contracts were followed for a period of 2 months, and I then added a label to this data to indicate whether each actually ended up cancelling in those two months: 0 = "Not Cancelled", 1 = "Cancelled".

**The problem:**

I get amazing results on the "Modelling Dataset" (random train/test split) (e.g. Precision 95%, AUC 0.98), but as soon as that model is applied to the "Live Dataset", it performs poorly (it cannot predict well which contracts end up cancelling) (e.g. Precision 50%, AUC 0.7).

On the Modelling Dataset, the results are great, almost irrespective of model or data preparation. I test a number of models (e.g. SkLearn random forest, Keras neural network, Microsoft GbmLight, SkLearn Recursive feature elimination). Even with default settings, the models generally perform well. I've standardized features. I've binned features to attempt to improve how well the model generalizes. Nothing has helped it generalize to the "Live Dataset".

**My suspicion:**

In my mind, this is not an over-training issue because I've got a test set within the "Modelling Dataset" and those results are great on the test set. It is not a modelling or even a hyper-parameter optimization issue, as the results are already great. I've also investigated whether there are significant differences in the profile of the features between the two datasets by looking at histograms, feature by feature. Nothing is worryingly different.

I suspect the issue lies in the fact that the same contracts that are marked as "Not Cancelled" in the "Modelling Dataset", which the model of course trains to recognize as "Not Cancelled", are basically the exact same contracts in the "Live Dataset", except that 6 months have now passed. I suspect that the features for the "Not Cancelled" cases have not changed enough to now make the model recognize some of them as about to be "Cancelled". In other words, the contracts have not moved enough in the feature space.

**My questions:**

Firstly, does my suspicion sound correct?

Secondly, if I've stated the problem to be solved incorrectly, how would I then set up the problem statement if the purpose is to predict cancellation of something like contracts (when the data on which you train will almost certainly contain the data on which you want to predict)?

For the record, the problem statement I've used here is similar to the way others have done this. And they reported great results.
But I'm not sure that the models were ever tested in real life. In other cases, the problem to be solved was slightly different, e.g. hotel booking cancellations, which is different because there is a stream of new incoming bookings and booking duration is relatively short, so there are no bookings in common between the modelling and live datasets. Contracts, on the other hand, have long durations and can be cancelled at any time, or sometimes never.
If your model makes a prediction 6 months into the future, then it doesn't make sense to judge its performance before 6 months. If only 2 months have passed, then possibly 2/3 of the true positives have yet to reveal their true nature and you are arriving at a premature conclusion. To test this theory, I would train a new model to predict 2 months out and use that to get an approximation of live accuracy while you wait 4 more months for the first model. Of course, there could be other problems, but this is what I would try first.
Let's say I establish the following rules from a data set: **Rule A:** If you exercise daily, you have a 70% chance of having a BMI under 28, based on 100 cases from the data. **Rule B:** If you eat fast food less than once per week, you have a 90% chance of having a BMI under 28, based on 10 cases from the data. The challenge for me is that Rule B seems to show a stronger correlation between a factor and a BMI under 28, but it is observed in much fewer cases than in Rule A, and therefore has a higher variance. How can I mathematically determine which rule is more significant?
If you want to know which effect is stronger, then it makes sense to just look at your point estimates and choose case b. However, your instinct, that there is considerable uncertainty in rule b, is right. So, it is good practice to also consider the uncertainty in these estimates. One common way of doing so is to look at the confidence intervals. So in case a you have 70 successes out of 100 tries, leading to a 95% confidence interval [60.0, 78.8], while in case b you have 9 successes out of 10 tries leading to a 95% confidence interval of [55.5, 99.7]. These confidence intervals are based on the binomial distribution. Alternatives exist.
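The answer doesn't say which interval method it used, but exact (Clopper-Pearson) intervals computed from the Beta distribution give essentially the same numbers; a small sketch in Python:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

print(clopper_pearson(70, 100))   # roughly (0.600, 0.788) -- rule A
print(clopper_pearson(9, 10))     # roughly (0.555, 0.997) -- rule B
```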
In a recitation video for [MIT OCW 6.006](http://www.youtube.com/watch?feature=player_embedded&v=P7frcB_-g4w) at 43:30, Given an $m \times n$ matrix $A$ with $m$ columns and $n$ rows, the 2-D peak finding algorithm, where a peak is any value greater than or equal to it's adjacent neighbors, was described as: *Note: If there is confusion in describing columns via $n$, I apologize, but this is how the recitation video describes it and I tried to be consistent with the video. It confused me very much.* > > 1. Pick the middle column $n/2$ // *Has complexity $\Theta(1)$* > 2. Find the max value of column $n/2$ //*Has complexity $\Theta(m)$ because there are $m$ rows in a column* > 3. Check horiz. row neighbors of max value, if it is greater then a peak has been found, otherwise recurse with $T(n/2, m)$ //*Has complexity $T(n/2,m)$* > > > Then to evaluate the recursion, the recitation instructor says > > $T(1,m) = \Theta(m)$ because it finds the max value > > > $$ T(n,m) = \Theta(1) + \Theta(m) + T(n/2, m) \tag{E1}$$ > > > I understand the next part, at 52:09 in the video, where he says to treat $m$ like a constant, since the number of rows never changes. But I don't understand how that leads to the following product: $$ T(n,m) = \Theta(m) \cdot \Theta(\log n) \tag{E2}$$ I think that, since $m$ is treated like a constant, it is thus treated like $\Theta(1)$ and eliminated in $(E1)$ above. But I'm having a hard time making the jump to $(E2)$. Is this because we are now considering the case of $T(n/2)$ with a constant $m$? I think can "see" the overall idea is that a $\Theta(\log n)$ operation is performed, at worst, for m number of rows. What I'm trying to figure out is how to describe the jump from $(E1)$ to $(E2)$ to someone else, i.e. gain real understanding.
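For concreteness, here is a short Python sketch of the algorithm described above (my own transcription, not the course's code). Each iteration does $\Theta(m)$ work to find the column maximum and halves the number of candidate columns, which is where the $\Theta(m) \cdot \Theta(\log n)$ in $(E2)$ comes from.

```python
def find_2d_peak(A):
    """Return (row, col) of a 2-D peak (an entry >= its neighbours): binary search
    over columns, taking the maximum of the middle column each time."""
    m, n = len(A), len(A[0])
    lo, hi = 0, n - 1
    while True:
        mid = (lo + hi) // 2
        r = max(range(m), key=lambda i: A[i][mid])       # Theta(m): max of middle column
        left  = A[r][mid - 1] if mid > 0     else float("-inf")
        right = A[r][mid + 1] if mid < n - 1 else float("-inf")
        if A[r][mid] >= left and A[r][mid] >= right:
            return r, mid                                # column max + horizontal check => peak
        if left > A[r][mid]:
            hi = mid - 1                                 # recurse into the left half
        else:
            lo = mid + 1                                 # recurse into the right half

A = [[10,  8, 10, 10],
     [14, 13, 12, 11],
     [15,  9, 11, 21],
     [16, 17, 19, 20]]
r, c = find_2d_peak(A)
print(r, c, A[r][c])
```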
This is murky indeed. I can't say I agree with the distinctions mentioned about parameterization. I think there are parametric and non-parametric examples of both. My 2 cents: a major distinction is the direction of causality in the system. In ML the "real-world" direction of causality can go in either direction x-to-y or y-to-x, but the model always has y (the "target") at the output and x (the "observation") at the input. In sys ID, x tends to drive the system, thus generating y.
My research is about clustering a huge data set. Right now I'm not using any technique for feature selection, because I only need 3 attributes from each row. I hope I'm making myself clear: let's say I have 1000 rows of data, and each row has at least 15 attributes (fields of different types, e.g. IP address, timestamp, numbers, etc.). Of these 15 attributes I'm only using 3. Do I need to use PCA? I'm clustering only when there is a perfect match. I'm still reading about PCA; it talks about the dimensionality of the data and how to determine whether PCA is needed. Will the PCA technique make my clusters more accurate?
The Nelder-Mead simplex algorithm seems to work well. It is implemented in Java by the Apache Commons Math library at <https://commons.apache.org/math/>. I've also written a paper about Hawkes processes at [Point Process Models for Multivariate High-Frequency Irregularly Spaced Data](http://vixra.org/abs/1211.0094). felix, using exp/log transforms seems to ensure positivity of the parameters. As for the small alpha thing, search arxiv.org for a paper called "Limit theorems for nearly unstable Hawkes processes".
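In case it helps, here is what that looks like with SciPy's Nelder-Mead and a log transform for positivity. The objective below is just a stand-in, not a real Hawkes likelihood:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective standing in for a negative Hawkes log-likelihood: the actual
# parameters (mu, alpha, beta) must be positive, so we optimise over their logs.
def neg_log_lik(log_params):
    mu, alpha, beta = np.exp(log_params)      # positivity by construction
    return (mu - 1.0) ** 2 + (alpha - 0.5) ** 2 + (beta - 2.0) ** 2

res = minimize(neg_log_lik, x0=np.zeros(3), method="Nelder-Mead")
print(np.exp(res.x))   # back-transform to the original parameter scale
```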
Suppose I have a dichotomous variable - gender:

Male coded as 0
Female coded as 1

Frequency of male - 30
Frequency of female - 20

The mean of a dichotomous variable is just the proportion which has been coded as 1. So, in this case, I believe it is 30/50.

The confusing part is solving for the standard deviation. How can I do that? Also, is the answer a meaningful one?
If you code males as 0 and females as 1, the mean will be $\frac{20}{50}$, not $\frac{30}{50}$. For the SD, just use [the standard formula](https://en.wikipedia.org/wiki/Standard_deviation) with your coding, $x\_i\in\{0,1\}$ and the mean $\frac{20}{50}$ you calculated above. No, I don't think this answer is meaningful. In reporting the mean and SD, we summarize a distribution in two numbers. If we have a binary distribution, we have a much easier and more intuitive way of summarizing it in two numbers: just report the numbers of males and females.
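For completeness, a quick numerical check (in Python/NumPy) with your counts:

```python
import numpy as np

x = np.array([0] * 30 + [1] * 20)       # 30 males coded 0, 20 females coded 1
p = x.mean()                             # 20/50 = 0.4
print(p, np.sqrt(p * (1 - p)))           # population SD: sqrt(p(1-p)) ~ 0.49
print(x.std(), x.std(ddof=1))            # NumPy: population and sample versions
```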
What are theoretical reasons to not handle missing values? Gradient boosting machines, regression trees handle missing values. Why doesn't Random Forest do that?
> > "What are [the] theoretical reasons [for RF] to not handle missing values? Gradient boosting machines, regression trees handle missing values. Why doesn't Random Forest do that?" > > > RF **does** handle missing values, just not in the same way that CART and other similar decision tree algorithms do. User777 correctly describes the two methods used by RF to handle missing data (median imputation and/or proximity based measure), whereas Frank Harrell correctly describes how missing values are handled in CART (surrogate splits). For more info, see links on missing data handling for [CART](http://web.archive.org/web/20151204113413/http://www.salford-systems.com/products/cart/faqs/what-are-intelligent-surrogates-for-missing-values) (or it's FOSS cousin: [RPART](http://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf)) and [RF](https://www.stat.berkeley.edu/%7Ebreiman/RandomForests/cc_home.htm#missing1). An answer to your **actual question** is covered clearly, IMHO, in Ishwaran et al's 2008 paper entitled [Random Survival Forests](http://arxiv.org/pdf/0811.1645.pdf). They provide the following plausible explanation for why RF does not handle missing data in the same way as CART or similar single decision tree classifiers: > > "Although surrogate splitting works well for trees, the method may not > be well suited for forests. Speed is one issue. Finding a surrogate > split is computationally intensive and may become infeasible when > growing a large number of trees, especially for fully saturated trees > used by forests. Further, surrogate splits may not even be meaningful > in a forest paradigm. RF randomly selects variables when splitting a > node and, as such, variables within a node may be uncorrelated, and a > reasonable surrogate split may not exist. Another concern is that > surrogate splitting alters the interpretation of a variable, which > affects measures such as [Variable Importance]. > > > For these reasons, a different strategy is required for RF." > > > This is an aside, but for me, this calls into question those who claim that RF uses an ensemble of CART models. I've seen this claim made in many articles, but I've never seen such statements sourced to any authoritative text on RF. For one, the trees in a RF are grown [**without pruning**](https://www.stat.berkeley.edu/%7Ebreiman/RandomForests/cc_home.htm#overview), which is usually not the standard approach when building a CART model. Another reason would be the one you allude to in your question: CART and other ensembles of decision trees handle missing values, whereas [the original] RF does not, at least not internally like CART does. With those caveats in mind, I think you could say that RF uses an ensemble of **CART-like** decision trees (i.e., a bunch of unpruned trees, grown to their maximum extent, without the ability to handle missing data through surrogate splitting). Perhaps this is one of those punctilious semantic differences, but it's one I think worth noting. --- **EDIT**: On my side note, which is unrelated to the actual question asked, I stated that "I've never seen such statements sourced to any authoritative text on RF". Turns out Breiman **DID** specifically state that CART decision trees are used in the original RF algorithm: > > "The simplest random forest with random features is formed by selecting > at random, at each node, a small group of input variables to split on. > Grow the tree using *CART methodology* to maximum size and do not prune." 
[My emphasis] > > > Source: p.9 of [Random Forests. Breiman (2001)](https://www.stat.berkeley.edu/%7Ebreiman/randomforest2001.pdf) However, I still stand (albeit more precariously) on the notion that these are ***CART-like*** decision trees in that they are grown without pruning, whereas a CART is normally never run in this configuration as it will almost certainly over-fit your data (hence the pruning in the first place).
I was going through [the following slides](http://fsl.cs.illinois.edu/images/7/75/CS522-Fall-2018-Lambda-slides.pdf) and I wanted to show the following: $$ \lambda x. x \equiv\_{\alpha} \lambda y . y$$ formally. They define a an $\alpha$-conversion on page 15 as follows: $$ \lambda x . E = \lambda z.(E[x \leftarrow z])$$ however, I wasn't sure how to formally show the statement I am trying to show. Essentially I guess I don't know how to formally show in a proof that two distinct objects actually belong to this same equivalence class. The intuition and idea is clear, but how do I know if I've shown the statement? In fact if someone can show me how to do the more complicated one too that would be really helpful too: $$ \lambda x.x (\lambda y . y) \equiv\_{\alpha} \lambda y . y (\lambda x. x)$$ how do I know if I've shown what is being asked? --- Actually I think page 16 is the one thats confusing me most: > > Using the equation above, one has now the possibility to prove > $\lambda$-expressions "equivalent". To capture this provability > relation formally, we let $E \equiv\_{\alpha} E^\prime$ denote the fact > that the equation $E = E^\prime$ can proved using standard equational > deduction form the equational axioms above (($\alpha$) plus those for > substitution). > > > **Exercise 3** *Prove the following equivalences of $\lambda$-expressions:* > > > * $\lambda x.x \equiv\_{\alpha} \lambda y.y$, > * $\lambda x.x (\lambda y.y) \equiv\_{\alpha} \lambda y.y (\lambda x.x)$, > * $\lambda x.x(\lambda y.y) \equiv\_\alpha \lambda y.y(\lambda y.y)$. > > > what does: > > can be proved using standard equational deduction from the > equational axioms above > > > mean? --- Since there is already an answer that is not helping (because I don't understand the notation) I will add what I thought was the answer but I'm not sure: I would have guessed that: $$ \lambda x. x \equiv\_{\alpha} \lambda y . y$$ if and only if there is a variable such that if we plug it into the lambda functions evaluates to the same function with the same variables. i.e. $$ \lambda x. x \equiv\_{\alpha} \lambda y . y \iff \exists z \in Var : \lambda x . x = \lambda z. ( (\lambda y . y)[y \leftarrow z] )$$ if we set $z = x$ we get: $$\lambda z. ( (\lambda y . y)[y \leftarrow z] )$$ $$\lambda x. ( (\lambda y . y)[y \leftarrow x] )$$ $$\lambda x. (\lambda x .x )$$ which I assume the last line is the same as $\lambda x .x$ but I am not sure. If that were true then I'd show I can transform $\lambda y . y$ to $\lambda x . x$ which is what I assume the equivalence class should look like. Where did I go wrong?
By definition of substitution we have $$x [x \leftarrow z] = z$$ therefore $$\lambda z . x [x \leftarrow z] = \lambda z . z \tag{1}$$ because $\lambda$-abstraction is a congruence (it preserves equality). By the definition of $\alpha$-equality we have $$\lambda x . x = \lambda z . x [x \leftarrow z] \tag{2}.$$ By transitivity of equality we get from (1) and (2) that $$\lambda x . x = \lambda z . z$$ If you require more details than this, you should use a computer proof assistant to check the details.
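If it helps to see the equivalence class made concrete: $\alpha$-equivalence of closed terms can be decided mechanically by converting to de Bruijn indices, where the names of bound variables disappear. A small sketch in Python (my own; the term representation is ad hoc, and free variables are not handled):

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", f, a)

def de_bruijn(term, env=()):
    """Convert a named (closed) lambda term to de Bruijn indices; alpha-equivalent
    terms map to the same nameless term. A free variable raises ValueError."""
    kind = term[0]
    if kind == "var":
        return ("var", env.index(term[1]))               # distance to the binder
    if kind == "lam":
        return ("lam", de_bruijn(term[2], (term[1],) + env))
    return ("app", de_bruijn(term[1], env), de_bruijn(term[2], env))

def alpha_eq(s, t):
    return de_bruijn(s) == de_bruijn(t)

id_x = ("lam", "x", ("var", "x"))
id_y = ("lam", "y", ("var", "y"))
print(alpha_eq(id_x, id_y))                              # True

t1 = ("lam", "x", ("app", ("var", "x"), ("lam", "y", ("var", "y"))))
t2 = ("lam", "y", ("app", ("var", "y"), ("lam", "x", ("var", "x"))))
print(alpha_eq(t1, t2))                                  # True
```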
The [raven paradox](https://en.wikipedia.org/wiki/Raven_paradox) is roughly: *"The statement **All ravens are black** is logically equivalent to **All non-black entities are not ravens**. Whenever we observe a non-black non-raven, the probability for the latter statement to be true rises. In turn, the probability for the former statement must also rise. Therefore, we can infer something about the colour of ravens without looking at them."* I want to attack the paradox from a slightly different angle than the [standard bayesian resolution](https://en.wikipedia.org/wiki/Raven_paradox#Standard_Bayesian_solution) (which states that one can in fact infer a tiny amount of information about ravens from looking at non-ravens). Let us assume, for simplicity and to increase the magnitude of the expected effect, that only frogs (which, unbeknownst to us, are always green) and ravens are interesting to us. Furthermore, we only recognise the colours black and green. So initially, we hold equally probable that an observation will reveal a green frog, a black frog, a green raven, or a black raven ($GF, BF, GR, BR$). So $p(GF) = p(BF) = p(GR) = p(BR) = \frac{1}{4}$. We now observe a number of green frogs. This updates our probabilities such that $p(GF) > \frac{1}{4}$. Since we did not make any other sightings, by symmetry all the other probabilities must be $\frac{1 - p(GF)}{3}$. What is now the probability of a raven being black? Before the observations, it was clearly $\frac{1}{2}$. Now it is: $$p(B|R) = \frac{p(BR)}{p(R)} = \frac{p(BR)}{p(BR) + p(GR)} = \frac{1}{2}$$ Weird. I would have thought we learned something about black ravens. Let's double check. What's the probability of a non-black entity to be a non-raven? In other words, what's the probability of a green animal being a frog? $$p(F|G) = \frac{p(GF)}{p(G)} = \frac{p(GF)}{p(GF) + p(GR)} > \frac{1}{2}$$ So while the probability of raven-ness implying blackness did not increase, the probability of non-blackness implying non-raven-ness did increase. How can this be? Is this a known, different paradox? Is my explanation an acceptable resolution to the raven paradox? Or is the probability of a statement being true not so easily related to its conditional probability?
I don't think tracking changes to $\Pr(B|R)$ captures completely how the probability of the raven hypothesis changes. All ravens are black means for all things, if a thing has the predicate raven (R), then it has the predicate black (B). So what is $\Pr(\forall x \,\ Rx \rightarrow Bx)$? $\forall x \,\ Rx \rightarrow Bx $ is false if and only if $\exists x \,\ Rx\land \neg Bx$. Therefore $\Pr(\forall x \,\ Rx \rightarrow Bx) = 1 - \Pr(\exists x \,\ Rx\land \neg Bx)$. This is the probability whose changes we need to track as we learn about the world. It does not seem, in general, to be equal to $\Pr(B|R)$ which is what you've looked it. Instead it is $1 - (1- \Pr(B|R))\Pr(R)$. Rewrite the second term as $\left(1 - \Pr(B|R)\right)\frac{\Pr(BR)}{\Pr(B|R)}$. The conditional probability doesn't change, but because we've seen green frogs, the odds of seeing black frogs, green ravens and also black ravens have gone down. So $\Pr(BR)$ becomes smaller. But then the term becomes smaller... and the probability that all ravens are black becomes greater. Ceteris paribus, it's good for the contention that all ravens are black if ravens are rare, because there need to be ravens for it to have any chance of being wrong. That's not captured when focusing on $\Pr(B|R)$ alone and it seems to suffice to restore the paradoxical conclusion.
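A quick numerical check of this in the toy frog/raven model from the question (my own sketch, in Python): as green-frog observations pile up, $\Pr(B|R)$ stays at $1/2$ while the probability that all ravens are black, computed as above, rises.

```python
# Start from the symmetric prior and shift mass toward green frogs,
# mimicking a run of green-frog observations.
for p_gf in [0.25, 0.4, 0.6, 0.8]:
    rest = (1 - p_gf) / 3                                # p(BF) = p(GR) = p(BR) by symmetry
    p_black_given_raven = rest / (rest + rest)           # always 1/2
    p_raven = 2 * rest
    p_all_ravens_black = 1 - (1 - p_black_given_raven) * p_raven
    print(p_gf, p_black_given_raven, round(p_all_ravens_black, 3))
```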
You are invited to a party. Suppose the times at which invitees arrives are independent uniform(0,1) random variables. Suppose that aside from yourself the number of other people who are invited is a Poisson random variable with mean 10. I want to find out 1. The expected number of people who arrive before you 2. The probability that you are the nth person to arrive Answer: 1. Let X be the number of people who arrive before you.Because you are equally likely to be the first,second,or third,...,or nth arrival. P{X=I}=$\frac1n\displaystyle\sum^n\_1 i =\frac{n(n+1)}{2n}, i =0,...,n$ Therefore E[X]=$\frac{n+1}{2}$ and 2. P{X=n}=$\frac{e^{-10}10^n}{n!}$ Are these answers correct? The author has not provided the correct answers.
To check (1), note that the expected number of people arriving after you must equal the expected number of people arriving before you and that the sum of those is the expected number of people (besides yourself), which the problem states is equal to 10. Therefore the answer must be $10/2=5$. For (2), use the hint you provided in your tags: these are conditional-probability problems. Thus, we partition the event "I am the $n^\text{th}$ person to arrive" into the disjoint events * "I am at position $n$" and "$k-1=n-1$ people arrive with me", * "I am at position $n$" and "$k-1=n$ people arrive with me", * "I am at position $n$" and "$k-1=n+1$ people arrive with me", *etc*, with $k=n, n+1, n+2, \ldots$. Each of those events "$k-1$ people arrive with me" occurs with the Poisson probability associated with $k-1$. Conditional on $k-1$, the chance that I am in position $n$ is the same as the chance that anyone else is in that position, which therefore must be $1/k$ when there are $k-1$ people plus myself. In terms of $\lambda=10$ and letting $X$ be a Poisson$(\lambda)$ variable with distribution function $F\_\lambda$, the answer therefore is $$\sum\_{k=n}^\infty \frac{1}{k} e^{-\lambda} \frac{\lambda^{k-1}}{(k-1)!}=\frac{1}{\lambda}e^{-\lambda}\sum\_{k=n}^\infty \frac{\lambda^k}{k!}=\frac{1}{\lambda}\Pr(X\ge n) = \frac{1}{\lambda}(1-F\_\lambda(n-1)).$$ A [well-known relationship between the Poisson distribution and the Gamma distribution](https://stats.stackexchange.com/questions/10546) equates this to $$\frac{1}{\lambda}(1-F\_\lambda(n-1)) = \frac{1}{\lambda}\Gamma(\lambda;n)$$ where $$\Gamma(\lambda;n) = \frac{1}{\Gamma(n)}\int\_0^\lambda t^{n-1} e^{-t}\,\mathrm{d}t$$ is the Gamma distribution of parameter $x$. Being proportional to a $\chi^2$ probability, which is a basic statistical computation, it can be obtained with just about any statistical software, as illustrated in `R` here, using its `pgamma` function. [![Figure](https://i.stack.imgur.com/kRPX7.png)](https://i.stack.imgur.com/kRPX7.png) ```R library(data.table) library(ggplot2) lambda <- 10 # The Poisson parameter x <- 1:30 X <- data.table(n=rep(x, 3), lambda=rep(c(5,10,20), each=length(x))) # # Here is where the chances are calculated. # X[, probability := pgamma(lambda, n)/lambda] # # Plot them. # ggplot(X, aes(n, probability)) + geom_point(aes(color=factor(lambda)), size=2, alpha=0.8) + guides(color=guide_legend(title=expression(lambda), title.vjust=0)) + theme(legend.title=element_text(size=24)) + ggtitle("(Unconditional) Chance of Being n in Line") ```
As far as I understand, the geometric complexity theory program attempts to prove $VP \neq VNP$ by showing that the permanent of a complex-valued matrix is much harder to compute than the determinant. The question I had after skimming through the GCT papers: would this immediately imply $P \neq NP$, or is it merely a major step towards this goal?
*Assuming the generalized Riemann hypothesis* (GRH), the following quite strong connections are known between $ VP= VNP $ and the collapse of the polynomial hierarchy ($ {\rm PH}$): > > 1. If $ VP= VNP\,$ (over any field) then the polynomial hierarchy collapses to the second level; > 2. If $ VP=VNP\,$ over a field of characteristic $ 0 $, then $ \rm{NC}^3/{\rm poly}={\rm P}/{\rm poly} = {\rm PH}/{\rm poly} $; > 3. If $ VP=VNP\,$ over a field of finite characteristic $ p $, then $ \rm{NC}^2/{\rm poly}={\rm P}/{\rm poly} = {\rm PH}/{\rm poly} $. > > > These are results from: Peter Burgisser, "[Cook’s versus Valiant’s hypothesis](http://dx.doi.org/10.1016/S0304-3975%2899%2900183-8)", Theor. Comp. Sci., 235:71–88, 2000. See also: Burgisser, "[Completeness and Reduction in Algebraic Complexity Theory](http://math-www.uni-paderborn.de/agpb/work/habil.ps)", 1998.
Forgive me, I am a newbie to random variables. I have seen a lot of courses which introduce `Discrete Random Variables`, and they are always illustrated with a histogram or bar chart, like the one below.

[![enter image description here](https://i.stack.imgur.com/PTNU7.png)](https://i.stack.imgur.com/PTNU7.png)

But in my understanding, it should be represented by a graph like the one below, where each probability value for lowercase $x \in \{2,3,4,5\}$ is a point, not an area (forgive my bad drawing). My question is: why should it be illustrated by a histogram or bar chart instead of points? Please correct me if there is something wrong. Thanks.

[![enter image description here](https://i.stack.imgur.com/lDawT.png)](https://i.stack.imgur.com/lDawT.png)
"Noise" does not mean that something is wrong, or incorrect, and it does not have to be Gaussian. When we talk about using a statistical model to describe some phenomenon, we have in mind some function $f$ of the features, that is used to predict target variable $y$, i.e. something like $$ y = f(x) + \varepsilon $$ where $\varepsilon$ is some "noise" (it does not have to be additive). By noise in here, we simply mean the things that is not accurately predicted by our function $$ y - f(x) = \varepsilon $$ So if $f(x)$ is a very good approximation of $y$, then the "noise" is small (it can be zero if you have perfect fit $y = f(x)$), if it is a bad approximation, it is relatively larger. So looking at your picture, the green line is what we predicted, while the blue points is the actual data, noise is the discrepancy between them. [![enter image description here](https://i.stack.imgur.com/3plwx.png)](https://i.stack.imgur.com/3plwx.png) > > Is the noise added when the person who is creating the data set > separates the emails into a spam or non-spam (and how)? > > > It can be, but it can be also a number of other cases. It can be precision of the measurement device that you used, human errors, but also the data can be noisy, for example, there can be spam e-mails that are almost impossible to distinguish from the valid e-mails, of users can mark valid e-mails as spam ("I don't like this newsletter any more, don't show it to me"), you can also have non-spam e-mails that look very much like spam etc. All this may lead to misclassifications, "noise" is the catch-all term for all such factors. > > Just knowing that there is some "noise" isn't very helpful (there is > pretty much noise in everything in our physical realm), there any > actual method of modeling this noise, mathematically speaking? > > > Yes, often we build our models in such way, that they also tell us something about what can we expect from the noise. For example, simple linear regression is defined as $$\begin{align} y &= \beta\_0 + \beta\_1 x + \varepsilon \\ \varepsilon &~ \sim \mathcal{N}(0, \sigma^2) \end{align}$$ so we assume constant variance, and under this assumption, we estimate the variance of the noise $\sigma^2$, so we can estimate [prediction intervals](https://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals). What you can see from this example, $\varepsilon$ is a random variable. It is random not because someone throws a coin and based on the result, distorts your data, but because it is unpredictable for us (e.g. [coin toss](https://stats.stackexchange.com/questions/153076/is-tossing-a-coin-a-fair-way-of-randomising-a-group-into-two-groups/153080#153080) is a deterministic process, but we consider it as random). If we would be able to predict when and how exactly our model would be wrong, then we wouldn't be making incorrect predictions in the first place. So the "noise" has some distribution (e.g. Gaussian) that tells us what could be the possible spread of the errors we make when using our model to make predictions. Estimating the distribution is modelling the noise.
I have an algorithmic problem. Given an array (or multiset) $T$ of $n$ nonnegative integers, find a maximal subset $S$ of $T$ such that for all $a\in S$, $a\geqslant |S|$. For example:

1. If $T$ = [1, 3, 4, 1, 3, 6], then $S$ can be [3, 3, 6] or [3, 4, 6] or [4, 3, 6].
2. If $T$ = [7, 5, 1, 1, 7, 4], then $S$ is [7, 5, 7, 4].

I have tried this recursive function, which repeatedly discards a minimum element:

```
function(T):
    if minimum(T) >= length(T):
        return T
    else:
        return function(T \ minimum(T))
```

Is there a non-recursive algorithm? (I did not check my recursive algorithm, so it could have some flaws.)
From my comment originally:

This is closely related to a quantity ubiquitous in academic productivity assessment, the Hirsch index, better known as the [$h$-index](https://en.wikipedia.org/wiki/H-index). In short, it is defined as the number of publications $h$ one has such that each of them has at least $h$ citations (the largest such $h$). The only way your problem differs is that you would be interested not only in *how many* publications satisfy the criterion but also *what their citation counts are*, but that's a trivial modification. The data's already there, the original algorithm just drops it. The generally implemented [calculation](https://en.wikipedia.org/wiki/H-index#Calculation) is rather straightforward and agrees with [Karolis Juodelė's answer](https://cs.stackexchange.com/a/66050/50510).

**Update:** Depending on the size and character of your data, it may be worth exploring methods which partially sort the array by filtering data above and below a pivotal point (quicksort comes to mind). Then, depending on whether there are too few or too many, adjust the pivot and redo the process on the subset that contains it, and so on. You don't need an order between elements higher than $h$, and certainly not between those lower than that. So, for example, once you have found all elements greater than or equal to $h\_1$ and there are fewer than $h\_1$ of them, you don't need to touch that subset again, just add to it. This converts the recursion inherent to quicksort to a *tail recursion* and thus can be rewritten as a loop.

My Haskell is a bit rusty but this should do what I described above and seems to work. Hope it can be understood to some degree; I am happy to provide further explanation.

```
-- just a utility function
merge :: [a] -> [a] -> [a]
merge [] ys = ys
merge (x:xs) ys = x : merge xs ys

-- the actual implementation
topImpl :: [Int] -> [Int] -> [Int]
topImpl [] granted = granted
topImpl (x:xs) granted
  | x == (1 + lGreater + lGranted) = x : merge greater granted
  | x > (1 + lGreater + lGranted) = topImpl smaller (x : merge greater granted)
  | otherwise = topImpl greater granted
  where smaller = [y | y <- xs, y < x]
        greater = [y | y <- xs, y >= x]
        lGreater = length greater
        lGranted = length granted

-- starting point is: top of whole array, granted is empty
top :: [Int] -> [Int]
top arr = topImpl arr []
```

The idea is to collect in `granted` what you know will definitely participate in the result, and not sort it any further. If `greater` together with `x` fits, we're lucky; otherwise we need to try with a smaller subset. (The pivot `x` is simply whatever happened to be the first item of the sublist that's currently considered.) Note that the significant advantage over taking the largest elements one by one is that we do this on blocks of average size $remaining/2$ and don't need to sort them further.

**Example:** Let's take your set `[1,3,4,1,3,6]`.

1. `x = 1`, `granted = []`, `greater = [3,4,1,3,6]`. Ouch, we hit a pathological case where the pivot is too small (actually so small that `smaller` is empty) right in the first step. Luckily our algorithm is ready for that. It discards `x` and tries again with `greater` alone.
2. `x = 3`, `granted = []`, `greater = [4,3,6]`. Together they would form an array of length 4, but its elements are only bounded from below by 3, so that's too many. Repeat on `greater` alone.
3. `x = 4`, `granted = []`, `greater = [6]`. This gives an array of 2 elements ≥ 4 each; it seems we might have use for some more of them. Keep this and repeat on `smaller = [3]`.
4. `x = 3`, `granted = [4,6]`, `greater = []`. Together this gives an array of 3 elements ≥ 3 each, so we have our solution `[3,4,6]` and we can return. (Note that the permutation may vary depending on the ordering of the input, but it will always contain the highest possible terms, never `[3,3,6]` or `[3,3,4]` for your example.)

(Btw. note that the recursion indeed just collapsed into a loop.)

The complexity is somewhat better than quicksort because of the many saved comparisons:

* Best case (e.g. [2,2,1,1,1,1]): a single step, $n-1$ comparisons
* Average case: $O(\log n)$ steps, $O(n)$ comparisons in total
* Worst case (e.g. [1,1,1,1,1,1]): $n$ steps, $O(n^2)$ comparisons in total

There are a few needless comparisons in the code above, like calculating `smaller` whether we need it or not; they can easily be removed. (I think lazy evaluation will take care of that, though.)
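For comparison, the straightforward sort-based calculation mentioned at the top (the usual way the $h$-index is computed) is only a few lines; here is an illustrative Python sketch of it, not part of the original answer:

```
def top(values):
    """Return a largest subset S such that every element of S is >= |S|."""
    ordered = sorted(values, reverse=True)
    h = 0
    while h < len(ordered) and ordered[h] >= h + 1:
        h += 1
    return ordered[:h]

print(top([1, 3, 4, 1, 3, 6]))   # [6, 4, 3]
print(top([7, 5, 1, 1, 7, 4]))   # [7, 7, 5, 4]
```

It costs a full $O(n \log n)$ sort, which is exactly what the partial-sorting approach above tries to avoid.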
My data (continuous) is highly skewed and doesn't follow the normal distribution. Using `sns.distplot` I found that **exponweib** fits the data better. How should I deal with this? My end goal is to use the data for a machine learning model (SVM).
You’re fine to proceed without worrying about lacking normal data. Go run your SVM. Even in linear regression, the frequent assumption about normality has to do with the error term. Further, that assumption is not part of prediction. The Gauss-Markov theorem does not assume a normal error term, so the $\hat{\beta}=(X^TX)^{-1}X^Ty$ parameter estimate is the best linear unbiased estimator whether the error term is normal or not. When we do make an assumption about a normal error term, that is to help us with parameter inference, not prediction. That’s on the side of the response variable, though. For the predictor variables, we absolutely do not make any assumptions about normality, not even for parameter inference. So please feel free to run your SVM without worrying about your data lacking a normal distribution.
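If it helps, here is a minimal sketch of that advice in Python (scikit-learn is assumed; `X` and `y` are placeholders for your features and labels). SVMs make no normality assumption about the inputs, although putting the features on a common scale is standard practice for them:

```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X (possibly skewed features) and y (labels) are placeholders for your data
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(model, X, y, cv=5).mean())
```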
We use electronics to build computers and do computation. Is computation independent of the hardware we use? Would it be possible to do whatever a computer does with pen and paper? If computation does not depend on how we build our computers, do the same principles that apply to a computer also apply to how the brain works?

I want to know whether computation is an abstract object which is then realized in computers. In other words, if we visited some alien civilization, we would expect our mathematicians and physicists to understand the alien physics and math. Is it the same with the notion or concept of computation?

It seems that in order to consider computation, we first need to consider a realization as a machine or a model. For example, according to the first two lines of [this question](https://cs.stackexchange.com/questions/971/quantum-lambda-calculus?noredirect=1&lq=1), there are Turing machines, circuits and the lambda calculus. How do we know that there is not another way of computing? For example, the way the brain works and does computation might not be describable by the kind of computation we consider in Turing machines.

To put it another way, are there axioms on which we build a theory of computation, the way we build Euclidean geometry?
To answer the title question: yes, computation is independent of hardware. Computation is defined by the transformation of information, not by how it's embedded in the real world. This is easy to see: all the computation models have a mathematical formulation, and you can write mathematics as symbols on paper. In fact, many models of computation, such as the [lambda calculus](https://en.wikipedia.org/wiki/Lambda_calculus) (1936) and [Turing machines](https://en.wikipedia.org/wiki/Turing_machine) (1936), were invented before there were general-purpose computing machines (there were numeric calculators and data sorters, but programmable machines didn't exist until the 1940s).

It is possible to model today's computers with mathematical tools. They're just finite-state machines, after all. There are a lot of states, so a complete pen-and-paper representation is not feasible, but if you only model the part that you're interested in, that can be within reach.

As to whether there is only a single notion of computation, we don't know. We think so: that's the [Church-Turing thesis](https://en.wikipedia.org/wiki/Church-Turing_thesis). We think so because we've invented a lot of models of computation, and they all turned out to be equivalent. Some of them are easier to work with than others, or more directly applicable to certain problems, or are a closer match for certain computing hardware (e.g. in terms of performance or locality). But they can express the same computations.

Are Turing machines (or any of the equivalent concepts) the last word on the topic? We don't know. We certainly have *weaker* models of computation, but we think of them as restricted computation. And we have *stronger* models of computation, but we think of them as “magic” — we can reason about them, but we have no idea how they could be implemented.

Is the brain a Turing machine? That's a question about the brain, and we don't know. We know that it's at least as powerful as a Turing machine, since it can simulate one. We are very far from being able to accurately simulate a brain with a Turing machine, but we don't know whether that's a matter of scale (a brain has tens of billions of neurons — a PC doesn't have enough memory to even store all the connexions). We do tend to think of brains as being more powerful than computers — when a problem is undecidable (in the Turing computation sense), we offer to solve it by thinking, but that doesn't answer the question. When we solve a problem by thinking, the *solution* is expressible in Turing machine terms. The problem-solving method, however, may or may not be more powerful than Turing machines.

Would aliens have the same notion of computation? We don't know. We think so — mathematics seems universal to us — but that's a question about aliens, and since we don't know anything about aliens except what our imagination tells us, we can't answer it.
I'm exploring some manual approaches to understand whether specific changes on a site can be deemed conclusive and statistically significant. I appreciate that any A/B testing platform has reporting that does this job; however, the changes have been implemented without any platform, so my best bet was pulling out the data and doing this exercise manually.

**The scenario**

36 pages, 24 of which have a feature and the remaining 12 do not. Looking at the daily traffic values, traffic rose by +50% in the group that has the feature. But I need to provide a confidence interval or a statistically significant coefficient for the senior management.

The two groups of data (8760 & 4380 entries respectively) are heteroscedastic (the Levene test suggests it) and not normally distributed (after all, they are traffic data).

[![enter image description here](https://i.stack.imgur.com/1JX9s.png)](https://i.stack.imgur.com/1JX9s.png)

[![enter image description here](https://i.stack.imgur.com/NPYcM.png)](https://i.stack.imgur.com/NPYcM.png)

Note: I believe the data are not normally distributed as the bell curve is not properly centered, but I might be wrong in saying so. I still have to gain experience in this field.

**The result**

When running a Welch's test as below, I get a p-value that looks abnormal:

`stats.ttest_ind(faq['traffic'], no_faq['traffic'], equal_var=False, axis=0)`

Ttest\_indResult(statistic=-23.253270956567135, pvalue=1.3551062210749643e-114)

Now, if the H0 is that the page feature does not influence the traffic, the high p-value here confirms the opposite to me.

**Question**

What I'm struggling to understand is the very high p-value that goes beyond the 100% mark. Am I doing something wrong? Should I use a different test? How should I calculate a confidence score?
You're saying the p-value is really high, but it is the opposite. Your p-value is really small, but you may be overlooking the scientific notation.

$$P = 1.3551062210749643\text{e-}114 = \frac{1.3551062210749643}{10^{114}} \approx \frac{1}{10^{114}} \ll 0.05 = \alpha$$
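As a quick sanity check (just an illustration), you can let Python parse the scientific notation for you:

```
p_value = float("1.3551062210749643e-114")
print(p_value < 0.05)     # True: the p-value is astronomically small
print(f"{p_value:.3e}")   # 1.355e-114
```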
I have an array of $n$ real values, which has mean $\mu\_{old}$ and standard deviation $\sigma\_{old}$. If an element of the array $x\_i$ is replaced by another element $x\_j$, then the new mean will be

> $\mu\_{new}=\mu\_{old}+\frac{x\_j-x\_i}{n}$

The advantage of this approach is that it requires a constant amount of computation regardless of the value of $n$. Is there any approach to calculate $\sigma\_{new}$ from $\sigma\_{old}$, like the computation of $\mu\_{new}$ from $\mu\_{old}$?
A [section in the Wikipedia article on "Algorithms for calculating variance"](http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Compute_running_.28continuous.29_variance) shows how to compute the variance if elements are added to your observations. (Recall that the standard deviation is the square root of the variance.) Assume that you append $x\_{n+1}$ to your array, then

$$\sigma\_{new}^2 = \sigma\_{old}^2 + (x\_{n+1} - \mu\_{new})(x\_{n+1} - \mu\_{old}).$$

**EDIT**: The formula above seems to be wrong; see the comments.

Now, replacing an element means adding one observation and removing another; both can be computed with the formula above. However, keep in mind that problems of numerical stability may ensue; the quoted article also proposes numerically stable variants.

To derive the formula by yourself, compute $(n-1)(\sigma\_{new}^2 - \sigma\_{old}^2)$ using the definition of sample variance and substitute $\mu\_{new}$ by the formula you gave when appropriate. This gives you $\sigma\_{new}^2 - \sigma\_{old}^2$ in the end, and thus a formula for $\sigma\_{new}$ given $\sigma\_{old}$ and $\mu\_{old}$. In my notation, I assume you replace the element $x\_n$ by $x\_n'$:

$$
\begin{eqnarray\*}
\sigma^2 &=& (n-1)^{-1} \sum\_k (x\_k - \mu)^2 \\
(n-1)(\sigma\_{new}^2 - \sigma\_{old}^2) &=& \sum\_{k=1}^{n-1} ((x\_k - \mu\_{new})^2 - (x\_k - \mu\_{old})^2) \\
&&+\ ((x\_n' - \mu\_{new})^2 - (x\_n - \mu\_{old})^2) \\
&=& \sum\_{k=1}^{n-1} ((x\_k - \mu\_{old} - n^{-1}(x\_n'-x\_n))^2 - (x\_k - \mu\_{old})^2) \\
&&+\ ((x\_n' - \mu\_{old} - n^{-1}(x\_n'-x\_n))^2 - (x\_n - \mu\_{old})^2)
\end{eqnarray\*}
$$

The $x\_k$ in the sum transform into something dependent on $\mu\_{old}$, but you'll have to work the equation a little bit more to derive a neat result. This should give you the general idea.
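For what it's worth, here is a small Python sketch of a constant-time replacement update. It is my own illustration based on the identity $\sum\_k (x\_k - \mu)^2 = \sum\_k x\_k^2 - n\mu^2$ rather than the formula quoted above, and, as warned, this style of update can be numerically unstable when applied many times:

```
import random

def replace_update(n, mean, m2, x_old, x_new):
    """Constant-time update of the mean and of m2 = sum((x - mean)^2)
    when x_old is replaced by x_new; the sample variance is m2 / (n - 1)."""
    new_mean = mean + (x_new - x_old) / n
    # m2 = sum(x^2) - n*mean^2, so
    # m2_new - m2_old = (x_new^2 - x_old^2) - n*(new_mean^2 - mean^2)
    #                 = (x_new - x_old) * (x_new + x_old - new_mean - mean)
    new_m2 = m2 + (x_new - x_old) * (x_new + x_old - new_mean - mean)
    return new_mean, new_m2

# check against a direct recomputation
data = [random.random() for _ in range(100)]
n = len(data)
mean = sum(data) / n
m2 = sum((x - mean) ** 2 for x in data)

mean2, m22 = replace_update(n, mean, m2, data[7], 3.14)
data[7] = 3.14
assert abs(mean2 - sum(data) / n) < 1e-9
assert abs(m22 - sum((x - mean2) ** 2 for x in data)) < 1e-9
```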
I have a groupby in jupyter-notebook that takes ages to run and after 10 minutes of running it says 'kernel died...', The groupby looks like this: ``` df1.groupby(['date', 'unit', 'company', 'city'])['col1', 'col2', 'col3', 'col4', ... 'col20'].mean() ``` All of the 'col' columns are float values. I am running everything locally. Any ideas? UPDATE: The shape of df1 is: ``` (1360, 24) ``` Memory and dtypes: ``` dtypes: category(3), datetime64[ns](2), float64(17), int64(2) memory usage: 266.9 KB ``` The unique size of city, date, company, unit: ``` len(df1.date.unique()) = 789 len(df1.unit.unique()) = 76 len(df1.company.unique()) = 205 len(df1.city.unique()) = 237 ``` I have 16GB of memory on MacBook Pro. UPDATE 2: It works only if I have date and unit inside the groupby columns as the only 2 columns. If I add either a company or city, it doesn't work anymore, it keeps running indefinitely.
I thought it might be because of the different types being used in the columns, but I created an example below, which works fine over mixed column types. The only real difference is the size - that is why I think you are probably running out of memory.

> Working example
> ===============

I use `int`, `str` and `datetime` objects:

```
In [1]: import pandas as pd

In [2]: import datetime

In [3]: df = pd.DataFrame({'Branch': 'A A A A A A A B'.split(),
                           'Buyer': 'Carl Mark Carl Carl Joe Joe Joe Carl'.split(),
                           'Quantity': [1, 3, 5, 1, 8, 1, 9, 3],
                           'Date':[datetime.datetime(2013, 1, 1, 13, 0),
                                   datetime.datetime(2013, 1, 1, 13, 5),
                                   datetime.datetime(2013, 10, 1, 20, 0),
                                   datetime.datetime(2013, 10, 2, 10, 0),
                                   datetime.datetime(2013, 10, 1, 20, 0),
                                   datetime.datetime(2013, 10, 2, 10, 0),
                                   datetime.datetime(2013, 12, 2, 12, 0),
                                   datetime.datetime(2013, 12, 2, 14, 0)]})

In [4]: df
Out[4]:
  Branch Buyer  Quantity                Date
0      A  Carl         1 2013-01-01 13:00:00
1      A  Mark         3 2013-01-01 13:05:00
2      A  Carl         5 2013-10-01 20:00:00
3      A  Carl         1 2013-10-02 10:00:00
4      A   Joe         8 2013-10-01 20:00:00
5      A   Joe         1 2013-10-02 10:00:00
6      A   Joe         9 2013-12-02 12:00:00
7      B  Carl         3 2013-12-02 14:00:00

In [5]: df.shape
Out[5]: (8, 4)
```

Now I just repeat the dataframe again, but add one hour to each of the datetime values, just to increase the number of groupby combinations to expect:

```
In [14]: df.iloc[0:8, 3] += datetime.timedelta(hours=1)
```

Now perform a groupby over all columns, and sum only on `Quantity` (it is my only numeric column). The results are as expected:

```
In [16]: df.groupby(["Branch", "Buyer", "Quantity", "Date"])["Quantity"].sum()
Out[16]:
Branch  Buyer  Quantity  Date
A       Carl   1         2013-01-01 13:00:00    1
                         2013-01-01 14:00:00    1
                         2013-10-02 10:00:00    1
                         2013-10-02 11:00:00    1
               5         2013-10-01 20:00:00    5
                         2013-10-01 21:00:00    5
        Joe    1         2013-10-02 10:00:00    1
                         2013-10-02 11:00:00    1
               8         2013-10-01 20:00:00    8
                         2013-10-01 21:00:00    8
               9         2013-12-02 12:00:00    9
                         2013-12-02 13:00:00    9
        Mark   3         2013-01-01 13:05:00    3
                         2013-01-01 14:05:00    3
B       Carl   3         2013-12-02 14:00:00    3
                         2013-12-02 15:00:00    3
Name: Quantity, dtype: int64
```

> Break your problem down
> =======================

It might be difficult to break down your problem, because you need the whole data for the groupby operation. You could however save each of the groups to disk, perform the `mean()` computation on them separately and merge the results yourself. The name of each group is actually the combination of the `groupby` columns selected. This can be used to build the index of the resulting dataframe. It could look something like this:

```
for name, group in df1.groupby(['date', 'unit', 'company', 'city']):
    print("Processing groupby combination: ", name)    # name is the current groupby combination
    result = group.mean()
    _df = pd.DataFrame(index=[name], data=[result])
    _df.to_csv("path/somewhere/" + str(name) + ".csv")
```

You will then have a folder full of the results for each group and will have to just read them back in and combine them.

> Other methods
> =============

It is known that Pandas does not handle many operations on huge datasets very efficiently (compared to e.g. the [**`data.table`**](https://github.com/h2oai/datatable) package). There is the [**Dask**](https://dask.org/) package, which essentially does Pandas things in a distributed manner, but that might be overkill (and you'll of course need more resources!)
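To sketch the Dask route mentioned at the end (an illustrative outline only, assuming the `dask[dataframe]` package is installed and reusing the column names from the question):

```
import dask.dataframe as dd

# split the pandas frame into partitions that are processed lazily
ddf = dd.from_pandas(df1, npartitions=8)

result = (ddf.groupby(['date', 'unit', 'company', 'city'])
             [['col1', 'col2', 'col3', 'col4']]
             .mean()
             .compute())   # .compute() triggers the actual work
```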
[How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs](https://www.microsoft.com/en-us/research/uploads/prod/2016/12/How-to-Make-a-Multiprocessor-Computer-That-Correctly-Executes-Multiprocess-Programs.pdf)

> ... the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program. A multiprocessor satisfying this condition will be called **sequentially consistent**.

[Spanner docs](https://cloud.google.com/blog/products/gcp/why-you-should-pick-strong-consistency-whenever-possible?hl=is):

> Strong Consistency All accesses are seen by all parallel processes (or nodes, processors, etc.) in the same order (sequentially)5 In some definitions, a replication protocol exhibits "**strong consistency**" if the replicated objects are linearizable.

The `5` in that doc is a link to [the wikipedia page](https://en.wikipedia.org/wiki/Strong_consistency), which apparently hasn't ever cited any sources. I can't seem to find any original source for it. The "some definitions" mentioned seem to be nowhere to be found online.

What's the difference between strong and sequential consistency?
Here is a solution that uses only comparisons. For simplicity, assume that $n$ is a power of 2. Find the median of the original array in $O(n)$, and extract the largest $n/2$ elements. Then find the median of the new array in $O(n/2)$, and extract the largest $n/4$ elements. Continue in this way, extracting the $n/8,n/16,\ldots,1$ largest elements. In total, preprocessing takes time $$ O(n+n/2+n/4+\cdots) = O(2n) = O(n). $$ Given $k$, find $\ell$ such that $n/2^{\ell+1} < k \leq n/2^{\ell}$. By construction, $n/2^\ell < 2k$. The $k$th largest element is thus one of the $n/2^{\ell}$ largest elements. Using the linear time selection algorithm, locate the $k$th element among them in $O(n/2^\ell) = O(k)$. --- We can improve the running time for $k \leq Cn/\log n$ (for arbitrary $C$) to $O(1)$ as follows. During preprocessing, use a linear time selection algorithm to locate the $Cn/\log n$-th largest element in $O(n)$, and extract all larger elements. Sort them in $O(n)$. During query time, locate the $k$th largest element for $k \leq Cn/\log n$ in $O(1)$ using the new array. Conversely, we can show that this $O(1)$ behavior cannot extend beyond $O(n/\log n)$ for comparison-based algorithms. Indeed, suppose that there is an algorithm which preprocesses an array in $O(n)$, and is able to locate the $k$th element in $O(1)$ for $k \leq f(n)$, where $f(n) = \omega(n/\log n)$. This allows us to sort an array of size $f(n)$ in time $O(n) + O(f(n)) = o(f(n) \log f(n))$ by adding $n - f(n)$ dummy elements, contradicting the well-known lower bound for sorting.
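Here is a rough Python sketch of the preprocessing/query scheme from the first part of the answer (my own illustration: `np.partition` plays the role of the linear-time selection step, and the right level is found by a simple scan rather than in $O(1)$):

```
import numpy as np

def preprocess(a):
    # levels[l] holds the largest ~n/2^l elements, in no particular order
    a = np.asarray(a)
    levels = [a]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        m = len(cur) // 2
        # np.partition puts the m largest elements at the end in linear time
        levels.append(np.partition(cur, len(cur) - m)[len(cur) - m:])
    return levels

def kth_largest(levels, k):
    # take the smallest level that still contains at least k elements
    l = 0
    while l + 1 < len(levels) and len(levels[l + 1]) >= k:
        l += 1
    cur = levels[l]                    # len(cur) is O(k), so this step is O(k)
    return np.partition(cur, len(cur) - k)[len(cur) - k]

lv = preprocess([5, 1, 9, 7, 3, 8, 2, 6])
print([kth_largest(lv, k) for k in range(1, 9)])   # [9, 8, 7, 6, 5, 3, 2, 1]
```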
How can Dijkstra's algorithm be modified, for wheelchair users, to take road quality into account? There are three levels of quality: $1$ for pure concrete, $2$ for partly concrete and $3$ for rough road. Taking this into account, if there are two roads, $p1$, the shorter one with road quality $3$, and $p2$, the longer one with road quality $1$, then the longer path $p2$ should be chosen because it is easier to traverse. The quality of the road is more important than the distance. If there are roads of equal quality, the shorter one is chosen.
Well, each edge could carry a pair of numbers: its quality and its distance. While constructing the shortest path, you could compare path costs by road quality first, and only use the distance to break ties between paths of the same quality. I hope this helps people in wheelchairs navigate more easily!
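One way to make this concrete (my own sketch, under the assumption that a path's cost is the pair "total quality, total distance", compared lexicographically) is to run Dijkstra with tuple-valued distances:

```
import heapq

def dijkstra_lex(graph, source):
    """graph[u] = list of (v, quality, length) edges.  Path cost is the pair
    (total quality, total length), compared lexicographically, so quality
    dominates and length only breaks ties."""
    inf = (float("inf"), float("inf"))
    dist = {source: (0, 0)}
    heap = [((0, 0), source)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > dist.get(u, inf):
            continue                      # stale heap entry
        for v, quality, length in graph[u]:
            new_cost = (cost[0] + quality, cost[1] + length)
            if new_cost < dist.get(v, inf):
                dist[v] = new_cost
                heapq.heappush(heap, (new_cost, v))
    return dist

# A -> C directly is long but smooth; A -> B -> C is short but rough
graph = {
    "A": [("B", 3, 1), ("C", 1, 5)],
    "B": [("C", 3, 1)],
    "C": [],
}
print(dijkstra_lex(graph, "A"))   # C gets cost (1, 5): the smooth road wins
```

Lexicographic comparison is safe here because adding the same edge cost to two comparable path costs preserves their order, which is all Dijkstra needs.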
I had this question on my final exam, so sadly I don't have its exact wording, but as far as I remember it asked:

> How many minimum spanning trees does a graph with 20 edges have?

I know that we can find minimum spanning trees with algorithms like Prim's algorithm, etc. But how can we know the total number of minimum spanning trees in a graph when no figure is given in the question (text only)?
I'm hoping you misremembered the question, as the number of MSTs (minimum spanning trees) is not uniquely determined by the number of edges. It depends on which edges are and are not present and also on what their weights are.

For example, if the graph has 21 vertices and 20 edges, then it is a tree and it has exactly one MST. On the other hand, if it has seven vertices and 20 edges, then it is a clique with one edge deleted and, depending on the edge weights, it might have just one MST or it might have literally thousands of them. (A clique on seven vertices has 21 edges, which is only one more than your graph is allowed. If all the edge weights were the same, every spanning tree would be an MST and, by [Cayley's formula](https://en.wikipedia.org/wiki/Cayley%27s_formula), there are $7^5=16807$ different trees with seven vertices.)
Suppose we have $n$ covariates $x\_1, \dots, x\_n$ and a binary outcome variable $y$. Some of these covariates are categorical with multiple levels. Others are continuous. How would you choose the "best" model? In other words, how do you choose which covariates to include in the model? Would you model $y$ with each of the covariates individually using simple logistic regression and choose the ones with a significant association?
This is probably not a good thing to do. Looking at all the individual covariates first, and then building a model with those that are significant is logically equivalent to an automatic search procedure. While this approach is intuitive, inferences made from this procedure are not valid (e.g., the true p-values are different from those reported by software). The problem is magnified the larger the size of the initial set of covariates is. If you do this anyway (and, unfortunately, many people do), you cannot take the resulting model seriously. Instead, you must run an entirely new study, gathering an independent sample and fitting the previous model, to test it. However, this requires a lot of resources, and moreover, since the process is flawed and the previous model is likely a poor one, there is a strong chance it will not hold up--meaning that it is likely to *waste* a lot of resources. A better way is to evaluate models of substantive interest to you. Then use an information criterion that penalizes model flexibility (such as the AIC) to adjudicate amongst those models. For logistic regression, the AIC is: $$ AIC = -2\times\ln(\text{likelihood}) + 2k $$ where $k$ is the number of covariates included in that model. You want the model with the smallest value for the AIC, all things being equal. However, it is not always so simple; be wary when several models have similar values for the AIC, even though one may be lowest. I include the complete formula for the AIC here, because different software outputs different information. You may have to calculate it from just the likelihood, or you may get the final AIC, or anything in between.
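For concreteness, here is a hedged sketch of how the AIC comparison could look in Python with statsmodels (the data frame `df`, the outcome column `y` and the candidate covariate sets are placeholders, not something prescribed by the answer):

```
import pandas as pd
import statsmodels.api as sm

candidate_models = {
    "model_1": ["x1", "x2"],
    "model_2": ["x1", "x2", "x3"],
}

rows = {}
for name, covariates in candidate_models.items():
    X = sm.add_constant(pd.get_dummies(df[covariates], drop_first=True)).astype(float)
    fit = sm.Logit(df["y"], X).fit(disp=0)
    # AIC = -2 * log-likelihood + 2 * (number of estimated parameters)
    rows[name] = {"aic": fit.aic, "aic_by_hand": -2 * fit.llf + 2 * len(fit.params)}

print(pd.DataFrame(rows).T.sort_values("aic"))
```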
Does $L\subseteq \Sigma^\*$ *need* to be *infinite* to be undecidable?

I mean, what if we choose a language $L'$ to be a *bounded finite version of* $L\subseteq \Sigma^\*$, that is, $|L'|\leq N$ ($N \in \mathbb{N}$), with $L' \subset L$. Is it possible for $L'$ to be an undecidable language?

I see that there is the problem of how to choose the $N$ words that belong to $L'$, for which we would have to establish a rule for choosing the first $N$ elements of $L'$, a kind of "finite" Kleene star operation.

The aim is to find an undecidable language without needing an infinite set, but I can't see how.

**EDIT Note:** Although I chose an answer, many answers **and all comments** are important.
Yes, there is a need for $L$ to be infinite in order to be undecidable.

To add to the answers of Raphael and Sam: you should think of "decidable" as meaning "solvable by a computer program". The program required is very simple: it just needs to output "Yes" for elements in $L$ and otherwise say "No". So the more "complex" $L$ is, the longer the program you are required to write. In other words, the longer the program, the more things it can check...

So if someone gives you a language $L$ which is finite, say $L=\{ a\_1, a\_2, \ldots, a\_n\}$, you can write the following program:

```
if INPUT = $a_1$ output Yes;
if INPUT = $a_2$ output Yes;
...
if INPUT = $a_n$ output Yes;
output No;
```

Now, if someone gives you a larger (yet still finite) $L$, you will just write a longer program. This is always true, and any finite $L$ will have its own program. The only "interesting" case is what happens when $L$ is infinite - your program **cannot** be infinite.

The issue of "undecidability" is even more interesting: it concerns those (infinite) $L$'s for which there is no program that works correctly. We know that such languages must exist, since there are way more (infinite) languages $L$ than programs of finite (but unbounded) length.
The simple (naive?) answer would be O(n) where n is the length of the shorter string. Because in the worst case you must compare every pair of characters. So far so good. I think we can all agree that checking equality of two *equal length* strings requires O(n) runtime. However many (most?) languages (I'm using Python 3.7) store the lengths of strings to allow for constant time lookups. So in the case of two *unequal length* strings, you can simply verify `len(string_1) != len(string_2)` in constant time. You can verify that Python 3 does indeed make this optimization. Now, if we're checking the equality of two *truly* arbitrary strings (of arbitrary length) then it is much more likely (infinitely, I believe) that the strings will be of unequal length than of equal length. Which (statistically) ensures we can nearly always compare them in constant time. So we can compare two arbitrary strings at O(1) average, with a very rare worst-case of O(n). Should we consider strings comparisons then to be O(1) in the same way we consider hash table lookups to be O(1)?
In order to discuss the expected time complexity of an operation, you have to specify a distribution on the inputs, and also explain what you mean by $n$. One has to be careful, however. For example, consider the suggestion in the comments, to consider some kind of distribution over words of length at most 20. In this case, string comparison is clearly $O(1)$, since 20 is just a constant. There are several ways to avoid it: * Ask for a non-asymptotic time complexity. Since time complexity is highly dependent on the computation model, you can count (for example) the number of input memory cells accessed. * You can specify an input distribution which depends on a parameter $m$, and then ask for the asymptotic complexity in terms of $m$. Here is an example. Given two random binary strings of length $n$, there will be roughly 4 accesses in expectation. In contrast, if the strings are chosen at random from the collection $0^i1^{n-i}$, the number of accesses will be roughly $(2/3)n$. These two distributions can be separated even if we use asymptotic notation: the algorithm runs in $O(1)$ on the first distribution, and in $\Theta(n)$ on the second. Another issue is the meaning of $n$. Consider for example a string $0^m$, where $m \sim G(1/2)$ is a geometric random variable. When run on inputs of lengths $a,b$, the running time is $\Theta(\min(a,b))$. How should we express this in terms of $n = a+b$? One choice is to ask for the expected running time given that the input length is $n$. In this case, $$ \mathbb{E}[\min(a,b)] = \sum\_{a=1}^{n-1} \frac{(1/2)^a (1/2)^{n-1-a}}{\sum\_{a'=1}^{n-1} (1/2)^{a'} (1/2)^{n-1-a'}} \min(a,n-a) = \frac{1}{n-1} \sum\_{a=1}^{n-1} \min(a,n-a) \approx \frac{n}{4}, $$ so the expected running time is $\Theta(n)$.
While going through the probabilistic/average analysis of an algorithm, I found it written somewhere that the average cost and the expected cost are the same. Can anyone please tell me what exactly the expected cost stands for? I think we take the likelihood of an event into account when finding the expected cost, unlike the average cost.
In short, the average is the expected value of the uniform distribution. If $T(x)$ denotes the runtime of some algorithm on input $x \in \mathcal{X}$, then the *expected* runtime for input size $n$ is $\qquad\displaystyle \mathbb{E}[T(X) \mid |X| = n] = \sum\_{x\in\mathcal{X}\_n} \operatorname{Pr}[X=x \mid |X| = n] \cdot T(x)$ given some random distribution for random variable $X$. *Average* runtime is more specific and corresponds to $\qquad\displaystyle \overline{T}(n) = \frac{1}{|\mathcal{X}\_n|} \cdot \sum\_{x\in\mathcal{X}\_n} T(x)$ or, in other words, the expected runtime given a uniform distribution over inputs of the same size. Since we *usually* use the uniform distribution the terms are often used interchangeably in algorithm analysis. One famous exception is the analysis of binary search trees. As opposed to averaging height over all rooted binary trees, we calculate the expected height w.r.t. the uniform distribution of different insert sequences ("random permutation model"), which assigns some (shapes of) trees a higher probability than others. That is not a technical detail: the average height of rooted binary trees is in $\Theta(\sqrt{n})$ whereas the expected height of BSTs in the random permutation model is in $\Theta(\log n)$.
The well-known textbook [Introduction to Algorithms](https://books.google.de/books?id=aefUBQAAQBAJ&printsec=frontcover&dq=Algorithms%20CLRS&hl=de&sa=X&ved=0ahUKEwjasaierKfkAhXGbFAKHRLqAUgQ6AEIOTAC#v=onepage&q&f=false) ("CLRS", 3rd edition, chapter 3.1) claims the following: $$ \sum\_{i=1}^n O(i) $$ is not the same as (I'm not using DNE because the book explicitly says "is not the same as") $$ O(1) + O(2) + \dots + O(n) $$ Why is this? How are we to explicitly represent the first summation if not with the second? As an extension of the previous question, how do we do asymptotic analysis on such a summation?
No one knows, but: * It is suspected that neither factoring nor discrete logarithm are NP-complete, but we have no proof. (Evidence for the suspicion: they are in NP $\cap$ coNP. See, e.g., <https://cstheory.stackexchange.com/q/159/5038>, <https://cstheory.stackexchange.com/q/167/5038> for factoring. It's similarly easy to prove that discrete log is in NP $\cap$ coNP; see, e.g., [How hard is finding the discrete logarithm?](https://cs.stackexchange.com/q/2658/755). Moreover, if any problem in NP $\cap$ coNP is NP-complete, it follows that NP = coNP. This would be an unexpected result.) * It is suspected that there is no polynomial-time algorithm for factoring or discrete logarithm, but we have no proof. (Evidence for the suspicion: we haven't been able to find a polynomial-time algorithm for either, despite a lot of trying.) * There is no known (classical) reduction between discrete log and factoring. However, I personally wouldn't be shocked if one were found, or if both were found to be instances of some broader problem, or something. For instance, when we find an algorithm technique that works against one, historically often we've been able to adapt it to the other as well. So, they seem to be connected or related in some deep way.
I got an assignment to create a new data structure, with the following rules:

1. Init - O(1).
2. Insert x - $O(\log\_2 n)$.
3. Delete x - $O(\log\_2 n)$.
4. Search for x - $O(\log\_2 n)$.
5. Find the max difference between two values in the DS - O(1).
6. Find the min difference between two values in the DS - O(1).

I've outlined the basics of the structure to look somewhat like an array/ArrayList in which I can complete the tasks using heap-like methods, and that way I'll be able to complete 1-4 in the times given. Regarding 5 - I need to return the difference between the maxVal and minVal of the array; the min will be the first value (arr[0]) and I'll arrange the leaves so that the max is the last value (arr[n-1]), and then I'll be in O(1).

**Regarding 6, and this is where I'm stuck** - how can I find the smallest difference between two values in O(1) time? I don't know of any methods that accomplish the task in O(1)... Thank you!
Use an AVL tree in which each node carries three additional entries: $\min$, $\max$, and $\text{closest\_pair}$, where $\min$ and $\max$ are the minimum and maximum values of the subtree rooted at that node and $\text{closest\_pair}$ is the pair of values in that subtree with the smallest difference. These values are updated at insertion and deletion time (note that only $O(\log n)$ node updates are needed per insert/delete operation, since only the nodes on the affected path and the rotated nodes change).

This augmented tree is your required data structure: the answer to query 5 is $\text{root}.\max - \text{root}.\min$, and the answer to query 6 is the difference within $\text{root}.\text{closest\_pair}$; both are read off the root in O(1).

N.B. For a node $i$, the following relation holds, which is what makes the bottom-up update possible:

$i$.closest\_pair = closest( [ ($i$.left.max, $i$.key), ($i$.key, $i$.right.min), $i$.left.closest\_pair, $i$.right.closest\_pair ] )
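Here is a small Python sketch of the augmentation idea (my own illustration; it assumes it is grafted onto an existing AVL/balanced-BST implementation that calls `pull_up` on every node it touches or rotates on the way back up, which keeps the per-operation cost at $O(\log n)$):

```
import math

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.min = self.max = key
        self.closest = math.inf          # smallest gap inside this subtree
        self.pull_up()

    def pull_up(self):
        """Recompute the augmented fields of this node from its children."""
        self.min = self.left.min if self.left else self.key
        self.max = self.right.max if self.right else self.key
        candidates = []
        if self.left:
            candidates += [self.left.closest, self.key - self.left.max]
        if self.right:
            candidates += [self.right.closest, self.right.min - self.key]
        self.closest = min(candidates) if candidates else math.inf

# Queries on the root are then O(1):
#   max difference = root.max - root.min
#   min difference = root.closest
```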
> We define the xor operation on languages $L,M\subseteq \{0,1\}^\*$:
>
> $$L\oplus M = \{u\oplus v : |u|=|v|, u\in L, v\in M\}$$
>
> where $\oplus$ is defined as xor on positions, for example $001\oplus 100=101$. Show that there exist languages $L,M\in \mathrm{PTIME}$ such that $L\oplus M$ is NP-complete.

It is hard for me. I have been thinking a lot about it. My intuition is: write the problem as a system of equations and reduce 3-SAT to it. But I am not sure whether that is right and, if yes, how to make it work.
This seems like a rather difficult question. Here is one approach. Every 3CNF on $n$ variables can be encoded as a binary string of length $8n^3$ (how?). Consider the following two languages: $$ \begin{align\*} L\_1 &= \bigcup\_{n=1}^\infty \{ xy0^{8n^3} : |x|=n, |y|=8n^3, \text{$x$ is an assignment satisfying the 3CNF $y$} \}, \\ L\_2 &= \bigcup\_{n=1}^\infty \{ x0^{8n^3}y : |x|=n, |y|=8n^3, \text{$x$ is an assignment satisfying the 3CNF $y$} \}. \end{align\*} $$ These two languages are clearly in P, and so their XOR is clearly in NP. However, $$ (L\_1 \oplus L\_2) \cap \bigcup\_{n=1}^\infty \{0^ny^2 : |y|=8n^3\} = \bigcup\_{n=1}^\infty \{0^n y^2 : |y|=8n^3, y \text{ is satisfiable}\}, $$ showing that $L\_1 \oplus L\_2$ is NP-complete.
I have inter-arrival times of vehicles recorded by a vehicle detection algorithm. I want to find the closest distribution (e.g., Poisson or other) of this data. How can I do that? Here is a graph of the inter-arrival times from a [crosspost on SO](https://stackoverflow.com/questions/8743601/how-to-find-the-closest-distribution-of-a-given-data). ![enter image description here](https://i.stack.imgur.com/gxQwj.jpg)
I'd suggest starting with a quick read of the chapter of Law and Kelton's "Simulation Modeling and Analysis" textbook that discusses methods for selecting distributions to use in Monte Carlo simulations. This chapter discusses methods for selecting candidate distributions, fitting the distributions to your data, and then testing the goodness of fit. It's quite common to find that many different distributions adequately fit your data. Depending on what you're doing with your model, the choice that you make can have a big effect on the results. In that case, it's appropriate to run your simulation with the different distributions to see how sensitive your results are to the assumed distribution. For interarrival times, it is nearly always the case in practice that the Poisson process (that is, exponential interarrival times but a Poisson distribution for the number of arrivals in a time period) is the way to go. However, the arrival rate may vary (e.g. by day of the week, time of day, and so on.)
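As a starting point in code (an illustration only; the file name is a placeholder, and note that the p-value of a KS test is optimistic when the parameters were estimated from the same data):

```
import numpy as np
from scipy import stats

# one-dimensional array of recorded inter-arrival gaps, e.g. in seconds
interarrival_times = np.loadtxt("interarrival_times.txt")

# fit an exponential distribution, fixing the location at 0 (gaps are non-negative)
loc, scale = stats.expon.fit(interarrival_times, floc=0)
print(f"estimated arrival rate: {1 / scale:.4f} vehicles per second")

# goodness of fit against the fitted exponential
ks_stat, p_value = stats.kstest(interarrival_times, "expon", args=(loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```

Repeating the fit/test step with other candidate distributions (gamma, lognormal, Weibull, ...) is how the comparison described in the book chapter is usually carried out.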
$$L=\{0^m1^n \enspace | \enspace m \neq n\}$$ I saw that this exact question exists [elsewhere](https://math.stackexchange.com/questions/480430/show-0m1n-m-neq-n-is-not-regular), but I couldn't understand what was being said there. My question does not mandate the use of the Pumping Lemma as stated "elsewhere", but I am using the Pumping Lemma anyway. I want to present what I have so far, and for someone to tell me if I'm on the right track: 1. Assume $L$ is not regular. 2. Let $p$ be the pumping length given by the Pumping Lemma for regular languages. 3. Let the string $w = 0^p1^{p+1} \in L$ 4. By the Pumping Lemma, $w = xy^iz$, where $i \geq 0$, $\color{green}{\lvert y \rvert \geq 1}$, and $\color{red}{\lvert xy \rvert \lt p}$. 5. Let: \begin{equation} \begin{aligned} \mathcal{x} &= \mathcal{0}^{p} \\ {y} & = {1}^{p+1} \\ {z} & = \varepsilon \end{aligned} \end{equation} It is at this point in the proof that I get confused. I feel as if I've set it up well, but just can't finish. Here's what I've got, though: 6. We see that $\lvert y \rvert= p+1 \geq 1 \enspace \color{green}{\checkmark}$ 7. However, $\lvert xy \rvert= p+p+1 \gt p \enspace \color{red}{\textbf{X}}$ As we can see by $\textit{(7)}$, our test string $w$ violates a $\color{red}{condition}$ of the Pumping Lemma, thus is not regular. Thumbs up, thumbs down, anyone? Did I make the appropriate inferences about my split string $w$ in order to achieve a contradiction, and did I even split the string correctly? And to boot, did I even pick a $w$ that is useful to the proof?
This is a very hard one to prove via the pumping lemma, as you have to construct a string that eventually gives you something of the form $0^{n}1^{n}$ for some $n$ when you pump it the right number of times. As Renato notes, your decomposition (at the time of writing this answer) is incorrect. We'll come back to this.

A simpler way is to use the closure properties of regular languages. In particular, regular languages are closed under complementation and intersection, so

1. If $R$ is regular, then $\overline{R} = \Sigma^{\ast}\setminus R$ is also regular.
2. If $R\_{1}$ and $R\_{2}$ are regular, then $R\_{3} = R\_{1} \cap R\_{2}$ is also regular.

So we assume that $L = \{0^{m}1^{n} \mid m \neq n\}$ is regular. Then by (1), $\overline{L} = \Sigma^{\ast}\setminus L$ is regular. In particular, note that $\overline{L}$ includes all strings of the form $0^{n}1^{n}$. The language $A = \{0^{a}1^{b} \mid a,b \in \mathbb{N}\}$ is regular (we can show this via a regular expression, DFA or regular grammar for the language, e.g. $0^{\ast}1^{\ast}$). Then by (2), $\overline{L}\cap A$ must also be regular, but this is the language $\{0^{n}1^{n}\}$, which is not regular (and much easier to prove non-regular via the pumping lemma!).

As noted in the question you linked, however, it is possible to do it with the pumping lemma, but you need to pick your starting string carefully. As we only have two sections to the string, the obvious approach is to have a string of the form $0^{p}1^{t}$ for some $t$, so then we know that $y = 0^{k}$ for some $k\leq p$. Now the trick is to pick $t$ such that we can pump the string some number $i$ of times and get $p + (i-1)\cdot k = t$. Of course we don't know what $k$ is - it could be anything from $1$ to $p$, so $t$ needs to take this into account. Hence $t$, in part, needs to (almost) be a multiple of every number from $1$ to $p$. So if we pick $t = p + p!$, we can say that for every $k$, there exists an $i$ such that $p + (i-1)\cdot k = p + p!$. In particular, $i = \frac{p!}{k}+1$.

So then our starting string is $s=0^{p}1^{p+p!}$. By the pumping lemma, if $L$ is regular, we can break $s$ up into $xyz$ such that $|y| \geq 1$, $|xy| \leq p$ and $xy^{i}z \in L$ for every $i \in \mathbb{N}$, but we have just shown that for $i = \frac{p!}{k}+1$, we get the string $0^{p+p!}1^{p+p!} \notin L$. Hence $L$ violates the pumping lemma, and thus can't be regular.
I have the mean from a previous study (the Brief Symptom Inventory) that I would like to compare with the mean found in my study. Both samples have over 100 participants. I do not have the entire data set for the previous study. I typically use SPSS for my stats. I was hoping to use a paired samples t-test as I am comparing the mean scores of the same scale. However, the program told me this was not possible with the values I entered. Is there another way I can do this? Basically, I am looking to compare the means to determine whether there is a statistically significant difference, without having the entire data set.
You cannot do a paired sample t-test because you do not have paired samples. In fact, you only have one sample and one number. What you can do is a one sample t-test, but, rather than compare to 0 (the typical default) you compare to the earlier mean (treating that as a fixed value). One way to do this would be to subtract the earlier mean from every score and then do a t-test vs. 0; but SPSS may make it easier (I am not an SPSS user).
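If you ever want to run the same check outside SPSS, a one-sample t-test against a fixed value is a one-liner, e.g. in Python (illustration only; the scores and the previous study's mean below are made-up placeholders):

```
from scipy import stats

my_scores = [1.1, 1.5, 1.2, 1.4, 1.3, 1.6]   # the individual scores from your sample
previous_mean = 1.32                          # the published mean you compare against

t_stat, p_value = stats.ttest_1samp(my_scores, popmean=previous_mean)
print(t_stat, p_value)
```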
I want to perform K-means clustering on objects I have, but the objects aren't described as points in space, i.e. by an `objects x features` dataset. However, I am able to compute the distance between any two objects (it is based on a similarity function). So, what I have at my disposal is the `objects x objects` distance matrix.

I've implemented K-means before, but that was with a points dataset as input; with a distance matrix as input it's not clear to me how to update the clusters to be the cluster "centers" without a point representation. How would this normally be done? Are there versions of K-means, or methods close to it, for that?
Obviously, k-means needs to be able to compute *means*. However, there is a well-known variation of it known as [**k-medoids** or PAM](http://en.wikipedia.org/wiki/K-medoids) (Partitioning Around Medoids), where the medoid is the *existing* object most central to the cluster. K-medoids only needs the pairwise distances.
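For illustration, here is a very small sketch of the alternating-update ("Voronoi iteration") flavour of k-medoids working directly on a distance matrix; this is my own toy code, not the full PAM swap search, and dedicated packages should be preferred in practice:

```
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    """D is a symmetric (n, n) matrix of pairwise distances."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)        # assign to nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            # the new medoid minimises the total distance to the cluster members
            costs = D[np.ix_(members, members)].sum(axis=0)
            new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)
```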
A few years ago, a YouTube channel named hackerdashery made an extraordinary video explaining **P vs NP** in a semi-popularized way: <https://www.youtube.com/watch?v=YX40hbAHx3s>

At 7 minutes and 56 seconds, however, he talks about why it is so hard to prove that **P $=$ NP** or that **P $\neq$ NP**, and states that proving things **is** in fact an **NP** problem (in the video, however, he writes down that it is actually a **co-NP** problem).

So, is proving things, in particular **P $=$ NP** or **P $\neq$ NP**, an **NP** or **co-NP** problem? If so, why? And where could I find more on this?
It isn't meaningful to say that a single specific question is in NP or any other complexity class. In order to classify things as being in P, or NP, or co-NP, etc., they need to be sets of problems with some parameter. So, for example, the problem "Is the positive integer n prime?" is a question where we can discuss this sort of thing.

In a certain overly pedantic sense, the correct answer to your question is that the problem you care about is in P, since there's an algorithm which will answer every question of your form in polynomial (in fact constant) time. But we don't know if that algorithm is just "Return True" or "Return False".

The idea that the video may be touching on is that if P != NP, then, in general, finding proofs is hard. In fact, "Is statement S provable in formal system A with a proof of length at most x?" is an NP-hard problem. (Questions of this form are things like "Is there a proof of Fermat's Last Theorem that's shorter than 5 pages?") And we also expect that if P != NP, then for most "natural" NP-complete problems, random instances will often not be easy. So if P != NP, then searching for proofs should often be difficult, and thus we shouldn't be surprised that the proof that P != NP would itself be one of those. This may be what they were trying to say.
This is along the lines of "[Algorithms from the Book](https://cstheory.stackexchange.com/questions/189/algorithms-from-the-book)". Although reductions are algorithms as well, I thought it doubtful that one would think of a reduction in response to the question about algorithms from the book. Hence a separate query! Reductions of all kinds are most welcome. I'll start off with the really simple reduction from vertex cover to multicut on stars. The reduction almost suggests itself once the source problem is identified (before which I would find it hard to believe that the problem would be hard on stars). This reduction involves constructing a star with $n$ leaves, and associating a pair of terminals with every edge in the graph, and it is "easy to see" that it works. I will update this with a link to a reference, once I find one. Those who are missing the context of the book may want to look at the question about [Algorithms from the book](https://cstheory.stackexchange.com/questions/189/algorithms-from-the-book). **Update:** I realize that I was not entirely clear as to what qualifies as a reduction from the book. I find this issue a little bit tricky, so I confess to half-deliberately dodging the issue by slipping in a reference to the other thread :) So let me describe what I had in mind, and I suppose it goes without saying - YMMV in this regard. I intend a direct analogy to the original intent of Proofs from the Book. I have seen reductions that are awfully clever, and leave me gaping at how that sequence of thoughts might have occurred to anyone. While such reductions leave me with a definite sense of awe, those are not the examples that I am looking to collect in this context. What I am looking for are reductions that are described without too much difficulty, and are perhaps mildly surprising, for the reason that they are easy to grasp but aren't easy to come up with. If you estimate that the reduction in question will require a lecture to cover, then likely it doesn't fit the bill, although I am sure there might be exceptions where the high-level idea is elegant and the devil's in the details (for the record, I'm not sure I can think of any). The example I gave was deliberately simple, and hopefully somewhat - if not perfectly - illustrative of these characteristics. The first time I heard about multi-cut was in a classroom, and our instructor began by saying that not only is it NP-hard in general, it is NP-hard even when restricted to trees... {dramatic pause} of *height one*. I recall not being able to prove it immediately, although it seems obvious in retrospect. I suppose *obvious in retrospect* closely describes what I am looking for. I am not sure if this has anything to do with the complexity of the description - perhaps there are situations where something apparently murky might classify as elegant - feel free to bring up your examples (exceptions?), but I would really appreciate a justification. Given that after some point this is a matter of taste, you should certainly feel free to find what I see as insanely complex, perfectly beautiful. I am looking forward to seeing a variety of examples!
Integer multiplication to fast Fourier transforms!
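To make the one-liner a bit more tangible, here is a toy Python sketch of that reduction: multiplying integers by convolving their digit sequences with an FFT. (Floating-point rounding limits this toy to moderately sized numbers; real implementations use a number-theoretic transform or tighter error control.)

```
import numpy as np

def multiply_via_fft(a: int, b: int) -> int:
    xs = [int(d) for d in str(a)][::-1]   # least-significant digit first
    ys = [int(d) for d in str(b)][::-1]
    size = len(xs) + len(ys) - 1
    n = 1 << (size - 1).bit_length()      # next power of two for the FFT
    conv = np.fft.ifft(np.fft.fft(xs, n) * np.fft.fft(ys, n)).real
    digits = [int(round(c)) for c in conv[:size]]
    # carry propagation turns the convolution back into base-10 digits
    result, carry = 0, 0
    for i, d in enumerate(digits):
        total = d + carry
        result += (total % 10) * 10 ** i
        carry = total // 10
    return result + carry * 10 ** len(digits)

assert multiply_via_fft(123456789, 987654321) == 123456789 * 987654321
```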
From the link [Solving SAT by converting to disjunctive normal form](https://math.stackexchange.com/questions/159591/solving-sat-by-converting-to-disjunctive-normal-form), I learnt that the algorithm that transforms an arbitrary boolean formula into disjunctive normal form takes exponential time in the worst case. But I have a question: for unsatisfiable boolean formulas, does it also take exponential time to transform them into disjunctive normal form? More specifically, for unsatisfiable 3CNF boolean formulas, does it take exponential time to transform them into disjunctive normal form?
> > For 3CNF unsatisfiable boolean formulas, does it take exponential time to transform them into disjunctive form? > > > For *unsatisfiable* formulas, the answer is trivially no -- in fact it takes constant time. Output any disjunctive unsatisfiable formula, like $(p \land \lnot p)$. It is possible to fix your question so that it's not trivially answered, but that does require some more effort. We could ask: is there an algorithm that takes as input a 3CNF boolean formula, and outputs a disjunctive normal form, such that for unsatisfiable inputs, the algorithm takes sub-exponential time (but it is allowed to take exponential or longer time on the satisfiable inputs)? To this question, D.W.'s answer applies.
After doing k-means clustering on a set of observations, I would like to construct a discriminant function so as to classify new observations into the categories I found after k-means. Is this at all a good idea? What should I be careful with?
The idea of getting a classifier for new cases after the classes have been identified in a cluster analysis is in itself natural and sane. Discriminant analysis could be such a tool, especially after K-means clustering. However, [the DA's classifier](https://stats.stackexchange.com/questions/31366/linear-discriminant-analysis-and-bayes-rule/31384#31384) will work well only if the assumptions of DA hold, such as (1) continuous data with an approximately multivariate normal distribution in each class, (2) class sizes *n* that are not very unbalanced, (3) variances/covariances that are not very different across classes (otherwise Quadratic DA remains a valid technique, though).

Another thing to keep in mind is that the classifier can assign only to the existing classes; it doesn't tell you openly "this new case is so unusual that you had better create a new class for it". Still, you can decide to create a new class if you compare the Mahalanobis or Euclidean distance (or the PDF value, which depends on the distance) of the new case to the class it has been assigned to with the distance "typical" for that class in your training sample (i.e. the sample that you clustered).

Finally, one should be aware that the K-means clustering procedure can itself be used for classification of new cases; this is the "assign cases only, don't iterate" regime. A case will be assigned to the cluster whose centre it is closest to. This simple way of classifying new cases differs from DA's classification in that it does not rely on distributional assumptions and Bayes' formula. Honestly, I can't tell you at this time whether I think this or that classification approach is better and why. Maybe a keen commentator will drop a hint here.
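Both routes are only a few lines with scikit-learn (an illustrative sketch; `X_train` holds the cases you clustered and `X_new` the new cases, both placeholders):

```
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_train)

# Route 1: "assign cases only, don't iterate" -- nearest-centroid assignment
labels_new_kmeans = kmeans.predict(X_new)

# Route 2: train a discriminant classifier on the cluster labels
lda = LinearDiscriminantAnalysis().fit(X_train, kmeans.labels_)
labels_new_lda = lda.predict(X_new)
```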
If there is a deadlock between processes, does that mean that there is starvation too?

My thinking: in a deadlock, none of the involved processes gets to use the resources, whereas starvation is more like one particular process never being given a chance, so there is still overall progress under starvation but not under deadlock.

Is my thinking right?
In both cases you will have infinite waiting. The difference is: deadlock is when two or more processes are waiting for each other, in a kind of circular wait. Starvation, on the other hand, is also a kind of waiting, but there the process is waiting for a resource that will never become available to it. So the difference is that a deadlock occurs between processes, whereas in starvation you have a process and a "servant" resource.