I'm working with square matrices that can be quite large, for instance a `M = 50 x 50` matrix. My objective is to compute the power `M^t` of the matrix for very large `t` values (for example `t = 4000`). I work in `R` and I have used the `R` function `matrix.power` from the `matrixcalc` R package. I'm exploring the possibility of writing code for matrix powers in C++ and then importing it into R through the `Rcpp` package. One alternative would be to use the repeated matrix multiplication approach (as in the `matrix.power` function in R), but looking around I have understood that there might be faster approaches to calculate a matrix power. Do you have any experience with that? Does anyone know a C++ library that does matrix power calculation fast and efficiently? Consider that I'm working on a macOS laptop with 16 GB of RAM and 4 CPUs.
Since your matrices are small $(50 \times 50)$, you can probably just compute $M^t$ through repeated squaring, where the exponents are powers of $2$. Write $t$ in binary so that $t = 2^{k\_1} + 2^{k\_2} + \dots + 2^{k\_\ell}$. Then $M^t = \prod\_{i=1}^\ell M^{2^{k\_i}}$. Moreover, for $k\_i \ge 1$ you have $M^{2^{k\_i}} = \left( M^{2^{k\_i - 1}} \right)^2$, so you need at most $O(\log t)$ matrix multiplications. Here is pseudocode, where "&" denotes "bitwise and" and "~" denotes "bitwise not":

```
Power(M, t):
  if(t & 1):       //Handle odd values of t (this saves a multiplication later)
    R = M;
    t = t & ~1;    //Clear the least significant bit of t
  else:
    R = I;
  i = 1; B = M;    //B will always be M^i, where i is a power of 2
  while t != 0:
    i = i*2;       //Advance i to the next power of 2
    B = B*B;       //B was M^(i/2) and is now M^i
    if(t & i):     //i is of the form 2^j. Is the j-th bit of t set?
      R = R*B;     //Multiply the result with B = M^i
      t = t & ~i;  //Clear the j-th bit of t
  Return R;
```
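To make the pseudocode concrete, here is a minimal NumPy sketch of the same square-and-multiply idea (`mat_pow` is my own illustrative name; `numpy.linalg.matrix_power` already implements this strategy, so the assertion just cross-checks the two):

```python
import numpy as np

def mat_pow(M, t):
    """Compute M^t with O(log t) matrix multiplications (square-and-multiply)."""
    R = np.eye(M.shape[0])
    B = M.copy()            # B holds M^(2^j) for the current bit j
    while t > 0:
        if t & 1:           # if the current bit of t is set, fold B into the result
            R = R @ B
        B = B @ B           # square: M^(2^j) -> M^(2^(j+1))
        t >>= 1
    return R

M = np.random.rand(50, 50)
assert np.allclose(mat_pow(M, 13), np.linalg.matrix_power(M, 13))  # cross-check
```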
I have written the following code for a neural network to perform regression on a dataset, but I am getting a `ValueError`. I have looked at different answers and they suggested using `df = df.values` to get a numpy array. I tried it but it still produced the same error. How can I fix this?

**CODE**

```
from keras import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split

#Define Features and Label
features = ['posted_by', 'under_construction', 'rera', 'bhk_no.', 'bhk_or_rk',
            'square_ft', 'ready_to_move', 'resale', 'longitude', 'latitude']
X = train[features].values
y = train['target(price_in_lacs)'].values

#Train Test Split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=23, shuffle=True)

#Model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='random_normal', input_dim=10))
model.add(Dense(1, activation='relu', kernel_initializer='random_normal'))

#Compiling the neural network
model.compile(optimizer=Adam(learning_rate=0.1), loss='mean_squared_logarithmic_error', metrics=['mse'])

#Fitting the data to the training dataset
model.fit(X_train, y_train, batch_size=256, epochs=100, verbose=0)
```

**ERROR**

```
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type int).
```
As `X_train` and `y_train` come from pandas objects, they can't be parsed directly. Try converting them to plain Python lists as below:

```
X = train[features].values.tolist()
y = train['target(price_in_lacs)'].tolist()
```
Several optimization problems that are known to be NP-hard on general graphs are trivially solvable in polynomial time (some even in linear time) when the input graph is a tree. Examples include minimum vertex cover, maximum independent set, subgraph isomorphism. Name some natural optimization problems that remain NP-hard on trees.
Graph Motif is an NP-complete problem on trees of maximum degree three: Fellows, Fertin, Hermelin and Vialette, [Sharp Tractability Borderlines for Finding Connected Motifs in Vertex-Colored Graphs](https://doi.org/10.1007/978-3-540-73420-8_31), Lecture Notes in Computer Science, 2007, Volume 4596/2007, 340-351.
This question has been asked at [StackOverflow](https://stackoverflow.com/questions/3793510/find-the-discrete-pair-of-x-y-that-satisfy-inequality-constriants) (a variant of this has been [asked at Math SE](https://math.stackexchange.com/questions/5289/find-range-of-x-1-x-2-where-y-min-leq-yx-1-x-2-leq-y-max)), but so far there is no great response. So I'm going to re-ask here, with a bit of a twist. I have a few inequalities regarding $x,y$ (both must be integers) that must be satisfied: $$x \geq 0$$ $$y \geq 0$$ $$100 \leq x^2+y^2 \leq 200$$ Is there an algorithm (*non-brute force*!) that allows me to find every admissible pair of $x,y$? The $100 \leq x^2+y^2 \leq 200$ is just an example; in general the constraints can be polynomial functions of any degree.
This is a rasterization problem. For linear inequalities, the regions are polygonal. The classical approach to polygon rasterization is scan conversion; you may learn about it in any textbook on computer graphics. Preparatory to scan conversion, you will need to convert your linear inequalities (defining half-planes) into an oriented list of edges. One way is to compute the convex hull of the dualized inequalities. For general polynomial inequalities, the problem is considerably harder. Scan conversion with plane sweeping and active edge lists will do the trick. Detecting sweep events requires you to solve systems of polynomial equations in two variables, which may be done with a combination of numerical and symbolic methods. One way uses resultants and root finding to determine the intersection events up front. Another approach is more incremental, less efficient but significantly simpler: Assume a bounding box is given. For each y going from the maximum to the minimum, incrementally evaluate the coefficients of the polynomials as functions of y using forward differencing, yielding polynomials depending only on x for each scan-line. Their roots will serve as the rasterization bounds for that scan-line, and you can learn how to find them in a book on numerical methods (Numerical Recipes is popular). Edit: When I wrote my answer, the last inequality had subscripts rather than superscripts, so it appeared linear. Now I see it is quadratic. That still makes it a rasterization problem although a non-polygonal one. For this specific case you can use circle rasterization to cut out the annulus. In the most general case, you will need polynomial techniques along the lines of my last paragraph. I added some more detail on this.
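To make the annulus case concrete, here is a small Python sketch (the function name and structure are my own) that enumerates the admissible integer pairs one scan-line at a time, computing only the $x$ bounds for each $y$ instead of testing every grid point:

```python
import math

def annulus_points(rmin2, rmax2):
    """Enumerate integer (x, y) with x, y >= 0 and rmin2 <= x^2 + y^2 <= rmax2,
    one scan-line (fixed y) at a time, without testing every grid point."""
    pts = []
    for y in range(int(math.isqrt(rmax2)) + 1):
        hi = math.isqrt(rmax2 - y * y)                   # largest x with x^2 + y^2 <= rmax2
        rem = rmin2 - y * y
        lo = 0 if rem <= 0 else math.isqrt(rem - 1) + 1  # smallest x with x^2 + y^2 >= rmin2
        pts.extend((x, y) for x in range(lo, hi + 1))
    return pts

print(len(annulus_points(100, 200)))  # number of admissible pairs for the example constraints
```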
Given a sorted array of integers, I want to find the number of pairs that sum to $0$. For example, given $\{-3,-2,0,2,3,4\}$, the number of pairs that sum to zero is $2$. Let $N$ be the number of elements in the input array. If I use binary search to find the additive inverse of an element in the array, the cost is $O(\log N)$ per element, so traversing all the elements gives $O(N\log N)$ overall.

> How can I find an algorithm which is of order $O(N)$?
Let $A$ be the sorted input array. Keep two pointers $l$ and $r$ that go through the elements in $A$. The pointer $l$ will go through the "left part" of $A$, that is, the negative integers. The pointer $r$ does the same for the "right part", the positive integers. Below, I outline a pseudocode solution and assume that $0 \notin A$ for simplicity. Also omitted are the checks for the cases where there are only positive or only negative integers in $A$.

```
COUNT-PAIRS(A[1..N]):
  l = index of the last negative integer in A
  r = index of the first positive integer in A
  count = 0
  while(l >= 1 and r <= N)
    if(A[l] + A[r] == 0)
      ++count
      ++r
      --l
      continue
    if(A[r] > -1 * A[l])
      --l
    else
      ++r
  return count
```

The algorithm takes $O(N)$ time, since every iteration moves at least one of the two pointers.
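For completeness, here is a runnable Python version of the same idea; it uses the equivalent "outside-in" formulation with the two pointers starting at the ends of the array, and it assumes the integers are distinct (the names are my own):

```python
def count_zero_sum_pairs(A):
    """Count pairs (i, j), i < j, with A[i] + A[j] == 0 in a sorted array of distinct integers."""
    l, r = 0, len(A) - 1
    count = 0
    while l < r:
        s = A[l] + A[r]
        if s == 0:
            count += 1
            l += 1
            r -= 1
        elif s < 0:
            l += 1        # sum too small: need a larger value on the left
        else:
            r -= 1        # sum too large: need a smaller value on the right
    return count

print(count_zero_sum_pairs([-3, -2, 0, 2, 3, 4]))  # -> 2
```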
We have a six-sided die numbered from 1 to 6, where each face has probability $p =\frac{1}{6}$. My task is to devise an unbiased method to generate a random number from 1 to 10 using the given die. I am aware that the method should map the set $$ \{1,\dots, 6\}^k\mapsto \{1,\dots, 10\},$$ where $k$ is the number of times we throw the die. For sure not all of the tuples will be mapped into the set containing 1 to 10. What I mean is, the method will use a while loop and only stop if a certain tuple is found. My task then is to find those tuples and prove that each of the 10 outcomes has probability $\frac{1}{10}$. I think $k$ should be 2 in this case, but then I am not quite sure how to extract 10 outcomes from the 36 tuples ($6^k$, with $k=2$) so that each has probability $p'=\frac{1}{10}$. Can we maybe also generalize that for $n$ outcomes? $$ \{1,\dots, 6\}^k\mapsto \{1,\dots, n\} $$ I will appreciate any good ideas.
Throw the die until your number is not a six, giving a number x = one to five. Throw the die once more, giving a number y = one to six. There are thirty equally likely combinations. Calculate (x - 1) \* 6 + (y - 1), take the last digit of the result, then add 1. Each last digit from 0 to 9 occurs for exactly three of the thirty combinations, so every result from 1 to 10 has probability 1/10.
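A quick Python enumeration (my own illustration, not part of the original argument) confirms the uniformity by listing all thirty equally likely (x, y) pairs:

```python
from collections import Counter

counts = Counter()
for x in range(1, 6):        # first throw, re-rolled until it is not a six
    for y in range(1, 7):    # second throw
        n = (x - 1) * 6 + (y - 1)   # 0..29, each combination exactly once
        counts[n % 10 + 1] += 1     # last digit, plus 1

print(counts)  # each of 1..10 occurs exactly 3 times out of 30
```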
I understand that using DFS "as is" will not find a shortest path in an unweighted graph. But why is tweaking DFS to allow it to find shortest paths in unweighted graphs such a hopeless prospect? All texts on the subject simply state that it cannot be done. I'm unconvinced (without having tried it myself). Do you know any modifications that will allow DFS to find the shortest paths in unweighted graphs? If not, what is it about the algorithm that makes it so difficult?
*Breadth*-first-search is the algorithm that will find shortest paths in an unweighted graph. There is a simple tweak to get from DFS to an algorithm that will find the shortest paths on an unweighted graph. Essentially, you replace the stack used by DFS with a queue. However, the resulting algorithm is no longer called DFS. Instead, you will have implemented breadth-first-search. The above paragraph gives correct intuition, but over-simplifies the situation a little. It's easy to write code for which the simple swap does give an implementation of breadth first search, but it's also easy to write code that at first looks like a correct implementation but actually isn't. You can find a related cs.SE question on [BFS vs DFS here](https://cs.stackexchange.com/questions/329/do-you-get-dfs-if-you-change-the-queue-to-a-stack-in-a-bfs-implementation). You can find some [nice pseudo-code here.](http://www.ics.uci.edu/~eppstein/161/960215.html)
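As a point of comparison, here is a minimal Python sketch of breadth-first search computing unweighted shortest-path distances; swapping the `deque` for a stack (and `popleft` for `pop`) is exactly the kind of change that turns it back into a DFS-like traversal that no longer guarantees shortest paths:

```python
from collections import deque

def bfs_shortest_paths(adj, source):
    """Unweighted shortest-path distances from `source`.
    `adj` maps each vertex to an iterable of neighbours."""
    dist = {source: 0}
    queue = deque([source])          # the FIFO queue is what makes this BFS, not DFS
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:        # the first time we reach v is via a shortest path
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
print(bfs_shortest_paths(adj, 1))   # {1: 0, 2: 1, 3: 1, 4: 2}
```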
I have found some books about computers, but all of them are about technology. I want something more closely linked to theory.
Opening and doing a quick search in the (classic) *Computational Complexity* book of Arora and Barak ([online draft here](http://theory.cs.princeton.edu/complexity/)), there are 19 occurrences of the word "philosophical", including such subsections as * "On the philosophical importance of $\mathrm{P}$" * "The philosophical importance of $\mathrm{NP}$" * a discussion of randomness in Chapter 16 ("Derandomization, Expanders and Extractors").
I was working with the word2vec gensim model and found it really interesting. I am interested in how an unknown/unseen word, when checked against the trained model, can be matched to similar terms from that model. Is this possible? Can word2vec be tweaked for this? Or does the training corpus need to contain all the words for which I want to find similarity?
Every algorithm that deals with text data has a vocabulary. In the case of word2vec, the vocabulary is comprised of all words in the input corpus, or at least those above the minimum-frequency threshold. Algorithms tend to ignore words that are outside their vocabulary. However there are ways to reframe your problem such that there are essentially no Out-Of-Vocabulary words. Remember that words are simply "tokens" in word2vec. They could be ngrams or they could be letters. One way to define your vocabulary is to say that every word that occurs at least X times is in your vocabulary. Then the most common "syllables" (ngrams of letters) are added to your vocabulary. Then you add individual letters to your vocabulary. In this way you can define any word as either

1. A word in your vocabulary
2. A set of syllables in your vocabulary
3. A combined set of letters and syllables in your vocabulary
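To illustrate the idea, here is a small, self-contained Python sketch (the helper name, greedy strategy, and toy vocabulary are all hypothetical) that breaks an out-of-vocabulary word into known sub-tokens; the word's vector could then be approximated from the vectors of those pieces, which is essentially what subword models such as fastText formalize with character n-grams:

```python
def decompose(word, vocab):
    """Greedily split an out-of-vocabulary word into the longest known
    sub-tokens ("syllables"), falling back to single letters."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):          # try the longest piece first
            if word[i:j] in vocab or j - i == 1:   # single letters are always allowed
                pieces.append(word[i:j])
                i = j
                break
    return pieces

vocab = {"data", "base", "ing", "er"}
print(decompose("databases", vocab))   # ['data', 'base', 's']
```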
I have read that using R-squared for time series is not appropriate because, in a time series context (I know that there are other contexts), R-squared is no longer unique. Why is this? I tried to look this up, but I did not find anything. Typically I do not place much value on R-squared (or adjusted R-squared) when I evaluate my models, but a lot of my colleagues (i.e. business majors) are absolutely in love with R-squared and I want to be able to explain to them why R-squared is not appropriate in the context of time series.
Some aspects of the issue: If somebody gives us a vector of numbers $\mathbf y$ and a conformable matrix of numbers $\mathbf X$, we do not need to know what the relation between them is in order to execute some estimation algebra, treating $y$ as the dependent variable. The algebra will go through irrespective of whether these numbers represent cross-sectional or time series or panel data, or of whether the matrix $\mathbf X$ contains lagged values of $y$, etc. The fundamental definition of the coefficient of determination $R^2$ is $$R^2 = 1 - \frac {SS\_{res}}{SS\_{tot}}$$ where $SS\_{res}$ is the sum of squared residuals from some estimation procedure, and $SS\_{tot}$ is the sum of squared deviations of the dependent variable from its sample mean. Combining these, the $R^2$ will always be uniquely calculated, for a specific data sample, a specific formulation of the relation between the variables, and a specific estimation procedure, subject only to the condition that the estimation procedure is such that it provides point estimates of the unknown quantities involved (and hence point estimates of the dependent variable, and hence point estimates of the residuals). If any of these three aspects change, the arithmetic value of $R^2$ will in general change - but this holds for any type of data, not just time series. So the issue with $R^2$ and time series is not whether it is "unique" or not (since most estimation procedures for time-series data provide point estimates). The issue is whether the "usual" time-series specification framework is technically friendly for the $R^2$, and whether $R^2$ provides some useful information. The interpretation of $R^2$ as "proportion of dependent variable variance explained" depends critically on the residuals adding up to zero. In the context of linear regression (on whatever kind of data), and of Ordinary Least Squares estimation, this is guaranteed only if the specification includes a constant term in the regressor matrix (a "drift" in time-series terminology). In autoregressive time-series models, a drift is in many cases not included. More generally, when we are faced with time-series data, we "automatically" start thinking about how the time series will evolve into the future. So we tend to evaluate a time-series model based more on how well it *predicts future values* than on how well it *fits past values*. But the $R^2$ mainly reflects the latter, not the former. The well-known fact that $R^2$ is non-decreasing in the number of regressors means that we can obtain a *perfect fit* by adding more and more regressors (*any* regressors, i.e. any series of numbers, perhaps totally unrelated conceptually to the dependent variable). Experience shows that a perfect fit obtained this way will also give *abysmal* predictions outside the sample. Intuitively, this perhaps counter-intuitive trade-off happens because by capturing the whole variability of the dependent variable in an estimated equation, we turn unsystematic variability into systematic variability, as regards prediction (here, "unsystematic" should be understood relative to our knowledge - from a purely deterministic philosophical point of view, there is no such thing as "unsystematic variability". But to the degree that our limited knowledge forces us to treat some variability as "unsystematic", the attempt to nevertheless turn it into a systematic component brings prediction disaster).
In fact this is perhaps the most convincing way to show somebody why $R^2$ should not be the main diagnostic/evaluation tool when dealing with time series: increase the number of regressors up to a point where $R^2\approx 1$. Then take the estimated equation and try to predict the future values of the dependent variable.
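Here is a small, self-contained Python illustration of exactly that experiment (the sizes, seed, and regressor counts are arbitrary): regressing pure noise on an increasing number of unrelated regressors drives the in-sample $R^2$ towards 1 while the out-of-sample error typically gets worse.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 60
y = rng.normal(size=n)                      # "dependent variable": pure noise
X = rng.normal(size=(n, 55))                # 55 candidate regressors, unrelated to y

y_train, y_test = y[:40], y[40:]
for k in (1, 10, 39):                       # use the first k regressors
    model = LinearRegression().fit(X[:40, :k], y_train)
    r2_in = model.score(X[:40, :k], y_train)
    mse_out = np.mean((model.predict(X[40:, :k]) - y_test) ** 2)
    print(f"k={k:2d}  in-sample R^2={r2_in:.3f}  out-of-sample MSE={mse_out:.3f}")
```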
Let's say we have two models trained. And let's say we are looking for good accuracy. The first has an accuracy of 100% on training set and 84% on test set. Clearly over-fitted. The second has an accuracy of 83% on training set and 83% on test set. On the one hand, model #1 is over-fitted but on the other hand it still yields better performance on an unseen test set than the good general model in #2. Which model would you choose to use in production? The First or the Second and why?
These numbers suggest that the first model is not, in fact, overfit. Rather, it suggests that your training data had few data points near the decision boundary. Suppose you're trying to classify everyone as older or younger than 13 y.o. If your test set contains only infants and sumo wrestlers, then "older if weight > 100 kg, otherwise younger" is going to work really well on the test set, not so well on the general population. The bad part of overfitting isn't that it's doing really well on the test set, it's that it's doing poorly in the real world. Doing really well on the test set is an indicator of this possibility, not a bad thing in and of itself. If I absolutely had to choose one, I would take the first, but with trepidation. I'd really want to do more investigation. What are the differences between train and test set, that are resulting in such discrepancies? The two models are both wrong on about 16% of the cases. Are they the same 16% of cases, or are they different? If different, are there any patterns about where the models disagree? Is there a meta-model that can predict better than chance which one is right when they disagree?
When I started with artificial neural networks (NN) I thought I'd have to fight overfitting as the main problem. But in practice I can't even get my NN to pass the 20% error rate barrier. I can't even beat my score on random forest! I'm seeking some very general or not so general advice on what one should do to make a NN start capturing trends in data. For implementing the NN I use Theano Stacked Auto Encoder with [the code from the tutorial](https://github.com/lisa-lab/DeepLearningTutorials/blob/master/code/SdA.py) that works great (less than 5% error rate) for classifying the MNIST dataset. It is a multilayer perceptron with a softmax layer on top, with each hidden layer being pre-trained as an autoencoder (fully described in the [tutorial](http://deeplearning.net/tutorial/deeplearning.pdf), chapter 8). There are ~50 input features and ~10 output classes. The NN has sigmoid neurons and all data are normalized to [0,1]. I tried lots of different configurations: number of hidden layers and neurons in them (100->100->100, 60->60->60, 60->30->15, etc.), different learning and pre-train rates, etc. And the best thing I can get is a 20% error rate on the validation set and a 40% error rate on the test set. On the other hand, when I try to use Random Forest (from scikit-learn) I easily get a 12% error rate on the validation set and 25%(!) on the test set. How can it be that my deep NN with pre-training behaves so badly? What should I try?
The problem with deep networks is that they have lots of hyperparameters to tune and a very small region of the solution space that works well. Thus, finding good ones is more of an art than an engineering task. I would start with a working example from the tutorial and play around with its parameters to see how the results change - this gives a good intuition (though not a formal explanation) about the dependencies between parameters and results (both final and intermediate). Also I found the following papers very useful:

* [Visually Debugging Restricted Boltzmann Machine Training with a 3D Example](http://yosinski.com/media/papers/Yosinski2012VisuallyDebuggingRestrictedBoltzmannMachine.pdf)
* [A Practical Guide to Training Restricted Boltzmann Machines](https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf)

They both describe RBMs, but contain some insights on deep networks in general. For example, one of the key points is that networks need to be debugged layer-wise - if a previous layer doesn't provide a good representation of features, further layers have almost no chance to fix it.
Is there any algorithm that tells us how to modify the semantic actions associated with a left-recursive grammar? For example, we have the following grammar and its associated semantic actions:

S $\rightarrow$ id = expr { $S.s = expr.size$ }

S $\rightarrow$ if expr then $S\_1$ else $S\_2$ { $S\_1.t = S.t + 2;$ $S\_2.t = S.t + 2;$ $S.s = expr.size + S\_1.size + S\_2.size + 2;$ }

S $\rightarrow$ while expr do $S\_1$ { $S\_1.t = S.t + 4;$ $S.s = expr.size + S\_1.s + 1;$ }

S $\rightarrow$ $S\_1$ ; $S\_2$ { $S\_1.t = S\_2.t = S.t;$ $S.s = S\_1.s + S\_2.s;$ }

Clearly the non-recursive version of the grammar is:

S $\rightarrow$ id = expr T

S $\rightarrow$ if expr then $S\_1$ else $S\_2$ T

S $\rightarrow$ while expr do $S\_1$ T

T $\rightarrow$ ; $S\_2$ T

T $\rightarrow$ $\epsilon$

But we also need to change the semantic actions accordingly. Any ideas how this can be done?
It's impossible to prove unless you state the algorithm. But let's say your algorithm maintains a candidate index $j$ for the loop iteration, which is updated to $j \leftarrow j+1$ if $v\_j \neq v$. If the loop termination depends on $j$ not updating or reaching $n$, and the output is $j$, then you could say that a valid loop invariant is $v\_i \neq v$ for $1 \leq i < j \le n$.
During my quest to understand back propagation in a more rigorous approach I have come across with the definition of **error signal of a neuron** which is defined as follows for the $j^{\text{th}}$ neuron in the $l^{\text{th}}$ layer: \begin{eqnarray} \delta^l\_j \equiv \frac{\partial C}{\partial z^l\_j} \tag{1}\end{eqnarray} Basically, $\delta^l\_j$ *measures how much the total error changes when the input sum of the neuron is changed* and is used for calculating the weights and biases of the neural network as follows: \begin{eqnarray} \frac{\partial C}{\partial w^l\_{jk}} = a^{l-1}\_k \delta^l\_j \tag{2}\end{eqnarray} \begin{eqnarray} \frac{\partial C}{\partial b^l\_j} = \delta^l\_j. \tag{3}\end{eqnarray} Besides being useful for the calculation of the weights and the biases in the sense it makes it possible to reuse its value several times, is there any other reason why this definition is always brought up when addressing back propagation?
From somebody with a PhD in Probability working with AI/ML for a living. Basics of probability theory first - maybe Wikipedia/Coursera/... for the very basics if you never had a class in it - followed by e.g. “Probability with Martingales” by Williams. The papers [here](https://en.m.wikipedia.org/wiki/List_of_important_publications_in_computer_science#Machine_learning) will also give you a good feel. As for books, [this](https://www.cs.huji.ac.il/%7Eshais/UnderstandingMachineLearning/copy.html.) one on “classical” machine learning is pretty good and free. For deep learning there is <https://www.deeplearningbook.org/>, which also covers the basics of probability theory. For reinforcement learning, <http://incompleteideas.net/book/the-book-2nd.html>. For the applied side, the slides from <https://web.stanford.edu/class/cs224n/> and <http://cs231n.stanford.edu/>. As for statistics, don’t bother; I’ve never seen any machine learning work reference a theorem in “pure” statistics, if there is such a thing. Of course, standard undergraduate calculus and linear algebra. And learn some Python while you’re at it.
Ok, I have done a lot of research on quantum computers. I understand that they are possibly the future of computers and may be commonplace in approximately 30-50 years' time. I know that a classical bit is either 0 or 1, but a qubit can supposedly be both 0 and 1 at the same time. What I don't understand is how it can be anything other than 0 or 1. Surely a computer can only understand on and off, however fast it may be?
Here is the crux of the matter.

> what I don't understand is how it can be anything other than 0 or 1.

This is actually a physics question in disguise, but I think that this forum is still a good place to address it. The two key facts are that information is physical at its root, and that physics is described well by quantum mechanics.

**On the physical storage of information**

What is a "bit"? It's a system with two easily distinguishable states. This could be a zero or a one written on paper; those are clearly distinguishable symbols, and importantly the ink and the paper are fairly stable with time, so that the ink on the paper can be used to "store" that information. (If it weren't stable, books would have to work differently, and the main reason why we have books is for storage of information.) In conventional electronics, we implement bits by having wires with two different voltage levels, "high" and "low" — separated by a gap of *other* voltages, which we actively try to prevent the wire from having; but this gap ensures that the "high" and "low" voltages are easily distinguished from one another. In hard drives, the distinct states are represented by magnetic domains pointing in different directions. But in each case, a bit is represented by the states of a physical system which we can easily distinguish from one another.

**A simple quantum-mechanical system**

An example of a physical system with two easily distinguished states, among very small physical systems, is the orientation of the spin of a single particle, such as a proton. "Spin" is a vector quantity which is akin to angular momentum (hence the term), but does not actually arise from the particle revolving on an axis. Nevertheless, it has a magnetic moment (*i.e.* it acts like a microscopic bar magnet), and we can talk about which direction that bar magnet points: the direction in this case is what "spin" refers to. In particular, we can easily distinguish when it is pointing "up" from when it is pointing "down", *e.g.* using equipment such as in the [Stern–Gerlach experiment](http://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach_experiment). There are other ways that the spin of a proton can point. If we use X, Y, and Z axes, and if "up" is in the +Z direction, we can also consider the spin pointing in the +X direction, the +Y direction, the −X direction, or in fact any direction described by a non-zero vector in three dimensions. Furthermore, the +X direction is distinguishable from the −X direction, and more generally any direction is clearly distinguishable from the opposite direction.

**Measurement of a quantum system**

In the usual representation of qubits, we identify 0 and 1 with the spin-directions +Z and −Z respectively. However, all of the other spin directions, such as +X, −X, +Y, −Y, and all of the other directions in 3D space are also legitimate possibilities for the proton's spin state. The point at which quantum mechanics rears its head is this: *is the +X direction, the +Y direction, or any other direction than −Z clearly distinguishable from +Z*? Obviously, −Z ought to be *the most easily* distinguished state from +Z, but one might guess that we could detect if the spin were pointing in a slightly different direction than +Z.
However, it turns out to be the case that if you attempt to measure the spin of a proton, a single measurement can only distinguish two opposite states: and that if you measure a particle whose spin is *different* from those states, you get an outcome which is **(a)** random and **(b)** indistinguishable from having randomly been in one of the two opposite outcomes in the first place. Furthermore, the spin of the particle changes to be consistent with the measurement outcome — if you measure it again, you will always get the same outcome — so that any information about the original state of the system (aside from the fact that it was not pointing opposite to the outcome which you obtained) is in effect destroyed; only if you have many copies of "the same state", stored by many other particles, can you get the statistical information required to determine what orientation the particle actually had to begin with. *Why* this is the case I cannot tell you, and it's outside of the scope of this forum besides. I can only report that it *apparently is* the case, supported by something approaching a century of experimental observation. **The importance of quantum states as intermediate states** One simple reaction to this state of affairs would be to say: if the other spin directions don't seem to be distinguishable from +Z and −Z, then let's just not use them. This is one thing that you could do from the point of information storage: though avoiding them physically is not really practical, we can try to make fast transitions the way we currently do with voltages in electronics today. However, we aren't really interested in quantum information as a way of describing how to *store* information, but for *transformations* of it — ways that you can use with it, either on your own (computation) or with others (communication) to evaluate functions and obtain the properties of structures represented by the information. While the information is being transformed, we don't necessarily have to care about how to describe the intermediate steps of the computation in terms of distinguishable states; and so this limitation of measurement, and the idea of distinguishing the possible states that a single "quantum bit" may have, is less important. As far as we are able to determine, this freedom to ignore the distinguishability of the various states of a qubit in the midst of a computation is crucial: it is in effect the source of the (likely) power of quantum computation beyond classical computation. Ignoring the distinguishability of *possible* states of particles in mid-computation allows us to make use of the full state-space of a qubit; and importantly, also allows us to explore the full state-space of multi-particle states. Doing so allows us to perform computations in a manner which are popularly described as happening in "multiple worlds"; but given that we can only access the information computed by these multiple "worlds" if we can arrange for most of them to conspire to yield similar answers with high probability by so-called "interference", I think it is much better to recognise that the expanded state-space allows us more flexibility in how we transform information, allowing us literally to short-cut (*i.e.* find a shorter route than one might expect) through computational space to obtain solutions more quickly. What are the nature of these short-cuts? 
Well, although we cannot perfectly distinguish +X from +Z, or −X from −Z, we can certainly perform operations on a quantum bit which transform +Z to +X, and −Z to −X; or in fact any rotation of the direction of spin of a single qubit. This, together with *controlled-not* operations (which may be understood as essentially a reversible *exclusive or* operation on classical bits, where again we represent the classical bits 0 and 1 in terms of the +Z / −Z spin directions), is in principle enough to perform universal quantum computation. But what do these single-qubit rotations mean *in terms of the information stored in a single bit*? Well: like a single-bit NOT operation, there isn't very much that you can do with them on their own. But the distinction between classical computation and quantum computation — and the distinction between mere shared randomness and entanglement — essentially boils down to the fact that these operations are sensible; that the +X state is as sensible a state in its own right as +Z, and that −X is as distinct from +X as +Z is from −Z. These other directions are sometimes described as being "in between" +Z and −Z, but it is equally correct to say that +Z and −Z are "in between" +X and −X. The different axes along which a qubit's state may point are in themselves equally legitimate, and in their own way *completely determined* (as opposed to random) states which may represent pieces of information. In short: a quantum bit may meaningfully store information *in a manner other than the states which you use to represent 0 and 1*, in a way which is important when considering the intermediate steps of a computation. That is as much as I think anyone can tell you about quantum bits, without actually getting into physics, or linear algebra over the complex numbers.
I would like to learn both Python and R for use in data science projects. I am currently unemployed, fresh out of university, scouting around for jobs, and thought it would be good if I got some Kaggle projects under my profile. However, I have very little knowledge of either language. I have used Matlab and C/C++ in the past, but I haven't produced production-quality code or developed an application or software in either language; it has been quick-and-dirty coding for academic use all along. I have used a little bit of Python in a university project, but I don't know the fundamentals, like what a package is, etc., i.e. I haven't studied the intricacies of the language with a standard Python textbook. I did some amount of coding in C/C++ way back (3-4 years ago), then switched over to Matlab/Octave. I would like to get started with Python, NumPy, SciPy, scikit-learn, pandas, etc., but just reading Wikipedia articles or Python textbooks is going to be infeasible for me. The same goes for R, except that I have zero knowledge of R. Does anyone have any suggestions?
I have found that the video tutorial/IPython notebook format really helped me get into the Python ecosystem. There were two tutorials at SciPy 2013 that cover sklearn ([part 1 of the 1st tutorial](https://www.youtube.com/watch?v=r4bRUvvlaBw), [github repo for the notebooks](https://github.com/jakevdp/sklearn_scipy2013)). Similar tutorials, from PyCon 2012 and PyData 2012, are out there for pandas, but I don't have the rep to link them; searching for `pandas tutorial` on YouTube should allow you to find them. Since you mention Kaggle, I guess you will have seen their getting-started-with-Python tutorial for the Titanic passenger dataset (I don't have the rep here to provide a link, but searching for `Getting Started with Python: Kaggle's Titanic Competition` should get you there).
A terminology problem. In machine learning we have the following problem: choosing the optimal model (or training): $$ f^\* = \arg\min\_{f \in \mathcal{F}} \sum\_i l(f(x\_i),y\_i) $$ Is the term "model selection" always "exactly" referring to this? Or something else?
[Gnuplot](http://www.gnuplot.info/index.html) is free, open source and highly versatile and what I use and I think it will meet your needs. You can point and click with the mouse to zoom in and out on any part of a graph, and you can even write a script to scroll through the data as if watching a film.
I am looking for some interesting applications of regular expressions. Can you name any unusual, or unobvious, cases where regexes find their application?
I don't know if this question belongs here (the answer could be subjective and depends on your definition of "unusual") but here is my favorite unusual application of regex:

> converting [T9 input (2-9)](https://en.wikipedia.org/wiki/T9_%28predictive_text%29) to English text.

For example, if the user wants to write `hello` they press `43556`. Convert the input to `[ghi][def][jkl][jkl][mno]` and test this regex against the whole vocabulary: the word `hello` will match.
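A minimal Python sketch of this trick (the keypad table and function name are my own):

```python
import re

T9 = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
      "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

def t9_matches(digits, vocabulary):
    """Return the vocabulary words that the key sequence could spell."""
    pattern = re.compile("^" + "".join(f"[{T9[d]}]" for d in digits) + "$")
    return [w for w in vocabulary if pattern.match(w)]

print(t9_matches("43556", ["hello", "happy", "gallo", "hells"]))  # -> ['hello']
```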
I am a 2nd-year graduate student in theory. I have been working on a problem for the last year (in graph theory/algorithms). Until yesterday I thought I was doing well (I was extending a theorem from a paper). Today I realized that I had made a simple mistake, and that it will be much harder than I thought to do what I intended to do. I feel so disappointed that I am thinking about leaving grad school. Is it a common situation that a researcher notices that her idea is not going to work after a considerable amount of work? What do you do when you realize that an approach you had in mind is not going to work and the problem seems too difficult to solve? What advice would you give to a *student* in my situation?
This is very common, and certainly frustrating. Here is my advice: Don't wait until you have a complete result to start writing. Maintain a TeX document with formal descriptions of your problem, proofs of preliminary lemmas, etc. as you go. It is easy to convince yourself that something is true and overlook simple mistakes if you are holding the argument only in your head, but the process of writing something down formally forces you to find these mistakes. If you wait to write until the end, you might not find the mistake until you have already expended a lot of effort; if you write as you go, you can find the errors more quickly.
To demonstrate the correctness of a frequentist estimator, it is common to simulate an experiment N times (with N being large), then show that 95% of the resulting N confidence intervals cover the true parameter values. What's the equivalent simulation exercise for Bayesian model? The Bayesian credible interval quantifies my belief about the parameter *given this one particular dataset that I got*, so it doesn't make sense to simulate N experiments for N new datasets. That's where I got stuck in my thinking. **What I want to achieve specifically**: I want to check whether my Stan model is implemented correctly. As recommended by the Stan manual, I generate mock data with a known DGP and fit my Stan model to it. Sometimes the resulting 95% credible interval covers the true value, sometimes not. My first reaction is to re-run this process N times and to check whether 95% of N times, my credible interval covers the true value. Is this valid?
It sounds as if you are looking for the procedure described in this [paper](https://scholar.google.com/scholar?start=10&hl=en&as_sdt=0,33&cluster=10223484718885906634 "Validation of Software for Bayesian Models Using Posterior Quantiles"), which is implemented in the **BayesValidate** R [package](https://cran.r-project.org/package=BayesValidate) and the `pp_validate` function in the **rstanarm** R [package](https://cran.r-project.org/package=rstanarm). Briefly, you draw repeatedly from the *prior* predictive distribution of the model to create simulated datasets, condition on each simulated dataset to draw from its posterior distribution, and use the quantiles of these posterior distributions to conduct a statistical test of the null hypothesis that the software is correct.
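To make the procedure concrete, here is a small self-contained sketch in plain NumPy (a conjugate normal-normal model stands in for the Stan model so the posterior is available in closed form; with Stan you would replace the analytic posterior draws with MCMC draws). If the "software" is correct, the rank of the true parameter among its posterior draws is uniform across replications:

```python
import numpy as np

rng = np.random.default_rng(1)
mu0, tau0, sigma, n = 0.0, 1.0, 1.0, 20      # prior N(mu0, tau0^2), known data sd sigma
ranks = []
for _ in range(1000):
    theta = rng.normal(mu0, tau0)            # 1. draw the parameter from the prior
    y = rng.normal(theta, sigma, size=n)     # 2. simulate data given that parameter
    # 3. posterior for the conjugate model (would be MCMC draws in Stan)
    prec = 1 / tau0**2 + n / sigma**2
    post_mean = (mu0 / tau0**2 + y.sum() / sigma**2) / prec
    post_draws = rng.normal(post_mean, np.sqrt(1 / prec), size=999)
    # 4. rank of the true theta among posterior draws; should be uniform
    ranks.append(np.sum(post_draws < theta))

print(np.histogram(ranks, bins=10)[0])       # roughly flat if the implementation is correct
```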
In hypothesis testing we set an accepted level of Type I error probability $\alpha$ and observe whether a sample statistic is equally likely or less likely to be observed if the null hypothesis was true. The exact probability of observing a sample score or more extreme under the null is the $p$ value. More generally we reject if $\alpha>p$. I am now wondering about the following. The $p$ value seems to give an exact estimate of the probability of falsely rejecting a true null hypothesis (if we decide to do so), which is akin to the Type I error definition. Since we know (estimate) the probability of observing the sample score (or more extreme values), $\alpha$ seems to be a maximum acceptable Type I error, whereas $p$ is exact. Put differently it appears to give the minimum $\alpha$ level under which we could still reject the null. Is this correct?
The $p$-value is not "an exact estimate of the probability of falsely rejecting a true null hypothesis". This probability is fixed by construction of an $\alpha$-level test. Rather, it is an estimate of the probability that other realisations of the experiment are more extreme than the actual realisation. Only if the present realisation belongs to the top $\alpha$ extreme realisations do we reject the null hypothesis. But it is right that you can imagine the $p$-value to be the minimum $\alpha$ such that, had this $\alpha$ been chosen, the test would be on the border between significance and insignificance for the present data. Maybe a different explanation helps: we say that we reject the null hypothesis iff the present outcomes can be shown to belong to the most extreme $100 \alpha \%$ of possible outcomes, provided the null hypothesis holds. The $p$-value just indicates how extreme our outcomes actually are.
Stable Marriage Problem: <http://en.wikipedia.org/wiki/Stable_marriage_problem> I am aware that for an instance of a SMP, many other stable marriages are possible apart from the one returned by the Gale-Shapley algorithm. However, if we are given only $n$ , the number of men/women, we ask the following question - Can we construct a preference list that gives the maximum number of stable marriages? What is the upper bound on such a number?
For an instance with $n$ men and $n$ women, the trivial upper bound is $n!$, and nothing better is known. For a lower bound, [Knuth (1976)](http://www-cs-faculty.stanford.edu/~uno/ms.html) gives an infinite family of instances with $\Omega(2.28^n)$ stable matchings, and [Thurber (2002)](http://dx.doi.org/10.1016/S0012-365X%2801%2900194-7) extends this family to all $n$.
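If you want to experiment with small instances, here is a brute-force Python counter (purely illustrative; it scans all $n!$ matchings, so it is only feasible for tiny $n$):

```python
from itertools import permutations

def is_stable(match, men_pref, women_pref):
    """match[m] = woman assigned to man m; return True iff no blocking pair exists."""
    n = len(match)
    husband_of = {w: m for m, w in enumerate(match)}
    for m in range(n):
        for w in range(n):
            if w == match[m]:
                continue
            m_prefers = men_pref[m].index(w) < men_pref[m].index(match[m])
            w_prefers = women_pref[w].index(m) < women_pref[w].index(husband_of[w])
            if m_prefers and w_prefers:          # (m, w) is a blocking pair
                return False
    return True

def count_stable(men_pref, women_pref):
    n = len(men_pref)
    return sum(is_stable(p, men_pref, women_pref) for p in permutations(range(n)))

# If all men share one ranking and all women share another, the stable matching is unique:
men = [[0, 1, 2]] * 3
women = [[2, 1, 0]] * 3
print(count_stable(men, women))   # -> 1
```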
I need to convert table A to table B. How can I do that using R?

**TABLE A**

```
Y 10
Y 12
Y 18
X 22
X 12
Z 11
Z 15
```

**TABLE B**

```
X 22 12
Y 10 12 18
Z 11 15
```
I asked a [similar question](https://stats.stackexchange.com/questions/168/choosing-a-bandwidth-for-kernel-density-estimators) a few months ago. Rob Hyndman provided an excellent [answer](https://stats.stackexchange.com/questions/168/choosing-a-bandwidth-for-kernel-density-estimators/179#179) that recommends the Sheather-Jones method. One additional point: in R, for the `density` function, you set the bandwidth explicitly via the `bw` argument. However, I often find that the `adjust` argument is more helpful. The `adjust` argument scales the value of the bandwidth, so `adjust=2` means double the bandwidth.
I am looking for examples of results which go against people's intuition for a general audience talk. Results which if asked from non-experts "what does your intuition tell you?", almost all would get it wrong. Results' statement should be easily explainable to undergraduates in cs/math. I am mainly looking for results in computer science. What are the most counterintuitive/unexpected results (of general interest) in your area?
For a general audience you have to stick to things that they can *see*. As soon as you start theorizing they'll start up their mobile phones. Here are some ideas which could be worked out to complete examples:

1. There is a surface which [has only one side](https://en.wikipedia.org/wiki/Mobius_strip).
2. A curve may [fill an entire square](https://en.wikipedia.org/wiki/Space_filling_curve).
3. There are [constant width curves](https://en.wikipedia.org/wiki/Curve_of_constant_width) other than a circle.
4. It is possible to color the plane with three colors in such a way that [every border point is a tri-border](https://en.wikipedia.org/wiki/Newton_fractal).

If you can rely on a bit of mathematical knowledge, you can do more:

1. There are as many odd numbers as there are natural numbers.
2. There is a [continuous and nowhere differentiable function](https://en.wikipedia.org/wiki/Weierstrass_function).
3. There is a function $f : \mathbb{R} \to \mathbb{R}$ which is discontinuous at all rational numbers and differentiable at all irrational numbers.
4. The [Banach-Tarski "paradox"](https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox).

For programmers you can try:

1. The [impossible functionals](http://math.andrej.com/2007/09/28/seemingly-impossible-functional-programs/): there is a program which takes a predicate `p : stream → bool`, where `stream` is the datatype of infinite binary sequences, and returns `true` if and only if `p α` is `true` for *all* streams `α` (that's uncountably many), and `false` otherwise.
2. It is possible to [play poker by telephone](http://www.math.stonybrook.edu/~scott/blair/How_play_poker.html) in a trusted way which prevents cheating.
3. A group of people can calculate their average salary without anybody finding out any other person's salary.
4. There is a program which [constructs a *binary* tree $T$](http://math.andrej.com/2006/04/25/konigs-lemma-and-the-kleene-tree/) with the following properties:
   * the tree $T$ is infinite
   * there is no program that traces an infinite path in $T$
I am trying to model time-elapsed data (time from event A to event B) and am stuck on deciding between standard multiple linear regression vs. Poisson regression. A number of papers published on a similar topic seem to use Poisson regression, but in my head I've always associated the Poisson distribution with "count" data. What are your thoughts? Thanks!
I give you the perspective of industry. Industries don't like to spend money on sensors and monitoring systems when they don't know how much they will benefit from them. For instance (I don't want to name names), imagine a component with 10 sensors gathering data every minute. The asset owner turns to me and asks, how well can you predict the behavior of my component with the data from these 10 sensors? Then they perform a cost-benefit analysis. Then they have the same component with 20 sensors, and they ask me again, how well can you predict the behavior of my component with the data from these 20 sensors? They perform another cost-benefit analysis. In each of these cases, they compare the benefit with the investment cost due to sensor installation. (This is not just adding a $10 sensor to a component; a lot of factors play a role.) Here is where a variable selection analysis can be useful.
I'm solving a problem where I have this 'expectation': $$ \int\_{0}^y x\cdot f(x) dx $$ where $f(x)$ is a *PDF* with support on $[0, z]$, with $z>y$. Is there a way to rewrite it without the integral and as a function of the *CDF*? I've tried integration by parts, but without great success: $$ \int\_{0}^y x\cdot f(x) dx = y\cdot F(y) -\int\_0^y F(x) dx $$ I have a hard time simplifying the second part.
For cdfs $F$ of distributions with supports on $(0,a)$, $a$ being possibly $+\infty$, a useful representation of the expectation is $$\mathbb{E}\_F[X]=\int\_0^a x \text{d}F(x)=\int\_0^a \{1-F(x)\}\text{d}x$$ by applying integration by parts, \begin{align\*}\int\_0^a x \text{d}F(x)&=-\int\_0^a x \text{d}(1-F)(x)\\&=-\left[x(1-F(x))\right]\_0^a+\int\_0^a \{1-F(x)\}\text{d}x\\&=-\underbrace{a(1-F(a))}\_{=0}+\underbrace{0(1-F(0))}\_{=0}+\int\_0^a \{1-F(x)\}\text{d}x\end{align\*} In the special case when $a=+\infty$, $$\lim\_{x\to\infty}x(1-F(x))=\lim\_{x\to\infty}\frac{1-F(x)}{1/x} =\lim\_{x\to\infty}\frac{-f(x)}{-1/x^2}=\lim\_{x\to\infty}x^2f(x)=0$$ by [L'Hospital's rule](https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule) and the fact that $xf(x)$ is integrable at infinity (the expectation $\mathbb E\_F[X]$ is assumed to exist). In the current case, one can turn the integral into an expectation as $$\int\_0^y x\text{d}F(x)=F(y)\int\_0^y x\frac{\text{d}F(x)}{F(y)}=\mathbb{E}\_{\tilde{F}}[X]$$with $$\tilde{F}(x)=F(x)\big/F(y)\mathbb{I}\_{(0,y)}(x)$$Thus $$\int\_0^y x\text{d}F(x)=F(y)\int\_0^y \{1-F(x)\big/F(y)\}\text{d}x$$ which is the representation that you found.
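As a quick numerical sanity check of that last identity (my own addition, using an Exp(1) distribution and SciPy quadrature as the example):

```python
import numpy as np
from scipy.integrate import quad

F = lambda x: 1 - np.exp(-x)      # cdf of Exp(1)
f = lambda x: np.exp(-x)          # pdf of Exp(1)
y = 1.3

lhs, _ = quad(lambda x: x * f(x), 0, y)                     # integral of x dF(x) on (0, y)
rhs = F(y) * quad(lambda x: 1 - F(x) / F(y), 0, y)[0]       # the derived representation
print(lhs, rhs)                   # the two numbers agree
```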
I am a computer scientist performing research which includes calculating the Spearman rank correlation of two lists, one ranked by a human, another by a computer program. I have the following questions:

1. I was reading this [post](https://stats.stackexchange.com/questions/18887/how-to-calculate-a-confidence-interval-for-spearmans-rank-correlation) and have become concerned with how large my sample size has to be to reach a reasonable confidence level.
2. Also, if I have understood correctly, the Fisher transform gives you a confidence interval after you have computed the Spearman rank correlation. If possible, I would like to know the ideal sample size before I start the experiment.
3. On interpreting the value given to you by the Spearman rank correlation: I have read about the [null hypothesis](https://statistics.laerd.com/statistical-guides/spearmans-rank-order-correlation-statistical-guide-2.php)

   > The general form of a null hypothesis for a Spearman correlation is: H0: There is no association between the two variables [in the population].

   However, what I want to prove is the opposite, i.e. that there is a very close association between the two variables. How is this done?
If you are concerned about sample size and significance, good concepts to start out with include effect size and power (while on the topic of CIs, you might want to look into accuracy as well). As noted previously, a 95% CI refers not to probability but to confidence; it is not the likelihood that the current CI contains the population parameter, but rather that out of 100 CIs, 95 will succeed in capturing the population parameter. The true probability remains unknown (unless you take a Bayesian approach); see [Explorations in statistics: confidence intervals](http://advan.physiology.org/content/33/2/87.full).
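For a rough pre-study sample-size estimate, one common shortcut is the Fisher z approximation for a correlation test; it is derived for Pearson's r and is only an approximation for Spearman's rho, so treat the result as a ballpark figure (the function below is my own sketch):

```python
import numpy as np
from scipy.stats import norm

def approx_n(rho, alpha=0.05, power=0.80):
    """Approximate sample size to detect a true correlation `rho` (two-sided test),
    via the Fisher z transformation."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return int(np.ceil(((z_a + z_b) / np.arctanh(rho)) ** 2 + 3))

print(approx_n(0.5))   # roughly 30 for a true correlation of 0.5
```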
Is it possible to have more than one unbiased estimator for a single unknown parameter? If "yes", then how, and if "no", then why?
As an example, from a i.i.d. sample of (finite) size $n$, where the common mean is $\mu \neq 0$ we can have an *infinite* (and not even countably) number of unbiased estimators of the form $$\hat \mu(a) = aX\_i + (1-a)X\_{j}, \;i\neq j, \;a \in \mathbb R$$ The number of estimators is uncountably infinite because $\mathbb R$ has the cardinality of the continuum. And that's just one way to obtain *so many* unbiased estimators.
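A tiny simulation (my own illustration) makes the point tangible: the sample mean and, say, $\hat\mu(0.7) = 0.7X\_1 + 0.3X\_2$ both average out to $\mu$ over repeated samples, even though the latter has much higher variance:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 5.0, 30, 100_000
samples = rng.normal(mu, 2.0, size=(reps, n))

print(samples.mean(axis=1).mean())                          # sample mean: ~5.0
print((0.7 * samples[:, 0] + 0.3 * samples[:, 1]).mean())   # a*X_1 + (1-a)*X_2: ~5.0
```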
Can anyone recommend an easy way to cluster hundreds of GPS trajectories to find out their common paths? The GPS data is coming from different vehicles that have traveled thousands of miles.
It seems you just need to estimate the spatial distribution of "good" locations belonging to ordinary paths in order to detect outliers, which is a much nicer problem than path clustering. The naive but likely sufficient way is to convert the entire path bundle into a density raster with a resolution equal to your intended tolerance (~100m), and use it to raise an alert whenever the vehicle detours onto an empty pixel (or one below some threshold, in case your data already contain outliers).
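A minimal NumPy sketch of that density-raster idea (the cell size, threshold, and the random stand-in data are all placeholders for your own projected GPS coordinates):

```python
import numpy as np

cell = 100.0                                   # cell size in metres (projected coordinates)
xy = np.random.rand(10_000, 2) * 5_000         # stand-in for historical track points

ij = np.floor(xy / cell).astype(int)
shape = ij.max(axis=0) + 1
counts = np.zeros(shape, dtype=int)
np.add.at(counts, (ij[:, 0], ij[:, 1]), 1)     # density raster: visits per cell

def is_detour(point, threshold=3):
    """Flag a new fix that falls outside the raster or in a rarely visited cell."""
    i, j = np.floor(np.asarray(point) / cell).astype(int)
    inside = (0 <= i < shape[0]) and (0 <= j < shape[1])
    return (not inside) or counts[i, j] < threshold

print(is_detour([2500.0, 2500.0]))
```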
I have a longitudinal data set of individuals and some of them were subject to a treatment and others were not. All individuals are in the sample from birth until age 18 and the treatment happens at some age in between that range. The age of the treatment may differ across cases. Using propensity score matching I would like to match treated and control units in pairs with exact matching on the year of birth such that I can track each pair from their birthyear until age 18. All in all there are about 150 treated and 4000 untreated individuals. After the matching the idea is to use a difference-in-differences strategy to estimate the effect of the treatment. The problem I face at the moment is to do the matching with panel data. I am using Stata's `psmatch2` command and I match on household and individual characteristics using propensity score matching. In general with panel data there will be different optimal matches at each age. As an example: if A is treated, B and C are controls, and all of them were born in 1980, then A and B may be matched in 1980 at age 0 whilst A and C are matched in 1981 at age 1 and so on. Also A may be matched with its own pre-treatment values from previous years. To get around this issue, I took the average of all time-varying variables such that the matching can identify individuals who are on average the most similar over the duration of the sample and I do the matching separately for each age group 0 to 18. Unfortunately this still matches a different control unit to each treated unit per age group. If someone could direct me towards a method to do pairwise matching with panel data in Stata this would be very much appreciated.
Steps:

1. As has been mentioned in detail by Greg, you can use a cross-sectional dataset, either of pre-treatment means or of a specific pre-treatment period, to generate the matching.
2. Using the whole panel you assign indicator variables for (a) treatedIndividual and (b) treatedPeriod; the latter is equal to zero until the treatment occurs for the treatedIndividual. Since the point in time where treatedPeriod changes from 0 to 1 varies across individuals and never turns to 1 for the untreated, you must assign the same starting point from the treated match to the untreated match. This is intuitive, but I would still like to see a good reference that justifies this approach, which I have not found so far. The regression set-up would be:

```
depvar = treatedIndividual + treatedPeriod + treatedIndividual*treatedPeriod + controls
```

where the interaction term gives you the treatment effect.
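Although the question is about Stata, the regression itself is just a two-way interaction; purely as an illustration, here is the same difference-in-differences set-up on a small synthetic matched panel in Python with statsmodels (all names and numbers are made up):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pair in range(150):                            # one treated unit + one matched control
    t_age = rng.integers(1, 18)                    # age at which the treatment occurs
    for treated in (1, 0):
        for age in range(19):                      # observed from birth until age 18
            post = int(age >= t_age)               # same "clock" for both members of the pair
            y = 1.0 * treated + 0.5 * post + 2.0 * treated * post + rng.normal()
            rows.append(dict(depvar=y, treatedIndividual=treated,
                             treatedPeriod=post, age=age, pair=pair))
df = pd.DataFrame(rows)

m = smf.ols("depvar ~ treatedIndividual * treatedPeriod + age", data=df).fit()
print(m.params["treatedIndividual:treatedPeriod"])   # recovers ~2, the simulated effect
```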
Activation functions are used to introduce non-linearities in the linear output of the type `w * x + b` in a neural network. This I am able to understand intuitively for activation functions like the sigmoid. I understand the advantages of ReLU, which is avoiding dead neurons during backpropagation. However, I am not able to understand why ReLU is used as an activation function if its output is linear. Doesn't the whole point of the activation function get defeated if it doesn't introduce non-linearity?
> I understand the advantages of ReLU, which is avoiding dead neurons during backpropagation.

This is not completely true. The neurons are not dead. If you use sigmoid-like activations, after some iterations the value of the gradients saturates for most of the neurons. The value of the gradient will be so small that the process of learning happens very slowly. This is the vanishing and exploding gradients problem that occurs with sigmoid-like activation functions. Conversely, dead neurons may happen if you use the `ReLU` non-linearity, which is called [*dying ReLU*](https://datascience.stackexchange.com/questions/5706/what-is-the-dying-relu-problem-in-neural-networks).

> I am not able to understand why ReLU is used as an activation function if its output is linear

Definitely it is not linear. As a simple definition, a linear function is a function which has the same derivative for all inputs in its domain.

> [The linear function](http://www.columbia.edu/itc/sipa/math/linear.html) is popular in economics. It is attractive because it is simple and easy to handle mathematically. It has many important applications. Linear functions are those whose graph is a straight line. A linear function has the following property: $f(ax + by) = af(x) + bf(y)$

[`ReLU` is not linear](https://www.quora.com/Why-is-ReLU-non-linear). *The simple answer is that `ReLU`'s output is not a straight line; it bends at the x-axis. The more interesting point is what the consequence of this non-linearity is. In simple terms, linear functions allow you to dissect the feature plane using a straight line. But with the non-linearity of `ReLU`s, you can build arbitrarily shaped curves on the feature plane.*

`ReLU` may have a disadvantage, which is its expected value. There is no limit on the output of `ReLU`, and its expected value is not zero. `Tanh` was more popular than `sigmoid` because its expected value is equal to zero and learning in deeper layers occurs more rapidly. Although `ReLU` does not have this advantage, `batch normalization` solves [this problem](https://datascience.stackexchange.com/q/23493/28175). You can also refer [here](https://stats.stackexchange.com/q/299915/179078) and [here](https://stats.stackexchange.com/q/141960/179078) for more information.
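A one-line numerical check (my own addition) that ReLU violates the linearity property quoted above:

```python
import numpy as np

relu = lambda v: np.maximum(v, 0)
a, b, x, y = 1.0, 1.0, 2.0, -3.0
# f(a*x + b*y) vs a*f(x) + b*f(y): 0.0 vs 2.0, so ReLU is not linear
print(relu(a * x + b * y), a * relu(x) + b * relu(y))
```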
I'm designing an expression language that's trying to (a) be maximally compatible with a different ambiguous language; and (b) be LR(1). I'm facing the current fragment of the language: $$ \begin{align} S & → T \quad | \quad \texttt{prefix} \quad T \quad S \\ T & → F \quad | \quad T \texttt{-} F \quad | \quad \texttt{-} F \\ F & → \texttt{1} \end{align} $$ The tokens `prefix` and `1` and `-` are terminals. Note that `prefix 1 - 1 - 1` has at least two parses: `prefix (1-1) (-1)` and `prefix 1 (-(1-1))`. I'm willing to make small changes to the language to resolve this ambiguity, but I would prefer only making grammar changes that resolve just this ambiguity. Are there local transformations I can make? Global ones? I think inserting a new token, e.g. `:`, between T and S in the `prefix` production should disambiguate the grammar. Is it possible to transform the grammar such that it (a) becomes LR(1); and (b) encodes the rule "always parse the shortest possible substring as the T part in a prefix production"? (I think I cannot do the longest string—it seems hard to know that what follows will fail to parse as an S. It will become even harder once I add in the rest of the language.)
I'm going to limit myself, at least for now, to the question actually asked here: how to deal with the ambiguity between an operator which could be prefix or infix (such as unary negation) and a prefix operator which takes two consecutive arguments. There is a second interesting question, which has to do with prefix operators which bind less tightly than (some) infix operators. These two language features are not really related, but they are similar in that the naïve grammars are ambiguous, and the ambiguities are difficult to resolve in a strict LR(k) grammar. They are also similar in the fact that many languages suffer from inadequate resolutions of these ambiguities, or resolve them in ways that are poorly understood by language users, in part because the resolutions are hard to document. Ambiguity with prefix negation ------------------------------ Since prefix negation and infix difference operators generally use the same symbol (`-`), an ambiguity is created in any syntactic construct in which two expressions can appear consecutively without an intervening token. Many languages have such constructs. They include: * Prefix operators which take more than one argument, as in your question. * **Implicit operators** such as implied multiplication (`2x` instead of `2*x`) and function application (as in Haskell, where `f a` calls `f` with argument `a`, similar to `f(a)` in many other languages). Other examples include Awk's implicit string concatenation syntax, where `a b` represents the concatenation of the two variables. * **Undelimited sequences**, usually lists or tuples, where consecutive elements are simply specified one after another and the elements can be expressions. One early language with this feature is Logo, but there are many others. * **Undelimited statements**. In languages in which statements can be written consecutively without a separating delimiter (Lua, for example), two expressions can be consecutive if an expression can be a statement, or more generally if there are statement syntaxes which start with an expression (such as assignment statements) and other syntaxes which end with an expression (such as `return` statements). That's not an exhaustive catalogue, but it gives an idea of the range of the issue. There are three broad strategies to resolve this ambiguity: 1. Require the first expression (or both expressions) to be either an atomic term or a parenthesised subexpression. If the ambiguity is the result of an implicit operator, this resolution will be natural if the implicit operator's binding precedence is at the top of the precedence list; that's the case with Haskell, for example, where function application takes precedence over any other operator. This strategy is also sometimes used for undelimited lists, allowing a list of three elements to be written `[a b c]` if the components are simple, but requiring `[a (b + c) d]` for more complicated components. 2. Prohibit the second expression from starting with an ambiguous prefix operator. Or, in other words, resolve the ambiguity in favour of the infix operator, when there is a choice. This is the preferred solution for implicit multiplication, and it follows the Principle of Least Astonishment, since resolving `2-x` as `2*(-x)` would astonish most users. This does not require the implicit operator to have maximum precedence; indeed, most grammars with implicit multiplication would parse `2x^4` (where `^` is the exponentiation operator) as `2*(x^4)`, again to avoid surprises. 
(Different parsers resolve `2a/3b` in different ways, so there can still be surprises. Some people feel that implicit multiplication should bind more strongly than explicit multiplication and division; others that implicit and explicit multiplication should bind equally. But that's a different issue.) 3. Resolve the ambiguity between unary negation and binary difference during lexical analysis. For example, Logo requires that the unary negation operator either be preceded by an open parenthesis or some token which cannot be part of an expression, so that binary difference is not a possible interpretation, or that it be written with at least one space before and no spaces after. The second rule allows `a -10 * b` to be interpreted as the two expressions `a` and `((-10)*b)`, while `a - 10 * b` would necessarily be interpreted as a single expression. A similar, but more complicated, set of whitespace-aware rules was proposed for the Frontier language (never implemented, to my knowledge); it was criticised for being too subtle for code readers. Solution one is appropriate for Haskell. Since it's simply an operator precedence rule, it's easy to describe. The expression `fmap f (Just x)` needs to be written that way because function application is left-associative; `fmap f Just x` would be `(((fmap f) Just) x)` so the requirement to parenthesise `(Just x)` is pretty clear, in the same way that you would have to write `a - (b - c)`, if that's what you meant. (It's worth noting that Haskell also has an explicit application operator, `$`, with low binding precedence, which can sometimes be used to avoid parentheses.) Solution three is certainly possible, but whitespace-aware syntax is a common source of confusion for casual users. (And, as noted above, is easy to miss when you're reading code.) So my preference would be solution two: when both interpretations of the unary/binary operator are possible, always prefer the binary interpretation. That's often simply the expected interpretation, although it can lead to surprises when the consecutive expressions are separated by a newline; even then, the rule is easy to explain. It's also easy to implement. The basic idea is to define two different expression non-terminals. One is unrestricted; it is used for the first expression in a consecutive sequence. The other one does not accept any expression whose first token is an ambiguous unary operator. (If the language has unambiguous unary operators, they don't need to be restricted.) If you were using a parser generator which accepted the formalism in the ECMAScript standard, this would be trivial. Unfortunately, few (if any) parser generators allow templated non-terminals, so the implementation requires annoying code duplication in the grammar. In yacc/bison syntax, assuming that operator precedence is established with prior precedence declarations: ``` expr: expr '+' expr | expr '-' expr | expr '*' expr /* other binary operators omitted */ | '-' expr %prec UNOP | '+' expr %prec UNOP | CONSTANT | IDENTIFIER | '(' expr ')' expr_follow : expr_follow '+' expr | expr_follow '-' expr | expr_follow '*' expr /* ... other binary operators */ /* ambiguous unary operators omitted */ | CONSTANT | IDENTIFIER | '(' expr ')' ``` Note that the `expr_follow` restriction cascades only through the first operand of the productions; the operands following an operator or `(` do not need to be restricted since they will not affect the first token in the expression. 
If you were using cascading-precedence style rather than precedence operators, you'd need to create two non-terminals at each precedence level, something like this: ``` expr: additive additive : additive '+' multiplicative | additive '-' multiplicative | multiplicative multiplicative : multiplicative '*' negative | multiplicative '/' negative | multiplicative '%' negative | negative negative : '-' negative | exponential exponential : unary '^' negative | unary unary : '&' unary /* '&' represents an unambiguous unary operator */ | atom atom: CONSTANT | IDENTIFIER | '(' expr ')' expr_follow : additive_follow additive_follow : additive_follow '+' multiplicative | additive_follow '-' multiplicative | multiplicative_follow multiplicative_follow : multiplicative_follow '*' exponential | multiplicative_follow '/' exponential | multiplicative_follow '%' exponential | exponential ``` In the first stanza, I would usually simplify `negative` to: ``` negative : '-' negative | unary '^' negative | unary ``` I wrote it the way I did to better illustrate the difference between the two cascades. The second cascade does not have `negative` because the unary negative cannot appear at the start of `expr_follow` (so there is no `negative_follow`). `exponential`, `unary` and `atom` do not need to be duplicated because none of them can start with unary negative. Read this if you're considering designing a language with consecutive expressions --------------------------------------------------------------------------------- On the whole, a syntax which allows two consecutive expressions is a snakepit of perpetual syntactic problems. Each ambiguity needs to be identified and then resolved in the parser. Since there is no universal principle which determines a unique resolution, the language designer needs to think through the possible disambiguations and choose the one they feel is best suited to the problem domain. Each such resolution is, in general, easy to implement once it has been precisely spelled out, but the rules can be very difficult to describe or justify to users. That can lead to mysterious bugs, in which a syntactic construct was resolved in an unexpected way, or mysterious syntax errors, in which a syntactic construct which seems reasonable to the coder was banned for ease of implementation. One frequent consequence in languages which develop to the point of having style guides and linters, is a stylistic ban on precisely these syntactic constructs; either alternative syntaxes or redundant parentheses are required, and these can even give rise to the language implementation itself issuing warnings.
Memory is used for many things, as I understand. It serves as a disk-cache, and contains the programs' instructions, and their stack & heap. Here's a thought experiment. If one doesn't care about the speed or time it takes for a computer to do the crunching, what is the bare minimum amount of memory one can have, assuming one has a very large disk? Is it possible to do away with memory, and just have a disk? Disk-caching is obviously not required. If we set up swap space on the disk, program stack and heap also don't require memory. Is there anything that does require memory to be present?
It is conceptually possible. RAM is just a caching level. There are many caching levels in a modern computer (the CPU's L1, L2, L3, ... caches, of course RAM itself, and the swap area, which is a logical section of the disk used as RAM). If you remove or add one, the machine will still work. For example, an Ubuntu live CD may not use the HDD as a caching level. However, I don't think there is any OS that can support the complete absence of the RAM level.
I recently read a discussion about ARIMA models where someone said (referring to **d** as in ARIMA (p, d, q)): > > Its true that d=1 takes out deterministic trends when they are present > (they would appear only in the drift term.) But it does more than > that. > > > I know that's not much context, but I seem to remember reading something similar in regards to detrending via differencing. Two questions: 1. Does differencing (not just in an ARIMA context) do something more to your data than just detrend it? If so, what else does it do? (Add or remove?) 2. There are other detrending methods, such as fitting a curve (loess, linear regression) and using the residuals as detrended data. Would these other methods not do the "more than that" that differencing does? (Hence, might they be preferable?)
Differencing isn't actually the preferred way of removing a trend; detrending is. Detrending involves estimating the trend and calculating the deviation from the estimated trend in any particular period. The main use of differencing is to remove the problem of unit roots. A unit root arises, for example, when $\rho=1$ in the simple AR(1) model $y\_{t} = \rho y\_{t-1} + \nu\_t$. In this case, differencing yields a stationary white noise process $\nu\_t$ that is appropriate for analysis. Differencing a process without a unit root, but with a trend, can actually produce bad results: the differenced error term is over-differenced, picking up artificial autocorrelation (it behaves like a non-invertible moving-average process), which is awkward to model. Similarly, detrending a process without a trend, but with a unit root, can fail to eliminate the problem of non-stationarity (that is, it doesn't fix the unit root problem).
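Here is a small NumPy simulation (my own sketch, purely illustrative) of both situations: differencing a random walk recovers white noise, while differencing a trend-stationary series induces the artificial negative lag-1 autocorrelation mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
eps = rng.normal(size=n)

# Unit-root (random walk) process: differencing recovers white noise
rw = np.cumsum(eps)
d_rw = np.diff(rw)                 # equals eps[1:], i.e. white noise

# Trend-stationary process: differencing over-differences the noise
trend = 0.05 * np.arange(n) + eps
d_trend = np.diff(trend)           # = 0.05 + eps[t] - eps[t-1]

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[1:], x[:-1]) / np.dot(x, x)

print(lag1_autocorr(d_rw))      # close to 0
print(lag1_autocorr(d_trend))   # close to -0.5: induced MA(1)-style autocorrelation
```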
I understand what it is, but I don't see how it is any use for algorithms or anything. Maybe I am missing something. I need someone to give me an example of how it can be used so I can understand it better.
The obvious application of the lambda calculus is any functional programming language (e.g., Lisp, ML, Haskell), and any language that supports anonymous functions. As for combinator calculus, does there have to be a "real-world application"? Turing machines, for example, are hardly ever used "in the real world" but they form the basis of the theory of computation. One useful feature of combinator calculi is that they're simpler systems than, e.g., Turing machines. If you want to prove that some other system is Turing-complete, it might be easier to show how it can simulate combinators than to show it can simulate a Turing machine.
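As a small illustration of how this maps onto anonymous functions in an ordinary programming language, here is a sketch (my own, purely illustrative) of the S and K combinators written as curried Python lambdas, showing that `S K K` behaves as the identity combinator:

```python
# S and K combinators as curried Python lambdas
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x

# I = S K K: S(K)(K)(x) reduces to K(x)(K(x)) = x
I = S(K)(K)

print(I(42))        # 42
print(I("hello"))   # hello
```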
Is the set of all countably infinite strings over a finite alphabet that contains more than one letter countably infinite?
This is usually called a "blind" operation. This is common in log-structured merge trees, where a blind delete is often written as a "tombstone" that will take precedence over any previous insert or update of the given value.
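To make the idea concrete, here is a toy Python sketch of tombstone semantics in an LSM-style store (purely illustrative; this is not how any particular engine implements it):

```python
# A toy sketch of a "blind" delete recorded as a tombstone.
TOMBSTONE = object()

memtable = {}                  # newest (mutable) level
sstable = {"k1": "v1"}         # older, immutable level

def blind_delete(key):
    # No read is performed: we just record a tombstone in the newest level.
    memtable[key] = TOMBSTONE

def get(key):
    # Newer levels shadow older ones; a tombstone hides any older value.
    for level in (memtable, sstable):
        if key in level:
            return None if level[key] is TOMBSTONE else level[key]
    return None

blind_delete("k1")
print(get("k1"))   # None, even though the older level still holds "v1"
```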
I have a dataset of water temperature measurements taken from a large waterbody at irregular intervals over a period of decades. (Galveston Bay, TX if you’re interested) Here’s the head of the data: ``` STATION_ID DATE TIME LATITUDE LONGITUDE YEAR MONTH DAY SEASON MEASUREMENT 1 13296 6/20/91 11:04 29.50889 -94.75806 1991 6 20 Summer 28.0 2 13296 3/17/92 9:30 29.50889 -94.75806 1992 3 17 Spring 20.1 3 13296 9/23/91 11:24 29.50889 -94.75806 1991 9 23 Fall 26.0 4 13296 9/23/91 11:24 29.50889 -94.75806 1991 9 23 Fall 26.0 5 13296 6/20/91 11:04 29.50889 -94.75806 1991 6 20 Summer 28.0 6 13296 12/17/91 10:15 29.50889 -94.75806 1991 12 17 Winter 13.0 ``` (MEASUREMENT is the temperature measurement of interest.) The full set is available here: <https://github.com/jscarlton/galvBayData/blob/master/gbtemp.csv> I would like to remove the effects of seasonal variation to observe the trend (if any) in the temperature over time. Is a time series decomposition the best way to do this? How do I handle the fact that the measurements were not taken at a regular interval? I'm hoping there is an R package for this type of analysis, though Python or Stata would be fine, too.  (Note: for this analysis, I’m choosing to ignore the spatial variability in the measurements. Ideally, I’d account for that as well, but I think that doing so would be hopelessly complex.)
Rather than try to decompose the time series explicitly, I would instead suggest that you model the data spatio-temporally because, as you'll see below, the long-term trend likely varies spatially, and the seasonal pattern varies both with the long-term trend and spatially. I have found that generalised additive models (GAMs) are a good model for fitting irregular time series such as you describe. Below I illustrate a quick model I prepared for the full data of the following form \begin{align} \begin{split} \mathrm{E}(y\_i) & = \alpha + f\_1(\text{ToD}\_i) + f\_2(\text{DoY}\_i) + f\_3(\text{Year}\_i) + f\_4(\text{x}\_i, \text{y}\_i) + \\ & \quad f\_5(\text{DoY}\_i, \text{Year}\_i) + f\_6(\text{x}\_i, \text{y}\_i, \text{ToD}\_i) + \\ & \quad f\_7(\text{x}\_i, \text{y}\_i, \text{DoY}\_i) + f\_8(\text{x}\_i, \text{y}\_i, \text{Year}\_i) \end{split} \end{align} where

* $\alpha$ is the model intercept,
* $f\_1(\text{ToD}\_i)$ is a smooth function of time of day,
* $f\_2(\text{DoY}\_i)$ is a smooth function of day of year,
* $f\_3(\text{Year}\_i)$ is a smooth function of year,
* $f\_4(\text{x}\_i, \text{y}\_i)$ is a 2D smooth of longitude and latitude,
* $f\_5(\text{DoY}\_i, \text{Year}\_i)$ is a tensor product smooth of day of year and year,
* $f\_6(\text{x}\_i, \text{y}\_i, \text{ToD}\_i)$ is a tensor product smooth of location & time of day,
* $f\_7(\text{x}\_i, \text{y}\_i, \text{DoY}\_i)$ is a tensor product smooth of location & day of year, and
* $f\_8(\text{x}\_i, \text{y}\_i, \text{Year}\_i)$ is a tensor product smooth of location & year

Effectively, the first four smooths are the main effects of

1. time of day,
2. season,
3. long-term trend,
4. spatial variation

whilst the remaining four tensor product smooths model smooth interactions between the stated covariates, which model

5. how the seasonal pattern of temperature varies over time,
6. how the time of day effect varies spatially,
7. how the seasonal effect varies spatially, and
8. how the long-term trend varies spatially

The data are loaded into R and massaged a bit with the following code

```
library('mgcv')
library('ggplot2')
library('viridis')
theme_set(theme_bw())
library('gganimate')

galveston <- read.csv('gbtemp.csv')
galveston <- transform(galveston,
                       datetime = as.POSIXct(paste(DATE, TIME),
                                             format = '%m/%d/%y %H:%M', tz = "CDT"))
galveston <- transform(galveston,
                       STATION_ID = factor(STATION_ID),
                       DoY = as.numeric(format(datetime, format = '%j')),
                       ToD = as.numeric(format(datetime, format = '%H')) +
                           (as.numeric(format(datetime, format = '%M')) / 60))
```

The model itself is fitted using the `bam()` function, which is designed for fitting GAMs to larger data sets such as this. You can use `gam()` for this model also, but it will take somewhat longer to fit.

```
knots <- list(DoY = c(0.5, 366.5))
M <- list(c(1, 0.5), NA)
m <- bam(MEASUREMENT ~
             s(ToD, k = 10) +
             s(DoY, k = 30, bs = 'cc') +
             s(YEAR, k = 30) +
             s(LONGITUDE, LATITUDE, k = 100, bs = 'ds', m = c(1, 0.5)) +
             ti(DoY, YEAR, bs = c('cc', 'tp'), k = c(15, 15)) +
             ti(LONGITUDE, LATITUDE, ToD, d = c(2,1), bs = c('ds','tp'),
                m = M, k = c(20, 10)) +
             ti(LONGITUDE, LATITUDE, DoY, d = c(2,1), bs = c('ds','cc'),
                m = M, k = c(25, 15)) +
             ti(LONGITUDE, LATITUDE, YEAR, d = c(2,1), bs = c('ds','tp'),
                m = M, k = c(25, 15)),
         data = galveston, method = 'fREML', knots = knots,
         nthreads = 4, discrete = TRUE)
```

The `s()` terms are the main effects, whilst the `ti()` terms are tensor product *interaction* smooths where the main effects of the named covariates have been removed from the basis.
These `ti()` smooths are a way to include interactions of the stated variables in a numerically stable way. The `knots` object is just setting the endpoints of the cyclic smooth I used for the day of year effect — we want 23:59 on Dec 31st to join up smoothly with 00:01 Jan 1st. This accounts to some extent for leap years. The model summary indicates all these effects are significant:

```
> summary(m)

Family: gaussian 
Link function: identity 

Formula:
MEASUREMENT ~ s(ToD, k = 10) + s(DoY, k = 12, bs = "cc") + s(YEAR, k = 30) +
    s(LONGITUDE, LATITUDE, k = 100, bs = "ds", m = c(1, 0.5)) +
    ti(DoY, YEAR, bs = c("cc", "tp"), k = c(12, 15)) +
    ti(LONGITUDE, LATITUDE, ToD, d = c(2, 1), bs = c("ds", "tp"),
       m = list(c(1, 0.5), NA), k = c(20, 10)) +
    ti(LONGITUDE, LATITUDE, DoY, d = c(2, 1), bs = c("ds", "cc"),
       m = list(c(1, 0.5), NA), k = c(25, 12)) +
    ti(LONGITUDE, LATITUDE, YEAR, d = c(2, 1), bs = c("ds", "tp"),
       m = list(c(1, 0.5), NA), k = c(25, 15))

Parametric coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 21.75561    0.07508   289.8   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
                                edf  Ref.df        F  p-value    
s(ToD)                        3.036   3.696    5.956 0.000189 ***
s(DoY)                        9.580  10.000 3520.098  < 2e-16 ***
s(YEAR)                      27.979  28.736   59.282  < 2e-16 ***
s(LONGITUDE,LATITUDE)        54.555  99.000    4.765  < 2e-16 ***
ti(DoY,YEAR)                131.317 140.000   34.592  < 2e-16 ***
ti(ToD,LONGITUDE,LATITUDE)   42.805 171.000    0.880  < 2e-16 ***
ti(DoY,LONGITUDE,LATITUDE)   83.277 240.000    1.225  < 2e-16 ***
ti(YEAR,LONGITUDE,LATITUDE)  84.862 329.000    1.101  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) =   0.94   Deviance explained = 94.2%
fREML =  29807  Scale est. = 2.6318    n = 15276
```

A more careful analysis would want to check if we need all these interactions; some of the spatial `ti()` terms explain only small amounts of variation in the data, as indicated by the $F$ statistic; there's a lot of data here so even small effect sizes may be *statistically* significant but uninteresting. As a quick check, however, removing the three spatial `ti()` smooths (`m.sub`) results in a significantly poorer fit as assessed by AIC:

```
> AIC(m, m.sub)
            df      AIC
m     447.5680 58583.81
m.sub 239.7336 59197.05
```

We can plot the partial effects of the first five smooths using the `plot()` method — the 3D tensor product smooths can't be plotted easily and are not plotted by default.

```
plot(m, pages = 1, scheme = 2, shade = TRUE, scale = 0)
```

[![enter image description here](https://i.stack.imgur.com/KaIkR.png)](https://i.stack.imgur.com/KaIkR.png)

The `scale = 0` argument there puts all the plots on their own scale; to compare the magnitudes of the effects, we can turn this off:

```
plot(m, pages = 1, scheme = 2, shade = TRUE)
```

[![enter image description here](https://i.stack.imgur.com/SQOae.png)](https://i.stack.imgur.com/SQOae.png)

Now we can see that the seasonal effect dominates. The long-term trend (on average) is shown in the upper right plot. To really look at the long-term trend, however, you need to pick a station and then predict from the model for that station, fixing time of day and day of year to some representative values (midday, for a day of the year in summer, say). The early year or two of the series has some low temperature values relative to the rest of the records, which is likely being picked up in all the smooths involving `YEAR`. These data should be looked at more closely.
This isn't really the place to get into that, but here are a couple of visualisations of the model fits. First I look at the spatial pattern of temperature and how it varies over the years of the series. To do that I predict from the model for a 100x100 grid over the spatial domain, at midday on day 180 of each year: ``` pdata <- with(galveston, expand.grid(ToD = 12, DoY = 180, YEAR = seq(min(YEAR), max(YEAR), by = 1), LONGITUDE = seq(min(LONGITUDE), max(LONGITUDE), length = 100), LATITUDE = seq(min(LATITUDE), max(LATITUDE), length = 100))) fit <- predict(m, pdata) ``` then I set to missing, `NA`, the predicted values `fit` for all data points that lie some distance from the observations (proportional; `dist`) ``` ind <- exclude.too.far(pdata$LONGITUDE, pdata$LATITUDE, galveston$LONGITUDE, galveston$LATITUDE, dist = 0.1) fit[ind] <- NA ``` and join the predictions to the prediction data ``` pred <- cbind(pdata, Fitted = fit) ``` Setting predicted values to `NA` like this stops us extrapolating beyond the support of the data. Using **ggplot2** ``` ggplot(pred, aes(x = LONGITUDE, y = LATITUDE)) + geom_raster(aes(fill = Fitted)) + facet_wrap(~ YEAR, ncol = 12) + scale_fill_viridis(name = expression(degree*C), option = 'plasma', na.value = 'transparent') + coord_quickmap() + theme(legend.position = 'top', legend.key.width = unit(2, 'cm')) ``` we obtain the following [![enter image description here](https://i.stack.imgur.com/q6fCV.png)](https://i.stack.imgur.com/q6fCV.png) We can see the year-to-year variation in temperatures in a bit more detail if we animate rather than facet the plot ``` p <- ggplot(pred, aes(x = LONGITUDE, y = LATITUDE, frame = YEAR)) + geom_raster(aes(fill = Fitted)) + scale_fill_viridis(name = expression(degree*C), option = 'plasma', na.value = 'transparent') + coord_quickmap() + theme(legend.position = 'top', legend.key.width = unit(2, 'cm'))+ labs(x = 'Longitude', y = 'Latitude') gganimate(p, 'galveston.gif', interval = .2, ani.width = 500, ani.height = 800) ``` [![enter image description here](https://i.stack.imgur.com/9xqUi.gif)](https://i.stack.imgur.com/9xqUi.gif) To look at the long-term trends in more detail, we can predict for particular stations. 
For example, for `STATION_ID` 13364 and predicting for days in the four quarters, we might use the following to prepare values of the covariates we want to predict at (midday, on day of year 1, 90, 180, and 270, at the selected station, and evaluating the long-term trend at 500 equally spaced values)

```
pdata <- with(galveston,
              expand.grid(ToD = 12,
                          DoY = c(1, 90, 180, 270),
                          YEAR = seq(min(YEAR), max(YEAR), length = 500),
                          LONGITUDE = -94.8751, LATITUDE = 29.50866))
```

Then we predict and ask for standard errors, to form an approximate pointwise 95% confidence interval

```
fit <- data.frame(predict(m, newdata = pdata, se.fit = TRUE))
fit <- transform(fit, upper = fit + (2 * se.fit), lower = fit - (2 * se.fit))
pred <- cbind(pdata, fit)
```

which we plot

```
ggplot(pred, aes(x = YEAR, y = fit, group = factor(DoY))) +
    geom_ribbon(aes(ymin = lower, ymax = upper), fill = 'grey', alpha = 0.5) +
    geom_line() +
    facet_wrap(~ DoY, scales = 'free_y') +
    labs(x = NULL, y = expression(Temperature ~ (degree * C)))
```

producing

[![enter image description here](https://i.stack.imgur.com/r0pB9.png)](https://i.stack.imgur.com/r0pB9.png)

Obviously, there's a lot more to modelling these data than what I show here, and we'd want to check for residual autocorrelation and overfitting of the splines, but approaching the problem as one of modelling the features of the data allows for a more detailed examination of the trends. You could of course just model each `STATION_ID` separately, but that would throw away data, and many stations have few observations. Here the model borrows from all the station information to fill in the gaps and assist in estimating the trends of interest.

### Some notes on `bam()`

The `bam()` model is using all of **mgcv**'s tricks to estimate the model quickly: multiple threads (`nthreads = 4`), fast REML smoothness selection (`method = 'fREML'`), and discretization of covariates. With these options turned on the model fits in less than a minute on my 2013-era dual 4-core Xeon workstation with 64 GB of RAM.
I usually hear about "ordinary least squares". Is that the most widely used algorithm for linear regression? Are there reasons to use a different one?
To answer the letter of the question, "ordinary least squares" is not an algorithm; rather it is a type of problem in computational linear algebra, of which linear regression is one example. Usually one has data $\{(x\_1,y\_1),\dots,(x\_m,y\_m)\}$ and a tentative function ("model") to fit the data against, of the form $f(x)=c\_1 f\_1(x)+\dots+c\_n f\_n(x)$. The $f\_j(x)$ are called "basis functions" and can be anything from monomials $x^j$ to trigonometric functions (e.g. $\sin(jx)$, $\cos(jx)$) and exponential functions ($\exp(-jx)$). The term "linear" in "linear regression" here does not refer to the basis functions, but to the coefficients $c\_j$, in that taking the partial derivative of the model with respect to any of the $c\_j$ gives you the factor multiplying $c\_j$; that is, $f\_j(x)$. One now has an $m\times n$ rectangular matrix $\mathbf A$ ("design matrix") that (usually) has more rows than columns, and each entry is of the form $f\_j(x\_i)$, $i$ being the row index and $j$ being the column index. OLS is now the task of finding the vector $\mathbf c=(c\_1\,\dots\,c\_n)^\top$ that minimizes the quantity $\sqrt{\sum\limits\_{j=1}^{m}\left(y\_j-f(x\_j)\right)^2}$ (in matrix notation, $\|\mathbf{A}\mathbf{c}-\mathbf{y}\|\_2$ ; here, $\mathbf{y}=(y\_1\,\dots\,y\_m)^\top$ is usually called the "response vector"). There are at least three methods used in practice for computing least-squares solutions: the normal equations, QR decomposition, and singular value decomposition. In brief, they are ways to transform the matrix $\mathbf{A}$ into a product of matrices that are easily manipulated to solve for the vector $\mathbf{c}$. George already showed the method of normal equations in his answer; one just solves the $n\times n$ set of linear equations $\mathbf{A}^\top\mathbf{A}\mathbf{c}=\mathbf{A}^\top\mathbf{y}$ for $\mathbf{c}$. Due to the fact that the matrix $\mathbf{A}^\top\mathbf{A}$ is symmetric positive (semi)definite, the usual method used for this is Cholesky decomposition, which factors $\mathbf{A}^\top\mathbf{A}$ into the form $\mathbf{G}\mathbf{G}^\top$, with $\mathbf{G}$ a lower triangular matrix. The problem with this approach, despite the advantage of being able to compress the $m\times n$ design matrix into a (usually) much smaller $n\times n$ matrix, is that this operation is prone to loss of significant figures (this has something to do with the "condition number" of the design matrix). A slightly better way is QR decomposition, which directly works with the design matrix. It factors $\mathbf{A}$ as $\mathbf{A}=\mathbf{Q}\mathbf{R}$, where $\mathbf{Q}$ is an orthogonal matrix (multiplying such a matrix with its transpose gives an identity matrix) and $\mathbf{R}$ is upper triangular. $\mathbf{c}$ is subsequently computed as $\mathbf{R}^{-1}\mathbf{Q}^\top\mathbf{y}$. For reasons I won't get into (just see any decent numerical linear algebra text, like [this one](http://books.google.com/books?id=epilvM5MMxwC&pg=PA385)), this has better numerical properties than the method of normal equations. One variation in using the QR decomposition is the [method of seminormal equations](http://dx.doi.org/10.1016/0024-3795%2887%2990101-7). Briefly, if one has the decomposition $\mathbf{A}=\mathbf{Q}\mathbf{R}$, the linear system to be solved takes the form $$\mathbf{R}^\top\mathbf{R}\mathbf{c}=\mathbf{A}^\top\mathbf{y}$$ Effectively, one is using the QR decomposition to form the Cholesky triangle of $\mathbf{A}^\top\mathbf{A}$ in this approach. 
This is useful for the case where $\mathbf{A}$ is sparse, and the explicit storage and/or formation of $\mathbf{Q}$ (or a factored version of it) is unwanted or impractical. Finally, the most expensive, yet safest, way of solving OLS is the singular value decomposition (SVD). This time, $\mathbf{A}$ is factored as $\mathbf{A}=\mathbf{U}\mathbf \Sigma\mathbf{V}^\top$, where $\mathbf{U}$ and $\mathbf{V}$ are both orthogonal, and $\mathbf{\Sigma}$ is a diagonal matrix, whose diagonal entries are termed "singular values". The power of this decomposition lies in the diagnostic ability granted to you by the singular values, in that if one sees one or more tiny singular values, then it is likely that you have chosen a not entirely independent basis set, thus necessitating a reformulation of your model. (The "condition number" mentioned earlier is in fact related to the ratio of the largest singular value to the smallest one; the ratio of course becomes huge (and the matrix is thus ill-conditioned) if the smallest singular value is "tiny".) This is merely a sketch of these three algorithms; any good book on computational statistics and numerical linear algebra should be able to give you more relevant details.
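For concreteness, here is a small NumPy sketch (my own illustration, not from the sources above) showing that the three approaches produce the same coefficient vector on a well-conditioned problem:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 100, 3
A = rng.normal(size=(m, n))          # design matrix (basis functions evaluated at the x_i)
y = rng.normal(size=m)               # response vector

# 1. Normal equations: solve (A^T A) c = A^T y
c_normal = np.linalg.solve(A.T @ A, A.T @ y)

# 2. QR decomposition: solve R c = Q^T y
Q, R = np.linalg.qr(A)
c_qr = np.linalg.solve(R, Q.T @ y)

# 3. SVD: c = V diag(1/sigma) U^T y
U, s, Vt = np.linalg.svd(A, full_matrices=False)
c_svd = Vt.T @ ((U.T @ y) / s)

print(np.allclose(c_normal, c_qr), np.allclose(c_qr, c_svd))  # True True
```

On an ill-conditioned design matrix the normal-equations route is the first to lose accuracy, which is exactly the point made above about the condition number.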
What is the space complexity of function $f(x) = \sum\_{i=1}^x g(i)$ where g(n) is O(n)? Is it O(n) because the maximum stack size is n, or is it O($n^2$) because there are $n(n+1)/2$ memory references?
There is no such algorithm. Here's an information-theoretic proof that it can't be done, inspired by gnasher79's answer. Let's focus on the special case where $R=3$. Suppose there is a constant $c$ and an algorithm that examines at most $c$ elements of the array, regardless of $n$. Then this algorithm can't possibly solve your problem. In particular, consider all arrays of the form $[0,\dots,0,1,2,\dots,2]$, i.e., containing some number of 0's, followed by a single 1, followed by 2's. Let's imagine what the algorithm does when you ask it to find a $1$. There are $n-1$ different possible positions for the $1$. However, after doing $c$ probes, the algorithm learns only $c \lg 3$ bits of information about the position of the $1$. Thus, if $n-1 > 2^{c \lg 3}$, the algorithm can't possibly work correctly for all such arrays: there exists some pair of arrays where the algorithm outputs the same thing in both cases, but the correct answers differ, so the algorithm must be wrong for at least one of those two cases. In other words, if we take $n$ to be sufficiently large (namely, $n> 3^c + 1$), the algorithm will be incorrect. This implies there is no algorithm for the case $R=3$ whose running time is upper-bounded by a constant. It follows that there is no algorithm whose running time is $O(\lg R)$ (and whose running time doesn't depend on $n$): if there were, plugging in $R=3$ would give us an $O(1)$-time algorithm for the special case where $R=3$... but I just proved that no such algorithm can exist. --- However, the special case $R=2$ can be solved in $O(1)$ time: simply look at the first and last element of the array, and pick whichever one matches the value you're looking for.
I am trying to figure out how to get class probabilities when running a classification using glmnet. I have built the model and done predictions. But all I have is a huge matrix which I don't really know what to do with. Page 14 of <http://www.jstatsoft.org/v28/i05/paper> talks about something similar, but the `extractProb` function wants my `Y` values. It shouldn't need them to do what I want, and there is no reason to assume I have them. I mean, those are what I want to predict! So I get the feeling that is not what I want to do. If I try without giving any `Y`-value I get: ``` extractProb(netFit$finalModel, posTestSet[,-ncol(posTestSet)]) Error in x$method : $ operator is invalid for atomic vectors ``` How is this supposed to work?
How you deal with unbalanced data classes depends on the particular classifier you work with. What classifier are you using? For such cases, the one-vs-all strategy has been reported to perform better than a naive approach, since each classifier works with a more balanced data set. But there are a couple of strategies which are classifier agnostic, like [stratified sampling](http://cdn.intechopen.com/pdfs/10691.pdf) and other [sampling methods](http://wwwmath.uni-muenster.de/u/lammers/EDU/ws07/Softcomputing/Literatur/4-DMI5467.pdf). P.S. You said you are using kNN. One standard approach there is to weight the neighbours according to their distance to the sample. There are a few approaches; I am just familiar with this [one](http://rp-www.cs.usyd.edu.au/~weiliu/Webpage/kNN_pakdd11.pdf). The paper is quite nice. As for linear regression methods, you may try to regularize your weights to avoid potential overfitting. Again, you could try something like one-vs-all if it applies to the algorithm you are using.
How are PCA and classical MDS different? How about MDS versus nonmetric MDS? Is there a time when you would prefer one over the other? How do the interpretations differ?
Uhm... **quite** different. In PCA, you are given the multivariate continuous data (a multivariate vector for each subject), and you are trying to figure out if you don't need that many dimensions to conceptualize them. In (metric) MDS, you are given the matrix of distances between the objects, and you are trying to figure out what the locations of these objects in space are (and whether you need a 1D, 2D, 3D, etc. space). In non-metric MDS, you only know that objects 1 and 2 are more distant than objects 2 and 3, so you try to quantify that, on top of finding the dimensions and locations. With a notable stretch of imagination, you can say that a common goal of PCA and MDS is to visualize objects in 2D or 3D. But given how different the inputs are, these methods won't be discussed as even distantly related in any multivariate textbook. I would guess that you can convert the data usable for PCA into data usable for MDS (say, by computing Mahalanobis distances between objects, using the sample covariance matrix), but that would immediately result in a loss of information: MDS is only defined up to location and rotation, and the latter two can be done more informatively with PCA. If I were to briefly show someone the results of non-metric MDS and wanted to give them a rough idea of what it does without going into detail, I could say: > > Given the measures of similarity or dissimilarity that we have, we are trying to map our objects/subjects in such a way that the 'cities' they make up have distances between them that are as close to these similarity measures as we can make them. We could only map them perfectly in $n$-dimensional space, though, so I am representing the two most informative dimensions here -- kinda like what you would do in PCA if you showed a picture with the two leading principal components. > > >
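One well-known special case, stated here only as a sanity check of the relationship described above: if you compute plain Euclidean distances from the raw data and feed them to classical (Torgerson) metric MDS, you recover the PCA configuration up to reflections of the axes. A minimal NumPy sketch (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))
Xc = X - X.mean(axis=0)

# PCA scores via SVD of the centred data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_scores = U * s

# Classical (Torgerson) MDS from pairwise Euclidean distances
D2 = np.square(np.linalg.norm(Xc[:, None, :] - Xc[None, :, :], axis=-1))
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                     # double-centred Gram matrix
eigval, eigvec = np.linalg.eigh(B)
idx = np.argsort(eigval)[::-1][:4]        # keep the 4 largest eigenvalues
mds_scores = eigvec[:, idx] * np.sqrt(eigval[idx])

# The two configurations agree up to sign flips of individual axes
print(np.allclose(np.abs(pca_scores), np.abs(mds_scores)))   # True
```

This also illustrates why the two methods are usually treated separately: MDS only ever sees the distance matrix, so any other choice of dissimilarity (Mahalanobis, rank-based, etc.) breaks this equivalence.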
I have a model which has many categorical variables. For each categorical variable there are many levels, like 50~. But not all of them have significant counts. I got these counts using the function `value_counts()` in Python: ``` A 50 B 38 C 26 D 18 E 10 ... T 1 X 1 Z 1 ``` How can I change the levels with count (say) less than 5 to a new level "others"? ``` for x in data.class: if x.value_counts() <30: x = "others" ```
It is a linear transformation. For example, lines that were parallel before the transformation are still parallel. Scaling, rotation, reflection etcetera. With regard to neural networks, it is usually just the input matrix multiplied by the weight matrix.
I am going through this paper: <http://www.cs.ucr.edu/~eamonn/PID4481997_extend_Matrix%20Profile_I.pdf> On Page 4, it is claimed that the squared z-normalized Euclidean distance between two vectors of equal length, `Q` and `T[i]` (the latter of which is just the ith subsequence of a longer 1D array, `T`), can be calculated from: [![enter image description here](https://i.stack.imgur.com/tBHRom.png)](https://i.stack.imgur.com/tBHRom.png) Here, `m` is the length of `Q` (or `T[i]`), `mu_Q` is the mean of `Q`, `M_T[i]` is the mean for the ith subsequence of `T`, `sigma_Q` is the standard deviation of `Q`, `sigma_T[i]` is the standard deviation for the ith subsequence of `T`, and `Q.T[i]` is the dot product between `Q` and `T[i]`. I am attempting to derive this equation from first principles but can't seem to reconcile the final steps: [![enter image description here](https://i.stack.imgur.com/sz9dY.png)](https://i.stack.imgur.com/sz9dY.png) In this case, the summation loops through each element of either `T[i]` or `Q`. Also, recall that: [![enter image description here](https://i.stack.imgur.com/3m5ajt.png)](https://i.stack.imgur.com/3m5ajt.png) I've gotten as close as this but it's not quite right: [![enter image description here](https://i.stack.imgur.com/lbi5D.png)](https://i.stack.imgur.com/lbi5D.png) **A semi-related question is [here](https://math.stackexchange.com/questions/296292/pearson-correlation-and-metric-properties).**
Here's a straightforward way. Let $\mathbf u$ and $\mathbf v$ be $m$-vectors. Let $\mu\_1=\frac 1m\sum\_{k=1}^m u\_k$ and $\mu\_2=\frac 1m\sum\_{k=1}^m v\_k$ denote their means. Let $\sigma\_1^2=\frac 1m\sum\_{k=1}^m (u\_k-\mu\_1)^2$ and $\sigma\_2^2=\frac 1m\sum\_{k=1}^m (v\_k-\mu\_2)^2$ denote their standard deviations. This rewrites as $\sigma\_1^2=\frac 1m\left|\left| \mathbf u-\mu\_1\mathbf 1 \right| \right|^2$, hence $m=\left| \left| \frac{\mathbf u-\mu\_1\mathbf 1}{\sigma\_1} \right| \right|^2=\left| \left| \frac{\mathbf v-\mu\_2\mathbf 1}{\sigma\_2} \right| \right|^2$ The squared $z$-normalized Euclidean distance between $\mathbf u$ and $\mathbf v$ is $$\begin{align}\left|\left|\frac{\mathbf u-\mu\_1\mathbf 1}{\sigma\_1} -\frac{\mathbf v-\mu\_2\mathbf 1}{\sigma\_2}\right|\right|^2 &= \left| \left| \frac{\mathbf u-\mu\_1\mathbf 1}{\sigma\_1} \right| \right|^2+\left| \left| \frac{\mathbf v-\mu\_2\mathbf 1}{\sigma\_2} \right| \right|^2 -\frac{2}{\sigma\_1 \sigma\_2} \langle \mathbf u-\mu\_1\mathbf 1, \mathbf v-\mu\_2\mathbf 1\rangle \\ &= 2m-\frac{2}{\sigma\_1 \sigma\_2} \left(\langle \mathbf u,\mathbf v \rangle-\mu\_2 \sum\_{k=1}^mu\_k -\mu\_1 \sum\_{k=1}^mv\_k + \mu\_1 \mu\_2m \right)\\ &=2m-\frac{2}{\sigma\_1 \sigma\_2} \left(\langle \mathbf u,\mathbf v\rangle -\mu\_2m\mu\_1 - \mu\_1m\mu\_2+\mu\_1 \mu\_2m \right)\\ &=2m \left(1-\frac{1}{m\sigma\_1 \sigma\_2}\left(\langle \mathbf u,\mathbf v\rangle -m\mu\_1\mu\_2\right) \right) \end{align} $$
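A quick numerical check of the final identity (my own sketch; note that the population standard deviation, i.e. the $1/m$ definition, must be used to match the derivation):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 16
u, v = rng.normal(size=m), rng.normal(size=m)

def znorm(x):
    # np.std uses ddof=0 by default, matching the 1/m definition above
    return (x - x.mean()) / x.std()

lhs = np.sum((znorm(u) - znorm(v)) ** 2)
rhs = 2 * m * (1 - (np.dot(u, v) - m * u.mean() * v.mean())
                   / (m * u.std() * v.std()))

print(np.isclose(lhs, rhs))   # True
```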
What are the famous "hard" problems that were later shown to be in $\mathcal{P}$? In other words, I want a list of problems that were difficult to prove to be in the class of "easy" problems — things like matching or linear programming. Any list or references would be appreciated.
Testing perfect graphs. Famous people (Lovasz, Knuth, ...) conjectured in the 1980s that there is a polynomial-time recognition algorithm for perfect graphs. Such an algorithm was found almost 20 years later, again by famous people (Cornuéjols and others, FOCS 2003).
New to machine learning and have been reading about ensemble modelling. A statement that keeps reappearing is: "Ensemble models work better when we ensemble models of low correlation." I've seen examples, but I can't find an explanation for why this is the case. Could anyone shed some light?
It relates to a very popular word these days: ***Conspiracy!*** Think about how poor classifiers might hurt an ensemble that "votes" on the class to predict. They could add noise, sure, but as long as there are enough good classifiers to cast their votes in the right direction, the addition of the poor classifiers is unlikely to do much harm. Unless, of course, the poor classifiers "team up." There's already too much personification in this answer, so the point is that highly correlated poor classifiers could overturn the collective answer of the better ones. Also, positively correlated data leads to less precise estimates in general, because there is less information than your sample size would suggest. For arithmetic means, you can see this in the [variance formula for sums of correlated random variables](https://en.wikipedia.org/wiki/Variance#Sum_of_correlated_variables).
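A small simulation (my own sketch, with a deliberately crude way of inducing correlation: each voter copies a shared prediction with some probability) makes the effect visible. Eleven voters that are each 70% accurate give a ~92% accurate majority vote when independent, but barely beat a single voter when highly correlated:

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_models, p_correct = 20000, 11, 0.7

def ensemble_accuracy(rho):
    # With probability rho a model copies a shared "group" prediction,
    # which induces positive correlation between the models' errors.
    shared = rng.random(n_trials) < p_correct
    own = rng.random((n_trials, n_models)) < p_correct
    copy = rng.random((n_trials, n_models)) < rho
    correct = np.where(copy, shared[:, None], own)
    majority_correct = correct.sum(axis=1) > n_models / 2
    return majority_correct.mean()

print(ensemble_accuracy(0.0))   # ~0.92: independent voters help a lot
print(ensemble_accuracy(0.8))   # close to the single-model 0.7: correlated voters add little
```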
In my PhD thesis I am working on spatial modeling of different chemical parameters in groundwater, and for the spatial modeling I am also using a multiple regression approach. I have a question about multiple regression analysis. (Or is it better to use polynomial regression?) The equation for spatial regression modeling is: $Y = α + β\_1x\_1 + β\_2x\_2 +.... + β\_ix\_i + ε$ For my dependent variable, I have concentrations of calcium in groundwater, which were measured at different sampling points across the entire research area. For the independent variables, I chose the spatial data that influence the distribution of calcium in groundwater. I have lithology, vegetation, slope, climatic conditions (temperature, precipitation), depth of soil, ... The problem is that lithology and vegetation are categorical data (lithology = 3 categories from `1` to `3`, where `1` means clastic rocks, `2` = carbonate rocks and `3` = metamorphic and igneous rocks; and vegetation = 4 categories (`1` = bare rocks, `2` = agriculture land, `3` = grassland, `4` = forests); all other variables are numerical and continuous. Do you have any idea how to handle categorical data in multiple regression analysis? Might it be better to use some other method? Best regards and thank you very much for your help.
It's a little unclear what your objectives are, so other methods might be preferable depending on those. Polynomial regression may suit continuous variables, but wouldn't make sense for categorical ones. You can add higher-order terms for the continuous variables alongside categorical predictors though. Nominal predictors can be added to a multiple regression model using dummy codes. In your case, you could enter lithology as two dummy variables: using clastic rocks as the reference group (for example; not necessarily the one you want to choose), you could create one binary variable indicating whether a case involves carbonate rocks (`1` if so, `0` if not), and another equivalent one for metamorphic/igneous rocks. The same process of dummy coding can work for any number of levels, and won't be too much harder to interpret for vegetation. Here's an example dataset: $$\begin{array}{c|cccccc}\rm Case&\rm Carbonate&\rm Metamorphic/Igneous&\rm Farm&\rm Grass&\rm Forest&...\\\hline\small\rm Clastic\ bare&0&0&0&0&0&...\\\small\rm Carbonate\ bare&1&0&0&0&0&...\\\small\rm Igneous\ bare&0&1&0&0&0&...\\\small\rm Clastic\ farm&0&0&1&0&0&...\\\small\rm Carbonate\ farm&1&0&1&0&0&...\\\small\rm Igneous\ forest&0&1&0&0&1&...\\...&...&...&...&...&...&... \end{array}$$See how that works? (I can elaborate if not.) Just enter these binary predictors like any other. The corresponding $\beta$s represent the differences between given groups and the reference group. E.g., $\beta\_{\rm Carbonate}$ represents the difference between carbonate and clastic in the above example.
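If you do the modelling in software, you usually don't need to build the dummy columns by hand. As a hedged sketch (my own, with a hypothetical tiny data frame, not your actual data), this is how the same coding can be produced with pandas:

```python
import pandas as pd

# Hypothetical data frame with the two categorical predictors from the question
df = pd.DataFrame({
    'lithology': ['clastic', 'carbonate', 'igneous', 'clastic'],
    'vegetation': ['bare', 'farm', 'forest', 'grass'],
    'calcium': [12.1, 48.3, 8.7, 15.2],
})

# drop_first=True drops one level per factor to serve as the reference group
# (pandas drops the first level in sorted order), mirroring the table above.
X = pd.get_dummies(df[['lithology', 'vegetation']], drop_first=True)
print(X)
```

R's `lm()` and most statistics packages do the equivalent coding automatically when a predictor is declared as a factor/categorical variable.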
A graph property is called **hereditary** if it closed with respect to deleting vertices (i.e., all induced subgraphs inherit the property). A graph property is called **additive** if it is closed with respect to taking disjoint unions. It is not hard to find properties that are hereditary, but not additive. Two simple examples: $\;\;\;$ (1) The graph is complete. $\;\;\;$ (2) The graph does not contain two vertex-disjoint cycles. In these cases it is obvious that the property is inherited by induced subgraphs, but taking two *disjoint* graphs that have the property, their union may not preserve it. Both of the above examples are polytime decidable properties (although for (2) it is somewhat less trivial). If we want harder properties, they could still be created by following the pattern of (2), but replacing the cycles with more complicated graph types. Then, however, we can easily run into the situation where the problem does not even remain in $NP$, under standard complexity assumptions, such as $NP\neq coNP$. It appears less trivial to find an example which stays within $NP$, but it is still hard. > > **Question:** Do you know a (preferably natural) $NP$-complete graph property that is hereditary, but not additive? > > >
I think the $k$-clique cover problem, which asks whether there exists a partition of the vertices into $k$ sets such that each set induces a clique, has the desired properties. Clearly, taking induced subgraphs can't make the minimum size of such a partition increase. On the other hand, when you take the disjoint union of two graphs, you have to take the union of the partitions into cliques of each one.
My organisation provides consultation to other firms, in part by making use of neural networks trained with extensive datasets that we have collected over the years. Whenever available, we would inquire for any similar data our clients may have so that we can tailor a new model based on a checkpoint of our own. This has worked well as the data we used for training is relatively generalised and often collected by organisations in general. However, recently one of our clients provided data that is highly predictive of the same response variable in our own models, but differs wildly in variable types and size. An example in order to simplify: I trained my own network to predict height based on weight using 100,000 records. Oftentimes my clients provide an additional 10,000 records of their employees height & weight which allows me to tailor the network to their needs. A new client, however, has provided me with 10,000 records of their employees' gender, country of origin, and daily dairy consumption. All of which, when trained in a separate network, prove highly predictive of height as well. How do I effectively combine these separate models into a single classifier or regressor?
This process is frequently called [fine tuning](https://flyyufelix.github.io/2016/10/03/fine-tuning-in-keras-part1.html) in the neural network literature. If you already have the trained network, you simply run additional training steps on the new data. You'll need to balance the weight of the prior training with that of the new observations to some degree, and that decision is parameterized by the number of free variables in the network, the learning rate of the new training steps, and the number of new training steps. However, your particular case is complicated by the presence of new variables. Maintaining the form of the prior network is difficult in this case. You may want to simply ensemble a newly trained network (on the new data and variables) with the old network to combine the predictions. The simplest ensemble (but maybe not the best) would be averaging the results of the two models.
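A minimal sketch of that averaging ensemble (my own illustration; it assumes both models expose a scikit-learn-style `predict()` method and that the two feature sets are available for the same rows):

```python
class _Stub:
    """Hypothetical stand-in for a trained model with a predict() method."""
    def __init__(self, value):
        self.value = value
    def predict(self, X):
        return [self.value for _ in X]

def ensemble_predict(model_old, model_new, X_old_features, X_new_features, w=0.5):
    # Each model sees only the features it was trained on; the final prediction
    # is a weighted average of the two outputs (the simplest possible ensemble).
    pred_old = model_old.predict(X_old_features)
    pred_new = model_new.predict(X_new_features)
    return [w * a + (1 - w) * b for a, b in zip(pred_old, pred_new)]

# Toy usage: old model predicts 180, new model predicts 170 for the same person
print(ensemble_predict(_Stub(180.0), _Stub(170.0), [[70.0]], [[1, 3, 250]]))  # [175.0]
```

The weight `w` can be tuned on held-out client data if any is available.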
Binary math is at the heart of most computing, in large part because of the ease with which two energy states can be achieved. I have always thought that having more states could improve computing power (e.g. using a [trit](http://en.wikipedia.org/wiki/Trit) instead of a bit), but there is a seeming lack of attention being paid to the problem. Some work has been done in the quantum computing area using qubits to achieve more than the normal two energy levels (see, for example, [the recent effort at UCSB](http://www.sciencemag.org/cgi/content/abstract/sci;325/5941/722?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=Matthew+Neeley&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT)). What advances have been made toward having [ternary](http://en.wikipedia.org/wiki/Ternary_computer) (or greater) computers, and what are the primary the implications of the extra states?
Certainly in the context of quantum computation, systems of dimension 3 or more have been looked at. These are known as qudits in general, and qutrits for 3-level systems. The motivation here has mostly been the available physical systems, rather than any expectation of a significant change in computational power. Indeed, in quantum systems it is very easy to see that there is no such advantage: evolution in quantum systems is described by unitary operations, which can be associated with the Lie group SU(D), where D is the total dimensionality of the system. Given entangling gates between subsystems, together with individual control over these local subsystems, it is known that it is always possible to approximate any such operator, independent of the dimensionality of the local systems. Obviously, if you can reach all unitary operations on D dimensions, then you can reach all of the ones on $d<D$ dimensions. So, you can use, for example, 2 qutrits to replace 3 qubits, or 5 qubits to replace 3 qutrits, with only constant overhead. With this in mind, the relationship between qutrits and qubits versus trits and bits is similar, and the level of interest on a theoretical level has been fairly similar. It is really in the context of physical implementations that qutrits attract interest, since quantum computing has not really settled on a dominant architecture yet, and there are plenty of systems with local dimensionality > 2. You may want to have a look into what are called continuous variable systems, which are a type of system considered in quantum computation with an infinite number of local dimensions (position is an example of a continuous variable quantity). Such systems do actually exist, and are fundamentally different from analog computers due to the quantization of energy levels.
A colored graph can be described as tuple $(G,c)$ where $G$ is a graph and $c : V(G) \rightarrow \mathbb{N}$ is the coloring. Two colored graphs $(G,c)$ and $(H,d)$ are said to be isomorphic if there exists an isomorphism $\pi : V(G) \rightarrow V(H)$ such that the coloring is obeyed, i.e. $c(v) = d(\pi(v))$ for all $v \in V(G)$. This notion captures the isomorphism of colored graphs in a very strict sense. Consider the case where you have two political maps of the same region but they use different color sets. If one asks if they are colored in the same fashion one would assume this to mean whether there exists a bijective mapping between the two color sets such that the colors of both maps coincide via this mapping. This notion can be formalized by describing colored graphs as tuple $(G,\sim)$ where $\sim$ is an equivalence relation on the vertex set of $G$. We can then say two such graphs $(G,\sim\_1)$ and $(H,\sim\_2)$ are isomorphic if there exists an isomorphism $\pi : V(G) \rightarrow V(H)$ such that for all pairs $v\_1,v\_2 \in V(G)$ it holds that $$v\_1 \sim\_1 v\_2 \text{ iff } \pi(v\_1) \sim\_2 \pi(v\_2)$$ My question is whether this concept has been studied previously w.r.t. finding canonical forms etc. and if so under what name it is known?
The problem you describe has definitely been considered (I remember discussing it in grad school, and at the time already it had been discussed long before then), though I can't point to any particular references in the literature. Possibly because it is linearly equivalent to uncolored graph isomorphism, as follows (this is true even for canonical forms). Call the problem you describe EQ-GI. GI is just the special case of EQ-GI where each graph has just one equivalence class consisting of all vertices. In the other direction, to reduce EQ-GI to GI, let $(G, \sim\_G)$ be a graph with equivalence relation with $n$ vertices, $m$ edges, and $c$ equivalence classes. Construct a graph $G'$ whose vertex set consists of the vertices of $G$, together with new vertices $v\_1, \dotsc, v\_c$, one for each equivalence class in $=\_G$, as well as $n+c+1$ new vertices $w\_0, \dotsc, w\_{n+c}$. Connect the $w\_i$'s in a path $w\_0 - w\_1 - w\_2 - \dotsb - w\_{n+c}$, connect each $v\_i$ to $w\_0$, and for every vertex in $G$, connect it to the corresponding equivalence class vertex $v\_i$. Then $G'$ has at most $n + 2c + n +1 \leq O(n)$ vertices and can be constructed in essentially the same time bound. (It also has at most $m + n + c + (n+c+1) \leq m + 4n + 1 \leq O(m+n)$ edges - which is $O(m)$ for connected graphs - but that's somewhat less relevant since most GI algorithms have running times that essentially only depend on $n$.) **Update**: Since there was some confusion in the comments, I'm adding here a sketch of the correctness of the above argument. Given $(G\_1, \sim\_1)$ and $(G\_2, \sim\_2)$, let $G\_1'$ and $G\_2'$ be the graphs constructed as above; let $v\_{i,1}$ denote the vertex $v\_i$ from above in $G\_1'$, and $v\_{i,2}$ the one in $G\_2'$, and similarly for $w\_{i,1}$ and $w\_{i,2}$. If there is an isomorphism $G\_1' \cong G\_2'$, it must send $w\_{i,1}$ to $w\_{i,2}$ for all $i$, since in each graph $w\_{n+c}$ is the unique vertex that is the endpoint of any path of length at least $n+c+1$. In particular, $w\_{0,1}$ maps to $w\_{0,2}$. Since the neighbors of $w\_0$ that aren't $w\_1$ are exactly the $v\_i$, the isomorphism must map the set $\{v\_{1,1},\dotsc,v\_{c,1}\}$ to the set $\{v\_{1,2},\dotsc,v\_{c,2}\}$ (and in particular both $\sim\_1$ and $\sim\_2$ must have the same number, $c$, of equivalence classes). Note that the isomorphism need not send $v\_{i,1}$ to $v\_{i,2}$ for all $i$, but is allowed to permute the indices of the $v$'s so long as the corresponding equivalence classes can be mapped to one another. Conversely, based on this description of how isomorphisms between $G\_1'$ and $G\_2'$ can look, it is easy to see that if $(G\_1, \sim\_1) \cong (G\_2, \sim\_2)$ then this gives an isomorphism $G\_1' \cong G\_2'$.
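For concreteness, here is a small networkx sketch of the gadget construction described above (my own illustration; the equivalence relation is encoded as a list of vertex classes, and the new gadget vertices are tagged with tuples on the assumption that the input graph does not already use such labels):

```python
import networkx as nx

def eq_gi_to_gi(G, classes):
    """Reduce (G, ~) to a plain graph G' as in the construction above.

    `classes` is a list of disjoint vertex sets covering V(G),
    one per equivalence class.
    """
    n, c = G.number_of_nodes(), len(classes)
    Gp = nx.Graph(G)                            # copy of G
    v = [('v', i) for i in range(c)]            # one new vertex per class
    w = [('w', i) for i in range(n + c + 1)]    # path gadget w_0 ... w_{n+c}
    Gp.add_nodes_from(v + w)
    Gp.add_edges_from(zip(w, w[1:]))            # path w_0 - w_1 - ... - w_{n+c}
    Gp.add_edges_from((('v', i), ('w', 0)) for i in range(c))
    for i, cls in enumerate(classes):           # attach each vertex to its class vertex
        Gp.add_edges_from((u, ('v', i)) for u in cls)
    return Gp

# Toy usage: a path on 3 vertices with two equivalence classes {0, 2} and {1}
G = nx.path_graph(3)
print(eq_gi_to_gi(G, [{0, 2}, {1}]).number_of_nodes())   # 3 + 2 + 6 = 11
```

Two inputs $(G\_1,\sim\_1)$ and $(G\_2,\sim\_2)$ are then isomorphic in the colored sense iff the two output graphs are isomorphic as plain graphs.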
Assume we have n observations $x\_i$ (i from 1 to n), each from the a normal distribution with mean 0 and some variance component: $X\_i \sim N(0, \sigma^2)$. The random variables $X\_i$s have some (let's assume known) correlation structure. I.e.: $Corr(X\_i, X\_j) = \rho\_{ij}$. How should we estimate $\sigma^2$? How would the correlated structure impact an estimator such as $\hat \sigma^2 = \frac{\sum(x\_i^2)}{n}$? Could we devise a better estimator (given the correlation structure is known)?
Let $R = (\rho\_{ij})$ be the correlation matrix so that the covariance matrix is $\Sigma = \sigma^2 R.$ Consider $\mathbf x = (x\_1,\ldots, x\_n)^\prime$ to be a *single* observation of the [$n$-variate Normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) with zero mean and $\Sigma$ covariance. Because the log likelihood of $N\ge 1$ independent such observations is, up to an additive constant (depending only on $N,$ $n,$ and $R$) given by $$\Lambda(\sigma) = \sum\_{i=1}^N (-n \log \sigma) - \frac{1}{2\sigma^2} \sum\_{i = 1}^N\mathbf{x}\_i^\prime\, R^{-1}\,\mathbf{x}\_i,$$ it has critical points where $\sigma\to 0,$ $\sigma\to\infty,$ and at any solutions of $$0 = \frac{\mathrm d}{\mathrm{d}\sigma}\Lambda(\sigma) = -\frac{nN}{\sigma} + \frac{1}{\sigma^3} \sum\_{i = 1}^N\mathbf{x}\_i^\prime\, R^{-1}\,\mathbf{x}\_i.$$ Unless the form $ \sum\_{i = 1}^N\mathbf{x}\_i^\prime\, R^{-1}\,\mathbf{x}\_i$ is zero, there is a unique global maximum at

> $$\hat\sigma^2 = \frac{1}{nN} \sum\_{i = 1}^N \mathbf{x}\_i^\prime\, R^{-1}\,\mathbf{x}\_i.$$

This is the Maximum Likelihood estimate. *It exists even when $N=1$* (which is the situation posited in the question). **Intuitively, this will be superior to any estimate that ignores the correlations assumed in $R.$** To check, we could compute the Fisher Information matrix for this estimate -- but I will leave that to you, in part because the result shouldn't be convincing for small values of $N$ (where the maximum likelihood asymptotic results might not apply).

**To illustrate what actually happens,** here are one thousand estimates of $\sigma^2$ from experiments with $n=4$ and $N=1.$ The value of $\sigma$ was set to $1$ throughout. In these experiments, the correlation was always $$R = \pmatrix{1 &0.3055569 &0.5513377 &0.5100989\\ 0.3055569 &1 &0.1240151 &0.09634469\\ 0.5513377 &0.12401511 &1 &-0.4209064\\ 0.5100989 &0.09634469 &-0.4209064 &1}$$ (as generated randomly at the outset). In the figure the leftmost panel is a histogram of the foregoing Maximum Likelihood estimates; the middle panel is a histogram of estimates using the usual (unbiased) variance estimator; and the right panel is a QQ plot of the two sets of estimates. The slanted line is the line of equality. You can see **the usual variance estimator tends to yield more extreme values.** It is also biased (due to ignoring the correlation): the mean of the MLEs is 0.986 -- surprisingly close to the true value of $\sigma^2 =1^2 =1$ while the mean of the usual estimates is only 0.791. (I write "surprisingly" because it is well-known the usual maximum likelihood estimator of $\sigma^2,$ where no correlation is involved, has a bias of order $1/(nN),$ which is pretty large in this case.)

[![Figure](https://i.stack.imgur.com/ufiK8.png)](https://i.stack.imgur.com/ufiK8.png)

You may experiment with the `R` code that produced these figures by modifying the values of `n`, `sigma`, `N`, `n.sim`, `Rho`, and the random number seed `17`.

```R
f <- function(x, Rho) {
  # The MLE of sigma^2 given data `x` and correlation `Rho`
  S <- solve(Rho)
  sum(apply(x, 1, function(x) x %*% S %*% x)) / length(c(x))
}
n <- 4
sigma <- 1
N <- 1
n.sim <- 1e3
set.seed(17)
#
# Generate a random correlation matrix.  Larger values of `d` yield more
# spherical matrices in general.
#
d <- 1
Rho <- cor(matrix(rnorm(n*(n+d)), ncol=n))
(ev <- eigen(Rho, only.values=TRUE)$values)
#
# Run the experiments.
#
library(MASS)
sim <- replicate(n.sim, {
  x <- matrix(mvrnorm(N, rep(0,n), sigma^2 * Rho), N)
  c(f(x, Rho), var(c(x)))
})
(rowMeans(sim))
#
# Plot the results.
#
par(mfrow=c(1,3))
hist(sim[1,], col=gray(.93), xlab="Estimate",
     main=expression(paste("Histogram of Estimates of ", sigma^2)))
abline(v = sigma^2, col="Red", lwd=2)
hist(sim[2,], col=gray(.93), xlab="Estimate",
     main=expression(paste("Histogram of Independent Estimates of ", sigma^2)))
abline(v = sigma^2, col="Red", lwd=2)
y1 <- sort(sim[1,])
y2 <- sort(sim[2,])
plot(y1, y2, asp=1, xlab="Correlation-based estimate", ylab="Independent estimate")
abline(0:1, col="Red", lwd=2)
par(mfrow=c(1,1))
```
I'd like to know if I have the right intuition and my answer is headed the correct way. I am given a function $f \colon \{0, 1\}^\* \rightarrow \{0, 1\}^\*$ that is computable in space $O(\log n)$; assume that for every $x \in \{0, 1\}^\*$, $f$ is length preserving, i.e. $|f(x)| = |x|$. Define $$L = \left\{ x\#y \mid \ x, y \in \{0, 1\}^\*, |x| = |y|, \ \text{and} \ f(x) = f(y)\right\}$$ I am supposed to prove that $ L \in {\sf DSPACE}(\log n) $. Please correct me if my intuition is incorrect. My solution would be to build a decider $M$, which is a Turing machine. $M$ takes inputs $x$ and $y$, runs the function $f$ on inputs $x$ and $y$, and if the lengths of the two strings are equal then accepts, otherwise rejects. Now the Turing machine runs in $O(\log n)$ because the function $f$ is computable in $O(\log n) + O(\log n) = O(\log n)$, and comparing the lengths returned by the function is $O(1)$. Thus the language is decidable by a Turing machine that runs in $O(\log n)$ and only takes space $O(\log n)$.
Here is a rough idea of how to solve this. You compare $f(x)$ and $f(y)$ character by character. Your tape is divided into 4 pieces: the first one is for simulating $f(x)$, the second one for $f(y)$; then you have two more parts, one that counts which character you are currently computing, and another $O(\log n)$ area that helps you to navigate. Now the program looks like this:

1. Determine $|x|$ and $|y|$ (store them in binary!) and check whether they are the same.
2. Compute the first character of $f(x)$ by simulating it on your machine. Store the output in a TM-state. Pause the computation of $f(x)$.
3. Walk $|x|+1$ steps to the right.
4. Simulate $f(y)$ until you get the first character; check whether it coincides with the first character of $f(x)$. If not, reject; otherwise continue.
5. Walk $|x|$ steps left.
6. Repeat steps 2.-5. until you have checked $f(x)=f(y)$ completely.
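To make the bookkeeping concrete, here is a small Python sketch of the same idea. It is only an analogy (a real proof talks about tapes, heads and states), and the particular $f$ used here, "sort the bits of the input", is my own hypothetical example of a length-preserving, logspace-computable function. The point is that each output character is recomputed on demand rather than stored.

```python
def f_char(w: str, i: int) -> str:
    """Hypothetical length-preserving f: the i-th character of w sorted.
    Only a counter is kept, mirroring the O(log n) space bound."""
    zeros = sum(1 for c in w if c == "0")   # re-scan the input each time
    return "0" if i < zeros else "1"

def same_image(x: str, y: str) -> bool:
    if len(x) != len(y):          # step 1: compare lengths (stored in binary on a real TM)
        return False
    for i in range(len(x)):       # steps 2-6: one output character at a time
        if f_char(x, i) != f_char(y, i):
            return False          # mismatch: reject
    return True

print(same_image("1011", "1101"))  # True: both sort to "0111"
print(same_image("1011", "1111"))  # False
```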
The $n$th Fibonacci number can be computed in linear time using the following recurrence:

```
def fib(n):
    i, j = 1, 1
    for k in range(1, n):
        i, j = j, i + j
    return i
```

The $n$th Fibonacci number can also be computed as $\left[\varphi^n / \sqrt{5}\right]$. However, this has problems with rounding issues for even relatively small $n$. There are probably [ways around this](https://cs.stackexchange.com/questions/7145/calculating-binets-formula-for-fibonacci-numbers-with-arbitrary-precision) but I'd rather not do that. Is there an efficient (logarithmic in the value $n$ or better) algorithm to compute the $n$th Fibonacci number that does not rely on floating point arithmetic? Assume that integer operations ($+$, $-$, $\times$, $/$) can be performed in constant time.
You can use matrix powering and the identity $$ \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^n = \begin{bmatrix} F\_{n+1} & F\_n \\ F\_n & F\_{n-1} \end{bmatrix}. $$ In your model of computation this is an $O(\log n)$ algorithm if you use repeated squaring to implement the powering.
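In case a concrete version helps, here is one way this is commonly implemented in Python (integer arithmetic only, repeated squaring of the $2\times 2$ matrix):

```python
def mat_mult(A, B):
    # 2x2 integer matrix product
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def fib(n):
    # Returns F_n via [[1,1],[1,0]]^n, using O(log n) matrix multiplications.
    result = [[1, 0], [0, 1]]          # identity matrix
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)    # repeated squaring
        n >>= 1
    return result[0][1]                # the (1,2) entry is F_n

print([fib(k) for k in range(10)])     # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```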
Assume a binary tree of height N. All nodes have exactly 2 children and all leaves are at depth N. For example, the following tree has N=3:

```
        3
      /   \
    2       2
   / \     / \
  1   1   1   1
 / \ / \ / \ / \
 0 0 0 0 0 0 0 0
```

When we do a depth-first traversal and note the height of each node, we get the following array (for any N>3):

```
[0, 0, 1, 0, 0, 1, 2, 0, 0, 1, 0, 0, 1, 2, 3, 0, 0, 1, ...]
```

Note how the head of the array isn't dependent on N and can go on infinitely. I'm looking for a function F that, for any index in that array, would give the corresponding value. For example:

```
F(2) = 1
F(4) = 0
F(6) = 2
```

If possible, the function shouldn't be recursive. I do not want to have to iterate through millions of elements to calculate a height at an index far in.
Please note that "the big-O of the function" [isn't a correct formulation](https://cs.stackexchange.com/questions/53044/what-is-the-big-theta-of-log-n2-9-log-n7/53085#53085). We assume $\log$ is the binary logarithm $\log\_2$. But actually the proof can be extended to any base. --- We have $$\log(x)\lt x, \forall x>0$$ and, if we plug in $\log(n)$ for $x$ $$\log(\log(n))\lt \log(n).\tag 1$$ Recall the [definition](https://en.wikipedia.org/wiki/Big_O_notation#Formal_definition) of $f(n)=O(g(n))$: $$|f(n)|\le M |g(n)|,\forall n\ge n\_0$$ for appropriate $M$ and $n\_0$. So if we choose $f(n)=\log(\log(n))$, $g(n)=\log(n)$, $M=1$ ,$n\_0=2$ we see that $(1)$ is $$\log(\log(n))=O(\log(n))$$ and of course $$\log(\log(n))=O(n\log(n)).$$ So all three function in your expressions are $O(n\log(n))$ and therefore every linear combination of them $$a\log(\log(n)) + b \, n\log(n) + c\log(n), \quad a,b,c \in \mathrm R$$ is $O(n\log(n))$.
OK, I am not a statistician. I am doing paired comparisons for several outcomes of cross-validated interpolation results. I have to consider only the absolute values in my analysis, so although the original data follow a normal distribution, the resultant absolute values are only half-normal. Is the t-test still valid and sound, and in either case (yes/no), what is the reasoning behind it?
``` Analyze // Nonparametric // Legacy dialogs // 1- Sample KS ``` Well, I felt this one-line answer would be enough. Ok, so this option conducts a KS (Kolmogorov-Smirnov) Test of Normality. Essentially doing the same as the Shapiro-Wilk test previously mentioned. If you need the pros and cons of both tests, wikipedia might give a good answer. In many cases Likert scales are skewed, since people tend to do yea-saying if the data is from the US. So you will probably standardize each Likert variable by subtracting mean and dividing by standard deviation. You can then conduct the KS test on the transformed variables. Skewness you can already detect by looking at the descriptives.
The feedback vertex set (FVS) problem is NP-complete for both undirected and directed graphs, and it remains NP-complete even for bipartite graphs and tournaments. Is there any special family of graphs, other than trees, for which FVS is solvable in polynomial time? I kindly request you to throw some light on it.
[According to Festa, Pardalos and Resende](http://www2.research.att.com/~mgcr/doc/sfsp.pdf) (link at the bottom of the Wikipedia page), the problem can be solved in polynomial time for various classes of graphs. At the risk of sounding harsh, I find parts of the text awkwardly unconvincing (e.g. "... it returns an optimal solution in polynomial time for certain types of graphs" without specifying which), but it might be an early draft that they put together in a rush. Anyway, section 3.2 of that paper should help you find some leads.
Recently I was introduced to the field of Data Science (it's been about 6 months). I started the journey with the Machine Learning course by Andrew Ng and after that started working on the Data Science Specialization by JHU. On the practical application front, I have been working on building a predictive model that would predict attrition. So far I have used glm, bayesglm and rf in an effort to learn and apply these methods, but I find a lot of gaps in my understanding of these algorithms. My basic dilemma is: *Should I focus more on learning the intricacies of a few algorithms, or should I take the approach of knowing a lot of them as and when, and as much as, required?* Please guide me in the right direction, maybe by suggesting books or articles or anything that you think would help. I would be grateful if you would reply with an idea of how to guide someone who has just started a career in the field of Data Science and wants to solve practical issues for the business world. I would read as many of the resources (books, articles) suggested in this post as possible and would provide personal feedback on their pros and cons, so as to make this a helpful post for people who come across a similar question in the future; I think it would be great if the people suggesting these books could do the same.
Arguably someone calling themself a data scientist ought to know more about the intricacies of the algorithms they use—e.g. what affects the convergence rate of the Fisher scoring algorithm in GLM—than a common or garden statistician—who might be content just to know that the maximum-likelihood solution will be found (perhaps after they make a cup of coffee). In any case understanding the general concepts of statistics & machine learning is important in addition to familiarity with the methods you do use—the theory behind them, the assumptions they make, what diagnostic checks you should perform, how to interpret the results. Avoid being [this parody](https://stats.stackexchange.com/questions/104500/skills-hard-to-find-in-machine-learners/104507#104507). You'd probably enjoy reading [Hastie et al. (2009), *The Elements of Statistical Learning*](http://statweb.stanford.edu/%7Etibs/ElemStatLearn/).
For a 4-bit multiplier there are $2^4 \cdot 2^4 = 2^8$ combinations. The output of 4-bit multiplication is 8 bits, so the amount of ROM needed is $2^8 \cdot 8 = 2048$ bits. Why is that? Why does the ROM need all the combinations embedded into it? What will be the case with RAM?
If you are storing all the value of the function (in this case, multiplication) in a big table, then the size of this table is $2^{\text{# input bits}}\cdot \text{# output bits}$. This becomes impractical rather quickly, so CPUs implement arithmetic operations differently.
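As a quick sanity check of the arithmetic, here is a tiny Python helper (my own illustration) that computes the lookup-table size for an $n$-bit by $n$-bit multiplier; it makes clear why the table approach stops scaling:

```python
def multiplier_rom_bits(n: int) -> int:
    """ROM bits needed to tabulate an n-bit x n-bit multiply."""
    entries = 2 ** (2 * n)      # every pair of n-bit operands is one address
    width = 2 * n               # the product needs 2n output bits
    return entries * width

print(multiplier_rom_bits(4))   # 2048 bits, as in the question
print(multiplier_rom_bits(8))   # 1,048,576 bits (128 KiB)
print(multiplier_rom_bits(16))  # ~137 Gbit: why CPUs don't use lookup tables
```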
What were the most surprising results in complexity? I think it would be useful to have a list of unexpected/surprising results. This includes both results that were surprising and came out of nowhere and also results that turned out different than people expected. **Edit**: *given the **[list](http://blog.computationalcomplexity.org/2005/12/surprising-results.html)** by Gasarch, Lewis, and Ladner on the complexity blog (pointed out by @Zeyu), let's focus this community wiki on results not on their list.* Perhaps this will lead to a focus on results after 2005 (as per @Jukka's suggestion). An example: **Weak Learning = Strong Learning [Schapire 1990]**: (Surprisingly?) Having any edge over random guessing gets you PAC learning. Lead to the AdaBoost algorithm.
If $P \neq NP$, then there is a "diagonalization" proof for it. This result is due to Kozen. Not everyone agrees with what he calls a "diagonalization" proof.
I'm currently studying growth of function chapter in Introduction to Algorithm. In exercise 3.1-2 the question is: > > Show that for any real constants $a$ and $b$, where $b>0$, > > $(n + a)^b = \Theta(n^b)$. > > > I understand I need to find constants $c\_1$, $c\_2$, $n\_0 > 0$ where > > $0 \leq c\_1n^b \leq (n+a)^b \leq c\_2n^b$ for all $n \geq n\_0$. > > > But I don't understand what should I do after above step. Looking at the solution, it shows > > $n + a \leq n + |a|$ > > $n + a \leq 2n$, when $|a| \leq n$ > > > and > > $n + a \geq n - |a|$ > > $n + a \geq \frac{n}{2}$ , when $|a| \leq \frac{n}{2}$ > > > How did they come up with $n + a \leq n + |a|$ and $n + a \geq n - |a|$ just from $0 \leq c\_1n^b \leq (n+a)^b \leq c\_2n^b$ ? Are they derived from $0 \leq c\_1n^b \leq (n+a)^b \leq c\_2n^b$ ? Or are they just a general knowledge that can help solve this problem? Please help me to understand them. Thank you! UPDATE: I have updated the question to be more specific.
> > I understand I need to find constants $c\_1$, $c\_2$, $n\_0 > 0$ where > > > $0 \leq c\_1n^b \leq (n+a)^b \leq c\_2n^b$ for all $n \geq n\_0$. > > > But I don't understand what should I do after above step. > > > The approach you would like to use here is "going backwards from the target". Suppose we do have $c\_1n^b \leq (n+a)^b$ for some constant $c\_1\gt0$ when $n$ is large enough. What can we understand or deduce from it? What is needed to make it true? Since the exponent is the same, we can use the law of same exponent, $(x/y)^m = x^m/y^m$ to simplify that inequality so that we will have the appearances of our variable $n$ closer to each other. $$ c\_1\leq \frac{(n+a)^b}{n^b}=\left(\frac{n+a}n\right)^b$$ Raising both sides to the power of $\frac1b$, we have $$ (c\_1)^{\frac1b}\leq \left(\frac{n+a}n\right)^{b\,\frac1b}=\frac{n+a}n = 1+\frac an$$ (Another way to obtain $(c\_1)^{\frac1b}\leq\frac{n+a}n$ is to raise both sides of $c\_1n^b \leq (n+a)^b$ to the power of $\frac 1b$.) Since $c\_1$ is a positive constant, $(c\_1)^{\frac1b}$ is also a positive constant. To make the above inequality hold, we would like to make $\left|\frac an\right|$ small enough. For example, we can require $\left|\frac an\right|\le \frac12$. That is why you see the following condition. $$|a|\le\frac12 n$$ What is nice here is that we can reverse the above argument to obtain a wanted constant $c\_1$. Note that it is just as fine if we choose a different condition such as $|a|\le\frac13 n$ or $|a|\le\frac1{2019}n$ or infinitely many others. If we start from $(n+a)^b \leq c\_2n^b$, we could arrive at $|a|\le n$. > > Are they derived from $0\le c\_1n^b\le(n+a)^b\le c\_2n^b$ ? Or are they just a general knowledge that can help solve this problem? > > > Yes, they are derived as you have just seen. You could say that they are general knowledge that helps solve this problem. You could also say that we just discovered some specific facts that help solve this problem.
This is related to the general question of "[How do I referee a paper?](https://cstheory.stackexchange.com/questions/1893/how-do-i-referee-a-paper)". I am reviewing a paper for a conference, and this paper should be rejected, because it is not significant enough for publication and has flaws in some of its technical details. The paper is not wrong, but the ways in which it is right are not very interesting. How do I write a negative review in such a settings? More importantly, how do I do this as a non-senior researcher? As an example, they make part of their argument through numerics, but I can prove the same result analytically. The analytic treatment is longer than what I can include in a review. I know the author's emails (the review process is not blind both ways), should I email them? This is a conference, so there is no time for revisions, will they be mad at me for rejecting their paper? Should I email them before or after they receive the reviews? The paper is also sparsely cited, and does not connect strongly to existing literature. I am familiar with many aspects of the relevant literature, how detailed should I make my recommendations on further reading/references? To extract the general questions: * How does a junior researcher write a negative review? * What should you do if you have specific technical comments/improvements that are too long for a standard review?
I think this is too much of a duplicate of the other question, but my comment got too long. * Firstly, the review process is blind, so you simply have to do the best that you can. Be objective, be polite, give good reasons – see other [question](https://cstheory.stackexchange.com/questions/1893/how-do-i-referee-a-paper). * There is probably no such thing has having technical comments/improvements that are too long for the standard review, unless you will end up rewriting the paper. Authors tend to appreciate comments, even if they mean more work. If the paper is borderline and is worth improving, then give as many comments as you have time for. * If you have additional comments that you want to send to the authors, consult with the chair/editor before contacting the authors. Perhaps your comments can be sent anonymously via the chair/editor. * Papers get rejected, we learn to live with it. The best you can do is provide good comments to help the authors improve the paper for next time round. Authors may get mad, but they don't get even. * If the paper is terrible, just save your time and don't bother with all the comments. Write a sufficiently convincing review giving reasons why the paper should be rejected, point out that there are many typos and other mistakes, but don't enumerate them all. * On the other hand, if you are presenting and proving an alternative axiomatisation of their results to the degree that you are not only rewriting their paper, but lifting their results to a whole new level, then you should consult the chair/editor to see what to do. Maybe you should get credit for your contribution. You certainly should *never* write up your results independently, not until the paper has appeared in print.
I know that different authors use different notation to represent programming language semantics. As a matter of fact [Guy Steele addresses this problem in an interesting video](https://www.youtube.com/watch?v=7HKbjYqqPPQ). I'd like to know if anyone knows whether the leading turnstile operator has a well recognized meaning. For example I don't understand the leading $\vdash$ operator at the beginning of the denominator of the following: $$\frac{x:T\_1 \vdash t\_2:T\_2}{\vdash \lambda x:T\_1 . t\_2 ~:~ T\_1 \to T\_2}$$ Can someone help me understand? Thanks.
On the left of the turnstile, you can find the local context, a finite list of assumptions on the types of the variables at hand. $$ x\_1:T\_1, \ldots, x\_n:T\_n \vdash e:T $$ Above, $n$ can be zero, resulting in $\vdash e:T$. This means that no assumptions on variables are made. Usually, this means that $e$ is a closed term (without any free variables) having type $T$. Often, the rule you mention is written in a more general form, where there can be more hypotheses than the one mentioned in the question. $$ \dfrac{ \Gamma, x:T\_1 \vdash t : T\_2 }{ \Gamma\vdash (\lambda x:T\_1. t) : T\_1\to T\_2 } $$ Here, $\Gamma$ represents any context, and $\Gamma, x:T\_1$ represents its extension obtained by appending the additional hypothesis $x:T\_1$ to the list $\Gamma$. It is common to require that $x$ did not appear in $\Gamma$, so that the extension does not "conflict" with a previous assumption.
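As a small worked instance (my own example, which also uses the standard variable axiom $\Gamma, x:T \vdash x:T$), here is the derivation showing that the identity function is well typed under the empty context:

$$ \dfrac{ x:T \vdash x : T }{ \vdash (\lambda x{:}T.\; x) : T \to T } $$

The premise holds because the assumption $x:T$ appears in the context; the conclusion then has an empty left-hand side, which is exactly the bare $\vdash$ from the question.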
We know that Karger's mincut algorithm can be used to prove (in a non-constructive way) that the maximum number of possible mincuts a graph can have is $n \choose 2$. I was wondering if we could somehow prove this identity by giving a bijective (rather injective) proof from the set of mincuts to another set of cardinality $n \choose 2$. No specific reasons, its just a curiosity. I tried doing it on my own but so far have not had any success. I would not want anyone to squander time over this and so if the question seems pointless I would request the moderators to take action accordingly. Best -Akash
The $\binom{n}{2}$ bound I think was originally proven by Dinitz, Karzanov and Lomonosov in 1976, in "A structure for the system of all minimum cuts of a graph". Perhaps you can find what you're looking for in this paper, but I'm not sure if it's online.
Assembly language is converted in to machine language by assembler. Why would a compiler convert high-level language to assembly? Can't it directly convert from the high-level language to machine code?
A compiler does usually convert high-level code directly to machine language, but it can be built in a modular way so that one back-end emits machine code and the other assembly code (like GCC). The code generation phase produces "code" which is some internal representation of machine code, which then has to be converted to a usable format like machine language or assembly code.
I have problem in determine whether it is decidable or not, can somebody help me please
A language over $\{0\}$ is any subset of the set S of all strings consisting of a finite number of 0s. S is countable, the set of all subsets of S is uncountable. Any decidable language must be decided by some algorithm. The set of algorithms, no matter how you describe it, is countable, not uncountable. Therefore some subsets of S (actually almost all) are not decidable.
Could anyone advise if the following makes sense: I am dealing with an ordinary linear model with 4 predictors. I am in two minds about whether to drop the least significant term. Its $p$-value is a little over 0.05. I have argued in favor of dropping it along these lines: multiplying the estimate of this term by (for example) the interquartile range of the sample data for this variable gives some meaning to the clinical effect that keeping this term has on the overall model. Since this number is very low, approximately equal to the typical intra-day range of values that the variable can take when measuring it in a clinical setting, I see it as not clinically significant, and the term could therefore be dropped to give a more parsimonious model, even though dropping it reduces the adjusted $R^2$ a little.
There are at least two other possible reasons for keeping a variable: 1) It affects the parameters for OTHER variables. 2) The fact that it is small is clinically interesting in itself To see about 1, you can look at the predicted values for each person from a model with and without the variable in the model. I suggest making a scatterplot of these two sets of values. If there are no big differences, then that's an argument against this reason For 2, think about why you had this variable in the list of possible variables. Is it based on theory? Did other research find a large effect size?
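For point 1, a sketch of that comparison in Python (synthetic stand-in data and hypothetical variable names; statsmodels and matplotlib assumed to be available):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

# Synthetic stand-in data; replace with your own data frame.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame(rng.normal(size=(n, 4)), columns=["x1", "x2", "x3", "x4"])
df["y"] = 2*df.x1 - df.x2 + 0.5*df.x3 + 0.1*df.x4 + rng.normal(size=n)

full    = smf.ols("y ~ x1 + x2 + x3 + x4", data=df).fit()
reduced = smf.ols("y ~ x1 + x2 + x3", data=df).fit()

# Point 1: do the predictions change much when the borderline term is dropped?
plt.scatter(full.fittedvalues, reduced.fittedvalues, s=10)
lims = [df.y.min(), df.y.max()]
plt.plot(lims, lims, color="red")   # line of equality
plt.xlabel("Predicted with the extra term")
plt.ylabel("Predicted without it")
plt.show()
```

If the points hug the line of equality, dropping the variable changes the predictions, and hence the practical story for the other coefficients, very little.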
The typical general linear model (GLM) for count data uses the Poisson link function. The counts there are assumed to be "independent". Now suppose the counts are not "independent" in a sense illustrated by the following toy example. There are data points on 2100 students who each take 7 courses. The response is the number of "A" grades they earn. The histogram below illustrates the observed "A" count. I'm interested in a GLM for this kind of response distribution with some predictors (for example, # hours spent studying + household income). From a modeling perspective, it is reasonable to believe that students who get an "A" in one course are likely to get an "A" in other courses (and vice versa). So I am unsure as to what an appropriate link function would be for a GLM. It is clear that the responses don't follow a Poisson distribution in this example. But would a logarithmic link function (i.e. Poisson regression) still be valid in this scenario? Any thoughts would be much appreciated. [![Example of Count distribution](https://i.stack.imgur.com/lTJC1.png)](https://i.stack.imgur.com/lTJC1.png)
Without knowing the true value, I don't see how you can speak to accuracy for a given height estimate. The variance component could be defined as the imprecision of the estimate (whether or not that estimate is correct). That said, you surely have trained your algorithm using some known values or truth, such that you could at least provide the accuracy and precision for the training data. This will obviously differ, but at least there is some information about how well it performs. If you used cross-valiation and test samples, all the better in providing how well the algorithm performed overall (e.g. precision, recall, accuracy, efficiency etc.).
I have observed that neural network models (using Keras TensorFlow) can be very unstable (when my sample size is small) in the sense that if I were to train 999 NN models, there might only be 99 with good training accuracy. I imagine this is due to the stochastic nature of the initiation of weights in the NN; hence only some initiation was able to lead to a local minima. However, when I use logistic regression (specifically the `statsmodels` package in python), the trained model is fairly stable in the sense that no matter how many times I train it, the accuracy and recall etc are fairly constant. My question is - is this a consequence of the difference in nature between logistic regression and NN (e.g. could it be because logistic regression does not need random initiation of weights?) or is this merely a consequence of the packages I am using? (e.g. perhaps `statsmodels` has defined constant starting state?) My understanding is that a logistic regression could also be viewed as a single node NN so I am wondering why should it be any different.
There is a key difference between logistic regression and neural networks. Neural networks have multiple local minima, and thus it's inherently sensible to kick off your gradient descent multiple times from different initialisations, as well as to use stochastic gradient descent. You would expect to end up in different places depending on where you start. The logistic regression cost function, however, [can be shown](https://math.stackexchange.com/q/3198681) to be convex, and thus even if you kick your gradient descent off from different initialisations, you should always end up in the same place, give or take numerical effects associated with (S)GD. It is true that logistic regression is a single-layer neural network, but in somewhat handwaving terms, the term which goes through the logistic function is linear in all model parameters (the decision boundary is linear in all model parameters). As soon as you add another layer, the term which goes through the logistic function is a non-linear function of some of the model parameters. This is what starts to make the cost function non-convex (I state this vaguely, without proof), and that's why even a two-layer neural network can end up in different places if you initialise it differently, whereas logistic regression is the special case where this does not happen.
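A quick way to see this empirically (a sketch with scikit-learn, which fits logistic regression with a convex solver rather than the statsmodels routine mentioned in the question, but the point is the same) is to refit both models from several random seeds and compare how much the fitted parameters and scores move:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

lr_coefs, nn_scores = [], []
for seed in range(5):
    lr = LogisticRegression(max_iter=1000, random_state=seed).fit(X, y)
    lr_coefs.append(lr.coef_.ravel())

    nn = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                       random_state=seed).fit(X, y)
    nn_scores.append(nn.score(X, y))

# Logistic regression: coefficients barely move across seeds (convex problem).
print(np.std(lr_coefs, axis=0).max())
# The network's training accuracy may vary from seed to seed (non-convex problem).
print(nn_scores)
```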
I have trained a simple CNN (using Python + Lasagne) for a 2-class EEG classification problem; however, the network doesn't seem to learn. The loss does not drop over epochs and classification accuracy doesn't rise above random guessing (50%):

[![training curve](https://i.stack.imgur.com/EIoa4.png)](https://i.stack.imgur.com/EIoa4.png)

**Questions**

1. Is there anything wrong with the code that is causing this?
2. Is there a better (more correct?) way to handle EEG data?

**EEG setup**

Data is collected from participants completing a total of 1044 EEG trials. Each trial lasts 2 seconds (512 time samples), has 64 channels of EEG data, and is labelled 0/1. All trials have been shuffled so as to not learn on one set of participants and test on another. The goal is to predict the label of a trial after being given the 64x512 matrix of raw EEG data.

The raw input data (which I can't show here as it's part of a research project) has a shape of `(1044, 1, 64, 512)`. Train/validation/test splits are then created at 60/20/20%. With such a small dataset I would have thought overfitting would be a problem, but the training loss doesn't seem to reflect that.

**Code**

Network architecture:

```
def build_cnn(input_var=None):
    l_in = InputLayer(shape=(None, 1, 64, 512), input_var=input_var)

    l_conv1 = Conv2DLayer(incoming = l_in, num_filters = 32,
                          filter_size = (1, 3), stride = 1, pad = 'same',
                          W = lasagne.init.Normal(std = 0.02),
                          nonlinearity = lasagne.nonlinearities.rectify)

    l_pool1 = Pool2DLayer(incoming = l_conv1, pool_size = (1, 2), stride = (2, 2))

    l_fc = lasagne.layers.DenseLayer(
            lasagne.layers.dropout(l_pool1, p=.5),
            num_units=256,
            nonlinearity=lasagne.nonlinearities.rectify)

    l_out = lasagne.layers.DenseLayer(
            lasagne.layers.dropout(l_fc, p=.5),
            num_units=2,
            nonlinearity=lasagne.nonlinearities.softmax)

    return l_out
```

Note: I have tried adding more conv/pool layers as I thought the network wasn't deep enough to learn the categories, but 1) this doesn't change the outcome I mentioned above and 2) I've seen other EEG classification code where a simple 1-conv-layer network can get above random chance.

Helper for creating mini batches:

```
def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
    assert len(inputs) == len(targets)
    if shuffle:
        indices = np.arange(len(inputs))
        np.random.shuffle(indices)
    for start_idx in range(0, len(inputs) - batchsize + 1, batchsize):
        if shuffle:
            excerpt = indices[start_idx:start_idx + batchsize]
        else:
            excerpt = slice(start_idx, start_idx + batchsize)
        yield inputs[excerpt], targets[excerpt]
```

Running the model:

```
def main(model='cnn', batch_size=500, num_epochs=500):
    input_var = T.tensor4('inputs')
    target_var = T.ivector('targets')

    network = build_cnn(input_var)

    prediction = lasagne.layers.get_output(network)
    loss = lasagne.objectives.categorical_crossentropy(prediction, target_var)
    loss = loss.mean()

    train_acc = T.mean(T.eq(T.argmax(prediction, axis=1), target_var),
                       dtype=theano.config.floatX)

    params = lasagne.layers.get_all_params(network, trainable=True)
    updates = lasagne.updates.nesterov_momentum(loss, params, learning_rate=0.01)

    test_prediction = lasagne.layers.get_output(network, deterministic=True)
    test_loss = lasagne.objectives.categorical_crossentropy(test_prediction, target_var)
    test_loss = test_loss.mean()

    test_acc = T.mean(T.eq(T.argmax(test_prediction, axis=1), target_var),
                      dtype=theano.config.floatX)

    train_fn = theano.function([input_var, target_var], [loss, train_acc], updates=updates)

    val_fn = theano.function([input_var, target_var], [test_loss, test_acc])

    print("Starting training...")

    for epoch in range(num_epochs):

        # full pass over the training data:
        train_err = 0
        train_acc = 0
        train_batches = 0
        start_time = time.time()
        for batch in iterate_minibatches(train_data, train_labels, batch_size, shuffle=True):
            inputs, targets = batch
            err, acc = train_fn(inputs, targets)
            train_err += err
            train_acc += acc
            train_batches += 1

        # full pass over the validation data:
        val_err = 0
        val_acc = 0
        val_batches = 0
        for batch in iterate_minibatches(val_data, val_labels, batch_size, shuffle=False):
            inputs, targets = batch
            err, acc = val_fn(inputs, targets)
            val_err += err
            val_acc += acc
            val_batches += 1

    # After training, compute the test predictions/error:
    test_err = 0
    test_acc = 0
    test_batches = 0
    for batch in iterate_minibatches(test_data, test_labels, batch_size, shuffle=False):
        inputs, targets = batch
        err, acc = val_fn(inputs, targets)
        test_err += err
        test_acc += acc
        test_batches += 1

# Run the model
main(batch_size=5, num_epochs=30)
```
I had the same problem when I used TensorFlow to build a self driving car. The training error for my neural nets bounced around forever and never converged on a minimum. As a sanity check I couldn't even intentionally get my models to overfit, so I knew something was definitely wrong. What worked for me was scaling my inputs. My inputs were pixel color channels between 0 and 255, so I divided all values by 255. From that point onward, my model training (and validation) error hit a minimum as expected and stopped bouncing around. I was surprised how big of a difference it made. I can't guarantee it will work for your case, but it's definitely worth trying, since it's easy to implement.
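A minimal sketch of the rescaling itself (stand-in data; for inputs that are not pixel values you would scale by the training-set minimum and maximum, or standardise, rather than by 255):

```python
import numpy as np

# Stand-in for raw inputs on a 0-255 scale (e.g. pixel colour channels).
X_train = np.random.randint(0, 256, size=(1000, 3, 32, 32)).astype("float32")

# Pixel data: the simple division mentioned above.
X_train_pixels = X_train / 255.0

# More general inputs: scale with statistics computed on the training set only,
# and reuse the same x_min / x_max for the validation and test sets.
x_min, x_max = X_train.min(), X_train.max()
X_train_scaled = (X_train - x_min) / (x_max - x_min)

print(X_train_scaled.min(), X_train_scaled.max())   # 0.0 1.0
```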
For example, taking the image from [sebastian raschkas post "Machine Learning FAQ"](https://sebastianraschka.com/faq/docs/bagging-boosting-rf.html#boosting): [![enter image description here](https://i.stack.imgur.com/YknB5.png)](https://i.stack.imgur.com/YknB5.png) I would expect a very similar (if not exactly the same) result for a decision tree: Given only two features, it finds the optimal feature (and value for that feature) to split the classes. Then, the decision tree does the same for each child considering only the data which arrives in the child. Of course, boosting considers all the data again, but at least in the given sample it leads to exactly the same decision boundary. **Could you make an example where a decision tree would have a different decision boundary on the same training set than boosted decision stumps?** I have the intuition that boosted decision stumps are less likely to overfit because the base classifier is so simple, but I couldn't exactly pin point why.
A [decision stump](https://en.wikipedia.org/wiki/Decision_stump) is a decision tree with a single split between root and leaves, whereas a [decision tree](https://en.wikipedia.org/wiki/Decision_tree_learning) can have several splits between root and leaves. An easy example of the two: a decision stump could ask only which side of a coin faces up when thrown, while a decision tree could additionally ask whether the coin is already touching the ground (the states are interconnected):

```
          stump                       tree
   Is the coin thrown?        Is it touching ground?
          |                            |
        50 50               can turn?     lays still?
                                |              |
                         to right to left     0 0
                             |       |
                           0 100   100 0
```

Boosting can't help here: the decision tree in my example knows, for example, which way the coin can turn **before the decision**, while the stump always comes one step **after**. So a stump can help in finding a statistical pattern in this example, but not the underlying external facts affecting the system at a given move, if the conditions vary randomly in time.
I am looking for a Python library that can perform [segmented regression (a.k.a. piecewise regression)](https://en.wikipedia.org/wiki/Segmented_regression#Segmented_linear_regression.2C_two_segments). [Example](https://onlinecourses.science.psu.edu/stat501/node/310): [![enter image description here](https://i.stack.imgur.com/ZNoPv.png)](https://i.stack.imgur.com/ZNoPv.png)
`numpy.piecewise` can do this.

> piecewise(x, condlist, funclist, \*args, \*\*kw)
>
> Evaluate a piecewise-defined function.
>
> Given a set of conditions and corresponding functions, evaluate each function on the input data wherever its condition is true.

An example is given on SO [here](https://stackoverflow.com/questions/29382903/how-to-apply-piecewise-linear-fit-in-python). For completeness, here is an example:

```
from scipy import optimize
import matplotlib.pyplot as plt
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], dtype=float)
y = np.array([5, 7, 9, 11, 13, 15, 28.92, 42.81, 56.7, 70.59, 84.47, 98.36,
              112.25, 126.14, 140.03])

def piecewise_linear(x, x0, y0, k1, k2):
    return np.piecewise(x, [x < x0, x >= x0],
                        [lambda x: k1*x + y0 - k1*x0,
                         lambda x: k2*x + y0 - k2*x0])

p, e = optimize.curve_fit(piecewise_linear, x, y)
xd = np.linspace(0, 15, 100)
plt.plot(x, y, "o")
plt.plot(xd, piecewise_linear(xd, *p))
```
> > Assume $L\_1$ is a regular language, and define: > > > $$L = \{wcv ∈ \{a, b, c\}^\* \mid |w|\_a + 2|v|\_b ≡ 3 \bmod 5, w, v ∈ L\_1\}.$$ > > > Show that $L$ is regular. > > > I first tried to prove by showing that the pumping lemma holds true, then learned that it was not a double implication and can only be used to prove languages are not regular. Then I tried to draw an NFA, but didn't make any progress. What's a good way to prove that a language like this is regular?
The answer depends on whether $L\_1$ is a language over $\{a,b\}$ or over $\{a,b,c\}$. **$L\_1$ is a language over $\{a,b\}$** In this case, the easiest way to proceed is using closure operations. Show first (by constructing a DFA) that the following language is regular: $$ L\_2 = \{wcv \mid |w|\_a+2|v|\_b \equiv 3 \bmod 5 , w,v \in \{a,b\}^\*\}. $$ Your language is $L = L\_1cL\_1 \cap L\_2$. **$L\_1$ is a language over $\{a,b,c\}$** In this case we have to be more careful. Given a DFA (or an NFA) for $L\_1$, we construct one for $L$ which in three stages: 1. Simulate $L\_1$ and keep track of the number of $a$s modulo 5. Whenever at an accepting state of $L\_1$, add a nondeterministic move to stage 2. 2. Read $c$. 3. Simulate $L\_1$ (starting from its initial state again) and keep track of the number of $b$s modulo 5. A state is accepting if it is accepting for $L\_1$ and the constraint $|w|\_a+2|v|\_b$ is satisfied, where $w$ is the word read at the first stage, and $v$ is the word read at the third stage. I'll let you fill in the details.
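To make the first step concrete, here is a small Python simulation (my own illustration) of the DFA for $L\_2$: its state is just a phase flag (before or after the $c$) plus a counter mod 5, which is all the memory the automaton needs.

```python
def in_L2(s: str) -> bool:
    """Accepts w c v with w, v over {a, b} and |w|_a + 2|v|_b = 3 (mod 5)."""
    phase, count = 0, 0          # phase 0: reading w, phase 1: reading v
    for ch in s:
        if ch == 'c':
            if phase == 1:       # a second c: not of the form w c v
                return False
            phase = 1
        elif ch == 'a':
            if phase == 0:
                count = (count + 1) % 5
        elif ch == 'b':
            if phase == 1:
                count = (count + 2) % 5
        else:
            return False         # not a word over {a, b, c}
    return phase == 1 and count == 3

print(in_L2("aaac"))   # True:  3 + 2*0 = 3 (mod 5)
print(in_L2("acb"))    # True:  1 + 2*1 = 3 (mod 5)
print(in_L2("aac"))    # False: 2 + 2*0 = 2 (mod 5)
```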
I wish there were more, but the subject pretty much captures my whole question. Is there a non-Turing-complete model (some constrained term rewriting system or automaton or what have you) which is known to be able to enumerate the prime numbers, all of the prime numbers (well, til you pull the plug on the algorithm), and only the prime numbers? To be clear, one of the criteria I'm imposing is that this algorithm would be **non-halting**, and so long as it is left running, it will continue intermittently outputting prime numbers. This seems to necessitate an unbounded working memory, which gets you a lot of the way towards Turing-complete already. Furthermore, this must be a model with a *finite* description, meaning no cute "consider the system mapping the natural numbers to the primes" answers, please. Even if technically correct, what I'm really after is whether it seems probable that prime enumeration would be a strong indicator of Turing completeness. **Edit:** As for the objection that Turing machines can't yield values at will, I consider that semantics and not relevant to the spirit of the question. A Turing machine could certainly record all prime numbers found thus far on its tape, which we could presumably examine at will.
As observed above, "locally" the problem of enumerating primes is very easy: the function sending $n$ to the $n$th prime, $n\mapsto \mathsf{thprime}(n)$, is primitive recursive (and of course that's massive overkill). However, you specifically want a framework which allows programs to run forever. This complicates things a bit: in some sense we want the pseudocodish expression > > for i in Nat: print$(\mathsf{thprime}(i))$ > > > to live inside some simple context despite the unbounded for-loop. There is in fact something we can do here, which may be highly unnatural from a CS perspective but is quite natural from a *logic* perspective: require our programs to **come with proofs of niceness** (in the appropriate sense, over some fixed appropriate theory). For example, let a **ZFC-tame program** be a *pair* $(p,\pi)$, where: * $p$ is a program in the usual sense, and * $\pi$ is a formal $\mathsf{ZFC}$-proof that $p$ prints infinitely many things (so there's no "running forever without doing anything"). Of course this is still a bit vague but the point should be clear. It turns out that this will always fall well short of Turing completeness: > > There is no ZFC-tame program $(p,\pi)$ which prints the sequence $(k\_i)\_{i\in\omega}$, where $k\_i$ is the $i$th output of the $i$th ZFC-tame program in some appropriate enumeration. > > > *(Compare with the "theory-free" result that the set $\mathsf{Tot}$ of indices of total computable functions is not computably enumerable.)* This *despite* the fact that we can very simply enumerate the ZFC-tame programs! [This old MSE answer of mine](https://math.stackexchange.com/questions/2272734/diagonalization-on-turing-machines-and-proofs-where-does-the-argument-fail/2273392#2273392) discusses essentially this point.
I am new to this site and this question is certainly not research level - but oh well. I have a little background in software engineering and almost none in CSTheory, but I find it attractive. To make a long story short, I would like a more detailed answer to the following if this question is acceptable on this site. So, I know that every recursive program has an iterative analog and I kind of understand the popular explanation that is offered for it by maintaining something similar to the "system stack" and pushing environment settings like return address etc. I find this kind of handwavy. Being a little more concrete, I would like to (formally) see how does one prove this statement in cases where you have a function invoking chain $F\_0 \rightarrow F\_1 \ldots F\_i \rightarrow F\_{i+1} \ldots F\_n \rightarrow F\_0$. Further, what if there are some conditional statements which might lead an $F\_i$ make a call to some $F\_j$? That is, the potential function call graph has some strongly connected components. I would like to know how can these situations be handled by let us say some recursive to iterative converter. And is the handwavy description I referred to earlier, really enough for this problem? I mean then why is it that I find removing recursion in some cases easy. In particular removing recursion from pre-order traversal of a Binary tree is really easy - its a standard interview question but removing recursion in case of post order has always been a nightmare for me. What I am really asking is $2$ questions (1) Is there really a more formal (convincing?) proof that recursion can be converted to iteration? (2) If this theory is really out there, then why is it that I find, for eg, *iteratizing* preorder easier and postorder so hard? (other than my limited intelligence)
If you're familiar with languages that support lambdas then one avenue is to look into the CPS transformation. Removing use of the call stack (and recursion in particular) is exactly what the CPS transformation does. It transforms a program containing procedure calls into a program with only tail calls (you can think of these as gotos, which is an iterative construct). The CPS transformation is closely related to explicitly keeping a call stack in a traditional array based stack, but instead of in an array the call stack is represented with linked closures.
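A tiny illustration of the transformation (written in Python for readability, even though Python does not eliminate tail calls, so treat it as a sketch of the shape of the transformation rather than a practical recipe):

```python
# Direct style: the pending multiplication lives on the call stack.
def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)

# CPS: every call is a tail call; the pending work lives in the closure `k`.
def fact_cps(n, k):
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

# The chain of continuations plays the role of the explicit stack, which is
# what makes the recursion-to-iteration conversion explicit.
print(fact(5), fact_cps(5, lambda r: r))   # 120 120
```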
One of the methods for dealing with seasonal influence is to establish a multiplicative factor for each season within a "year". For example, this happens with exponential smoothing models of type $(\*,\*,M)$. These adjustment factors need to be normalized - if there are $m$ seasons in a "year", then the seasonality normally represents only $m-1$ degrees of freedom. The standard normalization requirement for adjustment factors $f\_1, \dotsc, f\_m$ seems to be to require that $\sum\_{i=1}^m f\_i = m$ - that is, the arithmetic mean of the factors is $1$. This is certainly the case in Hyndman et al., "Forecasting wtih Exponential Smoothing" (section 8.2). I would have expected the requirement that the geometric mean be $1$, i.e., $\prod\_{i=1}^m f\_i = 1$. This because the standard for *additive* seasonal adjustment is that $\sum\_{i=1}^m f\_i = 0$; these two are equivalent under taking logarithms. Is there a good mathematical reason for the choice, or is it just a convention?
It doesn't make much difference in practice. But your suggested multiplicative adjustment is more consistent with the way the seasonal factors are used in ETS(\*,\*,M) models.
[Why use softmax as opposed to standard normalization?](https://stackoverflow.com/questions/17187507/why-use-softmax-as-opposed-to-standard-normalization) In the comment area of the top answer of this question, @Kilian Batzner raised 2 questions which also confuse me a lot. It seems no one gives an explanation except numerical benefits. > > I get the reasons for using Cross-Entropy Loss, but how does that relate to the softmax? You said "the softmax function can be seen as trying to minimize the cross-entropy between the predictions and the truth". Suppose, I would use standard / linear normalization, but still use the Cross-Entropy Loss. Then I would also try to minimize the Cross-Entropy. So how is the softmax linked to the Cross-Entropy except for the numerical benefits? > > > As for the probabilistic view: what is the motivation for looking at log probabilities? The reasoning seems to be a bit like "We use e^x in the softmax, because we interpret x as log-probabilties". With the same reasoning we could say, we use e^e^e^x in the softmax, because we interpret x as log-log-log-probabilities (Exaggerating here, of course). I get the numerical benefits of softmax, but what is the theoretical motivation for using it? > > >
It is more than just numerical. A quick reminder of the softmax: $$ P(y=j | x) = \frac{e^{x\_j}}{\sum\_{k=1}^K e^{x\_k}} $$ Where $x$ is an input vector with length equal to the number of classes $K$. The softmax function has 3 very nice properties: 1. it normalizes your data (outputs a proper probability distribution), 2. is differentiable, and 3. it uses the exp you mentioned. A few important points: 1. The loss function is not directly related to softmax. You can use standard normalization and still use cross-entropy. 2. A "hardmax" function (i.e. argmax) is not differentiable. The softmax gives at least a minimal amount of probability to all elements in the output vector, and so is nicely differentiable, hence the term "soft" in softmax. 3. Now I get to your question. The $e$ in softmax is the natural exponential function. **Before** we normalize, we transform $x$ as in the graph of $e^x$: [![natural exponential function](https://i.stack.imgur.com/Rs2tj.png)](https://i.stack.imgur.com/Rs2tj.png) If $x$ is 0 then $y=1$, if $x$ is 1, then $y=2.7$, and if $x$ is 2, now $y=7$! A huge step! This is what's called a non-linear transformation of our unnormalized log scores. The interesting property of the exponential function combined with the normalization in the softmax is that high scores in $x$ become much more probable than low scores. **An example**. Say $K=4$, and your log score $x$ is vector $[2, 4, 2, 1]$. The simple argmax function outputs: $$ [0, 1, 0, 0] $$ The argmax is the goal, but it's not differentiable and we can't train our model with it :( A simple normalization, which is differentiable, outputs the following probabilities: $$ [0.2222, 0.4444, 0.2222, 0.1111] $$ That's really far from the argmax! :( Whereas the softmax outputs: $$ [0.1025, 0.7573, 0.1025, 0.0377] $$ That's much closer to the argmax! Because we use the natural exponential, we hugely increase the probability of the biggest score and decrease the probability of the lower scores when compared with standard normalization. Hence the "max" in softmax.
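The numbers above are easy to reproduce; here is a short NumPy sketch (with the usual subtract-the-max trick, which does not change the output but avoids overflow):

```python
import numpy as np

x = np.array([2.0, 4.0, 2.0, 1.0])

def softmax(z):
    e = np.exp(z - z.max())        # subtracting the max avoids overflow
    return e / e.sum()

plain = x / x.sum()                # simple normalization
soft = softmax(x)

print(np.round(plain, 4))          # [0.2222 0.4444 0.2222 0.1111]
print(np.round(soft, 4))           # [0.1025 0.7573 0.1025 0.0377]
```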
**Input:** A set of $\ell$ arrays $A\_i$ (of numbers). The elements within each array are in sorted order, but the set of arrays is not necessarily sorted. The arrays are not necessarily the same size. The total number of elements is $n$. **Output:** The $k$th smallest element out of all elements in the input. What's the most efficient algorithm for this problem? Is it possible, for example to achieve a running time of $O(\ell + \log n)$?
Here is a randomized $O(\ell\log^2 n)$ algorithm. It can probably be derandomized using the same trick used to derandomize the usual quickselect. We emulate the classical quickselect algorithm. In each phase, you pick a pivot and calculate how many elements are below it, in $O(\ell\log n)$, using binary search in each list. Then you remove elements on the wrong side, and repeat. The process ends after $\log n$ iterations in expectation.
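Here is a Python sketch of that emulated quickselect (my own illustration): the per-list binary searches are done with `bisect`, the pivot is drawn uniformly from the elements still in play, and for clarity a few binary searches are repeated rather than cached.

```python
import random
from bisect import bisect_left, bisect_right

def kth_smallest(lists, k):
    """Return the k-th smallest (1-based) element of the union of sorted lists."""
    lo = [0] * len(lists)              # active slice [lo[i], hi[i]) of each list
    hi = [len(l) for l in lists]
    while True:
        sizes = [b - a for a, b in zip(lo, hi)]
        total = sum(sizes)
        # Pick a pivot uniformly at random among the remaining elements.
        r = random.randrange(total)
        for i, s in enumerate(sizes):
            if r < s:
                pivot = lists[i][lo[i] + r]
                break
            r -= s
        # One binary search per list: how many remaining elements are
        # strictly below / not above the pivot.
        lt = sum(bisect_left(lists[i], pivot, lo[i], hi[i]) - lo[i] for i in range(len(lists)))
        le = sum(bisect_right(lists[i], pivot, lo[i], hi[i]) - lo[i] for i in range(len(lists)))
        if k <= lt:                    # answer is strictly below the pivot
            hi = [bisect_left(lists[i], pivot, lo[i], hi[i]) for i in range(len(lists))]
        elif k <= le:                  # answer equals the pivot
            return pivot
        else:                          # answer is strictly above the pivot
            k -= le
            lo = [bisect_right(lists[i], pivot, lo[i], hi[i]) for i in range(len(lists))]

print(kth_smallest([[1, 3, 5, 7], [2, 4, 6], [0, 10]], 4))   # 3
```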
I trained a linear regression model on some data. Now I have the intercept and the other coefficients. How to relate that with percent change in target given some percent change in a feature, keeping all others constant?
Let's say we have fitted a model such as: $$ y = x\_1 + x\_2 + \epsilon$$ and we obtained estimates giving the following equation: $$ \hat{y} = 2x\_1 + 3x\_2 $$ Thus for a 1 unit change in $x\_1$ we expect a change of 2 units of $y$. It is not possible to get the expected % change in $y$ from a % change in $x$ unless the variables are on a log scale prior to running the model.
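A small simulated illustration of that last point (statsmodels assumed to be available): after taking logs of both sides, the slope reads directly as an elasticity, i.e. the approximate % change in $y$ for a 1% change in $x$.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, size=500)
y = 3 * x**0.8 * np.exp(rng.normal(scale=0.05, size=500))   # true elasticity 0.8

# Levels: the slope is "units of y per unit of x", not a percentage effect.
levels = sm.OLS(y, sm.add_constant(x)).fit()

# Logs: the slope is an elasticity (% change in y for a 1% change in x).
logs = sm.OLS(np.log(y), sm.add_constant(np.log(x))).fit()

print(levels.params)   # intercept and slope in original units
print(logs.params)     # slope should come out close to 0.8
```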
Consider the maximum integral flow problem on a directed graph $G=(V,E)$ with integral capacities $c:E\to \mathbb{N}$. We have an additional constraint that for the set of edges in $F\subseteq E$, the flow value has to be even. Such flow is called $F$-even max-flow. Is finding the maximum $F$-even max-flow NP-hard? **The gap between $F$-even max-flow and max-flow** Consider 2 edges $(s,v)$,$(v,t)$. where edge $(s,v)$ has capacity $2$, and $F=\{(s,v)\}$. $(v,t)$ has capacity 1. The max flow is 1 and $F$-even max-flow is 0. One can use this to construct larger examples. The difference between $F$-even max-flow and max-flow is bounded by $|E|$. Setting $c'(e) = \lfloor c(e)/2 \rfloor$, we can compute a maximum flow with respect to $c'$. We can scale it to obtain a $E$-even max-flow with respect to $c$. The difference with the max-flow with respect to $c$ is at most $|E|$. Maybe one can show it is bounded by $|F|$.
We can construct a widget for an all-or-nothing flow of capacity 4 from vertex s to t using the widget below. The stars (\*) indicate even flows. By recursively applying similar widgets one can emulate all-or-nothing flows of any small size, so we can reduce the all-or-nothing flow problem to it, which we know is NP-hard (<https://cs.stackexchange.com/a/114903/28999>). [![enter image description here](https://i.stack.imgur.com/6DCG4.png)](https://i.stack.imgur.com/6DCG4.png)
I am using the bootstrap algorithm to compute standard errors of the estimates from my normalmixEM output. I am not really sure whether they are reliable. My code is (data [here](http://www.sendspace.com/file/80tr5g)):

```
# load package
install.packages("mixtools")
library(mixtools)

B = 1000 # Number of bootstrap samples
mu1sample <- mu2sample <- sigma1sample <- sigma2sample <- lambdasample <- vector()

# Bootstrap
for(i in 1:B){
  print(i)
  subsample = sample(mydatatest,rep=T)
  normalmix <- normalmixEM(subsample, mu=c(-0.002294,0.002866), sigma=c(0.00836,0.02196),
                           lambda=c(0.6746903,(1-0.6746903)), k=2, fast=FALSE, maxit=10000,
                           epsilon = 1e-16, maxrestarts=1000)
  mu1sample[i] = normalmix$mu[1] # $
  mu2sample[i] = normalmix$mu[2] # $
  sigma1sample[i] = normalmix$sigma[1] # $
  sigma2sample[i] = normalmix$sigma[2] # $
  lambdasample[i] = normalmix$lambda[1] # $
}

# standard errors
sd(mu1sample)
sd(mu2sample)
sd(sigma1sample)
sd(sigma2sample)
sd(lambdasample)

# show distribution of the bootstrap samples
hist(mu1sample)
hist(mu2sample)
hist(sigma1sample)
hist(sigma2sample)
hist(lambdasample)
```

This gives the following pictures:

mu1 ![m1](https://i.stack.imgur.com/L0n8F.png)
mu2 ![m2](https://i.stack.imgur.com/RG2OA.png)
sigma1 ![s1](https://i.stack.imgur.com/UnUcm.png)
sigma2 ![s2](https://i.stack.imgur.com/iipWM.png)
lambda ![la](https://i.stack.imgur.com/kHYnM.png)

EDIT: If you look at my variable `mydatatest` and use a kernel density estimate to show the distribution with the following code

```
plot(density(mydatatest),col="red",main="",lwd=2,cex.axis=1.2,cex.lab=1.2)
```

it looks like ![tt](https://i.stack.imgur.com/kaAYE.png)

2nd EDIT: I have now fixed the mus and sigmas. I updated the code and the pictures. Now again my question: what do you think about it?
As explained by Nick Cox and an anonymous user, what you think of as instability is just what the mixture models do: they don't care about labels unless you make it very clear that you know what your modes look like, roughly. In terms of what you can do about fixing the labels where you need them to be, you would want to feed the full sample estimates of everything (both $\mu$s, both $\sigma$s, not just the $\lambda$ that you are feeding in now) as starting values. One can argue that this violates the spirit of maximum likelihood, but that may be the best you can do. If that does not really work, you may have to force even more information in, like insisting that $\mu\_1 < \mu\_2 - \delta$ and $\sigma\_1 < \sigma\_2 - \Delta$ and $\lambda > \frac12$. If `normalmixEM()` does not support that kind of cruelty to the parameter space, you would need to write your own likelihood with your own parameterization that accounts for such relations.
Suppose $X=[X\_{1},X\_{2}]$ and $X \sim N\_2(\mu,\Sigma)$. I wish to find the distribution of $5X\_{1}^2+2X\_{1}X\_{2}+X\_{2}^2$. Since this is a quadratic form, I do not know a way of solving this. However, I kind of feel like it's chi-squared distributed with parameter 2. Any answers will be welcomed. Thanks.
As Alecos and nivag pointed out, there is no closed form distribution. Only approximations are available. The latest such approximation is by Liu, Tang and Zhang: "A new chi-square approximation to the distribution of non-negative definite quadratic forms in non-central normal variables", CSDA 2008. See in the references there for other approximations.
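If you mainly need a feel for the distribution, or a numerical check on any of those approximations, simulation is straightforward. Here is a sketch with arbitrary illustrative values of $\mu$ and $\Sigma$, since the question does not fix them:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0])                  # arbitrary illustrative values
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])

X = rng.multivariate_normal(mu, Sigma, size=200_000)
Q = 5 * X[:, 0]**2 + 2 * X[:, 0] * X[:, 1] + X[:, 1]**2

# Empirical summaries of the quadratic form's distribution.
print(Q.mean(), Q.var())
print(np.percentile(Q, [5, 50, 95]))
```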
What is the right way to think of omitted variable bias in a regression that only has dummy variables? Let's say I have the following equation: (1) y=β0+β1x1+β2x2+β3x3+ϵ, where y is the price of shoes; and x1, x2 and x3 are dummy variables for three different regions (x4, the reference region was omitted). And I suspect that I am missing a variable (x4) to adjust for 'level of urbanization of each region' -- each region contains an uneven number of areas with different levels of urbanization. Thus, the true model is, or so I suspect: (2) y=β0+β1x1+β2x2+β3x3+δx4+ϵ Now I know I can sign the bias of any one of the coefficients in equation (1) if two conditions are met: a) x4 is correlated with either x1, x2 or x3; and b) x4 has an impact on y (i.e. δ>0). However, I am not sure if I can explore condition a) above since the correlation of x4, in this case, would be with a categorical variable, not a continuous one. How can I go about this?
Whether a variable is binary or continuous has no bearing on its potential for omitted variable bias (unless it's a poorly measured version of a truly continuous variable). If the "true" model uses the binary variables as they are, then the same rules that apply for continuous predictors and continuous omitted variables apply here. Regression models don't know whether predictors are continuous or binary; they treat them all as continuous. It is only our interpretation of them as binary that makes them any different from continuous variables. Again, things get hairy if the binary variable is a categorized version of a continuous variable (or has measurement error in it some other way).
What are good papers/books to better understand the power of Modular Decomposition and its properties? I'm particularly interested in algorithmic aspects of Modular Decomposition. I have heard that it is possible to find a Modular Decomposition of a graph in linear time. Is there a relatively simple algorithm for that? What about a not-so-efficient but simpler algorithm?
There is a recent survey Habib and Paul (2010). [A survey of the algorithmic aspects of modular decomposition.](http://dx.doi.org/10.1016/j.cosrev.2010.01.001) Computer Science Review 4(1): 41-59 (2010) that you should check out.
The scenario: Suppose there is a square shaped region with three cities in it. A dataset is collected by citizens, who collect data where they live. Thus, data is collected much more heavily in those three parts of the state. Traditional response I see in the literature: This uneven distribution creates a spatial bias. Overlay a grid across the state, and sample (say) 100 observations from each grid cell, assuming each cell has 100 datapoints in it. This way, each part of the state is evenly represented in the data. This may result in throwing out lots of data in some cells (the ones near or in cities), but that is okay and better than having the bias. Me: I do not understand why having unevenly distributed data across the state calls for this type of sampling. I see no reason this will create bias. Can someone please enlighten me? EDIT: I also have landcover data, including satellite data the tells if a given area is urban, grassland, forest, etc. Wouldn't this make sampling as described obsolete? Background on the model: I am modeling the abundance of a bird species as a function of land and observer effort covariates. Examples of covariates are: percentage of land type in a 2.5 kilometer by 2.5 kilometer region around the species observation (urban, forest, etc.), time species was observed, and time/distance traveled by observer that obtained the observation. Most raster cells don't have the bird (absence was recorded), so I'm using a zero-inflated Poisson model.
The basic problem is that individuals or households or whatever the observational units happen to be are not randomly distributed across space. Consider for example individuals. To some extent people choose where to live. However, they do so according to their preferences (if you are an economist), hence certain people live in certain places and the people of one area are therefore not representative of the population of interest (as @Whuber points out, this of course depends on what population you are interested in). Consider a simple regression model $$y\_{i} = x\_{i}^\top \beta + z\_{c(i)} \lambda + \epsilon\_{i},$$ where $z\_{c(i)}$ is the characteristic $z\_c$ of the area $c(i)$ that individual $i$ has chosen to live in. Hence $z\_{c(i)} := \sum\_c z\_c 1[i \in c]$ such that $$y\_{i} = x\_{i}^\top \beta + \sum\_c z\_c 1[i \in c] \lambda + \epsilon\_{i}.$$ If the unobserved characteristics of the individual $\epsilon\_i$ are relevant for where the individual chooses to live, then the indicator $1[i \in c]$ is presumably correlated with $\epsilon\_i$, hence $z\_{c(i)} = \sum\_c z\_c 1[i \in c]$ is correlated with $\epsilon\_i$ and the OLS estimator becomes inconsistent. You could think of $z\_c$ as the population density of the city and $y\_i$ as the individual worker's wage. High density triggers mechanisms of agglomeration resulting in higher wages (imagine some theory claiming that a causal effect of high density is higher productivity resulting in higher wages; perhaps something like: big cities have airports and other high-fixed-cost facilities that firms can share, making them more productive, and due to the high fixed cost such shared facilities do not appear in low-density areas). This effect of density is assumed homogeneous across US cities, hence $\lambda$ measuring this effect is not city specific. However, individuals with high unobserved skill (that is, people with high $\epsilon\_i$; after all, high $\epsilon\_i$ means a higher wage $y\_i$, so the skills we do not observe and assume affect $y\_i$ are in $\epsilon\_i$, hopefully intuitively) are starting to return to the centers of US metropolitan areas (high-density $z\_c$ areas). They choose to live in the high-density centers of metropolitan areas perhaps because they are attracted by the amenities. Nevertheless, this means that the observed $z\_{c(i)} = \sum\_c z\_c 1[i \in c]$ becomes positively correlated with $\epsilon\_i$. The problem is then that when you measure the effect of density on wage you erroneously include a compositional effect (the big-city labor force has a larger share of high-skill labor, a larger share of high $\epsilon\_i$), so part of the effect you attribute to density really appears because clever people to a greater extent prefer to live in the city rather than in the countryside (getting up early and milking the cows). The indicator is only there to make the choice explicit. What you want to do is assign individuals randomly to cities (but short of dictatorship this is hard). *You should not read too much into this example. It is just a stylized example to illustrate how a non-random distribution of individuals across space can result in bias.*
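Here is a compact simulation of exactly this story (all numbers invented for illustration): individuals with high $\epsilon\_i$ are more likely to choose the dense city, and the naive regression of wages on density then overstates $\lambda$, while random assignment recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
true_lambda = 0.5

eps = rng.normal(size=n)                       # unobserved skill
density = np.array([1.0, 3.0])                 # two "cities": sparse and dense

# Selection: the higher your eps, the more likely you choose the dense city.
p_dense = 1 / (1 + np.exp(-eps))
city = (rng.uniform(size=n) < p_dense).astype(int)

z = density[city]
y = true_lambda * z + eps                      # wages

# Naive OLS of y on z (with intercept): the density effect is biased upward.
Z = np.column_stack([np.ones(n), z])
beta = np.linalg.lstsq(Z, y, rcond=None)[0]
print(beta[1])          # noticeably larger than 0.5

# Random assignment removes the selection and hence the bias.
city_rand = rng.integers(0, 2, size=n)
z_r = density[city_rand]
y_r = true_lambda * z_r + eps
beta_r = np.linalg.lstsq(np.column_stack([np.ones(n), z_r]), y_r, rcond=None)[0]
print(beta_r[1])        # close to 0.5
```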
By <http://www.cs.umd.edu/~jkatz/complexity/relativization.pdf>: If $A$ is a PSPACE-complete language, then $P^{A}=NP^{A}$. If $B$ is a deterministic polynomial-time oracle, then $P^{B}\ne NP^{B}$ (assuming $P\ne NP$). $PP$ is the decision-problem analogue of $\#P$, and $P\subseteq PP\subseteq PSPACE$, but neither $P=PP$ nor $PP=PSPACE$ is known. But is it true that $coNP^{\#P}=NP^{\#P}=P^{\#P}$?
It has been an open problem in complexity theory for many years whether $\mathsf{PH}^{\mathsf{\#P}}$ collapses, where $\mathsf{PH}$ is the polynomial-time hierarchy. It is also an open problem to construct an oracle that separates $\mathsf{P}^{\mathsf{\#P}}$ from $\mathsf{PSPACE}$.
We have a DAG. We have a function on the nodes $F\colon V\to \mathbb N$ (loosely speaking, we number the nodes). We would like to create a new directed graph with these rules: 1. Only nodes with the same number can be contracted into the same new node. $F(x) \neq F(y) \Rightarrow x' \neq y'$. (However, $x' \neq y'\nRightarrow F(x) \neq F(y)$.) 2. We add all the old edges between new nodes: $(x,y) \in E \land x' \neq y' \iff (x',y')\in E'$. 3. This new graph is still a DAG. What is the minimal $|V'|$? What is an algorithm creating a minimal new graph?
One approach to solving this problem would be to use integer linear programming (ILP). Let's tackle the decision version of the problem: given $k$, is there a way to contract same-color vertices to get a DAG of size $\le k$? This can be expressed as an ILP instance using standard techniques. We're given the color of each vertex in the original graph. I suggest that we label each vertex with a label in $\{1,2,\dots,k\}$; all vertices with the same label and same color will be contracted. So, the decision problem becomes: does there exist a labelling, such that contracting all same-color same-label vertices yields a DAG? To express this as an integer linear program, introduce an integer variable $\ell\_v$ for each vertex $v$, to represent the label on vertex $v$. Add the inequality $1 \le \ell\_v \le k$. The next step is to express the requirement that the contracted graph must be a DAG. Notice that if there is a labelling of the form listed above, without loss of generality there exists such a labelling where the labels induce a topological sort on the contracted graph (i.e., if $v$ precedes $w$ in the contracted graph, then $v$'s label is smaller than $w$'s label). So, for each edge $v\to w$ in the original graph, we'll add the constraint that either $v$ and $w$ have the same label and same color, or else $v$'s label is smaller than $w$'s label. Specifically, for each edge $v\to w$ in the initial graph where $v,w$ have the same color, add the inequality $\ell\_v \le \ell\_w$. For each edge $v \to w$ where $v,w$ have different colors, add the inequality $\ell\_v < \ell\_w$. Now see if there is any feasible solution to this integer linear program. There will be a feasible solution if and only if the labelling is of the desired form (i.e., contracting all same-color same-label vertices yields a DAG). In other words, there will be a feasible solution if and only if there is a way to contract the original graph to a DAG of size $\le k$. We can use any integer linear programming solver; if the ILP solver gives us an answer, we have an answer to the original decision problem. Of course, this isn't guaranteed to complete in polynomial time. There are no guarantees. However, ILP solvers have gotten pretty good. I would expect that, for a reasonable-sized graph, you've got a decent chance that an ILP solver might be able to solve this problem in a reasonable amount of time. It's also possible to encode this as a SAT instance and use a SAT solver. I don't know whether that would be more effective. The ILP version is probably easier to think about, though. (I hope this is right. I haven't checked every detail carefully, so please double-check my reasoning! I hope I haven't gone awry somewhere.) --- Update (10/21): It looks like ILPs of this form can be solved in linear time, by processing the DAG in topologically sorted order and keeping track of the lower bound on the label for each vertex. This has me suspicious of my solution: have I made a mistake somewhere?
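Purely as an illustration of the encoding described above (and subject to the same caveat raised in the update), here is a minimal sketch using Python and the PuLP library; the graph, colors, and $k$ are made-up inputs, and the function name is mine. The strict inequality $\ell\_v < \ell\_w$ is written as $\ell\_v + 1 \le \ell\_w$ since the labels are integers.

```
import pulp

def has_labelling(edges, color, k):
    """Feasibility of the labelling above: labels in 1..k, with l_v <= l_w for
    same-color edges and l_v < l_w (i.e. l_v + 1 <= l_w) for different colors."""
    vertices = set(color)  # assumes every vertex appears in the color dict
    prob = pulp.LpProblem("dag_contraction", pulp.LpMinimize)
    label = {v: pulp.LpVariable(f"label_{v}", lowBound=1, upBound=k, cat="Integer")
             for v in vertices}
    prob += pulp.lpSum(label.values())   # any objective; we only need feasibility
    for v, w in edges:
        if color[v] == color[w]:
            prob += label[v] <= label[w]
        else:
            prob += label[v] + 1 <= label[w]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.LpStatus[prob.status] == "Optimal"

# Hypothetical example: a -> b -> c, where a and b share a color.
print(has_labelling([("a", "b"), ("b", "c")],
                    {"a": "red", "b": "red", "c": "blue"}, k=2))
```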
I clustered data according to latitude and longitude. I used k-means++ to try to improve the accuracy of k-means, but the results still do not change much. Here is my Davies-Bouldin (DB) index graph for k-means++ ![For Kmeans++](https://i.stack.imgur.com/2QD1Z.png) For k-means ![kmeans](https://i.stack.imgur.com/iNlnX.png) I could not decide what the optimal number of clusters should be. I also plotted the within-cluster sum of squares. For k-means++ ![kmeans++](https://i.stack.imgur.com/10Ojw.png) For k-means ![kmeans](https://i.stack.imgur.com/qHgik.png) What should I do? Please give me suggestions. Here are the boot and noise values from clusterboot ![boot & noise](https://i.stack.imgur.com/vpj5K.png)
Let me first explain what a [conjugate prior](http://en.wikipedia.org/wiki/Conjugate_prior) is. I will then explain the Bayesian analyses using your specific example. Bayesian statistics involve the following steps: 1. Define the *prior distribution* that incorporates your subjective beliefs about a parameter (in your example the parameter of interest is the proportion of left-handers). The prior can be "uninformative" or "informative" (but there is no prior that has no information, see the discussion [here](https://stats.stackexchange.com/questions/20520/what-is-an-uninformative-prior-can-we-ever-have-one-with-truly-no-information)). 2. Gather data. 3. Update your prior distribution with the data using Bayes' theorem to obtain a *posterior distribution.* The posterior distribution is a probability distribution that represents your updated beliefs about the parameter after having seen the data. 4. Analyze the posterior distribution and summarize it (mean, median, sd, quantiles, ...). The basis of all bayesian statistics is Bayes' theorem, which is $$ \mathrm{posterior} \propto \mathrm{prior} \times \mathrm{likelihood} $$ In your case, the likelihood is binomial. If the prior and the posterior distribution are in the *same family,* the prior and posterior are called *conjugate* distributions. The beta distribution is a conjugate prior because the posterior is also a beta distribution. We say that the beta distribution is the conjugate family for the binomial likelihood. Conjugate analyses are convenient but rarely occur in real-world problems. In most cases, the posterior distribution has to be found numerically via MCMC (using Stan, WinBUGS, OpenBUGS, JAGS, PyMC or some other program). If the prior probability distribution does not integrate to 1, it is called an *improper* prior, if it does integrate to 1 it is called a *proper* prior. In most cases, an improper prior does not pose a major problem for Bayesian analyses. The posterior distribution *must* be proper though, i.e. the posterior must integrate to 1. These rules of thumb follow directly from the nature of the Bayesian analysis procedure: * If the prior is uninformative, the posterior is very much determined by the data (the posterior is data-driven) * If the prior is informative, the posterior is a mixture of the prior and the data * The more informative the prior, the more data you need to "change" your beliefs, so to speak because the posterior is very much driven by the prior information * If you have a lot of data, the data will dominate the posterior distribution (they will overwhelm the prior) An excellent overview of some possible "informative" and "uninformative" priors for the beta distribution can be found in [this post](https://stats.stackexchange.com/questions/297901/choosing-between-uninformative-beta-priors/298176#298176). Say your prior beta is $\mathrm{Beta}(\pi\_{LH}| \alpha, \beta)$ where $\pi\_{LH}$ is the proportion of left-handers. To specify the prior parameters $\alpha$ and $\beta$, it is useful to know the mean and variance of the beta distribution (for example, if you want your prior to have a certain mean and variance). The mean is $\bar{\pi}\_{LH}=\alpha/(\alpha + \beta)$. Thus, whenever $\alpha =\beta$, the mean is $0.5$. The variance of the beta distribution is $\frac{\alpha\beta}{(\alpha + \beta)^{2}(\alpha + \beta + 1)}$. 
Now, the convenient thing is that you can think of $\alpha$ and $\beta$ as previously observed (pseudo-)data, namely $\alpha$ left-handers and $\beta$ right-handers out of a (pseudo-)sample of size $n\_{eq}=\alpha + \beta$. The $\mathrm{Beta}(\pi\_{LH} |\alpha=1, \beta=1)$ distribution is the uniform (all values of $\pi\_{LH}$ are equally probable) and is the equivalent of having observed two people out of which one is left-handed and one is right-handed.

The posterior beta distribution is simply $\mathrm{Beta}(z + \alpha, N - z +\beta)$ where $N$ is the size of the sample and $z$ is the number of left-handers in the sample. The posterior mean of $\pi\_{LH}$ is therefore $(z + \alpha)/(N + \alpha + \beta)$. So to find the parameters of the posterior beta distribution, we simply add $z$ left-handers to $\alpha$ and $N-z$ right-handers to $\beta$. The posterior variance is $\frac{(z+\alpha)(N-z+\beta)}{(N+\alpha+\beta)^{2}(N + \alpha + \beta + 1)}$. Note that a highly informative prior also leads to a smaller variance of the posterior distribution (the graphs below illustrate the point nicely).

In your case, $z=2$ and $N=18$, and your prior is the uniform, which is uninformative, so $\alpha = \beta = 1$. Your posterior distribution is therefore $\mathrm{Beta}(3, 17)$. The posterior mean is $\bar{\pi}\_{LH}=3/(3+17)=0.15$. Here is a graph that shows the prior, the likelihood of the data and the posterior

![The prior, the likelihood of the data and the posterior distribution with a uniform prior](https://i.stack.imgur.com/vi1e6.png)

You see that because your prior distribution is uninformative, your posterior distribution is entirely driven by the data. Also plotted is the highest density interval (HDI) for the posterior distribution. Imagine that you put your posterior distribution in a 2D-basin and start to fill in water until 95% of the distribution is above the waterline. The points where the waterline intersects with the posterior distribution constitute the 95%-HDI. Every point inside the HDI has a higher probability than any point outside it. Also, the HDI always includes the peak of the posterior distribution (i.e. the mode). The HDI is different from an equal-tailed 95% credible interval where 2.5% from each tail of the posterior are excluded (see [here](http://doingbayesiandataanalysis.blogspot.ch/2012/04/why-to-use-highest-density-intervals.html)).

For your second task, you're asked to incorporate the information that 5-20% of the population are left-handers. There are several ways of doing that. The easiest way is to say that the prior beta distribution should have a mean of $0.125$ which is the mean of $0.05$ and $0.2$. But how to choose $\alpha$ and $\beta$ of the prior beta distribution? First, you want the mean of your prior distribution to be $0.125$, based on a pseudo-sample of equivalent sample size $n\_{eq}$. More generally, if you want your prior to have a mean $m$ with a pseudo-sample size $n\_{eq}$, the corresponding $\alpha$ and $\beta$ values are: $\alpha = mn\_{eq}$ and $\beta = (1-m)n\_{eq}$. All you are left to do now is to choose the pseudo-sample size $n\_{eq}$ which determines how confident you are about your prior information. Let's say you are very sure about your prior information and set $n\_{eq}=1000$. The parameters of your prior distribution are therefore $\alpha = 0.125\cdot 1000 = 125$ and $\beta = (1 - 0.125)\cdot 1000 = 875$.
The posterior distribution is $\mathrm{Beta}(127, 891)$ with a mean of about $0.125$ which is practically the same as the prior mean of $0.125$. The prior information is dominating the posterior (see the following graph):

![The prior, the likelihood of the data and the posterior distribution with strong informative prior](https://i.stack.imgur.com/Dk2cJ.png)

If you are less sure about the prior information, you could set the $n\_{eq}$ of your pseudo-sample to, say, $10$, which yields $\alpha=1.25$ and $\beta=8.75$ for your prior beta distribution. The posterior distribution is $\mathrm{Beta}(3.25, 24.75)$ with a mean of about $0.116$. The posterior mean is now near the mean of your data ($0.111$) because the data overwhelm the prior. Here is the graph showing the situation:

![The prior, the likelihood of the data and the posterior distribution with beta prior corresponding to a pseudo-sample size of 3](https://i.stack.imgur.com/xnNr5.png)

A more advanced method of incorporating the prior information would be to say that the $0.025$ quantile of your prior beta distribution should be about $0.05$ and the $0.975$ quantile should be about $0.2$. This is equivalent to saying that you are 95% sure that the proportion of left-handers in the population lies between 5% and 20%. The function `beta.select` in the R package `LearnBayes` calculates the corresponding $\alpha$ and $\beta$ values of a beta distribution corresponding to such quantiles. The code is

```
library(LearnBayes)

quantile1=list(p=.025, x=0.05) # the 2.5% quantile should be 0.05
quantile2=list(p=.975, x=0.2)  # the 97.5% quantile should be 0.2
beta.select(quantile1, quantile2)

[1]  7.61 59.13
```

It seems that a beta distribution with parameters $\alpha = 7.61$ and $\beta=59.13$ has the desired properties. The prior mean is $7.61/(7.61 + 59.13)\approx 0.114$ which is near the mean of your data ($0.111$). Again, this prior distribution incorporates the information of a pseudo-sample of an equivalent sample size of about $n\_{eq}\approx 7.61+59.13 \approx 66.74$. The posterior distribution is $\mathrm{Beta}(9.61, 75.13)$ with a mean of $0.113$ which is comparable with the mean of the previous analysis using a highly informative $\mathrm{Beta}(125, 875)$ prior. Here is the corresponding graph:

![The prior, the likelihood of the data and the posterior distribution with prior that has 0.05 and 0.975 quantiles of 0.05 and 0.2](https://i.stack.imgur.com/AvnZk.png)

See also [this reference](http://patricklam.org/teaching/bayesianhour_print.pdf) for a short but imho good overview of Bayesian reasoning and simple analysis. A longer introduction to conjugate analyses, especially for binomial data, can be found [here](http://patricklam.org/teaching/conjugacy_print.pdf). A general introduction into Bayesian thinking can be found [here](http://patricklam.org/teaching/bayesian_print.pdf). More slides concerning aspects of Bayesian statistics are [here](http://patricklam.org/#teaching).
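The conjugate updates above are easy to reproduce numerically. Here is a small Python/scipy sketch (my own illustration; the answer's code uses R) that recomputes the posterior means for the three priors discussed, using an equal-tailed interval rather than the HDI:

```
from scipy import stats

z, N = 2, 18                    # 2 left-handers out of 18
priors = {"uniform": (1, 1), "n_eq = 1000": (125, 875), "quantile-based": (7.61, 59.13)}

for name, (a, b) in priors.items():
    post = stats.beta(a + z, b + N - z)      # conjugate update: Beta(alpha + z, beta + N - z)
    lo, hi = post.interval(0.95)             # equal-tailed 95% credible interval (not the HDI)
    print(f"{name:15s} posterior mean {post.mean():.3f}, 95% CrI ({lo:.3f}, {hi:.3f})")
```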
There are plenty of machine/assembly languages, such as LC-3, DLX, etc., designed for educational purposes. I am looking for an educational VM; by VM I mean a stack virtual machine whose instructions are higher-level than assembly language, something similar to the JVM but much simpler, so that implementing a compiler for this VM is a doable task for a single person in limited time, yet it is still practical as a compilation target for a high-level OOP language. I failed to google one; are there any?
[WebAssembly](https://webassembly.org) has a [formal specification](https://webassembly.github.io/spec/), a [reference interpreter](https://github.com/WebAssembly/spec) implemented in OCaml, and a bunch of [advanced tools](https://webassembly.org/getting-started/advanced-tools/). And it's the Real Thing. If you're looking for something educational and do not care about usability, you can have a look at my [Programming Languages Zoo](http://plzoo.andrej.com). Two languages have VMs and compilers: 1. The functional language [MiniML](http://plzoo.andrej.com/language/miniml.html) has a [virtual machine](https://github.com/andrejbauer/plzoo/blob/master/src/miniml/machine.ml) and a [compiler](https://github.com/andrejbauer/plzoo/blob/master/src/miniml/compile.ml). 2. The imperative language [comm](http://plzoo.andrej.com/language/comm.html) has a simple [stack VM](https://github.com/andrejbauer/plzoo/blob/master/src/comm/machine.ml) and a [compiler](https://github.com/andrejbauer/plzoo/blob/master/src/comm/compile.ml). If you run it with the `--code` command-line option it will show you the compiled code.
From this statement

> As there is no surjection from $\mathbb{N}$ onto $\mathcal{P}(\mathbb{N})$, there must exist an undecidable language.

I would like to understand why similar reasoning does not work with a *finite* set $B$, which also has no surjection onto $\mathcal{P}(B)$! (with $|B|=K$ and $K \in \mathbb{N}$) Why is an infinite set needed in the first place? **EDIT Note:** Although I chose an answer, many answers **and all comments** are important.
If your language $L'$ is finite, you can perform [table lookup](https://en.wikipedia.org/wiki/Lookup_table) on a hardcoded table containing all words in $L'$. This is awkward to write down as a Turing machine, but in other, equivalent models it is quite clear. In fact, finite automata are sufficient. Construct an automaton for $L'$ as follows:

1. For every $w \in L'$, create a linear chain of states that accepts $w$.
2. Create a new initial state $q\_0$.
3. Connect $q\_0$ to the initial states of all automata constructed in 1. with $\varepsilon$-transitions.

The automaton constructed this way obviously accepts $L'$. Therefore, $L'$ is regular and therewith computable (by $\mathrm{REG} \subsetneq \mathrm{RE}$). Note that the same reasoning applies for *co-finite* $L'$, that is $\big|\overline{L'}\big| < \infty$; you just hardcode the elements *not* in $L'$.
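In code the "hardcoded table" argument is one line, which is exactly the point. A sketch with a made-up finite language:

```
# A decider for a finite language is just a hardcoded lookup table.
# The set below is a made-up example language over {a, b}.
FINITE_LANGUAGE = {"ab", "aab", "abba"}

def decide(word: str) -> bool:
    """Always halts and answers membership, so the language is decidable."""
    return word in FINITE_LANGUAGE

# For a co-finite language, hardcode the (finite) complement instead.
NOT_IN_LANGUAGE = {"bad", "words"}

def decide_cofinite(word: str) -> bool:
    return word not in NOT_IN_LANGUAGE
```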
A very specific question, I'm aware, and I doubt it will be answered by anyone that isn't already familiar with the rules of Magic. Cross-posted to [Draw3Cards](http://draw3cards.com/questions/2851/is-magic-turing-complete). Here are the [comprehensive rules for the game Magic: the Gathering](http://www.wizards.com/magic/comprules/MagicCompRules_20101001.txt). See [this question](http://draw3cards.com/questions/2852/an-online-list-of-all-magic-cards) for a list of all Magic Cards. My question is - is the game Turing Complete? For more details, please see [the post at Draw3Cards](http://draw3cards.com/questions/2851/is-magic-turing-complete).
[Alex Churchill (@AlexC)](https://cstheory.stackexchange.com/users/1713/alexc) has posted a solution that does *not* require cooperation between the players, but rather models the complete execution of a universal Turing machine with two states and 18 tape symbols. For details, see <https://www.toothycat.net/~hologram/Turing/> [[archive](http://archive.is/xbCak)].
How can I test whether the mean (e.g., blood pressure) of a subgroup (e.g., those who died) differs from the whole group (e.g., everyone who had the disease including those that died)? Clearly, the first one is a subgroup of the second one. What hypothesis test should I use?
The way to test this is to compare those who had the disease and died to those who had the disease and did not die. You could apply the two-sample t-test, or the Wilcoxon rank-sum test if normality cannot be assumed.
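For reference, a quick Python/scipy sketch of both tests (the data here are simulated placeholders, not real measurements):

```
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bp_died = rng.normal(150, 20, size=40)       # placeholder blood-pressure values
bp_survived = rng.normal(140, 20, size=160)

# Two-sample t-test (Welch's version, not assuming equal variances)
print(stats.ttest_ind(bp_died, bp_survived, equal_var=False))

# Wilcoxon rank-sum test (equivalent to the Mann-Whitney U test) if normality is doubtful
print(stats.mannwhitneyu(bp_died, bp_survived, alternative="two-sided"))
```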
I've been reading a bit on boosting algorithms for classification tasks and Adaboost in particular. I understand that the purpose of Adaboost is to take several "weak learners" and, through a set of iterations on training data, push classifiers to learn to predict classes that the model(s) repeatedly make mistakes on. However, I was wondering why so many of the readings I've done have used decision trees as the weak classifier. Is there a particular reason for this? Are there certain classifiers that make particularly good or bad candidates for Adaboost?
I talked about this in an [answer to a related SO question](https://stackoverflow.com/a/20534713/2484687). Decision trees are just generally a very good fit for boosting, much more so than other algorithms. The bullet-point/summary version is this:

1. Decision trees are non-linear. Boosting with linear models simply doesn't work well.
2. The weak learner needs to be consistently better than random guessing. You don't normally need to do any parameter tuning to a decision tree to get that behavior. Training an SVM really does need a parameter search. Since the data is re-weighted on each iteration, you likely need to do another parameter search on each iteration. So you are increasing the amount of work you have to do by a large margin.
3. Decision trees are reasonably fast to train. Since we are going to be building 100s or 1000s of them, that's a good property. They are also fast to classify, which is again important when you need 100s or 1000s to run before you can output your decision.
4. By changing the depth you have simple and easy control over the bias/variance trade-off, knowing that boosting can reduce bias but also significantly reduces variance. Boosting is known to overfit, so the easy knob to tune is helpful in that regard.
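A minimal sketch of boosting decision stumps with scikit-learn, just to illustrate the depth knob (note: depending on your scikit-learn version the argument is `estimator` or the older `base_estimator`; the dataset is synthetic):

```
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Depth-1 trees ("stumps") are the classic weak learner; raising max_depth
# is the simple bias/variance knob mentioned in point 4 above.
stump = DecisionTreeClassifier(max_depth=1)
model = AdaBoostClassifier(estimator=stump, n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())
```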
Specifically, I'm looking for references (papers, books) which will rigorously show and explain the curse of dimensionality. This question arose after I began reading this [white paper](http://www3.stat.sinica.edu.tw/statistica/J16N2/editorial3.pdf) by Lafferty and Wasserman. In the third paragraph they mention a "well known" equation which implies that the best rate of convergence is $n^{-4/(4+d)}$; if anyone can expound upon that (and explain it), that would be very helpful. Also, can anyone point me to a reference which derives the "well known" equation?
Following up on richiemorrisroe, here is the relevant image from the [Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/), chapter 2 (pp22-27): ![ESL page 25](https://i.stack.imgur.com/p81s3.png) As you can see in the upper right pane, there are more neighbors 1 unit away in 1 dimension than there are neighbors 1 unit away in 2 dimensions. 3 dimensions would be even worse!
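A quick way to see the effect yourself is to simulate nearest-neighbour distances in the unit cube as the dimension grows (a small illustrative sketch, not taken from the book):

```
import numpy as np

rng = np.random.default_rng(0)
n = 1000
for d in (1, 2, 10, 100):
    X = rng.uniform(size=(n, d))   # n points in the d-dimensional unit cube
    q = rng.uniform(size=d)        # a random query point
    dists = np.linalg.norm(X - q, axis=1)
    # The distance to the nearest neighbour grows sharply with d.
    print(f"d={d:3d}: nearest-neighbour distance = {dists.min():.3f}")
```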
I have five millions of objects each of them having one or more tags. How do I compute statistically sound similarity score between each pair of the objects taking into account that: 1. There are 100 millions of tags most of them used only once. 2. Some tags are used very often, say one tag may be used on 1% of whole dataset. 3. Some objects are heavily tagged with hundreds of thousands tags on them while others may be tagged only a couple of times. Second question: how do I cluster objects taking into account 1-3? My guess is that k-means and other popular clustering techniques won't do much here. I've tried k-means already with simple distance defined as number of similar tags on the objects and clusters are so vague to the point being almost meaningless.
Consider the following setup. We have a $p$-dimensional parameter vector $\theta$ that specifies the model completely and a maximum-likelihood estimator $\hat{\theta}$. The Fisher information in $\theta$ is denoted $I(\theta)$. What is usually referred to as the *Wald statistic* is $$(\hat{\theta} - \theta)^T I(\hat{\theta}) (\hat{\theta} - \theta)$$ where $I(\hat{\theta})$ is the Fisher information evaluated at the maximum-likelihood estimator. Under regularity conditions the Wald statistic asymptotically follows a $\chi^2$-distribution with $p$ degrees of freedom when $\theta$ is the true parameter. The Wald statistic can be used to test a simple hypothesis $H\_0 : \theta = \theta\_0$ on the entire parameter vector. With $\Sigma(\theta) = I(\theta)^{-1}$ the inverse Fisher information, the Wald test statistic of the hypothesis $H\_0 : \theta\_1 = \theta\_{0,1}$ is $$\frac{(\hat{\theta}\_1 - \theta\_{0,1})^2}{\Sigma(\hat{\theta})\_{11}}.$$ Its asymptotic distribution is a $\chi^2$-distribution with 1 degree of freedom. For the normal model where $\theta = (\mu, \sigma^2)$ is the vector of the mean and the variance parameters, the Wald test statistic for testing whether $\mu = \mu\_0$ is $$\frac{n(\hat{\mu} - \mu\_0)^2}{\hat{\sigma}^2}$$ with $n$ the sample size. Here $\hat{\sigma}^2$ is the maximum-likelihood estimator of $\sigma^2$ (where you divide by $n$). The $t$-test statistic is $$\frac{\sqrt{n}(\hat{\mu} - \mu\_0)}{s}$$ where $s^2$ is the unbiased estimator of the variance (where you divide by $n-1$). The Wald test statistic is almost but not exactly equal to the square of the $t$-test statistic, but they are asymptotically equivalent when $n \to \infty$. The squared $t$-test statistic has an exact $F(1, n-1)$-distribution, which converges to the $\chi^2$-distribution with 1 degree of freedom as $n \to \infty$. The same story holds regarding the $F$-test in one-way ANOVA.
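A short numerical check of this near-equivalence, using simulated data (my own illustrative Python sketch):

```
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=0.3, scale=1.0, size=30)
mu0, n = 0.0, x.size

mu_hat = x.mean()
sigma2_hat = x.var(ddof=0)   # ML estimate of the variance (divide by n)
s2 = x.var(ddof=1)           # unbiased estimate (divide by n - 1)

wald = n * (mu_hat - mu0) ** 2 / sigma2_hat
t = np.sqrt(n) * (mu_hat - mu0) / np.sqrt(s2)
print(wald, t ** 2)          # close but not identical: wald = t**2 * n / (n - 1)
print(stats.ttest_1samp(x, mu0).statistic ** 2)   # matches t ** 2
```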
Suppose you are given a $95 \%$ CI $(1,6)$ based on the normal distribution. Is there any easy way to find $\mu$ and $\sigma$? What if it came from a gamma distribution? Can we do this in R?
**Please read the erratum at the end of the answer.**

First note that there is not enough information to solve this problem. In both cases, the sample size $n$ is missing. In the case of the Gaussian distribution, assuming you know $n$, you can easily do it by following @Michael Chernick's instructions. In R that would give something like this (with $n=43$ for the sake of the example).

```
n <- 43
ci <- c(1,6)
# Take the middle of the CI to get x_bar (3.5).
x_bar <- mean(ci)
# Use 1 = x_bar - 1.96 * sd/sqrt(n), i.e. sd = sqrt(n) * (x_bar - ci[1]) / 1.96.
S2 <- n * (x_bar - ci[1])^2 / 1.96^2
```

For the case of the Gamma distribution, things are a bit more complicated because it is not symmetric, so the mean is not in the center of the CI. For example, say you sample from a Gamma population $\Gamma(\alpha,1)$ where $\alpha$ is unknown. The sample mean is the sum of $n$ variables distributed as $\Gamma(\alpha,1)$ divided by $n$, so it is a variable distributed as $\Gamma(n\alpha,1/n)$. Say that we observed a mean of $1.7$ for a sample size of $n=5$. There are several CIs that contain this value, as we can check.

```
> qgamma(.975, shape=1.7*5, scale=1/5)
[1] 3.019101
> qgamma(.975, shape=1.7*5, scale=1/5, lower.tail=F)
[1] 0.7564186
```

A 95% CI for $\alpha$ is $(.756, 3.019)$, the middle of which is $1.89$, not $1.70$. In short, finding the $\alpha$ and $\theta$ that produce a 95% CI is possible because the solution is unique, but it is a hack. Fortunately, as $n$ increases, the distribution becomes more and more Gaussian and symmetric, so the CI will be symmetric around the mean. The mean and variance of a $\Gamma(n\alpha,\theta/n)$ are $\alpha\theta$ and $\alpha\theta^2/n$, so you can use the results of the Gaussian case and solve this very simple equation to get $\alpha$ and $\theta$.

**Erratum:** Following @whuber's comment I realized that the proposed way to get a confidence interval for $\alpha$ is not good. The example given above was meant to demonstrate that getting CIs with Gamma variables is much more tedious than with Gaussian variables. My mistake proves the point even better. At @whuber's prompt I will show that the CI I proposed is incorrect.

```
set.seed(123)
# Simulate 100,000 means of 5 Gaussian(0,1) variables (positive control).
means <- rnorm(100000, sd=1/sqrt(5))
upper <- means + qnorm(.975)/sqrt(5)
lower <- means - qnorm(.975)/sqrt(5)
mean((upper > 0) & (lower < 0))
[1] 0.95007 # OK.
# Simulate 100,000 means of 5 Gamma(1,1) variables.
means <- rgamma(100000, shape=5, scale=1/5)
upper <- qgamma(.975, shape=5*means, scale=1/5)
lower <- qgamma(.975, shape=5*means, scale=1/5, lower.tail=FALSE)
mean((upper > 1) & (lower < 1))
[1] 0.94666 # Almost, but not quite.
```
Loosely speaking, permutation pattern matching deals with problems of the following kind: > > Given permutations $\pi$ in $S\_n$ and $\sigma$ in $S\_m$, with $m\leq n$, does $\pi$ contain a [subsequence](http://en.wikipedia.org/wiki/Subsequence) $\tau$ of length $m$ whose elements are ordered according to $\sigma$? > > > For example, if $\pi=\langle 3\ 1\ 5\ 4\ 2\ 8\ 6\ 7\rangle$ and $\sigma=\langle 2\ 1\ 3\rangle$, then the subsequence $3\ 1\ 4$ matches $\sigma$. As you can see, we're not looking here for an exact match, but rather for something that "looks like" the specified pattern. Does anyone know whether work has been conducted on extending permutation pattern matching problems to strings? Google unfortunately did not help, since the well-known pattern matching problem on strings has nothing to do with this.
I finally managed to dig out [a nice survey by Kitaev and Mansour](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.87.9739&rep=rep1&type=pdf), which gives pointers to the literature related to permutation pattern matching on "usual"/signed/coloured permutations and words.
Assume I have a dataset for a supervised statistical classification task, e.g., via a Bayes' classifier. This dataset consists of 20 features and I want to boil it down to 2 features via dimensionality reduction techniques such as Principal Component Analysis (PCA) and/or Linear Discriminant Analysis (LDA). Both techniques project the data onto a smaller feature subspace: with PCA, I would find the directions (components) that maximize the variance in the dataset (without considering the class labels), and with LDA I would have the components that maximize the between-class separation. Now, I am wondering if, how, and why these techniques can be combined and whether it makes sense. For example:

1. transforming the dataset via PCA and projecting it onto a new 2D subspace
2. transforming (the already PCA-transformed) dataset via LDA for maximum class separation

or

1. skipping the PCA step and using the top 2 components from an LDA.

or any other combination that makes sense.
**Summary: PCA can be performed before LDA to regularize the problem and avoid over-fitting.**

Recall that LDA projections are computed via eigendecomposition of $\boldsymbol \Sigma\_W^{-1} \boldsymbol \Sigma\_B$, where $\boldsymbol \Sigma\_W$ and $\boldsymbol \Sigma\_B$ are within- and between-class covariance matrices. If there are fewer than $N$ data points (where $N$ is the dimensionality of your space, i.e. the number of features/variables), then $\boldsymbol \Sigma\_W$ will be singular and therefore cannot be inverted. In this case there is simply no way to perform LDA directly, but if one applies PCA first, it will work. @Aaron made this remark in the comments to his reply, and I agree with that (but disagree with his answer in general, as you will see now).

However, this is only part of the problem. The bigger picture is that LDA very easily tends to overfit the data. Note that the within-class covariance matrix *gets inverted* in the LDA computations; for high-dimensional matrices inversion is a really sensitive operation that can only be done reliably if the estimate of $\boldsymbol \Sigma\_W$ is really good. But in high dimensions $N \gg 1$, it is really difficult to obtain a precise estimate of $\boldsymbol \Sigma\_W$, and in practice one often has to have *a lot* more than $N$ data points to start hoping that the estimate is good. Otherwise $\boldsymbol \Sigma\_W$ will be almost-singular (i.e. some of the eigenvalues will be very low), and this will cause over-fitting, i.e. near-perfect class separation on the training data with chance performance on the test data.

To tackle this issue, one needs to *regularize* the problem. One way to do it is to use PCA to reduce dimensionality first. There are other, arguably better, ways, e.g. the *regularized LDA* (rLDA) method, which simply uses $(1-\lambda)\boldsymbol \Sigma\_W + \lambda \boldsymbol I$ with small $\lambda$ instead of $\boldsymbol \Sigma\_W$ (this is called a *shrinkage estimator*), but doing PCA first is conceptually the simplest approach and often works just fine.

Illustration
------------

Here is an illustration of the over-fitting problem. I generated 60 samples per class in 3 classes from a standard Gaussian distribution (mean zero, unit variance) in 10-, 50-, 100-, and 150-dimensional spaces, and applied LDA to project the data on 2D:

![Overfitting in LDA](https://i.stack.imgur.com/r3lC0.png)

Note how as the dimensionality grows, classes become better and better separated, whereas in reality there is *no difference* between the classes.

We can see how PCA helps to prevent the overfitting if we make the classes slightly separated. I added 1 to the first coordinate of the first class, 2 to the first coordinate of the second class, and 3 to the first coordinate of the third class. Now they are slightly separated, see the top left subplot:

![Overfitting in LDA and regularization with PCA](https://i.stack.imgur.com/Gv4n7.png)

Overfitting (top row) is still obvious. But if I pre-process the data with PCA, always keeping 10 dimensions (bottom row), overfitting disappears while the classes remain near-optimally separated.

PS. To prevent misunderstandings: I am not claiming that PCA+LDA is a good regularization strategy (on the contrary, I would advise using rLDA), I am simply demonstrating that it is a *possible* strategy.
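In practice the PCA-then-LDA pre-processing is easy to set up as a pipeline. Here is a minimal scikit-learn sketch (my own illustration, not the code used for the figures above; the synthetic dataset is just a stand-in for the high-dimensional regime discussed):

```
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# High-dimensional data with relatively few samples, the regime discussed above.
X, y = make_classification(n_samples=180, n_features=150, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

plain_lda = LinearDiscriminantAnalysis()
pca_lda = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())

print("LDA alone :", cross_val_score(plain_lda, X, y, cv=5).mean())
print("PCA + LDA :", cross_val_score(pca_lda, X, y, cv=5).mean())

# scikit-learn's own shrinkage-based regularization (an rLDA-style alternative):
rlda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
```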
--- **Update.** Very similar topic has been previously discussed in the following threads with interesting and comprehensive answers provided by @cbeleites: * [Should PCA be performed before I do classification?](https://stats.stackexchange.com/questions/54547) * [Does it make sense to run LDA on several principal components and not on all variables?](https://stats.stackexchange.com/questions/62714) See also this question with some good answers: * [What can cause PCA to worsen results of a classifier?](https://stats.stackexchange.com/questions/52773)
[Here](https://stats.stackexchange.com/questions/88980/why-on-average-does-each-bootstrap-sample-contain-roughly-two-thirds-of-observat) @gung makes reference to the .632+ rule. A quick Google search doesn't yield an easy to understand answer as to what this rule means and for what purpose it is used. Would someone please elucidate the .632+ rule?
In my experience, primarily based on simulations, the 0.632 and 0.632+ bootstrap variants were needed only because of severe problems caused by the use of an improper accuracy scoring rule, namely the proportion "classified" correctly. When you use proper (e.g., deviance-based or Brier score) or semi-proper (e.g., $c$-index = AUROC) scoring rules, the standard Efron-Gong optimism bootstrap works just fine.
I have two different measuring instruments, A and B; both measure the same physical quantity but with different units of measure: $u\_A$ and $u\_B$. A is a reference instrument. I measured a reference part $L$ $n$ times with A and I get the $n$ values $L\_{Ai}$ ($i=1 \dots n$) expressed in terms of the unit of measure $u\_A$. Then I measure the same reference part, $L$, $m$ times with B and I get the $m$ values $L\_{Bj}$ ($j=1 \dots m$) expressed in terms of the unit of measure $u\_B$. In the future I will make my measurements with B, but I will be interested in the measure expressed in terms of the unit of measure $u\_A$. I assume I can convert $u\_B$ into $u\_A$ by means of just one multiplicative conversion factor $k$. Now, I have three questions:

1. Is it possible to assess the validity of the above assumption starting from the values $L\_{Ai}$ and $L\_{Bj}$?
2. If the assumption is valid, how can I compute the conversion factor $k$ to convert the measure from $u\_B$ to $u\_A$, i.e. $L\_A=k L\_B$?
3. How do I handle the case where I have more than one part, i.e. $L\_1$, $L\_2$, etc.?

My first attempt is to assume the assumption is valid and then compute $k$ as $k=\frac{m\sum\_{i=1}^n L\_{Ai}}{n\sum\_{j=1}^m L\_{Bj}}$, but this is based more on "common sense" than on proper statistical grounds. Can you give me some hints about the part of statistics that covers this kind of problem? Maybe linear regression?
If you have several measurements of the *same* quantity in the two units, there is, in general, no way to estimate the transformation from one unit to the other. However, if you *knew* that there is a multiplicative relationship between the two, *and* that the noise in the two sets of measurements is zero-mean normal (with equal variances, or different but known variances), then you can estimate the multiplicative factor $k$ by maximum likelihood. If you make the above assumptions you can proceed as follows. Let $X\_B$ be the actual value of the quantity you repeatedly measure, in units of $B$. Then $L\_{Ai} = k X\_B + e\_i$, $i = 1, \dots, n$, and $L\_{Bj} = X\_B + f\_j$, $j = 1, \dots, m$, where $e\_i$ and $f\_j$ are i.i.d. normal random variables with mean 0 and variance $\sigma^2$. You can write the log-likelihood of the data as $$ L(\text{data}; k, X\_B) = \text{const} - \frac{1}{2\sigma^2}\sum\_i (L\_{Ai} - k X\_B)^2 - \frac{1}{2\sigma^2}\sum\_j (L\_{Bj} - X\_B)^2 $$ You should be able to maximize this quantity in terms of $k$ and $X\_B$ to obtain your transformation (and an estimate of the quantity). In fact, if you go through the algebra of setting the partial derivatives of the log-likelihood function with respect to $k$ and $X\_B$ to zero, you should get the expression for $k$ you have in your question: $X\_B = \frac{\sum\_j L\_{Bj}}{m}$ and $k = \frac{ m \sum\_i L\_{Ai}}{n \sum\_j L\_{Bj}}$
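Under these assumptions the estimates are closed-form; a tiny Python sketch (function and variable names are mine, the numbers are hypothetical measurements):

```
import numpy as np

def estimate_conversion(L_A, L_B):
    """ML estimates of X_B and k under the model L_Ai = k*X_B + e_i, L_Bj = X_B + f_j."""
    L_A, L_B = np.asarray(L_A, dtype=float), np.asarray(L_B, dtype=float)
    X_B = L_B.mean()             # sum_j L_Bj / m
    k = L_A.mean() / X_B         # equals (m * sum_i L_Ai) / (n * sum_j L_Bj)
    return k, X_B

# Hypothetical measurements of the same reference part in the two units
k_hat, X_B_hat = estimate_conversion([25.1, 24.8, 25.0], [9.9, 10.1, 10.0, 9.8])
print(k_hat, X_B_hat)
```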
Hi, I'm new to data science (learning it from Coursera). I have a pandas DataFrame as follows,

```
  time  value
A    9      5
A    8      4
A    7      3
B    9      3
B    8      2
B    7      1
C    9      3
C    8      2
C    7      1
```

I want to convert this to,

```
      A  B  C
9     5  3  3
8     4  2  2
7     3  1  1
```

As I start to write a query for this, it gets complicated. Is there an easy way to do this? Thanks for the help.
For me, when it comes to reshaping a DataFrame (switching columns/indices/rows and such), it's fairly intuitive to use the [pivot\_table](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html) function.

```
my_df.pivot_table(index='time', columns=my_df.index, values='value')
```
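For completeness, here is a self-contained version with the example data from the question, assuming the letters A/B/C are the DataFrame's index (the output layout may differ slightly, e.g. float dtype):

```
import pandas as pd

df = pd.DataFrame(
    {"time": [9, 8, 7, 9, 8, 7, 9, 8, 7],
     "value": [5, 4, 3, 3, 2, 1, 3, 2, 1]},
    index=list("AAABBBCCC"),
)

# Rows indexed by time, one column per original index label (A, B, C).
wide = df.pivot_table(index="time", columns=df.index, values="value")
print(wide.sort_index(ascending=False))
```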