I have an XML file with this structure (not exactly a tree, though) ``` <posthistory> <row Id="1" PostHistoryTypeId="2" PostId="1" RevisionGUID="689cb04a-8d2a-4fcb-b125-bce8b7012b88" CreationDate="2015-01-27T20:09:32.720" UserId="4" Text="I just got a pound of microroasted, local coffee and am curious what the optimal way to store it is (what temperature, humidity, etc)" /> ``` I am using Apache Pig to extract just the "Text" part using this code ``` grunt> A = load 'hdfs:///parsingdemo/PostHistory.xml' using org.apache.pig.piggybank.storage.XMLLoader('posthistory') as(x:chararray); grunt> result = foreach A generate XPath(x, 'posthistory/Text'); ``` This returns "()" (null). Upon examining the XML file, I learned that my XML file should be in this format: ``` <root> <child> <subchild>.....</subchild> </child> </root> ``` But my XML data file (the Stack Overflow data dump, actually) is not in this format. Is there a way the tree structure can be imposed? What is wrong with my Pig query?
This XPath will look for a *tag* called `<Text>` inside a tag called `<posthistory>`: ``` XPath(x, 'posthistory/Text'); ``` You want to find the `Text` attribute of the `row` tag in `posthistory` tags. An XPath something like this will do that: `/posthistory/row/@Text` See the example here: <http://www.xpathtester.com/xpath/bac9874ec344f9d8ebcfb250633aaf65> and click "Test" to see the result set. Read up on XPath notation for more.
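To sanity-check the XPath expression outside of Pig, here is a minimal Python sketch using `lxml` (my own addition, not part of the original answer; the sample row is abbreviated from the question):

```python
from lxml import etree

# One <posthistory> fragment like the ones produced by XMLLoader('posthistory')
xml = '''<posthistory>
  <row Id="1" PostHistoryTypeId="2" PostId="1" Text="I just got a pound of microroasted, local coffee" />
</posthistory>'''

root = etree.fromstring(xml)

# 'posthistory/Text' matches nothing: there is no <Text> element anywhere.
print(root.xpath('/posthistory/Text'))       # []

# '@Text' selects the attribute of the <row> element instead.
print(root.xpath('/posthistory/row/@Text'))  # ['I just got a pound of microroasted, local coffee']
```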
This is a soft question. I don't know a lot about cryptography or its history, but it seems like a common use for RSA is to do key exchange by encrypting a symmetric key to send a longer message (e.g., the description of iMessage [here](http://blog.cryptographyengineering.com/2016/03/attack-of-week-apple-imessage.html)). Isn't this exactly the thing that Diffie-Hellman key exchange, which is older (and to me seems simpler), is for? Looking at Wikipedia, they were also both patented, so this wouldn't have been responsible for the choice. To be clear, I'm not asking whether it's theoretically important that public key cryptography is possible. I'm asking why it became a standard method in practice for doing key exchange. (To a non-cryptographer, DH looks easier to implement, and also isn't tied to the details of the group used.)
There is no strong technical reason. We could have used Diffie-Hellman (with appropriate signatures) just as well as RSA. So why RSA? As far as I can tell, non-technical historical reasons dominated. RSA was patented and there was a company behind it, marketing and advocating for RSA. Also, there were good libraries, and RSA was easy to understand and familiar to developers. For these reasons, RSA was chosen, and once it was the popular choice, it stayed that way due to inertia. These days, the main driver behind the increased usage of Diffie-Hellman is the desire for perfect forward secrecy, something that is easy to achieve by using Diffie-Hellman but is slower with RSA. Incidentally: It's Diffie-Hellman key exchange, not Diffie-Hellman secret sharing. Secret sharing is something else entirely.
Under what circumstances should the data be normalized/standardized when building a regression model? When I asked a stats major this question, he gave me an ambiguous answer: "depends on the data." But what does that really mean? It should either be a universal rule or a checklist of sorts where, if certain conditions are met, the data either should or shouldn't be normalized.
Sometimes standardization helps for numerical issues (not so much these days with modern numerical linear algebra routines) or for interpretation, as mentioned in the other answer. Here is one "rule" that I will use for answering the question myself: Is the regression method you are using **invariant**, in the sense that the substantive answer does not change with standardization? Ordinary least squares is invariant, while methods such as lasso or ridge regression are not. So, for invariant methods there is no real need for standardization, while for non-invariant methods you should probably standardize (or at least think it through). The following is somewhat related: [Dropping one of the columns when using one-hot encoding](https://stats.stackexchange.com/questions/231285/dropping-one-of-the-columns-when-using-one-hot-encoding/329281#329281)
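As a quick illustration of the invariance point, here is a small sketch with made-up data (my own, using scikit-learn; the feature scales and penalty value are arbitrary assumptions): OLS fitted values do not change under standardization, while ridge predictions do.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * [1.0, 10.0, 100.0]   # features on very different scales
y = X @ [2.0, 0.3, 0.01] + rng.normal(size=100)

Xs = StandardScaler().fit_transform(X)

# OLS: fitted values are identical with or without standardization (invariant).
ols_raw = LinearRegression().fit(X, y).predict(X)
ols_std = LinearRegression().fit(Xs, y).predict(Xs)
print(np.allclose(ols_raw, ols_std))      # True

# Ridge: the penalty depends on the coefficient scale, so predictions differ.
ridge_raw = Ridge(alpha=10.0).fit(X, y).predict(X)
ridge_std = Ridge(alpha=10.0).fit(Xs, y).predict(Xs)
print(np.allclose(ridge_raw, ridge_std))  # False
```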
I know of normality tests, but how do I test for "Poisson-ness"? I have a sample of ~1000 non-negative integers, which I suspect was drawn from a Poisson distribution, and I would like to test that.
For a Poisson distribution, the mean equals the variance. If your sample mean is very different from your sample variance, you probably don't have Poisson data. The dispersion test also mentioned here is a formalization of that notion. If your variance is much larger than your mean, as is commonly the case, you might want to try a negative binomial distribution next.
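A minimal sketch of this check (my own illustration; the simulated counts stand in for your ~1000 observations): compare the sample mean and variance and, if you like, formalize it with the classical chi-square dispersion test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.poisson(lam=3.2, size=1000)        # replace with your observed counts

mean, var = x.mean(), x.var(ddof=1)
print(mean, var, var / mean)               # dispersion index should be near 1 for Poisson data

# Classical dispersion test: (n - 1) * s^2 / xbar ~ chi^2_{n-1} under the Poisson null.
n = len(x)
stat = (n - 1) * var / mean
p = 2 * min(stats.chi2.cdf(stat, df=n - 1), stats.chi2.sf(stat, df=n - 1))
print("two-sided p-value:", p)
```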
I have an $n\times p$ matrix, where $p$ is the number of genes and $n$ is the number of patients. Anyone who's worked with such data knows that $p$ is always larger than $n$. Using feature selection I have gotten $p$ down to a more reasonable number, however $p$ is still greater than $n$. I would like to compute the similarity of the patients based on their genetic profiles; I could use the Euclidean distance, however Mahalanobis seems more appropriate as it accounts for the correlation among the variables. The problem (as noted in this [post](http://r.789695.n4.nabble.com/Mahalanobis-Distance-td3844960.html)) is that the Mahalanobis distance, specifically the covariance matrix, doesn't work when $n < p$. When I run Mahalanobis distance in R, the error I get is: ``` Error in solve.default(cov, ...) : system is computationally singular: reciprocal condition number = 2.81408e-21 ``` So far, to try to solve this, I've used PCA and instead of using genes I use components, and this seems to allow me to compute the Mahalanobis distance; 5 components represent about 80% of the variance, so now $n > p$. **My questions are:** Can I use PCA to meaningfully get the Mahalanobis distance between patients, or is it inappropriate? Are there alternative distance metrics that work when $n < p$ and there is also much correlation among the $p$ variables?
Take a look at the following paper: Zuber, V., Silva, A. P. D., & Strimmer, K. (2012). [A novel algorithm for simultaneous SNP selection in high-dimensional genome-wide association studies](http://arxiv.org/abs/1203.3082). *BMC Bioinformatics*, **13**(1), 284. It deals exactly with your problem. The authors propose a new variable-importance measure; besides that, they earlier introduced a penalized estimation method for the correlation matrix of explanatory variables, which fits your problem. They also use the Mahalanobis distance for decorrelation! The methods are included in the R package 'care', [available on CRAN](https://cran.r-project.org/web/packages/care/index.html).
Now I'm from a mathematical background, and I found CS people's definition of average time complexity a bit... confusing, to say the least. Here is a definition that I feel comfortable with: Consider a finite set $A$ of elements, with each $a\in A$ representing an individual input case. There is a function $T\colon A\to\mathbb{N}$, i.e., the *running time* for individual cases. Now we can define the average running time of the input set to be simply $$\overline{T}(A)=\frac{\sum\limits\_{a\in A}T(a)}{\#A},$$ where $\#A$ denotes the *number of elements* in $A$. For example, in QuickSort, we let $A\_n=\{\text{array of $n$ unsorted integers}\}=\mathbb{Z}^n$. But now we have to do an additional step. An integer can take on an *infinite* set of values, so we naturally consider the memory constraint and instead confine each integer $i$ to be $L\le i\le U$. Now $\#(A\_n)=(U-L+1)^n$, and we have a clearly defined $T(\cdot)$, and we can try to figure out $\overline{T}(A\_n)$, although this is a *very* tough combinatorics problem. We can also consider $i$ to be a bounded real number, with the modification $\#(A\_n)=\mu\_{\text{Le}}(A\_n)=(U-L)^n$, and $$ \overline{T}(A\_n)=\frac{1}{(U-L)^n}\int\limits\_{a\in A\_n}T(a)\,\mathrm{d}\mu\_{\text{Le}}. $$ What CS people do instead is state $T(a) \le T(\text{head})+T(\text{tail})+cn$ for some $c$, then simply average $T(\text{head})$ and $T(\text{tail})$ over varying head or tail lengths. This implicitly states that the varying head (or tail) lengths are somehow "equally likely", without even considering the constraint that makes $A\_n$ finite. This is like saying you can pick an odd number from the set of all integers at "$50\%$ probability" without even bothering to define what this "probability" means! So how is this average time complexity rigorously defined over an infinite, countable number of cases? If average time complexity depends on a set of rules for translating a clearly defined recursion into what is essentially an intuitive ad-hoc definition each time, how can we define average time complexity for arbitrary code?
Here is the definition of average-case time complexity of an algorithm: > > Let $T(x)$ be the running time of some algorithm $A$ on input $x$. For every $n$, let $\mu\_n$ be a distribution on inputs of length $n$. The *average-case* or *expected time complexity* of $A$ on inputs of length $n$ is $$T\_{\mathit{avg}}(n) := \mathbb{E}\_{x \sim \mu\_n} T(x). $$ > > > As you can see, in order to talk about average-case complexity, *you have to specify a distribution*. In the definition above I have alluded to the common case in which the complexity is parameterized by input length, but we could also have more parameters, or no parameters at all. For comparison-based sorting algorithms, we usually consider the following distribution $\mu\_n$: the uniform distribution on all $n!$ permutations of $1,\ldots,n$. However, we would obtain *exactly* the same notion of average-case complexity (for comparison-based algorithms) if instead we pick *any* atom-less distribution $\mu$, and define $\mu\_n$ to consist of $n$ iid copies of $\mu$. Unfortunately, no distribution on the integers is atom-less. This creates a problem, since if we generate $n$ iid copies of a distribution $\mu$ with atoms, then there is positive probability that the generated elements are not distinct. While comparison-based sorting algorithms can certainly handle this case, the situation becomes much less clean since the average-case time complexity now depends on $\mu$. Finally, you seem to be quoting a quite informal average-case complexity analysis of quicksort. You can find rigorous analysis of the average-case complexity of quicksort (in fact, two different ones) in [lecture notes of Avrim Blum](https://www.cs.cmu.edu/%7Eavrim/451f11/lectures/lect0906.pdf).
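To make the definition concrete, here is a small sketch (my own, not from the lecture notes) that estimates the average-case number of pivot comparisons of quicksort empirically, with $\mu\_n$ the uniform distribution on permutations:

```python
import random

def quicksort_comparisons(a):
    """Count element-vs-pivot comparisons made by a simple quicksort on list a."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    left = [x for x in a[1:] if x < pivot]
    right = [x for x in a[1:] if x >= pivot]
    return (len(a) - 1) + quicksort_comparisons(left) + quicksort_comparisons(right)

def estimate_avg(n, trials=500):
    """Monte Carlo estimate of E_{x ~ mu_n} T(x) for uniform random permutations."""
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)          # one sample from the uniform mu_n
        total += quicksort_comparisons(perm)
    return total / trials

for n in (10, 100, 1000):
    print(n, estimate_avg(n))         # grows on the order of n log n
```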
I came across the following in *Pattern Recognition and Machine Learning by Christopher Bishop* - > > **A balanced data set in which we have selected equal numbers of examples from each of the classes would allow us to find a more accurate model. > However, we then have to *compensate for the effects of our modifications to > the training data*. Suppose we have used such a modified data set and found models for the posterior probabilities. From Bayes’ theorem, we see that the posterior probabilities are proportional to the prior probabilities, which we can interpret as the fractions of points in each class. We can therefore simply take the posterior probabilities obtained from our artificially balanced data set and first divide by the class fractions in that data set and then multiply by the class fractions in the population to which we wish to apply the model. Finally, > we need to normalize to ensure that the new posterior probabilities sum to one.** > > > I don't understand what the author intends to convey in the bold text above - I understand the need for balancing, but not how the "**compensation for modification to training data**" is being made. Could someone please explain the compensation process in detail, and why it is needed - preferably with a numerical example to make things clearer? Thanks a lot! --- P.S. For readers who want a background on why a balanced dataset might be necessary: > > Consider our medical X-ray problem again, and > suppose that we have collected a large number of X-ray images from the general population for use as training data in order to build an automated screening > system. Because cancer is rare amongst the general population, we might find > that, say, only 1 in every 1,000 examples corresponds to the presence of cancer. If we used such a data set to train an adaptive model, we could run into > severe difficulties due to the small proportion of the cancer class. For instance, > a classifier that assigned every point to the normal class would already achieve > 99.9% accuracy and it would be difficult to avoid this trivial solution. Also, > even a large data set will contain very few examples of X-ray images corresponding to cancer, and so the learning algorithm will not be exposed to a > broad range of examples of such images and hence is not likely to generalize > well. > > >
With fewer equations: Ideally, to make a decision, we need to know the probability that the input vector $x$ belongs to class $i$, using Bayes rule, $p\_t(C\_i|x) = \frac{p\_t(x|C\_i)p\_t(C\_i)}{p\_t(X)}$ where the $t$ subscript represents the conditions given in the training set. Now if the training set is representative of operational conditions, then the output of the classifier will be a good estimate of the probability of class membership in operational conditions as well, i.e. $p\_t(C\_i|x) \approx p\_o(C\_i|x)$. But what if this is not the case? Say we have re-balanced the data set so that the classes are each represented by the same number of examples, but this was done in a way that did not affect the likelihoods, $p\_t(x|C\_i)$. In this case all we need to do is to multiply by the ratio of the operational and training set prior probabilities, to give un-normalised operational class probabilities, $q\_o(C\_i|x) = p\_t(x|C\_i)p\_t(C\_i)\times\frac{p\_o(C\_i)}{p\_t(C\_i)} = p\_t(x|C\_i)p\_o(C\_i) \approx p\_o(x|C\_i)p\_o(C\_i)$ The $o$ subscript indicates the operational conditions. We can then just re-normalise these probabilities so we have the probabilities of class membership calibrated for operational conditions, $p\_o(C\_i|x) = \frac{q\_o(C\_i|x)}{\sum\_{j}q\_o(C\_j|x)}$ If you have information about misclassification costs, these can also be factored in in a similar manner. So basically divide by the training set prior probability to "cancel" it from Bayes rule and multiply by the operational prior probability to "insert" it into Bayes rule, but that will mess up the normalisation constant on the denominator, so re-normalise so that all the probabilities sum to one.
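A small numerical sketch of this adjustment (my own example; the class fractions and posterior values are made up):

```python
import numpy as np

# Posterior from a model trained on an artificially balanced set: p_t(C_i | x)
p_train = np.array([0.70, 0.30])        # [cancer, normal] for one input x

prior_train = np.array([0.5, 0.5])      # balanced training set
prior_oper  = np.array([0.001, 0.999])  # operational prevalence: 1 in 1000

# Divide out the training priors, multiply in the operational priors, renormalise.
q = p_train / prior_train * prior_oper
p_oper = q / q.sum()
print(p_oper)   # ~[0.0023, 0.9977]: still strong evidence relative to the 0.001 base rate
```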
I have data for motor vehicle crashes by hour of the day. As you would expect, they are high in the middle of the day and peak at rush-hour. ggplot2's default geom\_density smooths it out nicely A subset of the data, for drink-drive-related crashes, is high at either end of the day (evenings and early mornings) and highest at the extremes. But ggplot2's default geom\_density still dips at the right-hand extreme. What to do about this? The aim is merely visualisation -- no need (is there?) for robust statistical analysis. ![Imgur](https://i.imgur.com/MyKHHVS.png) ``` x <- structure(list(hour = c(14, 1, 1, 9, 2, 11, 20, 5, 22, 13, 21, 2, 22, 10, 18, 0, 2, 1, 2, 15, 20, 23, 17, 3, 3, 16, 19, 23, 3, 4, 4, 22, 2, 21, 20, 1, 19, 18, 17, 23, 23, 3, 11, 4, 23, 4, 7, 2, 3, 19, 2, 18, 3, 17, 1, 9, 19, 23, 9, 6, 2, 1, 23, 21, 22, 22, 22, 20, 1, 21, 6, 2, 22, 23, 19, 17, 19, 3, 22, 21, 4, 10, 17, 23, 3, 7, 19, 16, 2, 23, 4, 5, 1, 20, 7, 21, 19, 2, 21) , count = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L)) , .Names = c("hour", "count") , row.names = c(8L, 9L, 10L, 29L, 33L, 48L, 51L, 55L, 69L, 72L, 97L, 108L, 113L, 118L, 126L, 140L, 150L, 171L, 177L, 184L, 202L, 230L, 236L, 240L, 242L, 261L, 262L, 280L, 284L, 286L, 287L, 301L, 318L, 322L, 372L, 380L, 385L, 432L, 448L, 462L, 463L, 495L, 539L, 557L, 563L, 566L, 570L, 577L, 599L, 605L, 609L, 615L, 617L, 624L, 663L, 673L, 679L, 682L, 707L, 730L, 733L, 746L, 754L, 757L, 762L, 781L, 793L, 815L, 817L, 823L, 826L, 856L, 864L, 869L, 877L, 895L, 899L, 918L, 929L, 937L, 962L, 963L, 978L, 980L, 981L, 995L, 1004L, 1005L, 1007L, 1008L, 1012L, 1015L, 1020L, 1027L, 1055L, 1060L, 1078L, 1079L, 1084L) , class = "data.frame") ggplot(x, aes(hour)) + geom_bar(binwidth = 1, position = "dodge", fill = "grey") + geom_density() + aes(y = ..count..) + scale_x_continuous(breaks = seq(0,24,4)) ``` Happy for anyone with better stats vocabulary to edit this question, especially the title and tags.
**To make a periodic smooth (on any platform), just append the data to themselves, smooth the longer list, and cut off the ends.** Here is an `R` illustration: ``` y <- sqrt(table(factor(x[,"hour"], levels=0:23))) y <- c(y,y,y) x.mid <- 1:24; offset <- 24 plot(x.mid-1, y[x.mid+offset]^2, pch=19, xlab="Hour", ylab="Count") y.smooth <- lowess(y, f=1/8) lines(x.mid-1, y.smooth$y[x.mid+offset]^2, lwd=2, col="Blue") ``` (Because these are counts I chose to smooth their square roots; they were converted back to counts for plotting.) The span in `lowess` has been shrunk considerably from its default of `f=2/3` because (a) we are now processing an array three times longer, which should cause us to reduce $f$ to $2/9$, and (b) I want a fairly local smooth so that no appreciable endpoint effects show up in the middle third. It has done a pretty good job with these data. In particular, the anomaly at hour 0 has been smoothed right through. ![Plot](https://i.stack.imgur.com/VrLP2.png)
I saw "shift and scale invariant" terms for the first time, and I'm wondering what's their meaning? in other word: *Is Shift and Scalar invariant same as invariant under linear transforms?* thanks.
The paper you linked answers this question: > > In contrast, $T\_n$ is not invariant under orthogonal transformations, but it is invariant under location shifts and scalar transformations. > > > Orthogonal transformations are linear, so it would seem the answer is *no*. *Location shifts* and *scalar transformations* seem to have domain-specific definitions that I've not encountered before. From the same paper > > Here, the location shifts and scalar transformations mean $X\_{ij} \mapsto B X\_{ij} + c$ for $i=1,2, \ldots$, $j=1,\ldots,n\_i$, where $c$ is a constant vector, $B=\text{diag}(b\_{21},...,b\_{2p})$, and $b\_{21}, ..., b\_{2p}$ are non-zero constants. > > > They don't offer a definition of *constant vector*, but it seems they must mean a vector all of whose components are equal. You'd have to read in detail to be sure. As for *scalar transformation*, generally I'd expect all the diagonal entries to be equal for that. Quite confusing use of terminology here.
I've come across a result while reading some automata books: Turing machines appear to be more powerful than pushdown automata. Since the tape of a Turing machine can always be made to behave like a stack, it'd seem that we can actually claim that TMs are more powerful. Is this true?
Turing machines are indeed more powerful than regular PDAs. However, in the special case of a PDA with two stacks (a TPDA or 2-PDA), the TPDA is as powerful as a Turing machine. The basic idea is that you can simulate the TM's tape using two stacks: the left stack stores everything to the left of the head on the Turing tape, while the symbol under the head and everything to the right of the head is stored in the other stack. Thus the TPDA can simulate the work of a Turing machine, and they are equivalent. A slightly more detailed description can be found [here](http://www.cs.uiuc.edu/class/fa08/cs373/Problem_Sets/hw9-sol.pdf).
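A minimal sketch (my own illustration, with my own class and method names) of how two stacks can represent a Turing tape: `left` holds the symbols to the left of the head, and the top of `right` is the symbol under the head.

```python
class TwoStackTape:
    """Simulate a Turing machine tape with two stacks."""
    def __init__(self, contents, blank='_'):
        self.blank = blank
        self.left = []                                     # symbols left of the head (top = nearest)
        self.right = list(reversed(contents)) or [blank]   # top = symbol under the head

    def read(self):
        return self.right[-1]

    def write(self, symbol):
        self.right[-1] = symbol

    def move_right(self):
        self.left.append(self.right.pop())
        if not self.right:
            self.right.append(self.blank)   # extend the tape with a blank

    def move_left(self):
        self.right.append(self.left.pop() if self.left else self.blank)

tape = TwoStackTape("abc")
tape.move_right(); tape.write('X')   # tape now reads: a X c, head on X
tape.move_left()
print(tape.read())                   # 'a'
```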
I have a methodological question, and therefore no sample dataset is attached. I'm planning to do a propensity score adjusted Cox regression that aims to examine whether a certain drug will reduce the risk of an outcome. The study is observational, comprising 10,000 individuals. The data set contains 60 variables. I judge that 25 of these might affect treatment allocation. I would never adjust for all 25 of these in a Cox regression, but I've heard that you can include that many variables as predictors in a propensity score and then only include the propensity score subclass and treatment variable in the Cox regression. (Covariates that will not be equal after propensity score adjustment would of course have to be included in the Cox regression.) Bottom line, is it really smart to include that many predictors in the propensity score? --- @Dimitriy V. Masterov Thank you for sharing these important facts. In contrast to books and articles considering other regression frameworks, I don't see any guidelines (reading Rosenbaum's book) on model selection in propensity score analyses. While standard textbooks / review articles seem to always recommend stringent variable selection and keeping the number of predictors low, I haven't seen much of this discussion in propensity score analyses. You write: (1) *"Theoretical insight, institutional knowledge, and good research should guide selection of Xs"*. I agree, but there are circumstances where we have a variable at hand and don't really know (though it might be possible) whether the variable affects either treatment allocation or outcome. For example: should I include kidney function, as measured by filtration rate, in a propensity score aiming to adjust for statin treatment? Statin treatment has nothing to do with kidney function, and I have already included an array of variables that will affect statin treatment. But it is still tempting to include kidney function; it might adjust even more. Now some would say that it should be included because it affects the outcome, but I could give you another example (such as the binary variable urban / rural living) of a variable that affects neither treatment nor outcome, as far as we know. But I would like to include it, as long as it doesn't affect the precision of the propensity score. (2) *"Including Xs affected by the treatment, either ex post or ex ante in anticipation of treatment, will invalidate the assumption".* I'm not sure what you mean here. But if I study the effect of statins on cardiovascular outcomes, I will include various measurements of blood lipids in the propensity score. Blood lipids are affected by the treatment. I guess I misunderstood this statement. @statsRus Thank you for sharing the facts, particularly what you call "a note on selecting inputs". I think I reason much the same way you do. Unfortunately, propensity score methods discuss various adjustment strategies instead of model selection strategies. Perhaps model fit is not important. If that is the case, I would adjust for every available variable that might affect outcome and treatment allocation in the slightest. I am not a statistician, but if model fit is of no importance then I would like to adjust for all variables that might affect treatment allocation and outcome. This would in many cases mean including variables that will be affected by the treatment. Furthermore, some people suggest that the subsequent Cox regression should only include the treatment variable and propensity score subclass, while others suggest that the Cox adjustment should include the propensity score in addition to all the other variables that you would adjust for.
I've personally been asking this question for at least 5 years since for me it's the "big" practical question for using propensity score matching on observational data to estimate causal effects. This is a superb question and there's a subtle disagreement that runs deep in the statistics versus computer science communities. From my experience statisticians tend to advocate "throwing the kitchen sink" of observable inputs into the estimation of the propensity score, while computer scientists tend to advocate a theoretical reason for the inputs (though statisticians may occasionally mention the importance of theory in justifying selection of inputs into the propensity score model). The difference, I believe, stems from the fact that computer scientists (in particular Judea Pearl) tend to think of causality in terms of directed acyclic graphs. When viewing causality through directed acyclic graphs, it's fairly easy to see that you can condition on a so-called "collider" variable, which may "un-block" backdoor paths and actually induce bias into your estimation of a causal effect. My takeaway? If you have solid theory on what affects selection into the treatment, use that in the propensity score estimation. Then conduct a sensitivity analysis to determine how sensitive your estimate is to unobserved confounding variables. If you have almost no theory to guide you, then throw in the "kitchen sink" and then conduct a sensitivity analysis. A note on selecting inputs for the propensity score model (this may be obvious but it's worth noting for others unfamiliar with estimating causal effects from observational data): Don't control for post-treatment variables. That is, you want your inputs in the propensity score model to be measured before the treatment and your outcome to be measured after the treatment. In observational data this practically means that you need three waves of data, with a detailed set of baseline covariates, treatment measured in the second wave, and the outcome measured in the final wave.
I'm trying to determine number of clusters for k-means using `sklearn.metrics.silhouette_score`. I have computed it for `range(2,50)` clusters. How to interpret this? What number of clusters should I choose? [![enter image description here](https://i.stack.imgur.com/Suw3W.png)](https://i.stack.imgur.com/Suw3W.png)
They are all bad. A good silhouette would be around 0.7. Try other clustering algorithms instead.
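For context, a minimal sketch of how such a curve is typically produced and read off with scikit-learn (an assumed setup with synthetic blobs, not the asker's actual data); as noted above, scores far below 0.7 on real data suggest trying a different algorithm or representation rather than just another k:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=1000, centers=5, random_state=0)   # stand-in for your data

scores = {}
for k in range(2, 20):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])   # well-separated blobs give a clear peak near 0.7+
```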
I have 3 independent variables (1-5 Likert scale questions) and I want to check how well these three can predict/explain my DV (1-5 Likert scale). The three independent variables are: 1. Quality of information 2. Accessibility of staff 3. Quality of technical advice My DV is: Overall evaluation of service center All variables are ordinal (1 = low... 5 = high) Which analysis would be appropriate to run here? I would prefer an easy approach, and I think ordinal logistic regression is way too complicated. Can I use a linear regression? Basically, I want to be able to say that (for example) "quality of technical advice" is better at predicting "overall evaluation" than "accessibility of staff". Also, I have a 0 value on all variables ("No opinion", so in fact all are measured on a 0-5 Likert scale). How should I treat this variable? Can I replace the 0s with the mean of the observations? Many thanks! Fredrik
One reference that would be good to read and consider is: > > Spurious Correlation and the Fallacy of the Ratio Standard Revisited, > Richard Kronmal, Journal of the Royal Statistical Society. Series A, > Vol. 156, No. 3 (1993), 379-392. > > > This brings up some of the situations that can occur with using ratios.
I'm a SQL/C++ developer who recently has been asked to generate a report from our database to predict some future performance based on historical data; the problem is that I don't have much experience with this sort of data modelling. I initially thought I could take an average of each month's results and use that, but after reading articles on the web about statistical models, estimation, forecasting, etc., I felt that might not be sufficient. Plus, most of the formulas are over my head as I've never learnt statistics. What process would you recommend as the best way (for me) to calculate these predictions? Preferably something I can translate into SQL or use in a spreadsheet (Excel)? **Update to link in comment** The link jthetzel provides to the document "Statistical flaws in Excel" by Hans Pottel (www.coventry.ac.uk/ec/~nhunt/pottel.pdf) no longer exists. I managed to [find the document here](http://www.pucrs.br/famat/viali/tic_literatura/artigos/planilhas/pottel.pdf).
On the one hand, jthetzel is correct. But on the other hand, asking your question *here* is like going to the annual conference of neurosurgeons to say "My patient has a headache, what do I do?" Of course the answer from a bunch of neurosurgeons will be "You need a neurosurgeon!" ;-) Modern-day SQL implementations are full suites of applications that go well beyond mere database management. So I take issue with the suggestion that analytics is not one of SQL's strengths. Microsoft SQL Server, for example, includes a full range of Analysis Services. This includes a variety of [data mining solutions](http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/data-mining.aspx) that can be used for predictive forecasting. Any major enterprise SQL suite is going to have something similar. Is "one size fits all" canned-algorithm data mining of this sort a substitute for an expert statistician who will analyze the unique situation of your business? Of course not. Not remotely. But can it get you a first pass of some useful predictive modeling and leverage the skills you already have to accomplish a decent beginning on the task? Yes, it can.
The Mersenne Twister is widely regarded as good. Heck, [the CPython source](https://github.com/python/cpython/blob/master/Lib/random.py#L33) says that it "is one of the most extensively tested generators in existence." But what does this mean? When asked to list properties of this generator, most of what I can offer is bad: * It's massive and inflexible (e.g. no seeking or multiple streams), * It fails standard statistical tests despite its massive state size, * It has serious problems around 0, suggesting that it randomizes itself pretty poorly, * It's hardly fast and so on. Compared to simple RNGs like XorShift\*, it's also hopelessly complicated. So I looked for some information about why this was ever thought to be good. [The original paper](http://dl.acm.org/citation.cfm?id=272995) makes lots of comments on the "super astronomical" period and 623-dimensional equidistribution, saying > > Among many known measures, the tests based on the higher dimensional > uniformity, such as the spectral test (c.f., Knuth [1981]) and the k-distribution test, described below, are considered to be strongest. > > > But, for this property, the generator is beaten by a *counter* of sufficient length! This makes no commentary on *local* distributions, which is what you actually care about in a generator (although "local" can mean various things). And even CSPRNGs don't care for such large periods, since it's just not remotely important. There's a lot of maths in the paper, but as far as I can tell little of this is actually about randomness quality. Pretty much every mention of that quickly jumps back to these original, largely useless claims. It seems like people jumped onto this bandwagon at the expense of older, more reliable technologies. For example, if you just up the number of words in an LCG to 3 (much less than the "only 624" of a Mersenne Twister) and output the top word each pass, it passes BigCrush ([the harder part of the TestU01 test suite](https://en.wikipedia.org/wiki/TestU01)), despite the Twister failing it ([PCG paper, fig. 2](http://www.pcg-random.org/pdf/toms-oneill-pcg-family-v1.02.pdf)). Given this, and the weak evidence I was able to find in support of the Mersenne Twister, what *did* cause attention to favour it over the other choices? This isn't purely historical either. I've been told in passing that the Mersenne Twister is at least more proven in practice than, say, [PCG random](http://www.pcg-random.org/pdf/toms-oneill-pcg-family-v1.02.pdf). But are use-cases so discerning that they can do better than our batteries of tests? [Some Googling suggests they're probably not.](http://link.springer.com/chapter/10.1007/11766247_13) In short, I'm wondering how the Mersenne Twister got its widespread positive reputation, both in its historical context and otherwise. On one hand I'm obviously skeptical of its qualities, but on the other it's hard to imagine that it was an entirely random occurrence.
I am the Editor who accepted the MT paper in ACM TOMS back in 1998 and I am also the designer of TestU01. I do not use MT, but mostly MRG32k3a, MRG31k3p, and LFSR113. To know more about these, about MT, and about what else there is, you can look at the following papers: F. Panneton, P. L'Ecuyer, and M. Matsumoto, "Improved Long-Period Generators Based on Linear Recurrences Modulo 2", ACM Transactions on Mathematical Software, 32, 1 (2006), 1-16. P. L'Ecuyer, "Random Number Generation", chapter 3 of the Handbook of Computational Statistics, J. E. Gentle, W. Haerdle, and Y. Mori, eds., Second Edition, Springer-Verlag, 2012, 35-71. <https://link.springer.com/chapter/10.1007/978-3-642-21551-3_3> P. L'Ecuyer, D. Munger, B. Oreshkin, and R. Simard, "Random Numbers for Parallel Computers: Requirements and Methods," Mathematics and Computers in Simulation, 135, (2017), 3-17. <http://www.sciencedirect.com/science/article/pii/S0378475416300829?via%3Dihub> P. L'Ecuyer, "Random Number Generation with Multiple Streams for Sequential and Parallel Computers," invited advanced tutorial, Proceedings of the 2015 Winter Simulation Conference, IEEE Press, 2015, 31-44.
I have a set of observations drawn from an unknown distribution. Given a new observation $x$ I would like to ascertain the probability that $x$ was drawn from the same distribution. My approach was to use kernel density estimation to estimate the pdf of the initial samples, and then use this to estimate the probability of $x$. It has now come to my attention that the $\mathrm{pdf}(x) \neq P(x)$. How can I calculate $P(x)$?
You can extend your non-parametric method if your original sample is large enough. Suppose you wanted to have a 95% probability of not rejecting the null hypothesis that your new observation comes from the same distribution if it in fact does: your critical region could be to reject the null hypothesis if your new observation is in the top $k$ or bottom $k$ of the now $n+1$ values. So you want $\frac{2k}{n+1} = 1 - 0.95$, i.e. $k=0.05\frac{n+1}{2}$, giving the following pairs of values of $k$ and $n$ ``` k n 1 39 2 79 3 119 4 159 ``` and the pattern is obvious. In reality, $n$ will be decided for you so you need to make a sensible choice in the circumstances. In my view, you should choose $k$ (or, in general, the critical region) based on $n$ (or the original sample) before you look at the new observation, rather than looking at the full data and then trying to derive a probability from the new observation as seen: if the null hypothesis is in fact true then each position is equally likely, and the probability of being as extreme or more extreme than the new observation would be basing your conclusion on things that were not observed. If you had some idea about the unknown original distribution or the possible alternative distribution, you could probably do better than this.
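A small sketch of this rank-based check (my own illustration; the function and variable names are mine): choose $k$ from $n$ and $\alpha$ first, then reject if the new observation falls in the top or bottom $k$ of the pooled $n+1$ values.

```python
import numpy as np

def critical_k(n, alpha=0.05):
    """Largest k with 2k/(n+1) <= alpha, i.e. the two-sided critical region size."""
    return int(alpha * (n + 1) / 2)

def reject_same_distribution(sample, x_new, alpha=0.05):
    k = critical_k(len(sample), alpha)
    if k == 0:
        raise ValueError("sample too small for this alpha")
    pooled = np.sort(np.append(sample, x_new))
    rank = np.searchsorted(pooled, x_new) + 1            # 1-based rank of x_new
    return rank <= k or rank > len(pooled) - k

rng = np.random.default_rng(1)
sample = rng.normal(size=79)                 # n = 79 gives k = 2 at alpha = 0.05
print(reject_same_distribution(sample, 0.1))   # typically False: consistent with the sample
print(reject_same_distribution(sample, 10.0))  # True: far in the tail, reject
```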
Why do most people prefer to use many-one reductions to define NP-completeness instead of, for instance, Turing reductions?
Turing reductions are more powerful than many-one mapping reductions in this regard: Turing reductions let you map a language to its complement. As a result it can kind of obscure the difference between (for example) NP and coNP. In Cook's original paper he didn't look at this distinction (iirc Cook actually used DNF formulas instead of CNF), but it probably became clear very quickly that this was an important separation, and many-one reductions made it easier to deal with this.
**Background:** In word2vec we pass a one-hot encoding of our target word into a simple neural network which is trained to predict context words from a window around our target. We eventually take the weights from our hidden layer to use as word embeddings - vector representations of our words. When we feed a one-hot encoding into our hidden layer we end up essentially doing a look-up of the corresponding row in the weight matrix, so when training has finished we can say that this row represents the embedding for a particular word. **My question is:** I am passing images through a pre-trained VGG network, and then using the final intermediate layer as the encodings for my images. I am then using these encodings as a replacement for my one-hot vectors and feeding them into a skip-gram architecture like W2V to learn an image embedding. In this case the dot product of my input feature vector and the hidden layer weights is no longer a look-up affecting a single row - so how can I connect images with their embeddings in this setup? Edit: Some added context - I am trying to get style embeddings in a style vector space using this paper <https://arxiv.org/pdf/1708.04014.pdf>. The paper describes putting the images through a pre-trained VGG and then using these feature vectors to train the embedding layer. Normally we would then go and take the embedding layer weights, but in this case there isn't a single row that connects with our image because it isn't one-hot. Do we take the embedding layer activations instead of the actual weights?
I see where you are coming from with this – if you say something like "we correlated A with B", you might risk giving the impression that you introduced correlation between A and B where perhaps none existed before. In my view, there are better ways to say this, such as: "we investigated whether A and B were correlated" or "we studied the (linear?) association/relationship between A and B". Can you get away with using "we correlated A and B" from a grammatical and/or statistical viewpoint? The answer is yes. Is that the best way you can get your point across? My own answer to this last question would be No.
> > What are the time complexities of finding $8th$ element from beginning and $8th$ element from end in a singly linked list? Let $n$ be the number of nodes in linked list, you may assume that $n > 8$. > > > The answer is given as $O(1)$ and $O(n)$. What I have learnt so far is that the search operation in a linked list takes linear time, since it doesn't have indexes like an array. Then why would searching for the $8th$ element take constant time? Further explanation for the answer is as follows: > > Finding 8th element from beginning requires 8 nodes to be traversed which takes constant time. Finding 8th from end requires the complete list to be traversed. > > > Can someone explain the concept behind this to me?
> > Since the execution time before improvement has a part (10t) > unaffectable by parallel computing on multiple processors, according > to Amdahl's law (outlined in blue box), the potential speedup must be > smaller than the number of processors. > > > The potential speedup in the article means the number of processors, as already answered. It is the speedup limit for a given machine in the case where the algorithm could be cut into an unlimited number of parallel pieces. For example, an algorithm with N independent parts for N processors: this never occurs in real life, but a theoretical example is "find N random numbers". Here the speedup is N, but the article's algorithm doesn't have this property. Amdahl's law doesn't deal with the potential speedup directly. In other words, it states that however the task is split, the total execution time can't be less than that of the largest sequential fragment. So Amdahl's law takes only the sequential computation time of the program fragments as its objects, not a "potential speedup."
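For reference, a small sketch of Amdahl's bound itself (my own illustration; the 10% sequential fraction is an assumption standing in for the article's fixed "10t" part):

```python
def amdahl_speedup(serial_fraction, processors):
    """Upper bound on speedup when a fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for p in (2, 10, 100, 10**6):
    print(p, round(amdahl_speedup(0.10, p), 2))
# With 10% serial work the speedup approaches 10, no matter how many processors you add.
```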
I have read about cryptographic PRGs. If I have a generator G(x1,x2,...,xn) = x1,x2,...,xn, x1&x2&...&xn, how can I prove that it is a PRG, or prove that it is not? Are there some principles I should base my proof on when proving PRGs?
We don't know how to prove that a cryptographic PRNG exists. There are some candidate constructions, but they have not been proved to work. There are some results of the form "if X exists then so does a cryptographic PRNG", where X is some other cryptographic primitive, and the PRNG can be constructed explicitly from X. However, none of these other cryptographic primitives are known to exist. A particularly intriguing open question is to construct such a primitive which works merely under the assumption that P differs from NP. On the other hand, proving that a generator is not a PRNG is much easier. You just need to give a distinguisher which has a non-negligible advantage in comparing the output of the generator to truly random output (as in the definition of a cryptographic PRNG).
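For the generator in the question, assuming it simply appends the AND of the seed bits to the seed itself, a distinguisher along these lines would work (a sketch of my own): accept exactly when the last bit equals the AND of the preceding bits, which always holds for the generator's output but only about half the time for a truly random string.

```python
import random

def G(x):
    """The candidate generator: output the seed bits followed by their AND."""
    return x + [min(x)]                # for 0/1 bits, min == AND

def distinguisher(y):
    """Accept iff the last bit is the AND of the preceding bits."""
    *head, last = y
    return last == min(head)

def advantage(n=16, trials=20_000):
    acc_g = sum(distinguisher(G([random.randint(0, 1) for _ in range(n)]))
                for _ in range(trials)) / trials
    acc_r = sum(distinguisher([random.randint(0, 1) for _ in range(n + 1)])
                for _ in range(trials)) / trials
    return acc_g - acc_r

print(advantage())   # about 0.5, i.e. clearly non-negligible
```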
I know that Euclid’s algorithm is the best algorithm for getting the GCD (greatest common divisor) of a list of positive integers. But in practice you can code this algorithm in various ways. (In my case, I decided to use Java, but C/C++ may be another option). I need to use the most efficient code possible in my program. In recursive mode, you can write: ``` static long gcd (long a, long b){ a = Math.abs(a); b = Math.abs(b); return (b==0) ? a : gcd(b, a%b); } ``` And in iterative mode, it looks like this: ``` static long gcd (long a, long b) { long r; while(b!=0){ r = a % b; a = b; b = r; } return a; } ``` --- There is also the Binary algorithm for the GCD, which may be coded simply like this: ``` int gcd (int a, int b) { while(b) b ^= a ^= b ^= a %= b; return a; } ```
For numbers that are small, the binary GCD algorithm is sufficient. GMP, a well-maintained and real-world-tested library, will switch to a special half-GCD algorithm, a generalization of Lehmer's algorithm, after passing a certain threshold. Lehmer's algorithm uses matrix multiplication to improve upon the standard Euclidean algorithm. According to the docs, the asymptotic running time of both HGCD and GCD is `O(M(N)*log(N))`, where `M(N)` is the time for multiplying two N-limb numbers. Full details on their algorithm can be found [here](https://gmplib.org/manual/Subquadratic-GCD.html#Subquadratic-GCD).
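For reference, here is a sketch of the binary (Stein's) GCD in Python (my own translation; the question's snippets are in Java/C):

```python
def binary_gcd(a, b):
    """Stein's binary GCD: only shifts, subtraction, and parity tests."""
    a, b = abs(a), abs(b)
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:     # factor out common powers of two
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:
        a >>= 1
    while b != 0:
        while b & 1 == 0:
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a                  # both odd, so b - a is even
    return a << shift

print(binary_gcd(48, 180))      # 12
```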
I have in mind a particular 3D object. Given an image taken by a camera, I want to check whether that image contains an instance of my object. For instance, let's say that the object is a bathroom sink. There are many kinds of bathroom sinks, but they tend to share some common elements (e.g., shape, size, color, function). There can also be significant variation in lighting and pose. Given an image, I want to know whether the image contains a bathroom sink. How do I do that? What technique/algorithm would be appropriate? Is there research on this topic? Of course, it is easy to use Google Images to obtain many example images that are known to contain a bathroom sink (or whatever the object I'm looking for might be), which could be used for training some sort of machine learning algorithm. This suggests to me that maybe some combination of computer vision plus machine learning might be a promising approach, but I'm not sure exactly what the specifics might look like.
Instead of simple numbering, you could spread the numbers out over a large (constant-sized) range, such as the minimum and maximum of a CPU integer. Then you can keep putting numbers "in between" by averaging the two surrounding numbers. If the numbers become too crowded (for example you end up with two adjacent integers and there is no number in between), you can do a one-time renumbering of the entire ordering, redistributing the numbers evenly across the range. Of course, you can run into the limitation that all the numbers within the range of the large constant are used. Firstly, this is not usually an issue, since the integer size on a machine is large enough that if you had more elements they likely wouldn't fit into memory anyway. But if it is an issue, you can simply renumber them with a larger integer range. If the input order is not pathological, this method might amortize the renumberings. ### Answering queries A simple integer comparison can answer the query $\left(X \stackrel{?}{<}Y\right)$. Query time would be very quick ( $\mathcal{O}\left(1\right)$ ) if using machine integers, as it is a simple integer comparison. Using a larger range would require larger integers, and comparison would take $\mathcal{O}\left(\log{|integer|}\right)$. ### Insertion Firstly, you would maintain the linked list of the ordering, demonstrated in the question. Insertion here, given the nodes to place the new element in between, would be $\mathcal{O}\left(1\right)$. Labeling the new element would usually be quick $\mathcal{O}\left(1\right)$ because you would calculate the new number easily by averaging the surrounding numbers. Occasionally you might run out of numbers "in between", which would trigger the $\mathcal{O}\left(n\right)$ time renumbering procedure. ### Avoiding renumbering You can use floats instead of integers, so when you get two "adjacent" integers, they *can* be averaged. Thus you can avoid renumbering when faced with two integer floats: just split them in half. However, eventually the floating point type will run out of accuracy, and two "adjacent" floats will not be able to be averaged (the average of the surrounding numbers will probably be equal to one of the surrounding numbers). You can similarly use a "decimal place" integer, where you maintain two integers for an element; one for the number and one for the decimal. This way, you can avoid renumbering. However, the decimal integer will eventually overflow. Using a list of integers or bits for each label can entirely avoid the renumbering; this is basically equivalent to using decimal numbers with unlimited length. Comparison would be done lexicographically, and the comparison times will increase with the length of the lists involved. However, this can unbalance the labeling; some labels might require only one integer (no decimal), others might have a list of long length (long decimals). This is a problem, and renumbering can help here too, by redistributing the numbering (here lists of numbers) evenly over a chosen range (range here possibly meaning length of lists) so that after such a renumbering, the lists are all the same length.
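A rough sketch of the integer-labelling scheme described above (my own illustration; the class and method names are mine, and the label range is kept small only so that renumbering can actually trigger):

```python
class OrderMaintainer:
    """Keep integer labels for a maintained ordering; compare labels in O(1)."""
    def __init__(self, span=1 << 16):
        self.span = span
        self.items = []                     # the ordering itself (a plain list for simplicity)
        self.label = {}

    def _renumber(self):
        step = self.span // (len(self.items) + 1)
        for i, x in enumerate(self.items, 1):
            self.label[x] = i * step        # spread labels evenly over the range

    def insert_after(self, prev, new):
        i = self.items.index(prev) + 1 if prev is not None else 0
        self.items.insert(i, new)
        lo = self.label[self.items[i - 1]] if i > 0 else 0
        hi = self.label[self.items[i + 1]] if i + 1 < len(self.items) else self.span
        if hi - lo < 2:                     # no integer strictly in between: renumber everything
            self._renumber()
        else:
            self.label[new] = (lo + hi) // 2

    def comes_before(self, x, y):           # answers the (X < Y) query
        return self.label[x] < self.label[y]

om = OrderMaintainer()
om.insert_after(None, 'a')
om.insert_after('a', 'c')
om.insert_after('a', 'b')                   # insert b between a and c
print(om.comes_before('a', 'b'), om.comes_before('b', 'c'))   # True True
```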
--- This method is actually used in [this algorithm](http://code-o-matic.blogspot.com/2010/07/graph-reachability-transitive-closures.html) ([implementation](https://code.google.com/p/transitivity-utils/), [relevant data structure](https://code.google.com/p/transitivity-utils/source/browse/trunk/src/edu/bath/transitivityutils/OrderList.java)); in the course of the algorithm, an arbitrary ordering must be kept, and the author uses integers and renumbering to accomplish this. --- Trying to stick to numbers makes your key space somewhat limited. One could use variable-length strings instead, using the comparison logic "a" < "ab" < "b". Still, two problems remain to be solved: A. keys could become arbitrarily long, and B. comparison of long keys could become costly.
Consider an $m$ output, $n$ state Mealy machine. How many states does the equivalent Moore machine contain? The answer is $mn$, but my argument is this: while reading a string, a Mealy machine produces one output per transition, whereas a Moore machine produces an output even in the initial state, before any transition. So to accommodate the first transition of the Mealy machine (the first output of the Mealy), we need another state in the Moore machine, and the answer should be $mn + 1$. Can anyone tell me where I am going wrong?
It is very simple to understand. The Mealy machine has $m$ output symbols and $n$ states, so over all transitions there are at most $m \cdot n$ distinct (state, output) combinations. Each such combination has to be represented as a state of the Moore machine, since the power of both models is the same, which gives the bound of $m \cdot n$.
We are a project group working with ECGs and we could use theoretical validation of our approach to the problem. Our present approach is to cluster the ECGs, validate the formed clusters by cluster validity indexes and then compare each cluster with the features of the diagnosis attached to the ECGs to try to find correlation. We have 50,000 ECGs, each with 8 median leads (a representative median complex with noise reduction, where each lead is time-aligned) with 600 samples for each lead. Our approach is to use state-of-the-art shape-based clustering algorithms and evaluate them in relation to CVI and correlation with diagnosis features. For algorithms to evaluate we will use: * [k-Shape](http://www1.cs.columbia.edu/~jopa/Papers/PaparrizosSIGMOD2015.pdf) * [Fuzzy c-Shape](https://arxiv.org/abs/1608.01072) * Baseline: Hierarchical clustering with Euclidean distance as metric, computed for both average linkage and Ward. For [internal CVI](http://datamining.rutgers.edu/publication/internalmeasures.pdf) (since we do not have any labeled ECGs): * Silhouette index * Calinski-Harabasz index * Davies-Bouldin index * (S\_Dbw validity index) Our current problem, and where we would especially like some feedback, is finding a window of k to run the two algorithms within. The time complexity of the silhouette index does not allow us to try k from 1 to 50,000. We have talked about a window of running k from 5 to 200, but we do not have theory to back this window up. An approach to find a window could be to reduce the dimension of each ECG lead using [PAA](https://jmotif.github.io/sax-vsm_site/morea/algorithm/PAA.html), run k-Shape from 1 to 50,000 on this reduced data set, find a window of interesting k's and then run the two algorithms on the full ECG samples. We would really appreciate feedback on our current approach and a point in the right direction if you think there are cleverer ways to achieve our goal.
To answer the question in the title, AFAIK this is called a ***permutation test***. **If** this is indeed what you are looking for though, it does not work as described in the question. To be (somewhat) concise: the permutation test indeed works by shuffling one of the 'columns' and performing the test or calculation of interest. However, **the trick is to do this a lot of times**, shuffling the data each time. In small datasets it might even be possible to perform all possible permutations. In large datasets you usually perform a number of permutations that your computer can handle, but which is large enough to obtain *a distribution of the statistic of interest*. Finally, you use this distribution to check whether, for example, the mean difference between two groups is >0 in 95% of the distribution. Simply put, this latter step of checking which part of the distribution is above/below a certain critical value is the 'p-value' for your hypothesis test. If this is wildly different from the p-value in the original sample, I wouldn't say there's something wrong with the test/statistic of interest, but rather that your sample contains certain data points which specifically influence the test result. This might be bias (selection bias due to including some weird cases; measurement error in specific cases, etc.), or it might be incorrect use of the test (e.g. violated assumptions). See <https://en.wikipedia.org/wiki/Resampling_(statistics)> for further details. Moreover, see @amoeba 's answer to [this question](https://stats.stackexchange.com/questions/192291/variable-selection-using-cross-validated-pls-model-when-permutation-test-shows-l) if you want to know more about how to combine permutation tests with variable selection.
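A bare-bones sketch of such a permutation test (my own illustration), shuffling the pooled values many times to build the null distribution of the mean difference:

```python
import numpy as np

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for the difference in group means."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = x.mean() - y.mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        rng.shuffle(pooled)                  # shuffle one 'column', many times
        null[i] = pooled[:len(x)].mean() - pooled[len(x):].mean()
    return np.mean(np.abs(null) >= abs(observed))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=50)
b = rng.normal(0.5, 1.0, size=50)
print(permutation_test(a, b))                # small p-value: the group means differ
```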
Please excuse or improve the poor title of this question. My question is rather undirected, but I guess I am trying to find out if I might be missing a keyword for my problem. So there is plenty of work on sorting algorithms. Sorting is usually understood as creating a / *the one correct* total order given a set of elements `X` and their pairwise relationship `x >= x'` for all `x` in `X`. Or actually `>=` is the total order and the task is to create a sequence or directed graph s.t. `x` comes after `x'` iff `x>=x'`. Now you want to do the exact same thing, only `>=` does not define a total order over `X`, but only a partial order. This seems like a very straightforward generalization that I can only imagine is required quite often. Still, under the term **partial order production**, I find only very little literature on the topic / task. Am I missing something? **EDIT: Alternative Formulation** Given a set of elements `X` and a function `f` that returns the relationship between any two elements `x,x'` in `X`. Create a DAG with an edge `x->x'` if the relationship is `>=` and `x'->x` if the relationship `f((x,x'))` is `<`. Then create the transitive reduction of the DAG. 1. `f(x,x')` is either `>=` or `<`. This is normal sorting. 2. `f(x,x')` is either `>=`, `<` or `?` (not comparable). This is partial order production. I would say both are clearly ordering problems, given the relationship function `f` (*the order*) and the elements `X` to order. Still you find so much on case (1) and hardly anything on case (2).
There are competitions for constraint satisfaction solvers. Some problems there can be readily translated to IP solvers as well. See e.g., [MiniZinc challenge](https://www.minizinc.org/challenge.html) which has taken place yearly since 2008 or the [XCSP competition](http://xcsp.org/competition).
This is my first question on this site, so please be patient with me. I am doing a random walk, where I build a time series curve. I do that a preset number of times (let's say 100 times). Now I am wondering what I should do with all the generated curves. Eventually I want to have one curve that is the best representation. I tried taking the mean and median of the values for each point of time, but that gives me a rather tame and flat curve. What other options do I have? Your input is appreciated!
Plotting the mean or median for each timepoint sounds a sensible start. You could also plot a [reference range](http://en.wikipedia.org/wiki/Reference_range) for each timepoint to show the variability across curves at each timepoint. You could also add a few (perhaps 5 or 10) randomly-chosen curves to illustrate the variability across timepoints within each curve. Should be perfectly possible to show all of those things on the same plot with a suitable choice of colours and line weights. That should give a graphical depiction of the process's behaviour but doesn't really answer your requirement for 'one curve that is the best representation'. But to answer that we need to know what you mean by 'best' -- how will you *use* this 'best curve'? The mean or median may *look* too flat and boring to use it as the sole graphical display but may be the 'best' summary curve for quite a few purposes.
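A rough sketch of such a plot (my own, with simulated random walks standing in for your curves):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
curves = rng.normal(size=(100, 250)).cumsum(axis=1)   # 100 random-walk curves

t = np.arange(curves.shape[1])
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)   # pointwise 95% reference range

plt.fill_between(t, lo, hi, color="lightblue", alpha=0.5, label="95% reference range")
for c in curves[rng.choice(len(curves), 5, replace=False)]:
    plt.plot(t, c, color="grey", lw=0.5)               # a few individual curves
plt.plot(t, np.median(curves, axis=0), color="black", lw=2, label="median")
plt.legend()
plt.show()
```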
I have already gone through [this post](https://stackoverflow.com/a/4103234/4993513) which uses `nltk`'s `cmudict` for counting the number of syllables in a word: ``` from nltk.corpus import cmudict d = cmudict.dict() def nsyl(word): return [len(list(y for y in x if y[-1].isdigit())) for x in d[word.lower()]] ``` However, for words outside the CMU dictionary, like names (for example, `Rohit`), it doesn't give a result. **So, is there any other/better way to count syllables for a word?**
You can try another Python library called [Pyphen](http://pyphen.org/). It's easy to use and supports a lot of languages. Counting the hyphens in the hyphenated form then gives a simple syllable count. ``` import pyphen dic = pyphen.Pyphen(lang='en') print(dic.inserted('Rohit')) >> 'Ro-hit' print(dic.inserted('Rohit').count('-') + 1) >> 2 ```
I want to write an algorithm to find the closest pair of points among n points in an XY-plane. I have the following approach in my mind: 1. Find the minimum x co-ordinate (minX) and minimum y co-ordinate (minY). 2. Name the point origin = (minX, minY). 3. Find the distance of all points from this origin and store it in a vector dist[]. 4. Sort the vector dist[]. 5. Traverse through the vector dist and for each i=1 to n-1, compute dist[i+1]-dist[i] and keep track of the minimum of these and the pair that forms this minimum. 6. Return the minimum and the pair. I am not sure if this algorithm would work because of how the triangle inequality works. Any help on why this algorithm should/should not work?
**No, your approach will not work** Let $O$ be your chosen origin. Let $A$, $B$ be two of your other points. $OAB$ forms a triangle. The vector you have in mind would contain the distances $\overline{OA}$ and $\overline{OB}$. You cannot determine the distance $\overline{AB}$ using only the two other sides of the triangle. You would need at least one of the angles for that. As for a concrete counterexample: $O = (0,0), A = (0,4), B = (0,5), C = (5,0)$ so your sorted vector would be: $\overline{OA} = 4, \overline{OB} = 5, \overline{OC} = 5$ The differences are: $\overline{OB}-\overline{OA} = 1$ $\overline{OC}-\overline{OB} = 0$ $(B, C)$ forms the minimum difference, but $\overline{BC} = \sqrt{50} \approx 7.07$, while the closest pair is actually $(A, B)$ with a distance of 1.
Intuitively, the mean is just the average of observations. The variance is how much these observations vary from the mean. I would like to know why the inverse of the variance is known as the precision. What intuition can we make from this? And why is the precision matrix as useful as the covariance matrix in multivariate (normal) distribution? Insights please?
*Precision* is one of the two natural parameters of the normal distribution. That means that if you want to combine two independent predictive distributions (as in a Generalized Linear Model), you add the precisions. Variance does not have this property. On the other hand, when you're accumulating observations, you average expectation parameters. The *second moment* is an expectation parameter. When taking the convolution of two independent normal distributions, the *variances* add. Relatedly, if you have a Wiener process (a stochastic process whose increments are Gaussian) you can argue using infinite divisibility that waiting half the time means jumping with half the *variance*. Finally, when scaling a Gaussian distribution, the *standard deviation* is scaled. So, many parameterizations are useful depending on what you're doing. If you're combining predictions in a GLM, precision is the most “intuitive” one.
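As a small illustration of the "precisions add" property when pooling two independent Gaussian estimates of the same quantity (the standard precision-weighted combination; a minimal sketch):

```python
# Combining two independent Gaussian estimates of the same quantity:
# precisions add, and the pooled mean is the precision-weighted average.
def pool(mu1, var1, mu2, var2):
    tau1, tau2 = 1.0 / var1, 1.0 / var2   # precisions
    tau = tau1 + tau2                     # pooled precision
    mu = (tau1 * mu1 + tau2 * mu2) / tau  # precision-weighted mean
    return mu, 1.0 / tau                  # pooled mean and variance

print(pool(10.0, 4.0, 14.0, 1.0))  # the tighter estimate (var = 1) dominates: (13.2, 0.8)
```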
I'm aware of the general k-center approximation algorithm, but my professor (**this is a question from a CS class**) says that in a one-dimensional space, the problem can be solved (optimal solution found, not an approximation) in `O(n^2)` polynomial time without depending on `k` or using dynamic programming. A general description of the k-center problem: Given a set of nodes in an n-dimensional space, cluster them into `k` clusters such that the "radius" of each cluster (distance from furthest node to its center node) is minimized. A more formal and detailed description can be found at <http://en.wikipedia.org/wiki/Metric_k-center> As you might expect, I can't figure out how this is possible. The part currently causing me problems is how the runtime can not rely on `k`. The nature of the problem causes me to try to step through the nodes on a sort of number line and try to find points to put boundaries, marking off the edges of each cluster that way. But this would require a runtime based on `k`. The `O(n^2)` runtime though makes me think it might involve filling out an `nxn` array with the distance between two nodes in each entry. Any explanation on how this is works or tips on how to figure it out would be very helpful.
First,

> There exist optimal solutions in which each cluster consists of a contiguous sequence of points in the real line.

Any other optimal solutions can be transformed into the cases above. In the following, we focus on the optimal solutions of the kind above.

---

The complexity of the following "dynamic programming" algorithm is $O(n^2 k) = O(n^3)$.

The case for $k=1$ is easy. Denote the optimal solution to $k=1$ in $n$ points by $R\_{n,1}$.

Let $R(n,k)$ be the optimal solution for the problem of $k$-center in the first $n$ points. For convenience, define $R(n,k) = 0$ if $k \ge n$ (this is reasonable because we can choose each point as the center of the cluster consisting of only itself).

For general $k > 1$, we consider all the cases according to the number (denoted $m$) of points that are assigned to the last cluster (that is, the last cluster contains a contiguous sequence of the rightmost $m$ points).

$$R(n,k) = \min\_{0 \le m \le n} \{ R(n-m, k-1) + R\_{m,1} \}$$

The complexity of the "dynamic programming" algorithm is $O(n^2 k) = O(n^3)$.

---

**Note:** This paper [1] gives an $O(n \log n)$ time algorithm.

---

[1] [Efficient Algorithms for the One-Dimensional $k$-Center Problem](http://arxiv.org/pdf/1301.7512v2.pdf). arXiv, 2014.
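Here is a rough sketch of that DP in Python. One caveat: for the min-max $k$-center objective described in the question (minimize the largest cluster radius), the last cluster should be combined with the rest of the solution by taking a maximum, so the sketch uses `max` where the recurrence above is written with `+`; the $O(n^2 k)$ structure is unchanged.

```python
# R[i][c] = optimal (min-max) radius covering the first i sorted points
# with c clusters.  Runs in O(n^2 k) time.
def k_center_1d(points, k):
    pts = sorted(points)
    n = len(pts)
    INF = float("inf")

    def radius(i, j):
        # Optimal 1-center radius of the contiguous block pts[i:j].
        return (pts[j - 1] - pts[i]) / 2.0

    R = [[INF] * (k + 1) for _ in range(n + 1)]
    for c in range(k + 1):
        R[0][c] = 0.0
    for i in range(1, n + 1):
        for c in range(1, k + 1):
            if c >= i:
                R[i][c] = 0.0  # one point per cluster is always possible
                continue
            # The last cluster takes the rightmost m of the first i points.
            for m in range(1, i + 1):
                R[i][c] = min(R[i][c], max(R[i - m][c - 1], radius(i - m, i)))
    return R[n][k]

print(k_center_1d([1, 2, 3, 10, 11, 12, 30], 3))  # 1.0
```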
[Dynamical systems](http://en.wikipedia.org/wiki/Dynamical_system) are those whose evolution can be described by a rule, evolves with time and is deterministic. In this context can I say that Neural networks have a rule of evolution which is the activation function $f(\text{sum of product of weights and features})$ ? Are neural networks 1. dynamical systems, 2. linear or nonlinear dynamical systems? Can somebody please shed some light on this?
A particular neural network does not evolve with time. Its weights are fixed, so it defines a fixed, deterministic function from the input space to the output space. The weights are typically derived through a training process (e.g., backpropagation). One could imagine building a system that periodically re-applies the training process to generate new weights every so often. Such a system would indeed evolve over time. However, it would be more accurate to call this "a system that includes a neural network as one component of it". Anyway, at this point we are probably descending into quibbling over terminology, which might not be very productive. This site format is a better fit for objectively answerable questions with some substantive technical content.
How can I prove that $2^n \nmid n!$ using the binary representations of $n!$ and $2^n$?
Idea: Count explicitly how many factors $2$ the numbers in $[1..n]$ contribute to $n!$. Observe that every other number adds one (the even numbers), every fourth adds another (those divisible by four), every eighth another, and so on. Hence, the number $\#\_2(n!)$ of factors $2$ in $n!$ fulfills $\qquad\displaystyle\begin{align\*} \#\_2(n!) &\leq \sum\_{i=1}^{\log\_2(n)} \frac{n}{2^i} \\ &= n \cdot \sum\_{i=1}^{\log\_2(n)} \frac{1}{2^i} \\ &= n \cdot \frac{n-1}{n} \\ &= n-1 \;. \end{align\*}$ Therefore, $2^n$ can not be a divisor of $n!$.
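If you want a quick sanity check of the bound $\#\_2(n!) \leq n-1$, here is a small script that counts the factors of $2$ directly:

```python
# Check that the exponent of 2 in n! is at most n - 1 (Legendre-style count).
def nu2_factorial(n):
    count, power = 0, 2
    while power <= n:
        count += n // power   # multiples of 2, 4, 8, ... up to n
        power *= 2
    return count

for n in range(1, 200):
    assert nu2_factorial(n) <= n - 1, n
print("2^n never divides n! for n in 1..199")
```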
I'm reading this paper:[An artificial neural network model for rainfall forecasting in Bangkok, Thailand](https://www.hydrol-earth-syst-sci.net/13/1413/2009/hess-13-1413-2009.pdf). The author created 6 models, 2 of which have the following architecture: model B: `Simple multilayer perceptron` with `Sigmoid` activation function and 4 `layers` in which `the number of nodes` are: 5-10-10-1, respectively. model C: `Generalized feedforward` with `Sigmoid` activation function and 4 `layers` in which `the number of nodes` are: 5-10-10-1, respectively. In the Results and discussion section of the paper, the author concludes that : *`Model C` enhanced the performance compared to `Model A` and `B`. This suggests that the `generalized feedforward network` performed better than the `simple multilayer perceptron network` in this study* Is there a difference between these 2 architectures?
Well, you missed the diagram they provided for the GFNN. Here is the diagram from their page: [![enter image description here](https://i.stack.imgur.com/I2CEI.png)](https://i.stack.imgur.com/I2CEI.png) Clearly you can see what the GFNN does: unlike in an MLP, the inputs are applied to the hidden layers as well. While in an MLP the only way information can travel to hidden layers is through previous layers, in a GFNN the input information is directly available to the hidden layers. I might add that this type of connection is used in ResNet CNNs, which increased their performance dramatically compared to other CNN architectures.
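To make the wiring difference concrete, here is a rough numpy sketch of one forward pass for each architecture, using the 5-10-10-1 sizes from the question (the exact weight shapes for the generalized feedforward network are an assumption based on the diagram; biases are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
x = rng.normal(size=5)                      # 5 input features

# Plain MLP (5-10-10-1): each layer sees only the previous layer's output.
W1, W2, W3 = rng.normal(size=(10, 5)), rng.normal(size=(10, 10)), rng.normal(size=(1, 10))
h1 = sigmoid(W1 @ x)
h2 = sigmoid(W2 @ h1)
y_mlp = sigmoid(W3 @ h2)

# Generalized feedforward: every hidden layer also receives the raw input.
V1 = rng.normal(size=(10, 5))
V2 = rng.normal(size=(10, 10 + 5))          # previous hidden output + raw input
V3 = rng.normal(size=(1, 10 + 5))
g1 = sigmoid(V1 @ x)
g2 = sigmoid(V2 @ np.concatenate([g1, x]))
y_gff = sigmoid(V3 @ np.concatenate([g2, x]))

print(y_mlp, y_gff)
```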
I am doing Levene's mean test and Levene's median test (Brown-Forsythe). I want to compare the p-values of these two tests to see which is better. I get large p-values for both tests which are 0.562 (Levene mean) and 0.611 (Levene median) for normal distribution. * Which test shows the better type I error rate? * does Levene's mean test perform best when the data follows a normal distribution?
[NIST](http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm) & [Wikipedia](http://en.wikipedia.org/wiki/Brown%E2%80%93Forsythe_test) both cite Brown & Forsythe's 1974 paper in saying that the version of Levene's test using the median performs better for skewed distributions. You can't infer that the test performed well or badly from the p-value you get unless you know whether the samples did in fact come from populations with unequal variances, & then you'd have to repeat many times to find the distribution of the p-value. Which is just what Brown & Forsythe did to justify their claim.
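If you want to see this for yourself, you can estimate the type I error rate of each version by simulation, roughly in the spirit of Brown & Forsythe (a sketch assuming `scipy` is available; `scipy.stats.levene` supports both centerings):

```python
# Estimate type I error rates of Levene's test with mean vs. median centering
# when the two groups really do have equal variance (here: skewed data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n, alpha = 2000, 30, 0.05
reject_mean = reject_median = 0
for _ in range(n_sims):
    a = rng.exponential(scale=1.0, size=n)   # equal variances, skewed
    b = rng.exponential(scale=1.0, size=n)
    if stats.levene(a, b, center='mean').pvalue < alpha:
        reject_mean += 1
    if stats.levene(a, b, center='median').pvalue < alpha:
        reject_median += 1

print("type I error, mean centering:  ", reject_mean / n_sims)
print("type I error, median centering:", reject_median / n_sims)
```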
Let $X\_1,...,X\_4$ be independent $N\_p(μ,Σ)$ random vectors. Let $V\_1,V\_2$ be such that $$V\_1=(1/4)X\_1-(1/4)X\_2+(1/4)X\_3-(1/4)X\_4 $$ $$V\_2=(1/4)X\_1+(1/4)X\_2-(1/4)X\_3-(1/4)X\_4 $$ I need to find the marginal distributions of $V\_1$ and $V\_2$ and the joint density. Since they are linear combinations of random vectors I do not know the theory behind it to solve this. Any answers will be much appreciated. Thanks
I think I understand what you're asking, but correct me if I'm wrong. The analytical formula for $\beta$ is the same for the multivariate case as the univariate case: $$ \hat \beta = (X'X)^{-1}X'Y $$ You find this the same way as for the univariate case, by taking the first derivative of the residual sum of squares. It is relatively straightforward to calculate using matrix calculus (which is covered in the matrix cookbook linked to by queenbee). You can test whether this solution works in R:

```
y <- cbind(rnorm(10), rnorm(10), rnorm(10))
x <- cbind(1, rnorm(10), rnorm(10), rnorm(10), rnorm(10), rnorm(10), rnorm(10))
colnames(x) <- paste("x", 0:6, sep = "")  # 7 columns: x0 is the intercept column
colnames(y) <- paste("y", 1:3, sep = "")

fit <- lm(y ~ x - 1)
summary(fit)

anaSol <- solve((t(x) %*% x)) %*% t(x) %*% y
anaSol

coef(fit) - anaSol
```

Here's another reference, specifically related to multivariate analysis: <http://socserv.mcmaster.ca/jfox/Books/Companion/appendix/Appendix-Multivariate-Linear-Models.pdf>
Suppose you have an array of size $n \geq 6$ containing integers from $1$ to $n − 5$, inclusive, with exactly five repeated. I need to propose an algorithm that can find the repeated numbers in $O(n)$ time. I cannot, for the life of me, think of anything. I think sorting, at best, would be $O(n\log n)$? Then traversing the array would be $O(n)$, resulting in $O(n^2\log n)$. However, I'm not really sure if sorting would be necessary as I've seen some tricky stuff with linked list, queues, stacks, etc.
Leaving this as an answer because it needs more space than a comment gives. You make a mistake in the OP when you suggest a method. Sorting a list and then traversing it takes $O(n\log n)$ time, not $O(n^2\log n)$ time. When you do two things (that take $O(f)$ and $O(g)$ respectively) sequentially then the resulting time complexity is $O(f+g)=O(\max\{f,g\})$ (under most circumstances). In order to multiply the time complexities, you need to be using a for loop. If you have a loop of length $f$ and for each value in the loop you do a function that takes $O(g)$, then you'll get $O(fg)$ time. So, in your case you sort in $O(n\log n)$ and then traverse in $O(n)$ resulting in $O(n\log n+n)=O(n\log n)$. If for each comparison of the sorting algorithm you had to do a computation that takes $O(n)$, *then* it would take $O(n^2\log n)$ but that's not the case here. --- In case you're curious about my claim that $O(f+g)=O(\max\{f,g\})$, it's important to note that that's not always true. But if $f\in O(g)$ or $g\in O(f)$ (which holds for a whole host of common functions), it will hold. The most common time it doesn't hold is when additional parameters get involved and you get expressions like $O(2^cn+n\log n)$.
Suppose I have a bag of marbles containing blue and red marbles. I guess/predict that my chance of drawing a red marble is 60%. If the actual distribution turns out to be 80% blue/20% red, how can I best quantify the accuracy of my initial prediction? Is it possible to restrict such a quantification to a range between 0 and 100%, such that a correct prediction (20%) would evaluate to a 100% accuracy?
$\frac{21}{5^5}$ is the probability a pre-identified individual smashed $4$ or $5$ plates (assuming each plate was independently equally likely to be smashed by anybody), and is $P(X\ge 4)$ when $X \sim Bin(5,\frac15)$. As the books seems to say. $\frac{21}{5^4} = \frac{105}{5^5}$ is five times that and is the probability somebody smashed $4$ or $5$ plates, since it is impossible that two people each smashed $4$ or $5$ of the $5$ smashed plates. As your second attempt seems to say. Note that $5(4{5 \choose 4}+{5 \choose 5})=105$ Your $\frac{125}{5^5}$ looks harder to justify, especially your $p^5\_4 =120$ for $4$ plates smashed by somebody and $1$ by someone else. In that case there are $5$ people who could be the big smasher and $4$ other people the little smasher and $5$ possible plates being smashed by the little smasher, so $5\times 4 \times 5=100$ possibilities, to which you later add $5$ to get $105$.
Consider the example in this article <http://text-analytics101.rxnlp.com/2014/10/computing-precision-and-recall-for.html> Will accuracy be (30 + 60 + 80)/300? what is weighted precision?
Accuracy is for the whole model and your formula is correct. Precision for **one class 'A'** is `TP_A / (TP_A + FP_A)` as in the mentioned article. Now you can calculate average precision of a model. There are a few ways of averaging (micro, macro, weighted), well explained [here](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html):

> 'weighted': Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; (...)
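For instance, with scikit-learn (toy labels, just for illustration):

```python
from sklearn.metrics import accuracy_score, precision_score

y_true = ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'C', 'C']
y_pred = ['A', 'A', 'B', 'B', 'C', 'C', 'C', 'C', 'A', 'B']

print(accuracy_score(y_true, y_pred))                       # overall accuracy
print(precision_score(y_true, y_pred, average=None))        # per-class precision
print(precision_score(y_true, y_pred, average='macro'))     # unweighted mean over classes
print(precision_score(y_true, y_pred, average='weighted'))  # weighted by class support
```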
Suppose we have a problem parameterized by a real-valued parameter p which is "easy" to solve when $p=p\_0$ and "hard" when $p=p\_1$ for some values $p\_0$, $p\_1$. One example is counting spin configurations on graphs. Counting weighted proper colorings, independent sets, Eulerian subgraphs correspond to partition functions of hardcore, Potts and Ising models respectively, which are easy to approximate for "high temperature" and hard for "low temperature". For simple MCMC, hardness phase transition corresponds to a point at which mixing time jumps from polynomial to exponential ([Martineli,2006](http://www.eecs.berkeley.edu/~sinclair/istree2.pdf)). Another example is inference in probabilistic models. We "simplify" given model by taking $1-p$, $p$ combination of it with a "all variables are independent" model. For $p=1$ the problem is trivial, for $p=0$ it is intractable, and hardness threshold lies somewhere in between. For the most popular inference method, problem becomes hard when the method fails to converge, and the point when it happens corresponds to the phase transition (in a physical sense) of a certain Gibbs distribution ([Tatikonda,2002](http://www.stanford.edu/~montanar/TEACHING/Stat375/papers/tatikonda.pdf)). What are other interesting examples of the hardness "jump" as some continuous parameter is varied? Motivation: to see examples of another "dimension" of hardness besides graph type or logic type
In standard worst-case approximation, there are many sharp thresholds as the *approximation factor* varies. For example, for 3LIN — satisfying as many given Boolean linear equations on 3 variables each as possible — there is a simple random-assignment algorithm achieving approximation factor 1/2, but any approximation better than some $t = 1/2 + o(1)$ is already as hard as exact SAT (conjectured to require exponential time).
I am just wondering what we can infer from a graph with x-axis as the actual and y axis as the predicted data? ![signups](https://i.stack.imgur.com/dwlOW.png)
Scatter plots of Actual vs Predicted are one of the richest forms of data visualization. You can tell pretty much everything from it. Ideally, all your points should be close to a regressed diagonal line. So, if the Actual is 5, your predicted should be reasonably close to 5 too. If the Actual is 30, your predicted should also be reasonably close to 30. So, just draw such a diagonal line within your graph and check out where the points lie. If your model had a high R Square, all the points would be close to this diagonal line. The lower the R Square, the weaker the goodness of fit of your model, and the more foggy or dispersed your points are (away from this diagonal line).

You will see that your model seems to have three subsections of performance. The first one is where Actuals have values between 0 and 10. Within this zone, your model does not seem too bad. The second one is when Actuals are between 10 and 20; within this zone your model is essentially random. There is virtually no relationship between your model's predicted values and Actuals. The third zone is for Actuals >20. Within this zone, your model steadily and greatly underestimates the Actual values.

From this scatter plot, you can tell other issues related to your model. The residuals are heteroskedastic. This means the variance of the error is not constant across various levels of your dependent variable. As a result, the standard errors of your regression coefficients are unreliable and may be understated. In turn, this means that the statistical significance of your independent variables may be overstated. In other words, they may not be statistically significant. Because of the heteroskedastic issue, you actually can't tell.

Although you can't be sure from this scatter plot, it appears likely that your residuals are autocorrelated. If your dependent variable is a time series that grows over time, they definitely are. You can see that between 10 and 20 the vast majority of your residuals are positive, and >20 they are all negative. If your dependent variable is indeed a time series that grows over time, it has a Unit Root issue, meaning it is trending ever upward and is nonstationary. You have to transform it to build a robust model.
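For what it's worth, drawing such a plot with the diagonal reference line takes only a few lines (made-up data shown, since the original data isn't available):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
actual = rng.uniform(0, 30, size=200)
predicted = actual + rng.normal(scale=0.1 * actual + 1)          # hypothetical predictions

plt.scatter(actual, predicted, alpha=0.5)
lims = [min(actual.min(), predicted.min()), max(actual.max(), predicted.max())]
plt.plot(lims, lims, 'k--', label='perfect prediction (y = x)')  # diagonal reference line
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.legend()
plt.show()
```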
When considering how multi-thread-friendly our program must be, my team puzzled about whether there's anything that *absolutely cannot be done* on a single-core CPU. I posited that graphics processing requires massively parallel processing, but they argue that things like DOOM were done on single-core CPUs without GPUs. **Is there anything that *must* be done on a multi-core processor?** Assume there is infinite time for both development and running.
The question is: under what constraints? There are certainly problems where, if we ask the question "can we solve this problem on hardware X in the given amount of time", the answer will be no. But this is not a "future-proof" answer: things which in the past could not be done fast enough in a single core probably can be now, and we can't predict what future hardware will be capable of. In terms of computability, we know that a single-tape Turing Machine is capable of computing all the same functions as a single or multi-core computer, so, runtime aside, there are no problems that a multi-core computer can solve that a single-core can't. In terms of something like graphics, literally everything that is on the GPU *could* be done on the CPU... if you are willing to wait long enough.
I was trying to prove by induction that $$ T(n) = \begin{cases} 1 &\quad\text{if } n\leq 1\\ T\left(\lfloor\frac{n}{2}\rfloor\right) + n &\quad\text{if } n\gt1 \\ \end{cases} $$ is $\Omega(n)$ implying that $\exists c>0, \exists m\geq 0\,\,|\,\,T(n) \geq cn \,\,\forall n\geq m$ Base case : $T(1) \geq c1 \implies c \leq 1$ Now we shall assume that $T(k) = \Omega(k) \implies T(k) \geq ck \,\,\forall k < n$ and prove that $T(n) = \Omega(n)$. $$ T(n) = T(\lfloor{\frac{n}{2}}\rfloor) + n \geq c\lfloor{\frac{n}{2}}\rfloor + n \geq c \frac{n}{2} -1 + n \geq n\left(\frac{c}{2} - \frac{1}{n} + 1\right) \geq^{?} cn\\ c \leq 2 - \frac{2}{n} $$ So we have proved that $T(n) \geq c n$ in : 1) The base case for $c \leq 1$ 2) The inductive step for $c \leq 2 - \frac{2}{n}$ Yet we have to find a value that satisfies them both for all $n\geq 1$, the book suggest such value is $c = 1$ which to me is not true since : $$ 1 \leq 1\\1\leq2 - \frac{2}{n}\implies 1 \leq 0 \text{ for n = 1} $$ My guess would be $0$ but is not an acceptable value; So we just say its $\Omega(n)$ but for $n \gt 1$?Or how can we deal with it?
Unicode is a suitable encoding for most scripts because it includes characters from all the scripts. The analog would be an instruction set architecture that includes instructions from all processors. This would be huge and impractical to implement whether in hardware or in software. It would even be contradictory: how do you reconcile an architecture with 16-bit words with an architecture with 64-bit words? an architecture that allows unaligned word accesses with one that doesn't? an architecture that guarantees the presence of an MMU with one that guarantees that addresses used by software map directly to addresses in physical memory? an architecture that guarantees that all memory accesses take exactly the same amount of time with one that has memory caches? A realistic ISA can't combine what all the ISA do. You have to make choices, and these choices depend on your objectives. If you're building a processor for real-time processing, a guarantee that memory accesses take the same amount of time is crucial. If you're building a processor for fast numerical or symbolic computation, caching memory accesses is essential for performance. A good instruction set for general-purpose computing is not the same as for graphical rendering, and machine learning calls for yet another approach. There is no one-size-fits-all. Why isn't there a standard model of vehicle? Why are there so many different models of cars and airplanes and bicycles and motorcycles and helicopters and submarines and so on?
Would it be possible for a runtime environment to detect infinite loops and subsequently stop the associated process, or would implementing such logic be equivalent to solving the halting problem? For the purpose of this question, I define an "infinite loop" to mean a series of instructions and associated starting stack/heap data that, when executed, return the process to exactly the same state (including the data) as it was in before initiating the infinite loop. (In other words, a program generating an indefinitely long decimal expansion of pi isn't "stuck" in an "infinite loop," because at every iteration, it has more digits of pi somewhere in its associated memory.) (Ported from <https://stackoverflow.com/q/16250472/1858225>)
It might be **theoretically possible** for a runtime environment to check for such loops using the following procedure: After every instruction is executed, the runtime environment would make a complete image of the state of a running process (i.e. all memory associated with it, including registers, P.C., stack, heap, and globals), save that image somewhere, and then check to see whether it matches any of its previously saved images for that process. If there is a match, then the process is stuck in an infinite loop. Otherwise, the next instruction is executed and the process is repeated.

In fact, rather than performing this check after every single instruction, the runtime environment could simply pause the process periodically and make a save-state. If the process is stuck in an infinite loop involving *n* states, then after at most *n* checks, a duplicate state will be observed.

Note, of course, that this is **not** a solution to the halting problem; the distinction is discussed [here](http://chat.stackoverflow.com/rooms/29021/discussion-between-kyle-strand-and-ed-heal).

But such a feature would be an **enormous waste of resources**; continually pausing a process to save *all memory associated with it* would slow it down tremendously and consume an enormous amount of memory very quickly. (Although old images could be deleted after a while, it would be risky to limit the total number of images that could be saved because a large infinite loop--i.e., one with many states--might not get caught if there are too few states kept in memory.)

Moreover, this feature wouldn't actually provide that much benefit, since its ability to catch errors would be extremely limited and because it's relatively simple to find infinite loops with other debugging methods (such as simply stepping through the code and recognizing the logic error).

Therefore, I doubt that such a runtime environment exists or that it will ever exist, unless someone programs it just for kicks. (Which I am somewhat tempted to do now.)
I encountered this problem, summarised as: For a random variable X representing volume of water usage : population mean = μ population SD σ = 65 A sample of 80 has mean x̄ = 118 Numerically justify why X is unlikely to be a normally distributed? Here's the screen clipping of the actual question (in my summary I used the random variable X whereas in the original it's V). [![Original Problem](https://i.stack.imgur.com/BaoSw.jpg)](https://i.stack.imgur.com/BaoSw.jpg) The answer said it's because x̄ − nσ = 118 – 65n <0 and volume usage cannot be negative. I'm not sure where x̄ − nσ = 118 – 65n <0 came from. A screenshot of the given solution: [![enter image description here](https://i.stack.imgur.com/YeBMv.jpg)](https://i.stack.imgur.com/YeBMv.jpg) Many thanks
The explanation given is really poor. Usually, indeed, n does stand for sample size. Here, though, it does not. The point is that (as noted) water usage cannot be negative. If water usage is normally distributed with mean 118 and SD 65 then the minimum possible value (zero) is only 118/65 = 1.81 sd below the mean. Where might N come from? In the sample of 80, we expect there to be some people more than 1.81 sd below the mean, and that's impossible. And the population distribution can't possibly be normal because it has a lower limit.
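To attach a number to that argument, under $N(118, 65^2)$ the probability of a negative value is easy to compute (a quick check, assuming scipy is available):

```python
from scipy.stats import norm

p_negative = norm.cdf(0, loc=118, scale=65)   # P(V < 0) under N(118, 65^2)
print(p_negative)                             # about 0.035
print(80 * p_negative)                        # roughly 2.8 impossible (negative) values expected in n = 80
```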
Given a regular language (NFA, DFA, grammar, or regex), how can the number of accepting words in a given language be counted? Both "with exactly n letters" and "with at most n letters" are of interest. [Margareta Ackerman](http://www.cs.fsu.edu/~ackerman/) has two papers on the related subject of enumerating words accepted by an NFA, but I wasn't able to modify them to count efficiently. It seems like the restricted nature of regular languages should make counting them relatively easy -- I almost expect a formula more than an algorithm Unfortunately my searches so far haven't turned up anything, so I must be using the wrong terms.
For a DFA, in which the initial state is state $0$, the number of words of length $k$ that end up in state $i$ is $A^k[0,i]$, where $A$ is the transfer matrix of the DFA (a matrix in which the number in row $i$ and column $j$ is the number of different input symbols that cause a transition from state $i$ to state $j$). So you can count accepting words of length exactly $k$ easily, even when $k$ is moderately large, just by calculating a matrix power and adding the entries corresponding to accepting states. The same thing works for accepting words of length at most $k$, with a slightly different matrix. Add an extra row and column of the matrix, with a one in the cell that's both in the row and the column, a one in the new row and the column of the initial state, and a zero in all the other cells. The effect of this change to the matrix is to add one more path to the initial state at each power. This doesn't work for NFAs. I suspect the best thing to do is just convert to a DFA and then apply the matrix powering algorithm.
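As an illustration, here is the matrix-power computation for a small (hypothetical) DFA over $\{a,b\}$ that accepts strings with an even number of $a$'s:

```python
import numpy as np

# A[i, j] = number of input symbols taking state i to state j.
# State 0 = even number of a's (initial, accepting), state 1 = odd.
A = np.array([[1, 1],    # from state 0: 'b' stays, 'a' goes to state 1
              [1, 1]])   # from state 1: 'a' goes back to 0, 'b' stays
initial = 0
accepting = [0]

def count_words_of_length(A, k, initial, accepting):
    Ak = np.linalg.matrix_power(A, k)
    return sum(Ak[initial, f] for f in accepting)

print(count_words_of_length(A, 3, initial, accepting))  # 4 of the 8 length-3 strings
```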
I've spent days trying to determine the difference between these two methods, which, as best I can tell, represent the univariate and multivariate approaches to a within-subjects design. Can anybody tell me why these two analyses get different F-values for their effects? Here's demonstration data in the shape that repeated-measures uses it: ``` Data Wide; input subject DV11 DV12 DV13 DV21 DV22 DV23 DV31 DV32 DV33 ; cards; 1 36 52 55 60 68 64 40 42 44 2 30 32 34 40 42 44 20 22 24 3 -10 -8 -6 0 8 6 -30 -25 -15 ; run; ``` And the analysis: ``` PROC GLM data=wide; class subject; model dv11 dv12 dv13 dv21 dv22 dv23 dv31 dv32 dv33 = /nouni; REPEATED factor1 3, factor2 3; RUN; ``` Factor 1 has F(2,4) = 48.35; Factor 2 has F(2,4) = 11.5; interaction has F(4,8) = 0.65. For the other approach, which I'm assuming is the multivariate approach (correct me if I'm wrong!), we read the data in as a flat vector labeled with the levels of the within-subject factors: ``` Data Narrow; input subject factor1 factor2 value; cards; 1 1 1 36 1 1 2 52 1 1 3 55 1 2 1 60 1 2 2 68 1 2 3 64 1 3 1 40 1 3 2 42 1 3 3 44 2 1 1 30 2 1 2 32 2 1 3 34 2 2 1 40 2 2 2 42 2 2 3 44 2 3 1 20 2 3 2 22 2 3 3 24 3 1 1 -10 3 1 2 -8 3 1 3 -6 3 2 1 0 3 2 2 8 3 2 3 6 3 3 1 -30 3 3 2 -25 3 3 3 -15 ; run; ``` and the analysis: ``` PROC GLM data=Narrow; class subject factor1 factor2; model value = factor1|factor2 subject; run; ``` This time our results are: `Factor 1 F(2,16) = 77.57, Factor 2 F(2,16) = 7.70, interaction F(4,16) = 0.56`. Can anybody explain why these analyses get different results? I am more comfortable with the second method, but the first is traditionally preferred in my field. UPDATE: As best I can tell, the differences are that: 1. PROC GLM is based on Ordinary Least Squares, while PROC MIXED uses maximum likelihood. This difference is why MIXED can handle missing obs but REPEATED cannot. 2. A repeated-measures analysis analyzes the pairwise differences, while mixed-effects does not. Repeated-measures thereby involves an assumption of sphericity (constant variance of pairwise differences) while MIXED assumes normality of the raw observations (?) 3. Calculation of degrees of freedom are different between models, perhaps because GLM REPEATED is more conservative than MIXED due to concerns re: violations of sphericity (?) Sources: <http://www.ats.ucla.edu/stat/sas/library/GLMvsMIXED_os.htm> and <http://www.ats.ucla.edu/stat/sas/library/mixedglm.pdf>
In addition to @Glen\_b 's excellent answer (+1) I'd add 1) "Familywise" begs the question of what a family is. All the analyses in one paper? All the analyses on one data set? All the analyses related to one question? All the analyses you do in your life? What about analyses that *other* people do on the same data? 2) In addition, we default to "5%" and "20%" for type I and type II, but while we treat these as almost sacred, there's no reason to do so. Sometimes a type I error is bad; sometimes it isn't. (But try convincing a journal editor of that!)
I'm trying to calculate a Mahalanobis-type pairwise distance matrix in R. I have 33 individuals, each with 10 variables. The idea is to get a distance matrix D, where $$D\_{i,j}=(\mathbf{X}\_i-\mathbf{X}\_j)W^{-1}(\mathbf{X}\_i-\mathbf{X}\_j)^T$$ However I haven't been able build proper code for it.
There is a very easy way to do it using the R package "biotools". In this case you will get a squared Mahalanobis distance matrix.

```
# Manly (2004, p. 65-66)
x1 <- c(131.37, 132.37, 134.47, 135.50, 136.17)
x2 <- c(133.60, 132.70, 133.80, 132.30, 130.33)
x3 <- c(99.17, 99.07, 96.03, 94.53, 93.50)
x4 <- c(50.53, 50.23, 50.57, 51.97, 51.37)

# size (n x p)
# Means
x <- cbind(x1, x2, x3, x4)

# size (p x p)
# Variances and Covariances
Cov <- matrix(c(21.112, 0.038,  0.078,  2.01,
                0.038,  23.486, 5.2,    2.844,
                0.078,  5.2,    24.18,  1.134,
                2.01,   2.844,  1.134,  10.154), 4, 4)

library(biotools)
Mahalanobis_Distance <- D2.dist(x, Cov)
print(Mahalanobis_Distance)
```
I was watching this [video](https://youtu.be/u8VE1_njuUo?list=PLcjqUUQt__ZGLhwUacPm7_RKs2eJNFwco&t=1301) on statements. There is an example: > > $x + \frac12 = 2$ > > > It's an **open statement** as the truth value could be `T` or `F` depending on the value of $x$. > > $∃x: x + \frac12 = 2$ and $x ∈ ℤ$ > > > Now the statement is **closed statement ~~(proposition)~~** as the truth value is `F`. But later the professor said: > > $x + \frac12 = 2$ and $x ∈ ℤ$ > > > is an **open statement**. But it's not clear to me how it's an **open statement** as for any value of $x$ the statement is `F`? **Update from the author of the video:** > > $x$ is a free (unquantified) variable here, so by definition this is an open statement. But you've identified something that's a source of confusion. Many online sites say that a statement is open if its truth value depends on what values the variable(s) take on. That usually coincides with the definition I indicated above. But this example points out that the two definitions are not the same. While it's easy to slip into the "is the truth value known" definition (as I seemed briefly to do at around 19:40), the definition of an open statement that works best is that it's a statement with one or more free variables, as I point out in several places in this video. > > >
$$x=1$$ $$∃x: x=1$$ The first is an open statement, since no value for $x$ is given. $x$ is called a free variable here. The second is a closed statement, because it talks about all possible values of $x$. $x$ is not a free variable here. An open statement can be true or false depending on what the values of its free variables are. A closed statement is either always true or always false. (But we might not know which) $$x=x+1$$ This is an open statement since there is no $∃x$ there. It just so happens that it is always false, but that is not relevant to openness.
The Ubuntu Software Center uses a 1 to 5 star rating system for its App Reviews. However, it's current ratings sorting algorithm [looks very fishy](https://bugs.launchpad.net/ubuntu/+source/software-center/+bug/894468). I believe this is a different question from [this one](https://stats.stackexchange.com/questions/15979/how-to-find-confidence-intervals-for-ratings), which seems to assume the mean of the ratings is a meaningful number. If I assume that user supplied ratings are ordinal data, then doing things like adding the ratings together or taking their mean are not correct. The primary method of sorting must be the sample median. Unfortunately, this leaves me with a lot of duplicates since 30 applications are being pigeon-holed into 5 stars of ratings, so it must be possible to further subsort the apps with identical median stars. I believe I want: * Rating at the median should be better than rating below it. + {2,3,3,3,4} > {2,2,3,3,4} * Similarly, rating at the median should be worse than rating above it. + {2,2,3,3,4} < {2,2,3,4,4} * Ratings above and below the median should be equivalent. + {1,1,3,4,4} = {2,2,3,4,4} = {2,2,3,5,5} * Among two apps with the same median, higher confidence that the median is at least that large should rank higher. + {2,3,3,3,4} > {2,3,4} Are these reasonable desires? What algorithm can get me there? My intuition tells me I want something like sample median + {lower bound probability estimate someone ranks that app higher than median} - {upper bound probability estimate someone ranks that app lower than median}. So: * a large data set composed of 20 % 1's, 40% 3's, and 40% 4's would approach (3)+(2/5)-(1/5) = 3.2 * a set composed of equal parts 2,3, and 5 would approach (3)+(1/3)-(1/3) = 3.0 * and a set composed of 40% 1's and 60% 3's would approach (3)+(0)-(2/5) = 2.6. Is this reasonable?
As others have stated, you need to have a common frequency of measurement (i.e. the time between observations). With that in place I would identify a common model that would reasonably describe each series separately. This might be an ARIMA model or a multiply-trended Regression Model with possible Level Shifts or a composite model integrating both memory (ARIMA) and dummy variables. This common model could be estimated globally and separately for each of the two series and then one could construct an F test to test the hypothesis of a common set of parameters.
What are the reasons for decreasing the number of threads in a parallel implementation ? Assume that we have two implementations, the first one with 4 threads, and a second one with 8 threads, and both have exactly the same run time. What are the various reasons to prefer the first one? I stress on the fact that the two implementations have the same run times. The obvious reasons are the following: * If we have 8 processors, we can execute at the same time this implementation on two different sets of inputs thus performing twice the work, * Less resources consumption by the OS which handles less number of threads. I'm looking for other reasons...
Some other benefits from a security point of view for running the 4 threads implementation would be:

* **Obfuscation** of the actual processors being used by the algorithm, in case an attacker is looking to interfere with the execution.
* If you have 8 processors you can run simultaneously the same algorithm twice using the same input to check the **integrity** of your results.
I am coding a procedure that takes an integer $d$, and generates $d$ finite lists $X\_1 \ldots, X\_d$ of elements. I would then like for it to output a list of the elements in the product set $X\_1 \times \cdots \times X\_d$. I can't use nested for-loops because $d$ can vary so I wouldn't know how many to nest. I'm sure there's a totally standard solution to this problem, but I don't know enough to search for it successfully either here on online. For what it's worth, here's one dumb solution I came up with. Let $b$ be the maximum cardinality of the sets $X\_i$. Then run a single loop for $n$ running from $0$ to $b^d$; for each $n$, write it in base $b$ and use the $i^{\rm th}$ digit to read off the element of $X\_i$ corresponding to that digit (and ignore if any of those digits are too big for the cardinality of the corresponding set). This will work, but feels like a pretty stupid solution. What's the standard way of doing this?
There is a legitimate mathematical question here: find an encoding of $X\_1\times X\_2\times\cdots\times X\_d$. The idea is to use *mixed base*. As you note, if all sets have cardinality $b$, then you can use base $b$ encoding. If the sets have different cardinalities, then you use a mixed base approach. Suppose $|X\_i| = b\_i$. The choice of elements $c\_i \in X\_i$ (say $0 \leq c\_i < b\_i$) can be encoded as follows: $$ c\_1 + b\_1 c\_2 + b\_1 b\_2 c\_3 + \cdots + b\_1 b\_2 \cdots b\_{d-1} c\_d. $$ Every number in the range $0$ to $b\_1 b\_2 \cdots b\_d - 1$ encodes a unique choice of element from $X\_1\times X\_2\times\cdots\times X\_d$. --- There is also a programming question here: how to generate the list. The idea is to use a generalization of a *binary counter*, familiar to students of amortized complexity. There is an array $c$ of length $d$, where $c\_i$ is going to encode a choice of element in $X\_i$. At each step, we update this "counter" by increasing it by 1. We start by increasing $c\_1$ by 1. If $c\_1 < b\_1$, everything is fine. Otherwise, there is a carry: we set $c\_1 = 0$, and increase $c\_2$ by 1. And so on. The resulting sequence is a lexicographically ordered list of elements in $X\_1\times X\_2\times\cdots\times X\_d$. The amortized complexity per step (assuming $b\_i > 1$ for all $i$) is $O(1)$.
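Here is a rough sketch of the mixed-radix counter in Python (the example lists are arbitrary); note that in practice Python's `itertools.product` already provides this enumeration, with the last coordinate varying fastest:

```python
# Enumerate X_1 x ... x X_d with a mixed-radix counter: digit i runs from
# 0 to |X_i| - 1 and carries into digit i + 1 when it overflows.
def mixed_radix_product(sets):
    bases = [len(s) for s in sets]
    if any(b == 0 for b in bases):
        return
    digits = [0] * len(sets)
    while True:
        yield tuple(s[c] for s, c in zip(sets, digits))
        i = 0
        while i < len(digits):
            digits[i] += 1
            if digits[i] < bases[i]:
                break
            digits[i] = 0   # carry into the next digit
            i += 1
        else:
            return          # counter wrapped around: enumeration is complete

X = [[1, 2], ['a', 'b', 'c'], [True, False]]
for tup in mixed_radix_product(X):
    print(tup)
```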
I have been told not to look at significance level, or not to use forward/backward selection using BIC/AIC for model selection. Let's say, I have 100 survey data with 11 variables and I want to see the relation of one of those variables with the rest. I wanted to regress the dependent variable on 10 explanatory variable. But, as I was suggested not to trust significance levels, on what basis do I select a model and confirm that it is indeed the best?
You're right that you should not use significance, or strategies that are based on it (e.g., forward or backward selection, etc.). The bigger question is why do you need to select a model at all? If you want to test a hypothesis about a particular variable, you can do that without any meaningful model selection. If you want to test a variable while controlling for certain other variables, you can do that too. You don't need to 'select' a model with certain covariates, just include the variables you were concerned about a-priori. On the other hand, if you want to build a model that you will use to make predictions in the future, it is better to focus on what data you will have in the future. In the predictive context, if you really need to select a model, [cross-validation](http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29) is likely to be your best bet.
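For the predictive case, a minimal sketch of comparing two candidate models by cross-validation (hypothetical data, scikit-learn used for concreteness):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                      # 100 observations, 10 predictors
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=100)    # only two predictors matter here

full = cross_val_score(LinearRegression(), X, y, cv=5, scoring='neg_mean_squared_error')
small = cross_val_score(LinearRegression(), X[:, :2], y, cv=5, scoring='neg_mean_squared_error')
print("10-predictor model, mean CV MSE:", -full.mean())
print(" 2-predictor model, mean CV MSE:", -small.mean())
```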
Say that I have a weighted graph $G = (V,E,w)$ such that $w:E\rightarrow [-1,1]$ is the weighting function -- note that negative weights are allowed. Say that $f:2^V\rightarrow \mathbb{R}$ defines a property of any subset of the vertices $S \subset V$. > > Question: What are some interesting examples of > $f$s for which the maximization > problem: $\arg\max\_{S \subseteq > V}f(S)$ can be performed in polynomial > time? > > > For example, the graph cut function $$f(S) = \sum\_{(u,v) \in E : u \in S, v \not\in S}w((u,v))$$ is an interesting property of subsets of vertices, but cannot be efficiently maximized. The edge density function is another example of an interesting property that alas, cannot be efficiently maximized. I'm looking for functions that are equally interesting, but *can* be efficiently maximized. I'll let the definition of "interesting" be somewhat vague, but I want the maximization problem to be non-trivial. For example it should not be that you can determine the answer without examining the edges of the graph (so constant functions, and the cardinality function are not interesting). It should also not be the case that $f$ is really just encoding some other function with a polynomially sized domain by padding it into the domain $2^V$ (i.e. I don't want there to be some small domain $X$, and some function $m:2^S\rightarrow X$ known before looking at the graph, such that the function of interest is really $g:X\rightarrow \mathbb{R}$, and $f(S) = g(m(S))$ If this is the case, then the "maximization" problem is really just a question of evaluating the function on all inputs.) Edit: Its true that sometimes *minimization* problems are easy if you ignore the edge weights (although not minimizing the cut function, since I allow negative edge weights). But I'm explicitly interested in maximization problems. It does not become an issue in natural weighted problems in this setting though.
Whenever $f(S)$ counts the number of edges $(u,v)$ satisfying some Boolean predicate defined in terms of $u\in S$ and $v\in S$, then what you wrote is just a Boolean 2-CSP. The objective function asks to maximize the number of satisfied clauses over all assignments to the variables. This is known to be NP-hard and the exact hardness threshold is also known assuming UGC (see Raghavendra'08). There are many natural positive examples when you want to maximize over subsets of edges, e.g., Maximum matching is one example of a polynomial-time problem in this case.
Currently I'm reading about the Halting problem. $H(\langle M,w\rangle)$ is a machine which will solve the Halting problem, and then using machine $H$ one creates a new machine $D$ and we run $H$ on input $\langle M, \langle M\rangle\rangle$. My question is: What is the significance of giving a description of Turing machine as input as in $\langle M,\langle M\rangle\rangle$? What is the logical reason behind giving the description of a Turing machine as input to $D$?
We can then think of the machine description $\langle M \rangle$ as the source code of $M$. If we do this, then the Halting Problem is really concerned with the decision problem *Given the source code of a program and an input string, will the program halt if fed the string as input?* There are many programs that take program code as input. A compiler is a well-known example; a specialized text editor is another. If the Halting Problem were decidable, there would be a predictor program that could correctly predict the behaviour of *any* given program, given its source code. We could then modify the predictor program such that it would always behave differently from the program that it was analyzing. But then we could of course feed this modified predictor program its own source code and predict how it would react. And the conclusion would be that its behaviour would be different from its own behaviour -- and this is of course a contradiction.
I am having a problem computing the pearson correlation coefficient of data sets with possibly zero standard deviation (i.e. all data has the same value). Suppose that I have the following two data sets: ``` float x[] = {2, 2, 2, 3, 2}; float y[] = {2, 2, 2, 2, 2}; ``` The correlation coefficient "r", would be computed using the following equation: ``` float r = covariance(x, y) / (std_dev(x) * std_dev(y)); ``` However, because all data in data set "y" has the same value, the standard deviation std\_dev(y) would be zero and "r" would be undefined. Is there any solution for this problem? Or should I use other methods to measure data relationship in this case?
The "sampling theory" people will tell you that no such estimate exists. But you can get one, you just need to be reasonable about your prior information, and do a lot harder mathematical work. If you specified a Bayesian method of estimation, and the posterior is the same as the prior, then you can say the data say nothing about the parameter. Because things may get "singular" on us, then we cannot use infinite parameter spaces. I am assuming that because you use Pearson correlation, you have a bivariate normal likelihood: $$p(D|\mu\_x,\mu\_y,\sigma\_x,\sigma\_y,\rho)=\left(\sigma\_x\sigma\_y\sqrt{2\pi(1-\rho^2)}\right)^{-N}exp\left(-\frac{\sum\_{i}Q\_i}{2(1-\rho^2)}\right)$$ where $$Q\_i=\frac{(x\_i-\mu\_x)^2}{\sigma\_x^2}+\frac{(y\_i-\mu\_y)^2}{\sigma\_y^2}-2\rho\frac{(x\_i-\mu\_x)(y\_i-\mu\_y)}{\sigma\_x\sigma\_y}$$ Now to indicate that one data set may be the same value, write $y\_i=y$, and then we get: $$\sum\_{i}Q\_i=N\left[\frac{(y-\mu\_y)^2}{\sigma\_y^2}+\frac{s\_x^2 + (\overline{x}-\mu\_x)^2}{\sigma\_x^2}-2\rho\frac{(\overline{x}-\mu\_x)(y-\mu\_y)}{\sigma\_x\sigma\_y}\right]$$ where $$s\_x^2=\frac{1}{N}\sum\_{i}(x\_i-\overline{x})^2$$ And so your likelihood depends on four numbers, $s\_x^2,y,\overline{x},N$. So you want an estimate of $\rho$, so you need to multiply by a prior, and integrate out the nuisance parameters $\mu\_x,\mu\_y,\sigma\_x,\sigma\_y$. Now to prepare for integration, we "complete the square" $$\frac{\sum\_{i}Q\_i}{1-\rho^2}=N\left[\frac{\left(\mu\_y-\left[y-(\overline{x}-\mu\_x)\frac{\rho\sigma\_y}{\sigma\_x}\right]\right)^2}{\sigma\_y^2(1-\rho^{2})}+\frac{s\_x^2}{\sigma\_{x}^{2}(1-\rho^{2})} + \frac{(\overline{x}-\mu\_x)^2}{\sigma\_x^2}\right]$$ Now we should err on the side of caution and ensure a properly normalised probability. That way we can't get into trouble. One such option is to use a weakly informative prior, which just places restriction on the range of each. So we have $L\_{\mu}<\mu\_x,\mu\_y<U\_{\mu}$ for the means with flat prior and $L\_{\sigma}<\sigma\_x,\sigma\_y<U\_{\sigma}$ for the standard deviations with jeffreys prior. These limits are easy to set with a bit of "common sense" thinking about the problem. I will take an unspecified prior for $\rho$, and so we get (uniform should work ok, if not truncate the singularity at $\pm 1$): $$p(\rho,\mu\_x,\mu\_y,\sigma\_x,\sigma\_y)=\frac{p(\rho)}{A\sigma\_x\sigma\_y}$$ Where $A=2(U\_{\mu}-L\_{\mu})^{2}[log(U\_{\sigma})-log(L\_{\sigma})]^{2}$. 
This gives a posterior of: $$p(\rho|D)=\int p(\rho,\mu\_x,\mu\_y,\sigma\_x,\sigma\_y)p(D|\mu\_x,\mu\_y,\sigma\_x,\sigma\_y,\rho)d\mu\_y d\mu\_x d\sigma\_x d\sigma\_y$$ $$=\frac{p(\rho)}{A[2\pi(1-\rho^2)]^{\frac{N}{2}}}\int\_{L\_{\sigma}}^{U\_{\sigma}}\int\_{L\_{\sigma}}^{U\_{\sigma}}\left(\sigma\_x\sigma\_y\right)^{-N-1}exp\left(-\frac{N s\_x^2}{2\sigma\_{x}^{2}(1-\rho^{2})}\right) \times$$ $$\int\_{L\_{\mu}}^{U\_{\mu}}exp\left(-\frac{N(\overline{x}-\mu\_x)^2}{2\sigma\_x^2}\right)\int\_{L\_{\mu}}^{U\_{\mu}}exp\left(-\frac{N\left(\mu\_y-\left[y-(\overline{x}-\mu\_x)\frac{\rho\sigma\_y}{\sigma\_x}\right]\right)^2}{2\sigma\_y^2(1-\rho^{2})}\right)d\mu\_y d\mu\_x d\sigma\_x d\sigma\_y$$ Now the first integration over $\mu\_y$ can be done by making a change of variables $z=\sqrt{N}\frac{\mu\_y-\left[y-(\overline{x}-\mu\_x)\frac{\rho\sigma\_y}{\sigma\_x}\right]}{\sigma\_y\sqrt{1-\rho^{2}}}\implies dz=\frac{\sqrt{N}}{\sigma\_y\sqrt{1-\rho^{2}}}d\mu\_y$ and the first integral over $\mu\_y$ becomes: $$\frac{\sigma\_y\sqrt{2\pi(1-\rho^{2})}}{\sqrt{N}}\left[\Phi\left( \frac{U\_{\mu}-\left[y-(\overline{x}-\mu\_x)\frac{\rho\sigma\_y}{\sigma\_x}\right]}{\frac{\sigma\_y}{\sqrt{N}}\sqrt{1-\rho^{2}}} \right)-\Phi\left( \frac{L\_{\mu}-\left[y-(\overline{x}-\mu\_x)\frac{\rho\sigma\_y}{\sigma\_x}\right]}{\frac{\sigma\_y}{\sqrt{N}}\sqrt{1-\rho^{2}}} \right)\right]$$ And you can see from here, no analytic solutions are possible. However, it is also worthwhile to note that the value $\rho$ has not dropped out of the equations. This means that the data and prior information still have something to say about the true correlation. If the data said nothing about the correlation, then we would be simply left with $p(\rho)$ as the only function of $\rho$ in these equations. It also shows how that passing to the limit of infinite bounds for $\mu\_y$ "throws away" some of the information about $\rho$, which is contained in the complicated looking normal CDF function $\Phi(.)$. Now if you have a lot of data, then passing to the limit is fine, you don't loose much, but if you have very scarce information, such as in your case - it is important keep every scrap you have. It means ugly maths, but this example is not too hard to do numerically. So we can evaluate the integrated likelihood for $\rho$ at values of say $-0.99,-0.98,\dots,0.98,0.99$ fairly easily. Just replace the integrals by summations over a small enough intervals - so you have a triple summation
We generally call an algorithm "good algorithm" if it's runnning time is polynomial in the worst-case. But in some cases (for example Simplex algorithm), eventhough the worst-case of the algorithm is exponential, it could work very well in practice. Are there any (deterministic) examples to this situatation other than Simplex algorithm?
Modern [SAT solving algorithms](http://en.wikipedia.org/wiki/Boolean_satisfiability_problem#Algorithms_for_solving_SAT) are able to solve most instances quite fast, even though the worst case running time is, of course, exponential. In this case, however, the practical speed is more of a result of years of algorithm engineering, rather than that of a single elegant algorithm. While I've understood that conflict driven clause learning caused a major jump in the performance of SAT solvers, the later improvements have often been achieved by a clever use of various heuristics in the algorithms.
Scott Aaronson said in the paper entitled "Why Philosophers Should Care About Computational Complexity" (Please see [ECCC Report: TR11-108](http://eccc.hpi-web.de/report/2011/108/), section 7, pp 25-31): > > Following the work of Kearns and Valiant, we now know that many > natural learning problems — as an example, inferring the rules of a > regular or context-free language from random examples of grammatical > and ungrammatical sentences — are computationally intractable. > > > My question is: Which factors make the problem of inferring the grammar difficult? Is the introducing random examples of ungrammatical sentences? If so, what would happen if the condition of "random examples of grammatical and ungrammatical sentences" is replace with "random examples of grammatical sentences with probability p>0 and random examples of ungrammatical sentences with probability 1-p"?
Regarding the difficulty of learning grammars, let's stick to regular ones for concreteness. These are precisely the grammars/languages recognized by Deterministic Finite-state Automata (DFAs). The source of difficulty is purely computational; the statistical aspects are quite straightforward. *If* you were able to find the smallest DFA consistent with your finite sample, simple Occam/cardinality arguments guarantee that this will generalize very well (it's the "optimal" learner in some sense). See Theorems 2 and 3 of Graepel et al. <http://research.microsoft.com/pubs/65635/graepelherbrichtaylor05.pdf> for actual state-of-the-art bounds. However, finding such a small DFA is VERY hard (see Angluin, Gold, Pitt-Warmuth, etc): <http://www.sciencedirect.com/science/article/pii/S0019995878906836> <http://web.mit.edu/6.863/www/spring2010/readings/gold67limit.pdf> <http://dl.acm.org/citation.cfm?id=138042> [the latter even gives a hardness-of-approximation result]. But wait, it gets worse! Suppose you didn't care about a DFA and just wanted to learn the grammar in *some* representation (i.e., a mechanism for predicting the labels of test strings drawn from the same distribution as the training set). If such an algorithm were to exist, and succeeded against *all* distributions, then it would also break RSA and related cryptographic primitives: <http://dl.acm.org/citation.cfm?id=697797>
I have a question and would like to hear what the community has to say. Suppose you are training a deep learning neural network. The implementation details are not relevant for my question. I know very well that if you choose a learning rate that is too big, you end up with a cost function that may becomes nan (if, for example, you use the sigmoid activation function). Suppose I am using the cross entropy as cost function. Typical binary classification (or even multi class with softmax) problem. I also know about why this happen. I often observe the following behaviour: my cost function decreases nicely, but after a certain number of epochs it becomes nan. Reducing the learning rate make this happen later (so after more epochs). Is this really because the (for example) gradient descent after getting very close to the minimum cannot stabilize itself and starts bouncing around wildly? I thought that the algorithm will not converge exactly to the minimum but should oscillates around it, remaining more or less stable there... Thoughts?
Well, if you get NaN values in your cost function, it means that the input is outside of the function domain. E.g. the logarithm of 0. Or it could be in the domain analytically, but due to numerical errors we get the same problem (e.g. a small value gets rounded to 0). It has nothing to do with an inability to "settle". So, you have to determine what the non-allowed function input values for your given cost function are. Then, you have to determine why you are getting that input to your cost function. You may have to change the scaling of the input data and the weight initialization. Or you just have to have an adaptive learning rate as suggested by Avis, as the cost function landscape may be quite chaotic. Or it could be because of something else, like numerical issues with some layer in your architecture. It is very difficult to say with deep networks, but I suggest you start looking at the progression of the input values to your cost function (the output of your activation layer), and try to determine a cause.
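As a concrete illustration of the log-of-zero case for cross-entropy, and the common clipping workaround (a minimal numpy sketch, not tied to any particular framework):

```python
import numpy as np

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([1.0, 0.0, 0.5])   # saturated sigmoid outputs: exact 0s and 1s

# Naive cross-entropy hits log(0) -> nan (with a runtime warning).
naive = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Clipping the predictions away from the boundary keeps log() inside its domain.
eps = 1e-12
clipped = np.clip(y_pred, eps, 1 - eps)
stable = -np.mean(y_true * np.log(clipped) + (1 - y_true) * np.log(1 - clipped))

print(naive, stable)   # nan vs. a finite value
```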
I try to teach myself the usage of bison. The manpage bison(1) says about bison: > > Generate a deterministic LR or generalized LR (GLR) parser employing LALR(1), IELR(1), or canonical LR(1) parser tables. > > > What is an IELR-parser? All relevant articles I found on the world wide web are paywalled.
The IELR(1) Parsing Algorithm
=============================

The [IELR(1) parsing algorithm was developed in 2008 by Joel E. Denny](https://doi.org/10.1016/j.scico.2009.08.001) as part of his Ph.D. research under the supervision of Brian A. Malloy at Clemson University. The IELR(1) algorithm is a variation of the so-called ["minimal" LR(1) algorithm developed by David Pager in 1977](https://doi.org/10.1007/BF00290336), which itself is a variation of the [LR(k) parsing algorithm invented by Donald Knuth in 1965](https://doi.org/10.1007/BF00290336). The IE in IELR(1) stands for inadequacy elimination (see last section).

LR(1) Algorithms
----------------

The [LR(1)](https://en.wikipedia.org/wiki/Canonical_LR_parser) part of IELR(1) stands for **L**eft to right, **R**ightmost derivation with 1 lookahead token. LR(1) parsers are also called canonical parsers. This class of parsing algorithms employs a bottom-up, shift-reduce parsing strategy with a stack and [state transition table](https://en.wikipedia.org/wiki/State_transition_table) determining the next action to take during parsing.

Historically, LR(1) algorithms have been disadvantaged by large memory requirements for their transition tables. Pager's improvement was to develop a method of combining the transition states when the transition table is generated, significantly reducing the size of the table. Thus Pager's algorithm makes LR(1) parsers competitive with other parsing strategies with respect to space and time efficiency. The phrase "minimal LR(1) parser" refers to the minimal size of the transition table introduced by Pager's algorithm.

Limitations of Pager's Algorithm
--------------------------------

Minimal LR(1) algorithms produce the transition table based on a particular input grammar for the language to be parsed. Different grammars can produce the same language. Indeed, it is possible for a non-LR(1) grammar to produce an LR(1) parsable language. In practice, LR(1) parser generators accept non-LR(1) grammars with a specification for resolving conflicts between two possible state transitions ("shift-reduce conflicts") to accommodate this fact.

Denny and Malloy found that Pager's algorithm fails to generate parsers powerful enough to parse LR(1) languages when provided certain non-LR(1) grammars even though the non-LR(1) grammar generates an LR(1) language. Denny and Malloy show that this limitation is not merely academic by demonstrating that Gawk and Gpic, both widely used, mature software, perform incorrect parser actions.

IELR(1)'s Improvements
----------------------

Denny and Malloy studied the source of the deficiencies of Pager's algorithm by comparing the transition table generated by Pager's algorithm to the transition table of an equivalent LR(1) grammar and identified two sources of what they term *inadequacies* that appear in the transition table from Pager's algorithm but not in the LR(1) transition table. Denny and Malloy's IELR(1) (*Inadequacy Elimination* LR(1)) algorithm is an algorithm designed to *eliminate* these *inadequacies* when generating the transition table that is virtually identical in size to that of Pager's algorithm.
This might be a general question: due to computational burden, I have to use a subset of my complete data (say, 1,000 out of the complete 10,000 observations) to get a p-value of a test. The test itself is based on Monte Carlo simulations. My question is: is there a way to quantify the uncertainty of the p-value **that is due to using the subset of 1,000 observations instead of the complete dataset?** Thanks!
For the typical low signal:noise ratio we see in most problems, a common rule of thumb is that you need about 15 times as many events and 15 times as many non-events as there are parameters that you entertain putting into the model. The rationale for that "rule" is that it results in a model performance metric that is likely to be as good or as bad in new data as it appears to be in the training data. But you need 96 observations just to estimate the intercept so that the overall predicted risk is within a $\pm 0.1$ margin of error of the true risk with 0.95 confidence.
I'm reasonably certain the absolute value of the distance covariance satisfies 1. $d(x, y) \ge 0$ (non-negativity, or separation axiom) 2. $d(x, y) = 0$ if and only if $x = y$ (identity of indiscernibles, or coincidence axiom) 3. $d(x, y) = d(y, x)$ (symmetry) But I'm not sure about: 4) $d(x, z) \le d(x, y) + d(y, z)$ (subadditivity / triangle inequality). I'm thinking in particular of a space of timeseries, but I don't think that matters too much.
You got it: covariance does, of course, induce a metric: $$ d(x,y) = \sqrt{ \text{Cov}(x-y,x-y) } $$ but NOT via $$ d(x,y) = \text{Cov}(x,y). $$ As for the triangle inequality, I am actually still wondering about that property myself: what effect does this property induce?
Since we normalize numeric features (e.g., 10 kg >>> 10 grams, or 1000 >> 10), what about one-hot encoding? If we encode, say, male = 0 and female = 1, are we giving more weight to female (since 1 > 0) when training our models?
The 0/1 encoding of male/female doesn't by itself put more weight on females versus males; it's not really different from having a value of 0 versus 1 (or 1 versus 2) in a continuous predictor. It's just a difference of 1 unit in the predictor value. As @Tim rightly points out, for many machine learning approaches normalization is not required and there is no issue about weighting. There is, however, a potential problem when your modeling method requires all predictors to be on the same scale. Examples are principal-components analysis (PCA) and penalized linear approaches like LASSO, ridge, or their hybrid elastic net. The default, in some implementations at least, is to normalize all predictors including the categorical predictors. That's where you can have difficulties. If you don't normalize the categorical predictors, are they on the same scales as the continuous predictors, as your penalization method implicitly assumes? If you do normalize 2-level predictors like male/female, the normalized values can depend heavily on the class frequencies in your sample. So normalization of predictors having class imbalance can alter weighting. For multi-level categorical predictors the issue is even more vexing as the choice of the reference level can affect the normalized values. [This page](https://stats.stackexchange.com/q/359015/28500) goes into more detail. There is no one-size-fits-all solution to this issue in PCA and penalized regression; intelligent application of your subject-matter knowledge might be best.
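As a small numerical illustration of the class-imbalance point (NumPy, made-up data): standardizing a 0/1 column produces codes that depend strongly on how common each level is, so the "same" one-unit male/female difference ends up on very different scales.

```python
import numpy as np

def standardize(x):
    return (x - x.mean()) / x.std()

balanced   = np.array([0] * 50 + [1] * 50)   # 50% coded as 1
imbalanced = np.array([0] * 95 + [1] * 5)    # only 5% coded as 1

# With a 50/50 split the two levels become roughly -1 and +1; with a 95/5
# split the rare level is pushed out to about +4.4, so a penalty "costs"
# this predictor very different amounts depending on the class frequencies.
print(np.unique(standardize(balanced)))      # approximately [-1.    1.  ]
print(np.unique(standardize(imbalanced)))    # approximately [-0.23  4.36]
```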
For the term *'predictor'*, I found the following definition: > > Predictor Variable: One or more variables that are used to > determine or predict the target variable. > > > Whereas Wikipedia contains the following definition of the word *'feature'*: > > Feature is an individual measurable property or characteristic of a > phenomenon being observed. > > > What is the difference between *'predictor'* and *'feature'* in machine learning?
Feature and predictor are used interchangeably in machine learning today, though I must admit that feature seems to be used more often than predictor. The definition of a feature is the Wikipedia one you have already quoted. The term predictor comes from statistics; here is one definition: [An independent variable, sometimes called an experimental or predictor variable, is a variable that is being manipulated in an experiment in order to observe the effect on a dependent variable, sometimes called an outcome variable.](https://statistics.laerd.com/statistical-guides/types-of-variable.php) and my favorite definition: [A predictor variable explains changes in the response.](http://statisticsbyjim.com/glossary/predictor-variables/) In a nutshell: X columns: features, predictors, independent variables, experimental variables. y column(s): target, dependent variable, outcome, outcome variable.
Are multiple and multivariate regression really different? What *is* a variate anyways?
I think the key insight (and differentiator) here aside from the number of variables on either side of the equation is that for the case of multivariate regression, the goal is to utilize the fact that there is (generally) correlation between response variables (or outcomes). For example, in a medical trial, predictors might be weight, age, and race, and outcome variables are blood pressure and cholesterol. We could, in theory, create two "multiple regression" models, one regressing blood pressure on weight, age, and race, and a second model regressing cholesterol on those same factors. However, alternatively, we could create a single multivariate regression model that predicts *both* blood pressure and cholesterol simultaneously based on the three predictor variables. The idea being that the multivariate regression model may be better (more predictive) to the extent that it can learn more from the correlation between blood pressure and cholesterol in patients.
I'm working on statistics for software builds. I have data for each build on pass/fail and elapsed time and we generate ~200 of these/week. The success rate is easy to aggregate, I can say that 45% passed any given week. But I'd like to aggregate elapsed time as well, and I want to make sure I don't misrepresent the data too badly. Figured I'd better ask the pros :-) Say I have 10 durations. They represent both pass and fail cases. Some builds fail immediately, which makes duration unusually short. Some hang during testing and eventually time out, causing very long durations. We build different products, so even successful builds vary between 90 seconds and 4 hours. I might get a set like this: ``` [50, 7812, 3014, 13400, 21011, 155, 60, 8993, 8378, 9100] ``` My first approach was to get the median time by sorting the set and picking the mid-value, in this case 7812 (I didn't bother with the arithmetic mean for even-numbered sets.) Unfortunately, this seems to generate a lot of variation, since I only pick out one given value. So if I were to trend this value it would bounce around between 5000-10000 seconds depending on which build was at the median. So to smooth this out, I tried another approach -- remove outliers and then calculate a mean over the remaining values. I decided to split it into tertiles and work only on the middle one: ``` [50, 60, 155, 3014, 7812, 8378, 8993, 9100, 13400, 21011] -> [50, 60, 155], [3014, 7812, 8378, 8993], [9100, 13400, 21011] -> [3014, 7812, 8378, 8993] ``` The reason this seems better to me is two-fold: * We don't want any action on the faster builds, they're already fine * The longest builds are likely timeout-induced, and will always be there. We have other mechanisms to detect those So it seems to me that this is the data I'm looking for, but I'm worried that I've achieved smoothness by removing, well, truth. Is this controversial? Is the method sane? Thanks!
What you're doing is known as a *trimmed mean*. As you have done, it's common to trim the same proportion from each side (the trimming proportion). You can trim anything between 0% (an ordinary mean) up to (almost) 50% (which gives the median). Your example has 30% trimmed from each end. See [this answer](https://stats.stackexchange.com/a/4254/805) and the relevant [Wikipedia article](http://en.wikipedia.org/wiki/Truncated_mean). [Edit: See Nick Cox's [excellent discussion](https://stats.stackexchange.com/a/114226/805) on this topic.] It's quite a reasonable, somewhat robust location estimator. It's generally considered more suitable for near-symmetric distributions than highly skewed ones, but if it suits your purpose\* there's no reason not to use it. How much is best to trim depends on the kinds of distribution you have and the properties you seek. \* It's not completely clear what you want to estimate here. There are a large number of other robust approaches to summarizing the 'center' of distributions, some of which you might also find useful. (e.g. M-estimators might have some use for you, perhaps) [If you need a corresponding measure of variability to go with your trimmed mean, a *Winsorized* standard deviation might be of some use to you (essentially, when calculating the s.d., replace the values you would cut off when trimming with the most extreme values you didn't cut off).]
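For reference, here is a short SciPy sketch using the durations from the question: a 30% trim from each end reproduces the "middle tertile" average, and a Winsorized standard deviation provides the matching spread measure.

```python
import numpy as np
from scipy import stats
from scipy.stats import mstats

durations = np.array([50, 7812, 3014, 13400, 21011, 155, 60, 8993, 8378, 9100])

# 30% trimmed mean: drop the lowest and highest 30% of values, average the rest.
print(stats.trim_mean(durations, 0.3))            # 7049.25, the "middle tertile" mean

# Winsorized SD: clamp the extremes to the nearest kept values instead of
# dropping them, then take the ordinary (sample) standard deviation.
winsorized = mstats.winsorize(durations, limits=(0.3, 0.3))
print(winsorized.std(ddof=1))
```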
I am implementing a matrix library for use in my research. It should support 2D matrices of size 100x100 (or perhaps more later on). I am a little confused about which algorithm I should be using for matrix multiplication, and I am extremely curious to know what kind of algorithms my peers use. The following are the possible choices: 1. Naive $O(n^3)$ 2. Strassen's Algorithm $O(n^{2.807})$ 3. Coppersmith-Winograd Algorithm $O(n^{2.375})$ Obviously (3) is quite complicated and may take quite a few tries to get right. Is it worth the effort? I must mention here that speed is paramount for my usage, as I will be making a large number of multiplication calls.
The fastest way to multiply dense matrices on a modern computer is to call [BLAS](http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms).
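To make that concrete, here is a small Python/NumPy sketch (NumPy's `@` and `dot` typically dispatch to an optimized BLAS such as OpenBLAS or MKL) comparing the textbook triple loop against the BLAS-backed call at the 100x100 size mentioned in the question:

```python
import time
import numpy as np

n = 100
A = np.random.rand(n, n)
B = np.random.rand(n, n)

def naive_matmul(A, B):
    # Textbook O(n^3) triple loop.
    rows, inner, cols = A.shape[0], A.shape[1], B.shape[1]
    C = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for k in range(inner):
                s += A[i, k] * B[k, j]
            C[i, j] = s
    return C

t0 = time.perf_counter(); C1 = naive_matmul(A, B); t1 = time.perf_counter()
t2 = time.perf_counter(); C2 = A @ B;              t3 = time.perf_counter()

print(f"naive loop: {t1 - t0:.4f} s   BLAS-backed: {t3 - t2:.6f} s")
print("results agree:", np.allclose(C1, C2))
```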
I am trying to solve this problem on and off for the past couple of months but to no success. This was supposed to be a very small part of my PhD thesis in navigation but I guess I underestimated the problem. It sounded trivial at the beginning, but now I am not so sure. Lets say we have two ships, each with its own nominal position in 2D coordinates (mean). Due to errors in positioning systems we can only be certain that the ships are within 1 mile of the mean with 95% probability (normal distribution). Given these 2 positions and this probability distribution, what is the probability that the ships are within 5 miles from each other? Also, same question if the ship's probable position is an ellipse, not a circle. I asked some people and they told me that there are no analytic solutions. If that is really the case, please explain how to solve it numerically. As you can already tell, I come from engineering background, therefore my math is more than a bit rusty. I apologize in advance if the question is too vague or too trivial for this forum. I will be more than happy to explain in more detail if needed. I found [this](https://stats.stackexchange.com/questions/12209/percentage-of-overlapping-regions-of-two-normal-distributions), but it is only for univariate case, and besides I don't know how to implement it in my case where I need to find the probability that the distance between two ships is less than 5 miles. I imagine this problem as a plane with two hills that intersect and the solution is the volume under the circle with diameter of 5 miles that is located somewhere between the two peaks of hills (means). Am I on the right track? Thanks
### Summary The problem is not trivial, but obtaining a solution is straightforward. Exact analytical expressions for the distribution of the inter-ship distance can be found (in terms of Bessel functions): it is the square root of a scaled non-central chi-squared variate. Provided the ships are far apart compared to the standard deviation of the position estimates, formulas for the mean and variance of this distribution provide an excellent Normal approximation. This can be used to develop either confidence intervals or a posterior distribution for the distance. --- A comment describes the data: > > The data that I have is 2 pairs of x,y coordinates that mark the estimated positions of 2 ships. Also, positional errors are bivariate normal with 95% probability of ship's actual position being within 1 mile of the expected position. > > > It will be convenient to obtain conventional parameters of the positional errors. A bivariate normal distribution with no correlation and variances of $\sigma^2$ for each of the coordinates has a total probability of $1 - \exp(-x^2/(2\sigma^2))$ within a distance $x$ of its mean. Letting $x$ be one mile and setting this expression to $0.95$ determines $\sigma^2$. In general, when the probability is $1-\alpha$ ($\alpha=0.05$ here) at a radius of $x$, then $$\sigma^2 = \frac{x^2}{-2 \log(\alpha)}.$$ Let $(X\_1,Y\_1)$ be the observed location of ship 1, assumed to be at the unknown location $(\mu\_{x1}, \mu\_{y1})$ and $(X\_2,Y\_2)$ the observed location of ship 2, assumed to be at $(\mu\_{x2}, \mu\_{y2})$. Their *squared* distance, $$D^2 = (X\_1 - X\_2)^2 + (Y\_1 - Y\_2)^2,$$ is a sum of squares of two Normal variates: $X\_1-X\_2$ has an expectation of $\mu\_{x1}-\mu\_{x2}$ and a variance of $2\sigma^2 = \sigma^2 + \sigma^2$ while $Y\_1-Y\_2$ has an expectation of $\mu\_{y1}-\mu\_{y2}$ and a variance of $2\sigma^2$. This makes $D^2$ equal to $2\sigma^2$ times a [non-central $\chi^2$ distribution](http://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution) with $\nu=2$ degrees of freedom and noncentrality parameter $$\lambda = \frac{(\mu\_{x1}-\mu\_{x2})^2 + (\mu\_{y1}-\mu\_{y2})^2}{2\sigma^2}.$$ Consequently, $D$ itself could be called a (scaled) "noncentral $\chi$ distribution." Calculations indicate that the mean of $D$ equals $\sqrt{2}\sigma$ times $$\frac{1}{2} e^{-\lambda /4} \sqrt{\frac{\pi }{2}} \left((2+\lambda ) \text{BesselI}\left[0,\frac{\lambda }{4}\right]+\lambda \text{BesselI}\left[1,\frac{\lambda }{4}\right]\right)$$ and (somewhat surprisingly) its raw second moment is $2\sigma^2$ times $2+\lambda$. As we would intuitively expect, the mean (upper blue curve) is close to $\sqrt{\lambda}$ (lower red curve), especially for large $\lambda$, which occurs when the ships are well separated: ![Plot of mean versus lambda](https://i.stack.imgur.com/7EGXK.png) From these, by matching moments, we obtain a Normal approximation to $D$. It is *remarkably* good when the ships are separated by several $\sigma$'s. (The Normal approximation has slightly shorter tails.) For instance, here are plots of the distribution of $D$ and its Normal approximation when the two ships are actually $5$ miles apart in the circumstances of the initial quotation: ![PDF plot and its approximation](https://i.stack.imgur.com/G5sCv.png) At this resolution, they perfectly coincide. The correct probability that $D$ is less than $5$, $\Pr(D\le 5)$, is equal to $0.476912$, while the probability given by the Normal approximation is $0.476807$: just $0.0001$ off. 
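Those figures are easy to reproduce numerically; here is a short SciPy sketch using the quantities derived above (the exact probability comes straight from the noncentral chi-squared CDF):

```python
import numpy as np
from scipy.stats import ncx2

alpha, radius = 0.05, 1.0                    # 95% of positions within 1 mile
sigma2 = radius**2 / (-2 * np.log(alpha))    # per-coordinate variance
delta = 5.0                                  # true separation of the ships, in miles
lam = delta**2 / (2 * sigma2)                # noncentrality parameter

# D^2 / (2 sigma^2) is noncentral chi-squared with 2 df and noncentrality lam,
# so Pr(D <= 5) = Pr(chi2_nc <= 25 / (2 sigma^2)).
print(ncx2.cdf(delta**2 / (2 * sigma2), df=2, nc=lam))   # about 0.4769
```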
However, **these calculations do not directly answer the question,** which is: given the observed value of $D$, what can we say about the *true* distance between the ships (equal to $\delta = \sqrt{(\mu\_{x1}-\mu\_{x2})^2 + (\mu\_{y1}-\mu\_{y2})^2}$)? This usually has two kinds of answers: 1. **For any desired level of confidence,** we can compute an associated *confidence interval* for $\delta$, or 2. **If we adopt a prior distribution for $\delta$**, we can update that distribution (via Bayes' Theorem) based on $D$ to obtain a posterior distribution. Either method is easy and straightforward when the Normal approximation to the distribution of $D$ is good. Both require some heavy computation otherwise--but that is perhaps a discussion for another day.
Is there a way to find very rough minimum and maximum estimates for the travelling salesman problem? The estimates only need to be within the roughly same magnitude, but it's important that the *minimum estimate is lower (or equal) to the actual distance* and that the *maximum estimate is higher (or equal) to the actual distance*. EDIT: Metric TSP unfortunately does not apply. To add some context: I'm searching for non-dominated fronts (pareto fronts) for a Multiobjective TSP, e.g. optimizing for multiple variables like travel time *and* distance. I'm using [NSGA-II](http://www.iitk.ac.in/kangal/Deb_NSGA-II.pdf) which requires an upper and lower bound for each objective function, which translates into rough estimates for single-objective TSP, with the constraint that the upper estimate has to be higher than the real max distance and vice versa.
It is conjectured (but unproven) that the Held-Karp relaxation (*not* the Held-Karp algorithm) has an integrality ratio of 4/3 (on metric TSP). Various proofs exist for even better bounds if some assumptions are made. In practice it performs even better: gaps of ~2% are normal, and there is a lot of literature about adding more cuts that bring that gap (typically) a fair way below 1%, but that would not fit with your goal of "fast and easy to implement" anymore. Just the basic degree constraints + lazy sub-tour elimination is, when using an existing LP solver, both fast and easy, and a lot better than taking one of the well-known upper bounds and dividing it by its approximation ratio (which tends to give terrible results, because those heuristics typically approximate much better than their worst case, resulting in an overly optimistic lower bound).
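For a concrete starting point, here is a hedged Python/SciPy sketch. It keeps only the degree constraints and skips the lazy subtour cuts, so the lower bound is weaker than the full Held-Karp value but still valid; the nearest-neighbour tour supplies a cheap valid upper bound.

```python
import numpy as np
from scipy.optimize import linprog

def tsp_bounds(dist):
    """Rough bounds for symmetric TSP given a distance matrix.

    Lower bound: LP with only the degree-2 constraints (a weakened Held-Karp
    relaxation, no subtour cuts). Upper bound: nearest-neighbour tour length.
    """
    n = len(dist)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    c = np.array([dist[i][j] for i, j in edges])

    # One equality row per vertex: the incident edge variables must sum to 2.
    A_eq = np.zeros((n, len(edges)))
    for k, (i, j) in enumerate(edges):
        A_eq[i, k] = A_eq[j, k] = 1.0
    lp = linprog(c, A_eq=A_eq, b_eq=np.full(n, 2.0),
                 bounds=[(0, 1)] * len(edges), method="highs")
    lower = lp.fun

    # Any feasible tour is an upper bound; nearest neighbour builds one quickly.
    unvisited, tour = set(range(1, n)), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda v: dist[tour[-1]][v])
        unvisited.remove(nxt)
        tour.append(nxt)
    upper = sum(dist[tour[k]][tour[k + 1]] for k in range(n - 1)) + dist[tour[-1]][tour[0]]
    return lower, upper

rng = np.random.default_rng(0)
pts = rng.random((30, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(tsp_bounds(dist))   # (lower, upper) bracket the optimal tour length
```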
When converting a CFG to a PDA, I know that you get three main states: Qstart, Qloop and Qaccept. But Qloop will need a varying number of states, and my question is: how many? Is there a way to find out the "worst case scenario" of how many states there can potentially be? I don't mean for one particular CFG, but in general. I'm having difficulties trying to figure out how I can calculate this...
When converting a CFG to an NPDA, to simplify the process first convert the CFG to Greibach Normal Form (GNF). Using the three states Qstart, Qloop and Qaccept, you can then generate all transition rules for the NPDA. To start the process, add the transition $δ(Qstart,λ,z)=\{(Qloop,Sz)\}$, where $S$ is the start variable and $z$ is the stack start symbol. Then add the following transition to get the NPDA into a final state: $δ(Qloop,λ,z)=\{(Qaccept,z)\}$. Now, using the production rules of the CFG, you can easily generate all the other transitions, which involve only $Qloop$. The number of these transitions depends on the number of production rules.
Consider the following three phenomena. 1. Stein's paradox: given some data from multivariate normal distribution in $\mathbb R^n, \: n\ge 3$, sample mean is not a very good estimator of the true mean. One can obtain an estimation with lower mean squared error if one shrinks all the coordinates of the sample mean towards zero [or towards their mean, or actually towards any value, if I understand correctly]. *NB: usually Stein's paradox is formulated via considering only one single data point from $\mathbb R^n$; please correct me if this is crucial and my formulation above is not correct.* 2. Ridge regression: given some dependent variable $\mathbf y$ and some independent variables $\mathbf X$, the standard regression $\beta = (\mathbf X^\top \mathbf X)^{-1} \mathbf X^\top \mathbf y$ tends to overfit the data and lead to poor out-of-sample performance. One can often reduce overfitting by shrinking $\beta$ towards zero: $\beta = (\mathbf X^\top \mathbf X + \lambda \mathbf I)^{-1} \mathbf X^\top \mathbf y$. 3. Random effects in multilevel/mixed models: given some dependent variable $y$ (e.g. student's height) that depends on some categorical predictors (e.g. school id and student's gender), one is often advised to treat some predictors as 'random', i.e. to suppose that the mean student's height in each school comes from some underlying normal distribution. This results in shrinking the estimations of mean height per school towards the global mean. I have a feeling that all of this are various aspects of the same "shrinkage" phenomenon, but I am not sure and certainly lack a good intuition about it. So my main question is: **is there indeed a deep similarity between these three things, or is it only a superficial semblance?** What is the common theme here? What is the correct intuition about it? In addition, here are some pieces of this puzzle that don't really fit together for me: * In ridge regression, $\beta$ is not shrunk uniformly; ridge shrinkage is actually related to singular value decomposition of $\mathbf X$, with low-variance directions being shrunk more (see e.g. *The Elements of Statistical Learning* 3.4.1). But James-Stein estimator simply takes the sample mean and multiplies it by one scaling factor. How does that fit together? **Update:** see [James-Stein Estimator with unequal variances](https://stats.stackexchange.com/questions/119786) and [e.g. here](https://stats.stackexchange.com/questions/104704) regarding variances of $\beta$ coefficients. * Sample mean is optimal in dimensions below 3. Does it mean that when there are only one or two predictors in the regression model, ridge regression will always be worse than ordinary least squares? Actually, come to think of it, I cannot imagine a situation in 1D (i.e. simple, non-multiple regression) where ridge shrinkage would be beneficial... **Update:** No. See [Under exactly what conditions is ridge regression able to provide an improvement over ordinary least squares regression?](https://stats.stackexchange.com/questions/122936) * On the other hand, sample mean is *always* suboptimal in dimensions above 3. Does it mean that with more than 3 predictors ridge regression is always better than OLS, even if all the predictors are uncorrelated (orthogonal)? Usually ridge regression is motivated by multicollinearity and the need to "stabilize" the $(\mathbf X^\top \mathbf X)^{-1}$ term. **Update:** Yes! See the same thread as above. 
* There are often heated discussions about whether various factors in ANOVA should be included as fixed or random effects. Shouldn't we, by the same logic, always treat a factor as random if it has more than two levels (or if there are more than two factors? now I am confused)? **Update:** ? --- **Update:** I got some excellent answers, but none provides enough of a big picture, so I will leave the question "open". **I can promise to award a bounty of at least 100 points** to a new answer that will surpass the existing ones. I am mostly looking for a unifying view that could explain how the general phenomenon of shrinkage manifests itself in these various contexts and point out the principal differences between them.
I'm going to leave it as an exercise for the community to flesh this answer out, but in general the reason why shrinkage estimators will \*dominate\*$^1$ unbiased estimators in finite samples is that Bayes$^2$ estimators *cannot* be dominated$^3$, and many shrinkage estimators can be derived as being Bayes.$^4$ All of this falls under the aegis of Decision Theory. An exhaustive, but rather unfriendly, reference is "Theory of point estimation" by Lehmann and Casella. Maybe others can chime in with friendlier references? --- $^1$ An estimator $\delta\_1(X)$ of parameter $\theta \in \Omega$ on data $X$ is *dominated* by another estimator $\delta\_2(X)$ if for every $\theta \in \Omega$ the Risk (e.g., Mean Square Error) of $\delta\_1$ is equal to or larger than that of $\delta\_2$, and $\delta\_2$ beats $\delta\_1$ for at least one $\theta$. In other words, you get equal or better performance for $\delta\_2$ everywhere in the parameter space. $^2$ An estimator is Bayes (under squared-error loss anyways) if it is the posterior expectation of $\theta$, given the data, under some prior $\pi$, e.g., $\delta(X) = E(\theta | X)$, where the expectation is taken with respect to the posterior. Naturally, different priors lead to different risks for different subsets of $\Omega$. An important toy example is the prior $$\pi\_{\theta\_0} = \begin{cases} 1 & \mbox{if } \theta = \theta\_0 \\ 0 & \theta \neq \theta\_0 \end{cases} $$ that puts all prior mass at the point $\theta\_0$. Then you can show that the Bayes estimator is the constant function $\delta(X) = \theta\_0$, which of course has extremely good performance at and near $\theta\_0$, and very bad performance elsewhere. But nonetheless, it cannot be dominated, because only that estimator leads to zero risk at $\theta\_0$. $^3$ A natural question is whether any estimator that cannot be dominated (called **admissible**, though wouldn't indomitable be snazzier?) must be Bayes. The answer is: almost. See "complete class theorems." $^4$ For example, ridge regression arises as a Bayesian procedure when you place a Normal(0, $1/\lambda^2$) prior on $\beta$, ~~and random effect models arise as an empirical Bayesian procedure in a similar framework~~. These arguments are complicated by the fact that the vanilla version of the Bayesian admissibility theorems assumes that every parameter has a proper prior placed on it. Even in ridge regression, that is not true, because the "prior" being placed on the variance $\sigma^2$ of the error term is the constant function (Lebesgue measure), which is not a proper (integrable) probability distribution. But nonetheless, many such "partially" Bayes estimators can be shown to be admissible by demonstrating that they are the "limit" of a sequence of estimators that are proper Bayes. But proofs here get rather convoluted and delicate. See "generalized Bayes estimators".
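To make the dominance claim concrete, here is a small simulation sketch (NumPy; squared-error loss, one multivariate normal observation with identity covariance per replication, and the classic James-Stein shrinkage toward zero):

```python
import numpy as np

rng = np.random.default_rng(0)
d, reps = 10, 100_000
theta = rng.normal(size=d)                 # any fixed true mean will do

X = theta + rng.normal(size=(reps, d))     # X ~ N(theta, I_d), one draw per replication
norms2 = np.sum(X**2, axis=1, keepdims=True)
js = (1 - (d - 2) / norms2) * X            # James-Stein shrinkage toward 0

risk_mle = np.mean(np.sum((X - theta)**2, axis=1))    # risk of the unbiased estimator
risk_js  = np.mean(np.sum((js - theta)**2, axis=1))   # risk of the shrinkage estimator
print(risk_mle, risk_js)   # for d >= 3 the James-Stein risk comes out strictly smaller
```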
As I have been teaching the basics of the λ-calculus lately, I have implemented a simple λ-calculus evaluator in Common Lisp. When I ask for the normal form of `Y fac 3` in normal-order reduction, it takes 619 steps, which seemed a bit much. Of course, each time I did similar reductions on paper, I never used the untyped λ-calculus, but added numbers and functions operating on them. In this case, fac is defined as such: ``` fac = λfac.λn.if (= n 0) 1 (* n (fac (- n 1))) ``` In this case, considering `=`, `*` and `-` as curried functions, it only takes approximately 50 steps to get `Y fac 3` to its normal form `6`. But in my evaluator, I used the following: ``` true = λx.λy.x false = λx.λy.y ⌜0⌝ = λf.λx.x succ = λn.λf.λx.f n f x ⌜n+1⌝ = succ ⌜n⌝ zero? = λn.n (λx.false) true mult = λm.λn.λf.m (n f) pred = λn.λf.λx.n (λg.λh.h (g f)) (λu.x) (λu.u) fac = λfac.λn.(zero? n) ⌜1⌝ (* n (fac (pred n))) Y = λf.(λf.λx.f (x x)) f ((λf.λx.f (x x)) f) ``` In 619 steps, I get from `Y fac ⌜3⌝` to the normal form of `⌜6⌝`, namely `λf.λx.f (f (f (f (f (f x)))))`. From a quick skimming of the many steps, I guess it's the definition of `pred` that warrants such a long reduction, but I still wonder whether it might just be a big nasty bug in my implementation... EDIT: I initially asked about a thousand steps, some of which were indeed caused by an incorrect implementation of the normal order, so I got down to 2/3 of the initial number of steps. As commented below, with my current implementation, switching from Church to Peano arithmetic actually increases the number of steps…
Church coding is really bad if you want to use `pred`. I would advise you to use some more efficient coding in Peano style: ``` // arithmetics : p_zero = λs.λz.z : p_one = λs.λz.s p_zero : p_succ = λn.λs.λz.s n : p_null = λn.n (λx. ff) tt : p_pred = λn.n (λp.p) p_zero : p_plus = μ!f.λn.λm.n (λp. p_succ (!f p m)) m : p_subs = μ!f.λn.λm.n (λp. p_pred (!f p m)) m : p_eq = μ!f.λm.λn. m (λp. n (λq. !f p q) ff) (n (λx.ff) tt) : p_mult = μ!f.λm.λn. m (λp. p_plus n (!f p n)) p_zero : p_exp = μ!f.λm.λn. m (λp. p_mult n (!f p n)) p_one : p_even = μ!f.λm. m (λp. not (!f p)) tt // numbers : p_0 = λs.λz.z : p_1 = λs.λz.s p_0 : p_2 = λs.λz.s p_1 : p_3 = λs.λz.s p_2 ... ``` This is some code taken from one of my old libraries, and `μ!f. …` was just an optimized construction for `Y (λf. …)`. (And `tt`, `ff`, `not` are booleans.) I'm not really sure you would obtain better results for `fac` though.
I am trying to find out how many times the "statement" is executed by finding its formula based on these loops: ``` int s = 0; for(int k = n; k > 0; k /= 2) { for(int l = k; l < n; l++) { s++; // statement } } ``` I have been stuck with this problem for a while since I couldn't really get the correct formula whenever I compared the result of my formula to the output s. I started by doing this: $$T(n) = \sum\_{k=1}^{\left\lfloor{\log\_2(n)}\right\rfloor + 1}\sum\_{l=k}^{n-1} 1$$ then eventually got this as a result: $$T(n) = (1/2)(\left\lfloor{\log\_2(n)}\right\rfloor + 1)(2n - \left\lfloor{\log\_2(n)}\right\rfloor-2)$$ Does anyone know how to solve this kind of problem? (This is a self-made problem btw since I kind of find analysis of algorithms fun)
We can determine the *exact* value of $s$ as a function of $n$. For the sake of brevity, let $\lambda = \left \lfloor \lg n \right \rfloor$. We can replace the inner loop with: > > $s \leftarrow s+n-k$ > > > At the end of the program we'll have: $$ s= \sum\_{h=0}^{\lambda} n - \left \lfloor \frac{n}{2^h} \right \rfloor = n \left( \lambda + 1 \right) - \sum\_{h=0}^{\lambda} \left \lfloor \frac{n}{2^h} \right \rfloor $$ Our problem is now rewriting the last term in a nicer form. Certainly $n$ has a unique representation in base $2$; in particular it is true that $$ n = \sum\_{u \in S} 2^u $$ for some $S \subseteq \mathbb{N}\_{\le \lambda}$. Therefore we have: $$ \sum\_{h=0}^{\lambda} \left \lfloor \frac{n}{2^h} \right \rfloor = \sum\_{h=0}^{\lambda} \sum\_{u \in S} \left \lfloor 2^{u-h} \right \rfloor = \sum\_{u \in S} \sum\_{h=0}^{\lambda} \left \lfloor 2^{u-h} \right \rfloor = \sum\_{u \in S} \left( 2^{u+1} -1 \right) $$ But then: $$ \sum\_{u \in S} \left( 2^{u+1} -1 \right) = 2n - |S| $$ Piecing it all together, at the end of the program we will have: $$ s = n \left ( \left \lfloor \lg n \right \rfloor - 1\right) + \textbf{1}(n) $$ where $\textbf{1}(n)$ is the Hamming weight of $n$, that is, the number of ones in its binary representation.
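Here is a short brute-force check of this closed form against the original loop (the helper names below are mine):

```python
def brute_force(n):
    s, k = 0, n
    while k > 0:
        for l in range(k, n):
            s += 1
        k //= 2
    return s

def closed_form(n):
    floor_lg = n.bit_length() - 1           # floor(lg n)
    popcount = bin(n).count("1")            # number of ones in the binary representation
    return n * (floor_lg - 1) + popcount

assert all(brute_force(n) == closed_form(n) for n in range(1, 2000))
print("closed form matches the loop for n = 1 .. 1999")
```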
As I understand it, in computer science data types are not based on set theory because of things like Russell's paradox. But since real-world programming languages cannot express such problematic data types as "the set that does not contain itself", can we say that in practice a type is a (possibly infinite) set of its members, where membership of an instance is determined by a number of features intrinsic to that type/set (the existence of certain properties, methods)? If not, what would be a counterexample?
The main reason for avoiding sets in semantics of types is that a typical programming language allows us to define arbitrary recursive functions. Therefore, whatever the meaning of a type is, it has to have the fixed-point property. The only set with such a property is the singleton set. To be more precise, a recursively defined value $v$ of type $\tau$ (where typically $\tau$ is a function type) is defined by a fixed-point equation $v = \Phi(v)$ where $\Phi : \tau \to \tau$ can be any program. If $\tau$ is interpreted as the set $T$ then we would expect every $f : T \to T$ to have a fixed point. But the only set $T$ with this property is the singleton. Of course, you could also realize that the culprit is classical logic. If you work with intuitionistic set theory, then it is consistent to assume that there are many sets with fixed-point property. In fact, this has been used to give semantics of programming language, see for example > > Alex Simpson, *[Computational Adequacy for Recursive Types in Models of Intuitionistic Set Theory](http://homepages.inf.ed.ac.uk/als/Research/cartinmist.pdf)*, In Annals of Pure and Applied Logic, 130:207-275, 2004. > > >
I would like to prove that any context free language over a 1 letter alphabet is regular. I understand there is Parikh's theorem but I want to prove this using the work I have done so far: Let $L$ be a context free language. So, $L$ satisfies the pumping lemma. Let $p$ be $L$'s pumping constant. Let $L = L\_1 \cup L\_2$ where $L\_1$ consists of $w$ where $|w| < p$ and $L\_2$ consists of $w$ where $|w| \ge p$. We have a single letter alphabet and since $L\_1$ has a restriction on the length of its words, $L\_1$ is a finite language. Finite languages are regular so $L\_1$ is regular. If I can show that $L\_2$ is regular, I can use the fact that the union of regular languages is regular. But I am struggling on showing that $L\_2$ is regular. I know that since $w \in L\_2$ has to satisfy $|w| \ge p$, by the pumping lemma, $w$ can be written as $w = uvxyz$ where $|vxy| \le p$, $|vy| > 0$ and $\forall t \ge 0$, $uv^txy^tz \in L$. Since we have a single letter alphabet (say the letter is $a$), $uv^txy^tz = uxz(vy)^t = uxz(a^{|vy|})^t \in L$. Now what?
Let $L$ be a unary context-free language. According to the pumping lemma, there is a constant $p$ such that if $a^n \in L$ then either $n < p$ or there exists $q \in \{1,\ldots,p\}$ such that $a^{n+tq} \in L$ for all $t \geq 0$ (actually, the pumping lemma gives $t \geq -1$). Let $L\_0$ be the set of words in $L$ of length smaller than $p$, and for $q\_0 \in \{1,\ldots,p\}$, let $L\_{q\_0}$ be the set of words in $L$ of length at least $p$ for which the pumping lemma gives $q = q\_0$. I claim that $$ L = L\_0 \cup \bigcup\_{q=1}^p \{ a^{n+tq} : a^n \in L\_q, t \geq 0\}. $$ Proof of $ \subseteq $: If $x \in L$ then either $|x| < p$, in which case $x \in L\_0$, or $|x| \geq p$, in which case $x \in L\_q$ for some $q$, and so $x$ belongs to the $q$'th summand on the right (choosing $t = 0$). Proof of $ \supseteq $: This follows directly by definition of $L\_0$ and $L\_q$. Since $L\_0$ is finite, it is regular. Hence it suffices to prove that for each $q \in \{1,\ldots,p\}$, the following language is regular: $$ R\_q := \{ a^{n+tq} : a^n \in L\_q, t \geq 0\}. $$ Let us write, for $0 \leq r \leq q-1$, $$ R\_{q,r} := \{ a^{n+tq} : a^n \in L\_q, t \geq 0, n \equiv r \pmod q \}, $$ so that $R\_q = \bigcup\_{r=0}^{q-1} R\_{q,r}$. Thus it suffices to prove that each $R\_{q,r}$ is regular. If no $a^n \in L\_q$ satisfies $n \equiv r \pmod q$, then $R\_{q,r} = \emptyset$ is clearly regular. Otherwise, let $n\_{q,r}$ be the minimal $n \equiv r \pmod q$ such that $a^n \in L\_q$. I claim that $$ R\_{q,r} = \{a^{n\_{q,r} + tq} : t \geq 0\} = a^{n\_{q,r}} (a^q)^\*, $$ and so $R\_{q,r}$ is regular. Indeed, clearly $R\_{q,r}$ contains the right-hand side. On the other hand, suppose that $a^m \in R\_{q,r}$. Then $m = n+tq$ for some $n \equiv r \pmod q$ and $t \geq 0$ such that $a^n \in L\_q$. But then $m = n\_{q,r} + (t+(n-n\_{q,r})/q)q$ (our choice of $n\_{q,r}$ guarantees that $n \geq n\_{q,r}$ and that $n-n\_{q,r}$ is divisible by $q$), and so $a^m$ is an element of the right-hand side.
I was reading [this article](http://www.paulgraham.com/avg.html). The author talks about "The Blub Paradox". He says programming languages vary in power. That makes sense to me. For example, Python is more powerful than C/C++. But its performance is not as good as that of C/C++. Is it always true that more powerful languages must **necessarily** have lesser **possible** performance when compared to less powerful languages? Is there a law/theory for this?
In general, it's about what the language and its implementors are trying to do. C has a long culture of keeping things as close to the hardware as possible. It doesn't do anything that could easily be translated into machine code at compile time. It was intended as a multi-platform kind of low level language. As time went on (and it was a *lot* of time!), C became sort of a target language for compilers in turn - it was a relatively simple way to get your language to compile for all the platforms that C compiled for, which was a lot of languages. And C ended up being the API-system of choice for most desktop software - not because of any inherent qualities in the way C calls things or shares header files or whatever, but simply because the barrier to introducing a new way is very high. So again, the alternatives usually sacrifice performance for other benefits - just compare C-style APIs with COM. That isn't to say that C wasn't used for development, of course. But it's also clear that people were well aware of its shortcomings, since even people doing "hard-core" stuff like OS development always tried to find better languages to work with - LISP, Pascal, Objective-C etc. But C (and later C++) remained at the heart of most system-level stuff, and the compilers were continuously tweaked to squeeze out extra performance (don't forget there's ~50 years of C by now). C wasn't significantly improved in capabilities over that time; that was never seen as particularly important, and would conflict with the other design pillars. Why do you design a new language? To make something better. But you can't expect to get *everything* better; you need to focus. Are you looking for a good way to develop GUIs? Build templates for a web server? Resolve issues with reliability or concurrency? Make it easier to write *correct* programs? Now, out of some of those, you may get performance benefits. Abstraction usually has costs, but it can also mean you can spend more of your time performance tweaking small portions of code. It's definitely not true that using a low-level language (like C) *will* net you better performance. What *is* true, is that if you really really want to, you *can* reach the highest performance with a low-level language. As long as you don't care about the cost, maintainability and all that. Which is where economies of scale come in - if you can have a 100 programmers save performance for 100M programmers through a low-level tweak, that might be a great pay off. The same way, a lot of smart people working on a good high-level language can greatly increase the output of a lot more people using that language. There is a saying that a sufficiently powerful compiler will be able to eliminate all the costs of high-level languages. In some sense, it's true - every problem eventually needs to be translated to a language the CPU understands, after all. Higher level abstractions mean you have fewer constraints to satisfy; a custom .NET runtime, for example, doesn't *have* to use a garbage collector. But of course, we do not have unlimited capacity to work on such compilers. So as with any optimisation problem, you solve the issues that are the most painful to you, and bring you the most benefit. And you probably didn't start the development of a new, high level language, to try to rival C in "raw" power. You wanted to solve a more specific problem. For example, it's really hard to write high-performance concurrent code in C. Not *impossible*, of course. 
But the "everything is shared and mutable by default" model means you have to either be extremely careful, or use plenty of guards everywhere. In higher level languages, the compiler or runtime can do that for you, and decide where those can be omitted. More powerful programming languages tend to have slower implementations because fast implementations were never a priority, and may not be cost effective. Some of the higher level features or guarantees may be hard to optimise for performance. Most people don't think performance should trump everything - even the C and C++ people are using C or C++, after all. Languages often trade run-time, compile-time and write-time performance. And you don't even have to look at languages and their implementations to see that - for example, compare the original Doom engine with Duke Nukem 3D. Doom's levels need significant compile-time - Duke's can be edited in real-time. Doom had better runtime performance, but it didn't matter by the time Duke launched - it was fast enough, and that's all that matters when you're dealing with performance on a desktop. What about performance on a server? You might expect a much stronger focus on performance in server software. And indeed, for things like database engines, that's true. But at the same time, servers are flooded with software like PHP or Node.js. Much of what's happening in server-space shifted from "squeeze every ounce of performance from this central server node" to "just throw a hundred servers at the problem". Web servers were always designed for high concurrency (and decentralisation) - that's one big reason why HTTP and the web were designed to be state-less. Of course, not everyone got the memo, and it's handy to have some state - but it still makes decoupling state from a particular server much easier. PHP is not a powerful language. It's not particularly nice to work with. But it provided something people needed - simple templating for their web sites. It took quite a while for performance to become an important goal, and it was further "delayed" by sharding, caching, proxying etc. - which were very simple to do thanks to the *limitations* of PHP and HTTP. But surely, you'll always write an OS in C/C++? Well, for the foreseeable future on the desktop, sure. But not because of raw performance - the trump card is compatibility. Many research OSes have cropped up over time that provide greater safety, security, reliability *and* performance (particularly in highly concurrent scenarios). A fully memory managed OS makes many of the costs of managed memory go away; better memory guarantees, type safety and runtime type information allow you to elude many runtime checks and costs with task switching etc. Immutability allows processes to share memory safely and easily, at very low cost (heck, many of Unix strengths and weaknesses come from how `fork` works). Doing compilation on the target computer means you can't spend so much time optimising, but it also means you are targeting a very specific configuration - so you can always use the best available CPU extensions, for example, without having to do any runtime checks. And of course, *safe* dynamic code can bring its own performance benefits too (my software 3D renderer in C# uses that heavily for shader code; funnily enough, thanks to all the high-level language features, it's much simpler, faster *and* more powerful than e.g. the Build engine that powers Duke Nukem 3D - at the cost of extra memory etc.). 
We're doing engineering here (poor as it may be). There's trade-offs to be had. Unless squeezing every tiny bit of performance out of your language gives you the greatest possible benefit, you shouldn't be doing it. C wasn't getting faster to please C programmers; it was getting faster because there were people who used it to work on stuff that made things faster for *everyone else*. That's a lot of history that can be hard to beat, and would you really want to spend the next 50 years catching up with some low-level performance tweaks and fixing tiny incompatibilities when nobody would want to use your language in the first place because it doesn't provide them with any real benefit over C? :)
Please consider the following question given in a booklet: [![enter image description here](https://i.stack.imgur.com/VJC1q.png)](https://i.stack.imgur.com/VJC1q.png) I have read about databases from **Database System Concepts - by Henry F. Korth**, but I never saw an instance where we could do a NULL comparison in relational algebra queries - unlike SQL queries. (C) is given as the correct option, but my question is: Is statement III given above a valid Relational Algebra query?
There is a simple proof using context-free grammars. Let $L$ be a context-free language with context-free grammar $\langle V,T,P,S \rangle$, and let $R$ be a regular language with DFA $\langle \Sigma,Q,q\_0,F,\delta \rangle$. We construct a new context-free grammar whose nonterminals consist of a new start symbol $S'$ and the triples in $Q \times V \times Q$. The idea is that $\langle q\_1,A,q\_2 \rangle$ generates all words $w \in L(A)$ such that $\delta(q\_1,w) = q\_2$. Let us assume, without loss of generality, that the grammar is in Chomsky normal form: all rules are of the form $A \to BC$, $A \to a$, $S \to \epsilon$; and $S$ doesn't appear on the right-hand side of a rule. We can carry out the construction even without this simplifying assumption – that's a good exercise for you. The rules of the new grammar are: 1. For every $q \in F$: $S' \to \langle q\_0, S, q \rangle$. 2. If $S \to \epsilon$ is a rule and $q\_0 \in F$: $S' \to \epsilon$. 3. For every rule $A \to a$ and for every $q \in Q$: $\langle q,A,\delta(q,a) \rangle \to a$. 4. For every rule $A \to BC$ and for every $q\_1,q\_2,q\_3 \in Q$: $\langle q\_1,A,q\_3 \rangle \to \langle q\_1,B,q\_2 \rangle \langle q\_2,C,q\_3 \rangle$. Correctness proof left to you.
I understand that in observational studies, if we want to use regression for analysis, and if some variables are supposed to be confounding variables, we should control for them in the regression model. However, I am a bit confused about adding covariates in an ANOVA kind of analysis. Here is the scenario: Scenario: Participants are **randomly assigned** to 3 different treatment groups, and we then measure their performance (e.g., response times) on some task as our only DV. However, participants' IQ and EQ are supposed to be covariates, so they were measured **prior** to the treatment. Question 1: In general, should we run ANOVA or ANCOVA? Question 2: If we indeed found no significant difference in participants' IQ, shall we treat IQ as a covariate in our model? Question 3: If we unfortunately find that the 3 groups have different levels of EQ even though we used random assignment, should we run ANOVA or ANCOVA? Edit 1: Assume that the observations are independent of each other by random sampling. Also assume that the distribution of the DV is normal.
For probabilities (proportions or shares) $p\_i$ summing to 1, the family $\sum p\_i^a [\ln (1/p\_i)]^b$ encapsulates several proposals for measures (indexes, coefficients, whatever) in this territory. Thus 1. $a = 0, b = 0$ returns the number of distinct words observed, which is the simplest to think about, regardless of its ignoring differences among the probabilities. This is always useful if only as context. In other fields, this could be the number of firms in a sector, the number of species observed at a site, and so forth. In general, let's call this the **number of distinct items**. 2. $a = 2, b = 0$ returns the Gini-Turing-Simpson-Herfindahl-Hirschman-Greenberg sum of squared probabilities, otherwise known as the repeat rate or purity or match probability or homozygosity. It is often reported as its complement or its reciprocal, sometimes then under other names, such as impurity or heterozygosity. In this context, it is the probability that two words selected randomly are the same, and its complement $1 - \sum p\_i^2$ the probability that two words are different. The reciprocal $1 / \sum p\_i^2$ has an interpretation as the equivalent number of equally common categories; this is sometimes called the numbers equivalent. Such an interpretation can be seen by noting that $k$ equally common categories (each probability thus $1/k$) imply $\sum p\_i^2 = k (1/k)^2 = 1/k$ so that the reciprocal of the probability is just $k$. Picking a name is most likely to betray the field in which you work. Each field honours their own forebears, but I commend **match probability** as simple and most nearly self-defining. 3. $a = 1, b = 1$ returns Shannon entropy, often denoted $H$ and already signalled directly or indirectly in previous answers. The name **entropy** has stuck here, for a mix of excellent and not so good reasons, even occasionally physics envy. Note that $\exp(H)$ is the numbers equivalent for this measure, as seen by noting in similar style that $k$ equally common categories yield $H = \sum^k (1/k) \ln [1/(1/k)] = \ln k$, and hence $\exp(H) = \exp(\ln k)$ gives you back $k$. Entropy has many splendid properties; "information theory" is a good search term. The formulation is found in I.J. Good. 1953. The population frequencies of species and the estimation of population parameters. *Biometrika* 40: 237-264. [www.jstor.org/stable/2333344](http://www.jstor.org/stable/2333344). Other bases for logarithm (e.g. 10 or 2) are equally possible according to taste or precedent or convenience, with just simple variations implied for some formulas above. Independent rediscoveries (or reinventions) of the second measure are manifold across several disciplines and the names above are far from a complete list. Tying together common measures in a family is not just mildly appealing mathematically. It underlines that there is a choice of measure depending on the relative weights applied to scarce and common items, and so reduces any impression of adhockery created by a small profusion of apparently arbitrary proposals. The literature in some fields is weakened by papers and even books based on tenuous claims that some measure favoured by the author(s) is the best measure that everyone should be using. 
My calculations indicate that examples A and B are not so different except on the first measure: ``` ---------------------------------------------------------------------- | Shannon H exp(H) Simpson 1/Simpson #items ----------+----------------------------------------------------------- A | 0.656 1.927 0.643 1.556 14 B | 0.684 1.981 0.630 1.588 9 ---------------------------------------------------------------------- ``` (Some may be interested to note that the Simpson named here (Edward Hugh Simpson, 1922- ) is the same as that honoured by the name Simpson's paradox. He did excellent work, but he wasn't the first to discover either thing for which he is named, which in turn is Stigler's paradox, which in turn....)
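For convenience, here is a small sketch that computes the members of this family from a vector of counts (the word frequencies used in the example call are made up; the dictionary keys mirror the columns of the table above):

```python
import numpy as np

def diversity_summary(counts):
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()     # probabilities summing to 1
    H = -np.sum(p * np.log(p))                # a = 1, b = 1: Shannon entropy (in nats)
    simpson = np.sum(p**2)                    # a = 2, b = 0: match probability
    return {"Shannon H": H, "exp(H)": np.exp(H),
            "Simpson": simpson, "1/Simpson": 1 / simpson,
            "#items": int(p.size)}            # a = 0, b = 0: number of distinct items

# e.g. five distinct words observed 10, 5, 2, 1 and 1 times
print(diversity_summary([10, 5, 2, 1, 1]))
```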
Suppose I am given an array of $n$ fixed width integers (i.e. they fit in a register of width $w$), $a\_1, a\_2, \dots a\_n$. I want to compute the sum $S = a\_1 + \ldots + a\_n$ on a machine with 2's complement arithmetic, which performs additions modulo $2^w$ with wraparound semantics. That's easy — but the sum may overflow the register size, and if it does, the result will be wrong. If the sum doesn't overflow, I want to compute it, and to verify that there is no overflow, as fast as possible. If the sum overflows, I only want to know that it does, I don't care about any value. Naively adding numbers in order doesn't work, because a partial sum may overflow. For example, with 8-bit registers, $(120, 120, -115)$ is valid and has a sum of $125$, even though the partial sum $120+120$ overflows the register range $[-128,127]$. Obviously I could use a bigger register as an accumulator, but let's assume the interesting case where I'm already using the biggest possible register size. There is a well-known technique to [add numbers with the opposite sign as the current partial sum](https://cs.stackexchange.com/a/1425). This technique avoids overflows at every step, at the cost of not being cache-friendly and not taking much advantage of branch prediction and speculative execution. Is there a faster technique that perhaps takes advantage of the permission to overflow partial sums, and is faster on a typical machine with an overflow flag, a cache, a branch predictor and speculative execution and loads? (This is a follow-up to [Overflow safe summation](https://cs.stackexchange.com/questions/1424/overflow-safe-summation))
You can add $n$ numbers of size $w$ without any overflow if you are using $\lceil \log n\rceil + w$ bits arithmetic. My suggestion is to do just that and then check whether the result is in range. Algorithms for multiprecision arithmetic are well known (see TAOCP section 4.3 if you need a reference); there is often hardware support for addition (*carry* flag and *add with carry* instruction); even without such support you can implement it without a data-dependent jump (which is good for jump predictors); you need just one pass over the data; and you may visit the data in the most convenient order (which is good for cache). If the data doesn't fit in memory, the limiting factor will be the IO and how well you succeed in overlapping the IO with the computation. If the data fit in memory, you'll probably have $\lceil \log n\rceil \leq w$ (the only exception I can think of is an 8-bit microprocessor, which usually has 64K of memory), which means you are doing double-precision arithmetic. The overhead over a loop doing $w$-bit arithmetic can be just two instructions (one to sign extend, the other to add with carry) and a slight increase in register pressure (but if I'm right, even the register-starved x86 has enough registers that the only memory access in the inner loop can be the data fetch). I think it is probable that an OO processor will be able to schedule the additional operations during the memory load latency, so the inner loop will be executed at memory speed and thus the exercise will be one of maximising the use of the available bandwidth (prefetch or interleaving techniques could help depending on the memory architecture). Considering the last point, it is difficult to think of other algorithms with better performance. Data-dependent (and thus not predictable) jumps are out of the question, as are several passes over the data. Even trying to use the several cores of today's processors would be difficult as the memory bandwidth will probably be saturated, but it could be an easy way to implement interleaved access.
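The same idea is easy to sketch in a language whose integers can stand in for the wider accumulator (Python below; the function name is mine): accumulate in the wide type, then check whether the final sum fits back into $w$ bits.

```python
def overflow_safe_sum(values, w=64):
    """Sum w-bit two's complement integers; return the sum if it fits in w bits,
    otherwise None to signal overflow."""
    lo, hi = -(1 << (w - 1)), (1 << (w - 1)) - 1
    total = sum(values)                  # accumulator is effectively wider than w bits
    return total if lo <= total <= hi else None

print(overflow_safe_sum([120, 120, -115], w=8))   # 125: the partial overflow is harmless
print(overflow_safe_sum([120, 120], w=8))         # None: the true sum 240 overflows 8 bits
```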
I've taken a few online courses in machine learning, and in general, the advice has been to choose random weights for a neural network to ensure that your neurons don't all learn the same thing, breaking symmetry. However, there were other cases where I saw people initializing using zero weights. Unfortunately, I can't remember what those were. I think it might have been non-neural-network cases, like a simple linear or logistic regression model (simple weights only on the inputs, leading directly to an output). Are those cases safe for zero initialization? Alternatively, could we use random initialization in those cases too, just to stay consistent?
Whenever you have a convex cost function, you are allowed to initialize your weights to zeros. The cost functions of linear regression and logistic regression are convex if you use MSE (equivalently, [RSS](https://stats.stackexchange.com/a/90556/179078)) for linear regression and cross-entropy for logistic regression. The main idea is that for a convex cost function there is just a single optimal point, and it does not matter where you start; the starting point only changes the number of epochs needed to reach that optimal point. For neural networks, by contrast, the cost function does not have just one optimal point. Take a look at [here](https://datascience.stackexchange.com/q/18802/28175). About random initialization, you have to consider that you are not allowed to choose random weights which are too small or too big, although the former was a more significant problem. If you choose random small weights you may have the vanishing gradient problem, which may lead to a network that does not learn. Consequently, you have to use standard initialization methods like `He` or `Glorot`; take a look at [here](https://datascience.stackexchange.com/a/10930/28175) and [Understanding the difficulty of training deep feedforward neural networks](https://scholar.google.com/scholar?cluster=17889055433985220047&hl=en&as_sdt=0,22). Also, take a look at the following questions. * [How should the bias be initialized and regularized](https://datascience.stackexchange.com/a/18145/28175) * [When to use (He or Glorot) normal initialization over uniform init? And what are its effects with Batch Normalization](https://datascience.stackexchange.com/q/13061/28175)
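Here is a tiny simulation of that contrast (NumPy, made-up data; the architecture and names are mine). Gradient descent on zero-initialized logistic regression converges because the loss is convex, while a zero-initialized hidden layer with tanh units produces exactly zero gradients, so that network never moves at all.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Logistic regression from all-zero weights: convex loss, so this trains fine.
w = np.zeros(3)
for _ in range(500):
    w -= 0.5 * X.T @ (sigmoid(X @ w) - y) / len(y)
print("logistic regression accuracy:", ((sigmoid(X @ w) > 0.5) == y).mean())

# One tanh hidden layer initialized to zeros: the hidden activations and the
# output weights are all zero, so every gradient below is identically zero
# and W1, W2 never change - an extreme case of the symmetry problem.
W1, W2 = np.zeros((3, 4)), np.zeros(4)
for _ in range(500):
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)
    dW2 = h.T @ (p - y) / len(y)
    dW1 = X.T @ (np.outer(p - y, W2) * (1 - h**2)) / len(y)
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2
print("zero-initialized network weights still zero:",
      np.abs(W1).max() == 0.0 and np.abs(W2).max() == 0.0)
```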
Suppose G is a simple graph with n vertices. Prove that G has twice as many edges as vertices only if n ≥ 5. Can anyone help?
Not necessarily true: for example, the complete graph on 4 vertices has no cut vertex. But there does exist a graph G with all vertices of degree 3 that has a cut vertex. See the picture; the red vertex is the cut vertex.[![regular graph of order 3 having a cut vertex](https://i.stack.imgur.com/A8gyC.png)](https://i.stack.imgur.com/A8gyC.png)
This question is about the apparent *implicit* regularization that is observed when training a linear model using SGD. I describe my understanding in the hope that someone can point out what I'm missing. In Section 5 of [Understanding Deep Learning Requires Rethinking Generalization](https://arxiv.org/pdf/1611.03530.pdf), we are given the problem of fitting a linear model $$y=Xw,$$ where $y$ is the model output, $X$ is the $n \times d$ data matrix with $n$ observations of $d$-dimensional data points and $w$ are the parameters to be learnt. If we let $d \geq n$ then the system has an infinite number of solutions. The paper goes on to derive the kernel trick (without an embedding into feature space) in the context of SGD. If we run SGD we get a solution of the form $w=X^T\alpha$ due to the update rule. We also have that $y=Xw$. Combining these two, we have $$XX^T\alpha = K \alpha = y.$$ This part (I think) I understand. I don't understand this part: > > Note that this kernel solution has an appealing interpretation in terms of implicit regularization. Simple algebra reveals that it is equivalent to the > minimum $\ell\_2$-norm > solution of $Xw=y$. > > > How is it known that this solution has the minimum $\ell\_2$-norm?
The problem here is that your question is contradictory. You are using a KDE with a continuous kernel, which means that you are estimating using a continuous distribution. For a continuous distribution, the probability of any outcome is zero (see e.g., [here](https://stats.stackexchange.com/questions/60702/) and [here](https://stats.stackexchange.com/questions/238058/why-x-x-is-impossible-for-continuous-random-variables?noredirect=1&lq=1)), so we usually measure by the *probability density* instead. However, you say that you want the probability of the point, not its density. You also make it clear that you want the probability of the individual point, not the probability of a neighbourhood containing that point. Under these requirements, **the estimated probability of the outcome is zero**. This is not helpful, which is why we measure outcomes in a continuous distribution by their probability density instead of their probability.
We are given a universe $\mathcal{U}=\{e\_1,..,e\_n\}$ and a set of subsets $\mathcal{S}=\{s\_1,s\_2,...,s\_m\}\subseteq 2^\mathcal{U}$. **Set-Packing** asks how many disjoint sets we can pack, and is defined as follows: Given a number $k\in[m]$, is there a set $\mathcal{S'} \subseteq \mathcal{S}$, $|\mathcal{S'}|=k$, such that all of the sets in $\mathcal{S'}$ are disjoint? **Maximum-Coverage** allows intersecting sets, but asks how much of the universe we can cover by $k$ sets: Given numbers $k\in[m]$, $r\in[n]$, is there a set $\mathcal{S'} \subseteq \mathcal{S}$, $|\mathcal{S'}|=k$, such that $|\cup\_{s\in\mathcal{S'}}s|\geq r$?

---

I'm interested in what seems to be a combination of the two, a disjoint cover, which aims at covering as much of $\mathcal{U}$ as possible. **Disjoint-Maximum-Coverage**: Is there a set $\mathcal{S'} \subseteq \mathcal{S}$ such that $|\cup\_{s\in\mathcal{S'}}s|\geq k$ (i.e. it covers at least $k$ elements) **and** the sets in $\mathcal{S'}$ are disjoint?

**What can we say about the approximation hardness of $DMC$?** Is this problem known under a different name?

Related results: Both **Set-Packing** and **Maximum-Coverage** are known to be $APX$-Hard (and even stronger than that - unless $P=NP$, $SP$ can't be approximated within $\ln(|S|)(1-o(1))$, and $MC$ has a tight bound using the greedy algorithm). $MC$ is approximable within $1-\frac{1}{e} + o(1)$, while the best known bound for $SP$ is an $O(\sqrt{|\mathcal{S}|})$ approximation.
The "**split-substitute**" graphs are the graphs starting with a split graph and substituting any split graph into any node. The forbidden induced subgraphs can be obtains from the forbidden induced subgraphs of split graphs (c4, C5, co-C4), but replacing the non-prime ones (c4 and co-c4) with their minimal prime extensions... (here primality is with respect with modular decomposition.) So the 5 forbidden induced subgraphs are the C5, P5, co-P5, H6 and co-H6. This forms a self-complementary class which strictly contains the class of cographs. See: <http://webdocs.cs.ualberta.ca/~hayward/papers/p4comp.online.pdf> --- The **maxibrittle** graphs are the perfectly orderable graphs for which a vertex sequence defined only by the degrees will produce a perfect order (more specifically, the degree sequence defines a brittle ordering). These graphs have exactly 5 minimal forbidden induced subgraphs.
First of all, I would like to apologize if I have any misconceptions or am not using the right vocabulary, since I am just getting started in Bayesian inference. That said, the problem I am facing is the classification of a series of devices into different classes using the measurements that they report. In order to do that, I am using the new evidence (data reported by the devices) to obtain the likelihood for each class, with which I then obtain the posterior probability for each class. As we know, the posterior probabilities of the classes obtained by Bayesian inference will sum to 1. In this sense, to finally classify to which class each device belongs, I am taking the hypothesis for which the posterior probability is highest.

The problem is that, sometimes, the posterior probabilities are too close to each other, so the probability of the hypothesis with the maximum posterior probability is not clearly larger than the others. Example: imagine that we have three hypotheses with posterior probabilities 0.4, 0.3 and 0.3. Currently I am classifying the device into the first class based on these probabilities but, as can be seen, the three probabilities are too close to each other, with none of them standing out from the rest.

In this sense, is there any test that I can perform to determine whether the maximum posterior probability is significantly larger than the rest? Thanks in advance, and sorry again if I have made any mistake with the vocabulary or any misconception.
Denoting all the conditioning explicitly (which you should make a habit of doing in Bayesian analysis), your nonlinear regression model is actually specifying: $$p(y\_i | x\_i, \theta, \sigma) = \text{N}(y\_i | f\_\theta(x\_i), \sigma^2).$$ Now, if you want to make a Bayesian inference about any of the values in the conditional part, you are going to need to specify a prior for them. Fundamentally this is no different from any situation in Bayesian analysis; if you want a posterior for the regressors then your model must specify an appropriate prior. I'm going to assume that you will want to model the regressors using a parametric model with an additional parameter vector $\lambda$. In this case, it is useful to decompose the prior for these three conditioning variables in a hierarchical manner as: $$\begin{align} \text{Prior for model parameters} & & & \pi(\theta, \sigma, \lambda) \\[6pt] \text{Sampling distribution for regressors} & & & \phi(x\_i | \theta, \sigma, \lambda) \end{align}$$ I'm also going to assume that the regressors are IID conditional on the model parameters, so that $p(\mathbf{x}| \theta, \sigma, \lambda) = \prod \phi(x\_i | \theta, \sigma, \lambda)$. If you specify this sampling distribution for the regressors then you will get the posterior distribution: $$\begin{align} \phi(\mathbf{x} | \mathbf{y}) &\overset{\mathbf{x}}{\propto} p(\mathbf{x}, \mathbf{y}, \theta, \sigma, \lambda) \\[12pt] &= \pi(\theta, \sigma, \lambda) \prod\_{i=1}^n p(y\_i | x\_i, \theta, \sigma) \cdot \phi(x\_i | \theta, \sigma, \lambda) \\[6pt] &= \pi(\theta, \sigma, \lambda) \prod\_{i=1}^n \text{N}(y\_i | f\_\theta(x\_i), \sigma^2) \cdot \phi(x\_i | \theta, \sigma, \lambda). \\[6pt] \end{align}$$ Computing the last line of this formula will give you the posterior kernel, and then you can get the posterior distribution by computing the constant for the density directly, or by using MCMC simulation.
I have a dataset that comprises 200 males and 250 females and I am testing their responses on the relationship between X and Y. X and Y are continuous and X1 (gender) is categorical. I am using the general linear model in SPSS to test for main effects and interaction. As I understand it, this is an 'unbalanced' design because the size of the two groups (males and females) are not the same. **Questions** 1. Is using Sum of Squares (Type III) appropriate in this case? 2. What alternatives do I have to analyse this data?
1. Regression models allow you to borrow information explicitly across groups defined by your predictors. Having balanced design only means that all such groups have effects estimated with equal precision (under regression assumptions: correct mean model, homoskedasticity). This is rarely, if ever, necessary for justifying a statistical model. Traditionally, balanced design is considered for two reasons: to assess whether randomization was truly random in clinical trials and to demonstrate differences between certain study designs and simple random samples. In fact, it's frequently the case that unbalanced designs are more valid and more efficient, provided researchers have adhered to their sampling protocol. To clarify on the SSIII point, this is basically the F-test for the main effects which is a sensible test. 2. Assuming you're using the linear link in your GLM for continuous outcomes, there are alternatives. However, I feel your current methods seems solid, barring any egregious difficulties in the data of which I'm not aware. A sort of traditional consideration with continuous data is whether a transformation is necessary, such as a logarithmic transform. This would be a choice if you're interested in estimating a "ratio" of Y differences for a unit difference in X (i.e. subjects had 2x the 'Y' among those differing by 1 'X') using a base-2 log transform. There are also rank statistics which I find difficult to interpret, but could be a sensitivity analysis for data which is heavily skewed. An aside: If you're presenting this data in a "Table 1" I would advise against displaying p-values for balance. It's likely to mislead reviewers and/or readers who think your design depends on such characteristics. You need only be explicit about your sampling methodology, and this model otherwise sounds like a valid approach.
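If it helps to see the model written out, here is a rough sketch of the same analysis outside SPSS (a hypothetical setup assuming Python/statsmodels and column names `Y`, `X`, `gender`; adapt as needed). The `*` term gives the main effects plus the interaction, which is what the Type III F-tests refer to, and the log-transformed fit illustrates the sensitivity analysis mentioned above.

```
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")    # hypothetical file with columns Y, X, gender

# Main effects of X and gender plus their interaction; unbalanced groups are fine.
model = smf.ols("Y ~ X * C(gender)", data=df).fit()
print(model.summary())

# Optional sensitivity analysis on a base-2 log scale, as discussed above.
model_log = smf.ols("np.log2(Y) ~ X * C(gender)", data=df).fit()
print(model_log.summary())
```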
Problem: find the sum of k elements in an array such that it is the biggest. The time complexity of my algorithm is O(k\*n). Is it linear complexity?
Yes, the algorithm is linear in $k$ and linear in $n$, whatever $k$ and $n$ mean. But it is quadratic in $(k,n)$.
I have an assignment to construct a game of Nim (a game in which two players must divide a pile of tokens into two unequal sizes; 6 can be divided into 2 & 4 but not 3 & 3). I was provided a game tree (the same one I find on Google), but I am confused. Shouldn't the first turn of the second player have access to S6, S5, *and* S4 and S3 since 7 can be divided into 4 & 3? ![NIM game tree](https://i.stack.imgur.com/8EuYb.jpg)
No, the game tree is not correct for the description you gave. If you look a few levels downward, there's the position F(2), which by your description should be a dead end: you can't make a move, since splitting a pile of 2 into non-equal parts is not possible. Yet, the tree gives two options. Your observation that you should be able to move from 7 to 5, 6, or 3+4 is also correct. I think Hendrik Jan is right, and this tree describes the game where you either take one or two tokens from a pile, and the player who takes the last token wins.
This question is targeted at people who assign problems: teachers, student assistants, tutors, etc. This has happened to me a handful of times in my 12-year career as a professor: I hurriedly assigned some problem from the text thinking "this looks good." Then later realized I couldn't solve it. Few things are more embarrassing. Here's a recent example: "Give a linear-time algorithm that determines if digraph $G$ has an odd-length cycle." I assigned this thinking it was trivial, only to later realize my approach wasn't going to work. My question: what do you think is the "professional" thing to do: * Obsess on the problem until you solve it, then say nothing to your students. * Cancel the problem without explanation and move on with your life. * Ask for help on cstheory.SE (and suffer the response, "is this a homework problem?") **Note:** I'm looking for practical and level-headed suggestions that I perhaps haven't thought of. I realize my question has a strong subjective element since handling this situation involves one's own tastes to a large extent, so I understand if readers would prefer to see it not discussed.
Yes, sadly, I've done this several times, as well as the *slightly* more forgivable sin of assigning a problem that I *can* solve, but only later realizing that the solution requires tools that the students haven't seen. I think the following is the most professional response (at least, it's the response I've settled on after several false starts): 1. Immediately and publicly admit the mistake. Explain steps 2 and 3. 2. Give every student full credit for the problem. Yes, even if they submit nothing. 3. Grade all submitted solutions normally, but award the resulting points as extra credit. In particular, give the usual partial credit for partial solutions. The first point is both the hardest and the most important. If you try to cover your ass, you will lose the respect and attention of your students (who are not stupid), which means they won't try as hard, which means they won't learn as well, which means you haven't done your job. I don't think it's fair to let students twist in the wind with questions I honestly don't think they can answer without some advance warning. (I regularly include open questions as homework problems in my advanced grad classes, but I warn the students at the start of the semester.) *Educational*, sure, but not fair. It's sometimes useful to give hints or an outline (as @james and @Martin suggest) to make the problem more approachable; otherwise, almost nobody will even try. Obviously, this is only possible if you figure out the solution first. On the other hand, sometimes it's appropriate for nobody to even try. (For example, "Describe a polynomial-time algorithm for X" when X is NP-hard, or if the setting is a timed exam.) If you still can't solve the problem yourself after sweating buckets over it, relax. Probably none of the students will solve it either, but if you're lucky, you'll owe someone a **LOT** of extra credit and a recommendation letter. And if you later realize the solution is easy after all, well, I guess you screwed up twice. Go to step 1.
I have a trivial problem but I do not understand fully the k-armed bandit theorem from chapter 2. My question is based on Sutton's "***Reinforcement Learning: An introduction, second edition***". The exercise is as such: > > Consider a k-armed bandit problem with k = 4 actions, denoted > 1, 2, 3, and 4. Consider applying to this problem a bandit algorithm using ε-greedy action selection, > sample-average action-value estimates, and initial estimates of *Q1(a)* = 0, for all *a*. Suppose the initial > sequence of actions and rewards is *A1* = 1, *R1* = 1, *A2* = 2, *R2* = 1, *A3* = 2, *R3* = 2, *A4* = 2, *R4* = 2, > *A5* = 3, *R5* = 0. On some of these time steps the ε case may have occurred, causing an action to be > selected at random. On which time steps did this definitely occur? On which time steps could this > possibly have occurred? > > > My assumption was as such: A1 may have been chosen both randomly or deliberately (no info about that provided), A2 may have been random (as exploration), A3 and A4 were greedy and A5 was explored. So my answer to the question: > > On which time steps did this definitely occur? > > > would be that actions A5 was definitelly random (not sure about A1 and A2). How to answer this correctly and the second question as well? What I do not quite get here is why the immediate rewards (R3 and R4) are equal to 2 when they should be chosen among 0 (lose) and 1 (win). I would understand that the action-values Q3(a) and Q4(a) may be equal to 2 but this R3, R4 confused me. Could someone provide me the proper way of solving this problem step by step? This would help me to understand the essence of the k-armed bandit theorem.
From what you posted, I don't see where the rewards are constrained to 0,1. Generally rewards can be real-valued. To understand the algorithm, let's write down the estimates after each iteration, in the format Iteration: estimate of Q(1), Q(2), Q(3), Q(4).

| Iteration | Q(1) | Q(2) | Q(3) | Q(4) |
| --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 |
| 1 | 1 | 0 | 0 | 0 |
| 2 | 1 | 1 | 0 | 0 |
| 3 | 1 | 1.5 | 0 | 0 |
| 4 | 1 | 1.67 | 0 | 0 |
| 5 | 1 | 1.67 | 0 | 0 |

Now consider how the algorithm works. At iteration 1, it could either pick action 1 by default or pick it randomly - we really can't say. At iteration 2, it *should* pick action 1 since that has the largest estimate. Since it picks action 2, we know this is an exploration action. At iteration 3, it should pick 2, so this is greedy. You should be able to use this reasoning to deduce that A4 is an exploitation action and A5 was an exploration action.
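If it helps, here is a small Python sketch (my own, not from the book) that replays the episode and prints, before each step, the current sample-average estimates and the greedy set; any chosen action outside the greedy set must have come from the ε case, while an action inside it may or may not have been random.

```
from collections import defaultdict

actions = [1, 2, 2, 2, 3]
rewards = [1, 1, 2, 2, 0]

Q = {a: 0.0 for a in range(1, 5)}            # initial estimates Q1(a) = 0
totals, counts = defaultdict(float), defaultdict(int)

for t, (a, r) in enumerate(zip(actions, rewards), start=1):
    greedy = [b for b in Q if Q[b] == max(Q.values())]
    print(f"step {t}: estimates {Q}, greedy set {greedy}, chosen {a}")
    totals[a] += r
    counts[a] += 1
    Q[a] = totals[a] / counts[a]             # sample-average update
```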
Backgrounds
-----------

Suppose that $X \sim \mathcal{N} (0,\sigma^2)$, and define $C\equiv I(X>c)$, for a given constant (**decision boundary**) $c$. Now assume we perform a **logistic regression**:

$$\mathrm{logit}(P(C=1)) \sim \beta\_0 + \beta\_1X $$

Note that for **logistic regression**, the fitted $\displaystyle -\frac{\hat{\beta\_0}}{\hat{\beta\_1}}$ corresponds to the **mean** of the underlying logistic distribution. (This is the perfect separation case. Please also take a generous look at the **imperfect separation case** at the bottom.)

---

Problem
-------

**My hypothesis says** the value should be the same as, or at least similar to, the criterion $c$, i.e.

$$ c \approx -\frac{\hat{\beta\_0}}{\hat{\beta\_1}} $$

I would like to **prove or reject the above argument**.

---

Simulation
----------

It is really hard to analytically derive the distribution of $\displaystyle -\frac{\hat{\beta\_0}}{\hat{\beta\_1}}$. Therefore with `R`, I simulated for various possible sets of $(\sigma, c)$ to test **my hypothesis**. Suppose we set, for instance,

* $\sigma: 5,10,15,20$
* $c : -5,4,12$

```
N = 1000
for(sig in c(5,10,15,20)){
  for (c in c(-5, 4, 12)){
    X = rnorm(N, sd=sig)
    C = (X > c)*1
    DATA = data.frame(x=X, c=C)
    coef = summary(glm(C ~ X, DATA, family = "binomial"))$coefficients
    print(sprintf("True c: %.2f, Estimated c: %.2f", c, -coef[1,1]/coef[2,1]))
  }
}
```

Note the **true $c$ and the estimated $-\hat{\beta\_0}\big/\hat{\beta\_1}$ are similar**, as seen in the following output:

```
[1] "True c: -5.00, Estimated c: -5.01"
[1] "True c: 4.00, Estimated c: 4.01"
[1] "True c: 12.00, Estimated c: 11.83"
[1] "True c: -5.00, Estimated c: -5.01"
[1] "True c: 4.00, Estimated c: 3.98"
[1] "True c: 12.00, Estimated c: 11.97"
[1] "True c: -5.00, Estimated c: -5.01"
[1] "True c: 4.00, Estimated c: 3.97"
[1] "True c: 12.00, Estimated c: 12.00"
[1] "True c: -5.00, Estimated c: -5.01"
[1] "True c: 4.00, Estimated c: 3.99"
[1] "True c: 12.00, Estimated c: 12.00"
```

**Note**: there were warning messages for nonconvergence!

---

Try to prove
------------

To compute the maximum likelihood estimates (MLE), we have the log-likelihood to maximize:

$$ \begin{aligned} \widehat{(\beta\_0, \beta\_1)} &= \mathrm{argmax}\_{(\beta\_0, \beta\_1)} \mathrm{LogLik}(\beta\_0, \beta\_1) \\[8pt] &\approx \mathrm{argmax}\_{(\beta\_0, \beta\_1)} \mathbb{E}\_X \mathrm{LogLik}(\beta\_0, \beta\_1) \\[8pt] &= \mathrm{argmax}\_{(\beta\_0, \beta\_1)} \mathbb{E}\_X \left[ C\cdot(\beta\_0 + \beta\_1X) - \log\left(1 + \exp(\beta\_0 + \beta\_1X)\right) \right] \\[8pt] &= \mathrm{argmax}\_{(\beta\_0, \beta\_1)} \mathbb{E}\_X \left[ I(X > c) \cdot(\beta\_0 + \beta\_1X) - \log\left(1 + \exp(\beta\_0 + \beta\_1X)\right) \right] \end{aligned} $$

Note that

* $\displaystyle \mathbb{E}\_X(I(X>c)) = P(X>c) = 1-\Phi(c/\sigma)$
* $\displaystyle \mathbb{E}\_X(XI(X>c)) = \mathbb{E}\_X \left(\mathrm{Trunc}\mathcal{N}(0,\sigma^2,\min=c) \right) = \sigma \frac{\phi(c/\sigma)}{1-\Phi(c/\sigma)}$ ([Wiki-Truncated Normal Distribution](https://en.wikipedia.org/wiki/Truncated_normal_distribution))

---

I'm currently finding $\mathbb{E}\_X \log(1+\exp(\beta\_0 + \beta\_1X))$. However, I'm not sure if it is a valid approach. For instance, if $\mathbb{E}\_X$ is a linear function of $\beta\_0,\beta\_1$ then $\mathrm{argmax}\_{(\beta\_0, \beta\_1)} \mathbb{E}\_X$ may have no solution. Any help will be appreciated.

---

On imperfect separation
-----------------------

The following may obscure my main claim, but I would like to add this. As @Whuber noted, I absurdly ignored the warning messages.
However, let us say the above is an idealized setting, and suppose there is white noise in the decision: say $C := I(X + W > c), X \perp W, W \sim \mathcal{N}(0, \sigma\_W^2)$. This may eschew some trivialities, but I see a similar tendency here: the recovery of $\displaystyle c \approx - \frac{\hat{\beta\_0}}{\hat{\beta\_1}}$, yet with some noise. I would really like to understand what causes this behavior.

```
N = 1000
for(sig in c(5,10,15,20)){
  for (c in c(-5, 4, 12)){
    X = rnorm(N, sd=sig)
    C = (X + rnorm(N, sd=5) > c)*1
    DATA = data.frame(x=X, c=C)
    coef = summary(glm(C ~ X, DATA, family = "binomial"))$coefficients
    print(sprintf("True c: %.2f, Estimated c: %.2f", c, -coef[1,1]/coef[2,1]))
  }
}
```

Without warning messages,

```
[1] "True c: -5.00, Estimated c: -5.35"
[1] "True c: 4.00, Estimated c: 4.31"
[1] "True c: 12.00, Estimated c: 12.27"
[1] "True c: -5.00, Estimated c: -4.91"
[1] "True c: 4.00, Estimated c: 3.87"
[1] "True c: 12.00, Estimated c: 11.93"
[1] "True c: -5.00, Estimated c: -4.72"
[1] "True c: 4.00, Estimated c: 3.73"
[1] "True c: 12.00, Estimated c: 12.25"
[1] "True c: -5.00, Estimated c: -5.16"
[1] "True c: 4.00, Estimated c: 4.25"
[1] "True c: 12.00, Estimated c: 12.41"
```
Independently of the distribution of $X$, if $C$ is computed in that deterministic way, the estimation won't converge because there is no pair of parameters $\beta$ at which the likelihood is maximized. It is easy to see that $\hat c = -\frac{\hat \beta\_0}{\hat \beta\_1}$ maximizes the likelihood at some value between the last $x$ value before $c$ and the first one after it, but you have to keep $\beta\_1$ fixed and vary just $\beta\_0$ to observe this, because there is no single ML point in the whole parameter space. I will make this clear now. Suppose we fix $\hat c$ at the point just described, for which the likelihood is maximized for any given slope $\beta\_1$, and we now vary $\beta\_1$ to see how the likelihood changes. Mind that $\beta\_0$ varies together with $\beta\_1$ so as to keep $\hat c$ constant. We will notice that the higher the slope, the higher the likelihood, without convergence. This always happens when logistic regression is used in a deterministic setting and no misclassifications occur. I will add the mathematical details when I have time, but you can already verify my claims.
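Here is a quick numerical illustration (my own sketch, in Python rather than R, using the same simulated setup as in the question): fix $\hat c = -\beta\_0/\beta\_1$ and watch the log-likelihood keep increasing as the slope grows.

```
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(scale=5, size=1000)
c_true = 4.0
y = (x > c_true).astype(float)           # deterministic labels: perfect separation

def loglik(b0, b1):
    z = b0 + b1 * x
    # logistic log-likelihood, written in a numerically stable form
    return np.sum(y * z - np.logaddexp(0.0, z))

c_hat = 4.0                              # keep the implied boundary -b0/b1 fixed
for b1 in [1, 2, 5, 10, 50]:
    print(b1, loglik(-b1 * c_hat, b1))   # increases toward 0, never attains a maximum
```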
Due to technical constraints, single-thread performance has been increasing more slowly, and adding cores has been adopted to offer a greater potential increase in performance. However, multicore support currently appears to require the programmer to manually divide the work, control communication/synchronization, etc. For many processor-intensive games and simulations (such as DCS), such complicated multithreading effort has been limited, and these programs depend extremely heavily on single-thread performance. Could a processor with multiple cores divide the work automatically? Alternatively, could a compiler convert single-threaded code to multithreaded code? In the current world, multicore support has to be done by hand as far as I know, with the result being that many processor-intensive games and simulations (such as DCS) depend extremely heavily on one CPU core to avoid complicated multithreading.
The short answer: yes it's possible, but right now it generates slower code.

The long answer: the thing to keep in mind here is synchronization. Computation is not just a bunch of independent computations. The results of one computation are used in other ones. Sometimes the order doesn't matter, and sometimes it does. Take, for example, some hard-to-evaluate function $f$. If we want to compute $f(f(10) + f(20))$, we can evaluate $f(10)$ and $f(20)$ in parallel, but we can't evaluate the final result until we've done the two inner evaluations and added them together. This process of making sure things happen in the right order is called synchronization.

There are many ways to ensure synchronization happens and that the code produces the same result even if some operations are moved around. There are many models of concurrency: some have shared memory with locks on variables, some have totally separate processes that pass messages. But in every model, the synchronization carries some overhead. Acquiring a lock, sending a message between processes, each of these adds time that wouldn't be required if you evaluated the code sequentially. Likewise, scheduling causes some overhead. If you have 200 processes and only 8 cores, you will probably be slower than if you have, say, 20 processes and 8 cores, since you have to do a lot more work scheduling, but don't get additional parallelism.

The key, then, is to adjust the "granularity" of your concurrency. Too coarse and you don't make use of all the processors, but too fine and you get too much overhead from synchronization and scheduling. This sort of analysis is hard, and right now compilers don't do it well enough to make general code faster, but I suspect that with heuristics, and maybe even a bit of AI/machine learning, in the future we'll be able to adjust the granularity automatically.

I'm not a hardware person, but my guess is that computing these sorts of things is way too complicated and time-consuming to be done on-the-fly at the CPU level. At a compiler level, it's possible, but depends on your language constructs.

I'm fairly certain finding the "optimal" parallelization of a program is undecidable, since you can probably construct some example where you can parallelize a program iff an arbitrary program halts. However, like most program analyses, we can do "safe" approximations. In particular, there has been interesting research into automatically parallelizing "pure" functional languages like Haskell. When any side-effects of a function are captured by the type system, it's a lot easier for the compiler to know what operations can safely be performed in parallel. But to my knowledge, none of these efforts have succeeded so far in producing fast code.
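To make the $f(f(10) + f(20))$ example concrete, here is a hand-parallelized Python sketch (my own illustration): the two independent inner calls run in parallel, but the outer call must synchronize on both results before it can start. This explicit decomposition is exactly what an automatic parallelizer would have to discover, and the `submit`/`result` calls are where the synchronization overhead lives.

```
import time
from concurrent.futures import ProcessPoolExecutor

def f(n):
    time.sleep(1)          # stand-in for an expensive computation
    return n + 1

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # f(10) and f(20) are independent, so they may run in parallel...
        a = pool.submit(f, 10)
        b = pool.submit(f, 20)
        # ...but the outer call has to wait (synchronize) on both inner results.
        print(f(a.result() + b.result()))
```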
For my bachelor project I've been tasked with making a transformer that can forecast time series data, specifically powergrid data. I need to take a univariate time series of length N and then predict another univariate time series M steps into the future. I started out by following the "[Attention is all you need](https://arxiv.org/abs/1706.03762)" paper, but since this paper is meant for NLP I had to make some changes. Instead of embedding each word to a random point in d\_model-dimensional space, I use a linear layer to embed the data. I've also tried using a nn.Conv1d layer with a kernel size of 1 to embed, but these approaches fail to make a non-linear prediction of the data and instead only predict a straight line through the average of the data. First I thought that the problem was my implementation of the transformer, but even when I use PyTorch's built-in nn.Transformer module I get the same results. I then tried different types of positional encoding, like the ["Time2Vec"](https://arxiv.org/abs/1907.05321) paper that approximates the data by using different sine functions. I feel like I've tried a lot of different things to make this transformer work, but to no avail. So my question is: do transformers alone work for multistep forecasting with univariate data? And if so, are there any articles, papers, repositories, etc. that forecast time series data with success? If not, which approach should I take going forward to see if I can get my transformer to work? **Edit: I figured out the problem, apparently the only issue was that I had set my learning rate too high :)**
Transformers can be used for time series forecasting. See the following articles:

* [Adversarial Sparse Transformer for Time Series Forecasting, by Sifan Wu et al.](https://proceedings.neurips.cc/paper/2020/file/c6b8c8d762da15fa8dbbdfb6baf9e260-Paper.pdf)
* [Deep Transformer Models for Time Series Forecasting: The Influenza Prevalence Case, by Neo Wu, Bradley Green, Xue Ben, & Shawn O'Banion](https://arxiv.org/abs/2001.08317)
* [The Time Series Transformer, by Theodoras Ntakouris](https://towardsdatascience.com/the-time-series-transformer-2a521a0efad3)
* [Transformers for Time-series Forecasting, by Natasha Klingebrunn](https://medium.com/mlearning-ai/transformer-implementation-for-time-series-forecasting-a9db2db5c820)
I'm trying to run [add\_datepart()](https://github.com/fastai/fastai/blob/af4773bbba8fa535d8e9e0af0879a12acc174046/old/fastai/structured.py#L76), which converts a df column from a datetime64 into many columns in place: 'Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Ismonthend', 'Ismonthstart', 'Isquarterend', 'Isquarterstart', 'Isyearend', 'Isyearstart', etc. The dataset I'm using is the [Grocery Sales dataset](https://www.kaggle.com/c/favorita-grocery-sales-forecasting/data). I believe there are 125497040 dates in total. What should I do to run this operation? Every time I run this piece of code, the kernel dies (out of memory; I have 17.2 GB RAM). So I tried breaking down this data frame into smaller parts and then running `add_datepart`, but with the same result. I wrote this code:

```
def add_datepart_large(temp_df, size):
    list_df = [temp_df[i:i+size] for i in range(0, temp_df.shape[0], size)]
    for i in range(len(list_df)):
        add_datepart(list_df[i], 'date')
    return pd.concat(list_df)

final_df = add_datepart_large(df_all, 100000)
```

Even after running this code, the kernel dies. What's wrong?
For 127 million rows, it is better to perform the data prep in the DB. It will be a select + insert query and will not require the whole dataset to be loaded into memory.

```
SELECT YEAR(date) AS 'year',
       MONTH(date) AS 'month'
FROM data
```

Edit: once you start training / validation, even then it would be better to load a few batches from the DB at a time. Most frameworks support this. For example: <https://github.com/keras-team/keras/issues/107>
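A sketch of what that can look like in practice (my own illustration, assuming SQLite for concreteness — in MySQL you would use `YEAR()`/`MONTH()` as in the snippet above — and a placeholder `process()` function): the date parts are computed by the database, and pandas only ever holds one chunk in memory.

```
import sqlite3
import pandas as pd

conn = sqlite3.connect("sales.db")          # hypothetical database
query = """
    SELECT CAST(strftime('%Y', date) AS INTEGER) AS year,
           CAST(strftime('%m', date) AS INTEGER) AS month,
           CAST(strftime('%d', date) AS INTEGER) AS day,
           unit_sales
    FROM data
"""

for chunk in pd.read_sql(query, conn, chunksize=1_000_000):
    process(chunk)   # placeholder: train / preprocess one batch at a time
```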
In a previous question about time hierarchy, I've learned that equalities between two classes can be propagated to more complex classes and inequalities can be propagated to less complex classes, with arguments using padding. Therefore, a question comes to mind. Why do we study a question about different types of computation (or resources) in the smallest (closed) class possible? Most researchers believe that $P \neq NP$. This distinction of classes wouldn't be between classes that use the same type of resource. Therefore, one might think of this inequality as a universal rule: Nondeterminism is a more powerful resource. Therefore, although an inequality, it could be propagated upwards via exploiting the different nature of the two resources.So, one could expect that $EXP \neq NEXP$ too. If one proved this relation or any other similar inequality, it would translate to $P \neq NP$. My argument could maybe become clear in terms of physics. Newton would have a hard time understanding universal gravity by examining rocks (apples?) instead of celestial bodies. The larger object offers more details in its study, giving a more precise model of its behavior and allowing to ignore small-scale phenomena that might be irrelevant. Of course, there is the risk that in larger objects there is a different behavior, in our case that the extra power of non-determinism wouldn't be enough in larger classes. What if after all, $P \neq NP$ is proven? Should we start working on $EXP \neq NEXP$ the next day? Do you consider this approach problematic? Do you know of research that uses larger classes than polynomial to distinguish the two types of computation?
Why do we choose to care about $P$ vs. $NP$? Actually, nondeterminism as an object of study is only of secondary concern. We really care about $NP$ because of the thousands of important *problems* that are $NP$-complete. These are problems we want (and in real life *need*) to solve. We care about whether these problems can be solved efficiently, and $P$ is our theoretical model for efficient computation. Hence we are led to the question of $P$ vs. $NP$.
It is known that the language of words containing equal number of 0 and 1 is not regular, while the language of words containing equal number of 001 and 100 is regular ([see here](https://cs.stackexchange.com/questions/12139/is-the-language-of-words-containing-equal-number-of-001-and-100-regular)). Given two words $w\_1,w\_2$, is it decidable if the language of words containing equal number of $w\_1$ and $w\_2$ is regular?
I and several colleagues answered this question [here](https://arxiv.org/pdf/1804.11175.pdf) with a necessary and sufficient criterion for when the language $L\_{x=y}$ (all words having an equal number of occurrences of $x,y$) is regular. We also show the same for fewer $x$ than $y$, and more $x$ than $y$. C.J. Colbourn, R.E. Dougherty, T.F. Lidbetter and J. Shallit. Counting Subwords and Regular Languages. Developments in Language Theory. DLT 2018. LNCS 11088. Springer. doi:[10.1007/978-3-319-98654-8\_19](https://doi.org/10.1007/978-3-319-98654-8_19)
Plenty of hard graph problems are solvable in polynomial time on [graphs of bounded treewidth](http://en.wikipedia.org/wiki/Treewidth). Indeed, textbooks typically use e.g. independet set as an example, which is a *local problem*. Roughly, a local problem is a problem whose solution can be verified by examining some small neighborhood of every vertex. Interestingly, even problems (such as Hamiltonian path) of a *global nature* can still be solved efficiently for bounded treewidth graphs. For such problems, usual dynamic programming algorithms have to keep track of all the ways in which the solution can traverse the corresponding separator of the tree decomposition (see e.g. [1]). Randomized algorithms (based on so-called cut'n'count) were given in [1], and improved (even deterministic) algorithms were developed in [2]. I don't know if it's fair to say that many, but at least some global problems can be solved efficiently for graphs of bounded treewidth. So what about problems that remain hard on such graphs? I'm assuming they are also of a global nature, but what else? What separates these hard global problems from global problems that *can* be solved efficiently? For instance, how and why would known methods fail to give us efficient algorithms for them? For example, one could consider the following problem(s): > > **Edge precoloring extension** Given a graph $G$ with some edges colored, decide if this coloring can be extended to a proper $k$-edge-coloring of the graph $G$. > > > Edge precoloring extension (and its list edge coloring variant) is NP-complete for bipartite series-parallel graphs [3] (such graphs have treewidth at most 2). > > **Minimum sum edge coloring** Given a graph $G=(V,E)$, find an edge-coloring $\chi : E \to \mathbb{N}$ such that if $e\_1$ and $e\_2$ have a common vertex, then $\chi(e\_1) \neq \chi(e\_2)$. The objective is to minimize $E'\_\chi(E) = \sum\_{e \in E} \chi(e)$, the sum of the coloring. > > > In other words, we have to assign positive integers to the edges of a graph such that adjacent edges receive different integers and the sum of the assigned numbers is minimal. This problem is NP-hard for partial 2-trees [4] (i.e. graphs of treewidth at most 2). Other such hard problems include the edge-disjoint paths problem, the subgraph isomorphism problem, and the bandwidth problem (see e.g. [5] and the references therein). For problems that remain hard even on trees, see [this question](https://cstheory.stackexchange.com/questions/1215/np-hard-problems-on-trees). --- [1] [Cygan, M., Nederlof, J., Pilipczuk, M., van Rooij, J. M., & Wojtaszczyk, J. O. (2011, October). Solving connectivity problems parameterized by treewidth in single exponential time. In Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on (pp. 150-159). IEEE.](http://arxiv.org/pdf/1103.0534.pdf) [2] [Bodlaender, H. L., Cygan, M., Kratsch, S., & Nederlof, J. (2013). Deterministic single exponential time algorithms for connectivity problems parameterized by treewidth. In Automata, Languages, and Programming (pp. 196-207). Springer Berlin Heidelberg.](http://link.springer.com/chapter/10.1007/978-3-642-39206-1_17) [3] [Marx, D. (2005). NP‐completeness of list coloring and precoloring extension on the edges of planar graphs. Journal of Graph Theory, 49(4), 313-324.](http://sziami.cs.bme.hu/~dmarx/papers/planar-edge-prext.pdf) [4] [Marx, D. (2009). Complexity results for minimum sum edge coloring. 
Discrete Applied Mathematics, 157(5), 1034-1045.](http://www.sciencedirect.com/science/article/pii/S0166218X08001819) [5] [Nishizeki, T., Vygen, J., & Zhou, X. (2001). The edge-disjoint paths problem is NP-complete for series–parallel graphs. Discrete Applied Mathematics, 115(1), 177-186.](http://www.sciencedirect.com/science/article/pii/S0166218X01002232)
Most algorithms for graphs of bounded treewidth are based on some form of dynamic programming. For these algorithms to be efficient, we need to bound the number of states in the dynamic programming table: if you want a polynomial-time algorithm, then you need a polynomial number of states (e.g., n^tw), if you want to show that the problem is FPT, you usually want to show that the number of states is some function of treewidth. The number of states typically corresponds to the number of different types of partial solutions when breaking the graph at some small separator. Thus a problem is easy on bounded-treewidth graphs usually because partial solutions interacting with the outside world via a bounded number of vertices have only a bounded number of types. For example, in the independent set problem the type of a partial solution depends only on which boundary vertices are selected. In the Hamiltonian cycle problem, the type of a partial solution is described by how the subpaths of the partial solution match the vertices of the boundary to each other. Variants of Courcelle's Theorem give sufficient conditions for a problem to have the property that partial solutions have only a bounded number of types. If a problem is hard on bounded-treewidth graphs, then it is usually because of one of the following three reasons. 1. There are interactions in the problem not captured by the graph. For example, Steiner Forest is NP-hard on graphs of treewidth 3, intuitively because the source-destination pairs create interactions between nonadjacent vertices. Elisabeth Gassner: The Steiner Forest Problem revisited. J. Discrete Algorithms 8(2): 154-163 (2010) MohammadHossein Bateni, Mohammad Taghi Hajiaghayi, Dániel Marx: Approximation Schemes for Steiner Forest on Planar Graphs and Graphs of Bounded Treewidth. J. ACM 58(5): 21 (2011) 2. The problem is defined on the edges of the graph. Then even if a part of the graph is attached to the rest of the graph via a bounded number of vertices, there could be many edges incident to those few vertices and then the state of a partial solution can be described only by describing the state of all these edges. This is what made the problems in [3,4] hard. 3. Each vertex can have a large number of different states. For example, Capacitated Vertex Cover is W[1]-hard parameterized by treewidth, intuitively because the description of a partial solution involves not only stating which vertices of the separator were selected, but also stating how many times each selected vertex of the separator was used to cover edges. Michael Dom, Daniel Lokshtanov, Saket Saurabh, Yngve Villanger: Capacitated Domination and Covering: A Parameterized Perspective. IWPEC 2008: 78-90
I have a dataset of the total number of vehicles registered over 16 years. These are a total of 16 values, from 2001-2016. Which machine learning technique would be best for predicting the number of vehicles in the upcoming years - say till 2050 - while using R? This is my dataset:

```
   Date        Bikes     Cars
1  2001-01-01   2283381  1198918
2  2002-01-01   2341051  1279362
3  2003-01-01   2379260  1289854
4  2004-01-01   2609442   298353
5  2005-01-01   2649910  1318488
6  2006-01-01   2757842  1372191
7  2007-01-01   2895734  1440801
8  2008-01-01   3039815  1549854
9  2009-01-01   3215583  1657860
10 2010-01-01   4305121  1726347
11 2011-01-01   5781953  1881560
12 2012-01-01   7500182  2094289
13 2013-01-01   9064547  2281083
14 2014-01-01  10341326  2400690
15 2015-01-01  12177352  2531592
16 2016-01-01  12600402  2582149
```
The situation of overlapping clusters with similar locations but very different spreads comes about when your data has a much higher and narrower peak than a Gaussian distribution. A Gaussian mixture model will try to fit the peak with a Gaussian that has relatively little spread and fit the tails with a Gaussian that has a lot more spread. In this case, it isn't really "clustering" in the traditional sense, just trying to fit a distribution that is significantly different from a Gaussian (although it may be that the mixture model does identify some clusters as well as trying to fit leptokurtotic data in one or more of the clusters.)

In the case of the graph linked to in a comment to the OP, it appears to me that you have two clusters: one represented, more or less, by the leftmost (green) curve that probably has a fair amount of negative skew, and one represented, more or less, by a mixture of the three rightmost curves, which is probably somewhat "blocky" with a fair amount of positive skew.

The case for a mixture of Gaussians to represent a single cluster can be illustrated with a simple example. I generate 5,000 observations from a $t$ distribution with two degrees of freedom, throwing out observations such that $|x\_i| > 20$ to make the plotting a little easier. This distribution is sufficiently spread out that it doesn't have a variance, consequently any peak is too much peak relative to a Gaussian. I then fit a very simple mixture of two Gaussians with the same mean (equal to the mean of the observations) but different spreads:

```
x <- rt(5000,2)
x <- x[x>-20 & x<20]

lnl <- function(theta) {
  p1  <- theta[1]
  sd1 <- theta[2]
  sd2 <- theta[3]
  -sum(log(p1*dnorm(x,mean(x),sd1) + (1-p1)*dnorm(x,mean(x),sd2)))
}

tmp <- optim(fn=lnl, par=c(0.5,1,2), method="L-BFGS-B",
             lower=c(0.0001,0.1,0.1), upper=c(0.9999,10,10))
```

I now plot a histogram of the data, the MLE of a single Gaussian (the black line), and the two Gaussians in the mixture model, weighted by the mixture probability:

```
hist(x, freq=FALSE, xlim=c(-7,7), breaks=c(-20,seq(-7,7,by=1), 20))
x.val <- seq(-7,7,by=0.01)
lines(dnorm(x.val, mean(x), sd(x))~x.val)
lines(tmp$par[1]*dnorm(x.val,mean(x),tmp$par[2])~x.val, lwd=2, col=2)
lines((1-tmp$par[1])*dnorm(x.val,mean(x),tmp$par[3])~x.val, lwd=2, col=4)
```

which results in:

[![enter image description here](https://i.stack.imgur.com/pZBQg.png)](https://i.stack.imgur.com/pZBQg.png)

You can see that the single Gaussian (black line) simply cannot fit the peak, because it has to be spread out enough to fit the tails (which do in fact extend out to $\pm 20$, although this isn't shown on the plot so that we can see the detail in the middle of the distribution.) If we had plotted the two Gaussians without multiplying by the probability of cluster membership, the nature of the mixture would be clearer:

[![enter image description here](https://i.stack.imgur.com/VR7yR.png)](https://i.stack.imgur.com/VR7yR.png)

and the effect described above is pretty clear. And a final plot compares the single Gaussian fit with the mixture model fit:

[![enter image description here](https://i.stack.imgur.com/9M1rp.png)](https://i.stack.imgur.com/9M1rp.png)

which is evidently far better, even though the underlying distribution is not a mixture of two Gaussians.
I've been reading [Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/), and I would like to know why the Lasso provides variable selection and ridge regression doesn't. Both methods minimize the residual sum of squares and have a constraint on the possible values of the parameters $\beta$. For the Lasso, the constraint is $||\beta||\_1 \le t$, whereas for ridge it is $||\beta||\_2 \le t$, for some $t$. I've seen the diamond vs ellipse picture in the book and I have some intuition as for why the Lasso can hit the corners of the constrained region, which implies that one of the coefficients is set to zero. However, my intuition is rather weak, and I'm not convinced. It should be easy to see, but I don't know why this is true. So I guess I'm looking for a mathematical justification, or an intuitive explanation of why the contours of the residual sum of squares are likely to hit the corners of the $||\beta||\_1$ constrained region (whereas this situation is unlikely if the constraint is $||\beta||\_2$).
Suppose we have a data set with y = 1 and x = [1/10 1/10] (one data point, two features). One solution is to put all the weight on one of the features; another is to spread the weight over both features. That is, we can either pick w = [5 5] or w = [10 0]. Note that for the L1 norm both have the same penalty, but the more spread-out weight has a lower penalty for the L2 norm.
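To make the norms explicit, here is a tiny NumPy check (my own addition, not part of the original answer). Both candidates fit the data point exactly; the $\ell\_2$ penalty therefore favors the spread-out weights, while the $\ell\_1$ penalty cannot tell them apart.

```
import numpy as np

for w in ([5, 5], [10, 0]):
    print(w, np.linalg.norm(w, 1), np.linalg.norm(w, 2))
# [5, 5]  -> L1 = 10.0, L2 ~ 7.07
# [10, 0] -> L1 = 10.0, L2 = 10.0
```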
In many discussions of binary heaps, normally only decrease-key is listed as a supported operation for a min-heap. For example, CLR chapter 6.1 and [this wikipedia page](http://en.wikipedia.org/wiki/Heap_%28data_structure%29). Why isn't increase-key normally listed for a min-heap? I imagine it is possible to do that in O(height) by iteratively swapping the increased element (x) with the minimum of its children, until none of its children is smaller than x. e.g.

```
IncreaseKey(int pos, int newValue)
{
    heap[pos] = newValue;
    // sift the increased element down until no child is smaller than it
    while (left(pos) < heap.Length)
    {
        int smallest = left(pos);
        if (right(pos) < heap.Length && heap[right(pos)] < heap[left(pos)])
            smallest = right(pos);
        if (heap[smallest] < heap[pos])
        {
            swap(smallest, pos);
            pos = smallest;
        }
        else
            return;
    }
}
```

Is the above correct? If not, why? If yes, why isn't increase-key listed for min-heap?
The algorithm you suggest is simply heapify. And indeed - if you increase the value of an element in a min-heap, and then heapify its subtree, then you will end up with a legal min-heap.
In answering [this question](https://cs.stackexchange.com/questions/146325/fast-and-compact-data-structure-for-dynamic-graphs), I was looking for references (textbooks, papers, or implementations) which represent a graph using a *set* (e.g. hashtable) for the adjacent vertices, rather than a list. That is, the graph is a map from vertex labels to sets of adjacent vertices: ``` graph: Map<V, Set<V>> ``` In fact, I thought that this representation was completely standard and commonly used, since it allows O(1) querying for an edge existence, O(1) edge deletion, and O(1) iterating over the elements of the adjacency set. I have always represented graphs this way both in my own implementations and teaching. To my surprise, most algorithms textbooks do not cover this directly, and instead represent it using a *list* of labels: ``` graph: Map<V, List<V>> ``` As far as I understand, adjacency lists seem strictly worse: both representations support O(1) vertex additions and iteration over adjacent edges, but adjacency lists require O(m) for edge removal or edge existence (in the worst case). Yet I am baffled that, for example [Cormen Leiserson Rivest Stein: Introduction to Algorithms](http://139.59.56.236/bitstream/123456789/106/1/Introduction%20to%20Algorithms%20by%20Thomas%20%20H%20Coremen.pdf), [Morin: Open Data Structures](https://www.aupress.ca/app/uploads/120226_99Z_Morin_2013-Open_Data_Structures.pdf), and [Wikipedia](https://en.wikipedia.org/wiki/Graph_(abstract_data_type)) all suggest using adjacency lists. They mainly contrast adjacency lists with adjacency matrices, but the idea of storing adjacent elements as a set is only mentioned briefly in an off-hand comment as an *alternative* to the list representation, if at all. (For example, Morin mentions this on page 255, "What type of collection should be used to store each element of adj?") I must be missing something basic. Q: What is the advantage of using a list instead of a set for adjacent vertices? ================================================================================ * Is this a pedagogical choice, an aversion to hashmaps/hashsets, a historical accident, or something else? * [This question](https://cs.stackexchange.com/questions/111305/why-isnt-an-edge-map-graph-implementation-used-in-practice) is closely related, but asks about the representation `graph: Set<(V, V)>`. The top answer suggests using my representation. Looking for a bit more context on this. * The second answer suggests hash collisions are a problem. But if hash sets are not preferred, another representation of maps and sets can be used, and we still get great performance for edge removal with a possible additional logarithmic factor in cost. * **Bottom line:** I don't understand why anyone would implement the edges as a list, unless all vertex degrees are expected to be small.
In many algorithms we don't need to check whether two vertices are adjacent, such as in search algorithms, DFS, BFS, Dijkstra's, and many other algorithms. In the cases where we only need to enumerate the neighborhoods, a list/vector/array *far outperforms* typical set structures. Python's `set` uses a hashtable underneath, which is both much slower to iterate over and uses much more memory. If you want really efficient algorithms (and who doesn't), you take this into account. If you *need* $O(1)$ lookup of adjacencies and don't intend to do much neighborhood enumeration (and can afford the space), you use an adjacency matrix. If *expected* $O(1)$ is good enough, you can use hashtables, sets, trees, or other data structures with the performance you need.

---

I suspect, however, that the reason you don't hear about this so often is that in algorithms classes it makes analysis much simpler to use lists, because we don't need to talk about *expected running time* and hash functions.

---

Editing in two comments from leftaroundabout and jwezorek. Many real-world graphs are *very sparse*, and you often see $O(1)$-sized degrees for most of the graphs. This means that *even if* you want to do lookup, looping through a list is not necessarily much slower, and can in many cases be much faster. As a "proof", I add some statistics from the graphs from the [Stanford Network Analysis Platform](https://snap.stanford.edu/snap/). Out of approximately 100 large graphs, the **average degrees** are

| Avg. degree | number of graphs |
| --- | --- |
| < 10 | 35 |
| < 20 | 43 |
| < 30 | 10 |
| < 40 | 4 |
| < 50 | 2 |
| < 70 | 3 |
| < 140 | 1 |
| < 350 | 1 |
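As a rough illustration of the enumeration-versus-lookup trade-off, here is a small Python micro-benchmark sketch (my own; absolute numbers will vary by machine and Python version, and a `set` also uses noticeably more memory than a `list` of the same elements).

```
import random
import timeit

n = 10_000                      # degree of one (unrealistically dense) vertex
neighbors_list = list(range(n))
neighbors_set = set(neighbors_list)

# Enumerating the neighborhood: the compact list is faster to scan.
print(timeit.timeit(lambda: sum(neighbors_list), number=1000))
print(timeit.timeit(lambda: sum(neighbors_set), number=1000))

# Membership queries: the set gives expected O(1), the list is O(degree).
q = random.randrange(n)
print(timeit.timeit(lambda: q in neighbors_list, number=1000))
print(timeit.timeit(lambda: q in neighbors_set, number=1000))
```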
I am an electrical engineer, and only had one CS course in college 26 years ago. However, I am also a devoted Mathematica user. I have the sense that Turing Machines are very important in computer science. Is the importance only in the theory of computer science? If there are practical implications/applications what are some of them?
The importance of Turing machines is twofold. First, Turing machines were one of the first (if not *the* first) theoretical models for computers, dating from 1936. Second, a lot of theoretical computer science has been developed with Turing machines in mind, and so a lot of the basic results are in the language of Turing machines. One reason for this is that Turing machines are simple, and so amenable to analysis. That said, Turing machines are not a practical model for computing. As an engineer and a Mathematica user, they shouldn't concern you at all. Even in the theoretical computer science community, the more realistic RAM machines are used in the areas of algorithms and data structures. In fact, from the point of view of complexity theory, Turing machines are polynomially equivalent to many other machine models, and so complexity classes like P and NP can equivalently be defined in terms of these models. (Other complexity classes are more delicate.)