I started to do Monte Carlo in R as a hobby, but eventually a financial analyst advised me to migrate to Matlab. I'm an experienced software developer, but a Monte Carlo beginner. I want to construct static models with sensitivity analysis, and later dynamic models, so I need good libraries/algorithms to guide me. It seems to me that R has excellent libraries, and I suspect Matlab is preferred by inexperienced programmers because of its easy Pascal-like language. The R language is based on Scheme, which is hard for beginners, but not for me. If Matlab/Octave has no advantages on the numerical/library side, I would stick with R.
To be honest, I think any question you ask around here about R vs ... will be biased towards R. Remember that R is by far the most used [tag](https://stats.stackexchange.com/tags)!

**What I do**

My current working practice is to use R to prototype and use C when I need an extra boost of speed. It used to be that I would have to switch to C very quickly (again for my particular applications), but the R [multicore](http://www.rforge.net/doc/packages/multicore/multicore.html) libraries have helped delay that switch. Essentially, you make a `for` loop run in parallel with a trivial change. I should mention that my applications are *very* computationally intensive.

**Recommendation**

To be perfectly honest, it really depends on exactly what you want to do. So I'm basing my answer on this statement in your question.

> I want to construct static models with sensitivity analysis, later dynamic models. Need good libraries/algorithms that guide me

I'd imagine that this problem would be ideally suited to prototyping in R and using C (or some other compiled language) when needed. Having said that, Monte Carlo/sensitivity analysis typically doesn't involve particularly advanced statistical routines - of course it may need other advanced functionality. So I think (without more information) that you *could* carry out your analysis in any language, but being completely biased, I would recommend R!
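In case it helps, here is a minimal sketch of that "trivial change" (the toy simulation function and replication count are made up). It uses the `parallel` package, which ships with current R and absorbed `multicore`'s `mclapply()`; note that `mclapply()` relies on forking, so on Windows it effectively runs serially.

```r
library(parallel)

# Toy Monte Carlo replication: terminal value of a Gaussian random walk
one_path <- function(i, n_steps = 1e4) {
  sum(rnorm(n_steps))
}

# Serial version
res_serial <- lapply(1:1000, one_path)

# Parallel version: the loop body is untouched, only the apply call changes
res_parallel <- mclapply(1:1000, one_path, mc.cores = 4)
```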
I'm working through the examples in Kruschke's [Doing Bayesian Data Analysis](http://www.indiana.edu/%7Ekruschke/DoingBayesianDataAnalysis/), specifically the Poisson exponential ANOVA in ch. 22, which he presents as an alternative to frequentist chi-square tests of independence for contingency tables. I can see how we get information about about interactions that occur more or less frequently than would be expected if the variables were independent (ie. when the HDI excludes zero). My question is how can I compute or interpret an *effect size* in this framework? For example, Kruschke writes "the combination of blue eyes with black hair happens less frequently than would be expected if eye color and hair color were independent", but how can we describe the strength of that association? How can I tell which interactions are more extreme than others? If we did a chi-square test of these data we might compute the Cramér's V as a measure of the overall effect size. How do I express effect size in this Bayesian context? Here's the self-contained example from the book (coded in `R`), just in case the answer is hidden from me in plain sight ... ```r df <- structure(c(20, 94, 84, 17, 68, 7, 119, 26, 5, 16, 29, 14, 15, 10, 54, 14), .Dim = c(4L, 4L), .Dimnames = list(c("Black", "Blond", "Brunette", "Red"), c("Blue", "Brown", "Green", "Hazel"))) df Blue Brown Green Hazel Black 20 68 5 15 Blond 94 7 16 10 Brunette 84 119 29 54 Red 17 26 14 14 ``` Here's the frequentist output, with effect size measures (not in the book): ```r vcd::assocstats(df) X^2 df P(> X^2) Likelihood Ratio 146.44 9 0 Pearson 138.29 9 0 Phi-Coefficient : 0.483 Contingency Coeff.: 0.435 Cramer's V : 0.279 ``` Here's the Bayesian output, with HDIs and cell probabilities (directly from the book): ```r # prepare to get Krushkes' R codes from his web site Krushkes_codes <- c( "http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/Programs/openGraphSaveGraph.R", "http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/Programs/PoissonExponentialJagsSTZ.R") # download Krushkes' scripts to working directory lapply(Krushkes_codes, function(i) download.file(i, destfile = basename(i))) # run the code to analyse the data and generate output lapply(Krushkes_codes, function(i) source(basename(i))) ``` And here are plots of the posterior of Poisson exponential model applied to the data: ![enter image description here](https://i.stack.imgur.com/lbjiW.png) And plots of the posterior distribution on estimated cell probabilities: ![enter image description here](https://i.stack.imgur.com/saWCP.png)
Scoring rules such as proportion classified correctly, sensitivity, and specificity are not only arbitrary (in the choice of threshold) but are improper, i.e., they have the property that maximizing them leads to a bogus model, inaccurate predictions, and selecting the wrong features. It is good that they disagree with proper scoring rules (log-likelihood; logarithmic scoring rule; Brier score) and the $c$-index (a semi-proper scoring rule - area under the ROC curve; concordance probability; Wilcoxon statistic; Somers' $D\_{xy}$ rank correlation coefficient); this gives us more confidence in proper scoring rules.
We have submitted a paper reporting a statistically significant result. One reviewer asks us to report the power to detect a significant association. As there was a previous paper on this issue, we could use the effect size from that paper to do the calculation. However, we are surprised by this comment, and would be happy to know your opinion and whether you know of references that discuss calculating power a posteriori when the result is significant.

---

Thank you very much for your responses. I should have made clearer that we used a large dataset to run these analyses, so the study is unlikely to be underpowered. However, it involves a complex design, and other than running simulations, there is no simple way to compute power. We are not familiar with simulations to compute power, so I was trying to avoid this :-)
*Context*: I wrote this answer before the OP clarified that they are working with a large dataset, so the study (probably) has sufficient power. In my post I consider the more common case of a small study with a "significant finding". Imagine, for example, that the article under review presents an estimate of 1.25 in a domain where previous studies about related phenomena have reported estimates in the range [0.9, 1.1]. How does the article's author respond to the reviewer's request for a post-hoc estimate of power to detect an effect of size 1.25?

---

It's hard to argue that it doesn't matter if a study with a significant p-value is underpowered. If a study has low power *and* the null hypothesis is rejected, then the sample statistic is likely to be a biased estimate of the population parameter. Yes, you are lucky to get evidence against the null hypothesis, but you are also likely to be over-optimistic. The reviewer knows this, so they ask how much power your study had to detect the effect you detected.

It's not recommended to do post-hoc power estimation. This is a much discussed topic on CV; see the references below. In short – if your study was indeed underpowered to detect the true effect size – by doing post-hoc power analysis you compound the issue of overestimating the effect by also overestimating the power. Mathematically, the power at the observed effect is a function of the p-value: if the p-value is small, the post-hoc power is large. It's as if the result is more convincing because the same fact — the null is rejected — gets reported twice.

Okay, so enough bad news. How can you respond to the reviewer? Computing the power retroactively is not meaningful because your study is already done. Instead compute confidence interval(s) for the effect(s) of interest and emphasize estimation, not hypothesis testing. If the power of your study is low, the intervals are wide (as low power means that we can't make precise estimates). If the power of your study is high, the intervals are tight, demonstrating convincingly how much you have learned from your data.

If the reviewer insists on a power calculation, don't compute the power by plugging in the estimated effect for the true effect, aka post-hoc power. Instead do a sensitivity power analysis: for example, fix the sample size, the power and the significance level, and determine the range of effect sizes that can be detected. Or fix the sample size and the significance level, and plot power as a function of effect size. It will be especially informative to know what the power is for a range of realistic effect sizes.

Daniël Lakens discusses power at great length in [Improving Your Statistical Inferences](https://lakens.github.io/statistical_inferences/). There is even a section on "What to do if Your Editor Asks for Post-hoc Power?" He has great advice.

*References*

J. M. Hoenig and D. M. Heisey. The abuse of power. *The American Statistician*, 55(1):19–24, 2001.

A. Gelman. Don't calculate post-hoc power using observed estimate of effect size. *Annals of Surgery*, 269(1), 2019.

[Do underpowered studies have increased likelihood of false positives?](https://stats.stackexchange.com/q/176384/237901)

[What is the post-hoc power in my experiment?
How to calculate this?](https://stats.stackexchange.com/q/430030/237901) [Why is the power of studies that only report significant effects not always 100%?](https://stats.stackexchange.com/questions/263383/why-is-the-power-of-studies-that-only-report-significant-effects-not-always-100) [Post hoc power analysis for a non significant result?](https://stats.stackexchange.com/questions/193726/post-hoc-power-analysis-for-a-non-significant-result) --- This simulation shows that "significant" estimates from underpowered studies are inflated. A study with little power to detect a small effect has more power to detect a large effect. So if the true effect is small and the null hypothesis of no effect is rejected, the estimated effect tends to be larger than the true one. I simulate 1000 studies with 50%, so about half of the studies have p-value < 0.05. The sample means from those "significant" studies are mostly to the right of the true mean 0.1, ie. they overestimate the true mean, often by a lot. ![](https://i.imgur.com/of8bujr.png) ```r library("pwr") library("tidyverse") # Choose settings for an underpowered study mu0 <- 0 mu <- 0.1 sigma <- 1 alpha <- 0.05 power <- 0.5 pwr.t.test(d = (mu - mu0) / sigma, power = power, sig.level = alpha, type = "one.sample") #> #> One-sample t test power calculation #> #> n = 386.0261 #> d = 0.1 #> sig.level = 0.05 #> power = 0.5 #> alternative = two.sided # Sample size to achieve 50% power to detect mean 0.1 with a one-sided t-test n <- 387 # Simulate 1,000 studies with low power set.seed(123) reps <- 1000 studies <- tibble( study = rep(seq(reps), each = n), x = rnorm(reps * n, mean = mu, sd = sigma) ) results <- studies %>% group_by( study ) %>% group_modify( ~ broom::tidy(t.test(.)) ) results %>% # We are only interested in studies where the null is rejected filter( p.value < alpha ) %>% ggplot( aes(estimate) ) + geom_histogram( bins = 33 ) + geom_vline( xintercept = mu, color = "red" ) + labs( x = glue::glue("estimate of true effect {mu} in studies with {100*power}% power"), y = "", title = "\"Significant\" effect estimates from underpowered studies are inflated" ) ``` Created on 2022-04-30 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)
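To make the sensitivity analysis suggested above concrete, here is a minimal sketch with the `pwr` package; the sample size, test type and grid of effect sizes are placeholders that would have to be replaced by values relevant to the actual design.

```r
library("pwr")

# Fix the sample size and significance level, then trace power as a
# function of effect size (a sensitivity power analysis)
n <- 100          # hypothetical sample size per group
alpha <- 0.05
effect_sizes <- seq(0.05, 0.8, by = 0.01)
power_curve <- sapply(effect_sizes, function(d) {
  pwr.t.test(n = n, d = d, sig.level = alpha, type = "two.sample")$power
})
plot(effect_sizes, power_curve, type = "l",
     xlab = "effect size (Cohen's d)", ylab = "power")
abline(h = 0.8, lty = 2)

# Smallest effect size detectable with 80% power at this sample size
pwr.t.test(n = n, power = 0.8, sig.level = alpha, type = "two.sample")$d
```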
The fixed-point combinator FIX (aka the Y combinator) in the (untyped) lambda calculus ($\lambda$) is defined as: FIX $\triangleq \lambda f.(\lambda x. f~(\lambda y. x~x~y))~(\lambda x. f~(\lambda y. x~x~y))$ I understand its purpose and I can trace the execution of its application perfectly fine; **I would like to understand how to derive FIX from first principles**. Here is as far as I get when I try to derive it myself: 1. FIX is a function: FIX $\triangleq \lambda\_\ldots$ 2. FIX takes another function, $f$, to make it recursive: FIX $\triangleq \lambda f.\_\ldots$ 3. The first argument of the function $f$ is the "name" of the function, used where a recursive application is intended. Therefore, all appearances of the first argument to $f$ should be replaced by a function, and this function should expect the rest of the arguments of $f$ (let's just assume $f$ takes one argument): FIX $\triangleq \lambda f.\_\ldots f~(\lambda y. \_\ldots y)$ This is where I do not know how to "take a step" in my reasoning. The small ellipses indicate where my FIX is missing something (although I am only able to know that by comparing it to the "real" FIX). I already have read [Types and Programming Languages](http://rads.stackoverflow.com/amzn/click/0262162091), which does not attempt to derive it directly, and instead refers the reader to [The Little Schemer](http://mitpress.mit.edu/books/little-schemer) for a derivation. I have read that, too, and its "derivation" was not so helpful. Moreover, it is less of a direct derivation and more of a use of a very specific example and an ad-hoc attempt to write a suitable recursive function in $\lambda$.
As Yuval has pointed out, there is not just one fixed-point operator; there are many of them. In other words, the fixed-point equations do not have a single answer, so you can't derive *the* operator from them. It is like asking how people derive $(x,y)=(0,0)$ as a solution of $x=y$. They don't! The equation doesn't have a unique solution.

---

In case what you want to know is how the first fixed-point theorem was discovered: let me say that I also wondered how they came up with the fixed-point/recursion theorems when I first saw them. They seem so ingenious, particularly in the computability-theory form. Unlike what Yuval says, it is not the case that people just played around until they found something. Here is what I have found: as far as I remember, the theorem is originally due to S. C. Kleene. Kleene came up with the original fixed-point theorem by salvaging the proof of inconsistency of Church's original lambda calculus. Church's original lambda calculus suffered from a Russell-type paradox; the modified lambda calculus avoided the problem. Kleene studied the proof of inconsistency, probably to see whether the modified lambda calculus would suffer from a similar problem, and turned the proof of inconsistency into a useful theorem of the modified lambda calculus. Through his work on the equivalence of lambda calculus with other models of computation (Turing machines, recursive functions, etc.) he transferred it to those models as well.

---

How to derive the operator, you might ask? Here is how I keep it in mind. The fixed-point theorem is about removing self-reference. Everyone knows the liar paradox:

> I am a liar.

Or in the more linguistic form:

> This sentence is false.

Now most people think the problem with this sentence is the self-reference. It is not! The self-reference can be eliminated (the problem is with truth: a language cannot, in general, speak about the truth of its own sentences; see [Tarski's undefinability of truth theorem](https://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem)). The form where the self-reference is removed is as follows:

> If you write the following quote twice, the second time inside quotes, the resulting sentence is false: "If you write the following quote twice, the second time inside quotes, the resulting sentence is false:"

There is no self-reference: we have instructions about how to construct a sentence and then do something with it, and the sentence that gets constructed is equal to the instructions. Note that in the $\lambda$-calculus we don't need quotes because there is no distinction between data and instructions.

Now if we analyse this we have $MM$ where $Mx$ is the instruction to construct $xx$ and do something to it:

> $Mx = f(xx)$

So $M$ is $\lambda x. f(xx)$ and we have

> $MM = (\lambda x. f(xx))(\lambda x. f(xx))$

This is for a fixed $f$. If you want to make it an operator, we just add $\lambda f$ and we get $Y$:

> $Y = \lambda f. (MM) = \lambda f.((\lambda x. f(xx))(\lambda x. f(xx)))$

So I just keep in mind the paradox without self-reference, and that helps me understand what $Y$ is about.
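Since most of the code elsewhere in this collection is R, here is a small sketch of the same construction there (R functions are first-class, so the term can be written almost verbatim); the η-expanded wrapper $\lambda y.\, x\,x\,y$ from the question is what keeps each self-application delayed until the recursive call is actually needed.

```r
# FIX as in the question; `half` plays the role of (\x. f (\y. x x y))
FIX <- function(f) {
  half <- function(x) f(function(y) x(x)(y))
  half(half)
}

# The "open" factorial: the recursive call is abstracted out as `rec`
fact_open <- function(rec) function(n) if (n == 0) 1 else n * rec(n - 1)

fact <- FIX(fact_open)
fact(5)  # 120
```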
[Ladner's Theorem](http://doi.acm.org/10.1145/321864.321877) states that if P ≠ NP, then there is an infinite hierarchy of [complexity classes](http://en.wikipedia.org/wiki/Complexity_class) strictly containing P and strictly contained in NP. The proof uses the completeness of SAT under many-one reductions in NP. The hierarchy contains complexity classes constructed by a kind of diagonalization, each containing some language to which the languages in the lower classes are not many-one reducible. This motivates my question: > > Let C be a complexity class, and let D be a complexity class that strictly contains C. If D contains languages that are complete for some notion of reduction, does there exist an infinite hierarchy of complexity classes between C and D, with respect to the reduction? > > > More specifically, I would like to know if there are results known for D = P and C = [LOGCFL](http://qwiki.stanford.edu/wiki/Complexity_Zoo%3aL#logcfl) or C = [NC](http://qwiki.stanford.edu/wiki/Complexity_Zoo%3aN#nc), for an appropriate notion of reduction. --- Ladner's paper already includes Theorem 7 for space-bounded classes C, as Kaveh pointed out in an answer. In its strongest form this says: if NL ≠ NP then there is an infinite sequence of languages between NL and NP, of strictly increasing hardness. This is slightly more general than the usual version (Theorem 1), which is conditional on P ≠ NP. However, Ladner's paper only considers D = NP.
It is very likely that you can accomplish this in a generic setting. Almost certainly such a result **has** been proved in a generic setting already, but the references escape me at the moment. So here's an argument from scratch. The writeup at <http://oldblog.computationalcomplexity.org/media/ladner.pdf> has two proofs of Ladner's theorem. The second proof, by Russell Impagliazzo, produces a language $L\_1$ of the form {$ x01^{f(|x|)}$} where $x$ encodes a satisfiable formula and $f$ is a particular polynomial time computable function. That is, by simply padding SAT with the appropriate number of $1$'s, you can get "NP-intermediate" sets. The padding is performed to "diagonalize" over all possible polynomial time reductions, so that no polynomial time reduction from SAT to $L\_1$ will work (assuming $P \neq NP$). To prove that there are infinitely many degrees of hardness, one should be able to substitute $L\_1$ in place of SAT in the above argument, and repeat the argument for $L\_2 = ${$x 0 1^{f(|x|)} | x \in L\_1$}. Repeat with $L\_i = ${$x 0 1^{f(|x|)} | x \in L\_{i-1}$}. It seems clear that such a proof can be generalized to classes $C$ and $D$, where (1) $C$ is properly contained in $D$, (2) $D$ has a complete language under $C$-reductions, (3) the list of all $C$-reductions can be recursively enumerated, and (4) the function $f$ is computable in $C$. Perhaps the only worrisome requirement is the last one, but if you look at the definition of $f$ in the link, it looks very easy to compute, for most reasonable classes $C$ that I can think of.
Is there a natural class $C$ of CNF formulas - preferably one that has previously been studied in the literature - with the following properties: * $C$ is an easy case of SAT, like e.g. Horn or 2-CNF, i.e., membership in $C$ can be tested in polynomial time, and formulas $F\in C$ can be tested for satisfiability in polynomial time. * Unsatisfiable formulas $F\in C$ are not known to have short (polynomial size) tree-like resolution refutations. Even better would be: there are unsatisfiable formulas in $C$ for which a super-polynomial lower bound for tree-like resolution is known. * On the other hand, unsatisfiable formulas in $C$ are known to have short proofs in some stronger proof system, e.g. in dag-like resolution or some even stronger system. $C$ should not be too sparse, i.e., contain many formulas with $n$ variables, for every (or at least for most values of) $n\in \mathbb{N}$. It should also be non-trivial, in the sense of containing satisfiable as well as unsatisfiable formulas. The following approach to solving an arbitrary CNF formula $F$ should be meaningful: find a partial assignment $\alpha$ s.t. the residual formula $F\alpha$ is in $C$, and then apply the polynomial time algorithm for formulas in $C$ to $F\alpha$. Therefore I would like other answers besides the *all-different constraints* from the currently accepted answer, as I think it is rare that an arbitrary formula will become an all-different constraint after applying a restriction.
I'm not sure why one would also require satisfiable formulas, but there are some articles on the separation between general and tree-like resolution, e.g. [1]. It sounds to me that this is what you want.

[1] Ben-Sasson, Eli, Russell Impagliazzo, and Avi Wigderson. "Near optimal separation of tree-like and general resolution." Combinatorica 24.4 (2004): 585-603.
What is the difference between a programming language and a scripting language? For example, consider C versus Perl. Is the only difference that scripting languages require only an interpreter and don't require compiling and linking?
I think the difference has a lot more to do with the intended use of the language. For example, Python is interpreted, and doesn't require compiling and linking, as is Prolog. I would classify both of these as programming languages.

Programming languages are meant for writing software. They are designed to manage large projects. They can probably call programs, read files, etc., but might not be quite as good at that as a scripting language.

Scripting languages aren't meant for large-scale software development. Their syntax, features, library, etc. are focused more around accomplishing small tasks quickly. This means they are sometimes more "hackish" than programming languages, and might not have all of the same nice features. They're designed to make commonly performed tasks, like iterating through a bunch of files or performing sysadmin tasks, easy to automate. For example, Bash doesn't do arithmetic nicely, which would probably make writing large-scale software in it a nightmare.

As a kind of benchmark: I would never write a music player in Perl, even though I probably could. Likewise, I would never try to use C++ to rename all the files in a given folder.

This line is becoming blurrier and blurrier. JavaScript, by definition a "scripting" language, is increasingly used to develop "web apps" which are more in the realm of software. Likewise, Python initially fit many of the traits of a scripting language but is seeing more and more software developed using Python as the primary platform.
For every k-th operation: right-rotate the array by 1, then delete the k-th element from the end (i.e., the (n-k+1)-th element, where n is the current size). If the k-th element from the end does not exist, delete the first element instead.

E.g.: A = {1, 2, 3, 4, 5, 6}. Rotate the array, so A = {6, 1, 2, 3, 4, 5}, and delete the last element, which is 5, so A = {6, 1, 2, 3, 4}. Rotate the array a second time and delete the second-to-last element, which is 2, so A = {4, 6, 1, 3}. Continuing these steps, at the 4th operation the 4th element from the end does not exist, so the 1st element, which is 1, is deleted and A = {3, 6}. Continuing this procedure, the last remaining element of A is 3, so the output will be 3.

How can this be solved?
This seems like the kind of problem with a mathematical solution (try to find the position with a closed-form formula). However, here is an algorithmic approach that runs in $O(n \log^2 n)$ time. Using a segment tree / Fenwick tree, you can find the number of removed elements in any range (using prefix sums). Keep track of an offset: imagine a pointer to the first element of a cyclic array, and in each step push it one position to the left instead of rotating the array to the right. Using binary search on the Fenwick tree, you can find the index of the element at the required distance from the offset element. Note that after half of the elements have been deleted, $k$ will be greater than the current size, and hence you can directly output the first unremoved element before the offset.
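If a reference implementation is useful for testing a faster solution against, here is a direct quadratic simulation of the process in R; the deletion rule (k-th element from the end if it exists, otherwise the first element) is my reading of the worked example in the question.

```r
simulate_last <- function(a) {
  k <- 0
  while (length(a) > 1) {
    k <- k + 1
    a <- c(a[length(a)], a[-length(a)])   # right-rotate by 1
    if (length(a) >= k) {
      a <- a[-(length(a) - k + 1)]        # delete the k-th element from the end
    } else {
      a <- a[-1]                          # otherwise delete the first element
    }
  }
  a
}

simulate_last(1:6)  # 3, matching the example in the question
```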
I am conducting an ordinal logistic regression. I have an ordinal variable, let's call it Change, that expresses the change in a biological parameter between two time points 5 years apart. Its values are 0 (no change), 1 (small change), 2 (large change). I have several other variables (VarA, VarB, VarC, VarD) measured between the two time points. My intention is to perform an ordinal logistic regression to assess whether the magnitude of Change is more strongly associated with VarA or VarB. I'm really interested only in VarA and VarB, and I'm not trying to create a model. VarC and VarD are variables that I know *may* affect Change, but probably not very much, and in any case I'm not interested in them. I just want to know if the association in the period of observation (5 years) was stronger for VarA or for VarB. Would it be wrong to not include VarC and VarD in the regression?
This depends on the relationships between the predictor variables (how are VarC and VarD related to VarA and VarB?) and also what question you are trying to answer. Consider the possible case where VarA causes VarC which causes the response. If your only interest is the relationship between VarA and the response then including VarC would hide the indirect relationship. But if we are interested in if VarA has a direct effect on the response above and beyond the indirect effect through VarC then including VarC is important. Sometimes it is helpful to draw a diagram with all the different variables and then draw lines/arrows showing the potential and/or interesting relationships between all the variables. Then use that along with the question of interest to decide on the model.
I have a question about the need to use feature selection methods (random forest feature importance values, univariate feature selection methods, etc.) before running a statistical learning algorithm.

We know that to avoid overfitting we can introduce a regularization penalty on the weight vectors. So if I want to do linear regression, I could introduce L2 or L1 or even elastic net regularization parameters. To get sparse solutions, the L1 penalty helps with feature selection. Is it then still necessary to do feature selection before running an L1-regularized regression such as the lasso? Technically, the lasso is already reducing the features via the L1 penalty, so why is feature selection needed before running the algorithm?

I read a research article saying that doing ANOVA and then SVM gives better performance than using SVM alone. Now the question is: SVM inherently does regularization with the L2 norm. In order to maximise the margin, it minimises the weight vector norm, so it is doing regularization in its objective function. Then technically, algorithms such as SVM should not need feature selection methods? But the report still says that doing univariate feature selection before a normal SVM is more powerful. Anyone with thoughts?
I don't think overfitting is the reason that we need feature selection in the first place. In fact, overfitting is something that happens if we don't give our model enough data, and feature selection further *reduces* the amount of data that we pass our algorithm.

I would instead say that feature selection is needed as a preprocessing step for models which do not have the power to determine the importance of features on their own, or for algorithms which get much less efficient if they have to do this importance weighting on their own.

Take for instance a simple k-nearest neighbor algorithm based on Euclidean distance. It will always look at all features as having the same weight or importance to the final classification. So if you give it 100 features but only three of these are relevant for your classification problem, then all the noise from these extra features will completely drown out the information from the three important features, and you won't get any useful predictions. If you instead determine the critical features beforehand and pass only those to the classifier, it will work much better (not to mention be much faster).

On the other hand, look at a random forest classifier. While training, it will automatically determine which features are the most useful by finding an optimal split by choosing from a subset of all features. Therefore, it will do much better at sifting through the 97 useless features to find the three good ones. Of course, it will still run faster if you do the selection beforehand, but its classification power will usually not suffer much by giving it a lot of extra features, even if they are not relevant.

Finally, look at neural networks. Again, this is a model which has the power to ignore irrelevant features, and training by backpropagation will usually converge to using the interesting features. However, it is known that standard training algorithms converge much faster if the inputs are "whitened", i.e., scaled to unit variance and with cross-correlations removed [(LeCun et al, 1998)](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf). Therefore, although you don't strictly need to do feature selection, it can pay in pure performance terms to do preprocessing of the input data.

So in summary, I would say feature selection has less to do with overfitting and more with enhancing the classification power and computational efficiency of a learning method. How much it is needed depends a lot on the method in question.
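Here is a quick, self-contained illustration of the k-NN versus random forest point above; the data set, the 3-informative/97-noise split and the package choices are all made up for the demonstration.

```r
library(class)          # knn()
library(randomForest)   # randomForest()

set.seed(1)
n <- 400
x_signal <- matrix(rnorm(n * 3), n, 3)    # 3 informative features
y <- factor(ifelse(rowSums(x_signal) + rnorm(n, sd = 0.5) > 0, "A", "B"))
x_noise <- matrix(rnorm(n * 97), n, 97)   # 97 irrelevant features
x_all <- cbind(x_signal, x_noise)
colnames(x_all) <- paste0("x", seq_len(ncol(x_all)))

train <- sample(n, n / 2)
acc <- function(pred, truth) mean(pred == truth)

# k-NN: fine with only the informative features, poor once the noise is added
acc(knn(x_signal[train, ], x_signal[-train, ], y[train], k = 5), y[-train])
acc(knn(x_all[train, ],    x_all[-train, ],    y[train], k = 5), y[-train])

# Random forest: largely unaffected by the 97 irrelevant features
rf <- randomForest(x = x_all[train, ], y = y[train])
acc(predict(rf, x_all[-train, ]), y[-train])
```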
A [domino tiling](https://en.wikipedia.org/wiki/Domino_tiling) is a tessellation of a region in the plane by 2 × 1 rectangles. What is a good data type for storing and manipulating such objects?

![](https://upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Pavage_domino.svg/220px-Pavage_domino.svg.png)

In my current manipulation, I use an array to store all the half-squares, numbering them 1 2 for a horizontal domino and 3 4 for a vertical one. It's not ideal, but I can draw the tiling in `ASCII` or using a graphics editor.

``` _ _ _ _ 1 2 |_ _| 3 3 | | | 1 2 |_ _| 4 4 |_|_| ```

In some applications, I have to identify specific features within the array. For example, I might have to count all instances of 2 × 2 squares of dominoes in my array, excluding things like

``` _ _ _|_ 2 1 |_ _| 1 2 ```

These queries are not difficult, but they make me start to question my use of arrays.
There is an alternative, more compact kind of array representation which might be better if you are working with diamonds as defined in the paper you linked (for axis aligned rectangles you need to find the smallest covering diamond). The idea is to checkerboard the plane and assign a 2-bit number to each "black" cell, such that the number represents the orientation of the domino relative to the cell. Consider for example: ![enter image description here](https://i.stack.imgur.com/aEYgZ.png) where $0$ means "horizontal domino pointing east", $1$ means "vertical domino pointing north", etc. Tilting the numbers clockwise yields the array representation: $$A = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 2 & 3 & 3 & 0 \\ 2 & 3 & 0 & 2 \\ 2 & 1 & 0 & 1 \\ 2 & 2 & 2 & 1 \\ \end{pmatrix}$$ To find two dominos forming a square search for a $0$ adjacent to a $2$ or a $1$ adjacent to a $3$. Now, if you deal with big tesselations and are into implemtations sped up by bit operations, you can do this: Let $A$ be a $\{0,1,2,3\}^{m \times n}$ matrix, $b$ be the bit mask $\dots 01010101$, then we can represent the $i^{th}$ row of $A$ as a single integer by using bit-concatenation: $$r\_i = \bigvee\_{j=1}^n (a\_{ij} << 2(j-1))$$ where $\vee$ denotes a bitwise OR and $<<$ a leftshift. Denoting $\oplus$ as bitwise XOR we can calculate $$s\_i=(r\_i \text{ mod } 2^{2n}) \oplus (r\_i >> 2) \oplus b \\ t\_i=r\_i \oplus r\_{i+1} \oplus b, ~~i < m$$ Then domino $a\_{ij}$ and domino $a\_{ij+1}$ form a square iff bits $(s\_i)\_{2j-2}$ and $(s\_i)\_{2j-1}$ are both $1$. Likewise domino $a\_{ij}$ and domino $a\_{i+1j}$ do so iff bits $(t\_i)\_{2j-2}$ and $(t\_i)\_{2j-1}$ are both $1$. Example calculation for rows 1,2 (little-endian!): ``` r_1 00 00 00 00 r_2 10 11 11 00 b 01 01 01 01 xor ----------- 11 10 10 01 ``` Therefore $a\_{11}$ and $a\_{21}$ form a square.
In all of the contexts I've seen loss functions in statistics/machine learning so far, loss functions are additive in observations. i.e.: loss $Q\_D$ of dataset $D$ is an additive aggregation of losses at observations $i\in D$: $Q\_D(\beta)=\sum\_{i\in D}Q\_i(\beta)$. e.g. in the loss that is a simple sum of squared residuals: $Q\_D=\sum\_i(y\_i-X\_i\beta)^2$. This seems sensible, but I am wondering: **Are there contexts in statistics/machine learning in which it happens (or reasons in theory why one might want) that a loss function is used that is not additive (or even separable) in observations?**
**Loss functions are not always additive in observations:** A loss function is function of an estimator (or predictor) and the thing it is estimating (predicting). The loss function is often, but not always, a distance function. Moreover, the estimator (predictor) sometimes, but not always, involves a sum of terms involving a single observation. Generally speaking, the loss function does not always have a form that is additive with respect to the observations. For prediction problems, deviation from this form occurs because of the form of the loss function. For estimation problems, it occurs either because of the form of the loss function, or because of the form of the estimator appearing in the loss function. To see the generality of the loss form for a prediction problem, consider the general case where we have an observed data $\mathbf{y} = (y\_1,...,y\_n)$ and we want to predict the observable vector $\mathbf{y}\_\* = (y\_{n+1},...,y\_{n+k})$ using the predictor $\hat{\mathbf{y}}\_\* = \mathbf{H}(\mathbf{y})$. We can write the loss for this prediction problem as: $$L(\hat{\mathbf{y}}\_\*, \mathbf{y}\_\*) = L(\mathbf{H}(\mathbf{y}), \mathbf{y}\_\*).$$ The loss function in your question is the Euclidean distance between the prediction vector and the observed data vector, which is $L(\hat{\mathbf{y}}\_\*, \mathbf{y}\_\*) = ||\hat{\mathbf{y}}\_\* - \mathbf{y}\_\*||^2 = \sum\_i (\hat{y}\_{\*i} - y\_{\*i})^2$. That particular form is composed of a sum of terms involving the observed values being predicted, and so the additivity property holds in that case. However, there are many other examples of loss functions that give rise to a form that does not have this additivity property. A simple example of two loss functions that are not additive in the observations are when the loss is equal to the prediction error either from the best prediction, or from the worst prediction. In the case of "loss from best prediction" we have the loss function $L(\hat{\mathbf{y}}\_\*, \mathbf{y}\_\*) = \min\_i |\hat{y}\_{\*i} - y\_{\*i}|$, and in "loss from worse prediction" we have the loss function $L(\hat{\mathbf{y}}\_\*, \mathbf{y}\_\*) = \max\_i |\hat{y}\_{\*i} - y\_{\*i}|$. In either case, the loss function is not additive for the individual terms.
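A tiny numerical illustration of the difference, with made-up numbers: the squared-error loss decomposes into a sum over the individual predictions, while the best-case and worst-case losses do not.

```r
y_star <- c(1.0, 2.0, 3.0)   # values being predicted
y_hat  <- c(1.2, 1.7, 3.4)   # predictions

sum((y_hat - y_star)^2)      # additive over observations
min(abs(y_hat - y_star))     # "loss from best prediction"  -- not additive
max(abs(y_hat - y_star))     # "loss from worst prediction" -- not additive
```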
How is [real-time computing](http://en.wikipedia.org/wiki/Real-time_computing) defined in theoretical computer science (e.g. complexity theory)? Are there complexity theoretic models designed to capture the real-time computation?
Like many things in life, there is no one definitive definition. For an algorithm to run in real-time, some people on the theoretical side say that this means it will take constant time per 'something.' Now you have to decide what a 'something' is but let me give a concrete example. Let's say that the input arrives one symbol at a time and you want to output the answer to a query as soon as a new symbol arrives. If calculating that output takes constant time per new symbol then you might say the algorithm runs in real-time. An example of this is real-time exact string matching, which outputs whether a pattern matches the latest suffix of a text in constant time per new symbol. The text is assumed to arrive one symbol at a time. However, an engineering answer will be less worried about "constant time" and more worried about it happening fast in practice and in particular fast enough that the result can be used by the time it is needed. So for example in robotics, if you want to play ping-pong it is useful for the robot to be able to work out where the ball is and move to hit it as the ball arrives, and not after the ball has passed. The asymptotic time complexity of the underlying algorithms will perhaps be of less interest there than just the observation that the code works out the location quickly enough. To give another example, if you want to render video and can do it at 25 frames per second then it is reasonable to say that the rendering is happening in real-time. So basically you have two answers. One for the theoreticians/algorithmists and one that just says that you are doing the work as you need it on the fly. EDIT: I should probably add that one extra feature one should require of even a constant time algorithm is that the time complexity is not amortised. In this context, real-time == unamortised constant time.
I've always thought vaguely that the answer to the above question was affirmative along the following lines. Gödel's incompleteness theorem and the undecidability of the halting problem both being negative results about decidability and established by diagonal arguments (and in the 1930's), so they must somehow be two ways to view the same matters. And I thought that Turing used a universal Turing machine to show that the halting problem is unsolvable. (See also [this math.SE](https://math.stackexchange.com/questions/108964/halting-problem-and-universality) question.) But now that (teaching a course in computability) I look closer into these matters, I am rather bewildered by what I find. So I would like some help with straightening out my thoughts. I realise that on one hand Gödel's diagonal argument is very subtle: it needs a lot of work to construct an arithmetic statement that can be interpreted as saying something about it's own derivability. On the other hand the proof of the undecidability of the halting problem I found [here](http://en.wikipedia.org/wiki/Halting_problem#Sketch_of_proof) is extremely simple, and doesn't even explicitly mention Turing machines, let alone the existence of universal Turing machines. A practical question about universal Turing machines is whether it is of any importance that the alphabet of a universal Turing machine be the same as that of the Turing machines that it simulates. I thought that would be necessary in order to concoct a proper diagonal argument (having the machine simulate itself), but I haven't found any attention to this question in the bewildering collection of descriptions of universal machines that I found on the net. If not for the halting problem, are universal Turing machines useful in any diagonal argument? Finally I am confused by [this further section](http://en.wikipedia.org/wiki/Halting_problem#Relationship_with_G.C3.B6del.27s_incompleteness_theorem) of the same WP article, which says that a weaker form of Gödel's incompleteness follows from the halting problem: "a complete, consistent and sound axiomatisation of all statements about natural numbers is unachievable" where "sound" is supposed to be the weakening. I know a theory is consistent if one cannot derive a contradiction, and a complete theory about natural numbers would seem to mean that all true statements about natural numbers can be derived in it; I know Gödel says such a theory does not exist, but I fail to see how such a hypothetical beast could possibly fail to be sound, i.e., also derive statements which are false for the natural numbers: the negation of such a statement would be true, and therefore by completeness also derivable, which would contradict consistency. I would appreciate any clarification on one of these points.
Universal Turing machines are useful for some diagonal arguments, e.g in the separation of some classes in the [hierarchies of time](http://en.wikipedia.org/wiki/Time_hierarchy_theorem) or [space](http://en.wikipedia.org/wiki/Space_hierarchy_theorem) complexity: the universal machine is used to prove there is a decision problem in $\mbox{DTIME}(f(n)^3)$ but not in $\mbox{DTIME}(f(n/2))$. (Better bounds can be found in the WP article) However, to be perfectly honest, if you look closely, the universal machine is not used in the `negative' part: the proof supposes there is a machine $K$ that would solve a time-limited version of the halting problem and then proceeds to build $¬KK$. (No universal machine here) The universal machine is used to solve the time-limited version of the halting problem in a larger amount of time.
I’m a sucker for mathematical elegance and rigour, and now am looking for such literature on algorithms and algorithm analysis. Now, it doesn’t matter much to me *what* algorithms are covered, but very much *how* they are presented and treated.¹ I most value a very clear and precise language which *defines* all used notions in a stringent and abstract manner. I found that the classic *Introduction to Algorithms*, by Cormen, Leiserson, Rivest and Stein is pretty neat, but doesn’t handle the mathematics well and is quite informal with its proofs and definitions. Sipser’s *Introduction to the Theory of Computation* seems better in that regard, but still offers no seamless transition from mathematics to algorithms. Can anyone recommend something? --- ¹: The algorithms should at least invole the management of their needed data using classical non-trivial abstract data structures like graphs, arrays, sets, lists, trees and so on – preferably also operating on such data structures. I wouldn’t be too interested if the issue of usage and management of data structures was ignored altogether. I don’t care much about the problems solved with them, though.
Well, you can always consider a Turing machine equipped with an oracle for the ordinary Turing machine halting problem. That is, your new machine has a special tape, onto which it can write the description of an ordinary Turing machine and its input and ask if that machine halts on that input. In a single step, you get an answer, and you can use that to perform further computation. (It doesn't matter whether it's in a single step or not: it would be enough if it was guaranteed to be in some finite number of steps.) However, there are two problems with this approach. 1. Turing machines equipped with such an oracle can't decide their own halting problem: Turing's proof of the undecidability of the ordinary halting problem can easily be modified to this new setting. In fact, there's an infinite hierarchy, known as the "Turing degrees", generated by giving the next level of the hierarchy an oracle for the halting problem of the previous one. 2. Nobody has ever suggested any way in which such an oracle could be physically implemented. It's all very well as a theoretical device but nobody has any clue how to build one. Also, note that ZFC is, in a sense, weaker than naive set theory, not stronger. ZFC can't express Russell's paradox, whereas naive set theory can. As such, a better analogy would be to ask whether the halting problem is decidable for weaker models of computation than Turing machines. For example, the halting problem for deterministic finite automata (DFAs) is decidable, since DFAs are guaranteed to halt for every input.
I have been struggling with [flat file databases](https://en.wikipedia.org/wiki/Flat_file_database) and corresponding statistical packages for almost 20 years now (from Excel to SPSS, then Stata, and currently R). However, I have always had to convert complex and multidimensional [relational databases](https://en.wikipedia.org/wiki/Relational_database) (eg in Access or MySQL) to often overly simplified flat sheet databases, which is at best time consuming (but often means reducing the amount of information available for each analysis). Indeed, the approach I have always followed is the typical one of converting a relational database through specific queries into one or more flat file databases. While this is simple enough for most analyses, especially univariate and bivariate, it may become more confusing for multivariable and multivariate analyses, as it requires taking multiple and complex queries, and most importantly often oversimplifying the data themselves. Now that I try to get more acquainted with big data and data science, I wonder whether the shift to big data will require also a shift to data analysis encompassing multiple tables and relations, without diluting the efficiency and power of a relational database when it is converted into multiple flat file databases. So, my question is, simply: is it possible to directly perform complex (eg multivariable) analyses of relational databases? And if yes, how? This is not a philosophical question (only). For instance, I am now working on a relatively large (reaching 2000 patients) observational study on transcatheter aortic valve implantation for severe aortic stenosis ([RISPEVA](https://clinicaltrials.gov/ct2/show/NCT02713932)). It is based on a MySQL electronic case report form which corresponds to 12 separate tables with complex relations and often multiple entries per each patient. My approach so far to try to identify predictors of long-term death (eg if looking for a score) has been, as usual, to create multiple tables through queries, and then distill the key features capable of predicting death. This means going through multiple stages of analysis and, at best, it is time consuming. My fear is however that it might overlook one or more of the relational features of the data, and thus loosing precision or accuracy. Could it be done in a different fashion, directly analyzing the relational database as it stands?
My understanding of your question is that you are interested in methods to uncover multidimensional relationships in data yet are reluctant to take low-dimensional slices of the data for analysis. This is, in a sense, the basis of many machine learning algorithms that use data in high dimensions to make predictions or classifications with often very complex rules that are learned directly from the data. There are classes of relational methods which perhaps fit more neatly into what you are thinking of, however. For example, the [infinite relational model](http://www.psy.cmu.edu/~ckemp/papers/KempTGYU06.pdf) is a Bayesian nonparametric framework for identifying hidden structure across many dimensions in a way that appears to conceptually match what you want. For a sample problem that this might be used for, consider a relational database which contains 3 tables with 3 different primary keys and containing information on a set of cases $S$, a set of patients $P$ and a set of doctors $D$ that performed procedures during these cases. I offer this as a low-dimensional example but all of this can be scaled up to include more data. Then, suppose that you have an indicator variable denoting whether or not the patient had a good outcome. As shown in the paper I linked, you could simultaneously find partitionings of each of $S$, $D$ and $P$ such that each partition cell contained similar outcomes. This learning is done by performing optimization of the likelihood of the data under a Bayesian model. This might inform you as to which doctors are good or bad, or whether certain patients are particularly troublesome for a given procedure. Again, this framework is flexible and affords a range of generative models for the underlying process. This may be more complex than what you desired - it's a bit of a jump from Excel or SPSS to writing custom inference code in another programming language. Still, it's how I would approach this problem.
I am interested in modeling objects, from object oriented programming, in dependent type theory. As a possible application, I would like to have a model where I can describe different features of imperative programming languages. I could only find one Paper on modeling objects in dependent type theory, which is: [Object-oriented programming in dependent type theory by A. Setzer (2006)](http://www.cs.swan.ac.uk/~csetzer/articles/objectOrientedProgrammingInDepTypeTheoryTfp2006PostProceedings.pdf) Are there further references on the topic that I missed and perhaps are there more recent ones? Is there perhaps an implementation (i.e. proof) available for a theorem prover, like Coq or Agda?
Some early(?) work done in this area was by Bart Jacobs (Nijmegen) and Marieke Huisman. Their work is based on the PVS tool and relied on a coalgebraic encoding of classes (if I remember correctly). Look at Marieke's [publication page](http://wwwhome.ewi.utwente.nl/~marieke/papers.html) for papers in the year 2000 and her PhD thesis in 2001. Also look at the papers by Bart Jacobs cited in the A Setzer paper you mention. Back in those days, they had something called the LOOP tool, but it seems to have vanished from the internets. There is a workshop series known as [FTfJP](http://www.cs.ru.nl/ftfjp/) (Formal Techniques for Java-like Programs) that addresses the formal verification of OO programs. Undoubtedly some of the work uses dependent type theory/higher-order logic. The workshop series has been running for some 14 years.
The problem is whether a graph (which we represent as a flow network) has a single min-cut, or whether there could be multiple min-cuts with the same maximum flow value. I've yet to encounter a well-explained algorithm for solving this problem, let alone a complexity analysis or proof. I was not able to find a resource to help me answer the problem, and I'm not sure where else to search, so I thought asking here could help me better understand the problem and the algorithms that solve it.
Here is a simple algorithm that determines whether a flow network $G=(V, E, c)$ with source $s$ and sink $t$ has a single min-cut or not.

1. Find a maximum flow $f$ of $G$. Let $R\_f$ be the residual network of $G$ with respect to $f$.
2. Let $X$ be the set of all nodes that are reachable from $s$ in $R\_f$.
3. Let $Y$ be the set of all nodes from which $t$ is reachable or, equivalently, all nodes reachable from $t$ in $R\_f$ with the direction of every edge reversed.
4. If $X\cup Y=V$, there is a unique minimum cut (namely $(X,Y)$). Otherwise, there is more than one minimum cut.

#### Why is the algorithm above correct?

First, we know that $(X, V\setminus X)$ is a minimum cut. By symmetry, so is $(V\setminus Y, Y)$. Suppose $(S,T)$ is an arbitrary minimum cut of $G$ where $s\in S$ and $t\in T$. [This post](https://cs.stackexchange.com/q/149194/91753) shows that $X$ is contained in $S$. By symmetry, $Y$ is contained in $T$. Hence, $(S,T)$ is the unique minimum cut iff $X\cup Y=V$.

#### Time-complexity of the algorithm

It mostly depends on the complexity of [the algorithm that is used to compute a maximum flow](https://en.wikipedia.org/wiki/Maximum_flow_problem#Algorithms). It takes $O(E)$ time to compute $X$ and $Y$, given the max flow $f$. It takes $O(V)$ time to check whether $X\cup Y=V$.
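Here is a small sketch of steps 2-4 in R, assuming a maximum flow has already been computed by whatever max-flow routine you prefer; the capacity matrix, flow and node labels below are made up for illustration.

```r
# reach(adj, start): vertices reachable from `start` in the directed graph
# given by the boolean adjacency matrix `adj` (simple BFS)
reach <- function(adj, start) {
  seen <- rep(FALSE, nrow(adj)); seen[start] <- TRUE
  queue <- start
  while (length(queue) > 0) {
    u <- queue[1]; queue <- queue[-1]
    nxt <- which(adj[u, ] & !seen)
    seen[nxt] <- TRUE
    queue <- c(queue, nxt)
  }
  which(seen)
}

# cap[u, v]: capacity of edge u -> v;  f[u, v]: a maximum flow on it
has_unique_min_cut <- function(cap, f, src, sink) {
  # residual edge u -> v: leftover forward capacity, or flow on v -> u to undo
  res <- (cap - f > 0) | (t(f) > 0)
  X <- reach(res, src)       # step 2: nodes reachable from s in the residual network
  Y <- reach(t(res), sink)   # step 3: nodes from which t is reachable
  length(union(X, Y)) == nrow(cap)   # step 4
}

# Example: s=1, a=2, b=3, t=4, two disjoint unit-capacity paths s->a->t and s->b->t
cap <- matrix(0, 4, 4)
cap[1, 2] <- cap[1, 3] <- cap[2, 4] <- cap[3, 4] <- 1
f <- cap                                        # this maximum flow saturates every edge
has_unique_min_cut(cap, f, src = 1, sink = 4)   # FALSE: several minimum cuts exist
```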
I can't figure out how to prove undecidability with a reduction.

EDIT: I've tried a reduction from the halting problem and from the acceptance problem. "Halting on at least one input" involves infinitely many inputs (you would have to check all possible inputs), but the halting problem only has one input for the TM. I don't understand how I can formally define a machine that, using a machine that checks all inputs, solves all cases of the halting problem.
It more or less depends on the implementation. If you have implemented the matrix using linked lists (which isn't what we usually do), then adding a new vertex to $G$ will be linear in the number of vertices of $G$. But we don't usually use linked lists, because then reading/checking an edge would take up to $O(|V(G)|^2)$ time.

To allow random/constant-time access to the adjacency matrix, we need to use fixed-size arrays of arrays (or a matrix). Now, when you add a new vertex, we are required to copy the whole matrix into a bigger matrix: hence the quadratic time. In practice, we often use *vectors* instead of *arrays*, which reserve extra hidden capacity, and hence adding a vertex will indeed take linear time on average (amortized). <https://www.geeksforgeeks.org/vector-in-cpp-stl/>
I understand the textbook explanation of how to use dynamic programming to find the minimum edit distance between two strings, but how do we pick the second string? I don't think the entire dictionary is compared, since sometimes the difference is in the middle or at the end. I assume that, in the end, the string suggested is the one with the minimum edit distance after building a certain number of $n \times m$ tables, where $n$ is the length of the typed string and $m$ is the length of each other word that may be close.
Companies with search engines (e.g. Microsoft or Google) don't always directly search for the string with the smallest Levenshtein distance. They have a huge database of search queries, from which they have developed a huge database of commonly misspelled/mistyped variants, and what word the user probably meant to type instead. They also have a huge corpus of text, and can use this to (for example) predict which word is most likely to come next based on what you've typed so far, or to assist with autocompletion. The set of likely words is much smaller than the set of possible words. Don't underestimate the value of understanding exactly how real humans misspell or mistype things. For example, when you misspell something, you rarely get the first letter wrong, unless it's an ambiguous letter such as a vowel or "c" vs "k". With all that said, let's assume that you're not doing any of that, and just want to find a string with edit distance as close as possible. The general idea is to find a set of candidate words first (e.g. all words within a certain edit distance, or all words with promising sub-matches), and then use some kind of finer-grained metric to decide which member of the set to suggest. A simple approach is to use a trie, such as a [ternary search trie](https://en.wikipedia.org/wiki/Ternary_search_tree). Another option is to combine [k-mer](https://en.wikipedia.org/wiki/K-mer) matches.
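As a minimal sketch of the "candidate set plus finer-grained ranking" idea, base R's `adist()` computes Levenshtein distances; the dictionary and frequency counts below are invented, and a real system would prune the dictionary with a trie or k-mer index rather than scanning all of it.

```r
dictionary <- c("receive", "believe", "relieve", "retrieve", "deceive")
freq       <- c(900, 700, 300, 250, 400)   # hypothetical usage counts

typed <- "recieve"
d <- drop(adist(typed, dictionary))        # Levenshtein distance to each word

# Keep near matches, then rank by distance and break ties by frequency
candidates <- data.frame(word = dictionary, dist = d, freq = freq)
candidates <- candidates[candidates$dist <= 2, ]
candidates[order(candidates$dist, -candidates$freq), ]
```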
I am reading/studying this paper [1](https://arxiv.org/pdf/1601.00670.pdf) and got confused with some expressions. It might be basic for many of you, so my apologizes. In the paper the following prior model is assumed: $\mu\_k \sim \mathcal{N}(0, \sigma^2) \\ c\_i \sim Categorical(1/K, ... 1/K) \\ x\_i|c\_i, \mu \sim \mathcal{N}(c\_i^{T}\mu, 1)$ The joint density is modeled as follows: $p(x, c, \mu) = p(\mu)\prod\_{i=1}^{n}p(c\_i)p(x\_i|c\_i, \mu)$ Using the mean-field approximation as, $q(\mu, c) = \prod\_{k=1}^{K}q(\mu\_k; m\_k, s\_k^{2}) \prod\_{i=1}^{n}q(c\_i;\varphi\_i)$ the authors arrive to the ELBO, $ELBO(\textbf{m}, \textbf{s}^2, \varphi) = \sum\_{k=1}^{K}\mathbb{E}[\log p(\mu\_k);m\_k, s\_k^{2}] + \\ + \sum\_{i=1}^{n}(\mathbb{E}[\log p(c\_i);\varphi\_i] + \mathbb{E}[\log p(x\_i|c\_i, \mu); \textbf{m}, \textbf{s}^{2}, \varphi\_i]) + \\ - \sum\_{i=1}^{n}\mathbb{E}[\log q(c\_i;\varphi\_i)] - \sum\_{k=1}^{K}\mathbb{E}[\log q(\mu\_k; m\_k, s\_k^{2})]$ I am kind of lost in how to compute the ELBO. E.g., the first term is the prior on $\mu\_k$, which is a zero-mean Gaussian. Then I would say that term is zero. Am I right? In the second term, $\sum\_{i=1}^{n}(\mathbb{E}[\log p(c\_i);\varphi\_i]$, should it be $\log (K)$? Can someone give me a hint how to compute this equation? Besides this, the paper goes on presenting the update algorithm on page 14. The update equation for the latent variables $\varphi\_i$ is: For $i=1....n$ $ \varphi\_{ik} \propto \texttt{exp}\{\mathbb{E}[\mu\_k; m\_k, s\_k^{2}]x\_i - \mathbb{E}[\mu\_k^{2};m\_k,s\_k^{2}]/2\}$ Again, $\mathbb{E}[\cdot]$ is computed w.r.t. $q(\cdot)$, and, assuming that $\mu\_k$ is a Gaussian distribution centered here at $m\_k$, the first term should be simply $\texttt{exp}\{m\_k x\_i\}$ ? the second term just $\texttt{exp}\{(s\_k^{2} + m\_k^{2})/2\}$ ? Help in understanding these expressions would be very appreciated! Thanks!
Ok, I believe I got some feeling about what, e.g., the first term of the ELBO might be:

$\sum\_{k=1}^{K}\mathbb{E}[\log p(\mu\_k);m\_k,s\_k^{2}] = \sum\_{k=1}^{K}\mathbb{E}[-\frac{1}{2}\log(2\pi\sigma^2)-\frac{\mu\_k^{2}}{2\sigma^{2}};m\_k,s\_k^{2}] = \\ \sum\_{k=1}^{K}-\frac{1}{2}\log(2\pi\sigma^2)-\mathbb{E}[\frac{\mu\_k^{2}}{2\sigma^{2}};m\_k,s\_k^{2}] = \sum\_{k=1}^{K}-\frac{1}{2}\log(2\pi\sigma^2)-\frac{\mathbb{E}[\mu\_k^{2};m\_k,s\_k^{2}]}{2\sigma^{2}} = \\ \sum\_{k=1}^{K}-\frac{1}{2}\log(2\pi\sigma^2)-\frac{s\_k^{2}+m\_k^{2}}{2\sigma^{2}} = -\frac{K}{2}\log(2\pi\sigma^2)-\sum\_{k=1}^{K}\frac{s\_k^{2}+m\_k^{2}}{2\sigma^{2}},$

which is a function of the variational parameters and hence can be computed.
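For reference (this is just the standard Gaussian/uniform bookkeeping, not something specific to the paper), the remaining expectations follow the same pattern: under the mean-field family $q(\mu\_k) = \mathcal{N}(m\_k, s\_k^{2})$, and $p(c\_i)$ is uniform over the $K$ categories, so

$\mathbb{E}[\mu\_k;m\_k,s\_k^{2}] = m\_k, \qquad \mathbb{E}[\mu\_k^{2};m\_k,s\_k^{2}] = s\_k^{2}+m\_k^{2}, \qquad \mathbb{E}[\log p(c\_i);\varphi\_i] = \log\frac{1}{K} = -\log K.$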
I'm following-up on [this great answer](https://stats.stackexchange.com/a/540019/140365). Essentially, I was wondering how could misspecification of random-effects bias the estimates of fixed-effects? So, can the same set of fixed-effect coefficients become biased if we create models that only differ in their random-effect specification? Also as a conceptual matter, can we say in mixed-effect models, the fixed-effect coef is some kind of (weighted) average of the individual regression counterparts fit to each individual cluster and that is why fixed-effect coefs in mixed models can prevent [something like this Simpson's Paradox case](https://stats.stackexchange.com/a/478580/42952) from happening? A possible `R` demonstration is appreciated.
> Can fixed-effects become biased due to random structure misspecification

Yes they can. Let's do a simulation in R to show it. We will simulate data according to the following model:

```
Y ~ treatment + time + (1 | site) + (time | subject)
```

So we have fixed effects for `treatment` and `time`, random intercepts for `subject` nested within `site` and random slopes for `time` over `subject`. There are many things that we can vary with this simulation and obviously there is a limit to what I can do here. But if you (or others) have some suggestions for altering the simulations, then please let me know. Of course you can also play with the code yourself :)

In order to look at bias in the fixed effects we will do a Monte Carlo simulation. We will make use of the following helper function to determine if the model converged properly or not:

```r
library(lme4)    # lmer, lFormula, isSingular, sdcor2cov
library(dplyr)   # %>% and mutate

hasConverged <- function (mm) {
  if ( !(class(mm)[1] == "lmerMod" | class(mm)[1] == "lmerModLmerTest")) stop("Error must pass a lmerMod object")
  retval <- NULL
  if(is.null(unlist(mm@optinfo$conv$lme4))) {
    retval = 1
  } else {
    if (isSingular(mm)) {
      retval = 0
    } else {
      retval = -1
    }
  }
  return(retval)
}
```

So we will start by setting up the parameters for the nested factors:

```r
n_site <- 100; n_subject_site <- 5; n_time <- 2
```

which are the number of sites, the number of subjects per site and the number of measurements within subjects. So now we simulate the factors:

```r
dt <- expand.grid(
  time = seq(0, 2, length = n_time),
  site = seq_len(n_site),
  subject = seq_len(n_subject_site),
  reps = 1:2
) %>%
  mutate(
    subject = interaction(site, subject),
    treatment = sample(0:1, size = n_site * n_subject_site, replace = TRUE)[subject],
    Y = 1
  )
X <- model.matrix(~ treatment + time, dt)  # model matrix for fixed effects
```

where we also add a column of 1s for the response at this stage in order to make use of the `lFormula` function in `lme4` which can construct the model matrix of random effects `Z`:

```r
myFormula <- "Y ~ treatment + time + (1 | site) + (time|subject)"
foo <- lFormula(eval(myFormula), dt)
Z <- t(as.matrix(foo$reTrms$Zt))
```

Now we set up the parameters we will use in the simulations:

```r
# fixed effects
intercept <- 10; trend <- 0.1; effect <- 0.5

# SDs of random effects
sigma_site <- 5; sigma_subject_ints <- 2; sigma_noise <- 1; sigma_subj_slopes <- 0.5

# correlation between intercepts and slopes for time over subject
rho_subj_time <- 0.2

betas <- c(intercept, effect, trend)  # Fixed effects parameters
```

Then we perform the simulations:

```r
n_sim <- 200

# vectors to store the fixed effects from each simulation
vec_intercept <- vec_treatment <- vec_time <- numeric(n_sim)

for (i in 1:n_sim) {
  set.seed(i)

  u_site <- rnorm(n_site, 0, sigma_site)  # random intercepts for site, with SD sigma_site

  cormat <- matrix(c(sigma_subject_ints, rho_subj_time, rho_subj_time, sigma_subj_slopes), 2, 2)  # SD-correlation matrix (SDs on the diagonal, correlation off the diagonal)
  covmat <- lme4::sdcor2cov(cormat)
  umat <- MASS::mvrnorm(n_site * n_subject_site, c(0, 0), covmat, empirical = TRUE)  # simulate the random effects
  u_subj <- c(rbind(umat[, 1], umat[, 2]))  # lme4 needs the random effects in this order (interleaved) when there are slopes and intercepts
  u <- c(u_subj, u_site)

  e <- rnorm(nrow(dt), 0, sigma_noise)  # residual error

  dt$Y <- X %*% betas + Z %*% u + e

  m0 <- lmer(myFormula, dt)

  summary(m0) %>% coef() -> dt.tmp

  if(hasConverged(m0)) {
    vec_intercept[i] <- dt.tmp[1, 1]
    vec_treatment[i] <- dt.tmp[2, 1]
    vec_time[i] <- dt.tmp[3, 1]
  } else {
    vec_intercept[i] <- vec_treatment[i] <- vec_time[i] <- NA
  }
}
```

And finally we can check for bias:

```r
mean(vec_intercept, na.rm = TRUE)
## [1] 10.04665
mean(vec_treatment, na.rm = TRUE)
## [1] 0.497358
mean(vec_time, na.rm = TRUE)
## [1] 0.09761494
```

...and these agree closely with the values used in the simulation: 10, 0.5 and 0.1.

Now, let us repeat the simulations, based on the same model:

```r
Y ~ treatment + time + (1 | site) + (time|subject)
```

but instead of fitting this model, we will fit:

```r
Y ~ treatment + time + (1 | site)
```

So we just need to make a simple change:

```r
m0 <- lmer(myFormula, dt)
```

to

```r
m0 <- lmer(Y ~ treatment + time + (1 | site), data = dt )
```

And the results are:

```r
mean(vec_intercept, na.rm = TRUE)
## [1] 10.04169
mean(vec_treatment, na.rm = TRUE)
## [1] 0.5068864
mean(vec_time, na.rm = TRUE)
## [1] 0.09761494
```

So that's all good. Now we make a simple change:

```r
n_site <- 4
```

So now, instead of 100 sites, we have 4 sites. We retain the number of subjects per site (5) and the number of time points per subject (2). For the "correct" model, the results are:

```r
mean(vec_intercept, na.rm = TRUE)
## [1] 10.16447
mean(vec_treatment, na.rm = TRUE)
## [1] 0.422812
mean(vec_time, na.rm = TRUE)
## [1] 0.1049933
```

Now, while the intercept and time are close to unbiased, the `treatment` fixed effect is a little off (0.42 vs 0.5, a bias of around -15%, which perhaps strengthens the argument for not fitting random intercepts at all for such a small group, even when the random structure is correct). But, if we fit the "wrong" model, the results are:

```r
mean(vec_intercept, na.rm = TRUE)
## [1] 10.0194
mean(vec_treatment, na.rm = TRUE)
## [1] 0.7084542
mean(vec_time, na.rm = TRUE)
## [1] 0.1029664
```

So now we find a bias of around +42%. As mentioned above, there are a huge number of possible ways this simulation can be altered and adapted, but it does show that biased fixed effects can result when the random structure is wrong, as requested.
"The same value in all the parameters makes all the neurons have the same effect on the input, which causes the gradient with respect to all the weights is the same and, therefore, the parameters always change in the same way." Taken from my course.
a. For a beginner I would suggest the [fullstackdeeplearning](https://fullstackdeeplearning.com/spring2021/lecture-6/) course; it's a modern overview of tools and best practices for ML in production. As you can see below, there are a lot of moving pieces. [![enter image description here](https://i.stack.imgur.com/rBfCn.png)](https://i.stack.imgur.com/rBfCn.png) b. What you are asking for can be done with Spark + Airflow. In particular, Airflow (or similar tools such as Luigi) allows you to create very customised data pipelines. The learning curve is a bit steep, but there are good resources available online. c. The course above should answer your questions, as the data side is not really deep-learning specific, but applies also to data-science workflows.
Going through [some knowledge representation tutorials](http://www.cs.toronto.edu/~sheila/384/w11/) on resolution at the moment, I came across [slide 05.KR, no77](http://www.cs.toronto.edu/~sheila/384/w11/Lectures/csc384w11-Lecture-05-KR.pdf). There it is mentioned that "the procedure is also complete". I think this completeness cannot mean that if a sentence is entailed by the KB, then it will be derived by resolution. For example, resolution cannot derive $(q \lor \neg q)$ from a KB with the single clause $\neg p$. (Example from KRR, Brachman and Levesque, page 53). Could anyone help me figure out what is meant in this slide? Does the completeness in the slide refer to being refutation-complete rather than a complete proof procedure?
Resolution is only refutationally complete, as you mentioned. This is *intended* and very useful, because it drastically reduces the search space. Instead of having to eventually derive every possible consequence (to find a proof of some conjecture), resolution is only trying to derive the empty clause.
Let's say I have a context-free language. It can be recognised by a pushdown automaton. Chances are it can't be parsed with a regular expression, as regular expressions are not as powerful as pushdown automata. Now, let's put an additional constraint on the language: the maximum recursion depth must be finite. Because the stack size has an upper bound in this case, my understanding is that there is only a finite number of reachable stack configurations. This means I could number them 0, 1, 2, 3, ..., N. So, I should be able to create a deterministic finite automaton (DFA) with states 0, 1, 2, 3, ..., N that recognises the same language that the pushdown automaton recognises. Now, if I'm able to create an equivalent DFA, doesn't it mean that there exists a regular expression that can parse the context-free language with bounded recursion depth? So, my theory is that all context-free languages that have a bounded recursion depth can be parsed with regular expressions. Is this theory correct? Of course, the theory says nothing about the complexity of the regular expression; it just says such a regular expression should exist. So, in other words: if your stack memory is limited, a regexp can do the job of an HTML/XML parser! In principle, isn't it true that computers with finite memory are actually DFAs and not Turing machines?
We can take it even further: if we put a limit on the size of the HTML/XML, say 1PB, then there is only a finite number of them, so we can trivially parse them in $O(1)$ using a giant look-up table. However, that doesn't seem to tell us much about the complexity of parsing HTML/XML in practice. The issue at stake here is *modeling*. A good model abstracts away the salient points of a real-world situation in a way that makes the situation amenable to mathematical inquiry. Modeling HTML/XML documents as instances of an arbitrarily recursive language is a better model in practice than your suggestion or mine.
I need a data structure which can hold millions of elements; the minimum and maximum must be accessible in constant time, and the time complexity of inserting and erasing an element must be better than linear.
A basic data structure that allows insertion and deletion in time $\Theta(\log n)$ are [balanced binary search trees](https://en.wikipedia.org/wiki/Balanced_binary_search_tree). Their memory overhead is reasonable (in case of AVL trees, two pointers and three bits per entry) so millions of entries are no problem at all on modern machines. Note that in a search tree, finding the minimum (or maximum) is conceptually easy by descending always left (right) starting in the root. This works in time $\Theta(\log n)$, too, which is too slow for you. However, we can certainly store pointers to these tree nodes, similar to front and end pointers in double-linked linear lists. But what happens when the elements are deleted? In this case, we have to find the [in-order](https://en.wikipedia.org/wiki/In-order_traversal#In-order_.28symmetric.29) successor (predecessor) and update the pointer to the minimum (maximum). Finding this node works in time $O(\log n)$ so it does not hurt deletion time, asymptotically. You can, however, enable time $O(1)$ deletion of minimum and maximum by *threading* the tree, that is maintaining -- in addition to the binary search tree -- a double-linked list in in-order. Then, finding the new minimum/maximum is possible in time $O(1)$. This list requires additional space (two pointers per entry) and has to be maintained during insertions and deletions; this does not make the asymptotics worse but certainly slows down *every* such operation (I leave the details to you). So you have to trade-off the options given your application, that is which operations occur more often and which you want to be fastest. Note that trees, as all linked structures, tend to be bad for memory hierarchies since they don't necessarily preserve data locality. If your sets are so large that they don't fit into cache completely, you should check out [B-trees](https://en.wikipedia.org/wiki/B-tree) which are designed to minimise page loads. The above works with them, too.
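Purely as a practical illustration of the required operation profile (constant-time access to both ends, sub-linear insert and delete), here is a short Python sketch using the third-party `sortedcontainers` package -- not a hand-rolled threaded AVL tree, and the example values are arbitrary.

```python
from sortedcontainers import SortedList  # pip install sortedcontainers

s = SortedList()
for x in (42, 7, 99, 15):
    s.add(x)            # sub-linear insertion, elements kept in sorted order

print(s[0], s[-1])      # minimum and maximum by indexing the ends: 7 99
s.remove(99)            # sub-linear deletion
print(s[0], s[-1])      # 7 42
```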
My question is simple: > > What is the worst-case running time of the best known algorithm for computing an [eigendecomposition](http://mathworld.wolfram.com/EigenDecomposition.html) of an $n \times n$ matrix? > > > Does eigendecomposition reduce to matrix multiplication or are the best known algorithms $O(n^3)$ (via [SVD](http://en.wikipedia.org/wiki/Singular_value_decomposition)) in the worst case? Please note that I am asking for a worst case analysis (only in terms of $n$), not for bounds with problem-dependent constants like condition number. **EDIT**: Given some of the answers below, let me adjust the question: I'd be happy with an $\epsilon$-approximation. The approximation can be multiplicative, additive, entry-wise, or whatever reasonable definition you'd like. I am interested if there's a known algorithm that has better dependence on $n$ than something like $O(\mathrm{poly}(1/\epsilon)n^3)$? **EDIT 2**: See [this related question](https://cstheory.stackexchange.com/questions/3115/complexity-of-finding-the-eigendecomposition-of-a-symmetric-matrix) on *symmetric matrices*.
Ryan answered a similar question on mathoverflow. Here's the link: [mathoverflow-answer](https://mathoverflow.net/questions/24287/what-is-the-best-algorithm-to-find-the-smallest-nonzero-eigenvalue-of-a-symmetric/24294#24294) Basically, you can reduce eigenvalue computation to matrix multiplication by computing a symbolic determinant. This gives a running time of O($n^{\omega+1}m$) to get $m$ bits of the eigenvalues; the best currently known runtime is O($n^3+n^2\log^2 n\log b$) for an approximation within $2^{-b}$. Ryan's reference is ``Victor Y. Pan, Zhao Q. Chen: The Complexity of the Matrix Eigenproblem. STOC 1999: 507-516''. (I believe there is also a discussion about the relationship between the complexities of eigenvalues and matrix multiplication in the older Aho, Hopcroft and Ullman book ``The Design and Analysis of Computer Algorithms'', however, I don't have the book in front of me, and I can't give you the exact page number.)
I'm facing a problem where I want to model a GEE with a Tweedie distribution, but it's not implemented in any R package that I found. I know that GEEs and linear mixed models (LMMs) are somehow related, but I'm not an expert. It's very easy to define an LMM in Bayesian terms and carry out parameter estimation in `rStan`, for example. Is there a way to do this for GEEs as well? I'm interested in an example as well.
As far as I know, this is not possible. GEE uses estimating equations for the various moments. The benefit of this approach is that you don't have to write down a likelihood and make the assumptions therein, however this also makes it limited in terms of using Bayesian methods that require specification of a likelihood. Here is a link <https://ete-online.biomedcentral.com/articles/10.1186/s12982-015-0030-y>
There is a given sequence $a\_1,...,a\_n$ such that there are $O(n^{\frac{3}{2}})$ inversions in this sequence. I am thinking about a sorting algorithm for that. I know a lower bound for the number of comparisons - it is $\Omega(n)$ - otherwise, there would be a minimum-finding algorithm faster than $O(n)$. Nevertheless, I don't have an idea how to sort it in linear time. What do you think? An inversion is a pair $(i, j)$ such that $i < j$ and $a\_i > a\_j$.
You can't sort it in linear time; $\Theta(n \log n)$ comparisons are still needed in the worst case. Suppose you have $n$ items and divide them into $\sqrt{n}$ consecutive blocks of $\sqrt{n}$ items each, and consider inputs in which every element of a block is smaller than every element of the next block, but each block is internally in arbitrary order. There are $\left((\sqrt{n})!\right)^{\sqrt{n}}$ such inputs, so any comparison-based sort needs at least $\log\_2 \left((\sqrt{n})!\right)^{\sqrt{n}} = \sqrt{n} \cdot \Theta(\sqrt{n} \log \sqrt{n}) = \Theta(n \log n)$ comparisons to tell them apart. And it's easy to see that such a sequence can't have more than $n^{3/2}$ inversions, since every inversion lies within a single block, and each of the $\sqrt{n}$ blocks contributes at most $\binom{\sqrt{n}}{2} < n$ of them.
Stable Marriage Problem: <http://en.wikipedia.org/wiki/Stable_marriage_problem> I am aware that for an instance of a SMP, many other stable marriages are possible apart from the one returned by the Gale-Shapley algorithm. However, if we are given only $n$ , the number of men/women, we ask the following question - Can we construct a preference list that gives the maximum number of stable marriages? What is the upper bound on such a number?
An upper bound on the maximum number of stable matchings for a Stable Marriage instance is given in my Master's thesis and it is extended to the Stable Roommates problem as well. The bound is of magnitude $O(n!/2^n)$ and it can be shown that it is actually of magnitude $O\left((n!)^\frac{2}{3}\right)$. The document is thesis number 97 on page <http://mpla.math.uoa.gr/msc/>
I was reading [this article](http://www.paulgraham.com/avg.html). The author talks about "The Blub Paradox". He says programming languages vary in power. That makes sense to me. For example, Python is more powerful than C/C++. But its performance is not as good as that of C/C++. Is it always true that more powerful languages must **necessarily** have lesser **possible** performance when compared to less powerful languages? Is there a law/theory for this?
**TL;DR:** Performance is a factor of **Mechanical Sympathy** and **Doing Less**. Less *flexible* languages are generally doing less and being more mechanically sympathetic, hence they generally perform better *out of the box*. ### Physics Matter As Jorg mentioned, CPU designs today co-evolved with C. It's especially telling for the x86 instruction set which features SSE instructions specifically tailored for NUL-terminated strings. Other CPUs could be tailored for other languages, and that may give an edge to such other languages, but regardless of the instruction set there are some hard physics constraints: * The size of transistors. The latest CPUs feature 7nm, with 5nm being experimental. Size immediately places an upper bound on density. * The speed of light, or rather the speed of electricity in the medium, places on an upper bound on the speed of transmission of information. Combining the two places an upper bound on the size of L1 caches, in the absence of 3D designs – which suffer from heat issues. **Mechanical Sympathy** is the concept of designing software with hardware/platform constraints in mind, and essentially to play to the platform's strengths. Language Implementations with better Mechanical Sympathy will outperform those with lesser Mechanical Sympathy on a given platform. A critical constraint today is being cache-friendly, notably keeping the working set in the L1 cache, and typically GCed languages use more memory (and more indirections) compared to languages where memory is manually managed. ### Less (Work) is More (Performance) There's no better optimization than removing work. A typical example is accessing a property: * In C `value->name` is a single instruction (`lea`). * In Python or Ruby, the same typically involves a hash table lookup. The `lea` instruction is executed in 1 CPU cycle, an optimized hash table lookup takes at least 10 cycles. ### Recovering performance Optimizers, and JIT optimizers, attempt to recover the performance left on the table. I'll take the example of two typical optimizations for JavaScript code: * NaN-tagging is used to store a double OR a pointer in 8 bytes. At run-time, a check is performed to know which is which. This avoids boxing doubles, eliminating a separate memory allocation and an indirection, and thus is cache-friendly. * The V8 VM optimizes dynamic property lookups by creating a C-like struct for each combination of properties on an object, hence going from hash table lookup to type-check + `lea` – and possibly lifting the type-check much earlier. Thus, to *some extent*, even highly flexible languages can be executed efficiently... so long as the optimizer is smart enough, or the developer makes sure to massage the code to just hit the optimizer's sweet spot. ### There is no faster language... ... there are just languages that are easier to write fast programs in. 
I'll point to a series of three blog articles from 2018: * Nick Fitzgerald explained how he sped up a JS library by writing the core algorithm in Rust and compiling it to WebAssembly: <https://hacks.mozilla.org/2018/01/oxidizing-source-maps-with-rust-and-webassembly/> * Vyacheslav Egorov (V8 developer) explained how you could massively speed up the JS library by making sure to hit V8's sweet spots (and some algorithmic improvements): <https://mrale.ph/blog/2018/02/03/maybe-you-dont-need-rust-to-speed-up-your-js.html> * Nick concluded with *Speed without Wizardry*: with a less flexible language (Rust) there was no need for expert knowledge, nor for tuning for one specific JS engine (possibly at the expense of others): <https://fitzgeraldnick.com/2018/02/26/speed-without-wizardry.html> I think the latter article is the key point. More flexible languages can be made to run efficiently *with expert knowledge and time*. This is costly, and typically brittle. The main advantage of less flexible languages – statically typed, tighter control on memory – is that they make optimizing their performance *more straightforward*. **When the language's semantics already closely match the platform sweet spot, good performance is straight out of the box.**
Why do most programming languages (if not all) lack a 2-bit data type? A **boolean** is a single bit with the values being **either** *true* or *false*. Why is there no native, corresponding type where the values can be either *true*, *false*, *neither* or *both*? This data type can of course be constructed inside the language, but I figure there would be lots of usages for a more fundamental language support for it. This data type would be very handy in, for instance: * gene programming (values being: A, C, G or T) * GUI programming (a checkbox that is: not checkable, unchecked, checked, or both/unknown) * testing (test result can be: not run, pass, fail or inconclusive) * quantum programming (values being: true, false, neither or both) Is there a scientific reason to why this data type (quad, qool, qit etc.) has been excluded?
The reason is that architectures offer the types most often used. If you look at real programs, they handle characters (mostly implemented as 8-bit bytes) and, much more often, integers (for array indices, general computation, things like file sizes). Approximations for real numbers are also important. Boolean values as such are very rarely handled (most conditions are handled as conditions in the CPU, and by conditional jumps). If you look at the types offered by C, they are a close match to the above. To give boolean values, the convention is that a zero value or a NULL pointer is false, as checking for those values is common. Then there is the consideration that shuffling values to/from memory has to be efficient, and current architectures live and die by their caches, which move multiple words at a time. Adding paths inside the CPU for handling another data size is costly too. An extreme case of C weirdness was the CRAY, on which even characters were 32 bits, in the interests of maximal performance.
I am trying to fine tune some code from a [Kaggle kernel](https://www.kaggle.com/fujisan/use-keras-pre-trained-vgg16-acc-98). The model uses pretrained VGG16 weights (via 'imagenet') for transfer learning. However, I notice there is no layer freezing of layers as is recommended in a [keras blog](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html). One approach would be to freeze the all of the VGG16 layers and use only the last 4 layers in the code during compilation, for example: ``` for layer in model.layers[:-5]: layer.trainable = False ``` Supposedly, this will use the imagenet weights for the top layers and train only the last 5 layers. What are the consequences of not freezing the VGG16 layers? ``` from keras.models import Sequential, Model, load_model from keras import applications from keras import optimizers from keras.layers import Dropout, Flatten, Dense img_rows, img_cols, img_channel = 224, 224, 3 base_model = applications.VGG16(weights='imagenet', include_top=False, input_shape=(img_rows, img_cols, img_channel)) add_model = Sequential() add_model.add(Flatten(input_shape=base_model.output_shape[1:])) add_model.add(Dense(256, activation='relu')) add_model.add(Dense(1, activation='sigmoid')) model = Model(inputs=base_model.input, outputs=add_model(base_model.output)) model.compile(loss='binary_crossentropy', optimizer=optimizers.SGD(lr=1e-4, momentum=0.9), metrics=['accuracy']) model.summary() ```
I think that the main consequences are the following: * Computation time: If you freeze all the layers but the last 5, you only need to backpropagate the gradient and update the weights of those last 5 layers. In contrast to backpropagating and updating the weights of all the layers of the network, this means a huge decrease in computation time. For this reason, if you unfreeze the whole network, in the same amount of training time you will only be able to show the data to the model for fewer epochs than if you were to update only the last layers' weights. * Accuracy: Of course, by not updating the weights of most of the network you are only optimizing in a subset of the feature space. If your dataset is similar to any subset of the ImageNet dataset, this should not matter a lot, but if it is very different from ImageNet, then freezing will mean a decrease in accuracy. If you have enough computation time, unfreezing everything will allow you to optimize in the whole feature space, allowing you to find better optima. To wrap up, I think that the main point is to check if your images are comparable to the ones in ImageNet. In this case, I would not unfreeze many layers. Otherwise, unfreeze everything but get ready to wait for a long training time.
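For concreteness, here is a sketch of the question's model with the convolutional base frozen, reusing the same imports and arguments as the question's code (so whether `lr=` or `learning_rate=` is accepted depends on your Keras version):

```python
from keras import applications, optimizers
from keras.models import Sequential, Model
from keras.layers import Flatten, Dense

base_model = applications.VGG16(weights='imagenet', include_top=False,
                                input_shape=(224, 224, 3))

# freeze the pretrained convolutional base so only the new head is trained
for layer in base_model.layers:
    layer.trainable = False

add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
add_model.add(Dense(256, activation='relu'))
add_model.add(Dense(1, activation='sigmoid'))

model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])
model.summary()   # "Trainable params" should now cover only the dense head
```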
I just read in *Effective Java* about the `hashCode` method:

> 1. Store some constant nonzero value, say, 17, in an int variable called result.
> 2. For each significant field f in your object (each field taken into account by the equals method, that is), do the following:
>
> **a.** Compute an int hash code c for the field.
>
> **b.** Combine the hash code c computed in step 2.a into result as follows: `result = 31 * result + c;`
>
> A nonzero initial value is used in step 1 so the hash value will be affected by initial fields whose hash value, as computed in step 2.a, is zero.
>
> If zero were used as the initial value in step 1, the overall hash value would be unaffected by any such initial fields, which could increase collisions. The value 17 is arbitrary. The multiplication in step 2.b makes the result depend on the order of the fields, yielding a much better hash function if the class has multiple similar fields.

And that's their implementation of `hashCode`:

```
@Override public int hashCode() {
    int result = 17;
    result = 31 * result + areaCode;
    result = 31 * result + prefix;
    result = 31 * result + lineNumber;
    hashCode = result;   // caches the computed value in a field
    return result;
}
```

My questions are below:

1. How does the initial value 17 help us decrease collisions?
2. How does multiplying by 31 help us decrease collisions?
3. Why do we use 17 and 31? I know they are prime numbers. Is that why we use them?

I tried to come up with an example of a collision using a class with two and three integer fields, but didn't succeed. I'd be glad if you gave me a simple example of a collision.
If the hashed codes are $x\_1,\ldots,x\_n$ (in that order), then the resulting hash value is $$ x\_n + 31 x\_{n-1} + 31^2 x\_{n-2} + \cdots + 31^{n-1} x\_1 + 31^n \cdot 17 \pmod{2^{32}}, $$ assuming `int` is 32 bits, and ignoring signed/unsigned distinctions. In view of this, the integers 17 and 31 should satisfy the following two properties: 1. 17 should be non-zero. This way, the hash depends on the input length. Otherwise it is completely arbitrary. 2. 31 should be relatively prime to $2^{32}$ (i.e., odd) and have a high order modulo $2^{32}$ (that is, the minimal $m$ such that $31^m \equiv 1 \pmod{2^{32}}$); this way the multiplicands $31,31^2,\ldots$ are all different for the longest time possible. It turns out that the order of $31$ modulo $2^{32}$ is only $2^{27}$, so 31 is not an optimal choice. Its being prime makes no difference – we only care about its oddness and order. In any case, this hash can be improved on – for example, the `python` hash function uses XOR instead of addition, thus making the resulting calculation non-algebraic.
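A quick numerical check of that closed form, as a Python sketch: the field values are arbitrary, and the `% 2**32` reduction stands in for Java's 32-bit overflow (ignoring sign, as above).

```python
def iterative_hash(fields, seed=17, mult=31):
    """The Effective Java recipe: result = 31*result + c for each field."""
    result = seed
    for c in fields:
        result = (mult * result + c) % 2**32
    return result

def closed_form(fields, seed=17, mult=31):
    """x_n + 31*x_{n-1} + ... + 31^(n-1)*x_1 + 31^n * 17   (mod 2^32)."""
    n = len(fields)
    total = seed * mult**n + sum(c * mult**(n - i) for i, c in enumerate(fields, 1))
    return total % 2**32

fields = [212, 555, 8765]   # e.g. areaCode, prefix, lineNumber
assert iterative_hash(fields) == closed_form(fields)
print(iterative_hash(fields))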
Given two NP-Complete languages A and B, show that the language: $L = A\bigoplus B =\{a\bigoplus b \mid a \in A, b \in B, |a|=|b|\}$ is not necessarily NP-Complete. Remember that $a\bigoplus b$, when $|a|=|b|$, gives 0 in each position where the two strings have the same digit and 1 otherwise. Example: 0110$\bigoplus$ 0101 = 0011. I tried to solve this in my exam but was not very accurate. Help needed.
$A\bigoplus B$ can be empty (and thus definitely not $NP$-complete) if we pick $A$ and $B$ carefully. For instance, if all strings in $A$ have even length and all strings in $B$ have odd length then $A\bigoplus B$ is empty. It is fairly easy to modify any $NP$-complete language $L$ to have either only even (resp., odd) strings: let $L'$ be the language consisting of strings formed by prepending $1$ to any odd (resp., even) length string from $L$ and prepending $01$ to any even (resp., odd) length string from $L$. Note that we can easily recover the original language $L$ from $L'$ by noting that any string that was originally of odd length starts with a $1$ (and thus we must drop the first character), and any string that was originally of even length starts with $01$ (and thus we must drop the first two characters).
I realise there are other reasons why p-values will decrease with increased sample size, but I am wondering if this reason is valid at all.
I think people may be answering or commenting on a question you didn't intend. @ttnphns is correct (as usual) that the bias does decrease with increasing sample size. But this decrease is pretty small. $\sqrt{\frac{n-1}{n-2}}$ is small. If n goes from 10 to 100, this goes from $\sqrt{.9}$ to $\sqrt{.99}$ or from 0.949 to 0.995. If n is larger, the decrease is even smaller. It is also true that the p value need not go down, as @Deathkill14 points out. However, *in general* the reason the p value goes down with increasing sample size (given the same correlation) is not because of decreasing bias but because of increasing precision. A simple random sample of a population gives an estimate that is nearly unbiased, and that "nearly" becomes unimportant for reasonable n. But it gets more precise much more quickly. An approximation to the standard error of the correlation is given by $SE[r] = \frac{(1-\rho^2)^2}{\sqrt{n-1}}$ so, for $\rho^2 = .5$, when n goes from 10 to 100 (as above) this goes from 0.08 to 0.025, and this decrease continues for increasing n.
Many machine learning classifiers (e.g. support vector machines) allow one to specify a kernel. What would be an intuitive way of explaining what a kernel is? One aspect I have been thinking of is the distinction between linear and non-linear kernels. In simple terms, I could speak of 'linear decision functions' and 'non-linear decision functions'. However, I am not sure if calling a kernel a 'decision function' is a good idea. Suggestions?
Kernel is a way of computing the dot product of two vectors $\mathbf x$ and $\mathbf y$ in some (possibly very high dimensional) feature space, which is why kernel functions are sometimes called "generalized dot product". Suppose we have a mapping $\varphi \, : \, \mathbb R^n \to \mathbb R^m$ that brings our vectors in $\mathbb R^n$ to some feature space $\mathbb R^m$. Then the dot product of $\mathbf x$ and $\mathbf y$ in this space is $\varphi(\mathbf x)^T \varphi(\mathbf y)$. A kernel is a function $k$ that corresponds to this dot product, i.e. $k(\mathbf x, \mathbf y) = \varphi(\mathbf x)^T \varphi(\mathbf y)$. Why is this useful? Kernels give a way to compute dot products in some feature space without even knowing what this space is and what is $\varphi$. For example, consider a simple polynomial kernel $k(\mathbf x, \mathbf y) = (1 + \mathbf x^T \mathbf y)^2$ with $\mathbf x, \mathbf y \in \mathbb R^2$. This doesn't seem to correspond to any mapping function $\varphi$, it's just a function that returns a real number. Assuming that $\mathbf x = (x\_1, x\_2)$ and $\mathbf y = (y\_1, y\_2)$, let's expand this expression: $\begin{align} k(\mathbf x, \mathbf y) & = (1 + \mathbf x^T \mathbf y)^2 = (1 + x\_1 \, y\_1 + x\_2 \, y\_2)^2 = \\ & = 1 + x\_1^2 y\_1^2 + x\_2^2 y\_2^2 + 2 x\_1 y\_1 + 2 x\_2 y\_2 + 2 x\_1 x\_2 y\_1 y\_2 \end{align}$ Note that this is nothing else but a dot product between two vectors $(1, x\_1^2, x\_2^2, \sqrt{2} x\_1, \sqrt{2} x\_2, \sqrt{2} x\_1 x\_2)$ and $(1, y\_1^2, y\_2^2, \sqrt{2} y\_1, \sqrt{2} y\_2, \sqrt{2} y\_1 y\_2)$, and $\varphi(\mathbf x) = \varphi(x\_1, x\_2) = (1, x\_1^2, x\_2^2, \sqrt{2} x\_1, \sqrt{2} x\_2, \sqrt{2} x\_1 x\_2)$. So the kernel $k(\mathbf x, \mathbf y) = (1 + \mathbf x^T \mathbf y)^2 = \varphi(\mathbf x)^T \varphi(\mathbf y)$ computes a dot product in 6-dimensional space without explicitly visiting this space. Another example is Gaussian kernel $k(\mathbf x, \mathbf y) = \exp\big(- \gamma \, \|\mathbf x - \mathbf y\|^2 \big)$. If we Taylor-expand this function, we'll see that it corresponds to an infinite-dimensional codomain of $\varphi$. Finally, I'd recommend an online course ["Learning from Data"](https://work.caltech.edu/telecourse.html) by Professor Yaser Abu-Mostafa as a good introduction to kernel-based methods. Specifically, lectures ["Support Vector Machines"](http://www.youtube.com/watch?v=eHsErlPJWUU&hd=1), ["Kernel Methods"](http://www.youtube.com/watch?v=XUj5JbQihlU&hd=1) and ["Radial Basis Functions"](http://www.youtube.com/watch?v=O8CfrnOPtLc&hd=1) are about kernels.
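A small numerical check of that expansion (a Python/NumPy sketch with arbitrary example vectors): the kernel value and the explicit 6-dimensional dot product agree.

```python
import numpy as np

def poly_kernel(x, y):
    """k(x, y) = (1 + x^T y)^2"""
    return (1.0 + x @ y) ** 2

def phi(v):
    """Explicit feature map corresponding to the 2-d polynomial kernel above."""
    v1, v2 = v
    return np.array([1.0, v1**2, v2**2,
                     np.sqrt(2) * v1, np.sqrt(2) * v2, np.sqrt(2) * v1 * v2])

x = np.array([0.3, -1.2])
y = np.array([2.0, 0.5])
print(poly_kernel(x, y))   # same number...
print(phi(x) @ phi(y))     # ...computed via the explicit 6-d feature space
```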
I have a dataset where a set of people donated for charity along with the dates of the donation. I have to find the probability of each donor donating in the next three months. Data is available from August 2014 - February 2016. I have to predict the probability of each person donating for March-June 2016. Any help would be appreciated? Below is a snapshot of the data ``` id date amount 1 13-08-14 2485 1 21-11-14 2105 1 17-09-15 1359 2 13-08-14 2542 2 20-04-15 1276 2 12-10-15 2694 3 20-11-14 3556 4 28-07-15 3383 5 13-08-14 1698 5 11-12-14 1725 5 09-06-15 1376 5 17-09-15 3230 ``` Regards
Please double-check if this is the only data you have, because all you have is a single predictor, `date`. If this is indeed your only data source, then you only have a single predictor, and your independent variable is continuous. Now, you should plot `date` vs `amount` and fit a simple linear regression. Does the fit look good? Only you can tell, because we don't have the full data set. If it's not a good fit, look at the plot and ask yourself: does this look like a curve? If so, you might want to fit a spline curve or something like that. You should also check the autocorrelation. This makes sense because your data look like a time series (you'll need to check it yourself). If this is the case, you might want to consider MA and ARCH models. It's not possible for us to give you accurate advice because we don't know your data.
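If it helps, here is a rough Python/pandas/statsmodels sketch of those three steps (plot, simple linear fit, residual autocorrelation), using only the rows shown in the question and pooling all donors together; treat it purely as a starting point.

```python
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.graphics.tsaplots import plot_acf

rows = [(1, "13-08-14", 2485), (1, "21-11-14", 2105), (1, "17-09-15", 1359),
        (2, "13-08-14", 2542), (2, "20-04-15", 1276), (2, "12-10-15", 2694),
        (3, "20-11-14", 3556), (4, "28-07-15", 3383), (5, "13-08-14", 1698),
        (5, "11-12-14", 1725), (5, "09-06-15", 1376), (5, "17-09-15", 3230)]
df = pd.DataFrame(rows, columns=["id", "date", "amount"])
df["date"] = pd.to_datetime(df["date"], format="%d-%m-%y")
df = df.sort_values("date")

t = (df["date"] - df["date"].min()).dt.days      # days since the first donation
fit = sm.OLS(df["amount"], sm.add_constant(t)).fit()
print(fit.summary())

plt.scatter(t, df["amount"])
plt.plot(t, fit.fittedvalues)
plt.show()

plot_acf(fit.resid, lags=5)                      # residual autocorrelation
plt.show()
```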
In the following paper found [here](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.310.145&rep=rep1&type=pdf) and reference below, the author suggests that "if the model is true or close to true, the posterior predictive p-value will almost certainly be very close to 0.5" . This is found in the very beginning of Section 6 in the paper. I am trying to interpret what is meant when he says 'model is true'. My questions are: i) Statistically what is a "true model" as said in the quote above? ii) What does a value of 0.5 mean in simple words? Gelman, A. (2013). Two simple examples for understanding posterior p-values whose distributions are far from uniform. Electronic Journal of Statistics, 7, 2595-2602.
The model is true if the data are generated according to the model you are doing inference with. In other words, the unobserved parameter is generated by the prior, and then, using that parameter draw, your observed data are generated by the likelihood. This is not the setup where you consider multiple models $M\_1, M\_2, \ldots$ and have discrete probability distributions describing the model uncertainty. A posterior predictive "p-value" of .5 means your test statistic $T(y)$ will be exactly equal to the median of the posterior predictive distribution of $T(y^{\text{rep}})$. Generally, this distribution and its median are obtained by looking at simulated data. Roughly speaking, this tells us that predictions (i.e. $T(y^{\text{rep}})$) "look like" our real data $T(y)$. If our model's predictions are "biased" to be too high, then we will get a number greater than $.5$, and if they are generally on the low side, we will get a number less than $.5$. The posterior predictive distribution is \begin{align\*} p(y^{\text{rep}} \mid y) &= \int p(y^{\text{rep}},\theta \mid y) d\theta\\ &= \int p(y^{\text{rep}}\mid \theta, y)p(\theta \mid y) d\theta\\ &= \int \underbrace{p(y^{\text{rep}} \mid \theta)}\_{\text{model}}\underbrace{p(\theta \mid y)}\_{\text{posterior}} d\theta \\ &= E\_{\theta \mid y}\left[p(y^{\text{rep}} \mid \theta) \right]. \end{align\*} Then you take this distribution and integrate over the region where $T(y^{\text{rep}})$ is greater than some calculated nonrandom statistic of the dataset $T(y)$. $$ P(T(y^{\text{rep}}) > T(y) \mid y) = \int\_{\{T(y^{\text{rep}}) : T(y^{\text{rep}}) > T(y) \}} p(y^{\text{rep}} \mid y) dy^{\text{rep}}. $$ In practice, if computing the above integral is too difficult, this means drawing parameters from the posterior, and then, using these parameters, simulating many $y^{\text{rep}}$s. For each simulated data set (of the same size as your original/real data set), you calculate $T(y^{\text{rep}})$. Then you calculate what percent of these simulated values are above your single $T(y)$ coming from your real data set. For more information, see this thread: [What are posterior predictive checks and what makes them useful?](https://stats.stackexchange.com/questions/115157/what-are-posterior-predictive-checks-and-what-makes-them-useful) Because you are assuming there is no model uncertainty, $p(y^{\text{rep}} \mid y)$ is an integral over the parameter space; not the parameter space AND the model space.
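To make the recipe in the last paragraph concrete, here is a toy Python sketch: a normal model with known variance and an effectively flat prior (so the posterior for the mean is itself normal), and an arbitrarily chosen test statistic, the sample maximum.

```python
import numpy as np

rng = np.random.default_rng(0)

# "real" data and a test statistic T(y); the sample maximum is an arbitrary choice
y = rng.normal(loc=1.0, scale=1.0, size=50)
T_obs = y.max()

# posterior draws for the mean (flat prior, known sd = 1  =>  N(ybar, 1/n))
theta = rng.normal(loc=y.mean(), scale=1.0 / np.sqrt(len(y)), size=4000)

# replicated data sets y_rep, one per posterior draw, and the same statistic
T_rep = np.array([rng.normal(loc=th, scale=1.0, size=len(y)).max() for th in theta])

ppp = np.mean(T_rep > T_obs)   # estimates P(T(y_rep) > T(y) | y)
print(ppp)                     # values near 0.5 indicate the replications resemble T(y)
```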
This may be a ridiculous question, but is it possible to have a problem that actually gets easier as the inputs grow in size? I doubt any practical problems are like this, but maybe we can invent a degenerate problem that has this property. For instance, perhaps it begins to "solve itself" as it gets larger, or behaves in some other bizarre way.
Clearly, from a pure mathematical, purely CS algorithm viewpoint this is impossible. But in fact there are several real-world examples of when scaling up your project makes it easier, many which are not intuitive to end-users. **Directions**: the longer your directions get, they can sometimes get easier. For example, if I want Google Maps to give me directions for going west 3000 miles, I could drive to the West Coast -- and would get cross-country driving instructions. But if I wanted to go 6000 miles west, I would end up with significantly simpler instructions: get on a plane from NYC to Hokkaido. Giving me a cross-country route that incorporates traffic, roads, weather, etc. is rather difficult algorithmically, but telling me to get on a plane and looking up flights in a database is comparatively significantly simpler. ASCII graph of difficulty vs distance: ``` | / | / Difficulty | / ____------- | / ____---- | / ____---- --------------------------------- Distance ``` **Rendering**: say I want a render of one face and a rendering of 1000 faces; this is for a billboard ad so both final images must be 10000px by 5000px. Rendering one face realistically would be hard -- at the resolution of several thousand pixels across you have to use really powerful machines -- but for the crowd of 1000 faces each face need only be ten pixels across, and can easily be cloned! I could probably render 1000 faces on my laptop, but rendering a realistic face 10000px across would take a very long time and powerful machines. ASCII graph of difficulty vs. objects rendered, showing how difficulty of rendering n objects to an image of a set size drops off quickly but then returns slowly: ``` | - |- - _________ Difficulty | -- ______------- | ------ | --------------------------------- Objects ``` **Hardware control**: many things with hardware get much easier. "Move motor X 1 degree" is hard and/or impossible, and you have to deal with all kinds of things that you wouldn't have to deal with for "move motor X 322 degrees". **Short duration tasks:** Say you want item X to be on for (very small amount of time) every second. By increasing the amount of time that X runs, you will need less complex software as well as hardware.
You have an array of floats, for example: ``` [74.45329758275943, 501.9063197679927, 172.59563201929095, 307.1739798358187, 362.042263381624, 940.1282277740091, 577.2604546481798, 63.201270419598224, 828.8081043649505, 31.1630295128974] ``` For each float, you can either call floor() or ceiling() to convert it to the nearest integer. The goal is to solve these two questions: 1. Choose floor or ceiling on each float to produce a set of integers where sum(integers) = round(sum(floats)). 2. Of the values that solve question 1, find the floor or ceiling choice that produces min(abs(float - int)), for each float/integer pair. I *think* this is reducible to subset sum, and indicated as much during the quiz I just completed, but I wanted to check with people who have done reductions more frequently than me. * Compute round(sum(floats) and subtract this value from each float in the array. * Then for each of these "subtracted" floats, compute both floor and ceiling and put them in a new array with 2x the length of the old array. * You now need to find a subset of these values, including both positive and negative integers, that sums to 0. However, you either need to choose the floor or the ceiling value for each of the original inputs, which limits the number of subsets that need to be searched. I have not done an algorithms class in some time; is the analysis above correct, or am I missing something? Is this problem NP complete or is there a polynomial time solution?
Here is how I would solve this: Denote $A$ as the array, which has $n$ values $A\_1,\dots,A\_n$. Without loss of generality, I will assume that there is no integer in $A$ (think of how to deal with the case that there is one!) Let's start by computing $d:=round\left(\sum\_{i=1}^n A\_i\right)-\sum\_{i=1}^n{\lfloor A\_i \rfloor}$. Notice that this tells us exactly for how many values we need to choose "ceiling" instead of "floor": each time we choose "ceiling" instead of "floor", we increase the total sum by $1$. Therefore, for question $1$, the answer should be "choose any $d$ values you want to round up, and the rest to round down". For question $2$, even though we already know we need exactly $d$ "round up"s, we need to choose them carefully in order to minimize the differences. But first, I want to clarify how I understood the question: The question, as I see it, asks us to minimize $\sum\_{i=1}^n { |A\_i-R\_i| }$ where $R\_i$ is the rounded $A\_i$, and also the solution has to satisfy that $\sum\_{i=1}^n R\_i = round \left(\sum\_{i=1}^n A\_i\right)$. The latter, we already saw how to achieve. Now what's left is to choose which values to round up so that we minimize $\sum\_{i=1}^n |A\_i-R\_i|$. For this task, let us compute a value $v\_i$ for every index $1\le i\le n$, such that $v\_i:= (\lceil A\_i \rceil - A\_i) - (A\_i - \lfloor A\_i \rfloor)$. Please take a moment to understand why $v\_i$ is the extra "loss" (or "gain", if it is negative) when we choose to round $A\_i$ up instead of down. Now, we can rewrite the cost for $A\_i$ when we choose to round it up as $|A\_i-\lfloor A\_i\rfloor | +v\_i $, and when we choose to round it down, the cost is $|A\_i-\lfloor A\_i \rfloor|$. Notice how in both cases we had $|A\_i-\lfloor A\_i \rfloor |$. This means that we can rewrite the total cost as follows: $Total~cost=\sum\_{i=1}^n { |A\_i-R\_i| }=\sum\_{i=1}^n \left(|A\_i - \lfloor A\_i \rfloor| + s\_iv\_i\right)$, where $s\_i\in \{0,1\}$ is $1$ if we choose to round $A\_i$ up, and $0$ if we choose to round it down. In this sum, we cannot control the value of $|A\_i-\lfloor A\_i \rfloor |$ whatsoever, so when minimizing, all such terms disappear. This makes us now minimize the value of $\sum\_{i=1}^n s\_iv\_i$ (with exactly $d$ values of $s\_i$ being $1$, and the rest being $0$), which is equivalent to finding the $d$ smallest $v\_i$s (i.e., the $d$ values with the largest fractional parts), and choosing to round up the $A\_i$s that correspond to them - rounding down the rest of the values. --- As you can see, this is a polynomial solution to the question. The problem with your reduction is that you did it in the wrong direction. To show that a problem $L$ is NP-hard using reductions, you need to show a reduction **from** subset sum **to** $L$. What you showed is the other direction, hence the reduction doesn't imply that your problem is hard.
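For what it's worth, a direct Python translation of the procedure above (assuming, as above, that no $A\_i$ is already an exact integer):

```python
import math

def round_preserving_sum(a):
    """Round each a[i] up or down so the results sum to round(sum(a)),
    minimising the total absolute rounding error (assumes no a[i] is an integer)."""
    target = round(sum(a))
    floors = [math.floor(x) for x in a]
    d = target - sum(floors)                       # number of values to round up
    # v[i] = extra cost of rounding a[i] up instead of down
    v = [(math.ceil(x) - x) - (x - math.floor(x)) for x in a]
    up = set(sorted(range(len(a)), key=lambda i: v[i])[:d])
    return [floors[i] + (1 if i in up else 0) for i in range(len(a))]

a = [74.45329758275943, 501.9063197679927, 172.59563201929095, 307.1739798358187]
r = round_preserving_sum(a)
print(r, sum(r), round(sum(a)))   # the two sums match
```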
In statistical learning, implicitly or explicitly, one *always* assumes that the training set $\mathcal{D} = \{ \bf {X}, \bf{y} \}$ is composed of $N$ input/response tuples $({\bf{X}}\_i,y\_i)$ that are *independently drawn from the same joint distribution* $\mathbb{P}({\bf{X}},y)$ with $$ p({\bf{X}},y) = p( y \vert {\bf{X}}) p({\bf{X}}) $$ and $p( y \vert {\bf{X}})$ the relationship we are trying to capture through a particular learning algorithm. Mathematically, this i.i.d. assumption writes: \begin{gather} ({\bf{X}}\_i,y\_i) \sim \mathbb{P}({\bf{X}},y), \forall i=1,...,N \\ ({\bf{X}}\_i,y\_i) \text{ independent of } ({\bf{X}}\_j,y\_j), \forall i \ne j \in \{1,...,N\} \end{gather} I think we can all agree that this assumption is *rarely satisfied* in practice, see this related [SE question](https://stats.stackexchange.com/q/82096/109618) and the wise comments of @Glen\_b and @Luca. My question is therefore: > > Where exactly does the i.i.d. assumption becomes critical in practice? > > > **[Context]** I'm asking this because I can think of many situations where such a stringent assumption is not needed to train a certain model (e.g. linear regression methods), or at least one can work around the i.i.d. assumption and obtain robust results. Actually the *results* will usually stay the same, it is rather the *inferences* that one can draw that will change (e.g. heteroskedasticity and autocorrelation consistent HAC estimators in linear regression: the idea is to re-use the good old OLS regression weights but to adapt the finite-sample behaviour of the OLS estimator to account for the violation of the Gauss-Markov assumptions). My guess is therefore that *the i.i.d. assumption is required not to be able to train a particular learning algorithm, but rather to guarantee that techniques such as cross-validation can indeed be used to infer a reliable measure of the model's capability of generalising well*, which is the only thing we are interested in at the end of the day in statistical learning because it shows that we can indeed learn from the data. Intuitively, I can indeed understand that using cross-validation on dependent data could be optimistically biased (as illustrated/explained in [this interesting example](https://github.com/ogrisel/notebooks/blob/master/Non%20IID%20cross-validation.ipynb)). For me i.i.d. has thus nothing to do with *training* a particular model but everything to do with that model's *generalisability*. This seems to agree with a paper I found by Huan Xu et al, see "Robustness and Generalizability for Markovian Samples" [here](https://www.google.be/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwiThZWy8e_MAhXoFJoKHZXECsIQFggnMAE&url=http%3A%2F%2Fwww.ecmlpkdd2009.net%2Fwp-content%2Fuploads%2F2008%2F09%2Flearning-from-non-iid-data-theory-algorithms-and-practice.pdf&usg=AFQjCNF5eOVWeNsP5OADpQ0oNJRwzWI8uQ&sig2=bVae1fdm8HwKIFu9nOlLVg). Would you agree with that? **[Example]** If this can help the discussion, consider the problem of using the LASSO algorithm to perform a smart selection amongst $P$ features given $N$ training samples $({\bf{X}}\_i,y\_i)$ with $\forall i=1,...,N$ $$ {\bf{X}}\_i=[X\_{i1},...,X\_{iP}] $$ We can further assume that: * The inputs ${\bf{X}}\_i$ are dependent hence leading to a violation of the i.i.d. assumption (e.g. for each feature $j=1,..,P$ we observe a $N$ point time series, hence introducing temporal auto-correlation) * The conditional responses $y\_i \vert {\bf{X}}\_i$ are independent. * We have $P \gg N$. 
In what way(s) can the violation of the i.i.d. assumption pose a problem in that case, assuming we plan to determine the LASSO penalisation coefficient $\lambda$ using a cross-validation approach (on the full data set) + use a nested cross-validation to get a feel for the generalisation error of this learning strategy (we can leave the discussion concerning the inherent pros/cons of the LASSO aside, except if it is useful)?
In my opinion there are two rather mundane reasons why the i.i.d. assumption is important in statistical learning (or statistics in general). 1. Lots of behind-the-scenes mathematics depends on this assumption. If you want to prove that your learning method actually works for more than one data set, the i.i.d. assumption will crop up eventually. It is possible to avoid it, but the mathematics becomes several times harder. 2. If you want to learn something from data, you need to assume that there is something to learn. Learning is impossible if every data point is generated by a different mechanism. So it is essential to assume that something unifies the given data set. If we assume that the data is random, then this something is naturally a probability distribution, because a probability distribution encompasses all information about the random variable. So if we have data $x\_1,...,x\_n$ ($x\_i$ can be either a vector or a scalar), we assume that it comes from distribution $F\_n$: $$(x\_1,...,x\_n)\sim F\_n.$$ Here we have a problem. We need to ensure that $F\_n$ is related to $F\_m$ for different $n$ and $m$, otherwise we have the initial problem that every data point is generated differently. The second problem is that although we have $n$ data points, we basically have one data point for estimating $F\_n$, because $F\_n$ is an $n$-variate probability distribution. The simplest solution for these two problems is the i.i.d. assumption. With it, $F\_n=F^n,$ where $x\_i\sim F$. We get a very clear relationship between $F\_n$ and $F\_m$, and we have $n$ data points to estimate a single $F$. There are other ways these two problems are solved, but it is essential to note that every statistical learning method needs to solve this problem, and it so happens that the i.i.d. assumption is by far the most uncomplicated way to do it.
If compilers have an inheritance of compilers (for example, a compiler needs to be compiled by another compiler, and that compiler needs to be compiled by another compiler, and so on), then if there is an inefficiency in one of the compilers up the line of inheritance (like the compiler's compiler's compiler's compiler), would the current compiler inherit that inefficiency?
No, because a compiler's running time is irrelevant to the quality of the code it produces. The inefficiency would not propagate: it only slows down compilation once, in the flawed compiler; there is no possibility of it making the next compiler slower (that compiler will be essentially the same one). Let me give you an analogy: you have one recipe, a chain of writers and one cook. The writers pass the recipe from one to another, and their responsibility is to change words according to a given dictionary. None of them makes mistakes. But you can swap one of the writers for a slower-writing one. The cook always gets the same recipe; the only difference is time, and the delay is caused by this inefficient writer (not by their successors). You should read this thread: [C compiler](https://cs.stackexchange.com/questions/45486/how-can-a-language-whose-compiler-is-written-in-c-ever-be-faster-than-c/45494), there are some awesome answers related to your question.
When we wanted to construct a PDA for $0^n1^n$, the idea was to push all the zeroes (the first part of the input string) onto the stack associated with the PDA, and then pop one of them for each $1$ we read in the latter part of the input. But when we try to prove that we can create a PDA for a given CFG, we put nonterminals and terminals on the stack, try to match them with the input, and pop from the stack. Why do we do something like this? For one problem we push part of the input onto the stack and match the rest against it, while for the other we do not push any input symbols at all and only use the input to compare against the stack. Maybe I am missing some intuitive part of it.
Your question is, in essence, > > Why does the proof that every context-free grammar can be converted to an equivalent PDA proceed in a particular way rather than in another way? > > > It's hard to answer such a question unless it gets more specific. For example, you can ask why the resulting PDA have to invoke nondeterminism. The answer is that some context-free languages cannot be accepted by a DPDA. Indeed, while this particular proof uses PDAs acting in a certain way, another proof might use PDAs acting in a different way. One alternative such proof uses the [Chomsky–Schützenberger representation theorem](https://en.wikipedia.org/wiki/Chomsky%E2%80%93Sch%C3%BCtzenberger_representation_theorem). The theorem states that every context-free language can be realized as $$ h(D \cap R), $$ where $D$ is a Dyck language (the language of all correctly nested strings of parentheses of some fixed number of sorts), $R$ is a regular language, and $h$ is a homomorphism. This theorem, which can be proved directly using context-free grammars (see, for example, [Context-free languages and pushdown automata](http://www-igm.univ-mlv.fr/~berstel/Articles/1997CFLPDA.pdf) by Autebert, Berstel and Boasson, from the Handbook of Formal Languages), allows to convert a context-free grammar to a PDA which is more like the one for $0^n 1^n$, along the following lines: 1. Start with a PDA for $D$. This is a PDA that pushes whenever encountering a left parenthesis, and pops whenever encountering a right parenthesis (checking that the two parentheses, the one on the stack and the one being read, have the same type). 2. Construct a DFA/NFA for $R$, and use the product construction to construct a PDA for $D \cap R$. 3. Construct a PDA for $h(D \cap R)$ by replacing each transition on $\sigma$ to a transition on $h(\sigma)$. If you apply this construction to $0^n1^n$ (which you can realize as $D \cap 0^\*1^\*$, where $D$ is the Dyck language with a single type of left parenthesis $0$ and the corresponding right parenthesis $1$) then you get a PDA which is remarkably similar to the one you describe.
For example, I know that for ARIMA models stationarity needs to be achieved. What about Exponential Smoothing? Is it also required?
Exponential smoothing models do not assume stationary data. Citation: see [Hyndman and Athanasopoulos](https://www.otexts.org/fpp/8/10): "every ETS [exponential smoothing] model is non-stationary".
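As a small illustration, here is a Python sketch using statsmodels' Holt-Winters implementation of exponential smoothing on a simulated trending (hence non-stationary) series, with no differencing or other stationarity transformation applied:

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
y = 10 + 0.5 * np.arange(100) + rng.normal(scale=2, size=100)  # linear trend + noise

fit = ExponentialSmoothing(y, trend="add").fit()  # fitted directly to the raw series
print(fit.forecast(5))                            # forecasts continue the trend
```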
I have a set $S$, which contains $n$ real numbers, which generically are all different. Now suppose I know all the sums of its subsets; can I recover the original set $S$? I have $2^n$ data points. This is far more than $n$, the number of unknowns.
No you can't. Consider any set $S=\{a,b,c\}$ with $a+b+c=0$, and the set $S'=\{a+b,b+c,c+a\}$. The subset sums for $S$ are $0, a, b, c, a+b, b+c, c+a, a+b+c=0$. The subset sums for $S'$ are $0, a+b, b+c, c+a, a+2b+c=b, b+2c+a=c, c+2a+b = a, 2(a+b+c)=0$. Hence, you can't distinguish $S$ and $S'$ from the subset sums: $0, a, b, c, a+b, b+c, c+a, 0$. --- If all elements are non-negative, then the smallest subset sums should respectively correspond to the empty set and the singletons made up of the smallest two elements, thus you can know the smallest two elements. Once you know the smallest $k$ elements, you can know the subset sums corresponding to the subsets made up of these $k$ elements. Extract them, then the smallest subset sum should correspond to the $(k+1)$-th smallest element. Repeat the process above, you will finally get all elements.
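The reconstruction described in the second paragraph for non-negative (here: strictly positive) elements can be sketched in Python like this, treating the given sums as a multiset:

```python
from collections import Counter

def recover_elements(subset_sums):
    """Recover a set of strictly positive reals from the multiset of all 2^n subset sums.
    (With real-valued data you may need a tolerance when matching sums.)"""
    remaining = Counter(subset_sums)
    remaining[0] -= 1                 # the empty subset
    explained = [0]                   # subset sums of the elements found so far
    elements = []
    while +remaining:                 # while any unexplained sums remain
        x = min(s for s, c in remaining.items() if c > 0)   # next smallest element
        elements.append(x)
        # every previously explained subset sum s now also explains s + x
        new_sums = [s + x for s in explained]
        for s in new_sums:
            remaining[s] -= 1
        explained += new_sums
    return sorted(elements)

# example: the 2^3 subset sums of {1.5, 2.0, 4.0}
sums = [0, 1.5, 2.0, 4.0, 3.5, 5.5, 6.0, 7.5]
print(recover_elements(sums))   # [1.5, 2.0, 4.0]
```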
How do we explain the difference between logistic regression and neural network to an audience that have no background in statistics?
I assume you're thinking of what used to be, and perhaps still are referred to as 'multilayer perceptrons' in your question about neural networks. If so then I'd explain the whole thing in terms of *flexibility about the form of the decision boundary* as a function of explanatory variables. In particular, for this audience, I wouldn't mention link functions / log odds etc. Just keep with the idea that the probability of an event is being predicted on the basis of some observations. Here's a possible sequence: * Make sure they know what a predicted probability is, conceptually speaking. Show it as a function of *one* variable in the context of some familiar data. Explain the decision context that will be shared by logistic regression and neural networks. * Start with logistic regression. State that it is the linear case but show the linearity of the resulting decision boundary using a heat or contour plot of the output probabilities with *two* explanatory variables. * Note that two classes may not be well-separated by the boundary they see and motivate a more flexible model to make a more curvy boundary. If necessary show some data that would be well distinguished this way. (This is why you start with 2 variables) * Note that you *could* start complicating the original linear model with extra terms, e.g. squares or other transformations, and maybe show the boundaries that these generate. * But then discard these, observing that you don't know in advance what the function form ought to be and you'd prefer to learn it from the data. Just as they get enthusiastic about this, note the impossibility of this in complete generality, and suggest that you are happy to assume that it should at least be 'smooth' rather than 'choppy', but otherwise determined by the data. (Assert that they were probably *already* thinking of only smooth boundaries, in the same way as they'd been speaking prose all their lives). * Show the output of a generalized additive model where the output probability is a joint function of the pair of the original variables rather than a true additive combination - this is just for demonstration purposes. Importantly, call it a *smoother* because that's nice and general and describes things intuitively. Demonstrate the non-linear decision boundary in the picture as before. * Note that this (currently anonymous) smoother has a smoothness parameter that controls how smooth it actually is, refer to this in passing as being like a prior belief about smoothness of the function turning the explanatory variables into the predicted probability. Maybe show the consequences of different smoothness settings on the decision boundary. * Now introduce the neural net as a diagram. Point out that the second layer is just a logistic regression model, but also point out the non-linear transformation that happens in the hidden units. Remind the audience that this is just another function from input to output that will be non-linear in its decision boundary. * Note that it has a lot of parameters and that some of them need to be constrained to make a smooth decision boundary - reintroduce the idea of a number that controls smoothness as the *same* (conceptually speaking) number that keeps the parameters tied together and away from extreme values. Also note that the more hidden units it has, the more different types of functional forms it can realise. 
To maintain intuition, talk about hidden units in terms of flexibility and parameter constraint in terms of smoothness (despite the mathematical sloppiness of this characterization).
* Then surprise them by claiming that since you still don't know the functional form, you want to be *infinitely* flexible by adding an infinite number of hidden units. Let the practical impossibility of this sink in a bit. Then observe that this limit can be taken in the mathematics, and ask (rhetorically) what such a thing would look like.
* Answer that it would be a smoother again (a Gaussian process, as it happens; Neal, 1996, but this detail is not important), like the one they saw before. Observe that there is again a quantity that controls smoothness but no other particular parameters (integrated out, for those that care about this sort of thing).
* Conclude that neural networks are particular, implicitly limited, implementations of ordinary smoothers, which are the non-linear, not necessarily additive extensions of the logistic regression model. Then do it the other way, concluding that logistic regression is equivalent to a neural network model or a smoother with the smoothing parameter set to 'extra extra smooth', i.e. linear.

The advantage of this approach is that you don't really have to get into any mathematical detail to give the correct idea. In fact the audience doesn't have to understand either logistic regression or neural networks already to understand the similarities and differences.

The disadvantage of the approach is that you have to make a lot of pictures, and strongly resist the temptation to drop down into the algebra to explain things.
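Purely as an illustration of the pictures suggested above, here is a minimal sketch using scikit-learn and matplotlib (tools not mentioned in the answer; the data set and layer sizes are arbitrary) that draws the linear boundary of logistic regression next to the curvier boundary of a small neural network:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

models = {
    "Logistic regression (linear boundary)": LogisticRegression(),
    "Small neural net (flexible boundary)": MLPClassifier(
        hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
}

# Evaluate predicted probabilities on a grid and draw them as a heat map.
xx, yy = np.meshgrid(np.linspace(-2, 3, 200), np.linspace(-1.5, 2, 200))
grid = np.c_[xx.ravel(), yy.ravel()]

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, (name, model) in zip(axes, models.items()):
    model.fit(X, y)
    proba = model.predict_proba(grid)[:, 1].reshape(xx.shape)
    ax.contourf(xx, yy, proba, levels=20, cmap="RdBu_r", alpha=0.7)
    ax.scatter(X[:, 0], X[:, 1], c=y, cmap="RdBu_r", edgecolor="k", s=15)
    ax.set_title(name)
plt.tight_layout()
plt.show()
```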
Assume that I am a programmer and I have an NP-complete problem that I need to solve. What methods are available to deal with NPC problems? Is there a survey or something similar on this topic?
There are a number of well-studied strategies; which is best in your application depends on circumstance.

* ***Improve worst case runtime***

  Using problem-specific insight, you can often improve the naive algorithm. For instance, there are $O(c^n)$ algorithms for Vertex Cover with $c < 1.3$ [1]; this is a [*huge* improvement](http://www.wolframalpha.com/input/?i=plot+2%5Ex+vs+%281.3%29%5Ex+for+x+from+0+to+10) over the naive $\Omega(2^n)$ and might make instance sizes relevant for you tractable.

* ***Improve expected runtime***

  Using heuristics, you can often devise algorithms that are fast on many instances. If those include most that you meet in practice, you are golden. Examples are SAT for which quite involved solvers exist, and the Simplex algorithm (which solves a polynomial problem, but still). One basic technique that is often helpful is [*branch and bound*](https://en.wikipedia.org/wiki/Branch_and_bound).

* ***Restrict the problem***

  If you can make more assumptions on your inputs, the problem may become easy.

  + **Structural properties**

    Your inputs may have properties that simplify solving the problem, e.g. planarity, bipartiteness or missing a minor for graphs. See [here](https://en.wikipedia.org/wiki/Clique_problem#Special_classes_of_graphs) for some examples of graph classes for which CLIQUE is easy.

  + **Bounding functions of the input**

    Another thing to look at is [*parameterised complexity*](https://en.wikipedia.org/wiki/Parameterized_complexity); some problems are solvable in time $O(2^kn^m)$ for $k$ some instance parameter (maximum node degree, maximum edge weight, ...) and $m$ constant. If you can bound $k$ by a polylogarithmic function in $n$ in your setting, you get polynomial algorithms. [Saeed Amiri](https://cs.stackexchange.com/users/742/saeed-amiri) gives [details in his answer](https://cs.stackexchange.com/a/11773).

  + **Bounding input quantities**

    Furthermore, some problems admit algorithms that run in [*pseudo-polynomial time*](https://en.wikipedia.org/wiki/Pseudo-polynomial_time), that is their runtime is bounded by a polynomial function in a *number* that is part of the input; the naive primality check is an example. This means that if the quantities encoded in your instances have reasonable size, you might have simple algorithms that behave well for you.

* ***Weaken the result***

  This means that you tolerate erroneous or incomplete results. There are two main flavors:

  + **Probabilistic algorithms**

    You only get the correct result with some probability. There are some variants, most notably [*Monte-Carlo*](https://en.wikipedia.org/wiki/Monte-Carlo_Algorithm) and [*Las-Vegas*](https://en.wikipedia.org/wiki/Las_vegas_algorithm) algorithms. A famous example is the [*Miller-Rabin primality test*](https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test).

  + **Approximation algorithms**

    You no longer look for optimal solutions but *almost* optimal ones. Some algorithms admit relative ("no worse than double the optimum"), others absolute ("no worse than $5$ plus the optimum") bounds on the error. For many problems it is open how well they can be approximated. There are some that can be approximated arbitrarily well in polynomial time, while others are known to not allow that; check the theory of [*polynomial-time approximation schemes*](https://en.wikipedia.org/wiki/Polynomial-time_approximation_scheme).

Refer to [*Algorithmics for Hard Problems*](http://www.springer.com/computer/hardware/book/978-3-540-44134-2) by Hromkovič for a thorough treatment.

---

1.
[*Simplicity is beauty: Improved upper bounds for vertex cover*](http://facweb.cs.depaul.edu/research/techreports/TR05-008.pdf) by Chen Jianer, Iyad A. Kanj, Ge Xia (2005)
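As a concrete illustration of the "weaken the result" strategy (my addition, not part of the answer), here is the classic maximal-matching 2-approximation for Vertex Cover, an approximation algorithm with a relative error bound of 2:

```python
def vertex_cover_2approx(edges):
    """Greedily pick both endpoints of an uncovered edge; the result is a
    vertex cover at most twice the size of an optimal one."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# toy usage on a small graph
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
print(vertex_cover_2approx(edges))
```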
I'm wondering whether PDAs with more than one initial state also accept exactly the context-free languages. I found [that](https://cs.stackexchange.com/questions/24491/nfas-with-more-than-one-initial-state) question on this site about NFAs and would like to know if the answer is also valid for PDAs, i.e. if one defines a new single initial state and connects it to the former initial states using $\epsilon : \epsilon \to \epsilon$ transitions.
PDAs that are allowed to have more than one initial state (let's call them PDAIs) are computationally equivalent to conventional PDAs: * Trivially, every conventional PDA can be considered as a PDAI that happens to have one initial state. * Every PDAI can be converted to an equivalent PDA with the process you describe. So yes, PDAIs accept exactly the context free languages.
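For what it's worth, here is a tiny sketch (my own, using a made-up dictionary representation of the transition relation) of the conversion in the second bullet: add one fresh start state with $\epsilon$-transitions, reading no input and touching no stack symbol, to each of the former initial states.

```python
def single_start(transitions, initial_states, fresh="q_start"):
    """transitions maps (state, input_or_'', stack_top_or_'') to a set of
    (next_state, string_to_push) pairs; '' stands for epsilon."""
    new_transitions = dict(transitions)
    # fresh state branches nondeterministically into every old initial state
    new_transitions[(fresh, "", "")] = {(q, "") for q in initial_states}
    return new_transitions, fresh
```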
I don't understand how images are actually fed into a CNN. If I have a directory containing a few thousand images, what steps do I need to take in order to feed them to a neural network (for instance resizing, grey scale, labeling, etc)? I don't understand how the labeling of an image works. What would this dataset actually look like? Or can you not look at it at all (something like a table)?
This is a very packed question. Let's try to go through it, and I will try to provide some examples for image processing using a CNN.

Pre-processing the data
=======================

Pre-processing the data, such as resizing and greyscaling, is the first step of your machine learning pipeline. Most deep learning frameworks will require your training data to all have the same shape. So it is best to resize your images to some standard.

Whenever training any kind of machine learning model it is important to remember the bias variance trade-off. The more complex the model, the harder it will be to train it. That means it is best to limit the number of model parameters in your model. You can lower the number of inputs to your model by downsampling the images. Greyscaling is often used for the same reason. If the colors in the images do not contain any distinguishing information, then you can reduce the number of inputs by a third by greyscaling.

There are a number of other pre-processing methods which can be used depending on your data. It is also a good idea to do some data augmentation: altering your input data slightly without changing the resulting label, to increase the number of instances you have to train your model.

How to structure the data?
==========================

The shape of the variable which you will use as the input for your CNN will depend on the package you choose. I prefer using tensorflow, which is developed by Google. If you are planning on using a pretty standard architecture, then there is a very useful wrapper library named Keras which will help make designing and training a CNN very easy.

When using tensorflow you will want to get your set of images into a numpy matrix. The first dimension is your instances, then your image dimensions and finally the last dimension is for `channels`. So for example if you are using MNIST data as shown below, then you are working with greyscale images which each have dimensions 28 by 28. Then the numpy matrix shape that you would feed into your deep learning model would be (n, 28, 28, 1), where $n$ is the number of images you have in your dataset.

[![enter image description here](https://i.stack.imgur.com/4V7iv.png)](https://i.stack.imgur.com/4V7iv.png)

How to label images?
====================

For most data the labeling would need to be done manually. This is often named data collection and is the hardest and most expensive part of any machine learning solution. It is often best to either use readily available data, or to use less complex models and more pre-processing if the data is just unavailable.

---

Here is an example of the use of a CNN for the MNIST dataset.

First we load the data

```
from keras.datasets import mnist
import numpy as np

(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.

print('Training data shape: ', x_train.shape)
print('Testing data shape : ', x_test.shape)
```

> Training data shape: (60000, 28, 28)
> Testing data shape : (10000, 28, 28)

```
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.callbacks import ModelCheckpoint
from keras.models import model_from_json
from keras import backend as K
```

Then we need to reshape our data to add the channel dimension at the end of our numpy matrix. Furthermore, we will one-hot encode the labels.
So you will have 10 output neurons, where each represents a different class.

```
# The known number of output classes.
num_classes = 10

# Input image dimensions
img_rows, img_cols = 28, 28

# Channels go last for TensorFlow backend
x_train_reshaped = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test_reshaped = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)

# Convert class vectors to binary class matrices. This uses 1 hot encoding.
y_train_binary = keras.utils.to_categorical(y_train, num_classes)
y_test_binary = keras.utils.to_categorical(y_test, num_classes)
```

Now we design our model

```
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
```

Finally we can train the model

```
epochs = 4
batch_size = 128

# Fit the model weights.
model.fit(x_train_reshaped, y_train_binary,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test_reshaped, y_test_binary))
```
If someone makes a statement like below: > > "Overall, nonsmokers exposed to environmental smoke had a relative > risk of coronary heart disease of 1.25 (95 percent confidence > interval, 1.17 to 1.32) as compared with nonsmokers not exposed to > smoke." > > > What is the relative risk for the population as a whole? How many things are connected with coronary heart disease? > Of the vast > number of things that can be tested, very few actually are connected > to coronary heart disease, so the chance that any particular thing > chosen at random is connected is vanishingly small. Thus we can say > that the relative risk > for the population is 1. But the quoted interval does not contain the > value 1. So either there actually is a connection between the two > things, the probability of which is vanishingly small, or this is one > of the 5% of intervals that do not contain the parameter. As the > latter is far more likely than the former it is what we should assume. Therefore, the appropriate > conclusion is that the data set was almost certainly atypical of the > population, and thus no connection can be implied. > > > Of course, if there is some basis for assuming that more than 5% of > things are linked to coronary heart disease then there might be some > evidence in the statistic to support the suggestion that environmental > smoke is one of them. Common sense suggests that this is unlikely. > > > What is the error in their reasoning (as all health organizations agree that there is significant literature regarding the damaging effects of second-hand smoking)? Is it because of their premise that "Of the vast number of things that can be tested, very few actually are connected to coronary heart disease"? This sentence may be true for any randomly chosen factor (ie. how many dogs a person owns with the risk of coronary artery disease) but the a priori probability is much higher for second hand smoking and coronary heart disease than just 'any random factor'. Is this the correct reasoning? Or is there something else?
This is a quite interesting philosophical issue related to hypothesis testing (and thus in the frequentist setting also confidence intervals, as I explain [here](https://stats.stackexchange.com/questions/31679/what-is-the-connection-between-credible-intervals-and-bayesian-hypothesis-tests)). There are, of course, a lot of hypotheses that could be investigated - passive smoking causes coronary heart disease, drinking alcohol causes chd, owning dogs causes chd, being a Capricorn causes chd... **If we choose one of all of these hypotheses at random, the probability of us choosing a hypothesis that happens to be true is virtually zero.** This seems to be the argument in the quoted text - that it is very unlikely that we happened to test a true hypothesis. **But the hypothesis was not chosen at random.** It was motivated by previous epidemiological and medical knowledge about coronary heart disease. There are theoretical mechanisms that explain how smoking could cause coronary heart disease, so it does not seem far-fetched to think that those would work for passive smoking as well. The criticism in the quote may be valid for exploratory studies where a data set is mined for hypotheses. That is the reason that we don't accept such "discoveries" as facts - instead we require that the results can be replicated in new studies. Either way, the paper cited in the quote is a meta study and is therefore not affected by this problem. We have seen empirically over the last centuries that **testing hypotheses motivated by theory by comparing the predicted results to the observed results works.** The fact that we believe in this procedure is the reason that we have made so much progress in medicine, engineering and science. It is the reason that I can write this on my computer and that you can read it on yours. To argue that this procedure is wrong is to argue that the scientific method is fundamentally flawed - and we have **plenty of evidence that says otherwise.** I doubt that there is anything that a person who isn't willing to accept this kind of evidence actually will accept...
I'm reading Simon Peyton Jones's [*The Implementation of Functional Programming Languages*](http://research.microsoft.com/en-us/um/people/simonpj/papers/slpj-book-1987/) and there's one statement that surprised me a little bit (on page 39): > > To a much greater extent than is the > case for imperative languages, > functional languages are largely > syntactic variations of one another, > with relatively few semantic > differences. > > > Now, this was written in 1987 and my thoughts on this subject might be influenced by more modern programming languages that weren't around or popular then. However, I find this a bit hard to believe. For instance, I think that the described Miranda programming language (an early predecessor to Haskell) has much more different semantics compared to a strict language like ML than say C has to Pascal or maybe even C has to smalltalk (although I'll cede that C++ provides some validation of his point :-). But then again, I'm basing this on my intuitive understanding. Is Simon Peyton Jones largely correct in saying this, or is this a controversial point?
Simon is basically correct, from an extensional point of view. We know pretty well what the semantics of modern functional languages are, and they really are relatively small variations of one another -- they each represent slightly different translations into a monadic metalanguage. Even a language like Scheme (a dynamically typed higher-order imperative language with first-class control) has a semantics which is pretty close to ML and Haskell's. From a denotational point of view, you can start by giving a pretty simple domain equation for the semantics of Scheme -- call it $V$. People could and did solve equations like this in the late 70s/early 80s, so this isn't too bad. Similarly, there are relatively simple operational semantics for Scheme as well. (Note that when I say "Scheme", I mean untyped lambda calculus plus continuations plus state, as opposed to actual Scheme which has a few warts like all real languages do.) But to get to a category suitable for interpreting modern typed functional languages, things get quite scary. Basically, you end up constructing an ultrametric-enriched category of partial equivalence relations over this domain. (As an example, see Birkedal, Stovring, and Thamsborg's "Realizability Semantics of Parametric Polymorphism, General References, and Recursive Types".) People who prefer operational semantics know this stuff as step-indexed logical relations. (For example, see Ahmed, Dreyer and Rossberg's "State-Dependent Representation Independence".) Either way, the techniques used are relatively new. The reason for this mathematical complexity is that we need to be able to interpret parametric polymorphism and higher-order state at the same time. But once you've done this, you're basically home free, since this construction contains all the hard bits. Now, you can interpret ML and Haskell types via the usual monadic translations. ML's strict, effectful function space `a -> b` translates to $\left<a\right> \to T\left<b\right>$, and Haskell's lazy function space translates to $\left<a\right> \to \left<b\right>$, with $T(A)$ the monadic type of side effects interpreting the IO monad of Haskell, and $\left<a\right>$ is the interpretation of the type ML or Haskell type `a`, and $\to$ is the exponential in that category of PERs. So as far as the equational theory goes, since these languages can both be described by translations into slightly different subsets of the same language, it is entirely fair to call them syntactic variations of one another. The difference in feel between ML and Haskell actually arises from the intensional properties of the two languages -- that is, execution time and memory consumption. ML has a compositional performance model (i.e., the time/space cost of a program can be computed from the time/space costs of its subterms), as would a true call-by-name language. Actual Haskell is implemented with call-by-need, a kind of memoization, and as a result its performance is not compositional -- how long an expression bound to a variable takes to evaluate depends on whether it has been used before or not. This is not modelled in the semantics I alluded to above. If you want to take the intensional properties more seriously, then ML and Haskell do start to show more serious differences. It is still probably possible to devise a common metalanguage for them, but the interpretation of types will differ in a much more systematic way, related to the proof-theoretic idea of *focusing*. One good place to learn about this is Noam Zeilberger's PhD thesis.
I hope this question makes sense, but I was wondering if there are other models of computation similar to lambda calculus that you can use to build up axiomatic mathematical and logical fundamentals like numbers, operators, arithmetic functions and such?
Yes, there are many models of computation, and of those many are extensions or modifications of the $\lambda$-calculus. You may wish to learn about [partial combinatory algebras](https://ncatlab.org/nlab/show/partial+combinatory+algebra), which are very general models of computation, of which the $\lambda$-calculus is an example. Every partial combinatory algebra has a $\lambda$-like notation for defining functions. They encompass examples such as: $\lambda$-calculus, Turing machines, Turing machines with oracles, topological models of computation, [PCF](https://en.wikipedia.org/wiki/Programming_Computable_Functions), etc.

Many models of computation can be seen as extensions or modifications of the $\lambda$-calculus. Let us consider the [simply typed $\lambda$-calculus](https://en.wikipedia.org/wiki/Simply_typed_lambda_calculus) augmented with various features:

1. [System T](https://en.wikipedia.org/wiki/Dialectica_interpretation) is $\lambda$-calculus with natural numbers and primitive recursion. It is *not* Turing complete.
2. The aforementioned [PCF](https://en.wikipedia.org/wiki/Programming_Computable_Functions) is $\lambda$-calculus extended with natural numbers and general recursion. It is Turing complete.
3. There are many extensions of PCF, for instance PCF++ can perform parallel computations, PCF+`catch` can throw and catch exceptions, PCF+`quote` can disassemble code into source code, etc. These are all *different* computation models based on the $\lambda$-calculus.
4. In another direction, we can extend the $\lambda$-calculus with more powerful types, for instance [System F](https://en.wikipedia.org/wiki/System_F) is a $\lambda$-calculus with polymorphic types. It is quite powerful but not Turing complete.

All of this is just scratching the surface of a rich theory of models of computation and functional programming languages (which is what extensions of the $\lambda$-calculus are, more or less). If you are interested in the topic, you can start by reading some [books on the principles of programming languages](https://www.cis.upenn.edu/~bcpierce/tapl/) (for practical aspects), or some books on [realizability theory](https://www.elsevier.com/books/realizability/van-oosten/978-0-444-51584-1) (for theoretical aspects).
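As a small illustration of the kind of encoding the question asks about (my addition, not from the answer), here are Church numerals and arithmetic built from nothing but functions, written in Python's lambda notation as a stand-in for the untyped $\lambda$-calculus:

```python
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))   # 5
print(to_int(mult(two)(three)))   # 6
```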
I have done a repeated-measures two-way ANOVA on my sample of data using SigmaPlot. My analysis has two factors: the first factor is the hydrogen peroxide concentration and the second factor is a time series (20 minutes, with an absorbance reading taken at every 1 minute interval). What happened was that my normality test tended to fail, and sometimes even my equal-variance test failed. SigmaPlot then suggested I proceed with the Holm-Šídák test, which I have never used or come across in any analysis. So I need advice on how, from the beginning, I can collect data (how many samples or replicates) so that my data pass a normality test and I can use Tukey or another commonly used post hoc test. I would like to run a new set of experiments. How should I proceed?
There are two general misconceptions in your approach. First: data for linear models do not have to be normally distributed. Residuals preferably should be, but with large numbers ANOVA and ANCOVA get quite resistant to deviations. Second: no variables in real life are perfectly normally distributed, and normality tests will therefore fail with all real data, given a large enough sample. Data can be normal enough for any practical purpose and still show significance in normality tests. Collecting more data does not usually make them more normally distributed, but it makes t-tests, ANCOVAs and many more approaches more robust towards non-normally distributed data and residuals.
This post follows this one: [Why does ridge estimate become better than OLS by adding a constant to the diagonal?](https://stats.stackexchange.com/questions/118712/why-does-ridge-estimate-become-better-than-ols-by-adding-a-constant-to-the-diago/120073#120073) Here is my question: As far as I know, ridge regularization uses an $\ell\_2$-norm (euclidean distance). But why do we use the square of this norm? (A direct application of the $\ell\_2$ norm would give the square root of the sum of the squared betas.) As a comparison, we don't do this for the LASSO, which uses an $\ell\_1$-norm to regularize. But there it's the "real" $\ell\_1$ norm (just the sum of the absolute values of the betas, not the square of this sum). Can someone help me clarify?
There are lots of penalized approaches with all kinds of different penalty functions now (ridge, lasso, MCP, SCAD). The question of why one uses a penalty of a particular form is basically "what advantages/disadvantages does such a penalty provide?". Properties of interest might be: 1) nearly unbiased estimators (note that all penalized estimators will be biased), 2) sparsity (note that ridge regression does not produce sparse results, i.e. it does not shrink coefficients all the way to zero), 3) continuity (to avoid instability in model prediction). These are just a few properties one might want from a penalty function. It is also a lot easier to work with a sum in derivations and theoretical work: e.g. $||\beta||\_2^2=\sum |\beta\_i|^2$ and $||\beta||\_1 = \sum |\beta\_i|$. Imagine if we had $\sqrt{\left(\sum |\beta\_i|^2\right)}$ or $\left( \sum |\beta\_i|\right)^2$. Taking derivatives (which is necessary to show theoretical results like consistency, asymptotic normality, etc.) would be a pain with penalties like that.
I keep reading this and intuitively I can see this but how does one go from L2 regularization to saying that this is a Gaussian Prior analytically? Same goes for saying L1 is equivalent to a Laplacean prior. Any further references would be great.
For a regression problem with $k$ variables (w/o intercept) you do OLS as $$\min\_{\beta} (y - X \beta)' (y - X \beta)$$ In regularized regression with an $L^p$ penalty you do $$\min\_{\beta} (y - X \beta)' (y - X \beta) + \lambda \sum\_{i=1}^k |\beta\_i|^p $$ We can equivalently do (note the sign changes) $$\max\_{\beta} -(y - X \beta)' (y - X \beta) - \lambda \sum\_{i=1}^k |\beta\_i|^p $$ This directly relates to the Bayesian principle of $$posterior \propto likelihood \times prior$$ or equivalently (up to an additive constant) $$\log(posterior) = \log(likelihood) + \log(prior),$$ where the negative penalty term plays the role of the log-prior. Now it is not hard to see which exponential-family distribution corresponds to which penalty type.
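To spell out the last step (my addition, not part of the original answer): an $L^2$ penalty is, up to additive constants, the negative log-density of independent Gaussian priors, and an $L^1$ penalty that of independent Laplace priors: $$ \lambda \sum\_{i=1}^k \beta\_i^2 = -\sum\_{i=1}^k \log\left( \frac{1}{\sqrt{2\pi\tau^2}}\, e^{-\beta\_i^2/(2\tau^2)} \right) + \text{const}, \qquad \tau^2 = \frac{1}{2\lambda}, $$ $$ \lambda \sum\_{i=1}^k |\beta\_i| = -\sum\_{i=1}^k \log\left( \frac{1}{2b}\, e^{-|\beta\_i|/b} \right) + \text{const}, \qquad b = \frac{1}{\lambda}. $$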
I'm trying to understand the Cook-Levin theorem proof, as it attempts to create a polynomial-time reduction from any `NP` problem to `SAT` (as presented in the book by Michael Sipser). Most requirements are absolutely clear, but I don't understand why the $\phi\_{cell}$ formula is required. Of course, a tableau with improper values, one that fails $\phi\_{cell}$, is broken and therefore will not "translate" correctly to the original NP problem on the other side of the reduction. We can easily "break" a tableau by putting a $q\_{accept}$ somewhere it shouldn't be, and the machine's language will change. But I seem to be missing something basic. A reduction says that if A reduces to B, then for each $w\in\Sigma^\* $, $w\in A$ iff $f(w)\in B$. The $f(w)$ part may be the "broken" tableau, but can we really call it $f(w)$? After all, such broken tableaus are never in the image of $f$. So, why do we care about them?
A problem is in NP if and only if it has a polynomial verifier – that's the definition of NP. All NP-complete problems are in NP – that's part of the definition – and so have a polynomial verifier. Factoring is not a decision problem, so it neither belongs to NP nor doesn't belong to NP. You can turn Factoring into a decision problem in many ways, for example asking if the $i$th bit of the $j$th factor of $n$ is equal to $b$. In that case a witness will be the list of factors of $n$. We can check that the factors multiply to $n$, and that they are all prime (since primality testing is in polynomial time). (We are using here the *unique factorization* theorem, stating that a positive integer can be factored as a product of primes in a unique way. The same property doesn't hold in some more general situations.) It is suspected that the decision problem corresponding to Factoring is *not* NP-complete, though it is certainly in NP, as the preceding paragraph shows.
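To make the verifier described above concrete, here is a rough Python sketch (my addition; it uses `sympy.isprime` as the primality check and assumes the witness is the full list of prime factors of $n$):

```python
from sympy import isprime

def verify(n, j, i, b, witness):
    """Accept iff the witness proves that the i-th bit of the j-th smallest
    prime factor of n equals b."""
    factors = sorted(witness)
    if any(not isprime(p) for p in factors):
        return False                      # some claimed factor is not prime
    prod = 1
    for p in factors:
        prod *= p
    if prod != n:
        return False                      # not a factorization of n
    return (factors[j] >> i) & 1 == b

# toy usage: 90 = 2 * 3 * 3 * 5; bit 0 of the factor at index 2 (i.e. 3) is 1
print(verify(90, 2, 0, 1, [2, 3, 3, 5]))  # True
```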
Assuming l1 and l2 cache requests result in a miss, does the processor stall until main memory has been accessed? I heard about the idea of switching to another thread, if so what is used to wake up the stalled thread?
Memory latency is one of the fundamental problems studied in computer architecture research. Speculative Execution --------------------- Speculative execution with out-of-order instruction issue is often able to find useful work to do to fill the latency during an L1 cache hit, but usually runs out of useful work after 10 or 20 cycles or so. There have been several attempts to increase the amount of work that can be done during a long-latency miss. One idea was to try to do [value prediction](http://pharm.ece.wisc.edu/mikko/oldpapers/asplos7.pdf) (Lipasti, Wilkerson and Shen, (ASPLOS-VII):138-147, 1996). This idea was very fashionable in academic architecture research circles for a while but seems not to work in practice. A last-gasp attempt to save value prediction from the dustbin of history was [runahead execution](http://users.ece.cmu.edu/~omutlu/pub/mutlu_hpca03.pdf) (Mutlu, Stark, Wilkerson, and Patt (HPCA-9):129, 2003). In runahead execution you recognize that your value predictions are going to be wrong, but speculatively execute *anyway* and then throw out all the work based on the prediction, on the theory that you'll at least start some prefetches for what would otherwise be L2 cache misses. It turns out that runahead wastes so much energy that it just isn't worth it. A final approach in this vein which may be getting some traction in industry involves creating enormously long reorder buffers. Instructions are executed speculatively based on branch prediction, but no value prediction is done. Instead all the instructions that are dependent on a long-latency load miss sit and wait in the reorder buffer. But since the reorder buffer is so large you can keep fetching instructions *if* the branch predictor is doing a decent job you will sometimes be able to find useful work much later in the instruction stream. An influential research paper in this area was [Continual Flow Pipelines](http://pages.cs.wisc.edu/~rajwar/papers/asplos04.pdf) (Srinivasan, Rajwar, Akkary, Gandhi, and Upton (ASPLOS-XI):107-119, 2004). (Despite the fact that the authors are all from Intel, I believe the idea got more traction at AMD.) Multi-threading --------------- Using multiple threads for latency tolerance has a much longer history, with much greater success in industry. All the successful versions use hardware support for multithreading. The simplest (and most successful) version of this is what is often called FGMT (*fine grained multi-threading*) or *interleaved multi-threading*. Each hardware core supports multiple thread contexts (a *context* is essentially the register state, including registers like the instruction pointer and any implicit flags registers). In a fine-grained multi-threading processor each thread is processed *in*-order. The processor keeps track of which threads are stalled on a long-latency load miss and which are ready for their next instruction and it uses a simple FIFO scheduling strategy on each cycle to choose which ready thread to execute that cycle. An early example of this on a large scale was Burton Smith's HEP processors (Burton Smith went on to architect the Tera supercomputer, which was also a fine-grained multi-threading processor). But the idea goes much further back, into the 1960s, I think. FGMT is particularly effective on streaming workloads. All modern GPUs (graphics processing units) are multicore where each core is FGMT, and the concept is also widely used in other computing domains. 
Sun's T1 was also multicore FGMT, and so is Intel's Xeon Phi (the processor that is often still called "MIC" and used to be called "Larrabee"). The idea of [Simultaneous Multithreading](http://taco.cse.tamu.edu/utsa-www/cs5513-fall07/reader/tullsen-smt.pdf) (Tullsen, Eggers, and Levy, (ISCA-22):392-403, 1995) combines hardware multi-threading with speculative execution. The processor has multiple thread contexts, but each thread is executed speculatively and out-of-order. A more sophisticated scheduler can then use various heuristics to fetch from the thread that is most likely to have useful work ([Malik, Agarwal, Dhar, and Frank, (HPCA-14:50-61), 2008](http://ipa.ece.illinois.edu/pub/Malik-2008-PaCo.pdf)). A certain large semiconductor company started using the term *hyperthreading* for simultaneous multithreading, and that name seems to be the one most widely used these days.

Low-level microarchitectural concerns
-------------------------------------

I realized after rereading your comments that you are also interested in the signalling that goes on between processor and memory. Modern caches usually allow multiple misses to be simultaneously outstanding. This is called a lockup-free cache (Kroft, (ISCA-8):81-87, 1981). (But the paper is hard to find online, and somewhat hard to read. Short answer: there's a lot of book-keeping but you just deal with it. The hardware book-keeping structure is called an MSHR (miss information/status holding register), which is the name Kroft gave it in his 1981 paper.)
I know GPT is a Transformer-based Neural Network, composed of several blocks. These blocks are based on the original Transformer's Decoder blocks, but are they exactly the same? In the original Transformer model, Decoder blocks have two attention mechanisms: the first is pure Multi Head Self-Attention, the second is Self-Attention with respect to Encoder's output. In GPT there is no Encoder, therefore I assume its blocks only have one attention mechanism. That's the main difference I found. At the same time, since GPT is used to generate language, its blocks must be masked, so that Self-Attention can only attend previous tokens. (Just like in Transformer Decoders.) Is that it? Is there anything else to add to the difference between GPT (1,2,3,...) and the original Transformer?
GPT uses an unmodified Transformer decoder, except that it lacks the encoder attention part. We can see this visually in the diagrams of the [Transformer model](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) and the [GPT model](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf): [![transformer_diagram](https://i.stack.imgur.com/IMNa7.png)](https://i.stack.imgur.com/IMNa7.png) [![gpt_diagram](https://i.stack.imgur.com/DbokL.png)](https://i.stack.imgur.com/DbokL.png) For GPT-2, this is clarified by the authors in [the paper](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf): > > [![gpt2_section2.3](https://i.stack.imgur.com/QitFo.png)](https://i.stack.imgur.com/QitFo.png) > > > There have been several lines of research studying the effects of having the layer normalization before or after the attention. For instance the ["sandwich transformer"](https://www.aclweb.org/anthology/2020.acl-main.270/) tries to study different combinations. For GPT-3, there are further modifications on top of GPT-2, also explained in [the paper](https://arxiv.org/abs/2005.14165): > > [![gpt3_section2.1](https://i.stack.imgur.com/fJPdm.png)](https://i.stack.imgur.com/fJPdm.png) > > >
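For readers who want to see the difference in code, here is a minimal PyTorch sketch (my own illustration, not the actual GPT implementation; the layer sizes are made up) of a GPT-style block: a Transformer decoder block with the encoder cross-attention removed, keeping masked self-attention plus the feed-forward network in the pre-norm arrangement mentioned for GPT-2:

```python
import torch
import torch.nn as nn

class GPTBlock(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x):
        T = x.size(1)
        # causal mask: position t may only attend to positions <= t
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        # pre-norm: layer norm applied before each sub-block, residual after
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + a
        x = x + self.mlp(self.ln2(x))
        return x

x = torch.randn(2, 5, 768)            # (batch, sequence, embedding)
print(GPTBlock()(x).shape)            # torch.Size([2, 5, 768])
```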
In my notes is represented like this: [![enter image description here](https://i.stack.imgur.com/7scGH.png)](https://i.stack.imgur.com/7scGH.png) From wikipedia: > > In computer science, an inverted index (also referred to as a postings > file or inverted file) is a database index storing a mapping from > content, such as words or numbers, to its locations in a table, or in > a document or a set of documents. > > > **Where is the inversion** with respect to a normal index (like the one found at the end of books)? > > Index: (in a book or set of books) an alphabetical list of names, > subjects, etc. with reference to the pages on which they are > mentioned. > > > **Am I missing something?** Like a nuance of the meaning of the name due to the fact that english is not my main language.
The reason why we use the term "inverted index" is that the term "index" came to computer science first. In fact, it has several common meanings in computer science, but in this case it refers to the more general concept of an efficient lookup data structure for a database. What we call an "inverted index" is, strictly speaking, an inverted *file* used as a database index. "Inverted file" is the data structure, and "index" is the use to which it is put. A B-tree data structure, similarly, can be put to more uses than just database indexing, but it makes sense to talk of a "B-tree index". The index in a book is not the only kind of text index. [*Strong's Concordance*](https://en.wikipedia.org/wiki/Strong%27s_Concordance), which is considered an important ancestor of modern full-text search, is a *permuted* index (specifically, a variant known today as a [KWIC](https://en.wikipedia.org/wiki/Key_Word_in_Context) index). The inverted file is not the only data structure that can be used for text/string indexing. [Suffix arrays](https://en.wikipedia.org/wiki/Suffix_array) and [Burrows-Wheeler indexes](https://en.wikipedia.org/wiki/FM-index) are commonly used for strings that don't need linguistic analysis such as indexing DNA or RNA sequences. Some of these index variants have efficient partial match queries. The signature file (a probabilistic index structure, essentially Bloom filters for text search) was briefly popular, but it turned out to be [nowhere near as generally useful as its competitors](https://people.eng.unimelb.edu.au/jzobel/fulltext/acmtods98.pdf).
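As a toy illustration (my addition, not from the answer) of what an inverted file stores, here is a few-line Python sketch mapping each term to the postings list of documents that contain it:

```python
from collections import defaultdict

docs = {0: "the quick brown fox", 1: "the lazy dog", 2: "quick brown dogs"}

inverted = defaultdict(list)          # term -> postings list of doc ids
for doc_id, text in docs.items():
    for term in set(text.split()):
        inverted[term].append(doc_id)

print(inverted["quick"])              # [0, 2]
```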
The Paley-Zygmund inequality is given by \begin{equation} \operatorname{P}( Z > \theta\operatorname{E}[Z] ) \ge (1-\theta)^2 \frac{\operatorname{E}[Z]^2}{\operatorname{E}[Z^2]} \end{equation} I want to prove it. I found [this](https://en.wikipedia.org/wiki/Paley%E2%80%93Zygmund_inequality) Wikipedia article on the Paley-Zygmund inequality which states that I can decompose the random variable $X$ like so \begin{equation} E[Z] = E[Z\mathbf{1}\_{\{Z\leq\theta E[Z]\}}]+E[Z\mathbf{1}\_{\{Z>\theta E[Z]\}}] \end{equation} the article then states that \begin{align} E[Z\mathbf{1}\_{\{Z\leq\theta E[Z]\}}] \leq & \; \theta E[Z] \;\;\; \text{and}\\ E[Z\mathbf{1}\_{\{Z>\theta E[Z]\}}] \leq & \; \sqrt{E[Z^2]P(Z>\theta E[Z])} \end{align} Plugging this into the first equation yields \begin{align} E[Z] \leq & \; \theta E[Z] + \sqrt{E[Z^2]P(Z>\theta E[Z])} \\ \iff (1-\theta)^2 \frac{E[Z]^2}{E[Z^2]} \leq & \; P(Z>\theta E[Z]). \end{align} I have two questions: 1) The decomposition makes intuitive sense. Either way, exactly one of the conditions is true, so the equality holds. However, is there a more formal way / analytical approach to support this statement? 2) How are the upper bounds for the two expected values of the indicator variable derived? I would appreciate an answer which is kept quite simple as I am not a (pure) mathematician. /edit: Possibly should have posted this to math.stackexchange. Maybe a mod can move the question.
For the first part of your question: since $Z\geq 0$, the whole space decomposes as $\{Z\geq 0\}= \{Z\leq \theta \mathbb{E}[Z]\} \cup \{Z> \theta \mathbb{E}[Z]\}$, hence $$\mathbb{E}[Z] = \int\_{Z\geq 0} z p(z)dz = \int\_{Z\leq \theta \mathbb{E}[Z]} zp(z)dz + \int\_{Z> \theta \mathbb{E}[Z]}zp(z)dz = \mathbb{E}[Z\times 1\_{Z\leq \theta \mathbb{E}[Z]}] + \mathbb{E}[Z\times 1\_{Z>\theta \mathbb{E}[Z]}] $$ For the second part: on the event $\{Z\leq \theta \mathbb{E}[Z]\}$ the integrand is at most $\theta \mathbb{E}[Z]$, so $$\mathbb{E}[Z\times 1\_{Z\leq \theta \mathbb{E}[Z]}] = \int\_{Z\leq \theta \mathbb{E}[Z]} zp(z)dz \leq \theta \mathbb{E}[Z]\int\_{Z\leq \theta \mathbb{E}[Z]} p(z)dz = \theta \mathbb{E}[Z]\,\mathbb{P}(Z\leq \theta \mathbb{E}[Z]) \leq \theta \mathbb{E}[Z]$$ Also, since $Z$ has finite variance, $\mathbb{E}[Z^{2}]<\infty$, i.e. $Z$ is square integrable, so we can use the Cauchy–Schwarz inequality, <https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality>, in $L^{2}$: $$\mathbb{E}[Z\times 1\_{Z>\theta \mathbb{E}[Z]}] = \int z\,1\_{Z> \theta\mathbb{E}[Z]}p(z)dz \leq \left(\int z^{2}p(z)dz\right)^{1/2} \left(\int 1\_{Z> \theta \mathbb{E}[Z]}^{2}p(z)dz\right)^{1/2} =\mathbb{E}[Z^{2}]^{1/2}\,\mathbb{P}(Z> \theta \mathbb{E}[Z])^{1/2} $$
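As a quick numerical sanity check of the inequality (my addition, using an exponential random variable and $\theta = 0.5$; any non-negative $Z$ with finite variance would do):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.exponential(scale=1.0, size=1_000_000)
theta = 0.5

lhs = np.mean(z > theta * z.mean())                         # P(Z > theta E[Z])
rhs = (1 - theta) ** 2 * z.mean() ** 2 / np.mean(z ** 2)    # Paley-Zygmund bound
print(lhs, rhs, lhs >= rhs)   # the empirical probability should exceed the bound
```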
I am currently doing a Ph.D. in Theoretical Computer Science, and any research paper I encountered so far has the author's names in alphabetical order of their surnames. For example consider the most fundamental book on algorithms: "Introduction to algorithms" by Thomas Cormen, Charles Leiserson, Ronald Rivest, and Clifford Stein. Also consider the book: "Parameterized Algorithms" by Marek Cygan, Fedor V. Fomin, Lukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michal Pilipczuk and Saket Saurabh Basically take any paper in the TCS domain, all follow this pattern. This pattern is not followed in other domains where the authorship is decided based on the individual contribution of the authors. In other words, a person having the most contribution to the paper is given the first authorship. Likewise, a person with less contribution would have his/her name appear later in the list of authors. I consider this norm, fundamentally flawed. Can somebody provide a good reason as to why such a norm is followed in the TCS domain?
In theoretical computer science (and mathematics), there is also the issue that the work develops and heavily changes over time: * You state and prove a lemma. * After one week, the lemma has become irrelevant since one of your co-authors has proved a much stronger statement. * After one week, this much stronger statement also has become irrelevant since the structure of the entire approach has been simplified by another co-author. So the individual steps that the research has taken are not necessarily visible in the final paper. How do you want to assign credit to the various co-authors? How do you want to rank them? Should only authors be listed who have contributed something that explicitly shows up in the final version?
I have a client with a web site, and he wants me to help improve its conversion rate. I am currently planning an A/B test on the service page, testing the form. The estimated number of views in one month is 1,200 and the current conversion rate of the service page is 3.34%. I don't know how to calculate how many visits are required to provide 95% confidence in the results of the experiment.
You can try [Wald's approximation](http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval) of the confidence interval: $$p\pm z\sqrt{\frac{(1-p)p}{n}},$$ where $z\approx 1.96$ for 95% confidence. So, in your case the 95% confidence interval is $$\approx 3.3\%\pm 1\%$$ Solve for whatever precision you need. For instance, in your case it's clear that you should not be reporting your conversion rate as 3.34%, because your 95% precision is over 1 percentage point. You have to report your conversion rate as a 2.3-4.3% range, or as 3%. If you think your A/B test will produce a difference of more than 1% between A and B, then you're good with 1,200 visits to each of the A and B pages. If you're splitting 1,200 visits between A and B, then it's not enough to detect a 1% difference, because the confidence band at 600 visits is $\pm 1.4\%$.
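If you want to turn this around into a required sample size, here is a rough Python sketch (my addition) using the standard normal-approximation formula for comparing two proportions; the baseline rate comes from the question, and the target lifts are just illustrative:

```python
from scipy.stats import norm

def visits_per_variant(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)      # 1.96 for a two-sided 95% test
    z_b = norm.ppf(power)              # ~0.84 for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

baseline = 0.0334
for lift in (0.01, 0.02):              # absolute improvements of 1 and 2 points
    n = visits_per_variant(baseline, baseline + lift)
    print(f"detect {baseline:.2%} -> {baseline + lift:.2%}: "
          f"~{n:.0f} visits per variant")
```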
This could be a pretty basic question, I'm a little rusty on my stats knowledge. **Background:** I am monitoring website load time performance. To do so, I have a script running and capturing data points (About 400) on load time through various Agents. Every Agent is located in different geographic locations, but they measure the same steps. I would like to determine if there is statistical difference between the agents. So if one is consistently reporting slower load time performance I would like to know if its because of the Agent or not. I would include images but I need 10 reputation points and I just found out about this website. **Problem:** I have two sets of data from different agents measuring the seconds it takes a website to download, both are bell-shaped but are heavily skewed to the right. Can I still perform ANOVA to determine if there is difference, even though they are skewed? Thanks in advance
If the distributions are similar (in particular have the same variance) and the group sizes are identical (balanced design), you probably have no reason to worry. Formally, the normality assumption *is* violated and it can matter but it is less important than the equality of variance assumption and simulation studies have shown ANOVA to be quite robust to such violations *as long as the sample size and the variance are the same across all cells of the design.* If you combine several violations (say non-normality and heteroscedasticity) or have an unbalanced design, you cannot trust the F test anymore. That said, the distribution will also have an impact on the error variance and even if the nominal error level is preserved, non-normal data can severely reduce the power to detect a given difference. Also, when you are looking at skewed distributions, a few large values can have a big influence on the mean. Consequently, it's possible that two groups really have different means (in the sample and in the population) but that most of the observations (i.e. most of the test runs in your case) are in fact very similar. The mean therefore might not be what you are interested in (or at least not be all you are interested in). In a nutshell, you could probably still use ANOVA as inference will not necessarily be threatened but you might also want to consider alternatives to increase power or learn more about your data. Also note that strictly speaking the normality assumption applies to the distribution of the residuals, you should therefore look at residual plots or at least at the distribution in each cell, not at the whole data set at once.
I'm curious about your thoughts on reporting precision for sample statistics. Are there good rules of thumb for this? For example, I'm reporting on $X$ and $Y$ which have $n$ samples. Statistics of interest include the mean, standard deviation, skewness, kurtosis, and $\rm{Cov}(X,Y)$. For some $(X,Y)$, the sample size $n$ is very large. For some $(W,Z)$ the sample sizes are smaller. How should I determine the reporting precision (significant figures)? Very interested in your thoughts.
What you are getting at is how the reporting precision (the number of significant digits written out) should depend on your certainty in your result. You have more trust in the accuracy of an empirical mean for a larger sample than for a smaller one. The central limit theorem is the rigorous result that comes to my mind, at least for the difference between the empirical mean and the mean of the underlying distribution. You should compute the standard deviation of the limiting normal distribution in the CLT (estimate it from the sample variance) and divide it by the square root of the sample size; this gives a good measure of spread for the mean. In experimental scientific papers people often report measurements like this: $$\textrm{measured mean}\ \pm\ \textrm{st. dev.}$$ I would consider the standard deviation to 2 or 3 significant figures as relevant, and then I'd report the mean to include the first two digits that are covered by the order of magnitude of the st. dev. E.g. if your st. dev. is $0.3456$, then reporting the mean is meaningful to at most two decimal places. If your st. dev. is $65000$, then there is no point in writing out hundreds and lower in your mean.
I have just started to learn Cryptography. I am trying to learn "Merkle-Hellman Knapsack Cryptosystem". So, right at the beginning of the discussion, a question came in my mind: Why is it important to solve a problem in Polynomial time?
The concept of polynomial time is a theoretical concept that is supposed to capture what kinds of problems are possible to solve "in practice". In practical cryptography, the goal one is interested in is that no government body can break the system in less than a 100 years (say). However, this concept is hard to reason about, for many reasons. One reason is that the set of problems that can be solved in a 100 years depend on your hardware, budget, and so on. Moreover, even given a specific algorithm for a problem, it is hard to estimate how much time exactly it will require to run. To circumvent these difficulties, cryptography theory suggests the following idea: > > Consider a cryptosystem which depends on a parameter $n$ tending to infinity. Analyze the security of the cryptosystem in terms of the asymptotic complexity of breaking it. > > > The asymptotic complexity doesn't depend too much on the exact hardware (as long as it's not a quantum computer); if next year a new CPU is revealed running 10 times as fast, a system that requires $O(2^n)$ time for breaking will still require $O(2^n)$ time for breaking. Moreover, we make the following assumption: > > An algorithm runs "fast" if it runs in polynomial time (in $n$). > > > Polynomial time is a natural class for many reasons, the reason here being that it is the smallest class which is closed under function composition and contains superlinear functions. Putting both ideas together, we get: > > A cryptosystem depending on a parameter $n$ is secure if it cannot be broken in time polynomial in $n$. > > > Here $n$ should be a reasonable parameter. One way of enforcing it is to require the encryption procedure to run in polynomial time in $n$. This definition has led to a beautiful theory of theoretical cryptography. However, in practice polynomial time is not such a good notion, especially in cryptography, for two reasons: * Polynomial time could be very slow in practical. For example, an encryption procedure running in time $O(n^{10})$ or $10^{10} n$ is not practical. * Super-polynomial time could be practical, depending on the value of $n$. For example, $2^{n/2}$ is very practical for $n = 64$. In applied cryptography, therefore, one tries to give running time estimates for a particular system (a particular setting of $n$) which is not asymptotic. Usually one calculates the number of "basic operations", a concept which is not defined precisely but is a better proxy than asymptotic complexity. When possible, actual simulations are run to show that the cryptosystem can be broken with weakened parameters, and to extrapolate the time it takes to break the full cryptosystem. One problem with this approach is that it is going to be very difficult to prove that a system is secure. While this is true even for theoretical cryptography (we cannot prove the security of any cryptosystem), here it is seems that the best one can do is try to break the system and fail; proving a meaningful lower bound is probably too much to ask for. One way to circumvent this difficulty in some situations is to use oracle (or black box) models. Suppose that a cryptosystem relies on some cryptographic primitive which we trust. We can imagine replacing this primitive by an idealized component. In this case, sometimes lower bounds on the number of queries to the black box can be proved. This allows us to gauge the security of the model, making the implicit assumption that the algorithm implementing the black box is completely opaque. 
This is the approach taken by theoretical cryptography – assuming the existence of some secure cryptographic primitive (which we can't prove at the moment), it constructs cryptosystems and proves their security. However, this doesn't necessarily have practical implications, since in reality the parameter $n$ is fixed rather than tending to infinity. Even in cases in which we can rigorously prove that some cryptosystem requires $T(n)$ time to break (given some assumptions on the cryptographic primitives), usually the guarantees are very weak, and in order to get reasonably large $T(n)$, an impractically large $n$ is required. For more information, check this [classic paper](http://web.cs.ucdavis.edu/~rogaway/papers/sym-enc.pdf) on concrete security and this [notorious article](http://www.ams.org/notices/200708/tx070800972p.pdf) debunking theoretical cryptography.
The complexity class BQP corresponds to polynomial time quantum subroutines taking in classical inputs and spitting out a probabilistic classical output. Quantum advice modifies that to include copies of some predetermined quantum advice states but with classical inputs as usual. What is the complexity class for polynomial time quantum subroutines taking in arbitrary quantum states as inputs, with one copy only due to no cloning, and spitting out quantum states as an output?
Something you might be interested in is the notion of *quantum oracle* introduced by Aaronson and Kuperberg in [arXiv:quant-ph/0604056](http://arxiv.org/abs/quant-ph/0604056). Quoting from their paper: > > Just as a classical oracle models a subroutine to which > an algorithm has black-box access, so a quantum oracle models a quantum subroutine, which can take quantum input > and produce quantum output. > > > This doesn't directly answer your question about a definition of a complexity class that represents the model you describe. Still, the notion of quantum oracle has relevance in complexity theory: in their paper Aaronson and Kuperberg use a quantum oracle to give a separation between [QMA](http://qwiki.stanford.edu/index.php/Complexity_Zoo%3aQ#qma) and [QCMA](http://qwiki.stanford.edu/index.php/Complexity_Zoo%3aQ#qcma).
It is known that TCP is a connection *oriented* transport layer protocol of the TCP/IP suite. But TCP (and UDP) operate over a connection *less* network layer protocol, the IP (Internet Protocol). What this means is that in TCP, if a sender sends data to the receiver, then at the network layer the receiver will accept the packet without bothering about the source IP address of the packet. The receiver's network layer then strips off the IP header from the received packet (and the IP address information of the source and the destination gets stripped along with it), and passes the remaining segment to the transport layer. Now the question arises: since the IP address information has been stripped away from the *packet* before forwarding it to the transport layer, how does the TCP sitting at the transport layer manage to decide from which IP address the information has come, when the only address details the transport header contains are the source and destination port numbers of the processes running on the respective hosts?
I remember a counter-example from the 1980s: OS-9/68000 was quite popular then: a multi-user, multi-processing real-time operating system for the Motorola 68K processor family, loosely patterned after UNIX. It didn't require any of the features from your list. The 68000 didn't have address translation. OS-9 compilers produced position-independent code that could run from any address, using only relative branches (N bytes forward or backward) instead of absolute jumps (to a given address X). Static memory was always addressed relative to the OS-provided data start address. Dynamically-allocated memory always came at "random" addresses, so applications always addressed dynamically-allocated data relative to an OS-provided address. DMA disk transfer was typically used in OS-9 device drivers, but that was not necessary for the multi-user feature, only to avoid the blocking time of programmed I/O that would otherwise negatively affect the real-time capabilities. If I remember correctly (wrong! Thanks, @davidbak), the 68000 itself didn't have a privileged mode, so the protection against evil code was limited. There was an optional memory management unit chip available that OS-9 could use for protection purposes. OS-9 typically ran without demand paging. When physical memory was exhausted, new memory allocation attempts simply failed (demand paging doesn't fit nicely with a real-time OS). I even know a few Windows servers today where the admins completely switched off the pagefile feature. But don't misunderstand me: the features you listed make perfect sense for multi-user, multi-processing operating systems, I just wanted to point out that none of them is strictly necessary.
I read about two versions of the loss function for logistic regression, which of them is correct and why? 1. From [*Machine Learning*, Zhou Z.H](http://download.csdn.net/tag/%E5%91%A8%E5%BF%97%E5%8D%8E) (in Chinese), with $\beta = (w, b)\text{ and }\beta^Tx=w^Tx +b$: $$l(\beta) = \sum\limits\_{i=1}^{m}\Big(-y\_i\beta^Tx\_i+\ln(1+e^{\beta^Tx\_i})\Big) \tag 1$$ 2. From my college course, with $z\_i = y\_if(x\_i)=y\_i(w^Tx\_i + b)$: $$L(z\_i)=\log(1+e^{-z\_i}) \tag 2$$ --- I know that the first one is an accumulation of all samples and the second one is for a single sample, but I am more curious about the difference in the form of two loss functions. Somehow I have a feeling that they are equivalent.
I learned the loss function for logistic regression as follows. Logistic regression performs binary classification, and so the label outputs are binary, 0 or 1. Let $P(y=1|x)$ be the probability that the binary output $y$ is 1 given the input feature vector $x$. The coefficients $w$ are the weights that the algorithm is trying to learn. $$P(y=1|x) = \frac{1}{1 + e^{-w^{T}x}}$$ Because logistic regression is binary, the probability $P(y=0|x)$ is simply 1 minus the term above. $$P(y=0|x) = 1- \frac{1}{1 + e^{-w^{T}x}}$$ The loss function $J(w)$ is, for one training example, the sum of (A) the label $y$ multiplied by $\log P(y=1)$ and (B) the quantity $(1-y)$ multiplied by $\log P(y=0)$, summed over $m$ training examples. $$J(w) = \sum\_{i=1}^{m} y^{(i)} \log P(y=1) + (1 - y^{(i)}) \log P(y=0)$$ where $y^{(i)}$ indicates the $i^{th}$ label in your training data. If a training instance has a label of $1$, then $y^{(i)}=1$, leaving the left summand in place but making the right summand with $1-y^{(i)}$ become $0$. On the other hand, if a training instance has $y=0$, then the right summand with the term $1-y^{(i)}$ remains in place, but the left summand becomes $0$. Log probability is used for ease of calculation. If we then replace $P(y=1)$ and $P(y=0)$ with the earlier expressions, then we get: $$J(w) = \sum\_{i=1}^{m} y^{(i)} \log \left(\frac{1}{1 + e^{-w^{T}x}}\right) + (1 - y^{(i)}) \log \left(1- \frac{1}{1 + e^{-w^{T}x}}\right)$$ You can read more about this form in these [Stanford lecture notes](http://cs229.stanford.edu/notes/cs229-notes1.pdf).
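To make the formula concrete, here is a minimal numpy sketch that evaluates $J(w)$ directly (my own illustration with made-up data; the names are arbitrary):

```
import numpy as np

def log_likelihood(w, X, y):
    """J(w) for logistic regression with labels y in {0, 1}.

    X is an (m, d) feature matrix, w a length-d weight vector."""
    p = 1.0 / (1.0 + np.exp(-X @ w))                 # P(y=1|x) for every row
    # sum of y*log P(y=1) + (1-y)*log P(y=0), as in the formula above
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# tiny usage example with made-up data
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = np.array([0, 1, 1, 0, 1])
w = np.zeros(3)
print(log_likelihood(w, X, y))   # equals 5 * log(0.5) when w = 0
```

With $w = 0$ every predicted probability is $0.5$, so the sum is $m \log 0.5$, which makes a handy sanity check.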
If you analyze the same data with a t test and the nonparametric Mann-Whitney test, which do you expect to have the lower P value?
It depends. If you assume that the data are sampled from Gaussian distributions, then the t test has a bit more power (depending on sample size) so will -- on average -- have a lower P value. But only on average. For any particular set of data, the t test may give a higher or a lower P value. If you don't assume the data are sampled from Gaussian distributions, then the Mann-Whitney test may have more power (depending on how far the distribution is from Gaussian). If so, you'd expect the Mann-Whitney test to have the lower P value on average, but the results are not predictable for any particular set of data. What does "on average" mean? Perform both tests on many sets of (simulated) data. Compute the average P value from the t test, and also the average P value from the Mann-Whitney test. Now compare the two averages.
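To make the "on average" idea concrete, here is a small simulation sketch (assuming scipy is available; the sample sizes and effect size are made up):

```
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sim, n = 2000, 20
p_t, p_mw = [], []
for _ in range(n_sim):
    # two Gaussian samples with a modest true difference in means
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(0.5, 1.0, n)
    p_t.append(stats.ttest_ind(x, y).pvalue)
    p_mw.append(stats.mannwhitneyu(x, y, alternative='two-sided').pvalue)

print("mean t-test P:       ", np.mean(p_t))
print("mean Mann-Whitney P: ", np.mean(p_mw))
```

With Gaussian data like this, the average t-test P value should come out slightly lower than the average Mann-Whitney P value; swapping in a markedly non-Gaussian distribution should tend to reverse the pattern.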
I have a little history question, namely, as the title says, I am looking for early uses of *trees* (as a data structure, search tree, whatever) in computer science.
Isaiah: ""And there shall come forth a rod out of the stem of Jesse, and a Branch shall grow out of his roots" The tree as a data model for genealogical information is very ancient indeed.
I have an array of 100,000 strings, all of length $k$. I want to compare each string to every other string to see if any two strings differ by 1 character. Right now, as I add each string to the array, I'm checking it against every string already in the array, which has a time complexity of $\frac{n(n-1)}{2} k$. Is there a data structure or algorithm that can compare strings to each other faster than what I'm already doing? Some additional information: * Order matters: `abcde` and `xbcde` differ by 1 character, while `abcde` and `edcba` differ by 4 characters. * For each pair of strings that differ by one character, I will be removing one of those strings from the array. * Right now, I'm looking for strings that differ by only 1 character, but it would be nice if that 1 character difference could be increased to, say, 2, 3, or 4 characters. However, in this case, I think efficiency is more important than the ability to increase the character-difference limit. * $k$ is usually in the range of 20-40.
My solution is similar to j\_random\_hacker's but uses only a single hash set. I would create a hash set of strings. For each string in the input, add to the set $k$ strings. In each of these strings replace one of the letters with a special character, not found in any of the strings. While you add them, check that they are not already in the set. If they are then you have two strings that only differ by (at most) one character. An example with strings 'abc', 'adc' For abc we add '\*bc', 'a\*c' and 'ab\*' For adc we add '\*dc', 'a\*c' and 'ad\*' When we add 'a\*c' the second time we notice it is already in the set, so we know that there are two strings that only differ by one letter. The total running time of this algorithm is $O(n\*k^2)$. This is because we create $k$ new strings for all $n$ strings in the input. For each of those strings we need to calculate the hash, which typically takes $O(k)$ time. Storing all the strings takes $O(n\*k^2)$ space. **Further improvements** We can improve the algorithm further by not storing the modified strings directly but instead storing an object with a reference to the original string and the index of the character that is masked. This way we do not need to create all of the strings and we only need $O(n\*k)$ space to store all of the objects. You will need to implement a custom hash function for the objects. We can take the Java implementation as an example, see [the java documentation](https://docs.oracle.com/javase/6/docs/api/java/lang/String.html#hashCode()). The java hashCode multiplies the unicode value of each character with $31^{k-i}$ (with $k$ the string length and $i$ the one-based index of the character. Note that each altered string only differs by one character from the original. We can easily compute the contribution of that character to the hash code. We can subtract that and add our masking character instead. This takes $O(1)$ to compute. This allows us to bring the total running time down to $O(n\*k)$
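A short Python sketch of the single-hash-set idea (illustrative only; this is the simple version that stores the masked strings themselves, so $O(n k^2)$ time and space):

```
def has_pair_differing_by_one(strings, k):
    """Return True if some two strings (all of length k) differ in at most
    one position.  Each string contributes k masked copies to a hash set."""
    seen = set()
    for s in strings:
        for i in range(k):
            masked = s[:i] + '*' + s[i+1:]   # '*' assumed absent from the input
            if masked in seen:
                return True
            seen.add(masked)
    return False

print(has_pair_differing_by_one(['abc', 'adc', 'xyz'], 3))  # True ('abc' vs 'adc')
```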
What exactly are the parameters in GPT-3's 175 billion parameters? Are these the words in the text on which the model is trained?
The parameters in GPT-3, like any neural network, are the weights and biases of the layers. From the following table taken from the [GPT-3 paper](https://arxiv.org/pdf/2005.14165.pdf) [![GPT-3 Table](https://i.stack.imgur.com/xX1YS.png)](https://i.stack.imgur.com/xX1YS.png) there are different versions of GPT-3 of various sizes. The more layers a version has, the more parameters it has, since it has more weights and biases. Regardless of the model version, the words it was trained on are the 300 billion tokens the caption references, with what appears to be around 45 TB of data scraped from the internet.
Which language class are today's modern programming languages like Java, JavaScript, and Python in? It appears (?) they are not context-free and not regular languages. Are these programming languages context-sensitive or decidable languages? I am very confused! I know that context-free is more powerful than regular languages and that context-sensitive is more powerful than context-free. Are modern programming languages both context-free and context-sensitive?
Practically no programming language, modern or ancient, is truly context-free, regardless of what people will tell you. But it hardly matters. Every programming language can be parsed; otherwise, it wouldn't be very useful. So all the deviations from context freeness have been dealt with. What people usually mean when they tell you that programming languages are context-free because somewhere in the documentation there's a context-free grammar, is that the set of well-formed programs (that is, the "language" in the sense of formal language theory) is a *subset* of the language generated by a context-free grammar, conditioned by a set of constraints written in the rest of the language documentation. That's mostly how programs are parsed: a context-free grammar is used, which recognises all valid and some invalid programs, and then the resulting parse tree is traversed to apply the constraints. To justify describing the language as "context-free", there's a tendency to say that these constraints are "semantic" (and therefore not part of the language syntax). [Note 1] But that's not a very meaningful use of the word "semantic", since a rule like "every variable must be declared" (which is common, if by no means universal) is certainly syntactic in the sense that you can easily apply it without knowing anything about the meaning of the various language constructs. All it requires is verifying that a symbol used in some scope also appears in a declaration in an enclosing scope. However, the "also appears" part makes this rule context-sensitive. That rule is somewhat similar to the constraints mentioned in [this post about Javascript](https://stackoverflow.com/a/30698970/1566221) (linked to from one of your comments to your question): that neither a Javascript object definition nor a function parameter list can define the same identifier twice, another rule which is both clearly context-sensitive and clearly syntactic. In addition, many languages require non-context-free transformations prior to the parse; these transformations are as much part of the grammar of the language as anything else. For example: * Layout sensitive block syntax, as in Python, Haskell and many data description languages. (Context-sensitive because parsing requires that all whitespace prefixes in a block be the same length.) * Macros, as in Rust, C-family languages, Scheme and Lisp, and a vast number of others. Also, template expansion, at least in the way that it is done in C++. * User-definable operators with user-definable precedences, as in Haskell, Swift and Scala. (Scala doesn't really have user-definable precedence, but I think it is still context-sensitive. I might be wrong, though.) None of this in any way diminishes the value of context-free parsing, neither in practical nor theoretical terms. Most parsers are and will continue to be fundamentally based on some context-free algorithm. Despite a lot of trying, no-one yet has come up with a grammar formalism which is both more powerful than context-free grammars and associated with an algorithm for transforming a grammar into a parser without adding hand-written code. (To be clear: the goal I refer to is a formalism which is more powerful than context-free grammars, so that it can handle constraints like "variables must be declared before they are used" and the other features mentioned above, but without being so powerful that it is Turing complete and therefore undecidable.) ### Notes 1.
Excluding rules which cannot be implemented in a context-free grammar in order to say that the language is context-free strikes me as a most peculiar way to define context-freeness. Of course, if you remove all context-sensitive aspects of a language, you end up with a context-free superset, but it's no longer the same language.
Are there any recent books on Online Algorithms? I know of only two books on the subject. * Online Computation and Competitive Analysis by Allan Borodin and Ran El-Yaniv: This is a classic but old book, and does not contain many recent advances in the field. * [The Design of Competitive Online Algorithms via a Primal-Dual Approach](http://www.tau.ac.il/~nivb/download/pd-survey.pdf) by Niv Buchbinder and Joseph (Seffi) Naor: This is a new book and contains a lot of recent results. However, its scope is limited to LP-based primal-dual algorithms. Please list here all the books on Online Algorithms that you may know. If there are any books freely available on the web, that will be great.
It seems that there are no recent books or survey papers on online algorithms.
First, let's assume that we want to generate from a [Dirichlet](http://en.wikipedia.org/wiki/Dirichlet_distribution)(1,1,1,1) distribution. Would the following method be correct? * generate three variates from a Uniform(0,1). Call them $x\_1$, $x\_2$, $x\_3$. * then, order these such that $0 \leq x\_{(1)} \leq x\_{(2)} \leq x\_{(3)} \leq 1$ * then, return the differences as our Dirichlet variate: $(x\_{(1)}, x\_{(2)}-x\_{(1)}, x\_{(3)}-x\_{(2)}, 1-x\_{(3)})$ Is this correct? I have a feeling it is correct, but I'm not sure and this does not seem to be the same as either method described on Wikipedia or any other search I did. Maybe it is slow or has other problems, but I'm curious if it is correct. Assuming this is correct, can it be extended to non-uniform Dirichlets, such as Dirichlet(a,b,c,d)? **Extra note**: I am *not* simply asking how to generate a Dirichlet; there is plenty of information about that already. I'm just curious to see if the method for uniforms can be extended. Is there a more general method that involves drawing from a distribution, then ordering those numbers, then using the gaps?
If $Y\_i$ are independent $\mathrm{Gamma}(\alpha\_i,\beta)$, for $i=1,\dots,k$, then $$ (X\_1,\dots,X\_k) = \left(\frac{Y\_1}{\sum\_{j=1}^k Y\_j}, \dots, \frac{Y\_k}{\sum\_{j=1}^k Y\_j} \right) \sim \mathrm{Dirichlet}(\alpha\_1,\dots,\alpha\_k) \, .$$ So, in R just do something like

```
rdirichlet <- function(a) {
  y <- rgamma(length(a), a, 1)
  return(y / sum(y))
}
```

And use it uniformly

```
> rdirichlet(c(1, 1, 1, 1))
[1] 0.40186737 0.03924152 0.37070316 0.18818796
```

or non-uniformly

```
> rdirichlet(c(3, 2.5, 9, 7))
[1] 0.1377426 0.1043081 0.4701179 0.2878314
```

The proof is given on page 594 of Luc Devroye's beautiful book: <http://luc.devroye.org/rnbookindex.html> P.S. Thanks to @cardinal for the R hacking tips.
I remember sitting in stats courses as an undergrad hearing about why extrapolation was a bad idea. Furthermore, there are a variety of sources online which comment on this. There's also a mention of it [here](https://www.pmg.com/blog/googles-causal-impact-part-2-caution/). Can anyone help me understand why extrapolation is a bad idea? If it is, how is it that forecasting techniques aren't statistically invalid?
A regression model is often used for extrapolation, i.e. predicting the response to an input which lies outside of the range of the values of the predictor variable used to fit the model. The danger associated with extrapolation is illustrated in the following figure. [![graph showing extrapolated line continuing upwards where "true" value decreases](https://i.stack.imgur.com/3Ab7e.jpg)](https://i.stack.imgur.com/3Ab7e.jpg) The regression model is “by construction” an interpolation model, and should not be used for extrapolation, *unless* this is properly justified.
I want to classify a dataset of support tickets which mostly contain text in the description field and sometimes server logs in a separate field. The log field is not always there, but when it is present, it is a good indicator of the target class of the ticket. I have created a CNN-based classifier which can classify the tickets based on the log field, and an SVM classifier with TFIDF-based features for the description field. I am thinking of adding the output probabilities of the CNN classifier to the TFIDF-based SVM classifier as a feature column, to combine the models. Is there a better way to combine these models? Is there a better way to approach this problem, without having two separate models?
One common approach for your problem would be to first learn a low-dimensional representation of your output space in an unsupervised manner, and then a mapping from your inputs to the now-dimensionally-reduced outputs. To put it in a kind of pseudo-SKLearn terms, the overall procedure would look as follows: Model fitting: ``` dr = SomeDimensionalityReducer() Y_dr = dr.fit_transform(Y) m = SomeSupervisedModel() m.fit(X, Y_dr) ``` Model application: ``` dr.inverse(m.predict(X_new)) ``` For the role of `SomeSupervisedModel` you are free to choose any standard machine learning regression model, remembering that you may need to predict a vector as an output rather than a single number. In some cases (such as the neural network) it is a natural part of the model, in others it means you would need to train a separate model for each of the components in the output. The choice of the dimensionality reduction technique is a bit more tricky, as the `inverse` operation is not normally part of standard implementations, hence you might need to understand and implement it manually. Your main options are listed in the Wikipedia page on [dimensionality reduction](https://en.wikipedia.org/wiki/Dimensionality_reduction). Consider [PCA](https://en.wikipedia.org/wiki/Principal_component_analysis), [Kernel-PCA](https://en.wikipedia.org/wiki/Kernel_principal_component_analysis) and an [Autoencoder](https://en.wikipedia.org/wiki/Autoencoder) as your base choices. * PCA would result in a linear mapping and might not be powerful enough to represent the output space adequately in all but the simplest of tasks. However, it is easy to use and understand and is not too prone to overfitting. * Kernel-PCA a more flexible nonlinear model, which is still easy to implement, but it has higher memory and computational requirements and may overfit. * The Autoencoder route might be the best of the three, but, being a neural network-based method it may be fiddly and require a lot of tuning. There is a whole world of different kinds of autoencoders to choose from. Another possibility, not mentioned in the Wikipedia article above (because it is more of a "dimensionality expansion" rather than reduction method) is the [Generative Adversarial Network](https://en.wikipedia.org/wiki/Generative_adversarial_network). Of all the mentioned approaches it may be the most sophisticated and, if you are lucky and have a lot of data, may give the best results. Unfortunately, it is the fussiest of all to work with, so try other things before trying it. Note that you do not need your *inputs* `X` to perform the dimensionality reduction, hence you can "help" your method by feeding more samples from the output space without having to also obtain the corresponding inputs.
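As a concrete sketch of the above (assuming scikit-learn; PCA is used as the dimensionality reducer here simply because it ships with an `inverse_transform`, and the data are made up):

```
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# made-up data: X are inputs, Y are high-dimensional outputs
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Y = X @ rng.normal(size=(10, 50)) + 0.1 * rng.normal(size=(200, 50))

# model fitting: reduce Y, then map X -> reduced Y
dr = PCA(n_components=5)
Y_dr = dr.fit_transform(Y)
m = Ridge().fit(X, Y_dr)          # Ridge handles vector-valued targets directly

# model application: predict in the reduced space, then map back
X_new = rng.normal(size=(3, 10))
Y_pred = dr.inverse_transform(m.predict(X_new))
print(Y_pred.shape)               # (3, 50)
```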
There are $n$ people and $n$ items. For each person, there is a set of items he likes. Our goal is to give to each person a single item that he likes, i.e, find a perfect matching in the preference graph (encoding the "like" relation). In some cases, this may be done using a [*picking sequence*](https://en.wikipedia.org/wiki/Picking_sequence): order the people in a queue and let each person in turn pick a single item he likes. For example, suppose that: * $A$ likes $\{1,2\}$ * $B$ likes $\{2,3\}$ * $C$ likes $\{3\}$ Then, $\langle C,B,A\rangle$ is a good picking sequence, since $C$ necessarily picks $3$, then $B$ picks $2$, then $A$ picks $1$ and we get a perfect matching. On the other hand, $\langle C,A,B\rangle$ is not a good picking sequence, since after $C$ picks $3$, it is possible that $A$ will pick $2$, and then $B$ will remain without an item. So, my question is: If a perfect matching exists, can it always be found by a picking sequence?
Yes, it can. This can be proved by mathematical induction on $n$. Denote by $U$ the set of people. For a subset of people $P$, denote by $N(P)$ the *neighborhood* of $P$, that is the set of items that at least one person in $P$ likes. By [Hall's marriage theorem](https://en.wikipedia.org/wiki/Hall%27s_marriage_theorem#Graph_theoretic_formulation), $|P|\le |N(P)|$ for all $P$. If $|P|< |N(P)|$ for all nonempty $P\subsetneq U$, then we can let an arbitrary person pick first. After his picking, $|N(P)|$ is decreased by at most 1 for all $P$, so $|P|\le |N(P)|$ holds for all $P$ in the remaining graph. Therefore there is still a perfect matching in the remaining graph by Hall's marriage theorem, and there is a valid picking sequence by the inductive assumption. If there exists a nonempty $P\subsetneq U$ such that $|P|=|N(P)|$, then there is a perfect matching in the subgraph induced by $P\cup N(P)$, and by the inductive assumption there is a valid picking sequence for the people in $P$ in this subgraph. We can apply this picking sequence in our original graph first, and the result is the same, that is, exactly all items in $N(P)$ are picked. Note that there is still a perfect matching in the remaining graph, so we can again use the inductive assumption to complete the picking sequence.
I would like to know the asymptotic time complexity analysis for general models of back-propagation neural networks, SVM, and maximum entropy. Does it just depend on the number of features included, and is training time complexity the only thing that really matters? And does it really matter when applied to a large chunk of text-classification data, like Twitter or blog data?
SVMs contain an underlying optimization step that is solved heuristically, so for any actual algorithm that purports to solve SVMs, the answer is undefined. A number like $O(n^3)$ is generally bandied around for implementations like libsvm, which means something like time/iteration \* #iterations (where #iterations is assumed to be constant)
> > Let $X\_1$, $X\_2$, $\cdots$, $X\_d \sim \mathcal{N}(0, 1)$ and be independent. What is the expectation of $\frac{X\_1^4}{(X\_1^2 + \cdots + X\_d^2)^2}$? > > > It is easy to find $\mathbb{E}\left(\frac{X\_1^2}{X\_1^2 + \cdots + X\_d^2}\right) = \frac{1}{d}$ by symmetry. But I do not know how to find the expectation of $\frac{X\_1^4}{(X\_1^2 + \cdots + X\_d^2)^2}$. Could you please provide some hints? **What I have obtained so far** I wanted to find $\mathbb{E}\left(\frac{X\_1^4}{(X\_1^2 + \cdots + X\_d^2)^2}\right)$ by symmetry. But this case is different from that for $\mathbb{E}\left(\frac{X\_1^2}{X\_1^2 + \cdots + X\_d^2}\right)$ because $\mathbb{E}\left(\frac{X\_i^4}{(X\_1^2 + \cdots + X\_d^2)^2}\right)$ may not be equal to $\mathbb{E}\left(\frac{X\_i^2X\_j^2}{(X\_1^2 + \cdots + X\_d^2)^2}\right)$. So I need some other ideas to find the expectation. **Where this question comes from** A [question](https://math.stackexchange.com/questions/1853954/the-variance-of-the-expected-distortion-of-a-linear-transformation) in mathematics stack exchange asks for the variance of $\|Ax\|\_2^2$ for a unit uniform random vector $x$ on $S^{d-1}$. My derivation shows that the answer depends solely on the values of $\mathbb{E}\left(\frac{X\_i^4}{(X\_1^2 + \cdots + X\_d^2)^2}\right)$ and $\mathbb{E}\left(\frac{X\_i^2X\_j^2}{(X\_1^2 + \cdots + X\_d^2)^2}\right)$ for $i \neq j$. Since $$ \sum\_{i \neq j}\mathbb{E} \left( \frac{X\_i^2X\_j^2}{(X\_1^2 + \cdots + X\_d^2)^2}\right) + \sum\_i \mathbb{E}\left(\frac{X\_i^4}{(X\_1^2 + \cdots + X\_d^2)^2}\right) = 1 $$ and by symmetry, we only need to know the value of $\mathbb{E}\left(\frac{X\_1^4}{(X\_1^2 + \cdots + X\_d^2)^2}\right)$ to obtain other expectations.
The distribution of $X\_i^2$ is chi-square (and also a special case of gamma). The distribution of $\frac{X\_1^2}{X\_1^2 + \cdots + X\_d^2}$ is thereby beta. The expectation of the square of a beta isn't difficult.
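A sketch of the computation this hint points to: $X\_1^2 \sim \chi^2\_1$ and $X\_2^2 + \cdots + X\_d^2 \sim \chi^2\_{d-1}$ independently, so $B = \frac{X\_1^2}{X\_1^2 + \cdots + X\_d^2} \sim \mathrm{Beta}\left(\tfrac{1}{2}, \tfrac{d-1}{2}\right)$, and with $\alpha = \tfrac{1}{2}$, $\beta = \tfrac{d-1}{2}$, $$\mathbb{E}(B^2) = \frac{\alpha(\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)} = \frac{\tfrac{1}{2}\cdot\tfrac{3}{2}}{\tfrac{d}{2}\cdot\tfrac{d+2}{2}} = \frac{3}{d(d+2)}.$$ As a check, $d=1$ gives $1$, as it must.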
I have the following problem: within an independent-groups one-factor design I have two independent groups, with a sample size of 20 each. The data of the treatment group are not normally distributed, whereas the data for the control group are (checked with the Shapiro-Wilk normality test). Now I want to check whether the difference of the means of the two groups is significant. What is the appropriate test for this? I think it should be the Wilcoxon Rank Sum and Signed Rank Test, but I am not sure... Could anybody please help me?
If you are 100% sure that the two samples are drawn from populations with different distributions (one Gaussian, one not), are you sure you need any statistical test? You are already sure that the two populations are different. Isn't that enough? Does it really help to test for differences in means or medians? (The answer, of course, depends on your scientific goals, which were not part of the original question.)
I have a B+ tree and I want to find the record associated with a specific key Ki. So I run the [B+ tree search algorithm](https://en.wikipedia.org/wiki/B+_tree#Search). If a certain node in the search path is a leaf and K=Ki, then the record exists in the table and we can return the record associated with Ki. Since the leaf nodes have the same structure as internal nodes, how can the algorithm know whether a node is a leaf node?
Ellul, Krawetz, Shallit and Wang construct in their paper [Regular Expressions: New Results and Open Problems](https://cs.uwaterloo.ca/~shallit/Papers/re3.pdf) a regular expression of length $n$ (for infinitely many $n$) such that the shortest string missing from its language has length $2^{\Omega(n)}$. Since a regular expression of length $n$ can be converted to an NFA having $O(n)$ states, this gives, for infinitely many $n$, an NFA having $n$ states such that the shortest string not accepted by the NFA has length $2^{\Omega(n)}$. Conversely, if an NFA having $n$ states doesn't accept all strings, then it must reject some string of length shorter than $2^n$. This follows from the pumping lemma once you convert the NFA to a DFA. Hence the construction in the paper mentioned above is optimal up to the constant in the exponent.
I am running kmeans for a market research study, and I have a couple of questions: 1. Should I be standardizing my data, and if so, how? For example, one variable I have is product demand, which is measured on a seven point scale. On the other hand, I also have a variable on age, which is a very different scale. Should I be standardizing these, and how? 2. Can I use categorical variables in kmeans? Specifically, I would like to use gender and ethnicity. If it is possible, how would I prepare this data for the cluster analysis? I suppose I would assign numbers to them, but how would I standardize these with my other data? 3. I downloaded the open source software Cluster 3.0. Is this a good one to use?
First of all: yes, standardization is a must unless you have a strong argument why it is not necessary. Probably try z-scores first. Discrete data is a larger issue. K-means is meant for continuous data. The mean will not be discrete, so the cluster centers will likely be anomalous. You have a high chance that the clustering algorithm ends up discovering the discreteness of your data, instead of a sensible structure. Categorical variables are worse. K-means can't handle them at all; a popular hack is to turn them into multiple binary variables (male, female). This will, however, expose the above problems at an even worse scale, because now it's multiple highly correlated binary variables. Since you apparently are dealing with survey data, consider using hierarchical clustering. With an appropriate distance function, it *can* deal with all of the above issues. You just need to spend some effort on finding a good measure of similarity. Cluster 3.0 - I have never even seen it. I figure it is an okay choice for non-data-science people. Probably similar to other tools such as Matlab. It will be missing all the modern algorithms, but it probably has an easy-to-use user interface.
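If you do go the hierarchical route, a rough Python sketch of the idea (a simplified Gower-style distance over made-up numeric and categorical columns; not a full implementation):

```
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def gower_distance(num, cat):
    """Gower-style distance: range-scaled absolute differences for numeric
    columns, simple 0/1 mismatch for categorical columns, averaged."""
    n = num.shape[0]
    ranges = num.max(axis=0) - num.min(axis=0)
    ranges[ranges == 0] = 1.0                             # guard for constant columns
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d_num = np.abs(num[i] - num[j]) / ranges      # in [0, 1] per column
            d_cat = (cat[i] != cat[j]).astype(float)      # 0/1 per column
            D[i, j] = D[j, i] = np.concatenate([d_num, d_cat]).mean()
    return D

# made-up survey-like data: a 7-point scale, an age, gender, ethnicity
rng = np.random.default_rng(0)
num = np.column_stack([rng.integers(1, 8, 50), rng.integers(18, 70, 50)]).astype(float)
cat = np.column_stack([rng.choice(['m', 'f'], 50), rng.choice(['a', 'b', 'c'], 50)])

D = gower_distance(num, cat)
Z = linkage(squareform(D), method='average')   # hierarchical clustering on the distances
labels = fcluster(Z, t=3, criterion='maxclust')
```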
The matrix-variate normal distribution can be [sampled indirectly by utilizing the Cholesky decomposition of two positive definite covariance matrices](https://en.wikipedia.org/wiki/Matrix_normal_distribution#Drawing_values_from_the_distribution). However, if one or both of the covariance matrices are positive semi-definite and not positive definite (for example a block structure due to several pairs of perfectly correlated features and samples), the Cholesky decomposition fails, e.g. $\Sigma\_{A} = \begin{bmatrix} 1 & 1 & 0 & 0\\ 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 1 \end{bmatrix} \quad $or another example:$ \quad \Sigma\_{B} = \begin{bmatrix} 4 & 14 & 0 & 0\\ 14 & 49 & 0 & 0\\ 0 & 0 & 25 & 20\\ 0 & 0 & 20 & 16 \end{bmatrix}$ Where $\Sigma\_{B}$ is generated from $R$ (correlation matrix this time) = $\Sigma\_{A} $ and $D$ (standard deviations) = $\begin{bmatrix} 2 & 0 & 0 & 0\\ 0 & 7 & 0 & 0\\ 0 & 0 & 5 & 0\\ 0 & 0 & 0 & 4 \end{bmatrix}$ via $DRD$. Is it possible to adapt the [SVD based sampling technique for the multivariate normal case](https://stats.stackexchange.com/a/159322/107160) that overcomes this difficulty to the matrix-variate case? --- This question is different from this [post](https://stats.stackexchange.com/questions/63817/generate-normally-distributed-random-numbers-with-non-positive-definite-covarian) in that it is not clear if the lower diagonal produced by the SVD based sampling technique will suffice, since it is [potentially quite different from one produced by a Cholesky decomposition](https://math.stackexchange.com/a/307208) that might be performed in this case by [removing duplicate features and/or samples from the covariance matrices, performing the decomposition, and putting them back in](https://stats.stackexchange.com/a/238766/107160). Also, the mentioned [post](https://stats.stackexchange.com/questions/63817/generate-normally-distributed-random-numbers-with-non-positive-definite-covarian) is not concerned with positive semi-definite matrices.
This sounds more like an issue with singular covariance matrices than with random matrices vs. random vectors. To handle the latter issue, do everything as a random vector, and then in the last step, reshape the vector into a matrix. To handle the former problem: ### If your desired covariance matrix is singular... Let $\Sigma$ be a singular covariance matrix. Because it's singular, you can't do a Cholesky decomposition. But you can do a [singular value decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition). ``` [U, S, V] = svd(Sigma) ``` The singular value decomposition will construct matrices $U$, $S$, and $V$ such that $ \Sigma = U S V'$ and $S$ is diagonal. Furthermore, $U = V$ (because $\Sigma$ is symmetric). The number of positive singular values will be the [rank](https://en.wikipedia.org/wiki/Rank_(linear_algebra)) of your covariance matrix. You can then construct $n$ random vectors of length $k$ with: `X = randn(n, k) * sqrt(S) * U'` Let $\mathbf{z}$ be a standard multivariate normal vector. The basic idea is that: \begin{align\*} \mathrm{Var}\left(US^{\frac{1}{2}} \mathbf{z} \right) &= US^{\frac{1}{2}} \mathrm{Var}\left( \mathbf{z} \right)S^{\frac{1}{2}}U' \\ &= U S U'\\ &= \Sigma \end{align\*} Once you get a vector $US^{\frac{1}{2}}\mathbf{z}$ you can simply reshape it to the dimensions of your matrix. (E.g. a 6 by 1 vector could become a 3 by 2 matrix.) The SVD on a symmetric matrix $C$ is a way to find another matrix $A$ such that $AA' = C$. ### Optional step to be a more clever mathematician and more efficient coder Find the singular values below some tolerance and remove them and their corresponding columns from $S$ and $U$ respectively. This way you can generate a random vector $\mathbf{z}$ of dimension less than $k$.
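In Python/numpy terms, a minimal sketch of the same construction (using the $\Sigma\_A$ from the question; the matrix-variate step is only indicated in a comment because the Kronecker convention depends on your definition):

```
import numpy as np

def sample_singular_mvn(Sigma, n, rng=None):
    """Draw n samples from N(0, Sigma) where Sigma may be singular (PSD)."""
    rng = np.random.default_rng() if rng is None else rng
    U, s, Vt = np.linalg.svd(Sigma)          # Sigma = U diag(s) Vt, with U == Vt.T here
    k = Sigma.shape[0]
    Z = rng.standard_normal((n, k))
    return Z @ np.diag(np.sqrt(s)) @ U.T     # rows have covariance U diag(s) U.T = Sigma

Sigma_A = np.array([[1., 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]])
X = sample_singular_mvn(Sigma_A, 100000)
print(np.round(np.cov(X, rowvar=False), 2))  # close to Sigma_A

# for the matrix-variate case: draw a long vector whose covariance is the Kronecker
# product of the two covariance matrices (in the order your definition uses) and
# reshape it to the matrix dimensions
```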
What's the difference between mathematical statistics and statistics? I've read [this](http://en.wikipedia.org/wiki/Statistics): > > Statistics is the study of the collection, organization, analysis, and > interpretation of data. It deals with all aspects of this, including > the planning of data collection in terms of the design of surveys and > experiments. > > > And [this](http://en.wikipedia.org/wiki/Mathematical_statistics): > > Mathematical statistics is the study of statistics from a mathematical > standpoint, using probability theory as well as other branches of > mathematics such as linear algebra and analysis. > > > So what would be the difference between them? I can understand that the processes of collection may not be mathematical, but I guess that organization, analysis and interpretation are; am I missing something?
Mathematical statistics concentrates on theorems and proofs and mathematical rigor, like other branches of math. It tends to be studied in math departments, and mathematical statisticians often try to derive new theorems. "Statistics" includes mathematical statistics, but the other parts of the field tend to concentrate on more practical problems of data analysis and so on.
What are good papers/books to better understand the power of Modular Decomposition and its properties? I'm particularly interested in algorithmic aspects of Modular Decomposition. I have heard that it is possible to find a Modular Decomposition of a graph in linear time. Is there a relatively simple algorithm for that? What about a not-so-efficient but simpler algorithm?
There is a recent survey Habib and Paul (2010). [A survey of the algorithmic aspects of modular decomposition.](http://dx.doi.org/10.1016/j.cosrev.2010.01.001) Computer Science Review 4(1): 41-59 (2010) that you should check out.
From Tanenbaum's *Structured Computer Organization*. > > Exercise 4 of Appendix B > ------------------------ > > > The following binary floating-point number consists of a sign bit, an excess-$64$, radix-$2$ exponent, and a $16$-bit fraction. Normalize $0$ $1000000$ > $0001010100000001$ ($\star$). > > > As far as I've studied, the number ($\star$) is already normalized and it is a representation of the number $1.0001010100000001 \times 2^0$. Moreover, according to IEEE 754, if one is meant to represent an **un**normalized number, she would have to set the exponent-related bits to $0$ --which is not the case. *My question is*: what is the exercise asking me to do? Can that be an unnormalized number? Maybe Tanenbaum is simply asking the reader to multiply ($\star$) by $2^3$ and then to subtract $3$ from the exponent. Yet to me, that makes no sense at all --instead, you're really changing the value ...
I had to re-read your question a few times to have an opinion. The question states that there exists a sign, an excess 64 exponent, and a 16-bit mantissa. It then asks you to normalize a given bit pattern. It's my opinion that you are supposed to make some inferences. First off, an excess 64 suggests that you have a 7-bit exponent field. (It's not certain, as one technically could have an excess 64 applied to an 8-bit or even a 9-bit exponent field, but those are all highly unlikely in normal circumstances.) This appears to be confirmed in the sense that they supply what appears to be one bit, followed by 7 bits, followed by 16 bits. Second, you are being asked to normalize a value. This is commonly asked by providing you with the signed number and a positioned radix, leaving you to figure out the sign, exponent, and mantissa. But in this case, their question must be taken to imply that they are giving you a denormalized value. So I believe they want you to accept the last 16 bits just as they are given to you and to do the necessary shifting to put it back into a properly normalized form. And I therefore think they want you to accept the given exponent as a starting point that you assume is correct for the given value. This leads to the third problem. You inserted a hidden bit on your own, on the assumption that the denormalized value carries a hidden bit. But denormalized values, even in FP formats supporting hidden bits (and not all do that), do not have an implied hidden bit. So I wonder why you inserted one on your own, writing "$1.0001010100000001\times 2^0$." If you were right about that, then why in the world would they now ask you to normalize it?? It doesn't make any sense, either in the format of the question or in actual experience with common cases. So I think here you must assume the opposite... that the 16-bit mantissa they gave you is instead just a pure 16-bit integer that has not been normalized and doesn't imply a hidden bit. Otherwise, what could they possibly be asking you to do? That said, I haven't worked from your book. So I don't know the context here. You will have to use your own judgment about what I write here and see if you think I might be right about it. If I am right, then you are being told $sign=0$, $exponent=0$ as it is in excess 64, and $value=5377$ (decimal). And you are then asked to normalize it and create a proper 24-bit FP formatted value. Since you mentioned IEEE754, which uses hidden bit notation, I'd assume that this novel 24-bit FP format also uses hidden bit notation. I wish I had your book, as that would help to gain context, but I get something like the following: $0 \,\, 1001100 \,\,\, 0101000000010000$
I have two questions. 1. After compilation of any C program, the result is assembly language code, which should depend on the processor. So my question is: how are different computers with different processor types, for example Intel i3, i5, i7, etc., able to run the same software, for example a media player, browser, etc.? (I guess, strangely, it depends on the operating system.) Also, on the software's site they do not ask for the processor type. 2. Do programs interact with any part of the operating system for the conversion of assembly code to machine code?
Regarding your first question, the Core i3, Core i5 and Core i7 share the same instruction set, apart from a few marginal instructions. In particular, there is backward compatibility, and so code that works for Core i3 would also work for Core i5 and Core i7 (unless it uses some undocumented behaviours, or specifically tests for the processor version). However, to get superior performance, optimizers need to know the exact target processor, and special instructions should be utilized. This can be handled in at least two ways: (1) crucial functions could have several versions, depending on the processor (either completely different versions or just different optimizing parameters), (2) libraries are used which implement the previous approach. Regarding your second question, the operating system is usually relied on for handling files, so yes, even an assembler needs to interact with the operating system.
While reading about 32-bit and 64-bit microprocessors, I was curious to know what limits the maximum RAM capacity for an OS. I know that the limit for a 32-bit OS is 4 GB and for a 64-bit OS is 16 exabytes, but my question is how we get there. I found this calculation: $\qquad 2^{32} = 4\,294\,967\,296$ and $\qquad \frac{4\,294\,967\,296}{1024 \cdot 1024}\,\mathrm{B} = 4\,096\, \mathrm{MB} = 4\, \mathrm{GB}$. It's different for 64-bit: $\qquad 2^{64} = 18\,446\,744\,073\,709\,551\,616$ and $\qquad \frac{18\,446\,744\,073\,709\,551\,616}{1024 \cdot 1024}\mathrm{B} = 16\,\mathrm{EB}$. What I don't understand is how the calculation of bits turns into bytes and reaches the results of 4 GB and 16 EB.
1. First of all, in computer architecture, 32-bit/64-bit computing is the use of processors that have datapath widths, integer sizes, and memory address widths of 32/64 bits (four/eight octets). Also, 32-bit/64-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size (due to design decisions that may differ between CPUs/OSes). 2. Secondly, there is some confusion when calculating the maximum RAM capacity for a 32/64-bit OS/microprocessor. The 32-bit/64-bit address/machine word is usually considered as containing a single **byte word** and not a **byte** (*e.g. a byte word might be a floating-point number of 32/64 bits depending on the architecture, whereas a byte is 8 bits*), but since we do all our memory calculations in terms of bytes, the result here too is in terms of bytes. Hence, **bytes** is the unit written, where 1 byte = 1 byte word = 1 address space/machine word. So if we have 4,294,967,296 addresses, that means we can address that many byte words, and hence the total capacity of RAM in a 32-bit OS is 4 GB; similarly, for a 64-bit OS the calculations are of addresses with one byte word each and are later written in terms of bytes. I hope this makes it pretty clear. Thank you for your inputs and replies.
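The arithmetic, spelled out (a trivial sketch):

```
# 32-bit: 2**32 byte addresses
print(2**32)               # 4294967296 addressable bytes
print(2**32 / (1024**3))   # 4.0  -> 4 GB  (1 GB = 1024**3 bytes)

# 64-bit: 2**64 byte addresses
print(2**64)               # 18446744073709551616
print(2**64 / (1024**6))   # 16.0 -> 16 EB (1 EB = 1024**6 bytes)
```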
Note: Case is n>>p I am reading Elements of Statistical Learning and there are various mentions of the "right" way to do cross validation (e.g. page 60, page 245). Specifically, my question is how to evaluate the final model (without a separate test set) using k-fold CV or bootstrapping when there has been a model search? It seems that in most cases (ML algorithms without embedded feature selection) there will be 1. A feature selection step 2. A meta parameter selection step (e.g. the cost parameter in SVM). My Questions: 1. I have seen that the feature selection step can be done where feature selection is performed on the whole training set and the chosen features are held aside. Then, using k-fold CV, the feature selection algorithm is used in each fold (possibly getting different features chosen each time) and the error averaged. Then, you would use the features chosen using all the data (that were set aside) to train the final model, but use the error from the cross validation as an estimate of future performance of the model. *IS THIS CORRECT?* 2. When you are using cross validation to select model parameters, then how do you estimate model performance afterwards? *IS IT THE SAME PROCESS AS #1 ABOVE OR SHOULD YOU USE NESTED CV LIKE SHOWN ON PAGE 54 ([pdf](http://cseweb.ucsd.edu/~elkan/291/dm.pdf)) OR SOMETHING ELSE?* 3. When you are doing both steps (feature and parameter setting).....then what do you do? Complex nested loops? 4. If you have a separate hold-out sample, does the concern go away, and can you use cross validation to select features and parameters (without worry, since your performance estimate will come from a hold-out set)?
The key thing to remember is that for cross-validation to give an (almost) unbiased performance estimate, *every* step involved in fitting the model must also be performed independently in each fold of the cross-validation procedure. The best thing to do is to view feature selection, meta/hyper-parameter setting and optimising the parameters as integral parts of model fitting and never do any one of these steps without doing the other two. The optimistic bias that can be introduced by departing from that recipe can be surprisingly large, as demonstrated by [Cawley and Talbot](http://jmlr.csail.mit.edu/papers/v11/cawley10a.html), where the bias introduced by an apparently benign departure was larger than the difference in performance between competing classifiers. Worse still, biased protocols favour bad models most strongly, as they are more sensitive to the tuning of hyper-parameters and hence are more prone to over-fitting the model selection criterion! Answers to specific questions: The procedure in step 1 is valid because feature selection is performed separately in each fold, so what you are cross-validating is the whole procedure used to fit the final model. The cross-validation estimate will have a slight pessimistic bias as the dataset for each fold is slightly smaller than the whole dataset used for the final model. For 2, as cross-validation is used to select the model parameters, you need to repeat that procedure independently in each fold of the cross-validation used for performance estimation, so you end up with nested cross-validation. For 3, essentially, yes you need to do nested-nested cross-validation. Essentially you need to repeat in each fold of the outermost cross-validation (used for performance estimation) *everything* you intend to do to fit the final model. For 4 - yes, if you have a separate hold-out set, then that will give an unbiased estimate of performance without needing an additional cross-validation.
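A schematic sketch of the nested setup in scikit-learn-style code (the estimator, feature selector and parameter grid are placeholders, not a recommendation):

```
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=200, n_features=50, random_state=0)

# feature selection and hyper-parameter tuning live *inside* the pipeline,
# so both are redone from scratch in every fold
pipe = Pipeline([("select", SelectKBest(f_classif)),
                 ("clf", SVC())])
grid = {"select__k": [5, 10, 20], "clf__C": [0.1, 1, 10]}

inner = GridSearchCV(pipe, grid, cv=5)             # model selection (inner CV)
outer_scores = cross_val_score(inner, X, y, cv=5)  # performance estimation (outer CV)
print(outer_scores.mean())

# the final model is refit on all the data; the outer scores estimate its performance
final_model = inner.fit(X, y)
```

Here the inner `GridSearchCV` handles feature selection and hyper-parameter tuning, and the outer `cross_val_score` only estimates the performance of that whole procedure.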
Say I have a database of around a million words, and I want to get an intuitive idea about exactly how a particular, quite infrequent, word is distributed throughout this data. My goal is to be able to see clearly whether this word tends to cluster together, or whether it is relatively evenly spaced. What would be some good methods for visualizing this? For instance, I have seen something that looks useful. It's basically a strip (long rectangle) in which each instance of something is represented by a very thin red vertical line. The problem is that I don't know what these are called, and therefore I can't figure out how to make something like this in R. Any help finding the right R function for that, or any other suggestions for good ways to visualize this sort of data, would be most appreciated.
While Whuber is correct in principle you still might be able to see something because your word is very infrequent and you only want plots of the one word. Something quite uncommon might only appear 30 times, probably not more than 500. Let's say you convert your words into a single vector of words that's a million long. You could easily construct a plot with basic R commands. Let's call the vector 'words' and the rare item 'wretch'. ``` n <- length(words) plot(1:n, integer(n), type = 'n', xlab = 'index of word', ylab = '', main = 'instances of wretch', yaxt = 'n', frame.plot = TRUE) wretch <- which(words %in% 'wretch') abline(v = wretch, col = 'red', lwd = 0.2) ``` You could change the line assigning wretch using a grep command if you need to account for variations of the word. Also, the lwd in the abline command could be set thicker or thinner depending on the frequency of the word. If you end up plotting 400 instances 0.2 will work fine. I tried some density plots of this kind of data. I imported about 50,000 words of Shakespeare and finding patterns was easier for me in the code above than it was in the density plots. I used a very common word that appeared in frequency 200x more than the mean frequency ('to') and the plots looked just fine. I think you'll make a fine graph like this with rare instances in 1e6 words.
\begin{aligned} Y\_t &= a Y\_{t-1} + e\_t, \\ Z\_t &= Y\_t + H\_t, \\ \end{aligned} where $H\_t$ is independent of $Y\_t$. I'm trying to understand what ARMA model $Z\_t$ corresponds to but I'm not really sure. Can someone provide a quick explanation?
We can see that $Y\_t$ is an AR(1) process with parameter $a$. We can find the autocorrelation function of $Z\_t$ by first calculating its autocovariance. $$\text{cov}(Z\_t, Z\_{t-k}) = \text{cov}(Y\_t + H\_t, Y\_{t-k} + H\_{t-k})$$ this gives $$\text{cov}(Z\_t, Z\_{t-k}) = \text{cov}(Y\_t, Y\_{t-k}) + \text{cov}(Y\_t, H\_{t-k}) + \text{cov}(H\_t, Y\_{t-k}) + \text{cov}(H\_t, H\_{t-k})$$ where the two middle terms vanish because $H$ is independent of $Y$ (and, for $k > 0$, the last term vanishes as well if $H\_t$ is white noise). Therefore when k = 0, we get $$ \gamma\_Z(0) = \sigma^2\_Y + \sigma^2\_H $$ and when k > 0, we get $$\gamma\_Z(k) = \gamma\_Y(k) $$ From the equation for the autocovariance of AR(1) models (since $Y\_t$ is an AR(1)), $$ \gamma\_Y(k) = a^k \gamma\_Y(0) \text{ and } \gamma\_Y(0) = \sigma^2\_Y$$ this gives us an autocorrelation function of $$\rho\_Z(k) = \frac{a^k \sigma^2\_Y}{\sigma^2\_Y + \sigma^2\_H}, \quad k \ge 1$$ this has the form $$\rho\_Z(k) = A a^{k-1}$$ (with $A = a\sigma^2\_Y/(\sigma^2\_Y + \sigma^2\_H)$), which is typical of ARMA$(1,1)$ models and therefore implies that $Z\_t$ is an ARMA$(1,1)$ model.
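A quick simulation check of this ACF (my own sketch; $H\_t$ is taken to be Gaussian white noise and the parameter values are made up):

```
import numpy as np

rng = np.random.default_rng(0)
a, sd_e, sd_h, n = 0.8, 1.0, 1.0, 200000

# simulate Y_t = a Y_{t-1} + e_t and Z_t = Y_t + H_t
e = rng.normal(0, sd_e, n)
Y = np.zeros(n)
for t in range(1, n):
    Y[t] = a * Y[t-1] + e[t]
Z = Y + rng.normal(0, sd_h, n)

var_Y = sd_e**2 / (1 - a**2)                      # stationary AR(1) variance
for k in (1, 2, 3):
    sample_acf = np.corrcoef(Z[:-k], Z[k:])[0, 1]
    theory = a**k * var_Y / (var_Y + sd_h**2)
    print(k, round(sample_acf, 3), round(theory, 3))
```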
Why do most people prefer to use many-one reductions to define NP-completeness instead of, for instance, Turing reductions?
I think the reason people prefer (to start with) many-one reductions is pedagogical -- a many-one reduction from A to B is actually a function on strings, whereas a Turing reduction requires the introduction of oracles. Note that Cook reduction (polynomial-time Turing) and Karp-Levin reduction (polynomial-time many-one) are known to be distinct on E unconditionally, by Ko and Moore, and separately by Watanabe (as referenced in the Lutz and Mayordomo paper in Aaron Sterling's response).
I recently tried coming up with an algorithm that uses dynamic programming for the counting variant of the change problem. Given a target and a set of denominations, print the number of possible combinations that add up to the target. So if my target is 5 and my set is `{ 1, 2, 5, 10 }` then the solution is 4. More information can be found on: <http://www.wcipeg.com/wiki/Change_problem> Although the solution to this particular problem is a single number, I instead decided to list all possible combinations. My reasoning was that if I list all combinations then it'll be easier for me to work out whether or not my algorithm is working. I assumed that these two problems are identical other than the obvious difference that the form of the output is different. But I really struggled with coming up with a dynamic programming solution for listing all combinations (a recursive algorithm wasn't an issue). Since then I've been trying to find such an algorithm online but I'm surprised that it doesn't seem to exist. So I'm wondering if my assumption is incorrect and these are actually two separate problems. I'm aware that when solving problems like this I should focus on just what's required to make things a little easier, but I didn't think it'd result in two completely separate problems. So there are actually THREE variants of the change problem: * optimisation problem where I work out the smallest number of coins required to meet my target (the same algorithm can be used for both working out the minimum number of coins required AND also the actual minimum set of coins) * counting problem, which gives the number of possible combinations (can be solved using dynamic programming AND recursion) * listing the set of possible combinations (can only be solved using recursion) Is this correct? Are these separate problems? --- Edit: I should mention that I did come up with this algorithm that uses dynamic programming, but I think it only works for certain sets of denominations, hence why I said that I failed to come up with an algorithm.

```
#include <iostream>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>

// helper that prints the elements of a container on one line
template <typename C>
void PRINT_ELEMENTS(const C& c)
{
    for (const auto& e : c) std::cout << e << ' ';
    std::cout << '\n';
}

int main()
{
    std::unordered_map<int, std::vector<std::pair<std::vector<int>, int>>> m;
    m[1].push_back({ { 1 }, 1 });
    const int target = 25;
    const std::vector<int> coins = { 200, 100, 50, 20, 10, 5, 2, 1 };
    for (int i = 2; i <= target; i++) // N
    {
        std::unordered_set<int> s;
        for (auto coin : coins) // M
        {
            if (coin > i) continue;
            if (coin == 1)
            {
                m[i].push_back({ { i }, 1 });
                s.insert(1); // constant
            }
            for (auto v : m[i - coin]) // N
            {
                if (s.find(v.second + 1) != s.end()) continue; // constant
                std::vector<int> temp_v = v.first;
                temp_v.push_back(coin); // constant
                m[i].push_back({ temp_v, v.second + 1 }); // constant
                s.insert(v.second + 1); // constant
                std::cout << i << ": ";
                PRINT_ELEMENTS(temp_v);
            }
        }
    }
    return 0;
}
```

The *trick* I'm using is that for a given target, each combination has a unique number of elements. E.g. for target 5 the combinations are `{5}, {2, 2, 1}, {2, 1, 1, 1}, and {1, 1, 1, 1, 1}` and as you can see the cardinality of each set is unique. This won't work if, let's say, the denominations were `{1, 2, 3, 5, 6}` and the target was 8. In that case I can use arrays/vectors to represent combinations and store them in a set, but I'd have to sort each vector first (so that they can be easily compared) and that increases the complexity of my solution quite a bit, I think.
First of all, note that it is an NP-complete problem if you are searching for a specific subset of denominations that sums to a particular integer. You could solve this problem by brute force, simply generating all subsets of coins and checking each time whether the set adds up to the target. The time complexity of this algorithm is terrible, though there are pseudo-polynomial solutions too. A counting problem, which gives the number of possible outputs, doesn't have to be solved using a recursive algorithm, though a recursive algorithm, in particular for generating subsets, is much shorter, more intuitive, and more readable. Any recursive algorithm may be rewritten in a nonrecursive form. Similarly, there is nothing special about dynamic programming; it is just a way to solve a problem. But not all counting problems (or problems in general) are solved, or have to be solved, using DP. Also, once you have generated a list of sequences or sets then of course you can count them; in other words, your listing solves the corresponding counting problem. However, if you are interested just in the number of elements/sequences/sets then you don't have to list/generate sets. For example, given $n$ you could compute that the total number of binary strings of length $n$ is $2^n$ without generating these strings. Computing $2^n$ is easier than listing all strings. Similarly, if you are interested in the number of subsets of size $m$ of a set of size $n$, then that value is $\binom{n}{m}$, which may be calculated directly (or approximated using Stirling's approximation). But listing all possible subsets is harder. Thus counting problems ARE NOT always the same as problems involving listing all possible combinations.
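For reference, a short sketch of the standard DP for the counting variant (in Python for brevity, not the question's C++):

```
def count_change(target, coins):
    """Number of multisets of coins summing to target (order ignored)."""
    ways = [1] + [0] * target
    for c in coins:                       # process one denomination at a time
        for amount in range(c, target + 1):
            ways[amount] += ways[amount - c]
    return ways[target]

print(count_change(5, [1, 2, 5, 10]))     # 4, matching the example in the question
```

Processing one denomination at a time is what makes the count ignore order, so `2+2+1` and `1+2+2` are counted once.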
Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?
Let the real values data matrix $\mathbf X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. Let us assume that it is *centered*, i.e. column means have been subtracted and are now equal to zero. Then the $p \times p$ covariance matrix $\mathbf C$ is given by $\mathbf C = \mathbf X^\top \mathbf X/(n-1)$. It is a symmetric matrix and so it can be diagonalized: $$\mathbf C = \mathbf V \mathbf L \mathbf V^\top,$$ where $\mathbf V$ is a matrix of eigenvectors (each column is an eigenvector) and $\mathbf L$ is a diagonal matrix with eigenvalues $\lambda\_i$ in the decreasing order on the diagonal. The eigenvectors are called *principal axes* or *principal directions* of the data. Projections of the data on the principal axes are called *principal components*, also known as *PC scores*; these can be seen as new, transformed, variables. The $j$-th principal component is given by $j$-th column of $\mathbf {XV}$. The coordinates of the $i$-th data point in the new PC space are given by the $i$-th row of $\mathbf{XV}$. If we now perform singular value decomposition of $\mathbf X$, we obtain a decomposition $$\mathbf X = \mathbf U \mathbf S \mathbf V^\top,$$ where $\mathbf U$ is a unitary matrix (with columns called left singular vectors), $\mathbf S$ is the diagonal matrix of singular values $s\_i$ and $\mathbf V$ columns are called right singular vectors. From here one can easily see that $$\mathbf C = \mathbf V \mathbf S \mathbf U^\top \mathbf U \mathbf S \mathbf V^\top /(n-1) = \mathbf V \frac{\mathbf S^2}{n-1}\mathbf V^\top,$$ meaning that right singular vectors $\mathbf V$ are principal directions (eigenvectors) and that singular values are related to the eigenvalues of covariance matrix via $\lambda\_i = s\_i^2/(n-1)$. Principal components are given by $\mathbf X \mathbf V = \mathbf U \mathbf S \mathbf V^\top \mathbf V = \mathbf U \mathbf S$. To summarize: 1. If $\mathbf X = \mathbf U \mathbf S \mathbf V^\top$, then the columns of $\mathbf V$ are principal directions/axes (eigenvectors). 2. Columns of $\mathbf {US}$ are principal components ("scores"). 3. Singular values are related to the eigenvalues of covariance matrix via $\lambda\_i = s\_i^2/(n-1)$. Eigenvalues $\lambda\_i$ show variances of the respective PCs. 4. Standardized scores are given by columns of $\sqrt{n-1}\mathbf U$ and loadings are given by columns of $\mathbf V \mathbf S/\sqrt{n-1}$. See e.g. [here](https://stats.stackexchange.com/questions/125684) and [here](https://stats.stackexchange.com/questions/143905) for why "loadings" should not be confused with principal directions. 5. **The above is correct only if $\mathbf X$ is centered.** Only then is covariance matrix equal to $\mathbf X^\top \mathbf X/(n-1)$. 6. The above is correct only for $\mathbf X$ having samples in rows and variables in columns. If variables are in rows and samples in columns, then $\mathbf U$ and $\mathbf V$ exchange interpretations. 7. If one wants to perform PCA on a correlation matrix (instead of a covariance matrix), then columns of $\mathbf X$ should not only be centered, but standardized as well, i.e. divided by their standard deviations. 8. To reduce the dimensionality of the data from $p$ to $k<p$, select $k$ first columns of $\mathbf U$, and $k\times k$ upper-left part of $\mathbf S$. Their product $\mathbf U\_k \mathbf S\_k$ is the required $n \times k$ matrix containing first $k$ PCs. 9. 
9. Further multiplying the first $k$ PCs by the corresponding principal axes $\mathbf V\_k^\top$ yields the matrix $\mathbf X\_k = \mathbf U\_k^\vphantom \top \mathbf S\_k^\vphantom \top \mathbf V\_k^\top$, which has the original $n \times p$ size but is *of lower rank* (rank $k$). This matrix $\mathbf X\_k$ provides a *reconstruction* of the original data from the first $k$ PCs. It has the lowest possible reconstruction error, [see my answer here](https://stats.stackexchange.com/questions/130721).
10. Strictly speaking, $\mathbf U$ is of $n\times n$ size and $\mathbf V$ is of $p \times p$ size. However, if $n>p$ then the last $n-p$ columns of $\mathbf U$ are arbitrary (and the corresponding rows of $\mathbf S$ are zero); one should therefore use an *economy size* (or *thin*) SVD that returns $\mathbf U$ of $n\times p$ size, dropping the useless columns. For large $n\gg p$ the matrix $\mathbf U$ would otherwise be unnecessarily huge. The same applies in the opposite situation of $n\ll p$.

---

Further links
-------------

* [What is the intuitive relationship between SVD and PCA](https://math.stackexchange.com/questions/3869) -- a very popular and very similar thread on math.SE.
* [Why PCA of data by means of SVD of the data?](https://stats.stackexchange.com/questions/79043) -- a discussion of the benefits of performing PCA via SVD [short answer: numerical stability].
* [PCA and Correspondence analysis in their relation to Biplot](https://stats.stackexchange.com/q/141754/3277) -- PCA in the context of some congeneric techniques, all based on SVD.
* [Is there any advantage of SVD over PCA?](https://stats.stackexchange.com/questions/121162) -- a question asking if there are any benefits in using SVD *instead* of PCA [short answer: ill-posed question].
* [Making sense of principal component analysis, eigenvectors & eigenvalues](https://stats.stackexchange.com/a/140579/28666) -- my answer giving a non-technical explanation of PCA. To draw attention, I reproduce one figure here:

[![Rotating PCA animation](https://i.stack.imgur.com/Q7HIP.gif)](https://i.stack.imgur.com/Q7HIP.gif)
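As promised above, here is a quick numerical check in R (not part of the original answer; the data are simulated) that the SVD of the centered data matrix reproduces `prcomp()`'s principal axes, scores, and variances, up to the usual sign indeterminacy of the columns:

```r
set.seed(1)
X  <- matrix(rnorm(100 * 7), nrow = 100)      # n = 100 samples, p = 7 variables
Xc <- scale(X, center = TRUE, scale = FALSE)  # centering is essential (point 5)

s   <- svd(Xc)
pca <- prcomp(X, center = TRUE, scale. = FALSE)

# Right singular vectors = principal axes (up to sign)
all.equal(abs(s$v), abs(pca$rotation), check.attributes = FALSE)

# U %*% diag(d) = principal component scores (up to sign)
all.equal(abs(s$u %*% diag(s$d)), abs(pca$x), check.attributes = FALSE)

# Eigenvalues of the covariance matrix: lambda_i = s_i^2 / (n - 1)
all.equal(s$d^2 / (nrow(X) - 1), pca$sdev^2)
```

Each `all.equal()` call should return `TRUE`; absolute values are compared because the sign of each singular vector (and of the corresponding scores) is arbitrary.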
I have a dataset with 15 variables. Some variables are numeric and continuous. Other variables are boolean, i.e. dichotomous (true/false). There is also one categorical, nominal variable.

```
str(df)
'data.frame': 30 obs. of 15 variables:
 nom : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 1 ...
 X1  : logi FALSE TRUE FALSE TRUE TRUE FALSE ...
 X3  : logi TRUE TRUE TRUE TRUE FALSE FALSE ...
 X3  : logi TRUE FALSE FALSE FALSE TRUE FALSE ...
 X4  : logi FALSE TRUE FALSE TRUE FALSE FALSE ...
 X5  : logi TRUE FALSE FALSE FALSE FALSE TRUE ...
 X1.1: num 1.026 -0.285 -1.221 0.181 -0.139 ...
 X2.1: num -0.045 -0.785 -1.668 -0.38 0.919 ...
 X3.1: num 1.13 -1.46 0.74 1.91 -1.44 ...
 X4.1: num 0.298 0.637 -0.484 0.517 0.369 ...
 X5.1: num 1.997 0.601 -1.251 -0.611 -1.185 ...
 X6  : num 0.0597 -0.7046 -0.7172 0.8847 -1.0156 ...
 X7  : num -0.0886 1.0808 0.6308 -0.1136 -1.5329 ...
 X8  : num 0.134 0.221 1.641 -0.219 0.168 ...
 X9  : num 0.704 -0.106 -1.259 1.684 0.911 ..
 X10 : android android OS windows7 windows8...
[...]
```

I would like to cluster **the variables** (not the data cases) `x1, x2, ..., x9` (probably omitting the nominal `X10`) into clusters or subsets of correlated variables, for example `(x1,x2,x6), (x3,x5), ...`

As the variables have mixed types, I don't think it is possible to use `cor()`. It is also impossible to use the Gower similarity coefficient, because that is a similarity between data *cases*. Can you help me find an approach to this problem? I would prefer a solution in R.
Traditional FA and cluster algorithms were designed for use with continuous (i.e., gaussian) variables. Mixtures of continuous and qualitative variables invariably give erroneous results. In particular, and in my experience, the categorical information will dominate the solution.

A better approach would be to employ a variant of finite mixture models, which are often intended for use with mixtures of continuous and categorical information. Latent class mixture models (which are FMMs) have a huge literature built up around them. Much of that literature is focused in the field of marketing science, where these methods see wide use for, e.g., consumer segmentation...but that's not the only field where they are used.

The software I know and recommend for latent class modeling is neither free nor R-based but, in terms of proprietary software, it's not *that* expensive. It's called *Latent Gold*, is sold by Statistical Innovations and costs about $1,000 for a perpetual license. If your project has a budget, it could easily be expensed. *LG* offers a wide suite of tools including FA for mixtures, clustering of mixtures, longitudinal Markov chain-based clustering, and more.

Otherwise, the only R-based freeware I know about (poLCA, <https://www.jstatsoft.org/article/view/v042i10>) is intended for use with multi-way contingency tables. I'm not aware that this tool can accept anything other than categorical information. There may be others. If you poke around, maybe you can find some alternatives.
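For orientation only, here is a minimal, hypothetical sketch of the poLCA interface mentioned above (not from the original answer; the toy data and variable names are invented). Note that poLCA expects categorical indicators coded as integers starting at 1, so continuous variables would first have to be discretized, and that a latent class model groups *cases* into classes:

```r
# install.packages("poLCA")
library(poLCA)

# Toy categorical data; names A, B, C are made up for illustration
set.seed(42)
toy <- data.frame(
  A = sample(1:2, 200, replace = TRUE),
  B = sample(1:3, 200, replace = TRUE),
  C = sample(1:2, 200, replace = TRUE)
)

# Two-class latent class model on the three categorical indicators
fit <- poLCA(cbind(A, B, C) ~ 1, data = toy, nclass = 2)
table(fit$predclass)   # most likely class membership for each observation
```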
Why do most C++ instructors teaching college-level computer science discourage or even forbid using strings for text, instead requiring students to use character arrays? I assume this approach is somehow intended to teach good programming habits, but in my experience I don't see anything wrong with just using strings, and they are significantly easier to use and learn.
For a CS student, the professor may discourage `std::string` and ask for char arrays in order to teach the basics of arrays. In my school, the teacher asked us to write C routines like `strlen`, `strcmp`, etc. on our own. It can't get any lower-level than this (short of assembly!).

By using arrays instead of strings, you learn memory management and string operations, notably string parsing: by iterating over an array, you are essentially parsing the string, looking for whitespace, special characters, tokens, etc.

But I am not recommending arrays over strings at all. I am just giving a reason to put up with arrays at university.
Help me find a suitable statistical test to show a significant difference [through a p-value] for the following data:

```
> Sample_data
  Universe subset
x     2200      5
y     2500     50
```

From these data, I want to find out whether the proportions of the "Universe" that are in the "subset" for x and y are significantly different. For example, the x and y subsets are 0.2% [5/2200\*100] and 2% [50/2500\*100], hence there is a 10-fold difference between x and y. But how can I show this through a statistical test, and which test is most appropriate in the R environment? How can I take the universe sizes into account when testing the difference between the subsets of x (5) and y (50)?

---

The data structure is a matrix. X has a total of 2200 genes (the X-Universe) and Y has a total of 2500 genes (the Y-Universe). Out of the 2200 genes in X, only 5 belong to the subset (a category P), and out of the 2500 genes in Y, 50 belong to the same subset (category P). I want to show that the x and y subset values (5 and 50) are significantly different with respect to the total number of genes in X and Y (2200 and 2500).

Statistically, **how can we determine whether 5 out of 2200 and 50 out of 2500 are significantly different?**
It sounds like you want to compare two proportions using Fisher's exact test. Here are the results as computed by an [online free calculator](http://www.graphpad.com/quickcalcs/contingency1/):

```
          Outcome1  Outcome2  Total
Group 1          5      2195   2200
Group 2         50      2450   2500
Total           55      4645   4700
```

Fisher's exact test: the two-tailed P value is less than 0.0001. The association between rows (groups) and columns (outcomes) is considered to be extremely statistically significant. That answers your question.

It might be useful to quantify the ratio of the two proportions with a confidence interval, as that is often more informative than just a P value. The relative risk is 0.11 with a 95% confidence interval ranging from 0.045 to 0.28. Looked at the other way (reciprocal), the relative risk is 8.80 with the 95% CI ranging from 3.51 to 22.03. One set of genes has almost 9 times the "hits" of the other, and you can be 95% sure (given some assumptions) that the range of 3.5 to 22 includes the true ratio. You have strong evidence that the difference in "hit rate" (whatever the experimental details are) between the two sets of genes would be very unlikely to occur by chance.
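Since the question asks for an R-based approach, here is a minimal sketch using base R functions (not part of the original answer); note that `fisher.test()` reports an odds-ratio confidence interval rather than the relative risk quoted above:

```r
# Counts from the question: 5 of 2200 genes in x, 50 of 2500 genes in y
tab <- matrix(c( 5, 2195,
                50, 2450),
              nrow = 2, byrow = TRUE,
              dimnames = list(c("x", "y"), c("in subset", "not in subset")))

fisher.test(tab)                      # two-sided p-value and an odds-ratio CI
prop.test(c(5, 50), c(2200, 2500))    # chi-squared test of equal proportions
```

If a relative-risk CI like the one quoted above is needed, it can be obtained from add-on packages (e.g. `riskratio()` in the `epitools` package).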
Assuming a Poisson distribution, is there a way to solve for lambda in R? My inputs would be "x" and Pr(X <= x), and I would like R to tell me the lambda. Thanks
Let's call your original CI a 'probability-symmetric' confidence interval. For a symmetrical distribution, such an interval may be the narrowest one. However, the probability-symmetric 95% CI for normal $\sigma^2,$ based on pivoting $$\frac{(n-1)S^2}{\sigma^2}\sim\mathsf{Chisq}(\nu = n-1)$$ is not the shortest, because chi-squared distributions are not symmetrical. For convenience, the probability-symmetric 95% CI is often used. (Also, 'minimum width' may not be the most important criterion, so the narrowest CI may not be the most useful.)

*Example:* Suppose a random normal sample of size $n=50$ has $S^2 = 13.52.$ Then the probability-symmetric 95% CI $(9.43, 20.99)$ has width $11.56,$ while the 95% CI $(9.28, 20.61)$ has width $11.33.$ [Using R below.]

```
CI = 49*(13.52)/qchisq(c(.975, .025), 49);  CI
[1]  9.434025 20.994510
diff(CI)
[1] 11.56048

CI = 49*(13.52)/qchisq(c(.98, .03), 49);  CI
[1]  9.277642 20.611959
diff(CI)
[1] 11.33432

CI = 49*(13.52)/qchisq(c(1, .05), 49);  CI
[1]  0.00000 19.52473
diff(CI)
[1] 19.52473    # one-sided
```

In case width is especially important, one could search for the narrowest 95% CI.
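As a follow-up to that last remark, one way such a search could be sketched in R (not part of the original answer) is to minimize the interval width over how the total 5% tail probability is split between the two tails of the chi-squared pivot:

```r
# Search for the narrowest 95% CI for sigma^2 (same setup: n = 50, S^2 = 13.52)
n  <- 50; s2 <- 13.52; nu <- n - 1

# CI width when the chi-squared pivot gets lower-tail probability p1
# and upper-tail probability 0.05 - p1
width <- function(p1) {
  p2 <- 0.05 - p1
  (nu * s2) * (1 / qchisq(p1, nu) - 1 / qchisq(1 - p2, nu))
}

opt <- optimize(width, interval = c(1e-6, 0.05 - 1e-6))
p1  <- opt$minimum
CI  <- nu * s2 / qchisq(c(1 - (0.05 - p1), p1), nu)
CI; diff(CI)   # narrower than the probability-symmetric interval above
```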