Let's say I have a data set with several measures and one factor (classification) like the one below (for the sake of simplicity, I'm simulating 10 rows and 5 variables only). I'd like to know how much each variable contributes to the overall classification. I thought about running a linear regression, but I'm wondering if it makes sense to use it to "explain" a factor. When I run `lm(classification ~ ., data = data)` I get a warning saying

```
Warning messages:
1: In model.response(mf, "numeric") :
  using type = "numeric" with a factor response will be ignored
2: In Ops.factor(y, z$residuals) : ‘-’ not meaningful for factors
```

but I do get a result (intercept and coefficients for each variable). My questions are: do they make any sense? And is there a better way to get to the answer I'm looking for?

```
   classification  variable_1  variable_2 variable_3 variable_4 variable_5
1               5 -0.90174176 -0.64796703  1.2106427 -0.9229394 -0.6578518
2               5  1.75760214  0.18486432  0.2018499  0.1301168 -0.6510428
3               8 -0.29445029 -0.23108298 -2.6244614  0.3745607  0.3124868
4               4  0.78639724  1.04943276 -0.6047869 -0.4275781  0.6395614
5               3 -2.06554518  0.07336021  2.8142735  1.0558045 -0.1818247
6               4  0.04374419 -0.13775079  0.6132946 -0.5890983  1.9965892
7              10 -1.46731867  1.00367532 -0.8626940 -1.8378582  0.2702731
8               8  0.27206146 -0.13775707  2.6827356  1.5554446  0.1549394
9               5  0.58075881  2.03567118  0.2056770 -0.2935464 -1.3586576
10              9  0.57725709 -0.25396790  0.6640166 -1.9626897  0.3650243
```

Code to reproduce it:

```
data <- data.frame(classification=sample(3:10,replace=TRUE,size=10))
for(i in 1:5){
  data[,paste0("variable_",i)]<-rnorm(10)
}
```

Thanks!
When I need to answer a similar question I usually: 1) run a gradient boosting classifier against my data using the scikit-learn package (it has this algorithm built in), and 2) look at `feature_importances_`. In my experience, `feature_importances_` gives a really good approximation of how important the features are. As far as I can see, R's gbm package provides the same classifier and a similar importance measure, so I suggest you try it. The nature of a GBM classifier is that it should approximate the importance of features relatively well. The same goes for RandomForest, by the way.
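For concreteness, here is a minimal sketch of that workflow in scikit-learn; the data frame below only mimics the shape of the simulated data in the question, and the column names and sample size are my own placeholders:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical stand-in for the questioner's data: one factor plus 5 numeric variables.
rng = np.random.default_rng(0)
df = pd.DataFrame({f"variable_{i}": rng.normal(size=100) for i in range(1, 6)})
df["classification"] = rng.integers(3, 11, size=100)

X = df.drop(columns="classification")
y = df["classification"]

model = GradientBoostingClassifier().fit(X, y)

# One importance score per column, summing to 1; larger = more useful in the split decisions.
for name, score in sorted(zip(X.columns, model.feature_importances_),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```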
Please forgive my ignorance, however I would like to explain this problem and get some advice on how to approach it. Let's say that I have the following training inputs, where `word`, `x`, and `y` are the data, and `class` is the classification that the data has been put into, with 0 being unclassified - Training 1 ``` word, x, y, class 56, 48, 23, 0 91, 12, 44, 2 74, 45, 23, 0 91, 76, 48, 1 ``` Training 2 ``` word, x, y, class 49, 48, 45, 0 84, 16, 12, 2 10, 45, 23, 0 72, 76, 48, 3 84, 18, 12, 0 10, 45, 23, 1 24, 79, 48, 0 ``` And I would then like to provide the following test data, to be classified - Test 1 ``` word, x, y 64, 36, 45 84, 16, 12 95, 45, 23 72, 76, 88 22, 18, 12 ``` What might be the best approach to this problem?
This is a [classification](/questions/tagged/classification "show questions tagged 'classification'") task. Classification is an enormous field. You might want to start with Classification And Regression Trees (CARTs), which have the advantage of being easy to understand and explain. Plus, they are implemented in pretty much every ML or statistics package, such as R. If you have thoroughly understood CARTs, you might want to look at Random Forests, which generalize CARTs and [perform quite well across different classification tasks](http://jmlr.org/papers/v15/delgado14a.html). In any case, be sure to tell your system what kind of data you have. For instance, your `word` is numerically encoded, but I strongly suspect that it is actually categorical. Same for your target classification.
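If it helps, here is a rough sketch of that starting point in scikit-learn. The numbers below are made up in the shape of the question's tables, and the main point is the one-hot encoding of `word` so that it is treated as categorical rather than numeric:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy training data in the question's shape; `word` is treated as categorical.
train = pd.DataFrame({
    "word":  [56, 91, 74, 91, 49, 84, 10, 72],
    "x":     [48, 12, 45, 76, 48, 16, 45, 76],
    "y":     [23, 44, 23, 48, 45, 12, 23, 48],
    "class": [0, 2, 0, 1, 0, 2, 0, 3],
})
test = pd.DataFrame({"word": [64, 84, 95], "x": [36, 16, 45], "y": [23, 12, 23]})

# One-hot encode the categorical column; keep x and y numeric.
X_train = pd.get_dummies(train.drop(columns="class"), columns=["word"])
# Word codes unseen in training simply get all-zero dummy columns here.
X_test = pd.get_dummies(test, columns=["word"]).reindex(columns=X_train.columns, fill_value=0)

cart = DecisionTreeClassifier(max_depth=3).fit(X_train, train["class"])
forest = RandomForestClassifier(n_estimators=200).fit(X_train, train["class"])

print(cart.predict(X_test))
print(forest.predict(X_test))
```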
I am currently learning the lambda calculus and was wondering about the following two different kinds of writing a lambda term. 1. $\lambda xy.xy$ 2. $\lambda x.\lambda y.xy$ Is there any difference in meaning or the way you apply beta reduction, or are those just two ways to express the same thing? Especially this definition of pair creation made me wonder: > > **pair** = $\lambda xy.\lambda p.pxy$ > > >
The first is an abbreviation for the second. It's a common syntactic convention to shorten expressions. On the other hand, if you have tuples in the language, then there is a difference between 1. $\lambda x.\lambda y.xy$ and 2. $\lambda (x,y).xy$. In the former case I can provide a single argument to the function, and pass the resulting function around to other functions. In the latter case, both arguments must be supplied at once. There is, of course, a function that can be applied to convert 1 into 2 and vice versa. This process is known as [*(un)currying*](http://en.wikipedia.org/wiki/Currying). The definition of $\text{pair}$ you mention is an encoding of the notion of pairs into the $\lambda$-calculus, rather than pairs as a primitive data type (as I hinted at above).
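If a concrete illustration helps, here is the same distinction sketched in Python (used only because it makes the mechanics easy to run; the lambda calculus itself has no tuples unless you add or encode them):

```python
# Curried form, λx.λy.x y: one argument at a time, so partial application is free.
curried = lambda x: lambda y: x(y)
add_one = curried(lambda n: n + 1)    # supply only x; get back a function awaiting y
print(add_one(41))                    # 42

# Tupled form, λ(x, y).x y: both arguments must be supplied at once.
tupled = lambda xy: xy[0](xy[1])
print(tupled((lambda n: n + 1, 41)))  # 42

# Converting between the two forms is exactly (un)currying.
def curry(f):
    return lambda x: lambda y: f((x, y))

def uncurry(f):
    return lambda xy: f(xy[0])(xy[1])
```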
Suppose I have two stacks $<a\_1,a\_2,...a\_m>$ and $<b\_1, b\_2,...b\_n>$ and a third stack of size $m+n$. I want to have the third stack in the following manner, $$<a\_1,b\_1,a\_2,b\_2,...a\_n,b\_n,...a\_{m-1},a\_m>$$ for $$m>n$$ This was easy to do if the two initial stacks were not size constrained. But if the two stacks are size constrained, I am in a fix. Is it even possible to interleave the elements of the two stacks into a third stack in constant space? Also, what would be the minimum number of moves to do this? I know that using recursion this can be reduced to a Tower of Hanoi variant, but what about a non-recursive algorithm?
Here's an algorithm that works without using recursion. I might say it's even easier to do without recursion. I'll be working off the underlying assumption that you don't have access to temporary variables, and that the only available operation to you is $pop(s, d)$, which pops the top element of $s$ (the source) and places it on top of $d$ (the destination). E.g.: we have the stacks $A=<a\_1,a\_2,a\_3>$ and $B=<>$. After calling $pop(A,B)$ we have $A=<a\_1,a\_2>$ and $B=<a\_3>$.

First, I'll define a new function $popm(s,d,c)$. Its role is to iterate $pop(s,d)$ $c$ times. E.g.: $A=<a\_1,a\_2,a\_3>$ and $B=<>$. After calling $popm(A,B,3)$, $A=<>$ and $B=<a\_3,a\_2,a\_1>$.

Here is the proposed algorithm.

1. Start with $A=<a\_1,a\_2,\dots,a\_m>$, $B=<b\_1,b\_2,\dots,b\_n>$ and $C=<>$.
2. $pop(B,C)$. $A=<a\_1,a\_2,\dots,a\_m>$, $B=<b\_1,b\_2,\dots,b\_{n-1}>$ and $C=<b\_n>$. The goal is to free a spot in $B$.
3. $popm(A,C,m)$. $A=<>$, $B=<b\_1,\dots,b\_{n-1}>$ and $C=<b\_n,a\_m,\dots,a\_2,a\_1>$. You've exposed $a\_1$ on top of $C$.
4. $pop(C,B)$. $A=<>$, $B=<b\_1,\dots,b\_{n-1},a\_1>$ and $C=<b\_n,a\_m,\dots,a\_2>$. Have $B$ hold the value $a\_1$ on top.
5. $popm(C,A,m-1)$. $A=<a\_2,a\_3,\dots,a\_m>$, $B=<b\_1,\dots,b\_{n-1},a\_1>$ and $C=<b\_n>$. Restore $A$ (excluding $a\_1$) to its original order.
6. $pop(C,A)$. $A=<a\_2,a\_3,\dots,a\_m,b\_n>$, $B=<b\_1,\dots,b\_{n-1},a\_1>$ and $C=<>$. Have $A$ hold $b\_n$ for now.
7. $pop(B,C)$. $A=<a\_2,a\_3,\dots,a\_m,b\_n>$, $B=<b\_1,\dots,b\_{n-1}>$ and $C=<a\_1>$. Put $a\_1$ at the bottom of $C$, its desired place.
8. $pop(A,B)$. $A=<a\_2,a\_3,\dots,a\_m>$, $B=<b\_1,\dots,b\_{n-1},b\_n>$ and $C=<a\_1>$. Restore $b\_n$ to the top of $B$.
9. Interchange $A$ and $B$ and repeat from step 1, adjusting for the new state of $A$, $B$ and $C$. You can ignore steps 2, 6 and 8, since from now on there will be a free spot on top of $B$.
10. When $B$ has been emptied, $popm(A,B,m-n)$ then $popm(B,C,m-n)$ to have your remaining stack $A$ do a "double back flip" off $B$ onto $C$.

Let me say why this isn't related to the Tower of Hanoi. In the Hanoi problem, you cannot invert the tower. Moreover, you have the middle peg as a temporary holder, which we don't have. Finally, in this example, you can reverse the order of the elements.
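Here is a sketch of this move sequence in Python, modelling the stacks as lists whose only permitted operations are push and pop. It is meant as a way to check the steps rather than as a claim about the minimum number of moves, and it assumes $m > n \ge 1$ as in the question:

```python
def pop(src, dst):
    """Move the top element of src onto dst (lists with the top at the end)."""
    dst.append(src.pop())

def popm(src, dst, count):
    """Iterate pop(src, dst) `count` times."""
    for _ in range(count):
        pop(src, dst)

def interleave(A, B):
    """Interleave A and B onto a fresh stack C using only pop/push, assuming len(A) > len(B) >= 1."""
    C = []
    # Round 1 (steps 2-8): B starts full, so one slot must be freed first.
    pop(B, C)                 # step 2: park b_n on C
    m = len(A)
    popm(A, C, m)             # step 3: a_1 is now exposed on top of C
    pop(C, B)                 # step 4: park a_1 on top of B
    popm(C, A, m - 1)         # step 5: restore a_2..a_m to A, leaving b_n on C
    pop(C, A)                 # step 6: A temporarily holds b_n
    pop(B, C)                 # step 7: a_1 lands at the bottom of C
    pop(A, B)                 # step 8: b_n returns to the top of B
    # Step 9: alternate the roles of the two stacks until every b has been placed.
    src, other = B, A         # the element due next on C sits at the bottom of src
    while B:
        k = len(src)
        popm(src, C, k)       # dump src, exposing its bottom element on top of C
        pop(C, other)         # park that element on the other stack
        popm(C, src, k - 1)   # restore the rest of src
        pop(other, C)         # the parked element lands on C in its final place
        src, other = other, src
    # Step 10: "double back flip" the leftover a's via B onto C.
    r = len(A)
    popm(A, B, r)
    popm(B, C, r)
    return C

print(interleave(list("abcde"), list("123")))
# ['a', '1', 'b', '2', 'c', '3', 'd', 'e']
```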
If the covariates $X\_1,X\_2$ are distributed as follows, what effect does it have on the linear model $y = \beta\_0 + \beta\_1 X\_1 + \beta\_2 X\_2$? $X\_1,X\_2$ do not seem to exhibit a strong correlation, so collinearity can be ruled out. [![enter image description here](https://i.stack.imgur.com/1h8qp.png)](https://i.stack.imgur.com/1h8qp.png)
Not to be overly pedantic, but those 'outliers' visible in the scatterplot of $X\_1, X\_2$ are not what we normally refer to as outliers in the context of regression. For regression, outliers are observations with large (absolute value) residuals. When you have combinations of explanatory variables ($X\_1, X\_2$) that fall outside the pattern of most observations, these are INFLUENTIAL points. They have an abnormally high influence on estimation of the slopes in your model (and on the predictions given by your model when you extrapolate). These are also called high-leverage observations. For instance, consider the data:

```
x1 = c(15, 15, 22, 17, 10, 15, 23, 9, 18, 19, 60, 15)
x2 = c(27, 21, 35, 16, 17, 20, 19, 30, 17, 27, 30, 80)
y = c(11.9, 15.7, 18.4, 9.6, 7.4, 11, 16.9, 12.8, 11, 12.7, 24.5, 22.5)
```

Here, there are extreme (explanatory) values at (60, 30) and (15, 80), so these are high influence observations. A slight change in the y value at these locations will have a big influence on the fitted value.

```
> summary(lm(y~x1+x2))

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  4.06440    1.60958   2.525 0.032494 *  
x1           0.25994    0.05067   5.130 0.000619 ***
x2           0.18809    0.03868   4.863 0.000892 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.236 on 9 degrees of freedom
Multiple R-squared:  0.8491,    Adjusted R-squared:  0.8156
F-statistic: 25.32 on 2 and 9 DF,  p-value: 0.0002013

> y[11]
[1] 24.5
> y[11] = 10
> summary(lm(y~x1+x2))

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)  8.91261    2.23551   3.987  0.00317 **
x1          -0.03901    0.07037  -0.554  0.59283   
x2           0.18358    0.05372   3.417  0.00766 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.105 on 9 degrees of freedom
Multiple R-squared:  0.5701,    Adjusted R-squared:  0.4746
F-statistic: 5.968 on 2 and 9 DF,  p-value: 0.02239

> y[12]
[1] 22.5
> y[12] = 10
> summary(lm(y~x1+x2))

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  12.665278   2.577255   4.914 0.000831 ***
x1           -0.004558   0.081130  -0.056 0.956426    
x2           -0.010320   0.061933  -0.167 0.871340    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.58 on 9 degrees of freedom
Multiple R-squared:  0.003453,  Adjusted R-squared:  -0.218
F-statistic: 0.01559 on 2 and 9 DF,  p-value: 0.9846
```

If I change the y value at another location (low leverage), I'll get little change in the model:

```
> summary(lm(y~x1+x2))

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  12.665278   2.577255   4.914 0.000831 ***
x1           -0.004558   0.081130  -0.056 0.956426    
x2           -0.010320   0.061933  -0.167 0.871340    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.58 on 9 degrees of freedom
Multiple R-squared:  0.003453,  Adjusted R-squared:  -0.218
F-statistic: 0.01559 on 2 and 9 DF,  p-value: 0.9846

> y[5]
[1] 7.4
> y[5] = -3
> summary(lm(y~x1+x2))

Coefficients:
            Estimate Std. Error t value Pr(>|t|)  
(Intercept)  9.79553    4.22547   2.318   0.0456 *
x1           0.04734    0.13302   0.356   0.7301  
x2           0.02415    0.10154   0.238   0.8173  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.87 on 9 degrees of freedom
Multiple R-squared:  0.0202,    Adjusted R-squared:  -0.1975
F-statistic: 0.09277 on 2 and 9 DF,  p-value: 0.9123
```
I am not very well-versed in the world of theorem proving, much less automated theorem proving, so please correct me if anything I say or assume in my question is wrong. Basically, my question is: are automated theorem provers themselves ever formally proven to work with another theorem prover, or is there just an underlying assumption that any theorem prover was just implemented really really well, extensively tested & reviewed, etc. and so it "must work"? If so, does there always remain some underlying doubt in any proof proven by a formally verified automated theorem prover, as the formal verification of that theorem prover still lies on assuming that the non-formally verified theorem prover was correct in its verification of the former theorem prover, even if it might technically be wrong - as it was not formally verified itself? (That is a mouthful of a question, apologies.) I am thinking of this question in much the same vein as bootstrapping compilers.
While this may trend close to self-advertisement, this is essentially the topic of my recent paper [Metamath Zero: The Cartesian Theorem Prover](https://arxiv.org/abs/1910.10703) ([video](https://www.youtube.com/watch?v=CqZzbaEuNBs)), and the analogy with bootstrapping compilers is spot on. The introduction of the paper lays out what is needed to make this happen, and it's only a problem of engineering. As Andrej says, there are several components that go into a "full bootstrap", and while many of the parts have been done separately, the theorem provers that are used by the community are only correct in the sense that linux is correct: we've used it for a while and there are no bugs we have found so far, except the bugs that we did find and fix. The issue, as ever, is that because the cost of producing verified software is high, verified programs tend to be simplistic or simplified from the "real thing", and so there remains a gap between what people use and what people prove theorems about. A "small trusted kernel" setup is necessary but not sufficient, because unless you have a full formal specification (with proofs) of the programming language, the untrusted part can still interfere with the trusted part, and even if the barrier is air-tight, you have communication problems when the untrusted part is in control - for example, the kernel may flag an error that the untrusted part ignores, or the kernel may never be shown some apparent assertion at the source level. The good news is that projects of this scale have become feasible in the past few years, so I am very hopeful that we can get production quality theorem provers with verified backends soon-ish. * The [CakeML](https://cakeml.org/) project should get a mention here: they have a ML compiler verified in HOL4, that is capable of running a HOL verifier written in ML. (Unfortunately HOL4 is more than just its logical system, so there is some work to be done to make this realistic.) * The [Milawa](http://www.cs.utexas.edu/users/moore/acl2/manuals/current/manual/index-seo.php/ACL2____MILAWA) kernel is written in a subset of ACL2, with a bunch of bootstrapping stages before closing the loop (being able to prove theorems about this same ACL2 subset). This is the only actual theorem prover bootstrap I know, but it doesn't go down to machine code, it stays at the level of Lisp, and from what I understand it's not actually performant enough for production work. It has [since](http://www.cse.chalmers.se/~myreen/2015-jar-milawa.pdf) been verified down to machine code, but that part of the proof was done in HOL4 so it's not actually bootstrapping at the machine code level. * Coq recently made some strides towards this with [Coq Coq Correct!](https://www.irif.fr/~sozeau/research/publications/drafts/Coq_Coq_Correct.pdf), but it doesn't cover the full Coq kernel (including recent additions such as SProp, and the module system). (Aside, if there are any Coq experts reading this: if you know any place where the *complete* formal system implemented by Coq is written down, I would really like to see it. Formalized is nice but informal might even be better, as long as it is complete and precise.) You also can't connect it to [CompCert](http://compcert.inria.fr/doc/index.html) as Andrej suggested, because the typechecking algorithm is only described abstractly, and certainly not in C, which is what CompCert expects as input. 
* [Metamath Zero](https://github.com/digama0/mm0) is still under construction but the goal is to prove correctness down to the machine code level, inside its own logic. (I also can't help but mention it's currently about 8 orders of magnitude faster than "the other guys" but we'll see if that holds up until the end of the project.)
Memory is used for many things, as I understand. It serves as a disk-cache, and contains the programs' instructions, and their stack & heap. Here's a thought experiment. If one doesn't care about the speed or time it takes for a computer to do the crunching, what is the bare minimum amount of memory one can have, assuming one has a very large disk? Is it possible to do away with memory, and just have a disk? Disk-caching is obviously not required. If we set up swap space on the disk, program stack and heap also don't require memory. Is there anything that does require memory to be present?
The question is not purely academic. It is a matter of historical record that one of the earliest commercially-produced computers [sorry, I don't recall which offhand] did not have any RAM - all programs were executed by fetching instructions directly off of a magnetic drum [a rotating cylinder with outer surface magnetizable (disks came later)]. It was comparatively slow, but much cheaper than a lot of the competition. [this was way back in the 'tube' days] Interestingly, it came with a now-obsolete tool known as an 'optimizing assembler' - i.e the assembler not only generated machine instructions, it wrote them out onto the drum non-consecutively so as to minimize, for each instruction, the amount of time waiting for the drum to rotate to the next.
I don't know where else to ask this question; I hope this is a good place. I'm just curious to know if it's possible to make a lambda calculus generator: essentially, a loop that will, given infinite time, produce every possible lambda calculus function (e.g., in the form of a string). Since lambda calculus is so simple, having only a few elements to its notation, I thought it might be possible (though not very useful) to produce all possible combinations of those notation elements, starting with the simplest combinations, and thereby produce every possible lambda calculus function. Of course, I know almost nothing about lambda calculus, so I have no idea if this is really possible. Is it? If so, is it pretty straightforward like I've envisioned it, or is it technically possible but so difficult that it is effectively impossible? PS. I'm not talking about beta-reduced functions, I'm just talking about every valid notation of every lambda calculus function.
As has been mentioned, this is just enumerating terms from a context free language, so definitely doable. But there's more interesting math behind it, going into the field of analytical combinatorics. The paper [Counting and generating terms in the binary lambda calculus](https://arxiv.org/abs/1511.05334) contains a treatment of the enumeration problem, and a lot more. To make things simpler, they use something called the [binary lambda calculus](https://tromp.github.io/cl/LC.pdf), which is just an encoding of lambda terms using [De Bruijn indices](https://en.wikipedia.org/wiki/De_Bruijn_index), so you don't have to name variables. That paper also contains concrete Haskell code implementing their generation algorithm. It's definitely effectively possible. I happen to have written an [implementation](https://github.com/phipsgabler/BinaryLambdaCalculus.jl) of their approach in Julia.
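To make this concrete, here is a small brute-force sketch (not the paper's algorithm) that yields every closed lambda term in De Bruijn notation, smallest terms first:

```python
import itertools

def terms(n, depth=0):
    """Yield every lambda term with exactly n syntax nodes, written with
    De Bruijn indices, whose free indices are all < depth (depth=0: closed terms)."""
    if n == 1:
        for i in range(depth):                  # a bare variable
            yield str(i)
    elif n >= 2:
        for body in terms(n - 1, depth + 1):    # an abstraction over a smaller body
            yield f"(λ {body})"
        for k in range(1, n - 1):               # an application, splitting the remaining nodes
            for left in terms(k, depth):
                for right in terms(n - 1 - k, depth):
                    yield f"({left} {right})"

def all_closed_terms():
    """Enumerate all closed lambda terms, smallest first; runs forever."""
    n = 1
    while True:
        yield from terms(n)
        n += 1

print(list(itertools.islice(all_closed_terms(), 6)))
# ['(λ 0)', '(λ (λ 0))', '(λ (λ 1))', '(λ (λ (λ 0)))', '(λ (λ (λ 1)))', '(λ (λ (λ 2)))']
```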
I assume that computers make many mistakes (errors, bugs, glitches, etc.), which can be observed from the number of questions asked every day in different communities (like Stack Overflow) by people trying to fix such issues. If computers really do make many errors (as I assumed above), then critical tasks (like signing in or receiving a receipt) must be designed to be almost error-free, unlike most tasks in most software and video games.
Computer *hardware* almost always does exactly what software tells it to do. It's useful to distinguish software bugs from unreliable hardware. --- Cosmic rays can randomly flip a bit in memory, though; that's why servers often use [ECC (Error Correction Code) memory](https://en.wikipedia.org/wiki/ECC_memory) to correct single-bit errors and detect most multi-bit errors. (And internally, CPUs usually use ECC for their caches.) Computers that need even more reliability and *availability*1 than that, like flight computers in aircraft or space craft, often have 3 separate computers processing the same inputs. ([Triple Modular Redundancy](https://en.wikipedia.org/wiki/Triple_modular_redundancy)) If all 3 produce the same output, great, it's almost certainly correct. (Especially if each of the 3 computers is running software written by different teams.) If only 2 out of the 3 outputs match, the odd one out is assumed wrong, so it gets reset and the system uses the outputs of the remaining two until the faulty one is rebooted and agreeing with them. If all 3 systems give different outputs, you have a big problem. If 2 systems give the same *wrong* answer, that's even worse. Safety-critical systems like those are programmed in software much more carefully than video games or even mainstream OSes. Practices like avoiding dynamic memory allocation (`malloc`) remove whole classes of bugs and possible corner cases. Footnote 1: detecting an error and rebooting is not sufficient when the system is part of the flight controls of a jet plane that could crash if the controls stopped responding for half a second. --- Related: * [How do redundancies work in aircraft systems?](https://aviation.stackexchange.com/questions/21744/how-do-redundancies-work-in-aircraft-systems) * [How dissimilar are redundant flight control computers?](https://aviation.stackexchange.com/questions/44349/how-dissimilar-are-redundant-flight-control-computers) - not only do they have redundant systems, they're often built from different hardware running different software. So any power glitch or other weird thing is likely to have different effects on them, hopefully avoiding the case of multiple wrong answers out-voting a correct answer.
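As a toy sketch of the 2-out-of-3 voting described above (purely illustrative; it is not modelled on any real avionics code):

```python
from collections import Counter

def vote(outputs):
    """Majority vote over redundant channel outputs, e.g. from 3 flight computers.
    Returns (agreed_value, suspect_channel_indices); raises if no two channels agree."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("all channels disagree - no safe output available")
    suspects = [i for i, out in enumerate(outputs) if out != value]
    return value, suspects   # note: 2 channels agreeing on a *wrong* value still wins the vote

print(vote([42, 42, 42]))  # (42, [])  -- all channels agree
print(vote([42, 17, 42]))  # (42, [1]) -- channel 1 is outvoted and would be reset
```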
I could not fully explain this in the title. In order to use the chi-square test on my dataset, I find the smallest value and add its magnitude to each cell (for example, the range of the data here is [-8, 11], so I added +8 to each cell and the range became [0, 19]). The dataValues variable is a DataFrame that holds all my data, with ~2000 features and ~1000 rows; the dataTargetEncoded variable is an array that contains the results as 0 and 1.

```
for i in range(len(dataValues.index)):
    for j in range(len(dataValues.columns)):
        dataValues.iat[i, j] += 8

# fit chi2 and get results
bestfeatures = SelectKBest(score_func=chi2, k="all")
fit = bestfeatures.fit(dataValues, dataTargetEncoded)
feat_importances = pd.Series(fit.scores_, index=dataValues.columns)

# print top 10 features
print(feat_importances.nlargest(10).index.values)

# back to normal
for i in range(len(dataValues.index)):
    for j in range(len(dataValues.columns)):
        dataValues.iat[i, j] -= 8
```

But this causes performance problems. Another solution I'm thinking of is to normalize the data. I wrote a function that looks like this:

```
def normalization(df):
    from sklearn import preprocessing
    x = df.values  # returns a numpy array
    min_max_scaler = preprocessing.MinMaxScaler()
    x_scaled = min_max_scaler.fit_transform(x)
    df = pd.DataFrame(x_scaled, columns=df.columns)
    return df
```

This sped my program up a lot, but this time my accuracy decreased. The feature selection done with the first method gives 0.85 accuracy, while this one gives 0.70. I want to get rid of this primitive method, but I also want the accuracy to stay the same. How do I proceed? Thank you in advance.
First and foremost, it does not matter to the chi-square test whether your data is positive, negative, string or any other type, as long as it is discrete (or nicely binned). This is due to the fact that the chi-square test calculations are based on a [contingency table](https://en.wikipedia.org/wiki/Contingency_table#Example) and *not* your raw data. The documentation of [sklearn.feature\_selection.chi2](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.chi2.html) and the related [usage example](https://scikit-learn.org/stable/modules/feature_selection.html#univariate-feature-selection) are not clear on that at all. Not only that, but the two are not in concord regarding the type of input data (documentation says booleans or frequencies, whereas the example uses the raw iris dataset, which has quantities in centimeters), so this causes even more confusion. The reason why sklearn's chi-squared expects only non-negative features is most likely the [implementation](https://github.com/scikit-learn/scikit-learn/blob/1495f6924/sklearn/feature_selection/univariate_selection.py#L224): the authors are relying on a row-by-row sum, which means that allowing negative values will produce the wrong result. Some hard-to-understand optimization is happening internally as well, so for the purposes of simple feature selection I would personally go with [scipy's implementation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html). Since your data is not discrete, you will have to bin every feature into *some* number of nominal categories in order to perform the chi-squared test. Be aware that information loss takes place during this step regardless of your technique; your aim is to minimize it by finding an approach that best suits your data. You must also understand that the results cannot be taken as the absolute truth since the test is not designed for data of continuous nature. Another massive problem that will *definitely* mess with your feature selection process in general is that the number of features is larger than the number of observations. I would definitely recommend taking a look at [sklearn's decomposition methods](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition) such as PCA to reduce the number of features, and if your features come in groups, you can try Multiple Factor Analysis (Python implementation available via [prince](https://github.com/MaxHalford/prince#multiple-factor-analysis-mfa)). Now that that's out of the way, let's go through an example of simple feature selection using the iris dataset. We will add a useless normally distributed variable to the constructed dataframe for comparison. ``` import numpy as np import scipy as sp import pandas as pd from sklearn import datasets, preprocessing as prep iris = datasets.load_iris() X, y = iris['data'], iris['target'] df = pd.DataFrame(X, columns= iris['feature_names']) df['useless_feature'] = np.random.normal(0, 5, len(df)) ``` Now we have to bin the data. 
For value-based and quantile-based binning, you can use [pd.cut](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html) and [pd.qcut](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.qcut.html), respectively (this [great answer](https://stackoverflow.com/questions/30211923/what-is-the-difference-between-pandas-qcut-and-pandas-cut) explains the difference between the two), but sklearn's [KBinsDiscretizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.KBinsDiscretizer.html) provides even more options. Here I'm using it for one-dimensional [k-means clustering](https://en.wikipedia.org/wiki/K-means_clustering#Algorithms) to create the bins (separate calculation for each feature): ``` def bin_by_kmeans(pd_series, n_bins): binner = prep.KBinsDiscretizer(n_bins= n_bins, encode= 'ordinal', strategy= 'kmeans') binner.fit(pd_series.values.reshape(-1, 1)) bin_edges = [ '({:.2f} .. {:.2f})'.format(left_edge, right_edge) for left_edge, right_edge in zip( binner.bin_edges_[0][:-1], binner.bin_edges_[0][1:] ) ] return list(map(lambda b: bin_edges[int(b)], binner.transform(pd_series.values.reshape(-1, 1)))) df_binned = df.copy() for f in df.columns: df_binned[f] = bin_by_kmeans(df_binned[f], 5) ``` A good way to investigate how well your individual features are binned is by counting the number of data points in each bin (`df_binned['feature_name_here'].value_counts()`) and by printing out a [pd.crosstab](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html) (contingency table) of the given feature and label columns. > > An often quoted guideline for the validity of this calculation is that the test should be used only if the observed and expected frequencies in each cell are at least 5. > > > So the **more zeroes** you see in the contingency table, the **less accurate** the chi-squared results will be. This will require a bit of manual tuning. Next comes the function that performs the chi-squared test for independence on two variables ([this tutorial](https://machinelearningmastery.com/chi-squared-test-for-machine-learning/) has very useful explanations, highly recommended read, code is pulled from there): ``` def get_chi_squared_results(series_A, series_B): contingency_table = pd.crosstab(series_A, series_B) chi2_stat, p_value, dof, expected_table = sp.stats.chi2_contingency(contingency_table) threshold = sp.stats.chi2.ppf(0.95, dof) return chi2_stat, threshold, p_value ``` The values to focus on are the statistic itself, the threshold, and its p-value. The threshold is obtained from a [quantile function](https://en.wikipedia.org/wiki/Quantile_function). You can use these three to make the final assessment of individual feature-label tests: ``` print('{:<20} {:>12} {:>12}\t{:<10} {:<3}'.format('Feature', 'Chi2', 'Threshold', 'P-value', 'Is dependent?')) for f in df.columns: chi2_stat, threshold, p_value = get_chi_squared_results(df[f], y) is_over_threshold = chi2_stat >= threshold is_result_significant = p_value <= 0.05 print('{:<20} {:>12.2f} {:>12.2f}\t{:<10.2f} {}'.format( f, chi2_stat, threshold, p_value, (is_over_threshold and is_result_significant) )) ``` In my case, the output looks like this: ``` Feature Chi2 Threshold P-value Is dependent? 
sepal length (cm) 156.27 88.25 0.00 True sepal width (cm) 89.55 60.48 0.00 True petal length (cm) 271.80 106.39 0.00 True petal width (cm) 271.75 58.12 0.00 True useless_feature 300.00 339.26 0.46 False ``` In order to claim dependence between the two variables, the resulting statistic should be larger than the threshold value **and** the [p-value](https://en.wikipedia.org/wiki/P-value#Definition_and_interpretation) should be lower than 0.05. You can choose smaller p-values for higher confidence (you'd have to calculate the threshold from `sp.stats.chi2.ppf` accordingly), but 0.05 is the "largest" value needed for your results to be considered significant. As far as ordering of useful features goes, consider looking at the relative magnitude of the difference between the calculated statistic and the threshold for each feature.
To get the regular expression I made a finite automaton as follows (I'm not sure if you can directly write the regular expression without it): [![enter image description here](https://i.stack.imgur.com/3ztNx.png)](https://i.stack.imgur.com/3ztNx.png) In my opinion, the regular expression for the above should be $(1+01^\*0)^\*$, but elsewhere I have seen that it can be $(1^\*01^\*01^\*)^\*$. Why is it different?
There are (infinitely) many regular expressions for every regular language. Your approach gives you one (good job on a structured approach!), others give you others. Consider, for instance, two distinct yet equivalent NFA translated using Thompson's construction.
When talking about variables that are I(1) (the first difference is stationary), Lutkepohl's book says: "...in general, a VAR process with cointegrated variables does not admit a pure VAR representation in first differences." And that would justify the use of VECM models, instead of simply taking the first difference and running a VAR when your time series is I(1). But I do not get how this is possible. Suppose a vector $x\_t$ is $I(1)$ and there is some cointegration between the variables in this vector. Then, since $(1-L)x\_t$ is stationary, by the Wold theorem we get that it can be represented as $$(1-L)x\_t=A(L) \epsilon\_t$$ where $\epsilon\_t$ is white noise and $A(L)$ is invertible. Therefore, we can write $$A(L)^{-1}(1-L)x\_t = \epsilon\_t$$ $$A(L)^{-1}\Delta x\_t=\epsilon\_t$$ which is a pure VAR representation in first differences. What am I doing wrong?
I have the 2nd edition of Lutkepohl's book (1993), and the above issue is treated in ch. 11.1.2, page 354. I believe this is partly a semantic (but not unimportant) subtle issue. What you show in the question is that "the first-differenced process has a pure VAR representation" (since it is stationary), which is what the Wold Decomposition Theorem is all about. I don't see any mistake here. What Lutkepohl appears to mean is that "simply first-differencing the non-stationary *system as originally specified* does not permit us to obtain a pure VAR representation". The difference? As can be seen in the last equation of page 354, for Lutkepohl here "first differencing the non-stationary system as originally specified" means ***first-differencing also the original white noise term***, which produces "the non-invertible MA part, $\Delta u\_t =u\_t-u\_{t-1}$" as Lutkepohl writes immediately before what you quoted. Why may the distinction matter? It appears because by using the (valid) VAR representation of the first-differenced process stemming from the Wold Theorem, *we can no longer estimate the co-integrating relation* -it has "disappeared from view". So perhaps the more accurate statement would be "...In general, the process does not admit a *cointegration preserving* pure VAR representation in first differences". The example Lutkepohl works out in this page of his book is clear. To manipulate the system he does *not* apply the differencing operator, but he adds and subtracts variables that leave the whole unaffected, in order to obtain a linear combination of the process in the left-hand side, and a stationary expression on the right-hand side (where *now* the fact the first-differences are stationary is used). Faced with an $I(1)$ co-integrated system, one can make it stationary by first-differencing and use Wold decomposition while losing the co-integrating relation, or one can manipulate the system differently in order to transform it into a stationary system by virtue of first-differences being stationary, *and* bring into the surface the co-integrating relationship.
One percent of the population cannot drive even if they try very, very hard, but everyone applies for a driving license. The driving test fails those who cannot drive with a probability of 97%, but because the test has to be strict, it also fails those who drive well with a probability of 3%. How likely is it that a person who failed the driving test is actually an able driver?
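(For anyone checking their own attempt: below is the standard Bayes' theorem calculation, under the reading that 99% of the population are able drivers; it is not part of the original post.)

$$
P(\text{able}\mid\text{fail})
= \frac{P(\text{fail}\mid\text{able})\,P(\text{able})}
       {P(\text{fail}\mid\text{able})\,P(\text{able}) + P(\text{fail}\mid\text{unable})\,P(\text{unable})}
= \frac{0.03 \times 0.99}{0.03 \times 0.99 + 0.97 \times 0.01}
= \frac{0.0297}{0.0394}
\approx 0.754
$$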
Regressing price $y$ on a constant and the number of cylinders $x$ would make sense if the *price was known to be affine in the number of cylinders*: the price increase from 2 to 4 cylinders is the same as the price increase from 4 to 6 cylinders and is the same as the price increase from 6 to 8. Then you could run the regression: $$ y\_i = a + b x\_i + \epsilon\_i $$ On the other hand, it may not be affine in reality. If price isn't affine in the number of cylinders, the above model would be misspecified. What could one do? Let $z\_2$ be a dummy variable for two cylinders, let $z\_4$ be a dummy variable for 4 cylinders, etc... Since there are only four possibilities (2, 4, 6, or 8 cylinders), you likely have enough data to run the more complete regression: $$ y\_i = a + b\_4 z\_{4,i} + b\_6 z\_{6,i} + b\_8 z\_{8,i} + \epsilon\_i$$ Here the coefficients $b\_4$, $b\_6$, etc. would be the price increase relative to a 2-cylinder car (the constant $a$ would pick up the mean price of a two-cylinder car). Or, if you run the regression without a constant, you could run: $$ y\_i = b\_2 z\_{2,i} + b\_4 z\_{4,i} + b\_6 z\_{6,i} + b\_8 z\_{8,i} + \epsilon\_i$$ Here the coefficients ($b\_2$, $b\_4$, $b\_6$, $b\_8$) would be the mean price of each cylinder type. Observe how the average price is no longer assumed to be affine in the number of cylinders! You could have a small difference between $b\_4$ and $b\_6$ but a large difference between $b\_6$ and $b\_8$.
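As a rough sketch of the two specifications (the data below is simulated, and nothing here is tied to any particular real dataset; statsmodels' `C()` builds the dummy columns automatically):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated (hypothetical) data where price is NOT affine in the number of cylinders.
rng = np.random.default_rng(1)
cyl = rng.choice([2, 4, 6, 8], size=300)
price = 5000 + 1500 * (cyl == 4) + 2000 * (cyl == 6) + 9000 * (cyl == 8) \
        + rng.normal(0, 500, size=300)
df = pd.DataFrame({"cylinders": cyl, "price": price})

# Affine model: a single slope b for the cylinder count.
affine = smf.ols("price ~ cylinders", data=df).fit()

# Dummy-variable model: a separate mean shift for 4, 6 and 8 cylinders vs. the 2-cylinder baseline.
dummy = smf.ols("price ~ C(cylinders)", data=df).fit()

print(affine.params)   # Intercept, cylinders
print(dummy.params)    # Intercept (2-cyl mean), C(cylinders)[T.4], [T.6], [T.8]
```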
I am a 4th year psychology student. I need some help in understanding the coefficients in ORDINAL logistic regression. According to Williams (2009) "Using Heterogeneous Choice Models To Compare Logit and Probit Coefficients Across Groups", the predictor variables and residuals are already standardized to the logit distribution (variance = π\*π/3), and, therefore, so are the reported coefficients in SPSS. Therefore, in order to compare the relative predictive strength of my variables in the model, I should just be able to directly compare the coefficients (or the odds ratios). However, how do I account for the differences in CI/standard error in my comparisons? For example:

variable 1: B = .021, std error = .0068, Exp(B) = 1.022, 95% CI = 1.008 to 1.035

variable 2: B = .051, std error = .0174, Exp(B) = 1.052, 95% CI = 1.017 to 1.089

From a comparison of the Bs, variable 2 is the stronger predictor, but the std error and CI are much larger. So, what conclusion can I make?
In general, if your predictors are on different metrics, then the subjective assessment of variable importance cannot be easily made by simply comparing the raw sizes of the odds ratios. If all your predictors are continuous, then I think converting the variables to z-scores would be useful for getting a sense of their relative importance. You mention that you have a skewed numeric predictor. I don't think that changes anything too much for whether z-scores are appropriate. Ultimately, you have a separate issue of whether you want to apply a shape transformation (z-scores just change the mean and variance). If your variable is highly skewed, then consider a transformation, and then z-score the transformed variable. Sometimes, you have binary predictors. In that case, the 0-1 scoring is quite intuitive, especially if you have a few such variables.
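A minimal sketch of that preprocessing, with made-up variable names and a log transform standing in for whatever shape transformation suits your skewed predictor:

```python
import numpy as np
import pandas as pd

# Hypothetical predictors: one roughly symmetric, one right-skewed, one binary.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "age": rng.normal(40, 10, 500),
    "income": rng.lognormal(10, 1, 500),   # skewed
    "smoker": rng.integers(0, 2, 500),     # binary: leave on its 0-1 scale
})

# Shape transformation first (only for the skewed variable), then z-score.
df["income"] = np.log(df["income"])
for col in ["age", "income"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

print(df.describe().round(2))  # transformed columns now have mean ~0 and sd ~1
```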
For a while now, I have been very interested in programming language theory and process calculi and have started to study them. To be honest, it's something that I wouldn't mind going into for a career. I find the theory to be incredibly fascinating. One constant question I keep running into is whether either PL theory or process calculi have any importance at all in modern programming language development. I see so many variants of the pi-calculus out there and there is a lot of active research, but will they ever be needed or have important applications? The reason I ask is that I love developing programming languages, and the true end goal would be to use the theory to actually build a PL. For the stuff I have written, there really has not been any correlation to theory at all.
You say that "the *true end goal* would be to use the theory to actually build a PL." So, you presumably admit that there are other goals? From my point of view, the No. 1 purpose of theory is to provide understanding, which can be in reasoning about existing programming languages as well as the programs written in them. In my spare time, I maintain a large piece of software, an email client, written ages ago in Lisp. All the PL theory I know such as Hoare logic, Separation Logic, data abstraction, relational parametricity and contextual equivalence etc. does come in handy in daily work. For instance, if I am extending the software with a new feature, I know that it still has to preserve the original functionality, which means that it should behave the same way under all the old contexts even though it is going to do something new in new contexts. If I didn't know anything about contextual equivalence, I probably wouldn't even be able to frame the issue in that way. Coming to your question about pi-calculus, I think pi-calculus is still a bit too new to be finding applications in language design. The [wikipedia page on pi-calculus](https://en.wikipedia.org/wiki/Pi-calculus) does mention BPML and occam-pi as language designs using pi-calculus. But you might also look at the pages of its predecessor CCS, and other process calculi such as CSP, join calculus and others, which have been used in many programming language designs. You might also look at the "Objects and pi-calculus" section of the [Sangiorgi and Walker book](http://books.google.co.uk/books?id=QkBL_7VtiPgC) to see how pi-calculus relates to existing programming languages.
Has there been any work on recovering the slope of a line segment from its digitization? One can't do this with perfect accuracy, of course; what one wants is a method of deriving from a digitized line an interval of possible slopes. (The notion of a digitized line that I am using is Rosenfeld's: the set of pairs $(i,nint(ai+b))$ where $i$ ranges over the integers (or a block of consecutive integers) and $nint(x)$ denotes the integer nearest to $x$ (if $x=k+1/2$, we take $nint(x)=k$).) I've done some work on this on my own (see <http://jamespropp.org/SeeSlope.nb>) but I have no formal background in computational geometry so I suspect I may be reinventing the wheel, since the question seems like such a basic one. In fact, I know that the linear regression method of estimating the slope is in the literature, but I haven't been able to find my $O(1/n^{1.5})$ result anywhere. (This result says that if one chooses $a$ and $b$ uniformly at random in $[0,1]$, then the difference between the slope $a$ of the line $y=ax+b$ and the slope $\overline{a}$ of the regression line approximating the $n$ points $(i,nint(ai+b))$ ($1 \leq i \leq n$) has standard deviation $O(1/n^{1.5})$.) Any leads or pointers to relevant literature will be greatly appreciated. Jim Propp (JamesPropp@ignorethis.gmail.com)
See [Random Generation of Finite Sturmian Words](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49.9480&rep=rep1&type=pdf) by Berstel and Pocchiola for a proof that the feasible region of your LP has only three or four sides, as well as a simple algorithm for finding the polygon given slope and intercept. (They are dealing with recognizing Sturmian Words, but the problems are strongly related.) They also give an explicit enumeration of the polygons, so it may be possible to enumerate the areas of the polygons and the ranges of the slopes, so you may be able to get the expected value of the range of slopes (as well as higher moments) as an explicit sum.
$\newcommand{\P}{\mathbb{P}}$$\newcommand{\E}{\mathbb{E}}$$X \sim N(0,1)$, $W$ has a Rademacher distribution (<https://en.wikipedia.org/wiki/Rademacher_distribution>), and $Y = WX$. In order to prove dependence of the random variables I need to show that $\P(X,Y) = \P(X)\P(Y)$ is violated. I tried to find a counterexample that would show that the variables are dependent. First of all, I thought to show that $\E(XY) = \E(X)\E(Y)$ is not true. But in this case it is true $(\E(XY) = \E(X)\E(Y) = 0)$. How can I prove that the variables are **not** independent?
What values can $Y$ take if $X=0$? What values can $Y$ take if $X=1$? Are the conditional pdfs for $Y$ the same at the two values of $X$?
I'm reading up on type classes, and started looking at the paper [Type Classes in Haskell](http://dl.acm.org/citation.cfm?id=227700). In Section 2.2 - Superclasses, the authors use the following example: ``` class (Eq a) => Ord a where (<) :: a -> a -> Bool (<=) :: a -> a -> Bool ``` Then, they proceed to state that "*This declares that type a belongs to class Ord if there are operations (<) and (<=) of the appropriate type and if a belongs to class Eq. Thus, if (<) is defined on some type, then (==) must be defined on that type as well.*" The second sentence does not make any sense to me; why would a type that defines (<) have to define (==) if it is not declared to be an instance of either Ord or Eq?
I do not know of good tutorial material, but there are papers that are sufficiently elementary for a grad student (like me). The first might be what you are looking for (emphasis is mine). > > [*Simple relational correctness proofs for static analyses and program transformations*](http://dl.acm.org/citation.cfm?id=964001.964003), Nick Benton. 2004. > > > We show how some classical static analyses for imperative programs, and the optimizing transformations which they enable, may be expressed and *proved correct using elementary logical and denotational techniques*. The key ingredients are an interpretation of program properties as relations, rather than predicates, and a realization that although many program analyses are traditionally formulated in very intensional terms, the associated transformations are actually enabled by more liberal extensional properties. > > > These papers may also interest you. They helped me greatly! 1. [Proving Correctness of Compiler Optimizations by Temporal Logic](http://www.dcs.warwick.ac.uk/people/academic/David.Lacey/papers/proving.pdf), David Lacey, Neil D. Jones, Eric Van Wyk, Carl Christian Frederiksen. I would have thought there was more material using bisimulation in the context of compiler optimizations. If your aim is really denotational techniques, you can probably encode these proofs using characterisations of bisimulation. 2. [Generating Compiler Optimizations from Proofs](http://cseweb.ucsd.edu/~lerner/papers/popl10.html), Ross Tate, Michael Stepp, and Sorin Lerner. Includes a category theoretic formalisation of their proof method. 3. [Proving Optimizations Correct using Parameterized Program Equivalence](http://dl.acm.org/citation.cfm?id=1542513), Sudipta Kundu, Zachary Tatlock, and Sorin Lerner. Go there if you like logical relations. 4. [A Formally Verified Compiler Back-end](http://www.springerlink.com/content/n513653k294m871k/) Xavier Leroy.
I would like to ask a few questions about Assembly language. My understanding is that it's very close to machine language, making it faster and more efficient. Since different computer architectures exist, does that mean I have to write different code in Assembly for different architectures? If so, why isn't Assembly a write once, run everywhere type of language? Wouldn't it be easier to simply make it universal, so that you write it only once and can run it on virtually any machine with different configurations? (I think that it would be impossible, but I would like to have some concrete, in-depth answers.) Some people might say C is the language I'm looking for. I haven't used C before, but I think it's still a high-level language, although probably faster than Java, for example. I might be wrong here.
The DEFINITION of assembly language is that it is a language that can be translated directly to machine code. Each operation code in assembly language translates to exactly one operation on the target computer. (Well, it's a little more complicated than that: some assemblers automatically determine an "addressing mode" based on arguments to an op-code. But still, the principle is that one line of assembly translates to one machine-language instruction.) You could, no doubt, invent a language that would look like assembly language but would be translated to different machine codes on different computers. But by definition, that wouldn't be assembly language. It would be a higher-level language that resembles assembly language. Your question is a little like asking, "Is it possible to make a boat that doesn't float or have any other way to travel across water, but has wheels and a motor and can travel on land?" The answer would be that by definition, such a vehicle would not be a boat. It sounds more like a car.
SAT solvers give a powerful way to check the validity of a boolean formula with one quantifier. For instance, to check the validity of $\exists x . \varphi(x)$, we can use a SAT solver to determine whether $\varphi(x)$ is satisfiable. To check the validity of $\forall x . \varphi(x)$, we can use a SAT solver to determine whether $\neg \varphi(x)$ is satisfiable. (Here $x=(x\_1,\dots,x\_n)$ is a $n$-vector of boolean variables, and $\varphi$ is a boolean formula.) QBF solvers are designed to check the validity of a boolean formula with an arbitrary number of quantifiers. What if we have a formula with two quantifiers? Are they any efficient algorithms for checking validity: ones that are better than just using generic algorithms for QBF? To be more specific I have a formula of the form $\forall x . \exists y . \psi(x,y)$ (or $\exists x . \forall y . \psi(x,y)$), and want to check its validity. Are there any good algorithms for this? **Edit 4/8:** I learned that this class of formulas is sometimes known as 2QBF, so I am looking for good algorithms for 2QBF. Specializing further: In my particular case, I have a formula of the form $\forall x . \exists y . f(x)=g(y)$ whose validity I want to check, where $f,g$ are functions that produce a $k$-bit output. Are there any algorithms for checking the validity of this particular sort of formula, more efficiently than generic algorithms for QBF? P.S. I am not asking about the worst-case hardness, in complexity theory. I am asking about practically useful algorithms (much as modern SAT solvers are practically useful on many problems even though SAT is NP-complete).
I have read two papers related to this, one specifically about 2QBF. The papers are the following: [Incremental Determinization](https://link.springer.com/chapter/10.1007/978-3-319-40970-2_23), Markus N. Rabe and Sanjit Seshia, Theory and Applications of Satisfiability Testing (SAT 2016). They have implemented their algorithm in a tool named [CADET](https://github.com/MarkusRabe/cadet). The basic idea is to incrementally add new constraints to the formula until the constraints describe a unique Skolem function, or until its absence is confirmed. The second one is [Incremental QBF Solving](http://arxiv.org/pdf/1402.2410.pdf), Florian Lonsing and Uwe Egly, implemented in a tool named [DepQBF](https://github.com/lonsing/depqbf). It does not put any constraint on the number of quantifier alternations. It starts from the assumption that we have a sequence of closely related QBF formulas, is based on incremental solving, and does not throw away the clauses learned during the last solve. It adds clauses and cubes to the current formula and stops when either the clauses or the cubes become empty, representing unsat or sat respectively. **Edit**: Just to give a perspective on how well these approaches work on 2QBF benchmarks, please look at the [Results of QBFEVal-2018](http://www.qbflib.org/main.pdf) from the yearly QBF competition [QBFEVAL](http://www.qbflib.org/index_eval.php). In 2019 there was no 2QBF track.

> In the **2QBF Track** of QBFEVAL-2018, **DepQBF** **was the winner** and **CADET** was **second** in the race.

So these two approaches actually work very well in practice (at least on the QBFEVAL benchmarks).
Are there known algorithms for the following problem that beat the naive algorithm? > > Input: matrix $A$ and vectors $b,c$, where all entries of $A,b,c$ are nonnegative integers. > > > Output: an optimal solution $x^\*$ to $\max \{ c^T x : Ax \le b, x \in \{ 0,1\}^n \}$. > > > This question is a refined version of my previous question [Exact exponential-time algorithms for 0-1 programming](https://cstheory.stackexchange.com/questions/18482/exact-exponential-time-algorithms-for-0-1-programming).
If the number of non-zero coefficients in $A$ is linear in $n$, there is an algorithm that solves this problem in less than $2^n$ time. Here's how it works. We use the standard connection between an optimization problem and its corresponding decision problem. To test whether there exists a solution $x$ where $Ax\le b$ and $c^T x \ge \alpha$, we will form a decision problem: we will adjoin the constraint $c^T x\ge \alpha$ to the matrix $A$, and test whether there exists any $x$ such that $Ax \le b$ and $-c^T x \le -\alpha$. In particular, we will form a new matrix $A'$ by taking $A$ and adding an extra row containing $-c^T$, and we will form $b'$ by taking $b$ and adjoining an extra row with $-\alpha$. We obtain a decision problem: does there exist $x \in \{0,1\}^n$ such that $A' x \le b'$? The answer to this decision problem tells us whether there exists a solution to the original optimization problem of value $\alpha$ or greater. Moreover, as explained in [the answer to your prior question](https://cstheory.stackexchange.com/a/18483/5038), this decision problem can be solved in less than $2^n$ time, if the number of non-zero coefficients in $A'$ is linear in $n$ (and thus if the number of non-zero coefficients in $A$ is linear in $n$). Now we can use binary search on $\alpha$ to solve your optimization problem in less than $2^n$ time. My thanks to AustinBuchanan and Stefan Schneider for helping to debug an earlier version of this answer.
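Here is a sketch of that reduction in Python. The `feasible` oracle below is only a brute-force placeholder so the example runs end to end; in the intended use it would be replaced by the sub-$2^n$ decision procedure from the linked answer.

```python
import itertools
import numpy as np

def feasible(A, b):
    """Decision oracle: return some x in {0,1}^n with A x <= b, or None.
    Brute force here, purely as a placeholder for the faster procedure."""
    n = A.shape[1]
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        if np.all(A @ x <= b):
            return x
    return None

def maximize(A, b, c):
    """Binary search on alpha over the question: is there x with A x <= b and c^T x >= alpha?"""
    lo, hi = 0, int(c.sum())      # entries of c are nonnegative, so 0 <= c^T x <= sum(c)
    best = None
    while lo <= hi:
        alpha = (lo + hi) // 2
        # Adjoin the extra row -c^T x <= -alpha, i.e. c^T x >= alpha.
        x = feasible(np.vstack([A, -c]), np.append(b, -alpha))
        if x is not None:
            best, lo = x, alpha + 1
        else:
            hi = alpha - 1
    return best                    # an optimal x*, or None if A x <= b has no 0-1 solution

A = np.array([[1, 1, 0], [0, 1, 1]])
b = np.array([1, 1])
c = np.array([2, 3, 4])
print(maximize(A, b, c))           # [1 0 1], with objective value 6
```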
I need to find a context-free grammar for the following language which uses the alphabet $\{a, b\}$ $$L=\{a^nb^m\mid 2n<m<3n\}$$
**Hint**: Can you do $$L=\{a^nb^m\mid m=3n\}$$ Try it also for: $$L=\{a^nb^m\mid m=3n-1\}$$ Then you might want to be able not to always have that many $b$. And there is a bit more to take care of.
I totally understand what big $O$ notation means. My issue is when we say $T(n)=O(f(n))$, where $T(n)$ is the running time of an algorithm on an input of size $n$. I understand the semantics of it. But $T(n)$ and $O(f(n))$ are two different things. $T(n)$ is an exact number, but $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n)$ ***equals*** $O(f(n))$. If one asks you what the ***value*** of $O(f(n))$ is, what would be your answer? There is no answer.
In [The Algorithm Design Manual](https://www.springer.com/la/book/9781848000698) [1], you can find a paragraph about this issue:

> The Big Oh notation [including $O$, $\Omega$ and $\Theta$] provides for a rough notion of equality when comparing functions. It is somewhat jarring to see an expression like $n^2 = O(n^3)$, but its meaning can always be resolved by going back to the definitions in terms of upper and lower bounds. It is perhaps most instructive to read the " = " here as meaning "*one of the functions that are*". Clearly, $n^2$ is one of the functions that are $O(n^3)$.

Strictly speaking (as noted by [David Richerby's comment](https://cs.stackexchange.com/questions/101324/o-is-not-a-function-so-how-can-a-function-be-equal-to-it/101334#comment216184_101334)), $\Theta$ gives you a rough notion of equality, $O$ a rough notion of less-than-or-equal-to, and $\Omega$ a rough notion of greater-than-or-equal-to. Nonetheless, I agree with [Vincenzo's answer](https://cs.stackexchange.com/questions/101324/o-is-not-a-function/101325#101325): you can simply interpret $O(f(n))$ as a set of functions and the *=* symbol as a set membership symbol $\in$.

---

[1] Skiena, S. S. The Algorithm Design Manual (Second Edition). Springer (2008)
This question is in regard to the Fisher-Yates algorithm for returning a random shuffle of a given array. The [Wikipedia page](http://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle) says that its complexity is O(n), but I think that it is O(n log n). In each iteration i, a random integer is chosen between 1 and i. Simply writing the integer in memory is O(log i), and since there are n iterations, the total is O(log 1) + O(log 2) + ... + O(log n) = O(n log n), which isn't better than the naive algorithm. Am I missing something here? Note: The naive algorithm is to assign each element a random number in the interval (0,1), then sort the array with regard to the assigned numbers.
The standard model of computation assumes that arithmetic operations on O(log n)-bit integers can be executed in constant time, since those operations are typically handled in hardware. So in the Fisher-Yates algorithm, "writing the integer i in memory" only takes O(1) time. Of course, it's perfectly meaningful to analyze algorithms in terms of bit operations, but the bit-cost model is less predictive of actual behavior. Even the simple loop `for i = 1 to n: print(i)` requires O(n log n) bit operations.
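For reference, a sketch of the shuffle under discussion; in the word-RAM model described above, each iteration costs O(1), giving O(n) overall:

```python
import random

def fisher_yates_shuffle(a):
    """In-place Fisher-Yates shuffle: each of the n! permutations is equally likely."""
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)    # one random index: O(1) in the word-RAM model
        a[i], a[j] = a[j], a[i]     # one swap: O(1)
    return a

print(fisher_yates_shuffle(list(range(10))))
```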
If the running time of an algorithm scales linearly with the size of its input, we say it has $O(N)$ complexity, where we understand `N` to represent input size. If the running time does not vary with input size, we say it's $O(1)$, which is essentially saying it varies proportionally to 1; i.e., doesn't vary at all (because 1 is constant). Of course, 1 is not the only constant. *Any* number could have been used there, right? (Incidentally, I think this is related to the common mistake many CS students make, thinking "$O(2N)$" is any different from $O(N)$.) It seems to me that 1 was a sensible choice. Still, I'm curious if there is more to the etymology thereβ€”why not $O(0)$, for example, or $O(C)$ where $C$ stands for "constant"? Is there a story there, or was it just an arbitrary choice that has never really been questioned?
There is no reason why you can't write $O(2)$ instead. $O(1)$ can equally be expressed as $O(2)$, or $O(1/2)$ or $O(2\pi)$, etc. (Untitled explained why it can't be $O(0)$.) It's purely a matter of convention.
I've just finished a module where we covered the different approaches to statistical problems – mainly Bayesian vs frequentist. The lecturer also announced that she is a frequentist. We covered some paradoxes and generally the quirks of each approach (long run frequencies, prior specification, etc). This has got me thinking – how seriously do I need to consider this? If I want to be a statistician, do I need to align myself with one philosophy? Before I approach a problem, do I need to specifically mention which school of thought I will be applying? And crucially, do I need to be careful that I don't mix frequentist and Bayesian approaches and cause contradictions/paradoxes?
***A preliminary note on my nomenclature:** As a preliminary matter, I note that I have never liked the term "frequentist school" for the philosophy and set of methods it designates, and so I instead refer to this school of thought as "classical". Both Bayesians and classical statisticians agree entirely on the relevant theorems pertaining to the laws of large numbers, so both groups agree that the "frequentist" interpretation of probability holds under valid assumptions (i.e., an exchangeable sequence of values representing "repetition" of an experiment). All Bayesians are also "frequentists", in the sense that we accept the laws of large numbers and agree that probability corresponds to limiting frequency in appropriate circumstances. Since there is no real disagreement on the underlying laws of large numbers, I view it as silly to say that one group is a "frequentist" school and the other isn't.*

---

> 
> This has got me thinking – how seriously do I need to consider this?
> 
> 

Others may disagree here, but my view is that if you want to be a good statistician, it is important to take foundational questions in the field seriously, and devote serious thinking to them during your training. Philosophical and methodological issues can seem far-removed from data analysis, but they are foundational issues that inform your choice of modelling methods and your interpretation and communication of results. Learning something always involves a trade-off (though not always against other learning!), so you will need to decide the appropriate trade-off between learning the philosophical and foundational issues in statistics, versus using your time for something else. This trade-off will depend on your specific aspirations, in terms of how detailed you want your knowledge of the subject to be. When training to be an academic in the field (i.e., when doing my PhD) I spent quite a lot of time reading philosophical papers on this subject, mulling over their implications, and having late-night drunken conversations on the topic with reluctant young ladies at university parties. My view now ---as a practicing academic--- is that this was time well spent.

> 
> If I want to be a statistician, do I need to align myself with one philosophy?
> 
> 

If you find one philosophy/methodology to be exclusively correct then you should align yourself entirely with that one philosophy/methodology. However, there are many statisticians who find some merit in each approach under different circumstances, or view one paradigm as philosophically correct, but difficult to apply in certain cases. In any case, it is not necessary to align yourself exclusively with one approach. To be a good statistician, you should certainly understand the difference between the two paradigms and be capable of applying models in either paradigm. You should also have some sense of when a particular approach might be easier to apply to solving a particular problem. (For example, some "paradoxes" arise under classical methods that are easily resolved in Bayesian analysis. Contrarily, some modelling situations are difficult to deal with in Bayesian analysis, such as when we want to test a specific null hypothesis against a broad but vague alternative hypothesis.) In general, if you can enlarge your "toolkit" to be familiar with more methods and models, you will have a greater capacity to deploy effective methods in statistical problems.

> 
> Before I approach a problem, do I need to specifically mention which school of thought I will be applying?
> 
> 

This depends on context, but for general modelling purposes, no --- this will be obvious from the type of model and analysis you apply. If you apply a prior distribution to the unknown parameters and derive a posterior distribution, we will know you are doing a Bayesian analysis. If you treat the unknown parameters as "unknown constants" and use classical methods, we will know you are using classical analysis. In good statistical writing you should explicitly state the model you are using (and maybe give references if you are writing an academic paper), and you might take this occasion to explicitly note if you are doing a Bayesian analysis, but even if you don't, it will be obvious. Of course, if the problem you are approaching is a theoretical or philosophical problem (as opposed to a data analysis problem) then it may hinge upon the relevant interpretation of probability, and the consequent methodological paradigm. In such cases you should explicitly state your philosophical/methodological approach.

> 
> And crucially, do I need to be careful that I don't mix frequentist and Bayesian approaches and cause contradictions/paradoxes?
> 
> 

Unless you regard one of these methods to be totally invalid, such that it should never be used, it would stand to reason that it is okay to mix methods under appropriate circumstances. Again, understanding the strong and weak points of each paradigm will assist you in understanding when it is easier to apply one paradigm or the other. In practical statistical work, it is quite common to see Bayesian analysis that has some classical methods applied for diagnostic purposes to test underlying assumptions. Usually this occurs when we want to test some assumption of a Bayesian model against a broad and vague alternative (i.e., where the alternative is not specified as a parametric model which is itself amenable to Bayesian analysis). For example, we might conduct a Bayesian analysis using a linear regression model, but then apply Grubbs' test (a classical hypothesis test) to check whether the assumption of normally distributed error terms is reasonable. Alternatively, we might conduct alternative Bayesian analyses using a set of different models, but then conduct cross-validation using classical methods. Perhaps there are some Bayesian "purists" who completely eschew classical methods, but they are rare. (This partly depends on the state of knowledge in the field of Bayesian analysis; as the field develops further and expands its boundaries, it has less and less need for supplementation by classical methods. Consequently, you should see this as contextual, based on the present state of development of Bayesian theory and related computational tools, etc.) If you mix the two methods then you certainly need to be mindful of creating contradictions or "paradoxes" in your analysis, but obviously that is going to require you to have a good understanding of the two paradigms, which further behoves you to devote time to learning them.
I am looking for resources (preferably a handbook) on advanced topics in algorithms (topics beyond what is covered in algorithms textbooks like CLRS and DPV). The type of material that can be used for teaching a topics in algorithms course like Erik Demaine and David Karger's [Advanced Algorithms course](http://courses.csail.mit.edu/6.854/03/). Resources that would give an overview of the field (like a handbook) are preferable, but more focused resources like Vijay Vazirani's "Approximation Algorithms" book are also fine.
The Design of Approximation Algorithms by Williamson & Shmoys (<http://www.designofapproxalgs.com/>) is a great book for many approximation methods such as greedy algorithms, semidefinite programming, etc. Also, it covers some topics within complexity that are closely related to approximation algorithms (inapproximability, Unique Games-based hardness of MAX-CUT).
I'm trying to implement a k-opt algorithm and I'm bogged down on a detail: the importance of choosing disjoint edges. My question: Is there any benefit to considering adjacent edges, or is the full power of the heuristic achieved when only disjoint edges are considered? Wikipedia seems confused on this question. [Its description of k-opt](https://en.wikipedia.org/wiki/Travelling_salesman_problem#Heuristic_and_approximation_algorithms) explicitly says "k mutually disjoint edges", but then it describes 2-opt and 2.5-opt as special cases of k-opt, and describes both those algorithms as processing adjacent edges. Is there something I'm not understanding here?
I have a partial answer to my own question. I believe there is a benefit to considering adjacent edges. The full power of the heuristic is not achieved when only disjoint edges are considered. To arrive at this conclusion, I implemented two versions of the 4-opt algorithm in C#. One version considers only disjoint edges and the other considers both disjoint and adjacent edges. Then, I used both algorithms to solve the Travelling Salesman problem for 1000 scenarios of 25 nodes each. The two algorithms found the same solution 52.7% of the time. The disjoint+adjacent algorithm found a better solution than the disjoint-only algorithm 45.7% of the time. The disjoint-only algorithm found a better solution than the disjoint+adjacent algorithm 1.6% of the time. The disjoint+adjacent algorithm found tours that were, on average, 99.1% of the length of the tours found by the disjoint-only algorithm. Based on these results, I'm fairly confident that there is a benefit to exploring adjacent edges in the k-opt algorithm. I call this a partial answer because I'm not yet clear on whether it is efficient to explore adjacent edges. Yes, exploring adjacent edges in my experiment improved the tour length by an average of 0.9%, but perhaps I could have improved the tour length more than that by using the same computing effort exploring some 5-opt steps instead. My experimentation suggests that it is optimal to first explore 2-opt steps, then to explore 3-opt steps that include adjacent edges, then to explore 3-opt steps that include no adjacent edges, then to explore 4-opt steps that include adjacent edges, then to explore 4-opt steps that include no adjacent edges, etc. However, that is preliminary speculation. If anyone knows of k-opt documentation that covers this level of detail, please let me know about it.
When building a predictive model using machine learning techniques, what is the point of doing an exploratory data analysis (EDA)? Is it okay to jump straight to feature generation and building your model(s)? How are descriptive statistics used in EDA important?
Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit of a few hours. I went through each of the variables in turn, graphing them, calculating summary statistics etc. I also calculated correlations between the numerical variables. Among the things I found were: * One categorical variable almost perfectly matched the target. * Two or three variables had over half of their values missing. * A couple of variables had extreme outliers. * Two of the numerical variables were perfectly correlated. * etc. My point is that *these were things which had been put in deliberately* to see whether people would notice them before trying to build a model. The company put them in because they are the sort of thing which can happen in real life, and drastically affect model performance. So yes, EDA is important when doing machine learning!
We draw $n$ values, each equiprobably among $m$ distinct values. What are the odds $p(n,m,k)$ that at least one of the values is drawn at least $k$ times? e.g. for $n=3000$, $m=300$, $k=20$. Note: I was passed a variant of this by a friend asking for "a statistical package usable for similar problems". My attempt: The number of times a particular value is reached follows a binomial law with $n$ events, probability $1/m$. This is enough to get odds $q$ that a particular value is reached at least $k$ times [Excel gives $q\approx 0.00340$ with `=1-BINOMDIST(20-1,3000,1/300,TRUE)`]. Given that $n\gg k$, we can ignore the fact that odds of a value being reached depends on the outcome for other values, and get an *approximation* of $p$ as $1-(1-q)^m$ [Excel gives $p\approx 0.640$ with `=1-BINOMDIST(20-1,3000,1/300,TRUE)^300`]. *update: the exponent was wrong in the above, that's now fixed* Is this correct? *(now solved, yes, but the approximation made leads to an error in the order of 1% with the example parameters)* **What methods can work for arbitrary parameters $(n,m,k)$?** Is this function available in R or other package, or how could we construct it? *(now solved, both exactly for moderate parameters, and theoretically for huge parameters)* I see how to do a simulation in C, what would be an example of a similar simulation in R? *(now solved, a corrected simulation in R and another in Python gives $p\approx 0.647$)*
There are almost certainly easier ways, but one way of computing the value precisely is to compute the number of ways of placing $n$ labeled balls in $m$ labeled bins such that no bin contains $k$ or more balls. We can compute this using a simple recurrence.

Let $W(n,j,m',k)$ be the number of ways of placing exactly $j$ of the $n$ labeled balls in $m'$ of the $m$ labeled bins. Then the number we seek is $W(n,n,m,k)$. We have the following recurrence:
$$W(n,j,m',k)=\sum\_{i=0}^{k-1}\binom{n-j+i}{i}W(n,j-i,m'-1,k)$$
where $W(n,j,m',k)=0$ when $j<0$ and $W(n,0,0,k)=1$ as there is one way to place no balls in no bins. This follows from the fact that there are $\binom{n-j+i}{i}$ ways to choose $i$ out of $n-j+i$ balls to put in the $m'$th bin, and there are $W(n,j-i,m'-1,k)$ ways to put $j-i$ balls in $m'-1$ bins.

The essence of this recurrence is that we can compute the number of ways of placing $j$ out of $n$ balls in $m'$ bins by looking at the number of balls placed in the $m'$th bin. If we placed $i$ balls in the $m'$th bin, then there were $j-i$ balls in the previous $m'-1$ bins, and we have already calculated the number of ways of doing that as $W(n,j-i,m'-1,k)$, and we have $\binom{n-j+i}{i}$ ways of choosing the $i$ balls to put in the $m'$th bin (there were $n-j+i$ balls left after we put $j-i$ balls in the first $m'-1$ bins, and we choose $i$ of them). So $W(n,j,m',k)$ is just the sum over $i$ from $0$ to $k-1$ of $\binom{n-j+i}{i}W(n,j-i,m'-1,k)$.

Once we have computed $W(n,n,m,k)$, the probability that at least one bin has at least $k$ balls is $1-\frac{W(n,n,m,k)}{m^n}$.

Coding this in Python, because it has multiple-precision arithmetic, we have

```
import sympy # to get the decimal approximation

#compute the binomial coefficient
def binomial(n, k):
    if k > n or k < 0:
        return 0
    if k > n / 2:
        k = n - k
    if k == 0:
        return 1
    bin = n - (k - 1)
    for i in range(k - 2, -1, -1):
        bin = bin * (n - i) // (k - i)
    return bin

#compute the number of ways that balls can be put in cells such that no
# cell contains fullbin (or more) balls.
def numways(cells, balls, fullbin):
    x = [1 if i == 0 else 0 for i in range(balls + 1)]
    for j in range(cells):
        x = [sum(binomial(balls - (i - k), k) * x[i - k] if i - k >= 0 else 0
                 for k in range(fullbin))
             for i in range(balls + 1)]
    return x[balls]

x = sympy.Integer(numways(300, 3000, 20)) / sympy.Integer(300**3000)
print(sympy.N(1 - x, 50))
```

(sympy is just used to get the decimal approximation). I get the following answer to 50 decimal places

0.64731643604975767318804860342485318214921593659347

This method would not be feasible for much larger values of $m$ and $n$.

**ADDED**

As there appears to be some skepticism as to the accuracy of this answer, I ran my own Monte Carlo approximation (in C using the GSL; I used something other than R to avoid any problems that R may have provided, and avoided Python because the heat death of the universe is happening any time now). In $10^7$ runs I got 6471264 hits. This seems to agree with my count, and is considerably at odds with whuber's. The code for the Monte Carlo is attached.

I have finished a run of $10^8$ trials and have gotten 64733136 successes, for a probability of 0.64733136. I am fairly certain that things are working correctly.
``` #include <stdio.h> #include <stdlib.h> #include <gsl/gsl_rng.h> const gsl_rng_type * T; gsl_rng * r; int testrand(int cells, int balls, int limit, int runs) { int run; int count = 0; int *array = malloc(cells * sizeof(int)); for (run =0; run < runs; run++) { int i; int hit = 0; for (i = 0; i < cells; i++) array[i] = 0; for (i = 0; i < balls; i++) { array[gsl_rng_uniform_int(r, cells)]++; } for (i = 0; i < cells; i++) { if (array[i] >= limit) { hit = 1; break; } } count += hit; } free(array); return count; } int main (void) { int i, n = 10; gsl_rng_env_setup(); T = gsl_rng_default; r = gsl_rng_alloc (T); for (i = 0; i < n; i++) { printf("%d\n", testrand(300, 3000, 20, 10000000)); } gsl_rng_free (r); return 0; } ``` **EVEN MORE** Note: this should be a comment to probabilityislogic's answer, but it won't fit. Reifying probabilityislogic's answer (mainly out of curiosity), this time in R because a foolish inconsistency is the hobgoblin of great minds, or something like that. This is the normal approximation from the Levin paper (the Edgeworth expansion should be straightforward, but it is more typing than I'm willing to expend) ``` # an implementation of the Bruce Levin article here limit is the upper limit on # bin size that does not count approxNorm <- function(balls, cells, limit) { # using N=s sp <- balls / cells mu <- sp * (1 - dpois(limit, sp) / ppois(limit, sp)) sig2 <- mu - (limit - mu) * (sp - mu) x <- (balls - cells * mu) / sqrt(cells * sig2) p2 <- exp(-x^2 / 2)/sqrt(2 * pi * cells * sig2) p1 <- exp(ppois(limit, sp, log.p=TRUE) * cells) sqrt(2 * pi * balls) * p1 * p2 } ``` and `1 - approxNorm(3000, 300, 19)` gives us $p(3000, 300, 20) \approx 0.6468276$ which is not too bad at all.
After reviewing related questions on Cross Validated and countless articles and discussions regarding the inappropriate use of [stepwise regression](http://en.wikipedia.org/wiki/Stepwise_regression) for variable selection, I am still unable to find the answers that I am looking for in regards to building parsimonious, binary logistic regression models from datasets with 1000 (or more) potential predictor variables. For some background information, I typically work with large datasets, 500k or more rows, and my interest is in building binary logistic regression models to predict whether an individual will pay (1) or not pay (0) their bill on a particular account without using stepwise logistic regression. Currently, stepwise logistic regression is hailed as the β€œperfect method” among other statisticians that I have worked with, and I would like to change that as I have witnessed many of its pitfalls firsthand. I have recently dabbled in PCA (`proc varclus`) and random forest analyses (`randomForest`) with the latter being especially helpful; however, I am still seeking further direction on how to reduce the number of variables in my binary logistic models without using stepwise logistic regression. With that being said, any help (suggested articles or thoughts) is greatly appreciated. Thanks!
Tree-based methods (e.g., CART) are widely used for solving this problem. While I would always prefer to estimate a logit model guided by theory, in situations where there is insufficient theory or the data is not well understood, a tree-based method is, in my experience, always preferable to a logit model: it scales better to large data sets, is much more robust, and deals better with non-linearities and interactions. If your superiors are desperate for a logit model, you can use the variables selected by the tree-based model in a logit model.
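To make the workflow concrete, here is a minimal sketch of the idea in Python/scikit-learn, using synthetic stand-in data (the same thing can be done in R with `randomForest` and `glm`): rank the candidate predictors by random-forest importance, keep the top handful, and fit the logistic regression on only those.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# stand-in data: 1000 candidate predictors, binary pay/no-pay outcome
X, y = make_classification(n_samples=5000, n_features=1000, n_informative=20, random_state=0)
X = pd.DataFrame(X, columns=[f"x{i}" for i in range(X.shape[1])])

# 1) rank the candidate predictors with a random forest
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0).fit(X, y)
importance = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)

# 2) keep, say, the 20 most important variables and fit a plain logistic regression on them
top = importance.head(20).index
logit = LogisticRegression(max_iter=1000).fit(X[top], y)
print(list(top))
```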
I'm currently doing two multiple linear regressions. Each of them uses the same set of predictors (measurements of real estate quality) $X\_1,...,X\_n$, but with different dependent variables (one of them the purchasing price, the other one the yearly rent) $Y\_1$ and $Y\_2$. $Y\_1= a\_1X\_1+a\_2X\_2+...$ $Y\_2= b\_1X\_1+b\_2X\_2+...$ What I am interested in is the influence of the independent variables on a third dependent variable $Y\_3$ (a real estate investor's return assumption), which is approximately the quotient of the first two dependent variables, $Y\_1/Y\_2$. So what I want to do is find out which of the independent variables could possibly have an influence on the third dependent variable. As predictors for the regression on $Y\_3$ I only want to use those independent variables out of the original set that I think have an influence on $Y\_3$. To do so I want to compare the influence of the independent variables on $Y\_1$ and $Y\_2$. If the influence points in different directions (e.g. coefficient $a\_1$ is negative and $b\_1$ is positive) it is obvious that this independent variable will probably have an influence on $Y\_3$. But what if the direction is the same? It could happen that the independent variable $X\_n$ has a strong influence on both $Y\_1$ and $Y\_2$ but with the same magnitude, so that $Y\_3$ is not determined by this independent variable. *So my question is*: is there a way to find out (e.g. by comparing standardized regression coefficients) how large the influence of one predictor is, relatively, on $Y\_1$ and $Y\_2$? Then I could say, for example: "$X\_n$ determines both $Y\_1$ and $Y\_2$ in a positive way, but $Y\_1$ is determined more strongly, so that the quotient $Y\_1/Y\_2$ and therefore probably $Y\_3$ is influenced by $X\_n$ in a positive way; that's why I use it as a predictor in the regression on $Y\_3$." I don't want to use the quotient of the predicted $Y\_1$ and $Y\_2$ as an estimator for $Y\_3$, but rather fit a completely new regression on $Y\_3$.
If you're willing to fit the models: $$ \log Y\_1= a\_1X\_1+a\_2X\_2+\ldots\\ \log Y\_2= b\_1X\_1+b\_2X\_2+\ldots $$ then, noting that $\log c/d = \log c - \log d$, you can interpret the differences of coefficients, $a\_j - b\_j$, directly for their impact on $\log Y\_1/Y\_2$. Moreover, if you fit $(\log Y\_1, \log Y\_2)$ as a multivariate response, you'll be able to estimate the standard errors of $a\_j-b\_j$ and hence test their contributions.
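A related shortcut, if you only care about the differences $a\_j-b\_j$: because least squares is linear in the response, regressing $\log Y\_1 - \log Y\_2$ (that is, $\log Y\_1/Y\_2$) on the same predictors yields coefficient estimates equal to $\hat a\_j-\hat b\_j$, together with standard errors for those differences. A minimal sketch in Python/statsmodels with simulated stand-in data (the variable names and coefficients are made up):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                                               # three quality measures
y1 = np.exp(1.0 + X @ [0.5, 0.2, 0.1] + rng.normal(scale=0.1, size=n))   # price
y2 = np.exp(0.5 + X @ [0.3, 0.2, 0.4] + rng.normal(scale=0.1, size=n))   # rent

Xc = sm.add_constant(X)
ratio_fit = sm.OLS(np.log(y1) - np.log(y2), Xc).fit()
print(ratio_fit.params)   # estimates of a_j - b_j (here roughly 0.2, 0.0, -0.3)
```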
So I have the question: Prove that any directed cycle in the graph of a partial order must only involve one node. I know that a partial order must be transitive, antisymmetric, and reflexive, but from there I am pretty lost. I also know that any path of one or more edges from a node to itself is a directed cycle, but from there I am having trouble connecting everything. I need help with where to start thinking, and with what the first step in answering this question should be.
I would say very definitely teach using Karp (many-one) reductions. Regardless of the benefits of using poly-time Turing reductions (Cook), Karp reductions are the standard model. Everybody uses Karp and the main pitfall of teaching Cook is that you'll end up with a whole class of students who become pathologically confused whenever they read a textbook or try to discuss the subject with anyone who wasn't taught by you. I agree that Cook reductions are in several ways more sensible and that there's no distinction between NP-hardness and coNP-hardness in practical terms, in the sense that they both mean "This problem is pretty hard and you're not going to get a general, efficient, exact algorithm that can cope with large instances." On the other hand, the distinction between NP and coNP isn't entirely an artifact of a theory based on Karp reductions: you don't often talk about graphs that are non-3-colourable or in which every set of $k$ vertices contains at least one edge. Somehow, the "natural" version of the problem often seems to be in NP rather than coNP.
Let $x\_1, \ldots, x\_n$ be points in the plane $\mathbb{R}^2$. Consider a complete graph with the points as vertices and with edge weights of $\|x\_i - x\_j\|^2$. Can you always find a cut of weight that is at least $\frac 2 3$ of the total weight? If not, which constant should replace the $\frac 2 3$? The worst example I'm able to find is 3 points on an equilateral triangle, which achieves the $\frac 2 3$. Note that a random split would produce $\frac 1 2$, but it seems intuitively obvious that in low dimensions, one can cluster better than randomly. What happens for max-k-cut for k > 2? How about a dimension d > 2? Is there a framework to answer such questions? I know about Cheeger's inequalities, but those apply to sparsest cut (not max-cut) and only work for regular graphs. (Question is inspired by the problem of clustering light sources in computer graphics to minimize variance).
From Russell Impagliazzo's [comment](https://cstheory.stackexchange.com/questions/5463/can-one-amplify-p-np-beyond-p-ph#comment14597_5463): > > As a way of formalizing > what languages are in $\mathsf{P}$ if $\mathsf{P}=\mathsf{NP}$, > Regan introduced the complexity class $\mathsf{H}$. > A language $L$ is in $\mathsf{H}$ if and only if $L$ is in $\mathsf{P}^O$ > relative to every oracle $O$ so that $\mathsf{P}^O=\mathsf{NP}^O$. > Thus, $L$ is in $\mathsf{H}$ if the statement > $\mathsf{P}=\mathsf{NP} \implies L\in\mathsf{P}$ relativizes. > $\mathsf{PH} \subseteq \mathsf{H} \subseteq \mathsf{AltTime}(O(\lg\lg n),\mathsf{poly})$. > From Toda's theorem, and some of the lemmas in Toda's theorem, > it is also true that $\mathsf{H} \subseteq \mathsf{P}^{\mathsf{mod}\_q \mathsf{P}}$ for every $q$. > Basically, any oracle satisfying $\mathsf{P}^O=\mathsf{NP}^O$ > gives a new upper bound on $\mathsf{H}$. > It is open whether $\mathsf{H}=\mathsf{PH}$. > > > And from Lance Fortnow's [comment](https://cstheory.stackexchange.com/questions/5463/can-one-amplify-p-np-beyond-p-ph#comment78141_5463): > > Let $f(n)$ be any unbounded function. > $\mathsf{H}$ is not contained in $\mathsf{AltTime}(f(n),\mathsf{poly})$ and > if you could prove > $\mathsf{P}=\mathsf{NP}$ implies $\mathsf{P}=\mathsf{AltTime}(f(n),\mathsf{poly})$ then $\mathsf{NP}$ is different than $\mathsf{L}$. > > > For definition of $\mathsf{H}$ see definition 6.3 in * Kenneth W. Regan, "[Index sets and presentations of complexity classes](http://dx.doi.org/10.1016/0304-3975(95)00146-8)", 1999
First of all, I apologize since this question has probably been asked many times and is easily answered. However, as a statistics amateur I simply couldn't figure out what keywords are relevant to my question. Suppose you have 100 merchants and 100 products. Each merchant sells a certain range of products, ranging from only one product to all 100 products. Also, products are sold in widely different proportions, which differ among merchants, and are subject to the merchant's individual (irrational) preferences. Whenever a merchant makes a "pitch" on the market, we observe whether or not he manages to sell the product he's pitching. We assume the probability of success depends (a) on the skill of the merchant and (b) the attractiveness of the product. The products' prices are fixed, so that's not a factor. The data we have consists of millions of pitches. For each pitch, we know whether or not it was successful, the merchant, and the product. Obviously, if we compare merchants by their average success rate, this information is useless because every merchant sells different products. Likewise, if we compare products, we gain no information since every product is sold by different merchants. **What we want is a skill score for each merchant, which is independent of the products the merchant is selling, and an attractiveness score for each product, which is independent of the merchants who are selling it.** I don't need a comprehensive explanation, just some keywords to point me in the right direction. I literally have no idea where to start. Edit: Note that our assumption is that the product attractiveness is merchant-independent and the merchant skill is product-independent, i.e. there are no merchants which are better at selling certain products but worse at selling others.
Your problem can be modeled by a [Rasch Model](http://en.wikipedia.org/wiki/Rasch_model). [Here is a document](https://math.uc.edu/~brycw/preprint/rasch3.pdf) that explains the model with the following example > > Rasch model is a statistical model of a test that attempts to > describe the probability that a student answers a question correctly. It assigns to > every student a real number, a, called the "ability", and to every questions a real > number, d, called the "difficulty". > > > This is similar to your situation where each merchant has some inherent "skill" and each product has an inherent "attractiveness".
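If you want something you can run right away, the Rasch-type structure can be fitted as a plain logistic regression with one dummy per merchant and one per product: the merchant coefficients play the role of "ability" (skill) and the product coefficients the role of "difficulty" (attractiveness), each measured relative to an arbitrary baseline category. Here is a minimal sketch in Python/statsmodels on synthetic stand-in data (the column names are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic stand-in for the real pitch log: one row per pitch
rng = np.random.default_rng(0)
n = 20000
merchant = rng.integers(0, 100, n)
product = rng.integers(0, 100, n)
skill_true = rng.normal(size=100)        # latent merchant skill
attract_true = rng.normal(size=100)      # latent product attractiveness
p = 1 / (1 + np.exp(-(skill_true[merchant] + attract_true[product])))
pitches = pd.DataFrame({"merchant": merchant, "product": product,
                        "success": rng.binomial(1, p)})

fit = smf.logit("success ~ C(merchant) + C(product)", data=pitches).fit()
skill = fit.params.filter(like="C(merchant)")          # merchant skill scores (log-odds scale)
attractiveness = fit.params.filter(like="C(product)")  # product attractiveness scores
```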
I have large survey data, a binary outcome variable and many explanatory variables including binary and continuous. I am building model sets (experimenting with both GLM and mixed GLM) and using information theoretic approaches to select the top model. I carefully examined the explanatories (both continuous and categorical) for correlations and I am only using those in the same model that have a Pearson or Phicorr coeff less than 0.3. I would like to give all of my continuous variables a fair chance in competing for the top model. In my experience, transforming those that need it based on skew improves the model they participate in (lower AIC). My first question is: is this improvement because transformation improves the linearity with the logit? Or is correcting skew improves the balance of the explanatory variables somehow by making the data more symmetric? I wish I understood the mathematical reasons behind this but for now, if someone could explain this in easy terms, that would be great. If you have any references I could use, I would really appreciate it. Many internet sites say that because normality is not an assumption in binary logistic regression, do not transform the variables. But I feel that by not transforming my variables I leave some at disadvantage compared to others and it might affect what the top model is and changes the inference (well, it usually does not, but in some datasets it does). Some of my variables perform better when log transformed, some when squared (different direction of skew) and some untransformed. Would someone be able to give me a guideline what to be careful about when transforming explanatory variables for logistic regression and if not to do it, why not?
It's important to realize the minimum and maximum are often not very good statistics to use (i.e., they can fluctuate greatly from sample to sample, and don't follow a normal distribution as, say, the mean might due to the Central Limit Theorem). As a result, the range is rarely a good choice for anything other than to state the range of *this exact sample*. For a simple, nonparametric statistic to represent variability, the Inter-Quartile Range is much better. However, while I see the analogy between IQR/median and the coefficient of variation, I don't think this is likely to be the best option. You may want to look into the *median absolute deviation from the median* ([MADM](http://en.wikipedia.org/wiki/Median_absolute_deviation)). That is: $$ MADM = \text{median}(|x\_i-\text{median}(\bf x)|) $$ I suspect a better nonparametric analogy to the coefficient of variation would be MADM/median, rather than IQR/median.
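In code the MADM/median statistic is essentially a one-liner; here is a small NumPy sketch (illustrative only; it assumes a strictly positive variable so that dividing by the median makes sense):

```python
import numpy as np

def madm_over_median(x):
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    madm = np.median(np.abs(x - med))   # median absolute deviation from the median
    return madm / med                   # nonparametric analogue of the coefficient of variation

print(madm_over_median([2.1, 2.4, 2.2, 3.9, 2.3, 2.5]))   # robust to the single outlier
```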
I am evaluating the accuracy of GPS watches, taking many readings over a known distance. I've been calculating standard deviation using the mean reading, but because I know what the reading should be, I could use that instead of the mean. Would this be a reasonable thing to do?
Re:"Would this be a reasonable thing to do?" As Wuber pointed out, it depends on what you are trying to measure. GPS signals for civilians are purposely degraded to prevent GPS signals being used for missile guidance against US targets. So the "location" of a particular position will vary with time on purpose and due to overall error in GPS. When measuring the difference between two positions (at two different times) there will be about a SQRT(2) increase over the error of measuring one position.
I know that Exceptions as a means of flow-control is generally frowned upon. But in my opinion, Exceptions have little value short of the flow-control aspect - after all, if you didn't want the program to continue, you could just output an error-message an terminate the program. Exceptions, on the other hand, provide a means of reporting errors "locally", allowing a service/component to fail, and a consumer to handle the failure - and regardless of how you look at that, it is a means of controlling the flow of the program. So here's my question - over the years, I have frequently wondered, why isn't it possible to *resume* execution after an exception is thrown? Now, you wouldn't want to allow any consumer to resume after an Exception thrown by any other component, as that component was probably not designed to resume after a throw-statement, which would lead to unpredictable results. So let's say there's a supertype of Exception called Interrupt, that allows this behavior. An Interrupt would behave just like an Exception in every respect, except that by throwing an Interrupt, you indicate that the component is ready and able to resume execution after the throw-statement, and that the stack needs to be preserved either until (A) the Interrupt has been handled, or (B) the program exits with an error-message. Let's say we add a new "resume" statement to the language, to be used inside a traditional "catch" block - if you catch an Interrupt, and issue a "resume" statement, control would return to the point from where the Interrupt was originally thrown, and execution would continue from the next statement. I've presented this idea in other circles, and I'm met with a lot of resistance, but no clear argument as to why this is not a good idea. To me, it seems like a natural extension of the idea of exceptions - there are plenty of cases where this could be useful, for example while enumerating a sequence, e.g. in a function that "yields" one result at a time; an unexpected condition could occur while producing one of these results, and you may want the *calling* code to decide whether or not it makes sense to continue producing more results. An exception does not allow for that. Let's say this function throws an interrupt instead - if so, the calling code now has a chance to look at that and decide whether to resume execution (as if the exception never occurred) and produce more results, perhaps log the condition and then resume, or perhaps throw an exception, or perhaps re-throw the interrupt in case it can be handled up-stream. I'm sure I'm not the first person to have this idea, but I would like to understand why this isn't feasible or why it's not a good idea. (PS: I'm a programmer, not a scientist, so go easy on me.)
Regarding gasche's answer - it's true that "you are certainly adding non-trivial complexity to the language", but it's important to keep in mind that in Common Lisp and related languages, conditions are *orthogonal to control flow*. The core of condition handling is merely a pattern for looking up handler functions, and does not in any way alter existing or add new control flow powers to the language. See this sketch of a "condition system" in JS: <https://groups.google.com/forum/?fromgroups#!msg/ll-next/mlixSOPwc-c/iQ-xPd-kIjEJ>
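To make "just a pattern for looking up handler functions" concrete, here is a tiny toy version in Python (my own illustration, not the linked JS sketch): the low-level code signals a condition, the innermost registered handler decides what to do, and the handler's return value is handed back to the signalling site, which simply carries on. There is no new control-flow construct involved, only ordinary function calls plus a dynamically scoped handler stack.

```python
import contextlib

_handlers = []   # dynamically scoped stack of (condition type, handler) pairs

@contextlib.contextmanager
def handling(condition_type, handler):
    _handlers.append((condition_type, handler))
    try:
        yield
    finally:
        _handlers.pop()

def signal(condition):
    for ctype, handler in reversed(_handlers):
        if isinstance(condition, ctype):
            return handler(condition)   # the handler's answer flows back to the signaller
    raise RuntimeError(f"unhandled condition: {condition!r}")

class BadRecord(Exception):
    pass

def parse_records(lines):
    out = []
    for line in lines:
        try:
            out.append(int(line))
        except ValueError:
            out.append(signal(BadRecord(line)))   # ask upstream what to use, then keep going
    return out

# The caller chooses the policy without the parser being unwound:
with handling(BadRecord, lambda c: 0):
    print(parse_records(["1", "oops", "3"]))      # [1, 0, 3]
```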
Given a set of $n$ stones, arranged in a row at equal distances from each other. The stones are numbered from $0$ to $n-1$. A frog that is on stone number $0$ has to get to stone number $n-1$. A jump is a move from stone $i$ to stone $j$ ($i<j$). Such a jump has length equal to $j-i$. The maximum jump length of the frog depends on its energy level (which cannot drop below $0$). A jump of length $j-i$ costs the frog $j-i$ energy. For example, with an initial energy of $3$, a frog on stone $0$ can jump to stone $3$ at most. On some stones, there may be worms, which add energy to the frog. If the frog jumps on a stone with a worm on it, it MUST eat the worm. The list $T[0...n-1]$ contains the energy values of the worms on the corresponding stones. If there is no worm on the stone, this is equivalent to a value of $0$. The frog starts with $0$ Energy, but it is guaranteed that there is a worm on stone number $0$. Given a list $T$, the task is to return a list of the indexes of the stones on which the frog has been, before getting to stone $n-1$. However, this must be the shortest possible list. In some cases there may be more than one solution, whichever is accepted. It is guaranteed that always the frog will be able to get to stone $n-1$. Example no. 1: For $T = [3, 0, 2, 1, 0, 2, 5, 0]$ the answer is $[0, 2, 5]$. Example no. 2: For $T = [7, 0, 0, 1, 0, 3, 0, 6, 0, 0, 0, 0, 0, 0, 0]$ the answer is $[0, 5, 7]$. I have no idea how to approach this problem, all I know is that I probably need to use dynamic programming here, but how? Anyone have any ideas?
Computers can perform any algorithm using only addition (or even using only the [x86 MOV instruction](https://stackoverflow.com/a/61048963/4086871)). But what they have is a set of instructions aiming to maximize performance. A computer that only uses the add operator will be slow at multiplication. To get around this, computers have hardware-implemented circuits performing [binary multiplication](https://en.wikipedia.org/wiki/Binary_multiplier) and division. In practice, though, most of the instructions the hardware executes for typical code are moves, branches and add/sub operations, as you can see in the [SPEC CPU2017 Benchmarks](https://www.semanticscholar.org/paper/Memory-Centric-Characterization-and-Analysis-of-Singh-Awasthi/a26979c345a85df7a74a75bb3353087aaa8b82e6). Logic gates are part of the processor components, but they can also be emulated in software; for example, to compare two numbers for equality we can subtract them and check whether the result is zero.
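As a small illustration of how multiplication reduces to additions (plus shifts), here is a sketch of the shift-and-add scheme that binary multipliers implement in hardware; it assumes non-negative integers.

```python
def multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only additions and bit shifts."""
    result = 0
    while b:
        if b & 1:        # lowest bit of b is set: add the current shifted copy of a
            result += a
        a <<= 1          # a * 2
        b >>= 1          # move on to the next bit of b
    return result

assert multiply(13, 11) == 143
```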
I am having trouble finding good resources that give a worst case $O(n \ln n)$ [in place](http://en.wikipedia.org/wiki/In-place_algorithm) [stable](http://www.algorithmist.com/index.php/Stable_Sort) sorting algorithm. Does anyone know of any good resources? Just a reminder, in place means it uses the array passed in and the sorting algorithm is only allowed to use constant extra space. Stable means that elements with the same key appear in the same order in the sorted array as they did in the original. For example, naive merge sort is worst case $O(n \ln n)$ and stable but uses $O(n)$ extra space. Standard quicksort can be made stable, is in place but is worst case $O(n^2)$. Heapsort is in place, worst case $O(n \ln n)$ but isn't stable. [Wikipedia](http://en.wikipedia.org/wiki/Sorting_algorithm) has a nice chart of which sorting algorithms have which drawbacks. Notice that there is no sorting algorithm that they list that has all three conditions of stability, worst case $O(n \ln n)$ and being in place. I have found a paper called ["Practical in-place mergesort"](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.8523&rep=rep1&type=pdf) by Katajainen, Pasanen and Teuhola, which claims to have a worst case $O(n \ln n)$ in place stable mergesort variant. If I understand their results correctly, they use (bottom-up?) mergesort recursively on the first $\frac{1}{4}$ of the array and the latter $\frac{1}{2}$ of the array and use the second $\frac{1}{4}$ as scratch space to do the merge. I'm still reading through this so any more information on whether I'm interpreting their results correctly is appreciated. I would also be very interested in a worst case $O(n \ln n)$ in place stable quicksort. From what I understand, modifying quicksort to be worst case $O(n \ln n)$ requires [selecting a proper pivot](http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm) which would destroy the stability that it would otherwise normally enjoy. This is purely of theoretical interest and I have no practical application. I would just like to know the algorithm that has all three of these features.
You can write an in-place, stable mergesort. See [this](http://thomas.baudel.name/Visualisation/VisuTri/inplacestablesort.html) for details. In the author's own words: > > A beautiful in place - merge algorithm. Test it on inverted arrays to understand how rotations work. Fastest known in place stable sort. No risk of exploding a stack. Cost: a relatively high number of moves. Stack can still be expensive too. This is a merge sort with a smart in place merge that 'rotates' the sub arrays. This code is litteraly copied from the C++ stl library and translated in Java. > > > I won't copy the code here, but you can find it at the link or by checking the C++ STL. Please let me know if you would like me to try to provide a more detailed description of what's going on here.
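In the meantime, here is a small Python transliteration of the idea (my own sketch of the STL-style buffer-free merge, not the linked Java code): pick a pivot in the larger of the two sorted runs, binary-search where it belongs in the other run, rotate the block in between, and recurse on the two halves. For clarity the rotation below uses slicing, which takes temporary space; the real algorithm performs the rotation in O(1) extra space (e.g. `std::rotate`), paying with extra moves rather than extra memory.

```python
import bisect

def merge_in_place(a, lo, mid, hi):
    """Stably merge the sorted runs a[lo:mid] and a[mid:hi]."""
    if lo >= mid or mid >= hi:
        return
    if mid - lo == 1 and hi - mid == 1:               # two elements: swap if out of order
        if a[mid] < a[lo]:
            a[lo], a[mid] = a[mid], a[lo]
        return
    if mid - lo >= hi - mid:
        cut1 = (lo + mid) // 2                        # pivot taken from the (larger) left run
        cut2 = bisect.bisect_left(a, a[cut1], mid, hi)
    else:
        cut2 = (mid + hi) // 2                        # pivot taken from the (larger) right run
        cut1 = bisect.bisect_right(a, a[cut2], lo, mid)
    a[cut1:cut2] = a[mid:cut2] + a[cut1:mid]          # "rotate" the middle block
    new_mid = cut1 + (cut2 - mid)
    merge_in_place(a, lo, cut1, new_mid)
    merge_in_place(a, new_mid, cut2, hi)

def merge_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    if hi - lo > 1:
        mid = (lo + hi) // 2
        merge_sort(a, lo, mid)
        merge_sort(a, mid, hi)
        merge_in_place(a, lo, mid, hi)

data = [5, 1, 4, 1, 3, 2]
merge_sort(data)
assert data == [1, 1, 2, 3, 4, 5]
```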
I've been researching ways of modeling and executing tasks which are dependent on each other (but in an acyclic way) and came up with task graphs. The question that's bugging me is how I can find the maximum degree of concurrency in a given task graph. In my case I'm talking about a relatively small graph, around 100 nodes, but the nodes represent long-running tasks, so the accuracy of such an algorithm matters more than its complexity. Assuming I have computed such a degree, the second problem is how I should distribute the tasks. I've read about topological sort and transforming the result into a list of sets, with each set being run in parallel, but I doubt this is the best approach.
If you turn an activity-on-node task graph into a partial order (by taking the [transitive closure](https://en.wikipedia.org/wiki/Transitive_closure)), then the largest [independent set](https://en.wikipedia.org/wiki/Independent_set_%28graph_theory%29) of tasks is what you are looking for. (Taking a topological sort, as suggested in another answer, does not work in general. Consider the series-parallel task graph $((a|b)c)|(d(e|f))$, where $\alpha|\beta$ means parallel composition of task graphs and $\alpha\beta$ means every task in task graph $\alpha$ precedes every task in task graph $\beta$. Here $\{a,b,e,f\}$ is the largest independent set, yet the topological sort will produce $\{a,b,d\}$.) Although finding largest independent sets is NP-complete in general, it [can be done quickly for partial orders](https://cs.stackexchange.com/a/10303/5323). This starts by noting the equivalence of an independent set in a poset with a set of witnesses that realise the width of the poset, and [applying KΓΆnig's theorem](https://en.wikipedia.org/wiki/Dilworth%27s_theorem#Proof_via_K.C3.B6nig.27s_theorem) to compute the witnesses by a perfect matching. Some of these basic algorithms are already part of software toolkits, like the [Graph CPAN module](http://search.cpan.org/~jhi/Graph-0.96/) for Perl, and the [Boost Graph Library](http://www.boost.org/doc/libs/1_55_0/libs/graph/doc/index.html) for C++.
When we use the term "scales of measurement", what do we mean

* by *scale*
* by *measurement*?

As far as I know, measurement denotes "a value" and scale denotes "the unit" of the value. So if I say the age of a child is $5$ years, then here

* "year" is the *scale*
* "$5$" is the *measurement*

Is that the case? But in an article it is written that "Scaling is a procedure for the assignment of numbers to a property of objects." It seems to me that "$5$" is the *scale* in the particular example of the $5$-year-old child.
A scale is usually a range or metric of measurement that everything fits into, used to avoid large calculations and to better visualize the data against a reference point. The age "5 years old" is not a scale; "year" is the metric used, not the scale. The scale here is the range from when you are born up to the maximum number of years a person can live. A common use of scaling is converting data to a log scale so you can easily visualize large ranges of data, for things like growth and magnitude.
Does the output of a parser have to be a tree, or could it also be a general graph? Moreover, is there any existing language, or a plausible one, that uses a general graph representation instead of trees for its syntax?
If you parse using GLR parsing (Generalized LR), and if the parse of the input is ambiguous (there are multiple possible ways to parse the input), then the result of the parse can be thought of as a parse DAG, rather than a parse tree. The parse DAG compactly encodes many possible parses: multiple possible parse trees. However, the bottom line remains that if you have a context-free grammar, and if your input string is unambiguously parseable (there is only a single derivation in the grammar that produces this input string), and if the job of parsing is to produce that derivation ... then under these conditions, the output of parsing will always necessarily be a parse tree, because any production of a context-free grammar inherently has a tree structure.
Our professor gave us the following algorithm that's supposed to solve the critical section problem. I'm guessing the code is just pseudocode so I'm not trying to focus on the strange syntax and just trying to understand what it does.

```
Main
begin
    process_number=1;
    parbegin
        process1;
        process2;
    parend;
end

Process1;
begin
    while process_number=2 do;
    critical_section1;
    process_number=2;
    other_stuff;
end;

Process2;
begin
    while process_number=1 do;
    critical_section2;
    process_number=1;
    other_stuff;
end;
```

After the code he says the following:

"Algorithm Guarantees mutual exclusion. Price is high: P1 must go first, so if P2 is ready to enter its critical section, it must wait. When P1 enters and leaves its CS, P2 must go next even P1 is ready, and P2 is not ready. If one of the processes is terminated, then eventually the other will not be able to proceed"

I don't even know what's going on. My best guess is that since process 1 and process 2 are running concurrently, and the variable process\_number is originally equal to 1, Process 1 is going to stop at the condition (`while process_number=2`). But for process 2 the condition (`while process_number=1`) is true, so... it will `do` whatever lines of code follow it. But the lines of code that follow it are (`critical_section2;`), and why in the world would it be running critical\_section2 if the variable `process_number = 1` is true? I... honestly have no idea what my professor was trying to get across in this confusing code.
To start, notice that a room is only useful if v[i] divides k, where v[i] is the value of room i. One other important thing to see is that, in the worst case, there are at most 7 different primes (2\*3\*5\*7\*11\*13) in the prime factorization of k and at most 19 primes total (2^19). This gives us a good hint that we can use a bitmask to represent the value of each node in such a way that lcm(v[i], v[j]) = v[i] | v[j] (bitwise or). After having the graph pre-processed (with only the useful nodes and with the bitmask pre-computed for every node and for k), we can do DP on the graph, which can be seen as a DAG because we will never go through the same node twice. Our state will then be (cur\_node, cur\_bitmask). There's only one final detail for this to fit in memory. One should notice that the second dimension of the DP is very sparse, so we can use a map for each node to keep the DP results. Hope my explanation was clear enough; if not, feel free to ask for clarification :) Sample implementation: <http://pastebin.com/KLfyyvhv>
I am a beginner with machine learning, and I'm trying to build a model to classify products by category according to the words present in the product name. My goal is to predict the category of some new product, just by observing the categories of existing products. For example, having the following products:

```
PRODUCT                          CATEGORY
soap bar johnsons green leaves   bath
cookie bauducco lemon 120gr      cookie
nesfit cookie choc and st        cookie
strawberry soap soft             bath
spoon hercules medium            kitchen
soap dish plastic medium         bath
[...]
```

My first thought is to group the words (tokens) present in each product, indicating the designated category and the occurrence count (to be used as a weight). So, for this sample, I have:

```
WORD      CATEGORY   COUNT
soap      bath       3
cookie    cookie     2
medium    bath       1
medium    kitchen    1
bar       bath       1
johnsons  bath       1
```

Having this, I should be able to train a model and use it to classify a new product. For example, a new product `hands liquid soap 120oz` could be classified as `bath`, because it contains the word `soap`, which has a strong weight for the `bath` category. In another case, the new product `medium hammer` could be classified as `bath` or `kitchen`, according to the occurrences of the word `medium` in the training set.

So, my questions are:

* Am I taking the correct approach?
* What is the best algorithm to be used in this case?
* How can I apply this using Weka?
If you have enough data and a reasonable number of classes, you can definitely train your model. The grouping of words that you have done is similar to an approach called the bag-of-words model. You can use that to build a classifier using Naive Bayes, an SVM, etc. On a different note, you can also look at the KNN algorithm, because it looks like a good fit for your use case. You can have a look at [this](https://ac.els-cdn.com/S1877705814003750/1-s2.0-S1877705814003750-main.pdf?_tid=91daeabf-59c2-43b7-92d7-ee279d97460c&acdnat=1552500809_0f7d8383d7a176153137d906c096c1e9) paper.
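Not Weka, but to show how little code the bag-of-words + Naive Bayes route needs, here is a minimal scikit-learn sketch using the products from your example (purely illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

names = ["soap bar johnsons green leaves", "cookie bauducco lemon 120gr",
         "nesfit cookie choc and st", "strawberry soap soft",
         "spoon hercules medium", "soap dish plastic medium"]
categories = ["bath", "cookie", "cookie", "bath", "kitchen", "bath"]

model = make_pipeline(CountVectorizer(), MultinomialNB())   # bag-of-words + Naive Bayes
model.fit(names, categories)

print(model.predict(["hands liquid soap 120oz"]))   # expected to come out as "bath"
```

In Weka, the closest equivalent pipeline is the `StringToWordVector` filter followed by the `NaiveBayesMultinomial` classifier.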
I was [researching dynamic programming](http://en.wikipedia.org/wiki/Dynamic_programming) and read the following: > > Often when using a more naive method, many of the subproblems are > generated and solved many times. > > > What is a naive method?
It's not a technical term with a precise meaning; it is just the English word "naive". In a computer science context, the word usually means something like "one of the things you would think of first, but without realizing a less obvious but important fact". For instance, if one knows the definition of Fibonacci numbers is $\mathrm{Fib}(n) = \mathrm{Fib}(n-1) + \mathrm{Fib}(n-2)$, then a "naive" implementation would be

```
def Fib(n):
    if n <= 1:
        return 1
    else:
        return Fib(n-1) + Fib(n-2)
```

What's the problem? That if we call, say, `Fib(7)`, then we end up making many of the same calls over and over, such as `Fib(4)` (because `Fib(7)` calls `Fib(6)` and `Fib(5)`, and `Fib(6)` calls `Fib(5)` and `Fib(4)`, and both times we call `Fib(5)` it calls `Fib(4)` and `Fib(3)`, and so on). So here a more "sophisticated" solution, as opposed to the naive one, would use dynamic programming and avoid all the extra computations.
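For contrast, a memoized version of the same function (one simple form of dynamic programming) computes each value only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n <= 1:
        return 1
    return fib(n - 1) + fib(n - 2)   # each fib(k) is computed once, then served from the cache
```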
I am looking for a small language that helps 'convince' students that Turing machines are a sufficiently general computing model. That is, a language that looks like the languages they are used to, but is also easy to simulate on a Turing machine. Papadimitriou uses RAM machines for this job, but I fear that comparing something strange (a Turing machine) to another strange thing (basically, an assembly language) would be too unconvincing for many students. Any suggestions would be most welcome (especially if they come with some recommended literature).
* If your students have done any functional programming, the nicest approach I know is to start with the untyped lambda calculus, and then use the [bracket abstraction theorem](http://www.cantab.net/users/antoni.diller/brackets/intro.html) to translate it into SKI combinators. Then, you can use the $smn$ and $utm$ theorems to show that Turing machines form a [*partial combinatory algebra*](http://ncatlab.org/nlab/show/partial+combinatory+algebra), and so can interpret the SKI combinators. I doubt this is the simplest possible approach, but I like how it rests on some of the most fundamental theorems in computability (which you may well wish to cover for other reasons). It appears that [Andrej Bauer answered a similar question on Mathoverflow](https://mathoverflow.net/questions/132800/a-proof-that-the-natural-numbers-form-a-pca) a few months back.
* If you are set on a C-like language, your path will be a lot rougher, since such languages have rather complicated semantics -- you'll need to
	1. show that Turing machines can simulate a stack and a heap at the same time,
	2. show how variables can be implemented with a stack, and
	3. show that procedure calls can be implemented with a stack.

This is much of the content of a compilers class, honestly.
Among the concepts of object-oriented programming I came across the term "identity of an object". What is meant by the "identity" of an object?
Depending on the programming language, some or all objects might have reference semantics. What this means is that when you assign the object to a variable or pass it to a method, it's still the same object, not a copy. This means the object maintains *identity* independent of the values it holds. For example, in C#, `class`es have reference semantics, while `struct`s do not. This means that for a `class C` and `struct S`: ``` C c1 = new C(); c1.Name = "c1"; C c2 = c1; c2.Name = "c2"; WriteLine(c1.Name); // writes "c2" WriteLine(c1.Equals(c2)); // writes "True" S s1 = new S(); s1.Name = "s1"; S s2 = s1; s2.Name = "s2"; WriteLine(s1.Name); // writes "s1" WriteLine(s1.Equals(s2)); // writes "False" ``` As you can see, changing the `class` instance through one variable affects the other variable too, showing that both refer to the same object. Also, you can see that the two variables are equal. So, `class`es in C# maintain object identity. For the `struct`, each variable is independent: changing one does not affect the other. This means that `struct`s in C# do not have object identity in C#. Other languages behave differently. For example, Java does not have `struct`s at all, all user-defined types have reference semantics. As another example, in C++, you can decide to use reference semantics or not on a case-by-case basis, by choosing to use pointers (or references) or not. --- Note: `Equals()` in C# can be overridden. But since `C` or `S` do not override it, the default implementation of `Equals()` mostly behaves the way we want for both.
I am trying to figure out the best transformation of my consumption variable. I am running a probit regression to look at whether or not a household enrolls in health insurance. Consumption per capita is an independent variable and in my current model I use both consumption and consumption squared (two separate variables) to show that consumption increases but with diminishing returns. This makes for fairly straightforward interpretation. However, using the log of consumption is a slightly better fit because it normalizes the distribution and contributes a bit more to the overall R2 for the model but it is more difficult to interpret. Which would you suggest I use - log of consumption or consumption plus the quadratic function? My research is focused on health economics so I'm not sure what the preference is in that discipline. Any insight would be much appreciated. Thank you!
I am not sure I understand your interpretation: a log-transformed predictor implies an effect that keeps increasing with diminishing returns, while a quadratic predictor implies the existence of a peak in the effect (for $ax^2+bx+c$ the peak is at $-b/(2a)$). I would assume the latter is less realistic in your context, and it is also not supported by the better R-squared for the log-transformed predictor.
I understand that using DFS "as is" will not find a shortest path in an unweighted graph. But why is tweaking DFS to allow it to find shortest paths in unweighted graphs such a hopeless prospect? All texts on the subject simply state that it cannot be done. I'm unconvinced (without having tried it myself). Do you know any modifications that will allow DFS to find the shortest paths in unweighted graphs? If not, what is it about the algorithm that makes it so difficult?
i think you can find the shortest path using slightly tweaked version of DFS . This is what i have came up with. ``` struct visited{ bool v; int src; int dest; visited():v{0},src{INT32_MIN},dest{INT32_MIN}{ } }; void setvisit(bool val,int src,int dest,visited & obj){ obj.src=src; obj.v=val; obj.dest=dest; } bool check(visited & obj,int src,int dest,visited * v){ if(obj.v){ if(obj.src !=src && obj.dest==dest){ visited v1=v[src]; if(v1.src==dest && v1.dest==src && v1.v){ return false; }else{return true;} }else if(obj.src==INT32_MIN && obj.dest==INT32_MIN){ return true; }else{ return false; } }else{ return true; } } void Dfs(int src,int & res,int dest,std::vector<int> g[],visited ** visit,int count){ visited *v=*visit; for(int i=0;i<g[src].size();i++){ if(check(v[g[src][i]],src,g[src][i],v)){ if(g[src][i]==dest){ res=std::min(res,count+1); return; }else{ setvisit(1,src,g[src][i],v[g[src][i]]); } Dfs(g[src][i],res,dest,g,visit,count+1); } } } int shortestPath(int src,int dest,std::vector<int> g[],int n){ if(src==dest){return 0;} visited * visit=new visited[n]; setvisit(1,src,INT32_MIN,visit[src]); int res=INT32_MAX; Dfs(src,res,dest,g,&visit,0); delete [] visit; return res; } ``` I might be wrong but until now all the test cases i have tried it has worked. i have used these rules to make modifications in DFS ``` 1. you cannot go to source again from any other edge i.e you can never travel from any other edge towards source edge. 2. if there are two edge A & B with B !=destination_edge with an undirected edge connecting them i.e A------B then a. if we travelled through A to B then a1. we cannot travel from B to A using same edge. a2. we can always travel from A to B again using same edge if we again somehow land at edge A. 3. you can travel directly towards destinaton any number of times from any edge i.e A-----dest. then you can travel towards destination from A again an again using same edge. ```
On the wikipedia page of convolutional neural networks, it is stated that rectified linear units are applied to increase the non-linearity of the decision function and of the overall network: <https://en.wikipedia.org/wiki/Convolutional_neural_network#ReLU_layer> Why is increasing non-linearity desired? What effect does it have on the overall performance of the model?
Because a linear model has limited "capacity" to perform the task. Consider the data set shown in [Why does feature engineering work?](https://stats.stackexchange.com/questions/320665/why-does-feature-engineering-work): we cannot draw a single line to separate the two classes. On the other hand, using a nonlinear transformation (feature engineering), the classification task becomes easy. A neural network is usually a very big and complex system that applies nonlinear transformations to the original data to achieve better performance.
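To make that concrete, here is a tiny sketch of my own (not from the linked post) using scikit-learn: the XOR pattern is not linearly separable, so a network with a purely linear (`identity`) activation cannot fit it, while the same architecture with ReLU can.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# XOR: no single line separates the two classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

for activation in ["identity", "relu"]:
    clf = MLPClassifier(hidden_layer_sizes=(8,), activation=activation,
                        solver="lbfgs", max_iter=5000, random_state=0)
    clf.fit(X, y)
    # 'identity' collapses to a linear model and cannot classify all four
    # points correctly; 'relu' typically fits XOR exactly.
    print(activation, clf.score(X, y))
```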
typical R-User here, applying a bunch of packages to my data, hoping for a convincing result although I understand only half of what I do at most (and then getting rejected by the reviewers, surprise). But bear with me, I am here to learn. I have a time-series of monthly precipitation data, for which I'd like to find a suitable distribution. The Cullen and Frey graph from the `fitdistrplus` package indicates that a **lognormal distribution should outperform a gamma distribution**: [![enter image description here](https://i.stack.imgur.com/jbTov.png)](https://i.stack.imgur.com/jbTov.png) and the results of fitting a gamma, Weibull and lognormal distribution seem to say the same: [![enter image description here](https://i.stack.imgur.com/SwRK1.png)](https://i.stack.imgur.com/SwRK1.png) ``` Goodness-of-fit statistics Weibull lognormal gamma Kolmogorov-Smirnov statistic 0.0263855 0.03245692 0.07115254 Cramer-von Mises statistic 0.1427163 0.11374430 1.13149886 Anderson-Darling statistic 1.0934641 0.62867455 6.73258465 Goodness-of-fit criteria Weibull lognormal gamma Akaike's Information Criterion 7845.461 7842.426 7954.480 Bayesian Information Criterion 7854.940 7851.905 7963.959 ``` This is of particular interest to me, as in my field of research a gamma distribution is often used for precipitation data, without considering that there might be better fitting distributions. I was quite satisfied with my results until I stumbled upon the GAMLSS package, which offers an all-in-one solution for finding a suiting distribution to your data. So I applied the easy to use `fitDist' function to see what's the outcome: ``` > fit <- fitDist(df$prcp, k = round(log(length(df$prcp))), type = "realplus", trace = F , try.gamlss = TRUE) There were 11 warnings (use warnings() to see them) > summary(fit) ******************************************************************* Family: c("BCCG", "Box-Cox-Cole-Green") Call: gamlssML(formula = y, family = DIST[i], data = sys.parent()) Fitting method: "nlminb" Coefficient(s): Estimate Std. Error t value Pr(>|t|) eta.mu 45.1741563 0.9560784 47.24943 < 2.22e-16 *** eta.sigma -0.5522153 0.0265535 -20.79636 < 2.22e-16 *** eta.nu 0.4250894 0.0439835 9.66474 < 2.22e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Degrees of Freedom for the fit: 3 Residual Deg. of Freedom 842 Global Deviance: 7827.23 AIC: 7833.23 SBC: 7847.45 ``` First of all, the function offers me a distribution I've never heard of, but that's fair since (you might've already guessed it) I am quite new to this statistical topic it makes sense that I don't know "all" distributions. But, as I investigated the results of the other distributions, it got quite interesting (values are the BIC): ``` > fit$fits BCCG BCCGo GA GG BCT BCTo BCPE BCPEo WEI2 WEI WEI3 GB2 GIG exGAUS LNO 7848.230 7848.230 7852.426 7853.005 7854.763 7854.763 7854.989 7854.989 7855.461 7855.461 7855.461 7857.338 7859.426 7871.134 7964.480 LOGNO2 LOGNO IG EXP PARETO2o PARETO2 GP IGAMMA 7964.480 7964.480 8260.617 8290.105 8297.106 8297.106 8297.112 8521.495 ``` Right behind the mysterious BCCG-distributions follows the GA (gamma) distribution, quite closely followed by Weibull, and then, lognormal ranked even lower. So here are my questions: 1. What's the reason for these contrary results? Is it due to different fitting methods? 2. What can I further do to find the most fitting distribution? 
I know that, in general, all models are wrong and that it probably won't make a huge difference whether I pick the result of `fitdistrplus` or `GAMLSS` for my further analysis (calculation of the SPI). But still, I am wondering about the comparability of research:

3. In a publication, I would probably state the package that I used for the fitting and present some goodness-of-fit values (such as the BIC). But, obviously, there's more to it than that, and the results depend on the package used. Yet in the publications that I investigated, there's rarely more information offered. Doesn't that hinder the comparability of research?
I don't think there's a deep meaning behind differentiating a constant. I'm sure if you think long enough [something can be found](https://en.wikipedia.org/wiki/Rorschach_test). However, I'd sort this into the "[integration tricks](https://brilliant.org/wiki/integration-tricks/#differentiation-under-the-integral-sign)" bucket. It's routinely used in theoretical physics.
So, I was analyzing the [Calibron 12](http://www.creativecrafthouse.com/index.php?main_page=product_info&products_id=844) puzzle and to me it looks like a bin-packing problem. Is this puzzle actually a bin-packing problem and thus NP-hard for the perfect solution? Basically, you can make your own calibron 12-ish puzzle by doing the following: Take a rectangular piece of wood. Cut the wood into randomly sized rectangles. Jumble the pieces. Now put it back together into the exact same shape. (note there are technically 4 solutions, due to mirroring vertically or horizontally also fitting the exact shape)
There are two different "laws" being graphed here, [*Moore's law*](http://en.wikipedia.org/wiki/Moore%27s_law), and [*Dennard scaling*](http://en.wikipedia.org/wiki/Dennard_scaling). Moore's law is an economic observation, made by Gordon Moore in 1965, that predicts that the number of transistors on a die will tend to double about every two years. It does not seem to be slowing down (yet). While there are reasons to believe that transistors will stop scaling down soon (14nm is 140 Angstroms (about the width of 140 hydrogen atoms)) it may be possible to continue Moore's law scaling for some time, either by making bigger chips (what Moore originally predicted) or by stacking multiple layers of transistors in the 3rd dimension. Dennard Scaling, on the other hand, is an observation, made by Robert Dennard in 1974 that every time you scale down the width of a CMOS gate by a factor of 2, you can reduce both supply voltage and the threshold voltage by a factor of 2, the clock frequency can improve by a factor of 2, and the power density will stay constant. (Or you can scale down the voltage by less than 2, and you'll get a *better* than 2x clock frequency boost, but an increase in power.) When Dennard's observation was made a CMOS transistor was about 5 micron = 5000 nm wide. (So we've scaled transistor widths by a factor of about 360 in about 40 years.) The limit that showed up in the early 2000s has to do with threshold voltage. Clock frequency (at a particular transistor size) is at best proportional to $V\_{dd} - V\_{th}$ (supply voltage minus threshold voltage), but as you scale $V\_{th}$ down leakage current (and power) increases exponentially. That was fine from 1974 to about 2000 because $V\_{dd}$ was much larger than $V\_{th}$. (In 1974 $V\_{dd}$ was usually about 12V.) So we could hold $V\_{th}$ at around a volt or so, and just change $V\_{dd}$. But now with $V\_{dd}$ around 1.3V and $V\_{th}$ around .5V, we can't scale voltage much anymore (without dramatically increasing leakage power.)
I'm working with a dataset with a number of potential predictors like:

**Age**: continuous

**Number of children**: discrete and numerical

**Marital Situation**: categorical (Married/Single/Divorced...)

**Id\_User**: categorical (an id of the user who conducted the first interview with this person)

I'm stopping at four potential predictors; there are more, but for the sake of brevity these are enough to ask my question.

**Question:** Continuous features are easy to deal with: normalize them and feed them to the model. But what about **categorical** and **independent** features**?**

**Note:** I get that categorical features that follow a certain pattern can be encoded as integers and fed to the model, but what if those categorical features have no meaning as integers (1 for single, 2 for married, 3 for divorced; for a model that treats them as quantitative predictors it doesn't make sense to feed them in like that)?

**Any ways to deal with these different types of features?**
What you are looking for are called *dummy variables*: they convert your categorical data into a matrix in which a column is 1 if the person belongs to the corresponding category and 0 otherwise. The ID variable should not be converted, because you don't want your model to overfit to your ID data (meaning: you don't want your model to memorize the result for every ID; you want your model to generalize).

```
import pandas as pd
dataset2 = pd.get_dummies(dataset)
```
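As a small illustration of what `get_dummies` produces (toy data of my own, not from the question):

```python
import pandas as pd

toy = pd.DataFrame({
    "age": [25, 40, 31],
    "marital_situation": ["Single", "Married", "Divorced"],
})

# One 0/1 column per category; numeric columns are left untouched.
# drop_first=True drops one redundant column, which helps models that
# dislike perfectly collinear inputs (e.g. plain linear/logistic regression).
print(pd.get_dummies(toy, columns=["marital_situation"]))
print(pd.get_dummies(toy, columns=["marital_situation"], drop_first=True))
```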
I want to compute the precision, recall and F1-score for my binary KerasClassifier model, but don't find any solution. Here's my actual code: ``` # Split dataset in train and test data X_train, X_test, Y_train, Y_test = train_test_split(normalized_X, Y, test_size=0.3, random_state=seed) # Build the model model = Sequential() model.add(Dense(23, input_dim=45, kernel_initializer='normal', activation='relu')) model.add(Dense(1, kernel_initializer='normal', activation='sigmoid')) # Compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) tensorboard = TensorBoard(log_dir="logs/{}".format(time.time())) time_callback = TimeHistory() # Fit the model history = model.fit(X_train, Y_train, validation_split=0.3, epochs=200, batch_size=5, verbose=1, callbacks=[tensorboard, time_callback]) ``` And then I am predicting on new test data, and getting the confusion matrix like this: ``` y_pred = model.predict(X_test) y_pred =(y_pred>0.5) list(y_pred) cm = confusion_matrix(Y_test, y_pred) print(cm) ``` But is there any solution to get the accuracy-score, the F1-score, the precision, and the recall? (If not complicated, also the cross-validation-score, but not necessary for this answer) Thank you for any help!
Metrics have been removed from Keras core. You need to calculate them manually. They removed them on [2.0 version](https://github.com/keras-team/keras/wiki/Keras-2.0-release-notes). Those metrics are all global metrics, but Keras works in batches. As a result, it might be more misleading than helpful. However, if you really need them, you can do it like this ```py from keras import backend as K def recall_m(y_true, y_pred): true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) possible_positives = K.sum(K.round(K.clip(y_true, 0, 1))) recall = true_positives / (possible_positives + K.epsilon()) return recall def precision_m(y_true, y_pred): true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = true_positives / (predicted_positives + K.epsilon()) return precision def f1_m(y_true, y_pred): precision = precision_m(y_true, y_pred) recall = recall_m(y_true, y_pred) return 2*((precision*recall)/(precision+recall+K.epsilon())) # compile the model model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc',f1_m,precision_m, recall_m]) # fit the model history = model.fit(Xtrain, ytrain, validation_split=0.3, epochs=10, verbose=0) # evaluate the model loss, accuracy, f1_score, precision, recall = model.evaluate(Xtest, ytest, verbose=0) ```
I am a beginner in Machine Learning. In SVM, the separating hyperplane is defined as $y = w^T x + b$. Why do we say the vector $w$ is orthogonal to the separating hyperplane?
The reason why $w$ is normal to the hyper-plane is because we define it to be that way: Suppose that we have a (hyper)plane in 3d space. Let $P\_0$ be a point on this plane i.e. $P\_0 = x\_0, y\_0, z\_0$. Therefore the vector from the origin $(0,0,0)$ to this point is just $<x\_0,y\_0,z\_0>$. Suppose that we have an arbitrary point $P (x,y,z)$ on the plane. The vector joining $P$ and $P\_0$ is then given by: $$ \vec{P} - \vec{P\_0} = <x-x\_0, y-y\_0, z-z\_0>$$ Note that this vector lies in the plane. Now let $\hat{n}$ be the normal (orthogonal) vector to the plane. Therefore: $$ \hat{n} \bullet (\vec{P}-\vec{P\_0}) = 0$$ Therefore: $$\hat{n} \bullet \vec{P}- \hat{n} \bullet \vec{P\_0} = 0$$ Note that $-\hat{n} \bullet \vec{P\_0}$ is just a number and is equal to $b$ in our case, whereas $\hat{n}$ is just $w$ and $\vec{P}$ is $x$. So by definition, $w$ is orthogonal to the hyperplane.
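If you want to convince yourself numerically, here is a quick check of my own (a toy example, not part of the derivation above): pick a plane $w^T x + b = 0$, take two points on it, and verify that $w$ is orthogonal to the vector joining them.

```python
import numpy as np

w = np.array([2.0, -1.0, 3.0])
b = 4.0

def point_on_plane(x0, x1):
    # Solve w.x + b = 0 for the last coordinate.
    x2 = -(b + w[0] * x0 + w[1] * x1) / w[2]
    return np.array([x0, x1, x2])

p = point_on_plane(1.0, 2.0)
q = point_on_plane(-3.0, 0.5)

print(np.dot(w, p) + b)   # ~0: p lies on the plane
print(np.dot(w, q) + b)   # ~0: q lies on the plane
print(np.dot(w, p - q))   # ~0: w is orthogonal to any vector lying in the plane
```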
I got this question in an interview recently. I was given a bunch of points (e.g. Start(88, 81), Dest(85, 80), P1(19, 22), P2(31, 15), P3(27, 29), P4(30, 10), P5(20, 26), P6(5, 14)) on a 2D plane, where two of them were the source and the destination. I was asked to calculate the least cost to travel through all the points, starting from the source and ending at the destination point. The cost to travel between any two points is |x1 - x2| + |y1 - y2|. What I thought of doing was to construct a cost matrix and apply Dijkstra, since I thought constructing the shortest-path tree must be the solution, but I couldn't figure out how to construct this cost matrix. What is the best approach for this?
This is the [Traveling Salesman Problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem) with the $L\_1$ metric (also known as the [Manhattan distance](https://en.wikipedia.org/wiki/Taxicab_geometry)). Unfortunately, this version of the TSP is also NP-hard. Therefore, no known algorithm always produces the optimal solution, runs efficiently, and works for all inputs (and none exists unless P = NP). Instead, you'll have to use a heuristic or approximation, or accept a solution that can take exponential time in the worst case. Dijkstra's algorithm won't suffice. Strictly speaking, the standard formulation of the TSP doesn't allow you to visit any point more than once. However, in this situation allowing you to visit a point more than once doesn't make the problem any easier, since visiting a point more than once will never help you find a shorter solution. Perhaps the following paper will be useful: [The L1 Traveling Salesman Problem](https://www.sciencedirect.com/science/article/pii/0020019084901108). Donald C.S. Allison, M.T. Noga, Information Processing Letters, vol 18, 1984, pp.195-199.
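That said, for an interview-sized instance like the one in the question (fewer than about 15 points), an exact Held–Karp dynamic program over subsets is perfectly feasible, at roughly $O(2^n n^2)$ time. Here is a sketch of my own with the Manhattan metric and fixed start/end points:

```python
from itertools import combinations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def shortest_route(start, dest, others):
    """Minimum-cost path from start to dest visiting every point in 'others'."""
    m = len(others)
    # dp[(S, j)]: cheapest path from start that visits exactly the subset S
    # of intermediate points and currently ends at point j (j must be in S).
    dp = {(1 << j, j): manhattan(start, others[j]) for j in range(m)}
    for size in range(2, m + 1):
        for subset in combinations(range(m), size):
            S = 0
            for j in subset:
                S |= 1 << j
            for j in subset:
                prev = S ^ (1 << j)
                dp[(S, j)] = min(dp[(prev, k)] + manhattan(others[k], others[j])
                                 for k in subset if k != j)
    full = (1 << m) - 1
    return min(dp[(full, j)] + manhattan(others[j], dest) for j in range(m))

start, dest = (88, 81), (85, 80)
others = [(19, 22), (31, 15), (27, 29), (30, 10), (20, 26), (5, 14)]
print(shortest_route(start, dest, others))
```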
Let's say that I have two 1 dimensional arrays, and when I plot the two arrays they look like this: [![enter image description here](https://i.stack.imgur.com/kklTf.png)](https://i.stack.imgur.com/kklTf.png) If you look at the top and bottom graphs, then you can see that the highlighted parts are very similar (in this case they're exactly the same). I need to find a way to find these sections using some sort of algorithm or method. I've tried searching everywhere in the numpy and scikit docs, I even searched everywhere on stackexchange and couldn't find a solution for this problem. I don't think anyone published a solution for this yet. Does anyone have any idea how I can find similar sections in two graphs? My dataset is a 1 dimensional data array for each graph, and I need a algorithm that tells me where the similar parts are. Just remember that the similar sections are never 100% the same, sometimes they're a little bit off and sometimes there's anomalies so a small part would be different but everything else will still look the same. Also you can ignore the curvature of the graph, that's irrelevant. Only the X and Y coordinates of the data points are important. I can't read explanations that have a lot of maths inside of them and I also can't turn explanations that have a lot of maths into code, I'm still learning how to do that at University. But I'm really good at reading pseudo-code and other programming languages so please give me an answer with real code.
Well, you need to first define what your threshold for 'similar' is, and also what length of similarity is meaningful to you. One way of achieving this is by taking a 'slice' of the first set of coordinates and comparing it against each slice of the same size in the second set. If all values are within a certain threshold distance, bingo. You can then repeat this with the next slice of coordinates from set #1. e.g. here is an $O(n^2)$ implementation (assuming `x1`, `y1`, `x2`, `y2` are NumPy arrays):

```
import numpy as np

slice_len = 10
thresh = 2
overlap_x1 = []
overlap_x2 = []

for i in range(len(x1) - slice_len):
    for j in range(len(x2) - slice_len):
        # checking that the y coords are all at most 'thresh' far away
        if np.max(np.abs(y1[i:i + slice_len] - y2[j:j + slice_len])) < thresh:
            # Adding the similar x-coords to the containers
            overlap_x1.extend(x1[i:i + slice_len])
            overlap_x2.extend(x2[j:j + slice_len])

# Removing duplicates from the overlap.
# Since they are x-coords, they are monotonically increasing, so sorting preserves the order.
overlap_x1 = sorted(set(overlap_x1))
overlap_x2 = sorted(set(overlap_x2))
```
Is there an intuitive meaning of the XOR of two numbers that doesn't involve binary and uses only decimal? Or is it always converted into binary and then XORed?
Bitwise operations like (bitwise) AND, OR, and XOR don't make much sense from the perspective of decimal expansion. They do make some sense in bases which are powers of 2 like hexadecimal, since in such bases they also operate digit by digit.
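For a concrete illustration (my own, not part of the original answer): XOR acts on each hexadecimal digit independently, because 16 is a power of 2, but there is no analogous digit-wise rule in base 10.

```python
a, b = 0x3A7, 0x1C5      # 935 and 453 in decimal

# Digit-wise in hex: 0x3^0x1, 0xA^0xC, 0x7^0x5 give exactly the digits of a^b.
print(hex(a ^ b))                                       # 0x262
print(hex(0x3 ^ 0x1), hex(0xA ^ 0xC), hex(0x7 ^ 0x5))   # 0x2 0x6 0x2

# In decimal there is no such rule: the digits of 610 have no
# per-digit relationship to the digits of 935 and 453.
print(935 ^ 453)                                        # 610
```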
I have a simple neural network (NN) for MNIST classification. It includes 2 hidden layers, each with 500 neurons. Hence the dimensions of the NN are: 784-500-500-10. ReLU is used in all neurons, softmax is used at the output, and cross-entropy is the loss function.

What puzzles me is why overfitting doesn't appear to devastate the NN.

Consider the number of parameters (weights) of the NN. It's approximately $$784\times500+500\times 500+500\times 10=647000.$$ However, in my experiment, I used only $6000$ examples (a tenth of the MNIST training set) to train the NN. (This is just to keep the run time short. The training & test error would both go down considerably if I used more training examples.) I repeated the experiment 10 times. Plain stochastic gradient descent is used (no RMS prop or momentum); no regularization/drop-out/early-stopping was used. The training error and test error reported were:
$$\begin{array}{|l|c|c|c|c|c|c|c|c|c|c|}
\hline
\textrm{No.} & 1 & 2 & 3 &4 &5&6&7&8&9&10\\
\hline
E\_{train}(\%) & 7.8 & 10.3 & 9.1 & 11.0 & 8.7 & 9.2 & 9.3 & 8.3 &10.3& 8.6\\
\hline
E\_{test}(\%) & 11.7 & 13.9 & 13.2 & 14.1 &12.1 &13.2 &13.3 &11.9 &13.4&12.7\\
\hline
\end{array}$$

Note that in all 10 experiments (each with independent random parameter initialization), the test error differed from the training error only by approx. 4%, even though I used 6K examples to train 647K parameters.

The VC dimension of the neural network is on the order of $O(|E|\log(|E|))$ at least, where $|E|$ is the number of edges (weights). So why wasn't the test error miserably higher (e.g. 30% or 50%) than the training error?

I'd greatly appreciate it if someone could point out what I missed. Thanks a lot!

**[ EDITS 2017/6/30]**

To clarify the effects of early stopping, I did the 10 experiments again, each now with 20 epochs of training. The error rates are shown in the figure below:

$\qquad\qquad\qquad$ [![enter image description here](https://i.stack.imgur.com/WkvVr.png)](https://i.stack.imgur.com/WkvVr.png)

The gap between test and training error did increase as more epochs were used in the training. However, the tail of the test error stayed nearly flat after the training error was driven to zero. Moreover, I saw similar trends for other sizes of the training set. The average error rate at the end of 20 epochs of training is plotted against the size of the training set below:

$\qquad\qquad\qquad$ [![enter image description here](https://i.stack.imgur.com/rTxW7.png)](https://i.stack.imgur.com/rTxW7.png)

So overfitting does occur, but it doesn't appear to devastate the NN. Considering the number of parameters (647K) we need to train and the number of training examples we have (<60K), the question remains: why doesn't overfitting easily render the NN useless? Moreover, is this true for the ReLU NN for all classification tasks with softmax output and cross-entropy objective function? Has anyone seen a counter-example?
I have replicated your results using Keras, and got very similar numbers so I don't think you are doing anything wrong. Out of interest, I ran for many more epochs to see what would happen. The accuracy of test and train results remained pretty stable. However, the loss values drifted further apart over time. After 10 epochs or so, I was getting 100% train accuracy, 94.3% test accuracy - with loss values around 0.01 and 0.22 respectively. After 20,000 epochs, the accuracies had barely changed, but I had training loss 0.000005 and test loss 0.36. The losses were also still diverging, albeit very slowly. In my opinion, the network is clearly over-fitting. So the question could be re-phrased: Why, despite over-fitting, does a neural network trained to the MNIST data set still generalise apparently reasonably well in terms of accuracy? It is worth comparing this 94.3% accuracy with what is possible using more naive approaches. For instance, a simple linear softmax regression (essentially the same neural network without the hidden layers), gives a quick stable accuracy of 95.1% train, and 90.7% test. This shows that a lot of the data separates linearly - you can draw hyperplanes in the 784 dimensions and 90% of the digit images will sit inside the correct "box" with no further refinement required. From this, you might expect an overfit non-linear solution to get a worse result than 90%, but maybe no worse than 80% because intuitively forming an over-complex boundary around e.g. a "5" found inside the box for "3" will only incorrectly assign a small amount of this naive 3 manifold. But we're better than this 80% lower bound guesstimate from the linear model. Another possible naive model is template matching, or nearest-neighbour. This is a reasonable analogy to what the over-fitting is doing - it creates a local area close to each training example where it will predict the same class. Problems with over-fitting occur in the space in-between where the values of activation will follow whatever the network "naturally" does. Note the worst case, and what you often see in explanatory diagrams, would be some highly curved almost-chaotic surface which travels through other classifications. But actually it may be more natural for the neural network to more smoothly interpolate between points - what it actually does depends on the nature of the higher order curves that the network combines into approximations, and how well those already fit to the data. I borrowed the code for a KNN solution from [this blog on MNIST with K Nearest Neighbours](https://medium.com/towards-data-science/mnist-with-k-nearest-neighbors-8f6e7003fab7). Using k=1 - i.e. choosing the label of the nearest from the 6000 training examples just by matching pixel values, gives an accuracy of 91%. The 3% extra that the over-trained neural network achieves does not seem quite so impressive given the simplicity of pixel-match counting that KNN with k=1 is doing. I tried a few variations of network architecture, different activation functions, different number and sizes of layers - none using regularisation. However, with 6000 training examples, I could not get any of them to overfit in a way where test accuracy dropped dramatically. Even reducing to just 600 training examples just made the plateau lower, at ~86% accuracy. 
My basic conclusion is that MNIST examples have relatively smooth transitions between classes in feature space, and that neural networks can fit to these and interpolate between the classes in a "natural" manner given NN building blocks for function approximation - without adding high frequency components to the approximation that could cause issues in an overfit scenario. It might be an interesting experiment to try with a "noisy MNIST" set where an amount of random noise or distortion is added to both training and test examples. Regularized models would be expected to perform OK on this dataset, but perhaps in that scenario the over-fitting would cause more obvious problems with accuracy. --- *This is from before the update with further tests by OP.* From your comments, you say that your test results are all taken after running a single epoch. You have essentially used early stopping, despite writing that you have not, because you have stopped the training at the earliest possible point given your training data. I would suggest running for many more epochs if you want to see how the network is truly converging. Start with 10 epochs, consider going up to 100. One epoch is not many for this problem, especially on 6000 samples. Although increasing number of iterations is not guaranteed to make your network overfit worse than it already has, you haven't really given it much of a chance, and your experimental results so far are not conclusive. In fact I would half expect your test data results to *improve* following a 2nd, 3rd epoch, before starting to fall away from the training metrics as the epoch numbers increase. I would also expect your training error to approach 0% as the network approached convergence.
An automaton is an abstract model of a digital computer. Digital computers are completely deterministic; their state at any time is uniquely predictable from the input and the initial state. When we are trying to model real systems, why include nondeterminism in Automata theory?
(This is a rewording of some of the other answers but I'll post it anyway:) You write: *An automaton is an abstract model of a digital computer.* I disagree! Automata model how we humans specify computation, not only how computers execute it. Nondeterminism is exactly the difference. Our specifications are often nondeterministic. For instance, take [merge sort](http://en.wikipedia.org/wiki/Merge_sort). Merge sort is sorting by splitting the items to be sorted into two halves of roughly equal size, sorting each half using merge sort, and merging the sorted results. This completely specifies the idea of merge sort, but it isn't deterministic: it doesn't specify an order in which to sort the halves (for all we care, it may be done concurrently), nor does it specify an exact way to determine the split. Those details will need to be filled in in order to arrive at a deterministic, sequential version of merge sort that can be implemented by a single-threaded computer program, but I would say they are part of a particular way of doing merge sort, not the idea of merge sort itself. The same thing is true for algorithms in general - e.g. cookbook recipes. Some people *define* algorithms to be deterministic, in which case this more general and in my opinion more natural notion of 'algorithm' needs a different name. The idea of working with nondeterministic specifications was formalized by Dijkstra's method of programming, which starts out by specifications that only give pre- and postconditions to be met by the program, and systematically develops a deterministic, imperative program from them. Dijkstra would probably have said: *sorting* is the problem, the relationship between pre- and postconditions we're trying to establish; *merge sort* is an approach to doing that, somewhere halfway between the problem specification and a deterministic solution; a particular, deterministic merge sorting algorithm is a concrete deterministic solution. But the same general approach can be used for developing concurrent programs, in which the eventual program is still nondeterministic. Such programs can e.g. be run in distributed computing environments.
I would like to learn a bit more on interactive data visualization (zooming, pointing, brushing, point-mapping and so on). I would welcome any: 1. Tutorial/guide/book(?)/video on **how to use** such methods for statistical exploration. 2. Pointers for good/interesting **interactive data-viz packages** (in R, and outside of it) Just to start the ball rolling, I know that in R there are various ways to get interactive visualization, like [rggobi](http://www.ggobi.org/rggobi/), the new [googleViz R package](http://www.r-bloggers.com/r-and-google-visualization-api/), the [animation package](http://cran.r-project.org/web/packages/animation/index.html) and some others. But if there are other packages worth exploring (offering things that R doesn't), I would be glad to know about them (like jmp, mathlab, spss, sas, excel, and so on). p.s: this is the first question to use the tag "interactive-visualization"
Apart from [Protovis](http://vis.stanford.edu/protovis/) (HTML+JS) or [Mayavi](http://mayavi.sourceforge.net/) (Python), I would recommend [Processing](http://processing.org/) which is > > an open source programming language > and environment for people who want to > create images, animations, and > interactions. Initially developed to > serve as a software sketchbook and to > teach fundamentals of computer > programming within a visual context. > > > There are a lot of open-source scripts on <http://www.openprocessing.org/>, and a lot of [related books](http://processing.org/learning/books/) that deal with Processing but also data visualization. I know there is a project to provide an R interface, [rprocessing](https://r-forge.r-project.org/projects/rprocessing/), but I don't know how it goes. There's also an interface with clojure/incanter (see e.g., [Creating Processing Visualizations with Clojure and Incanter](http://data-sorcery.org/2009/08/30/processing-intro/)). There are many online resources, among which Stanford class notes, e.g. [CS448B](https://graphics.stanford.edu/wikis/cs448b-10-fall), or [7 Classic Foundational Vis Papers You Might not Want to Publicly Confess you Don’t Know](http://fellinlovewithdata.com/guides/7-classic-foundational-vis-papers).
I have a file that has a number (a positive integer) on each row. Given a number $q$, I want to find a value that's a sum of some 8 numbers in the file, and is as close to $q$ as possible. So, supposing that I have a file that looks like this: > > 345 > > 3 > > 2 > > 3453 > > 1234 > > 6 > > 7 > > 34 > > 12 > > 1111 > > 48 > > 413 > > > If the given number is 526 I would like to get a solution like: 3+2+6+7+34+12+48+413 = 525. So those are the 8 numbers that best fit our condition. Is there an efficient algorithm with a low complexity? The file can have >1000 rows and the numbers won't be bigger than 10k.
Dynamic programming
-------------------

One approach is to use dynamic programming. If you have $n$ numbers ($n$ rows in the file), and each number is in the range $1..m$, then the obvious dynamic programming algorithm has running time about $8mn$. For your parameter size, this might be adequate.

In particular, let $A[1..n]$ be an array of your $n$ numbers. Define $T[x,i,j] = 1$ if there is a way to write $x$ as a sum of $j$ numbers from among $A[1..i]$, or 0 if not. Now we have the relation

$$T[x,i,j] = T[x,i-1,j] \lor T[x-A[i],i-1,j-1]$$

(with the convention that $T[x,i,j]$ always evaluates to 0 if $x$ is negative). Consequently, you can compute all of the $T[x,i,j]$ values iteratively. There are $8mn$ values to compute, so this takes $8mn$ time.

Once you have filled in the $T$ table, now you can answer any query quickly. Suppose you want to find the closest number to $q$ that can be written as a sum of 8 numbers. Then you should scan $T[q,n,8]$, $T[q+1,n,8]$, $T[q-1,n,8]$, $T[q+2,n,8]$, $T[q-2,n,8]$, etc., until you find the first entry that is one. This takes at most $m$ iterations. In total, the running time is $O(mn)$, where the constant hidden by the big-O notation is small. The space required is $O(mn)$ as well, but this can be reduced to $O(m)$ if you fill in the $T$ table in the right order and discard entries once they are no longer needed.

Two-way merge
-------------

There's an alternative algorithm that will probably be significantly faster for your specific problem. The basic idea is to first find an algorithm that works for sums of 2 numbers, then apply that repeatedly.

Let $S$ be a set of numbers. Define $S \bowtie S$ to be the set

$$S \bowtie S = \{s+t : s, t \in S\}.$$

In other words, $S \bowtie S$ is the set of all possible sums of two values from $S$. In our case, we're going to represent $S$ as a sorted list of numbers.

It turns out you can compute $S \bowtie S$ from $S$, using an FFT-based convolution procedure. Let $N$ be the smallest power of two that is larger than $2m$. Define an $N$-vector $v$ whose $i$th coordinate is 1 if $i\in S$, or 0 otherwise. Compute $v \otimes v$, the convolution of $v$ with itself. It turns out that $v \otimes v$ is the $N$-vector for $S \bowtie S$, so you can recover $S \bowtie S$ from $v \otimes v$. Also, you can compute the convolution $v \otimes v$ using a discrete FFT over the $N$th complex roots of unity: you take the FFT of $v$, square every entry, then take the inverse FFT. The running time of the FFT is $O(N \lg N)$. Since $N=O(m)$, it follows that the running time of this procedure is $O(m \lg m)$.

Now we repeat this recursively three times. In other words, we let $S = $ the set of numbers in the original file, compute $T = S \bowtie S$ (the set of 2-way sums), compute $U = T \bowtie T$ (the set of 4-way sums), and then compute $V = U \bowtie U$ (the set of 8-way sums). This gives us the set of all numbers that can be represented as an 8-way sum.

If you like, a more elegant way to compute the set of 8-way sums is as follows: let $N$ be the smallest power of two that is larger than $8m$, form an $N$-vector $v$ where $v\_i=1$ if and only if $i$ is part of the original file, take the discrete FFT of $v$ (over $N$th complex roots of unity), raise each entry to the 8th power, then take the inverse FFT. This gives you an $N$-vector that has a 1 in its $i$th coordinate if and only if $i$ is an attainable 8-way sum.
Finally, given a query $q$, we can quickly look it up in $V$ (using binary search, say) and find the nearest number to $q$ that's in $V$. The total running time will be $O(m \lg m)$, and the total space will be $O(m)$. For your parameter settings, this should be extremely fast, given that $m$ is no larger than 10,000. However, the implementation complexity is much higher, since you need to implement an FFT, worry about round-off error and precision for complex numbers, and so on.
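For what it's worth, here is a rough NumPy sketch of that FFT approach (my own, following the construction above rather than any tested implementation). Note that, like the $S \bowtie S$ construction, it treats the input as a set and so allows the same value to be reused within a sum; if each file entry may be used only once, the dynamic-programming approach is the safer choice. For large inputs you would also want an exact integer convolution to avoid floating-point round-off issues.

```python
import numpy as np

def attainable_8way_sums(numbers, max_value=10_000):
    size = 8 * max_value + 1                  # largest possible 8-way sum, plus one
    v = np.zeros(size)
    v[np.unique(numbers)] = 1.0               # indicator vector of the input values

    # Convolve v with itself 8 times: FFT, raise to the 8th power, inverse FFT.
    spectrum = np.fft.rfft(v, n=size)
    counts = np.fft.irfft(spectrum ** 8, n=size)

    # counts[s] approximates the number of ordered 8-tuples summing to s.
    return np.nonzero(counts > 0.5)[0]

sums = attainable_8way_sums([345, 3, 2, 3453, 1234, 6, 7, 34, 12, 1111, 48, 413])
q = 526
print(sums[np.argmin(np.abs(sums - q))])      # nearest attainable 8-way sum to q
```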
Let $\{X\_i\}\_{i\geq 1}$ be IID with finite second moment, and $$ Y\_n = \frac{2}{n(n+1)}\sum\_{i=1}^n \,i\cdot X\_i \, , \qquad n\geq 1 \, . $$ Could you please tell me how I can show that $Y\_n$ converges in probability to $\mathrm{E}[X\_1]$? I'm thinking of the Kolmogorov convergence criterion, but it seems like I cannot prove it using that. Any suggestions?
Actually, we can even show that $\mathbb E|Y\_n-\mathbb E[X\_1]|^2\to 0$. Indeed, since $\sum\_{j=1}^nj=n(n+1)/2$ and $\mathbb E[X\_j]=\mathbb E[X\_1]$ for all $j$, $$Y\_n-\mathbb E[X\_1]=\frac 2{n(n+1)}\sum\_{j=1}^nj(X\_j-\mathbb E[X\_j]),$$ hence $$\tag{1}\mathbb E|Y\_n-\mathbb E[X\_1]|^2=\frac 4{n^2(n+1)^2}\sum\_{i,j=1}^n ij\mathbb E\left[(X\_i-\mathbb E[X\_i])(X\_j-\mathbb E[X\_j])\right].$$ If $i\neq j$, then by independence $\mathbb E\left[(X\_i-\mathbb E[X\_i])(X\_j-\mathbb E[X\_j])\right]=0$ and plugging it in (1), $$\tag{2}\mathbb E|Y\_n-\mathbb E[X\_1]|^2=\frac 4{n^2(n+1)^2}\sum\_{j=1}^n j^2\mathbb E\left[(X\_j-\mathbb E[X\_j])^2\right].$$ Using now the fact that $X\_j$ has the same distribution as $X\_1$ and bounding $\sum\_{j=1}^nj^2$ by $n^2(n+1)$, equality (2) yields $$\mathbb E|Y\_n-\mathbb E[X\_1]|^2\leqslant\frac 4{n+1}\mathbb E\left[(X\_1-\mathbb E[X\_1])^2\right]$$ and we are done, since convergence in $L^2$ implies convergence in probability (by Chebyshev's inequality).
All the #SAT solvers I know, e.g. RelSat and C2D, only return the number of satisfying instances. But I want to know each of those instances. Is there such a #SAT solver, or how should I modify an available #SAT solver to do this? Thank you.
You are looking for an ALL-SAT or all solutions SAT solver. This is a different problem from #SAT. You do not have to enumerate all solutions to count them. I do not know of a tool that solves your problem because people add these algorithms on top of existing SAT solvers but rarely seem to release these extensions. Two papers that should help you in modifying a CDCL solver to implement ALL-SAT are below. [Memory Efficient All-Solutions SAT solver and its Application to Reachability](http://www.cs.technion.ac.il/users/orna/All_SAT_reachability_FMCAD04.ps.gz), O. Grumberg, A. Schuster, A. Yadgar, FMCAD 2004 Here is a recent article posted on the arXiv. [Extending Modern SAT Solvers for Enumerating All Models](http://arxiv.org/abs/1305.0574v1), Said Jabbour, Lakhdar Sais, Yakoub Salhi, 2013 You could try contacting these authors for their implementation.
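If you end up rolling your own, the usual textbook approach (my description, not taken from the papers above) is blocking clauses: repeatedly call an incremental SAT solver and, after each model, add a clause forbidding exactly that model. A rough sketch with the python-sat (PySAT) package, assuming it is installed:

```python
from pysat.solvers import Glucose3

def all_models(clauses):
    """Enumerate all satisfying assignments of a CNF given as a list of clauses."""
    with Glucose3(bootstrap_with=clauses) as solver:
        while solver.solve():
            model = solver.get_model()          # e.g. [1, -2, 3]
            yield model
            # Block this exact assignment so the next call finds a different one.
            solver.add_clause([-lit for lit in model])

# (x1 or x2) and (not x1 or x2)
for m in all_models([[1, 2], [-1, 2]]):
    print(m)
```

Keep in mind that the number of models can be exponential, which is exactly why #SAT solvers count without enumerating; the papers above describe smarter ways to compress the enumeration (e.g. by generalizing each model to a partial assignment before blocking it).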
We say that the language $J \subseteq \Sigma^{\*}$ is *dense* if there exists a polynomial $p$ such that $$ |J^c \cap \Sigma^n| \leq p(n)$$ for all $n \in \mathbb{N}.$ In other words, for any given length $n$ there exist only polynomially many words of length $n$ that are not in $J.$ The problem I am currently studying asks to show the following

> If there exists a dense $NP$-complete language then $P = NP$

What the text suggests is to consider the polynomial reduction to $3$-$SAT$ and then construct an algorithm that tries to satisfy the given $CNF$ formula while also generating elements in $J^c.$ What I am wondering is

> Is there a more direct proof? Is this notion known in a more general setting?
This is a nice homework problem about Mahaney's theorem. Note that the complement of a "dense" language is a sparse language. Moreover if a language is $\mathsf{NP}$-complete its complement is $\mathsf{coNP}$-complete. If there is a "dense" $\mathsf{NP}$-complete language, there is a [sparse](http://en.wikipedia.org/wiki/Sparse_language) $\mathsf{coNP}$-complete language. Mahaney's theorem tells us that there is no sparse $\mathsf{NP}$-complete language unless $\mathsf{P}=\mathsf{NP}$. We can adopt the proof to show that there is no sparse $\mathsf{coNP}$-complete language unless $\mathsf{P}=\mathsf{coNP}$ which is equivalent to $\mathsf{P}=\mathsf{NP}$ (since $\mathsf{P}$ is closed under complements). In summary, the answer is no unless $\mathsf{P}=\mathsf{NP}$. Note that if $\mathsf{P}=\mathsf{NP}$ then every nontrivial language is $\mathsf{NP}$-complete. ps: You may want to try the following and then use Mahaney's theorem: there is a sparse $\mathsf{NP}$-complete set iff there is a sparse $\mathsf{coNP}$-complete set. However I doubt that a proof for this statement would be much easier than a proof for Mahaney's theorem.
A deterministic queue automaton (DQA) is like a PDA except the stack is replaced by a queue. A queue is a tape allowing symbols to be written (pushed) on the left end and read (pulled) on the right end. I've already proved that a 2-tape Turing machine can simulate a DQA. Now I'm proving that a DQA can simulate a Turing machine (TM). Let the queue store all the input, with the right-end symbol being the one being read. Suppose $a$ is the right-end symbol in the queue. The transition $\delta(q,a)=(r,b,L)$ in the TM is easy to simulate: just pull $a$ and push $b$. Now the right-end symbol is the symbol to the left of $a$, which is like moving the head of the TM to the left. My problem is that I cannot find a way to simulate the transition $\delta(q,a)=(r,b,R)$. Since the symbol to the right of $a$ is actually the left-end symbol, how can I move this symbol to the right end? I spent several hours on this, and I think the answers on the Internet are not very clear. Could anyone give me a hint?
It is obvious that you can't handle all the right moves at the same time, so the trick is to use a special symbol to mark the head position ($\#$) and place it on the left of the symbol under the head: ``` queue: 0100100#1 equivalent to tape: __0100100[1]___ ``` Now you pop the symbols from the right and re-push them on the left but *with one symbol of delay* (you buffer one symbol *using the internal states*). When you read a symbol $x$ and the internal buffer contains $z$ then you re-push $z$ on the queue: ``` queue: 0100101#1 pop: buf:* (initially the buffer contains the * marker, see below) queue: 0100101# pop:1 buf:* queue: *0100101 pop:# buf:1 ``` When you pop $\#$ then you know that the symbol $x$ in the (internal) buffer is the symbol under the head, and you can apply the corresponding transition: \*\* if the transition says to write symbol $y$ and goto to $Right$ then you push $\#$ and the $y$ on the left of the queue. ``` queue: *0100101 pop:# buf:1 apply transition Write 0 go Right queue: 0#*0100101 pop: buf: queue: 0#*010010 pop:1 buf: queue: 0#*01001 pop:0 buf:1 queue: 10#*0100 pop:1 buf:0 ... ``` Then you continue the pop-push operations until you find again the $\#$. \*\* if the transition says to write symbol $y$ and goto to $Left$ then you can simply push $y$ on the left of the queue and keep the $\#$ in the internal buffer, read the next symbol and apply another transition until you find a Right move again: ``` queue: *0100101 pop:# buf:1 apply transition Write 0 go Left queue: 0*010010 pop:1 buf:# apply transition Write 1 go Left queue: 10*01001 pop:0 buf:# apply transition Write 1 go Right queue: 1#10*01001 pop: buf: ... ``` The special symbol $\*$ marks the begin/end of the tape and you can use it to handle the tape expansion on both directions. Now, you should be able to discover how by yourself. And if you want to be rigorous, you should also consider the special initial case in which the queue doesn't contain the head symbol #.
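Here is a small Python simulation of that pop/push-with-one-symbol-of-delay loop (my own sketch; it only shows how the symbol under the head is recovered, and omits the `*` boundary marker and the transitions themselves):

```python
from collections import deque

def rotate_to_head(queue):
    """Pop from the right, re-pushing on the left with one symbol of delay,
    until '#' is popped; the delayed buffer then holds the symbol under the head."""
    buf = None
    while True:
        sym = queue.pop()            # read the right end of the queue
        if sym == "#":
            return buf               # symbol currently under the TM head
        if buf is not None:
            queue.appendleft(buf)    # re-push the previously popped symbol
        buf = sym

tape = deque("0100101#1")            # same configuration as the first trace above
head_symbol = rotate_to_head(tape)
print(head_symbol, "".join(tape))    # -> 1 0100101
```

After this call, the machine would push the head marker and the written symbol back on the left according to the transition, exactly as in the traces above.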
For functions or computations we have terms like: * Deterministic – Determinism * Pure – Purity Now what is the correct corresponding noun for **side-effect-free**? "Side-effect freeness"? "Side-effect freedom"? "Non-side-effective"?
The word is *nullipotence*, the noun form of *nullipotent*. From the first definition on Wiktionary: > > (mathematics, computing) Describing **an action which has no side effect**. Queries are typically nullipotent: they return useful data, but do not change the data structure queried. Contrast with idempotent. > > > (emphasis mine) <http://en.wiktionary.org/wiki/nullipotent>
I was assigned to demonstrate the Central Limit Theorem (CLT) in R in my statistics class. I have already made some progress with simulation using simple.sim in R. I want to prepare 3 examples of the CLT with continuous distributions (*rnorm*, *rt* and *rf*) and 3 examples with discrete distributions (*rbinom*, *rpois* and *rtriangle* for the triangle distribution). I also want to demonstrate the failure of the CLT. I realized that this can happen when the variance of the distribution is infinite (does not exist). I can't find any discrete distribution that has infinite variance and is implemented in R.
If you want to take a different approach to when the CLT does not apply, you could focus on the method of sampling rather than the distribution being sampled from. This approach could either impress your professor or violate the rules of the assignment (so read the assignment carefully). The CLT mainly applies to an SRS (simple random sample) and some other similar cases. But, what we often skip over in the intro classes (and later classes) is what happens with other sampling schemes. For example consider doing a random-digit-dialing sample for a phone interview where one of the variables we are interested in is household income, since higher income households are also likely to have more phone numbers on average this will be a biased sample (unless you use a weighted mean). So you could demonstrate sampling from a normal population (or other distribution), but with the probability of being in the sample related to the outcome of interest, then show how the unweighted mean and weighted mean behave. You could also look at cluster sampling or stratified sampling or other forms of sampling and show how the parameter estimates behave under a naive approach compared to the proper approach.
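A rough simulation of that phone-survey idea (my own toy numbers, sketched in Python; the same thing is easy to do with `sample()` and `weighted.mean()` in R):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical population: log-normal income; richer households own more phone lines.
income = rng.lognormal(mean=10, sigma=0.8, size=N)
phones = 1 + (income > np.median(income)).astype(int) \
           + (income > np.quantile(income, 0.9)).astype(int)

# Random-digit dialling: selection probability proportional to the number of lines.
p_select = phones / phones.sum()
sample = rng.choice(N, size=2_000, replace=False, p=p_select)

naive_mean = income[sample].mean()                      # biased upwards
weighted_mean = np.average(income[sample],              # inverse-probability weights
                           weights=1.0 / phones[sample])

print(income.mean(), naive_mean, weighted_mean)
```

The unweighted sample mean overshoots the population mean, while the weighted mean recovers it approximately.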
The Mobius function $\mu(n)$ is defined as $\mu(1)=1$, $\mu(n)=0$ if $n$ has a squared prime factor, and $\mu(p\_1 \dots p\_k)= (-1)^k$ if all the primes $p\_1,\dots,p\_k$ are different. Is it possible to compute $\mu(n)$ without computing the prime factorization of $n$?
For another non-answer, you might be interested in Sarnak’s conjecture (see e.g. <http://gilkalai.wordpress.com/2011/02/21/the-ac0-prime-number-conjecture/>, <http://rjlipton.wordpress.com/2011/02/23/the-depth-of-the-mobius-function/>, <https://mathoverflow.net/questions/57543/walsh-fourier-transform-of-the-mobius-function>), which basically states that MΓΆbius function is not correlated with any β€œsimple” Boolean function. It’s not unreasonable to expect it should hold when β€œsimple” is interpreted as polynomial-time. What we know so far is that the conjecture holds for $\mathrm{AC}^0$-functions (proved by [Ben Green](http://arxiv.org/abs/1103.4991)), and all monotone functions (proved by [Jean Bourgain](http://arxiv.org/abs/1112.1423)).
In many implementations of depth-first search that I have seen (for example: [here](http://ramos.elo.utfsm.cl/~lsb/elo320/aplicaciones/aplicaciones/CS460AlgorithmsandComplexity/lecture9/COMP460%20Algorithms%20and%20Complexity%20Lecture%209.htm)), the code distinguishes between a grey vertex (discovered, but not all of its neighbours have been visited) and a black vertex (discovered, and all its neighbours have been visited). What is the purpose of this distinction? It seems that the DFS algorithm will never visit a visited vertex regardless of whether it's grey or black.
When doing a DFS, any node is in one of three states - before being visited, during recursively visiting its descendants, and after all its descendants have been visited (returning to its parent, i.e., wrap-up phase). The three colors correspond to each of the three states. One of the reasons for mentioning colors and time of visit and return is to explicitly make these distinctions for better understanding. Of course, there are actual uses of these colors. Consider a directed graph $G$. Suppose you want to check $G$ for the existence of cycles. In an undirected graph, if the node under consideration has a black or grey neighbor, it indicates a cycle (and the DFS does not visit it as you mention). However, in case of a *directed* graph, a black neighbor does not mean a cycle. For example, consider a graph with 3 vertices - $A, B,$ and $C$, with directed edges as $A \to B$, $B \to C$, $A \to C$. Suppose the DFS starts at $A$, then visits $B$, then $C$. When it has returned to $A$, it then checks that $C$ has already been visited and is black. But there is no cycle in the graph. In a directed graph, a cycle is present if and only if a node is seen again before all its descendants have been visited. In other words, if a node has a neighbor which is grey, then there is a cycle (and not when the neighbor is black). A grey node means we are currently exploring its descendants - and if one such descendant has an edge to this grey node, then there is a cycle. So, for cycle detection in directed graphs, you need to have 3 colors. There could be other examples too, but you should get the idea.
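For concreteness, here is a compact sketch of that cycle check for a directed graph (my own code, not from the linked lecture): WHITE = not yet visited, GREY = on the current recursion stack, BLACK = fully processed.

```python
WHITE, GREY, BLACK = 0, 1, 2

def has_cycle(graph):
    """graph: dict mapping each node to a list of its out-neighbours."""
    color = {u: WHITE for u in graph}

    def dfs(u):
        color[u] = GREY
        for v in graph[u]:
            if color[v] == GREY:              # back edge to an ancestor: cycle found
                return True
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK                      # all descendants of u are finished
        return False

    return any(color[u] == WHITE and dfs(u) for u in graph)

# The A -> B, B -> C, A -> C example from above: C is BLACK when reached again, so no cycle.
print(has_cycle({"A": ["B", "C"], "B": ["C"], "C": []}))   # False
print(has_cycle({"A": ["B"], "B": ["C"], "C": ["A"]}))     # True
```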
What is the significance of the log base being 2 in entropy? What if we take e or 10 as the base?
Nothing much. Recall that we have $$\log\_a b = \frac{\log\_c b}{\log\_c a}$$ Hence, if you use another base, such as $e$ or $10$, you can always convert between bases with a scalar multiplication. Communication/information was traditionally thought of in terms of bits, hence the magical number $2$.
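For instance (a two-line check of my own): the entropy of a fair coin is $1$ bit, which is the same quantity as $\ln 2 \approx 0.693$ nats — the two differ only by the constant factor $1/\ln 2$.

```python
import math

p = [0.5, 0.5]
H_bits = -sum(q * math.log2(q) for q in p)      # 1.0 bit
H_nats = -sum(q * math.log(q) for q in p)       # ~0.693 nats
print(H_bits, H_nats, H_nats / math.log(2))     # converting nats to bits gives 1.0 again
```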
I'd like to write mathematical proofs using some proof assistant. Everything will be written using first-order logic (with equality) and natural deduction. The background is set theory (ZF). For example, how could I write the following proof?

Axiom: $\forall x\forall y(x=y\leftrightarrow\forall z(z\in x\leftrightarrow z\in y))$

Theorem: $\forall x\forall y(\forall z(z\notin x)\land\forall z(z\notin y)\rightarrow x=y)$

That is, the empty set is unique. It's trivial for me to accomplish that using paper and a pen, but what I really need is software to help me check proofs for correctness. Thank you.
**Moved from comment at Kaveh's suggestion** First you need to select a proof assistant. [Coq](http://arxiv.org/abs/cs/0603118) is what I use, but there are many [others](http://en.wikipedia.org/wiki/Interactive_theorem_proving). Coq is based on higher-order logic (the so-called Calculus of Inductive Constructions). Other proof assistants are based on first order logic, so may be more suited to your needs (modulo the comments above). Then you need to commit to learning the proof assistant. The linked document is a tutorial for getting of the ground with Coq. Becoming a Coq expert requires years of dedication and practice, but simple theorems can be proven in an afternoon. The key to learning Coq or any other proof assistant is to do proofs, such as the ones in the linked paper. Just reading the paper will help very little, because the whole experience of interacting with the proof assistant cannot be conveyed well on paper. Within a few days you ought to be able to encode simple theorems, such as the one above, and prove them. Don't expect that we will do this for you. You'll learn nothing that way. When you do succeed in proving these theorems, feel free to post your answers here and maybe leave a few comments about your experiences. Are you up for the challenge?
- Unique numbers $1$ to $n$.
- Combinations (sets) of size $k$, with $k < n$.
- Do not re-use a number within a set: [1, 1, 1] is not valid.
- How to generate all *unique* sets of size $k$? [1,2,3] = [3,2,1]; order does not matter.
- The number of sets will be ${\binom{n}{k}}$.

Input: $n = 4, k = 2$

Output: [1, 2] [1, 3] [1, 4] [2, 3] [2, 4] [3, 4]

I have found that this is often called "n choose k". Hard-coded for $k = 2$ it looks like

```
for (i = 1; i <= n - 1; i++)
    for (j = i + 1; j <= n - 1; j++)
```

I tried coding the first solution from Yuval Filmus, but it does not work for me: it returns [1,2], [1,2]. I had to adjust for 0-based arrays.

```
public static void Combinations(int n, int k)
{
    bool[] A = new bool[n];
    int[] B = Enumerable.Range(1, k).ToArray();

    Generate(1, A, B, k, n);
}

public static void Generate(int l, bool[] A, int[] B, int k, int n)
{
    Debug.WriteLine(string.Join(", ", B));
    if (l == k + 1)
    {
        Debug.WriteLine("l == k + 1");
        return;
    }

    for (int i = 0; i < n; i++)
    {
        if (!A[i])
        {
            A[i] = true;
            B[l - 1] = i + 1;

            Generate(l + 1, A, B, k, n);
            A[i] = false;
        }
    }
}
```
First of all note that *[sparse](https://en.wikipedia.org/wiki/Dense_graph#Sparse_and_tight_graphs)* means that you have very few edges, and *[dense](https://en.wikipedia.org/wiki/Dense_graph)* means many edges, or almost complete graph. In a complete graph you have $n(n-1)/2$ edges, where $n$ is the number of nodes. Now, when we use matrix representation we allocate $n\times n$ matrix to store node-connectivity information, e.g., $M[i][j] = 1$ if there is edge between nodes $i$ and $j$, otherwise $M[i][j] = 0$. But if we use adjacency list then we have an array of nodes and each node points to its adjacency list **containing ONLY its neighboring nodes**. Now if a graph is sparse and we use matrix representation then most of the matrix cells remain unused which leads to the waste of memory. Thus we usually don't use matrix representation for sparse graphs. We prefer adjacency list. But if the graph is dense then the number of edges is close to (the complete) $n(n-1)/2$, or to $n^2$ if the graph is directed with self-loops. Then there is no advantage of using adjacency list over matrix. In terms of space complexity Adjacency matrix: $O(n^2)$ Adjacency list: $O(n + m)$ where $n$ is the number nodes, $m$ is the number of edges. When the graph is undirected tree then Adjacency matrix: $O(n^2)$ Adjacency list: $O(n + n)$ is $O(n)$ (better than $n^2$) When the graph is directed, complete, with self-loops then Adjacency matrix: $O(n^2)$ Adjacency list: $O(n + n^2)$ is $O(n^2)$ (no difference) And finally, when you implement using matrix, checking if there is an edge between two nodes takes $O(1)$ times, while with an adjacency list, it may take linear time in $n$.
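A tiny illustration of the two representations for the same sparse 4-node undirected graph (my own sketch):

```python
n = 4
edges = [(0, 1), (0, 2), (2, 3)]          # m = 3 edges

# Adjacency matrix: n*n cells are allocated no matter how few edges exist.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Adjacency list: one entry per edge endpoint, O(n + m) space.
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(matrix)                 # 16 cells, most of them 0
print(adj)                    # [[1, 2], [0], [0, 3], [2]]
print(matrix[0][3] == 1)      # O(1) edge query with the matrix
print(3 in adj[0])            # O(deg(0)) edge query with the list
```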
I have some data I need to visualize and am not sure how best to do so. I have some set of base items $Q = \{ q\_1, \cdots, q\_n \}$ with respective frequencies $F = \{f\_1, \cdots, f\_n \}$ and outcomes $O \in \{0,1\}^n$. Now I need to plot how well my method "finds" (i.e., a 1-outcome) the low-frequency items. I initially just had an x-axis of frequency and a y-axis of 0-1 with point-plots, but it looked horrible (especially when comparing data from two methods). That is, each item $q \in Q$ has an outcome (0/1) and is ordered by its frequency.

> Here is an example with a single method's results:

![enter image description here](https://i.stack.imgur.com/ELfGN.png)

My next idea was to divide the data into intervals and compute a local sensitivity over the intervals, but the problem with that idea is that the frequency distribution is not necessarily uniform. So how should I best pick the intervals?

> Does anyone know of a better/more useful way to visualize this sort of data to portray the effectiveness of finding rare (i.e., very low-frequency) items?

EDIT: To be more concrete, I am showcasing the ability of some method to reconstruct biological sequences of a certain population. For validation using simulated data, I need to show the ability to reconstruct variants regardless of their abundance (frequency). So in this case I am visualizing the missed and found items, ordered by their frequency. This plot will not include reconstructed variants that are not in $Q$.
The [ggridges](https://CRAN.R-project.org/package=ggridges) package offers more creative ways to avoid overplotting those ones and zeros. Modifying @MattBagg's example. Not optimal for this dataset, but you'll get the point. ```r library(ggplot2) library(ggridges) N=100 data=data.frame(Q=seq(N), Freq=runif(N,0,1), Success=sample(seq(0,1), size=N, replace=TRUE)) ggplot() + ggridges::geom_density_ridges(data = data, aes(x = Freq, y = Success, group = Success), scale = 0.2) + stat_smooth(data = data, aes(x = Freq, y = Success), size=1.5) + coord_cartesian(ylim = c(0, 1.25), xlim = c(0, 1), expand = FALSE) + scale_y_continuous(breaks = c(0, 0.5, 1)) + labs(x = "Frequency", y = "Probability of Detection") + theme_bw() #> Picking joint bandwidth of 0.123 #> `geom_smooth()` using method = 'loess' and formula 'y ~ x' ``` ![](https://i.imgur.com/Ex2cQld.png) Created on 2021-06-30 by the [reprex package](https://reprex.tidyverse.org) (v2.0.0)
I am doing a presentation about Turing machines and I wanted to give some background on FSMs before introducing Turing machines. The problem is, I really don't know what makes them VERY different from one another. Here's what I know is different:

> FSMs have sequential states that change depending on the corresponding condition being met, while Turing machines operate on an infinite "tape" with a head that reads and writes.
>
> There's more room for error in FSMs since we can easily fall into a non-ending state, while that's not so much of an issue for Turing machines since we can go back and change things.

But other than that, I don't know many more differences that make Turing machines better than FSMs. Can you please help me?
The major distinction between how DFAs (Deterministic Finite Automaton) and TMs work is in terms of how they use memory. Intuitively, DFAs have no "scratch" memory at all; the configuration of a DFA is entirely accounted for by the state in which it currently finds itself, and its current progress in reading the input. Intuitively, TMs have a "scratch" memory in the form of tape; the configuration of a TM consists both of its current state and the current contents of the tape, which the TM may change as it executes. A DFA may be thought of as a TM that neither changes any tape symbols nor moves the head to the left. These restrictions make it impossible to recognize certain languages which can be accepted by TMs. Note that I use the term "DFA" rather than "FSM", since, technically, I'd consider a TM to be a finite-state machine, since TMs by definition have a finite number of states. The difference between DFAs and TMs is in the number of configurations, which is the same as the number of states for a DFA, but is infinitely great for a TM.
I am reading about the naive conversion to CNF; this procedure is explained in this [book](https://www.cin.ufpe.br/~tfl2/artificial-intelligence-modern-approach.9780131038059.25368.pdf), but I have not found a complexity analysis of this algorithm: 1. Elimination of equivalences 2. Elimination of implications 3. Elimination of double negation 4. De Morgan's laws 5. Distributive law I found one implementation of this method in this repo <https://github.com/netom/satispy> Thanks
Any algorithm performing this naive conversion necessarily has exponential worst-case complexity, since an equivalent CNF built without introducing new variables can be exponentially longer than the original formula; in particular, the input might already be in DNF, for which the distributive-law step alone can produce exponentially many clauses.
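To make the blow-up concrete, here is the standard illustrative example (not taken from that book, just the usual one): applying the distributive law exhaustively to the DNF formula $$\varphi\_n = (x\_1 \wedge y\_1) \vee (x\_2 \wedge y\_2) \vee \cdots \vee (x\_n \wedge y\_n)$$ yields the equivalent CNF $$\bigwedge\_{z\_1 \in \{x\_1, y\_1\},\; \ldots,\; z\_n \in \{x\_n, y\_n\}} (z\_1 \vee z\_2 \vee \cdots \vee z\_n),$$ which has one clause for every way of choosing $x\_i$ or $y\_i$ from each disjunct, i.e. $2^n$ clauses, even though $\varphi\_n$ itself has only $2n$ literals. (The Tseitin transformation avoids this blow-up by introducing fresh variables, but it produces an equisatisfiable rather than an equivalent formula, so it is not the naive conversion you are asking about.)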
I am using both random forest and XGBoost to examine feature importance, but I noticed that they give different weights to the features, as shown in both figures below. For example, HFmean-Wav was the most important feature in RF while it was given less weight in XGBoost, and I can't understand why. [![enter image description here](https://i.stack.imgur.com/vbQ01.png)](https://i.stack.imgur.com/vbQ01.png)
First you should understand that these two are **similar** models, not the **same** (Random Forest uses a bagging ensemble while XGBoost uses a boosting ensemble), so they may sometimes differ in their results. Now let me tell you why this happens. When the correlation between variables is high, XGBoost will pick one feature and may use it while breaking the tree down further (if required), and it will ignore some/all of the other correlated features (because we cannot learn different aspects of the model from a feature that is already highly correlated with the chosen one). But in random forest, the tree is not built from specific features; rather, there is a **random** selection of features (via row sampling and column sampling), and the model as a whole learns different correlations between different features. So you can see the procedures of the two methods are different, and you can expect them to behave a little differently. Hope this helps!
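To see this effect for yourself, here is a minimal simulation sketch (assuming the `randomForest` and `xgboost` R packages; the data and variable names are purely illustrative, not from your study):

```r
library(randomForest)
library(xgboost)

set.seed(1)
n  <- 500
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.1)   # x2 is almost a copy of x1 (highly correlated)
x3 <- rnorm(n)                  # an independent predictor
y  <- factor(ifelse(x1 + x3 + rnorm(n) > 0, 1, 0))
X  <- data.frame(x1, x2, x3)

# Random forest tends to spread importance across the correlated pair x1/x2
rf <- randomForest(X, y, importance = TRUE)
print(importance(rf))

# Boosting tends to concentrate importance on one of the correlated features
bst <- xgboost(data = as.matrix(X), label = as.numeric(y) - 1,
               objective = "binary:logistic", nrounds = 50, verbose = 0)
print(xgb.importance(model = bst))
```

Typically the random forest importances for `x1` and `x2` come out similar in size, while XGBoost gives most of the credit to whichever of the two it picked first, which is consistent with the pattern you describe.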
I have a population animals, and am looking at the effect of (yearly) environmental temperature on date of birth and litter size. Births in the population have been monitored for 15 years (one litter per year), consequently, many females have been repeatedly observed giving birth over this time. Environmental temperature is the same for all animals within each year. I am currently modelling this as a linear mixed model, with yearly temperature as a fixed effect and maternal ID as a random effect, to account for repeated measures of mothers. A reviewer has recommended that I fit the model with Year as a random effect as well. However, because only a single yearly temperature applies to all mothers within each year, this drastically changes the results. - Essentially, the random effect of year soaks up all the variation due to temperature, rendering the effect of temperature nonsignificant. For temperature to not have an effect would be VERY surprising since experiments manipulating temperature directly show a strong effect. What is the best way to proceed? Thanks, **summary of LMM without Year as a random effect:** ``` AIC BIC logLik deviance df.resid 10378.6 10399.7 -5185.3 10370.6 1419 Scaled residuals: Min 1Q Median 3Q Max -2.35860 -0.60767 -0.02228 0.57020 3.08272 Random effects: Groups Name Variance Std.Dev. TrueID (Intercept) 39.22 6.262 Residual 58.43 7.644 Number of obs: 1423, groups: TrueID, 679 Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 36.3515 0.3320 582.1000 109.476 < 2e-16 CritTemp.c -3.0817 0.3757 1238.7000 -8.203 4.44e-16 Correlation of Fixed Effects: (Intr) CritTemp.c 0.010 ``` **summary of LMM with Year as a random effect:** ``` AIC BIC logLik deviance df.resid 9819.7 9846.0 -4904.8 9809.7 1418 Scaled residuals: Min 1Q Median 3Q Max -3.2943 -0.4964 0.0186 0.4700 3.4843 Random effects: Groups Name Variance Std.Dev. TrueID (Intercept) 39.16 6.258 Year (Intercept) 27.80 5.273 Residual 31.73 5.633 Number of obs: 1423, groups: TrueID, 679; Year, 16 Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 36.550 1.358 16.900 26.917 2.66e-15 CritTemp.c -2.757 1.976 16.145 -1.395 0.182 --- Correlation of Fixed Effects: (Intr) CritTemp.c -0.095 ```
As fleshed out in the comments, if you don't expect any overall effect for `Year` and you are not interested in the individual `Year` estimates, then indeed random intercepts are a better model. In this case you can talk about the variance explained by each level. One major benefit is the greater degrees of freedom in the random intercepts model. Do check the residuals at all levels and do the usual fit tests. Oh, and you can always report the results for both models.
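For reference, a minimal `lme4` sketch of the two models being compared (the names `CritTemp.c`, `TrueID` and `Year` are taken from your output; `BirthDate` and `dat` are just placeholders for your response and data frame):

```r
library(lme4)
library(lmerTest)  # for the p-values shown in your summaries

# Mother ID only
m1 <- lmer(BirthDate ~ CritTemp.c + (1 | TrueID), data = dat)

# Mother ID plus a random intercept for Year
m2 <- lmer(BirthDate ~ CritTemp.c + (1 | TrueID) + (1 | Year), data = dat)

summary(m1)
summary(m2)
anova(m1, m2)  # likelihood-ratio comparison of the two fits
```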
I'm a first-year postdoc in pure mathematics (geometry/topology with a strong background in analysis) with an undergraduate statistics and probability background (including measure-theoretic probability). I also have some programming knowledge in Fortran, C and MATLAB, but I never used them in my pure mathematics career in graduate school. In my next job, I'm considering doing a postdoc in machine learning. The reason behind this switch is: I'm satisfied with my pure knowledge so far and have been wanting to see some real-life applications of mathematics, and also to keep my options open in both industry and academia. My questions are: 1) How hard is the switch going to be? I guess I have all the required mathematics background, but will it be hard to pick up the necessary computer science skills, even if I work on more theory-oriented problems? 2) Is there a website/email list where I can get notifications about jobs in machine learning? I'm looking for jobs in Europe mostly. Thanks and much appreciation :)
1. Overfitting is always bad, as it means you have done something to your model that makes its generalisation performance worse. This is less likely to happen when you have lots of data, and in such circumstances regularisation tends to be less helpful, but over-fitting is still something you don't want. 2. This diagram (from Wikimedia) shows an over-fitted regression model ![](https://i.stack.imgur.com/75hwQ.jpg) In order for the regression line to pass through each of the data points, the regression has high curvature at many points. At these points, the output of the model is very sensitive to changes in the value of the input variable. This generally requires model parameters of large magnitude, so that small changes in the input are magnified into large changes in the output. 3. No, regularisation is not always needed, particularly if you have so much data that the model isn't flexible enough to exploit the noise. I would recommend putting regularisation in and using cross-validation to set the regularisation parameter(s). If regularisation is unhelpful, cross-validation will tend to make the regularisation parameter small enough that it has no real effect. I tend to use leave-one-out cross-validation as it can be computed very cheaply for many interesting models (linear regression, SVMs, kernel machines, Gaussian processes etc.), even though its high variance is less attractive.
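As a concrete illustration of point 3, here is a minimal sketch of fitting a ridge-regularised linear model and letting cross-validation pick the penalty, using the `glmnet` package (the simulated data are purely illustrative):

```r
library(glmnet)

set.seed(1)
n <- 200; p <- 10
X <- matrix(rnorm(n * p), n, p)
beta <- c(3, -2, rep(0, p - 2))         # only two truly relevant predictors
y <- as.vector(X %*% beta + rnorm(n, sd = 2))

cv <- cv.glmnet(X, y, alpha = 0)        # alpha = 0 gives ridge regression
cv$lambda.min                           # penalty selected by cross-validation
coef(cv, s = "lambda.min")              # shrunken coefficient estimates
```

If the data are so plentiful that regularisation is unhelpful, the selected penalty will tend to sit at the small end of the lambda grid, which is the behaviour described above.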
I am currently using an SVM with a linear kernel to classify my data. There is no error on the training set. I tried several values for the parameter $C$ ($10^{-5}, \dots, 10^2$). This did not change the error on the test set. Now I wonder: is this an error *caused by the ruby bindings* for `libsvm` I am using ([rb-libsvm](https://github.com/febeling/rb-libsvm)) or is this *theoretically explainable*? Should the parameter $C$ always change the performance of the classifier?
C is essentially a regularisation parameter, which controls the trade-off between achieving a low error on the training data and minimising the norm of the weights. It is analogous to the ridge parameter in ridge regression (in fact in practice there is little difference in performance or theory between linear SVMs and ridge regression, so I generally use the latter - or kernel ridge regression if there are more attributes than observations). Tuning C correctly is a vital step in best practice in the use of SVMs, as structural risk minimisation (the key principle behind the basic approach) is partly implemented via the tuning of C. The parameter C enforces an upper bound on the norm of the weights, which means that there is a nested set of hypothesis classes indexed by C. As we increase C, we increase the complexity of the hypothesis class (if we increase C slightly, we can still form all of the linear models that we could before and also some that we couldn't before we increased the upper bound on the allowable norm of the weights). So as well as implementing SRM via maximum margin classification, it is also implemented by limiting the complexity of the hypothesis class via controlling C. Sadly the theory for determining how to set C is not very well developed at the moment, so most people tend to use cross-validation (if they do anything).
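For completeness, a minimal sketch of tuning C by cross-validation for a linear kernel, using the `e1071` package in R (simulated data; the grid of C values is arbitrary):

```r
library(e1071)

set.seed(1)
n <- 200
x <- matrix(rnorm(n * 2), n, 2)
y <- factor(ifelse(x[, 1] + x[, 2] + rnorm(n, sd = 0.5) > 0, 1, -1))
dat <- data.frame(x = x, y = y)

# 10-fold cross-validation over a grid of cost (C) values
tuned <- tune(svm, y ~ ., data = dat, kernel = "linear",
              ranges = list(cost = 10^(-3:3)))
summary(tuned)
tuned$best.parameters
```

If, as in your case, the cross-validation error is flat across the whole grid, that can be a sign that the data are easily separable (or that the relevant range of C lies outside the grid), rather than a bug in the bindings.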
On the Wikipedia page for [quantum algorithm](https://en.wikipedia.org/wiki/Quantum_algorithm) I read that > > [a]ll problems which can be solved on a quantum computer can be solved on a classical computer. In particular, problems which are undecidable using classical computers remain undecidable using quantum computers. > > > I expected that the fundamental changes that a quantum computer brings would lead to the possibility of not only solving problems that could already be solved with a classical computer, but also new problems that could not be solved before. Why is it that a quantum computer can only solve the same problems?
Actually, it depends on what you mean by "solving a problem". As previously stated, a classical computer is already Turing complete and can decide any decidable problem in finite time. ~~There is no such thing as a computer able to decide an undecidable problem in finite time (that would actually lead to a contradiction).~~ [Edit: as David Richerby stated in the comments, that hasn't been proved (only conjectured). However, most of the undecidability proofs can be repeated for any reasonable kind of computer (this doesn't prove the conjecture, but in my opinion it's a good hint). The good argument regarding why a quantum computer couldn't decide more problems than a classical Turing machine has already been given by David Richerby (you can simulate a quantum Turing machine with a classical Turing machine).] Hence a classical computer can theoretically solve any "reasonable" problem... given enough time (it will be finite, which doesn't mean it will be fast). The Wikipedia quote doesn't talk about time at all, and a quantum computer is expected to solve some problems faster than a classical computer. By "faster", I mean that some problems which would have taken millennia to decide on a classical computer could be solved in minutes on a quantum computer (provided such a computer can be built, which we don't know for sure yet, although there are promising results, and provided BQP != BPP, which is a weaker hypothesis than NP != P if I'm not mistaken). For instance, the famous Shor's algorithm shows that factoring an integer into its prime factors is in BQP (Bounded-error Quantum Polynomial time) whereas that problem isn't believed to be in P (Polynomial time) or BPP (Bounded-error Probabilistic Polynomial time). That doesn't mean factoring an integer on a classical computer is impossible, but it will be a time-consuming task, and for sufficiently big numbers the computation may exceed any reasonable time limit (like the age of the universe; of course that's true of almost any computation given a sufficiently big input, but it would happen much sooner for problems outside of BPP on a classical computer than for problems inside BPP). So, even though all decidable problems can be solved on classical computers, some problems are still practically out of reach because of unreasonable computation times. A quantum computer may allow us to decide such problems in a more reasonable time.
I am trying to sample random values from a dataframe where the NaN values should be ignored, without dropping the entire row or column. My sampling function at the moment looks like this: ```py def random_port(x, y): port = df.sample(n=x, axis=1).sample(n=y) return port ``` The problem shows up when the Output has NaN values in them, like so: ```py IN: random_port(5, 5) OUT: name a b c d e date 2018-06-23 -0.382931 -0.740939 0.033059 NaN NaN 2018-10-21 2.230166 -0.632479 -0.499691 NaN -0.532929 2018-05-30 0.432295 0.101531 NaN NaN NaN 2018-03-02 NaN 11.006190 4.427038 NaN NaN 2018-08-17 -0.038829 -0.603785 -0.104375 NaN NaN ``` I want to be able to exclude these values from the sample, before they are sampled. I have tried with df.isna() without luck. df.dropna() would also not work as this would drop all rows of the dataframe. Hope you could help me with some inputs! **EDIT** My dataframe looks like this: ``` name 0x 2GIVE 300 Token ... iTicoin imbrex vSlice date ... 2018-01-01 NaN 65.290909 NaN ... NaN NaN 1.710043 2018-01-02 NaN 80.463768 NaN ... NaN NaN 2.435115 2018-01-03 NaN 57.126316 NaN ... NaN NaN 3.717667 2018-01-04 NaN 60.589286 NaN ... NaN NaN 4.230297 2018-01-05 NaN 93.228137 NaN ... NaN NaN 6.291709 ... ... ... ... ... ... ... 2018-11-21 1.299640 -0.722204 0.251369 ... -0.871292 -0.385648 -0.972958 2018-11-22 0.822972 -0.698515 -0.005144 ... -0.872788 -0.509496 -0.973531 2018-11-23 0.849339 -0.689389 0.017049 ... -0.863086 -0.583974 -0.976263 2018-11-24 0.537992 -0.709757 -0.005032 ... -0.874165 -0.543323 -0.979586 2018-11-25 0.615335 -0.726006 -0.081572 ... -0.883667 -0.637062 -0.974509 [329 rows x 879 columns] ``` As you can see there are columns that have both NaN and values that I want to use in the sampling. Now I would like to exclude these NaN values, so that they are not included in the sampling at all. This way I when I calculate the mean of the sample it will have no NaN. So: Sample from df where value is not equal to NaN. I want my output from the function random\_port to **not** have any NaN values. ```py OUT: name ChessCoin Etheroll AquariusCoin MyBit Megacoin date 2018-10-29 0.684864 -0.873093 -0.035047 -0.988149 -0.736966 2018-01-28 0.684864 -0.873093 -0.035047 -0.988149 -0.736966 2018-04-22 0.684864 -0.873093 -0.035047 -0.988149 -0.736966 2018-11-05 0.089559 -0.849822 -0.053746 -0.987191 -0.757519 2018-07-16 0.292095 -0.634921 3.961392 -0.053746 -0.987191 ```
EDIT ==== If you are using Pandas and you want to skip NA values when you calculate the mean: ``` mean = df["col_name"].mean(skipna=True) ``` <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mean.html>
The ever increasing complexity of computer programs and the increasingly crucial position computers have in our society leaves me wondering why we still don't collectively use programming languages in which you have to give a formal proof that your code works correctly. I believe the term is a 'certifying compiler' (I found it [here](https://cstheory.stackexchange.com/questions/2815/what-do-we-know-about-restricted-versions-of-the-halting-problem/2816#2816)): a compiler compiling a programming language in which one not only has to write the code, but also state the specification of the code and prove that the code adheres to the specification (or use an automated prover to do so). While searching the internet, I only found projects that either use a very simple programming language or failed projects that try to adapt modern programming languages. This leads me to my question: > > Are there any certifying compilers implementing a full-blown programming language, or is this very hard/theoretically impossible? > > > Additionally, I've yet to see any complexity class involving provable programs, such as 'the class of all languages decidable by a Turing machine for which a proof exists that this Turing machine halts', which I shall call $ProvableR$, as an analogue to $R$, the set of recursive languages. I can see advantages of studying such a complexity class: for instance, for $ProvableR$ the Halting problem is decidable (I even conjecture $ProvableRE$ defined in the obvious way would be the largest class of languages for which it is decidable). In addition, I doubt we would rule out any practically useful programs: who would use a program when you can't prove it terminates? So my second question is: > > What do we know about complexity classes which require their containing languages to provably have certain properties? > > >
I think the answer to the first question is that generally it is *too much work* with current tools. To get the feeling, I suggest trying to prove the correctness of Bubble Sort in Coq (or if you prefer a little more challenge, use Quick Sort). I don't think it is reasonable to expect programmers to write verified programs as long as proving the correctness of such basic algorithms is so difficult and time consuming. This question is similar to asking why mathematicians don't write formal proofs verifiable by proof checkers. Writing a program with a formal correctness proof means proving a mathematical theorem about the written code, and the answer to that question also applies to your question. This does not mean that there have not been successful cases of verified programs. I know that there are [groups](http://www.verisoftxt.de/StartPage.html) who are proving the correctness of systems like [Microsoft's hypervisor](http://www.microsoft.com/hyper-v-server/en/us/default.aspx). A related case is Microsoft's [Verified C Compiler](http://research.microsoft.com/en-us/projects/vcc/). But in general the current tools need a lot of development (including their SE and HCI aspects) before becoming useful for general programmers (and mathematicians). Regarding the final paragraph of [Neel's answer](https://cstheory.stackexchange.com/questions/4052/what-do-we-know-about-provably-correct-programs/4055#4055) about program size growth for languages with only total functions, actually it is easy to prove even more (if I understood it correctly). It is reasonable to expect that the syntax of any programming language will be c.e. and the set of total computable functions is not c.e., so for any programming language where all programs are total there is a total computable function which cannot be computed by any program (of any size) in that language. --- For the second question, I answered a similar question on Scott's blog some time ago. Basically, if the complexity class has a nice characterization and is computably representable (i.e. it is c.e.) then we can prove that some representation of the problems in the complexity class is provably total in a very weak theory corresponding to the complexity class. The basic idea is that the provably total functions of the theory contain all $AC^0$ functions and a problem which is $AC^0$-complete for the complexity class, therefore they contain all problems in the complexity class and the theory can prove the totality of those programs. The relation between proofs and complexity theory is studied in proof complexity; see S.A. Cook and P. Nguyen's recent book "[Logical Foundations of Proof Complexity](http://www.cup.es/us/catalogue/catalogue.asp?isbn=9780521517294)" if you are interested. (A [draft](http://www.cs.toronto.edu/~sacook/homepage/book) from 2008 is available.) So the basic answer is that for many classes "Provably C = C". This is not true in general, since there are semantic complexity classes which do not have a syntactic characterization, e.g. total computable functions. If by recursive you mean total recursive functions then the two are not equal, and the set of computable functions which are provably total in a theory is well studied in the proof theory literature; they are called *the provably total functions* of the theory.
For example: the provably total functions of $PA$ are the $\epsilon\_0$-recursive functions (or equivalently the functions in Gödel's system $T$), the provably total functions of $PA^2$ are the functions in Girard's system $F$, the provably total functions of $I\Sigma\_1$ are the primitive recursive functions, ... . But it does not seem to me that this means much in the program verification context, since there are also programs which extensionally compute the same function but where we cannot prove that the two programs are computing the same function, i.e. the programs are extensionally equal but not intensionally. (This is similar to the Morning Star and the Evening Star.) Moreover, it is easy to modify a given provably total program to get one whose totality the theory is unable to prove. --- I think the two questions are related. The objective is to get a verified program. A verified program means that the program satisfies a description, which is a mathematical statement. One way is to write a program in a programming language and then prove its properties, e.g. that it satisfies the description, which is the more common practice. Another option is to try to prove the mathematical statement describing the problem using restricted means and then extract a verified program from it. For example, if we prove in the theory corresponding to $P$ that for any given number $n$ there is a sequence of prime numbers whose product is equal to $n$, then we can extract a $P$ algorithm for factorization from the proof. (There are also researchers who try to automate the first approach as much as possible, but checking interesting non-trivial properties of programs is computationally difficult and cannot be done completely without false positives and negatives.)
I am referencing some code I found on GeeksForGeeks.com: Why is the current node printed (and processed) first before its children are processed? Wouldn't "breadth first" mean "Process children first, then process parent"? or, is that only for Trees? I can't be the only one to not understand this, so instead of flaming me, somebody please simply post the answer? ``` void Graph::DFSUtil(int v, bool visited[]) { visited[v] = true; <-- why is this printed FIRST? cout << v << " "; // Recur for all the vertices adjacent // to this vertex list<int>::iterator i; for (i = adj[v].begin(); i != adj[v].end(); ++i) if (!visited[*i]) DFSUtil(*i, visited); } // DFS traversal of the vertices reachable from v. // It uses recursive DFSUtil() void Graph::DFS(int v) { // Mark all the vertices as not visited bool *visited = new bool[V]; for (int i = 0; i < V; i++) visited[i] = false; // Call the recursive helper function // to print DFS traversal DFSUtil(v, visited); } ```
1) In answer to the semantic question "Wouldn't 'breadth first' mean 'Process children first, then process parent'?": This question is a duplicate of the following: [What is the meaning of 'breadth' in breadth first search?](https://cs.stackexchange.com/questions/107187/what-is-the-meaning-of-breadth-in-breadth-first-search) 2) In answer to the technical question "Why is the current node printed (and processed) first before its children are processed?": BFS processes nodes in the following order: the starting vertex, then all the vertices at distance 1, then all the vertices at distance 2, etc. As pointed out by @user111398, you need to mark a node as visited before processing it or you will go into an infinite loop if the graph is cyclic. If you mark the parent but don't process it before (marking and) processing its children then you are processing the nodes in the following order: all the nodes at max distance, then all the nodes at max distance minus one, etc. This is reverse-BFS, not BFS.
I have a question about calculation of conditional density of two normal distributions. I have random variables $X\mid M \sim \text{N}(M,\sigma^2)$ and $M \sim \text{N}(\theta, s^2)$, with conditional and marginal densities given by: $$\begin{equation} \begin{aligned} f(x|m) &= \frac{1}{\sigma \sqrt{2\pi}} \cdot \exp \Big( -\frac{1}{2} \Big( \frac{x-m}{\sigma} \Big)^2 \Big), \\[10pt] f(m) &= \frac{1}{s \sqrt{2\pi}} \cdot \exp \Big( - \frac{1}{2} \Big( \frac{m-\theta}{s} \Big)^2 \Big). \end{aligned} \end{equation}$$ I would like to know the marginal distribution of $X$. I have multiplied the above densities to form the joint density, but I cannot successfully integrate the result to get the marginal density of interest. My intuition tells me that this is a normal distribution with different parameters, but I can't prove it.
Your intuition is correct - the marginal distribution of a normal random variable with a normal mean is indeed normal. To see this, we first re-frame the joint distribution as a product of normal densities by [completing the square](https://en.wikipedia.org/wiki/Completing_the_square): $$\begin{equation} \begin{aligned} f(x,m) &= f(x|m) f(m) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( -\frac{1}{2} \Big[ \Big( \frac{x-m}{\sigma} \Big)^2 + \Big( \frac{m-\theta}{s} \Big)^2 \Big] \Big) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( -\frac{1}{2} \Big[ \Big( \frac{1}{\sigma^2}+\frac{1}{s^2} \Big) m^2 -2 \Big( \frac{x}{\sigma^2} + \frac{\theta}{s^2} \Big) m + \Big( \frac{x^2}{\sigma^2} + \frac{\theta^2}{s^2} \Big) \Big] \Big) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( -\frac{1}{2 \sigma^2 s^2} \Big[ (s^2+\sigma^2) m^2 -2 (x s^2+ \theta \sigma^2) m + (x^2 s^2+ \theta^2 \sigma^2) \Big] \Big) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( - \frac{s^2+\sigma^2}{2 \sigma^2 s^2} \Big[ m^2 -2 \cdot \frac{x s^2 + \theta \sigma^2}{s^2+\sigma^2} \cdot m + \frac{x^2 s^2 + \theta^2 \sigma^2}{s^2+\sigma^2} \Big] \Big) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( - \frac{s^2+\sigma^2}{2 \sigma^2 s^2} \Big( m - \frac{x s^2 + \theta \sigma^2}{s^2+\sigma^2} \Big)^2 \Big) \\[6pt] &\quad \quad \quad \text{ } \times \exp \Big( \frac{(x s^2 + \theta \sigma^2)^2}{2 \sigma^2 s^2 (s^2+\sigma^2)} - \frac{x^2 s^2 + \theta^2 \sigma^2}{2 \sigma^2 s^2} \Big) \\[10pt] &= \frac{1}{2\pi \sigma s} \cdot \exp \Big( - \frac{s^2+\sigma^2}{2 \sigma^2 s^2} \Big( m - \frac{x s^2 + \theta \sigma^2}{s^2+\sigma^2} \Big)^2 \Big) \cdot \exp \Big( -\frac{1}{2} \frac{(x-\theta)^2}{s^2+\sigma^2} \Big) \\[10pt] &= \sqrt{\frac{s^2+\sigma^2}{2\pi \sigma^2 s^2}} \cdot \exp \Big( - \frac{s^2+\sigma^2}{2 \sigma^2 s^2} \Big( m - \frac{x s^2 + \theta \sigma^2}{s^2+\sigma^2} \Big)^2 \Big) \\[6pt] &\quad \times \sqrt{\frac{1}{2\pi (s^2+\sigma^2)}} \cdot \exp \Big( -\frac{1}{2} \frac{(x-\theta)^2}{s^2+\sigma^2} \Big) \\[10pt] &= \text{N} \Big( m \Big| \frac{xs^2+\theta\sigma^2}{s^2+\sigma^2}, \frac{s^2 \sigma^2}{s^2+\sigma^2} \Big) \cdot \text{N}(x|\theta, s^2+\sigma^2). \end{aligned} \end{equation}$$ We then integrate out $m$ to obtain the marginal density $f(x) = \text{N}(x|\theta, s^2+\sigma^2)$. From this exercise we see that $X \sim \text{N}(\theta, s^2+\sigma^2)$.
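If you want a quick numerical sanity check of this result, here is a small simulation sketch (arbitrary parameter values):

```r
set.seed(1)
theta <- 2; s <- 1.5; sigma <- 0.8

m <- rnorm(1e6, mean = theta, sd = s)      # M ~ N(theta, s^2)
x <- rnorm(1e6, mean = m, sd = sigma)      # X | M ~ N(M, sigma^2)

mean(x)                  # should be close to theta
var(x)                   # should be close to s^2 + sigma^2
c(theta, s^2 + sigma^2)
```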
I have a time series that zigzags up and down in cycles. I want to find a simple algorithm to partition it into segments of up sequences and down sequences. It should be robust to noise so that a small kink shouldn't partition a sequence needlessly. Is there a name for such a technique? **EDIT**: several of you have brought up seasonality. This is something I'm not well versed in. Surely it's something for me to learn. In this case I only want to find the points where the direction changes. I'm not sure if there is any seasonality, in the sense of regular recurrence, in the data.
There is a function borrowed from the TA community, called a ZigZag indicator (which can be adjusted for noise thresholds). It is available in the TTR package of R. ![zigzag](https://i.stack.imgur.com/0DNd2.jpg) [zigzagR](http://rgm2.lab.nig.ac.jp/RGM2/func.php?rd_id=TTR:ZigZag) It might be possible to do the transformation(s) using classical time-series decomposition methods, but I don't know of a simple way to adjust the noise thresholding as in the zig-zag function.
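A minimal usage sketch with the TTR package (the series and the `change` threshold are purely illustrative; `change` is the knob you adjust so that small kinks are ignored):

```r
library(TTR)

set.seed(1)
# A noisy series that zigzags up and down (illustrative only)
x <- 100 + cumsum(rnorm(300)) + 5 * sin(seq(0, 6 * pi, length.out = 300))

zz <- ZigZag(x, change = 5, percent = FALSE)  # ignore moves smaller than 5 units
plot(x, type = "l", col = "grey")
lines(zz, col = "red", lwd = 2)

# Direction changes are the points where the fitted zig-zag changes slope
turns <- which(diff(sign(diff(zz))) != 0) + 1
```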
Yes, I already looked [here](https://stats.stackexchange.com/questions/125462/how-to-interpret-pca-on-time-series-data) but that's too high profile for my humble mind (and it's not exactly what I'm looking for). Imagine we have a timecourse with time on the x-axis and some value on the y-axis (e.g. a signal). Now I can sample this timecourse to obtain a vector in a multidimensional vector space. My question: What does it mean if I perform a PCA on this data? What is the PCA of a single vector (such as the timeseries) and how can I interpret the resulting eigenvectors?
Maybe, β€œperforming PCA on a single time series” means application of singular spectrum analysis (SSA), which is sometimes called PCA of time series. In SSA, multivariate data are constructed from lagged (moving) subseries of the initial time series. Then PCA (usually, SVD, which is PCA without centering/standardizing) is applied to the obtained multivariate data. Example in R: ``` library(Rssa) s <- ssa(co2) # Plot eigenvectors plot(s, type = "vectors") # Reconstruct the series, grouping elementary series. r <- reconstruct(s, groups = list(Trend = c(1, 4), Season1 = c(2,3), Season2 = c(5, 6))) plot(r) ```
I ran a study, where a single variable was measured repeatedly for each participant, under different conditions. Analyzing this data with SPSS has been rather straightforward, by using the General Linear Model -> Repeated Measures ANOVA, and specifying each measure in the within-subject factors. My problem arises when I try to include some post-experiment data I collected, like a test that all participants completed after the experiment, with integral scores ranging from 0-24. I want to include this to see how these scores relate to the within-subjects measures, and adding this data as a between-subject factor makes sense, but due to the nature of the test score variable I have up to 25 categories that make the data impossible to work with. It seems like this should be a common issue, and my Googling has helped me a bit, but most of the examples I have seen use categorical variables like gender for the between-subject factors. What do I do if the between-subject factor isn't categorical? I will be very grateful for any insight, comments and tips you have. Thanks!
You could treat the test score as an interval between subjects predictor in a full model of that and the within effects. You really should be doing multi-level modelling at this point and I'd strongly encourage you look at some tutorials on that for SPSS. There is an alternative, that is not as good but might be simpler to understand. Let's say your primary question is whether the repeated measures effect is dependent on (interacts with) a linear effect of the test score. Calculate the repeated measures effect score for each participant and then perform a regression on the resulting effect scores. In general, once you have each subject with a single score you're interested in then you could analyze the continuous test score predictor how you'd handle any continuous variable.
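To make the second suggestion concrete, here is a rough R sketch (the long-format data frame `dat` and the column names `id`, `condition`, `score` and `test` are hypothetical stand-ins for your within-subject measure and the 0-24 post-experiment score):

```r
library(dplyr)

# Per-participant repeated-measures effect: difference between two conditions
effects <- dat %>%
  group_by(id, test) %>%
  summarise(effect = mean(score[condition == "A"]) -
                     mean(score[condition == "B"]),
            .groups = "drop")

# Does the size of the within-subject effect depend linearly on the test score?
summary(lm(effect ~ test, data = effects))
```

With more than two conditions you would replace the simple difference with whatever contrast captures your effect of interest, and the full multilevel model remains the better option.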
What is a good introduction to statistics for a mathematician who is already well-versed in probability? I have two distinct motivations for asking, which may well lead to different suggestions: 1. I'd like to better understand the statistics motivation behind many problems considered by probabilists. 2. I'd like to know how to better interpret the results of Monte Carlo simulations which I sometimes do to form mathematical conjectures. I'm open to the possibility that the best way to go is not to look for something like "Statistics for Probabilists" and just go to a more introductory source.
As you said, it's not necessarily the case that a mathematician may want a rigorous book. Maybe the goal is to get some intuition of the concepts quickly, and then fill in the details. I recommend two books from CMU professors, both published by Springer: "[*All of Statistics*](https://link.springer.com/book/10.1007/978-0-387-21736-9)" by Larry Wasserman is quick and informal. "[*Theory of Statistics*](https://link.springer.com/book/10.1007/978-1-4612-4250-5)" by Mark Schervish is rigorous and relatively complete. It has decision theory, finite sample, some asymptotics and sequential analysis. Added 7/28/10: There is one additional reference that is orthogonal to the other two: very rigorous, focused on learning theory, and short. It's by Smale (Steven Smale!) and Cucker, "[On the Mathematical Foundations of Learning](http://www.ams.org/journals/bull/2002-39-01/S0273-0979-01-00923-5/home.html)". Not easy read, but the best crash course on the theory.
Sorry if this is really elementary stuff, but I'm a stats noob trying to wrap my head around this. Intuitively, I'm imagining that a model with this information alone would be able to accurately predict the outcome, on average, in 10 tries. Is this one way of thinking about it or am I completely wrong?
I love the answer by @StephanKolassa but wanted to emphasise a few points with a practical illustration β€” from a beginner's perspective, how we measure the accuracy in a regression model, and what Stephan means when he talks about the *relative* nature of $R^2$. People often like to think of predictions as basically "right" or "wrong", and there's a hint of that in your question as to whether we are "able to accurately predict the outcome, on average, in 10 tries". But "we get the answer right M times out of N" or "P% of our predictions are accurate" aren't really helpful or meaningful when you're performing a regression analysis to predict a continuous variable. I'm going to use the example of estimating people's height in centimetres, $y$, based on some personal data $x$ (this answer covers both simple and multiple regression, so interpret that as "several predictor variables $x\_1, x\_2, x\_3, \dots x\_k$" when that makes more sense). If your prediction is $\hat y$ and the true value is $y$, you care about questions like "did I make an under- or overestimate, and how much by?" So the **residual** $y - \hat y$, also measured in centimetres, is a useful quantity; if it's positive you underestimated, negative you overestimated, and near to zero means you're about right. Big negative or positive values are *both* bad, in different ways, so one measure of the "overall badness" of your model's predictions is to square the residuals of your observations and add them up, $\sum (y\_i - \hat y\_i)^2$, to get the **residual sum of squares**, $SS\_{res}$. Note this would be in square centimetres. That doesn't tell you how badly wrong your *individual* predictions are likely to be. One suitable metric is the **mean squared error** (MSE) found by sharing out $SS\_{res}$ between your data points (with a small degrees of freedom adjustment that won't make much difference in large samples). You may also be interested in its square root, often called the **standard error of the regression** or **residual standard error**, and sometimes abbreviated SER. Despite its name, you're better to think of this like the *standard deviation* of the residuals (strictly speaking, it's an estimate of the standard deviation $\sigma$ of the regression's error or disturbance term $\varepsilon$). It shows, roughly speaking, "how much am I usually out by?" in the units of your original data, i.e. centimetres: intuitively you'd expect most of your predictions to be within two or three SERs of the true values, and indeed many to be within one SER. However, you probably care how accurate a *particular* prediction is likely to be, and we can "personalise" this uncertainty based on the values of the predictors. If the predictors $x$ are close to their average values, your prediction $\hat y$ has less uncertainty (i.e. is likely to be nearer the true value of $y$) than for a data point with $x$ far from the average β€” there's additional uncertainty when you're *extrapolating*. Why? Your prediction $\hat y$ was based on estimating the mean value $\mathbb{E}(y|x)$ for the given $x$ values. Prediction errors occur partly because observed values of $y$ have random variation $\varepsilon$ above/below $\mathbb{E}(y|x)$ that your model cannot possibly account for. We generally assume $\varepsilon$ has a normal distribution and its variance $\sigma^2$ is the same at all values of $x$ (*homoskedasticity*), so this source of prediction error is equally bad everywhere. 
But another source of error is that your estimate of $\mathbb{E}(y|x)$ depends on *estimated* regression coefficients (slopes and intercept) and particularly in small samples, you'll have incurred some estimation error. When you're close to the mean value of your predictor $x$, your error in fitting the line isn't too bad, but as you extend the line further and further out, even a small error in your estimated slope $\hat \beta$ can result in your prediction $\hat y$ being badly off. Try holding a ruler to your regression line and wiggle it around slightly to represent uncertainty in its estimation; what other lines would look plausible if your data were slightly different, e.g. a couple of points deleted? Your predictions for the observed data will change a bit, but imagine extending your ruler out to make a prediction for a point a metre to your right β€” small wiggles of your ruler cause that prediction to jump up and down with great uncertainty. (You may notice the wigglability of your ruler seems to depend on $R^2$. If it's near one and the data points fit very close to a straight line, you can't deviate the ruler very far before it looks implausible. If it's near zero and the data points are scattered about all over the place, it feels like you can fit the ruler almost anywhere and the uncertainty in your predictions is much greater. We'll come back to this later.) More scientific than ruler-wiggling, your statistical software can add **confidence intervals** around your regression line to show the uncertainty in $\mathbb{E}(y|x)$, i.e. where the line should be fit, and **prediction intervals**, also known as **forecast intervals**, to show how the likely error in your predictions changes as $x$ varies. For example, we expect only about 1 in 20 data points to lie outside a *95% prediction interval*. In the simulated data set below, 3 out of 100 points lie clearly outside the 95% prediction limits (dashed red lines) and another 3 lie close to the boundary. You'll see these intervals trumpet outwards to show greater uncertainty as you move left or right from the mean $\bar x$: it's more visible for the confidence interval (shaded region), but if you count how many squares wide it is, the prediction interval widens too. So although the SER gives you an overview of how far out your predictions tend to be, you need to tweak it to find the uncertainty of a specific prediction with given $x$ values. But the SER is still useful, because these "personalised" uncertainty estimates scale with the SER β€” if you can halve the SER, you'll halve the width of your prediction intervals too. (The width of confidence intervals also scales with SER, but unlike prediction intervals, is roughly inversely proportional to the square root of the sample size. By collecting sufficient data you can make the uncertainty in your estimate of $\mathbb{E}(y|x)$ very small even if your model's fit is quite poor, i.e. $R^2$ is low, although it's easier β€” requires less data β€” if the fit is good. The uncertainty in your predictions would then be dominated by the random variation $\varepsilon$ about this mean.) [![100 data points with 95% prediction interval](https://i.stack.imgur.com/HizMW.png)](https://i.stack.imgur.com/HizMW.png) We've discussed what the "accuracy" or "badness" of our predictions might mean and how we might measure it, and sources of error in those predictions. Now let's consider what $R^2 = 0.9$ would tell us about them. In a sense, the answer is "nothing"! 
If you were interested in how many units (i.e. centimetres) wrong your height predictions might be, then you need to look at the SER (and for a specific data point, the prediction interval that depends on the SER and the $x$ values). You can have two regressions with identical $R^2$ but very different SERs. This is obvious since $R^2$ is a dimensionless quantity and doesn't scale with your data. Suppose you decided to change all your height data $y$ into metres and re-run the regression. You'll get exactly the same $R^2$ but all your predictions $\hat y$, the SER, and the prediction intervals are, numerically, a hundred times smaller/narrower. And because of the squaring effect, the residual sum of squares, $SS\_{res}$, will be 10,000 times smaller! However, it's clear that your model and its accompanying predictions haven't got 100 or 10,000 times better. In that sense, it's useful that the $R^2$ is dimensionless and remains at 0.9. The $R^2$, SER and $SS\_{res}$ are telling you things on fundamentally different scales. A give-away for this issue is that they were measured in different units, which motivated my choice of "height" for $y$. $R^2$ is a *relative* measure because it's comparing the situation of using your regression model to a situation where you didn't make any use of $x$ to predict $y$. How *can* I predict someone's height if I know nothing about them?! The obvious answer is "just go with the average height". It's easy to prove that, using the ordinary least squares (OLS) approach, the optimal prediction is indeed the mean $\bar y$. This is equivalent to fitting a regression model, called the **null model**, which contains only the intercept term. You can tell your statistical software to do this, and check that it estimates the constant term as $\bar y$. Just how bad are the null model's predictions? Taking a similar approach to before, we can sum the squares of their errors, $\sum (y\_i - \bar y\_i)^2$, to obtain the **total sum of squares**, $SS\_{tot}$. Intuitively, because we know the null model's predictions aren't very good β€” we were hampered by not being able to use the $x$ values to help us make personalised predictions and had to use a "one size fits all" approach β€” we'd expect $SS\_{tot}$ to be a larger number than $SS\_{res}$, since the latter is the sum of squared errors for predictions we'd expect to be superior. In fact, since by definition OLS regression minimises this sum of squares, $SS\_{res}$ cannot be larger than $SS\_{tot}$: even in the worst case, it's always an option to estimate the slope coefficients as zero, so the fitted model duplicates the null model and performs just as badly. Since the fitted model can't fare worse than the null model, it makes sense to treat the null model's performance as a baseline. When people define $R^2$ as the "proportion of total variation explained by your model", the "total variation" they're referring to is $SS\_{tot}$. For $R^2 = 0.9$ it can be useful to flip things around and consider the "10% of total variation that remains unexplained by your model". The stuff your model still didn't get right, even though its predictions make use of the $x$ data, is represented by $SS\_{res}$, and in this case we're saying it's 10% of $SS\_{tot}$. 
Flipping the $R^2$ formula around, we have: $$1 - R^2 = \frac{SS\_{res}}{SS\_{tot}} $$ You can think of $R^2 = 0.9$ as saying the "badness" of your predictions (as measured by the sum of the squares of their errors) is a mere 10% of the "badness" they'd have if you had just predicted everyone's height to be the mean. In other words, a 90% improvement on the null model. There's a theoretical upper limit of $R^2 = 1$ if your model's prediction make no errors and $SS\_{res} = 0$. However, even a correctly-specified regression model which includes all relevant predictors will likely have some purely random variation $\varepsilon$ that you'll never be able to predict. Depending on how large the variance $\sigma^2$ of this random error is compared to the variance of $y$ (we get a formula for this later), you may max out well before 100% of variation can be explained. When predicting height, you can't legislate for measurement error; when predicting someone's wealth in ten years, you can't tell who'll win the lottery (literally or metaphorically). So while $R^2$ tells you proportionately how far down the road from $SS\_{tot}$ (the worst) to zero (perfection) your model has been able to squeeze the $SS\_{res}$ of its predictions by making use of the $x$ data, it doesn't tell you how far along that route is realistically attainable. In social sciences, when dealing with systems that are hard to model and include lots of random variation, $R^2 = 0.3$ can still be a "good" result, even though the residual sum of squares of your predictions is only 30% better than the null model. How does $R^2$ relate to the accuracy/uncertainty of individual predictions? If we use MSE as our metric, it scales (compared to the null model) in a similar way to the sum of squared errors, except that due to its degrees-of-freedom correction we need to use the **adjusted R-squared**, $R^2\_{adj}$. You can think of this as a "penalised" version of $R^2$, using a degrees-of-freedom adjustment to slightly reduce $R^2$ for every extra parameter (e.g. additional variable in a multiple regression) included in the model. The adjustment is only small if the sample size is large. Now, let's return to the scenario where we tried predicting height $y$ without use of any predictors, and resorted to the null model where we predict $\bar y$ for each individual. The MSE of the null model depends on how the true $y$ values are distributed about their mean $\bar y$, so is just the (sample) variance of the observed heights, $S^2\_y$. We hope that by making use of the $x$ data, the MSE of our fitted model will be lower, although a certain proportion of variation will remain unexplained: $$1 - R^2\_{adj} = \frac{MSE}{S^2\_y} $$ So another way of thinking about the "unexplained proportion" is that it's roughly the variance $\sigma^2$ of $\varepsilon$, divided by the variance of $y$ β€” i.e. what proportion of the variance of $y$ is due to the random error your model cannot capture? You may find that more intuitive than "sums of squares". By square-rooting and rearranging this equation, we find a handy formula for the standard error of the regression: $$SER = S\_y \sqrt{1 - R^2\_{adj}} $$ So the SER is directly proportional to the sample standard deviation of $y$; this explains why the SER was scaled down by a factor of 100 when we switched the $y$ data from centimetres to metres. Note that $S\_y$ depends only on the data, not our model. 
We could try to improve our model, by including or excluding predictor variables, or changing the functional form (e.g. including quadratic or cross terms) and this only affects the SER via the $\sqrt{1 - R^2\_{adj}}$ factor. Suppose a new model increases $R^2\_{adj}$ from 0.8 to 0.9; this results in a halving of the "proportion unexplained" from 0.2 to 0.1, but only reduces the SER by a factor of $\sqrt {0.5} \approx 0.707$. The width of our prediction intervals will narrow, but only by about 30%. Halving the prediction intervals would require the "proportion unexplained" to be reduced by a factor of 4, a much trickier feat of modelling. Drawing a larger sample (from the same population) doesn't help much, since the SER is just an estimate of $\sigma$, the standard deviation of $\varepsilon$, so we don't expect it to fall as $n$ increases. Larger $n$ does reduce the sampling error of our coefficient estimates, so that *confidence intervals* around the regression line become very narrow. The consequent reduction in extrapolation error means the prediction intervals no longer trumpet out, and instead their upper and lower limits run parallel to the regression line (illustrated in the graph below where the sample size has been increased to 1000). But even if we are increasingly confident of the true value of $\mathbb{E}(y|x)$ anywhere along our regression line, any observed value of $y$ includes an error term $\varepsilon$ which means it could easily lie $1.96\sigma$ above or below its predicted value. This uncertainty persists even if we take $n \to \infty$, and we can only narrow it down if we can chip away at the proportion of unexplained variance by increasing $R^2\_{adj}$. The effects of mismeasured variables, omitted variables, and other sins all get lumped in to our residuals, and we may be able to strip some of that out by getting our hands on more accurately measured data, or correcting the specification of our model. But if what's left of $\varepsilon$ was fundamentally random and unpredictable, there's nothing we can do to improve our predictive accuracy further. [![Prediction limits with n = 1000 run parallel to regression line](https://i.stack.imgur.com/pSKWk.png)](https://i.stack.imgur.com/pSKWk.png) I hope this makes it clear why $R^2$ and $R^2\_{adj}$ are related to the accuracy of your predictions, but can't tell you the likely magnitude of such errors. We should be somewhat cautious when we quantify uncertainty in predictions, e.g. by using prediction intervals, even if our $R^2$ looks "good". Much depends on the individuals you're making predictions for being drawn from the same distribution as the sample you fitted your model to. When making a prediction from $x$ values that are unusual for your sample, it's nice that prediction intervals account for the additional uncertainty due to extrapolating with a potentially erroneous slope β€” but the more fundamental problem is you cannot know if your model really applies in this region, even if a high $R^2$ shows it fitted well in the region you *did* have data. A classic example is how springs initially extend linearly with load, and a regression model will have excellent fit to data that only used loads well below the spring's elastic limit. It's physically clear the model would give inaccurate predictions when that limit is approached or exceeded, but without data there's no statistical way to determine how bad the errors might be. 
$R^2$ can also be misleadingly high if many irrelevant variables have been included in the regression. Including a variable for which the true regression slope is zero can never reduce $R^2$ and in practice almost always increases it, but obviously such an "improvement" in $R^2$ is not accompanied by improved accuracy of predictions. You get a bit of protection because prediction intervals depend on $R^2\_{adj}$ and aren't so easily fooled by this kind of overfitting: due to the degrees-of-freedom penalty, the *adjusted* $R^2$ *can* fall when irrelevant predictors are included in the model, and if this happens then the SER increases and the prediction intervals widen. But it's better not to overfit your models in the first place :-) ... an obsession with making "better" models, with ever-improved $R-squared$, can be dangerous, and you certainly shouldn't expect ever-improved predictive accuracy from doing so. --- **Technical note** In simple linear regression, the formulas for the prediction and confidence intervals for a data point $(x\_0, y\_0)$ depend on the critical $t$-statistic with the residual degrees of freedom, $t\_{res.df}^{crit}$ (pick from your statistical tables, depending on whether you want e.g. 95% or 99% coverage etc), the variance $S\_x^2$ of $x$, sample size $n$, and the standard error of the regression: $$\text{Confidence limits} = \hat y \pm t\_{res.df}^{crit} \times SER \times \sqrt{\frac{1}{n} + \frac{(x\_0 - \bar x)^2}{(n-1) S\_x^2}} $$ $$\text{Prediction limits} = \hat y \pm t\_{res.df}^{crit} \times SER \times \sqrt{\color{red}{1} + \frac{1}{n} + \frac{(x\_0 - \bar x)^2}{(n-1) S\_x^2}} $$ $R^2$ enters into the interval widths via the SER factor: messy-looking data with a low $R^2$ produces a higher SER and wider intervals, i.e. greater uncertainty in both your estimated conditional mean $\mathbb{E}(y|x)$ and consequent potential for error in your predictions. The $(x\_0 - \bar x)^2$ term is why the intervals widen as you move away from $\bar x$. The red $\color{red}{1}$ in the prediction limit formula makes a big difference. As you increase the sample size, $n \to \infty$, the confidence limits tend to $\hat y \pm 0$ even if $R^2$ is poor and the SER is high, whereas the prediction limits tend to $\hat y \pm z^{crit} \times SER$. This is basically the $\pm 1.96 \sigma$ prediction error you can't shake off. The story for multiple regression is similar. Let $\mathbf{x}\_0$ be the row vector of predictors for the data point you want to make a prediction for. It should look like a row in the *design matrix* $\mathbf{X}$ (i.e. a 1 for the intercept term, then subsequent cells occupied by the variables). Then the formulas become: $$\text{Confidence limits} = \hat y \pm t\_{res.df}^{crit} \times SER \times \sqrt{\mathbf{x}\_0 (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{x}\_0^T} $$ $$\text{Prediction limits} = \hat y \pm t\_{res.df}^{crit} \times SER \times \sqrt{\color{red}{1} + \mathbf{x}\_0 (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{x}\_0^T} $$ Since $SER = \sqrt MSE$, these formulas are often written with the mean squared error written inside the square root, rather than the standard error of the regression written as a factor outside it.
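For readers who want to see these quantities in software, here is a minimal R sketch (simulated, height-like data with arbitrary numbers) showing the SER, the adjusted R-squared, and the two kinds of interval produced by `predict()`:

```r
set.seed(42)
n <- 100
x <- runif(n, 150, 200)
y <- 0.5 * x + rnorm(n, sd = 5)          # linear signal plus irreducible noise
fit <- lm(y ~ x)

summary(fit)$sigma                        # the SER (residual standard error)
summary(fit)$adj.r.squared                # adjusted R-squared

new <- data.frame(x = c(160, 175, 250))   # 250 is a deliberate extrapolation
predict(fit, new, interval = "confidence")  # uncertainty in E(y | x)
predict(fit, new, interval = "prediction")  # wider: adds the irreducible error
```

Increasing `n` shrinks the confidence intervals towards zero width, while the prediction intervals settle at roughly plus or minus 1.96 SER around the fitted line, which is the point made above.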
Suppose we are given a list of $n$ events $E = \{E\_1, E\_2, \ldots, E\_n\}$ where each $E\_i$ is represented by $(s\_i, h\_i, v\_i)$ or $(start, hours, value)$. So if you attend an **entire** event that lasts 5hrs and has a value of 10 then you will gain 10 points. If you only attend for 3 hours, you only get 3\*(10/5) = 6 points. The output that the algorithm should produce is the maximum number of points that can be obtained by attending combinations of various events. **Note:** you are allowed to leave and come back to an event. **My Algorithm (pseudocode):** * Sort all the events $E$ by order of their $v\_i/h\_i$ ratio to produce a new set of events $E'$ such that $E'[1]$ has a higher *value/hours* ratio than $E'[2]$, $E'[3]$, etc... * $max = 0$ * For each event $e \in E'$ + $h =$ hours that have not yet been taken + $max\ += h\* (e\_{v\_i}/e\_{h\_i})$ **Note:** I did not specify in the algorithm how to determine the hours that have been scheduled/taken already but you can assume that there is an array that flags hours that have been scheduled. I do believe that my algorithm gives the optimal result, but how do I prove this?
I'm assuming that the value given to $h$ in each step is "number of hours during the running of the event, which are not yet covered". (Your wording is somewhat ambiguous.) In the following I'll assume that all events start and end at full hours. For real valued times, just replace hours by an amount of time small enough that all the differences between relevant moments (start and end times) are multiples of that amount. From the constraints given, we see that the event visited at one point in time has no influence on the profit we can make at a different moment in time. Thus, the optimal solution will be to visit the most profitable event at each point in time, i.e. during each hour. Now, we only have to show that the algorithm achieves this: Assume that there is an hour $h$ during which the algorithm suggests to visit event $e$, while visiting $e'$ would be more profitable. Since $e'$ is more profitable, it will have been put before $e$ during the first step of the algorithm. Furthermore, since $e'$ has been scheduled for $h$, $h$ must have been available when the algorithm processed $e$ and (since the algorithm never frees an hour) also when it processed $e'$. But this means that the algorithm would have scheduled $e'$ for hour $h$, contradicting our assumption. Thus we conclude that the algorithm is correct.
Current chess algorithms go about 1 or maybe 2 levels down a tree of possible paths depending on the player's moves and the opponent's moves. Let's say that we have the computing power to develop an algorithm that predicts all possible movements of the opponent in a chess game. An algorithm that has all the possible paths that the opponent can take at any given moment depending on the player's moves. Can there ever be a perfect chess algorithm that will never lose? Or maybe an algorithm that will always win? I mean, in theory someone who can predict all the possible moves must be able to find a way to defeat each and every one of them, or simply choose a different path if a certain one will definitely lead him to defeat..... edit-- What my question really is: let's say we have the computing power for a perfect algorithm that can play optimally. What happens when the opponent plays with the same optimal algorithm? That also applies to all 2-player games with a finite number (very large or not) of moves. Can there ever be an optimal algorithm that always wins? Personal definition: An optimal algorithm is a perfect algorithm that always wins... (not one that never loses, but one that always wins)
Your question is akin to the old chestnut: "What happens when an irresistible force meets an immovable object?" The problem is in the question itself: the two entities as described cannot exist in the same logically consistent universe. Your optimal algorithm, an algorithm that always wins, cannot be played by both sides in a game where one side must win and the other must by definition lose. Thus your optimal algorithm as defined cannot exist.
Suppose we are given an array of numbers representing lengths of line segments. Find which three of these segments can be assembled into a triangle with maximum area. I can compute the areas of all $O(n^3)$ possible triangles in $O(n^3)$ time using Heron's formula, and then return the largest. Can this be improved to $O(n^2)$ time? Or even faster?
There is an $O(n^2)$ algorithm. By Heron's formula, the area of a triangle whose sides have length $a$, $b$, and $c$ is $S = \frac{1}{4}\sqrt{4a^2b^2 - (a^2+b^2-c^2)^2}$. Thus if we fix $a$ and $b$, the area is maximized when $c$ is closest to $\sqrt{a^2+b^2}$. A simple approach is to try all $O(n^2)$ possible pairs of $a$ and $b$ and perform binary search for the best value of $c$. This approach yields an $O(n^2 \log n)$ algorithm. To achieve $O(n^2)$, we first sort the lengths of the line segments ($l\_1, l\_2,\ldots, l\_n$) in non-decreasing order. Let $a = l\_i$ and $b = l\_j$ with $i < j$, and let $k$ be the position for which $l\_k$ is closest to $\sqrt{l\_i^2 + l\_j^2}$. As $j$ runs from $i+1$ to $n$, $k$ also increases gradually to $n$. Thus finding $k$ has an $O(1)$ amortized run time, i.e., the whole algorithm takes $O(n^2)$.

Update 11/26/2013: An easier $O(n \log n)$ algorithm is to sort the $l\_i$ in non-decreasing order, and find the $3\leq k \leq n$ that maximizes the area of the triangle assembled from $l\_{k-2}, l\_{k-1}$, and $l\_{k}$. This works because if $i, j < k$ then $S(l\_i, l\_j, l\_k) \leq S(l\_{k-2}, l\_{k-1}, l\_k)$ (which can be proved by taking the derivative of $S$ and using the fact that $l\_k$ is the length of the longest side).
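To make the update concrete, here is a rough Python sketch of the $O(n \log n)$ approach (my own illustration): sort the lengths and evaluate Heron's formula on each consecutive triple, skipping triples that violate the triangle inequality.

```
from math import sqrt

def max_triangle_area(lengths):
    l = sorted(lengths)
    best = 0.0
    for k in range(2, len(l)):
        a, b, c = l[k - 2], l[k - 1], l[k]
        expr = 4 * a * a * b * b - (a * a + b * b - c * c) ** 2  # Heron, squared and rearranged
        if expr > 0:                     # non-degenerate triangle only
            best = max(best, 0.25 * sqrt(expr))
    return best

print(max_triangle_area([1, 2, 3, 4, 10]))   # ~2.905, from the triple (2, 3, 4)
```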
In frequentist statistics, a 95% confidence interval comes from an interval-producing procedure that, if repeated an infinite number of times, would produce intervals containing the true parameter 95% of the time. Why is this useful? Confidence intervals are often misunderstood. They are *not* an interval that we can be 95% certain the parameter is in (unless you are using the similar-looking Bayesian credible interval). Confidence intervals feel like a bait-and-switch to me. The one use case I can think of is to provide the range of values for which we could not reject the null hypothesis that the parameter is that value. Wouldn't p-values provide this information, but better? Without being so misleading? In short: Why do we need confidence intervals? How are they, when correctly interpreted, useful?
So long as the confidence interval is treated as *random* (i.e., looked at from the perspective of treating the data as a set of random variables that we have not seen yet) then we can indeed make useful probability statements about it. Specifically, suppose you have a confidence interval at level $1-\alpha$ for the parameter $\theta$, and the interval has bounds $L(\mathbf{x}) \leqslant U(\mathbf{x})$. Then we can say that:

$$\mathbb{P}(L(\mathbf{X}) \leqslant \theta \leqslant U(\mathbf{X}) | \theta) = 1-\alpha \quad \quad \quad \text{for all } \theta \in \Theta.$$

Moving outside the frequentist paradigm and marginalising over $\theta$ for any prior distribution gives the corresponding (weaker) marginal probability result:

$$\mathbb{P}(L(\mathbf{X}) \leqslant \theta \leqslant U(\mathbf{X})) = 1-\alpha.$$

Once we fix the bounds of the confidence interval by fixing the data to $\mathbf{X} = \mathbf{x}$, we no longer appeal to this probability statement, because we now have fixed the data. However, *if the confidence interval is treated as a random interval* then we can indeed make this probability statement --- i.e., with probability $1-\alpha$ the parameter $\theta$ will fall within the (random) interval.

Within frequentist statistics, probability statements are statements about relative frequencies over infinitely repeated trials. But that is true of *every probability statement* in the frequentist paradigm, so if your objection is to relative frequency statements, that is not an objection that is specific to confidence intervals. If we move outside the frequentist paradigm then we can legitimately say that a confidence interval contains its target parameter with the desired probability, so long as we make this probability statement marginally (i.e., not conditional on the data) and we thus treat the confidence interval in its random sense.

I don't know about others, but that seems to me to be a pretty powerful probability result, and a reasonable justification for this form of interval. I am more partial to Bayesian methods myself, but the probability results backing confidence intervals (in their random sense) are powerful results that are not to be sniffed at. Moreover, even within the context of Bayesian analysis, where we let $\theta$ be a random variable with a prior distribution, we can see that the prior predictive probability that the confidence interval contains the parameter is equal to the confidence level. Thus, even within this alternative paradigm, the confidence interval can be regarded as an estimator that has powerful *a priori* prediction properties.
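As a small illustration of the "random interval" reading (not part of the original answer), here is a quick simulation sketch in Python: it assumes normally distributed data with known standard deviation so that the usual z-interval applies, and checks how often the interval, recomputed from fresh data each time, covers the true parameter. All the numbers are arbitrary.

```
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n, alpha = 5.0, 2.0, 30, 0.05
z = 1.959963984540054                    # 97.5% standard-normal quantile

covered = 0
trials = 100_000
for _ in range(trials):
    x = rng.normal(theta, sigma, n)      # fresh data each trial
    half = z * sigma / np.sqrt(n)
    lo, hi = x.mean() - half, x.mean() + half
    covered += (lo <= theta <= hi)

print(covered / trials)   # close to 0.95: the *procedure* covers theta 95% of the time
```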
I have been working on machine learning and have noticed that dimensionality reduction techniques like `PCA` and `t-SNE` are used most of the time in classical machine learning, but I have rarely seen anyone use them in deep learning projects. Is there a specific reason for not using dimensionality reduction techniques in deep learning?
Deep learning does not use dimensionality reduction because deep learning itself is a useful dimensionality reduction technique. Deep learning learns a compressed, nonlinear representation of the data through the hidden layers. Since deep learning can learn nonlinear mappings, it is a more flexible dimensionality reduction technique than PCA, which is restricted to linear mappings.
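As a concrete (hypothetical) illustration of this point, here is a minimal autoencoder sketch, assuming TensorFlow/Keras is available; the layer sizes, the toy data, and the `bottleneck` name are arbitrary choices of mine. The bottleneck layer plays the role of a learned, nonlinear dimensionality reduction.

```
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 20).astype("float32")     # toy data: 1000 samples, 20 features

autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(3, activation="relu", name="bottleneck"),  # compressed code
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(20),                     # reconstruct the 20 inputs
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)   # learn to reproduce X

# Extract the learned 3-dimensional nonlinear representation.
encoder = tf.keras.Model(autoencoder.input,
                         autoencoder.get_layer("bottleneck").output)
codes = encoder.predict(X, verbose=0)              # shape: (1000, 3)
print(codes.shape)
```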
In terms of references and their implementation on the heap and the stack, how is equality testing for arrays different from that for integers? This is to do with Java programming: if you have a stack and a heap, would equality testing, for example `j == i`, be the same for arrays and for integers? I understand that arrays are stored in the heap and the stack, as they hold bulk data, but integers are only stored in the stack and referenced in the heap. ![this is a picture on how integer variables are stored on the heap and referenced on the heap](https://i.stack.imgur.com/xtIHW.png) I understand that for equality testing `j == i` (variables), the stack pointer will point to the same location. I'm confused about how `j == i` would be different for arrays and integers. Could someone explain?
This is a special case of a [selection algorithm](http://en.wikipedia.org/wiki/Selection_algorithm) that can find the $k$th smallest element of an array, where $k$ is half the size of the array. There is an implementation that is linear in the worst case.

### Generic selection algorithm

First let's see an algorithm `find-kth` that finds the $k$th smallest element of an array:

```
find-kth(A, k)
  pivot = random element of A
  (L, R) = split(A, pivot)
  if k = |L|+1, return pivot
  if k ≤ |L|  , return find-kth(L, k)
  if k > |L|+1, return find-kth(R, k-(|L|+1))
```

The function `split(A, pivot)` returns `L,R` such that all elements in `R` are greater than `pivot` and `L` contains all the others (minus one occurrence of `pivot`). Then all is done recursively. This is $O(n)$ on average but $O(n^2)$ in the worst case.

### Linear worst case: the [median-of-medians algorithm](https://en.wikipedia.org/wiki/Median_of_medians)

A better pivot is the median of all the medians of subarrays of `A` of size 5, computed by recursively calling the procedure on the array of these medians.

```
find-kth(A, k)
  B = [median(A[1], .., A[5]), median(A[6], .., A[10]), ..]
  pivot = find-kth(B, |B|/2)
  ...
```

This guarantees $O(n)$ in all cases. It is not that obvious. These [powerpoint slides](http://web.archive.org/web/20151004200043/http://c3p0demo.googlecode.com/svn/trunk/scalaDemo/script/Order_statistics.ppt) are helpful in explaining both the algorithm and the complexity. Note that most of the time using a random pivot is faster.
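For reference, here is a runnable Python sketch of the average-case-linear version (random pivot); it handles duplicate elements slightly differently from the `split` described above, and the worst-case-linear variant would instead pick the pivot via the median-of-medians step.

```
import random

def find_kth(A, k):                       # k is 1-based: k = 1 gives the minimum
    pivot = random.choice(A)
    L = [x for x in A if x < pivot]
    R = [x for x in A if x > pivot]
    equal = len(A) - len(L) - len(R)      # elements equal to the pivot
    if k <= len(L):
        return find_kth(L, k)
    if k <= len(L) + equal:
        return pivot
    return find_kth(R, k - len(L) - equal)

def median(A):
    return find_kth(A, (len(A) + 1) // 2)

print(median([7, 1, 5, 3, 9]))            # 5
```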
Is there some natural way to understand the essence of relational semantics for parametric polymorphism? I have just started reading about the notion of relational parametricity, à la John Reynolds' "Types, Abstraction and Parametric Polymorphism", and I am having trouble understanding how the relational semantics is motivated. Set semantics makes perfect sense to me, and I realise that set semantics is insufficient to describe parametric polymorphism, but the leap to relational semantics seems to be magic, coming completely out of nowhere. Is there some way of explaining it along the lines of "Assume relations on the base types and terms, and then the interpretation of the derived terms is just the natural relationship between *...such and such a natural thing...* in your programming language"? Or some other natural explanation?
Another possible answer, different from Andrej's, is given by the example of the [$\omega$-set model of polymorphism](http://www.sciencedirect.com/science/article/pii/0304397588900977#). Since every function in the polymorphic calculus is computable, it's natural to interpret a type by a set of numbers which represent the computable functions of that type. Furthermore, it's tempting to identify functions with the same extensional behavior, thus leading to an equivalence relation. The relation is partial if we exclude the "undefined" functions, that is, the functions which "loop" for some well-formed input. The PER models are a generalization of this. Another way to see these models is as a (very) special case of the simplicial set models of [Homotopy Type Theory](http://homotopytypetheory.org/). In that framework, types are interpreted as (a generalization of) sets with relations, and relations between those relations, etc. At the lowest level, we simply have the PER models. Finally, the field of constructive mathematics has seen the appearance of related notions; in particular, [the Set Theory of Bishop](http://ncatlab.org/nlab/show/Bishop+set) involves describing a set by giving both its elements and an explicit equality relation, which must be an equivalence. It's natural to expect that some principles of constructive mathematics make their way into type theory.
Can you suggest a good book with illustrative examples and problems with solutions for finite languages and automata theory? I tried Hopcroft's, but it has mostly theory and not many problems with solutions. Thanks! :)
Try [this](https://cstheory.stackexchange.com/questions/1955/books-on-automata-theory-for-self-study) question to get an idea. I think the classics at universities are Sipser's book and Hopcroft's. For Hopcroft, I would look for the old edition if you are mathematically inclined; they removed a lot of proofs in the newer editions. For specific examples of problems, you'll find solutions here and there on the Internet. It is good not to look at the solution straight away, and having the solution very near can be tempting at times. While looking for books to get an idea of what's in them, I like [this](http://gen.lib.rus.ec/) Internet library. Then you can always buy the book on paper or borrow it from your university library. Hope this helps.

**Some books with solutions (under construction):**

Hopcroft et al.'s [solutions](http://infolab.stanford.edu/~ullman/ialcsols/sols.html) to their book. Apparently, Sipser's book contains solutions after each chapter.
After my Theory of Computation class today, this question popped into my mind: if a problem can be solved by a finite automaton, then the problem belongs to P. I think it's true, since automata recognize very simple languages, so all these languages should have polynomial-time algorithms to solve them. Thus, is it true that any problem solved by a finite automaton is in P?
Yes, it is true. In terms of complexity classes, $$ \text{REG} \subseteq \text{P}, $$ where $\text{REG}$ is the class of regular languages (i.e., problems that can be solved by a finite automaton). More specifically, $$ \text{REG} \subseteq \text{DTIME}(n), \tag{\*} $$ and $\text{DTIME}(n)$ is a strict subset of $\text{P}$ by the time hierarchy theorem. The proof of (\*) is as follows: for any problem in $\text{REG}$, there is a DFA which solves it. Convert that DFA to a Turing machine with the same states and transition function, which always moves to the right until it sees a blank, and then accepts or rejects. This Turing machine always halts in time exactly $n$. --- It's also worth mentioning that $$ \text{REG} = \text{DSPACE}(0) = \text{DSPACE}(k)$$ for any fixed constant $k$.
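To illustrate the linear-time simulation argument, here is a small Python sketch (the DFA itself is a made-up example): accepting or rejecting a word is a single left-to-right pass, one transition per input symbol.

```
def dfa_accepts(delta, start, accepting, word):
    state = start
    for symbol in word:                   # exactly |word| steps: time O(n)
        state = delta[(state, symbol)]
    return state in accepting

# Example DFA over {a, b}: accepts strings with an even number of 'a's.
delta = {("even", "a"): "odd",  ("odd", "a"): "even",
         ("even", "b"): "even", ("odd", "b"): "odd"}

print(dfa_accepts(delta, "even", {"even"}, "abbab"))   # True: "abbab" has two 'a's
```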