Given a graph $G$ where each node has a value $c$ and weight $w$, I want to select a connected subgraph $V^\*$ such that:

1. the sum of all values in $V^\*$ crosses a threshold $t$;
2. the sum of all weights in $V^\*$ (say $w^\*$) is as low as possible.

A practical example is finding the smallest continuous area of a country that hosts at least $x\%$ of the population. In this case, value would be population and weight would be area. I found a [related question](https://cs.stackexchange.com/questions/93877/largest-weight-limited-connected-subgraph-np-complete), but it only asks about the complexity, not the algorithm.

I thought of 0-1 knapsack with the roles of values and weights swapped, so that:

1. the size of the knapsack is $t$, but we are allowed to cross it once;
2. we minimize $w^\*$.

However, I think this won't work, mainly because we can't order the nodes by $value/weight$, and secondly because of the ability to exceed the knapsack size.
This problem is NP-hard. Let $S = \{x\_1, \dots, x\_n\}$ be an instance of Partition. Create a clique $G$ on $n$ nodes $v\_1, \dots, v\_n$. Set both the value and the weight of $v\_i$ to $x\_i$, and set $t = \frac{1}{2}\sum\_{x\_i \in S} x\_i$.

If there is a subset $C$ of $S$ such that $2 \sum\_{x\_i \in C} x\_i = \sum\_{x\_i \in S} x\_i$, then the set of vertices $\{v\_i \mid x\_i \in C\}$ is connected, has total value $t$ and total weight $t$. If there is no such subset $C$, then every subset of vertices of $G$ either has total value smaller than $t$ (and hence is not a feasible solution), or has total value larger than $t$, and hence also weight larger than $t$.

Therefore the answer to the instance of Partition is yes if and only if the optimal solution to your problem has measure (total weight) exactly $t$.
I'm studying pattern recognition and I'm at the part about kernel density estimators. During the introduction of the subject, the book I'm studying (*Pattern Recognition & Machine Learning* by Bishop) takes for granted something I'm not sure I understand. Say we have an unknown pdf $p(x)$ in some D-dimensional space and let us consider some small region $R$ containing $x$. Then, if we make the assumption that $R$ is small enough so that the pdf is roughly constant over the region, we have $$P \approx p(x)V$$ where $V$ is **the volume of $R$**. I'm completely unaware of how this formula was derived or how the volume $V$ appeared there. Any help would be greatly appreciated. Thank you.
It's a simple integral approximation. First, think in 1D. The area under a curve $f(x)$ over a very small x-axis segment (e.g. $[x,x+\Delta x]$) is $\approx f(x)\Delta x$, because $f(x)$ is nearly constant across this region. Similarly, in 2D, the integral of $f(x,y)$ over a small region is $\approx f(x,y)\Delta x\Delta y$. In multiple dimensions, the product of the increments multiplying $f$ is called the *volume*, i.e. $f(x\_1,...,x\_n)\underbrace{\Delta x\_1...\Delta x\_n}\_V$.
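Here is a minimal numerical check of $P \approx p(x)V$, assuming a one-dimensional standard normal pdf (my own illustrative choice):

```
# A minimal numerical check of P ≈ p(x) * V, assuming a standard normal pdf.
from scipy.stats import norm

x, dx = 0.3, 0.01                          # small 1-D region [x, x + dx], so V = dx
P_exact = norm.cdf(x + dx) - norm.cdf(x)   # exact probability mass in the region
P_approx = norm.pdf(x) * dx                # the p(x) * V approximation
print(P_exact, P_approx)                   # the two values agree to several decimals
```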
It's my understanding that Turing's model has come to be the "standard" when describing computation. I'm interested to know why this is the case -- that is, why has the TM model become more widely-used than other theoretically equivalent (to my knowledge) models, for instance Kleene's μ-Recursion or the Lambda Calculus (I understand that the former didn't appear until later on and the latter wasn't originally designed specifically as a model of computation, but it shows that alternatives have existed from the start). All I can think of is that the TM model more closely represents the computers we actually have than its alternatives. Is this the only reason?
One of the nice things about Turing machines is that they work on strings instead of natural numbers or lambda terms, because the input and the output of many problems can be naturally formulated as strings. I do not know if this counts as a “historical” reason or not, though.
I have sensor time-series data for each day. How can I train an LSTM on this, e.g. for classification? I would like the LSTM to train on all examples and not just one, but I only see examples where an LSTM is trained on a single time series.
It is questionable whether doing the PCA and reducing the dimension to 2D was a good idea. You can try experimenting with the number of components in PCA or even trying some non-linear dimensionality reduction like [LLE](https://scikit-learn.org/stable/modules/manifold.html#locally-linear-embedding):

> Locally linear embedding (LLE) seeks a lower-dimensional projection of the data which preserves distances within local neighborhoods. It can be thought of as a series of local Principal Component Analyses which are globally compared to find the best non-linear embedding.
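A minimal sketch of swapping PCA for LLE with scikit-learn, with random data standing in for your features (the matrix shape and neighbour count are illustrative):

```
# A minimal sketch of non-linear dimensionality reduction with LLE (illustrative data).
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

X = np.random.default_rng(0).normal(size=(200, 10))   # placeholder for your feature matrix
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
X_2d = lle.fit_transform(X)
print(X_2d.shape)                                      # (200, 2)
```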
I am given an exercise, and I can't quite figure it out.

> ### The Prisoner Paradox
>
> Three prisoners in solitary confinement, A, B and C, have been sentenced to death on the same day but, because there is a national holiday, the governor decides that one will be granted a pardon. The prisoners are informed of this but told that they will not know which one of them is to be spared until the day scheduled for the executions.
>
> Prisoner A says to the jailer "I already know that at least one of the other two prisoners will be executed, so if you tell me the name of one who will be executed, you won't have given me any information about my own execution".
>
> The jailer accepts this and tells him that C will definitely die.
>
> A then reasons "Before I knew C was to be executed I had a 1 in 3 chance of receiving a pardon. Now I know that either B or myself will be pardoned the odds have improved to 1 in 2.".
>
> But the jailer points out "You could have reached a similar conclusion if I had said B will die, and I was bound to answer either B or C, so why did you need to ask?".
>
> What are A's chances of receiving a pardon and why? Construct an explanation that would convince others that you are right.
>
> You could tackle this by Bayes' theorem, by drawing a belief network, or by common sense. Whichever approach you choose should deepen your understanding of the deceptively simple concept of conditional probability.

Here's my analysis: this looks like the [Monty Hall problem](https://stats.stackexchange.com/questions/373/the-monty-hall-problem-where-does-our-intuition-fail-us), but not quite. If A says `I change my place with B` after he is told C will die, he has a 2/3 chance of being saved. If he doesn't, then I would say his chances of living are 1/3, as when you don't change your choice in the Monty Hall problem. But at the same time, he is in a group of 2 prisoners, one of whom must die, so it is tempting to say that his chances are 1/2. So the paradox is still there; how would you approach this? Also, I have no idea how I could build a belief network for this, so I'm interested to see that.
The answer depends on how the jailer chooses which prisoner to name when he knows that A is to be pardoned. Consider two rules:

1. The jailer chooses between B and C at random, and just happened to say C in this case. Then A's chance of being pardoned is 1/3.
2. The jailer always says C. Then A's chance of being pardoned is 1/2.

All we are told is that the jailer said C, so we don't know which of these rules he followed. In fact, there could be other rules -- perhaps the jailer rolls a die and only says C if he rolls a 6.
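A minimal Monte Carlo sketch of the two rules above (the setup and numbers are only illustrative of the argument):

```
# A minimal Monte Carlo sketch of the two jailer rules described above (assumed setup).
import random

def simulate(always_say_c, trials=200_000):
    said_c = pardoned_a_and_said_c = 0
    for _ in range(trials):
        pardoned = random.choice("ABC")
        if pardoned == "A":
            named = "C" if always_say_c else random.choice("BC")
        elif pardoned == "B":
            named = "C"          # the jailer never names A or the pardoned prisoner
        else:                    # C is pardoned
            named = "B"
        if named == "C":
            said_c += 1
            pardoned_a_and_said_c += pardoned == "A"
    return pardoned_a_and_said_c / said_c   # P(A pardoned | jailer said C)

print(simulate(always_say_c=False))  # ≈ 1/3 (rule 1: random choice between B and C)
print(simulate(always_say_c=True))   # ≈ 1/2 (rule 2: jailer always says C when possible)
```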
How can we describe a decision tree in layman's terms, and what are the major fields that make use of it?
Decision trees are pretty straightforward to understand. Take for example a famous problem where you have to label each passenger on the [Titanic](https://www.kaggle.com/c/titanic). For each person you have a bunch of info (`sex` and `age`, for example), and the `outcome` after the disaster, whether they lived or not. The `DT` tries to find the best pattern in order to correctly classify a person. This is *almost* like what you'd do yourself: for example you may think (correctly) that women and children were the first to get into a lifeboat, and so `sex = Female` and `age < 18` would be two pretty good first splits of the data. These are good because they let you discriminate well between the overall observations (`alive` or `dead`), because a good portion of subjects that are either `Female` or children survived. The `DT` does this, but with some measure of how well a variable splits the data; the variable that discriminates most goes first, and then it continues, building what looks like a tree. See the sketch below for a tiny worked example.

To answer your second question, almost every field can have an application for a `DT`, or at least for its more "advanced" relatives, called *Random Forests* or *Boosting*. All you have to know in layman's terms is that both try to find the best way to classify observations by *averaging* *a lot* of trees. By this I mean that you have `trained` lots of trees on the same data, and for each observation you take the majority label (if most of them said that a person `lived`, then it is probably safe to say so). This covers a lot of applications, from anomalous-transaction detection in banks, to medical diagnosis, some regression problems, and even handwritten image recognition.
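A minimal sketch with a made-up, Titanic-like toy dataset (the columns and values are invented for illustration, not the real Kaggle data):

```
# A minimal decision-tree sketch on an invented, Titanic-like toy dataset.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.DataFrame({
    "sex_female": [1, 1, 0, 0, 0, 1, 0, 1],
    "age":        [8, 30, 40, 12, 55, 22, 35, 60],
    "survived":   [1, 1, 0, 1, 0, 1, 0, 1],
})
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(df[["sex_female", "age"]], df["survived"])

# Print the learned splits -- with data like this they tend to involve sex and age,
# mirroring the "women and children first" intuition described above.
print(export_text(tree, feature_names=["sex_female", "age"]))
```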
I want to do a statistical test of the following business assumptions:

1. Higher duration is associated with a lower score.
2. However, hypothesis (1) may or may not be true for all survey reasons.

What I am thinking of is to do a regression with interaction terms: score = duration + reason + duration \* reason

My questions:

1. Is it possible to have a categorical \* continuous variable interaction? Most resources I saw online only show instances where the categorical variable is binary; I have a multi-group categorical variable.
2. Is there a graphical way of showing this?
Yes, it's possible. Suppose your categorical variable has $k$ levels: you'll need $(k-1)$ binary indicators to represent them, and another $(k-1)$ interaction terms with the continuous variable to model the interaction correctly. In essence, it's just a regression model that allows each level of the categorical variable to have its own slope and intercept (whereas without the interaction, each level can have its own intercept, but the slopes are bound to be the same).

Given the model:

$y = 50 + 100(Lv2) + 200(Lv3) + 2.5 x + 3.5(x\times Lv2) - 6.5(x \times Lv3)$

where $Lv2$ and $Lv3$ are binary dummy variables representing levels 2 and 3 of the categorical variable, respectively. Here $k = 3$ and we kept $Lv1$ as the reference group.

It's easy to visualize this once we realize it is just a compact way to express three regression lines. If we substitute 1 and 0 into the regression model accordingly, we find that the equations are:

for $Lv1$: $y = 50 + 2.5 x$

for $Lv2$: $y = (50 + 100) + (2.5+3.5) x$

for $Lv3$: $y = (50 + 200) + (2.5-6.5) x$

If we plot the predicted y, $\hat{y}$, against the continuous variable and assign different colours by the categorical variable's levels, we get:

[![enter image description here](https://i.stack.imgur.com/TS3j1.png)](https://i.stack.imgur.com/TS3j1.png)

The red line is group 1, with slope 2.5 and intercept 50; the green line is group 2; the blue line is group 3. There are more sophisticated ways (for example, it's possible to plot 95% CI shading), but this gives the overall gist.

R code I used:

```
set.seed(1520)
x <- rep(0:199, 3)
group <- as.factor(rep(1:3, rep(200,3)))
lv2 <- as.numeric(group==2)
lv3 <- as.numeric(group==3)
y <- 50 + 100 * lv2 + 200 * lv3 + 2.5 * x + 3.5 * (x * lv2) - 6.5 * (x * lv3) + rnorm(600, 0, 15)

# Without interactions, lines will have to be parallel:
m01 <- lm(y ~ x + group)
summary(m01)
yhat <- m01$fit
plot(x, yhat, col=as.numeric(group)+1)

# With interactions, lines can have their own slope:
m02 <- lm(y ~ x + group + x:group)
summary(m02)
yhat <- m02$fit
plot(x, yhat, col=as.numeric(group)+1)
```

---

> Just to clarify, what do you mean by 'Without interactions, lines will have to be parallel?' Is that parallel vs reference group?

Correct, we can simulate the data again, but this time we don't change the slopes for $Lv2$ and $Lv3$ (aka, we replace the slope adjustments 3.5 and -6.5 with 0):

```
set.seed(1520)
x <- rep(0:199, 3)
group <- as.factor(rep(1:3, rep(200,3)))
lv2 <- as.numeric(group==2)
lv3 <- as.numeric(group==3)
y <- 50 + 100 * lv2 + 200 * lv3 + 2.5 * x + 0 * (x * lv2) + 0 * (x * lv3) + rnorm(600, 0, 15)

m03 <- lm(y ~ x + group + x:group)
summary(m03)
yhat <- m03$fit
plot(x, yhat, col=as.numeric(group)+1)
```

Here is the output:

```
Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 4.980e+01  2.011e+00  24.761   <2e-16 ***
x           2.487e+00  1.748e-02 142.279   <2e-16 ***
group2      9.942e+01  2.844e+00  34.956   <2e-16 ***
group3      1.986e+02  2.844e+00  69.833   <2e-16 ***
x:group2    8.207e-03  2.472e-02   0.332    0.740    
x:group3    1.398e-02  2.472e-02   0.566    0.572    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```

And here is the predicted y:

[![enter image description here](https://i.stack.imgur.com/byrs7.png)](https://i.stack.imgur.com/byrs7.png)

As we can see, if we don't adjust the lines' slopes, the interaction terms above (x:group2 and x:group3) will be close to 0, and because of that, the groups' predicted y will be close to parallel.
> So in the graphical illustration you showed, no lines are parallel, therefore there is no interaction?

No, the other way around. Interaction means that the association between an independent variable and the dependent variable actually depends on the value of another independent variable. In the general case, a regression model like:

$$y = \beta\_0 + \beta\_1 x\_1+ \beta\_2 x\_2$$

indicates that for one unit increase in $x\_1$, mean $y$ differs by $\beta\_1$ units, regardless of what value $x\_2$ takes. Applied to your situation, when there is no interaction, each unit increase in the continuous independent variable is associated with the same amount of change in mean $y$, regardless of which group we are talking about. That scenario means the lines have to be parallel. When an interaction exists, each unit increase in the continuous independent variable is associated with an amount of change in mean $y$ that depends on which level of the categorical variable we're talking about, and that implies the lines are not parallel. In the example I provided above, +3.5 adds an extra 3.5 to the slope 2.5 for $Lv2$ and -6.5 takes 6.5 away from the slope 2.5 for $Lv3$. If these two coefficients are different from zero, we have a significant interaction and the lines are not parallel; if they are close to zero, we don't have evidence of an interaction, and the lines are parallel.

> Also how do I interpret the coefficients and p-value of the interaction terms? Is it just the same as how coefficients and p-values of categorical variables are interpreted?

First, to safeguard against multiple testing, we test whether the whole set of interaction terms is significant or not using an extra sum of squares F test:

```
m01 <- lm(y ~ x + group)
m02 <- lm(y ~ x + group + x:group)
anova(m01, m02)
```

If this test is significant, then at least one of the interaction terms in the model is significant. We can then go on to look at each of their p-values and discuss where the difference might be coming from. The coefficients (e.g. the 3.5 and -6.5 in the model above) are really just differences in slopes. So, given that the reference group has a slope of 2.5, we can report that $Lv2$ has a significant increase in slope of 3.5, resulting in a final slope of 6.0. For the same reason the slope for $Lv3$ is (2.5 - 6.5) = -4. To put this all into context, a unit increase in x is then associated with:

a 2.5 unit increase in mean $y$ in $Lv1$ of the categorical variable,

a 6.0 unit increase in mean $y$ in $Lv2$ of the categorical variable,

a 4.0 unit decrease in mean $y$ in $Lv3$ of the categorical variable.
I am experimenting a bit with autoencoders, and with TensorFlow I created a model that tries to reconstruct the MNIST dataset. My network is very simple: X, e1, e2, d1, Y, where e1 and e2 are encoding layers and d1 and Y are decoding layers (Y is the reconstructed output). X has 784 units, e1 has 100, e2 has 50, d1 has 100 again and Y 784 again. I am using sigmoids as activation functions for layers e1, e2, d1 and Y. Inputs are in [0,1] and so should be the outputs. Well, I tried using cross-entropy as the loss function, but the output was always a blob, and I noticed that the weights from X to e1 would always converge to a zero-valued matrix. On the other hand, using mean squared error as the loss function produces a decent result, and I am now able to reconstruct the inputs. Why is that so? I thought I could interpret the values as probabilities, and therefore use cross-entropy, but obviously I am doing something wrong.
I think the best answer to this is that the cross-entropy loss function is just not well-suited to this particular task. In taking this approach, you are essentially saying the true MNIST data is binary, and your pixel intensities represent the probability that each pixel is 'on'. But we know this is not actually the case. The incorrectness of this implicit assumption is then causing us issues. We can also look at the cost function and see why it might be inappropriate. Let's say our target pixel value is 0.8. If we plot the MSE loss, and the cross-entropy loss $- [ (\text{target}) \log (\text{prediction}) + (1 - \text{target}) \log (1 - \text{prediction}) ]$ (normalising this so that its minimum is at zero), we get:

[![cross-entropy vs. mse loss](https://i.stack.imgur.com/CWK4C.png)](https://i.stack.imgur.com/CWK4C.png)

We can see that the cross-entropy loss is asymmetric. Why would we want this? Is it really worse to predict 0.9 for this 0.8 pixel than it is to predict 0.7? I would say it's maybe better, if anything. We could probably go into more detail and figure out why this leads to the specific blobs that you are seeing. I'd hazard a guess that it is because pixel intensities are above 0.5 on average in the region where you are seeing the blob. But in general this is a case of the implicit modelling assumptions you have made being inappropriate for the data. Hope that helps!
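If you want to reproduce that comparison yourself, here is a minimal sketch for a target pixel value of 0.8 (assuming NumPy and matplotlib are available):

```
# A minimal sketch comparing MSE and shifted cross-entropy loss for a 0.8 target pixel.
import numpy as np
import matplotlib.pyplot as plt

target = 0.8
p = np.linspace(0.01, 0.99, 200)                 # predicted pixel value

mse = (p - target) ** 2
xent = -(target * np.log(p) + (1 - target) * np.log(1 - p))
xent -= xent.min()                               # shift so the minimum sits at zero, as in the plot

plt.plot(p, mse, label="MSE")
plt.plot(p, xent, label="cross-entropy (shifted)")
plt.axvline(target, linestyle="--")
plt.xlabel("prediction"); plt.ylabel("loss"); plt.legend()
plt.show()
```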
What are the differences between "inference" and "estimation" in the context of **machine learning**? As a newbie, I feel that we *infer* random variables and *estimate* the model parameters. Is this understanding right? If not, what are the differences exactly, and when should I use which? Also, which one is the synonym of "learn"?
Well, there are people from different disciplines making their careers in the area of ML today, and it's likely that they speak slightly different dialects. However, whatever terms they might use, the concepts behind them are distinct. So it's important to get these concepts clear, and then translate those dialects in the way that you prefer. E.g., in PRML by Bishop,

> inference stage in which we use training data to learn a model for $p(C\_k|x)$

So here it seems that `Inference` = `Learning` = `Estimation`. But in other material, inference may differ from estimation, where `inference` means `prediction` while `estimation` means the learning procedure of the parameters.
I have created a very basic model of an array list in the B method, as shown below:

```
MACHINE Array(TYPE)
VARIABLES block
INVARIANT block : seq(TYPE)
INITIALISATION block := []
OPERATIONS
  add(e) = PRE e : TYPE THEN block := block <- e END;
  remove = PRE block /= [] THEN block := tail(block) END;
  res <-- showArray = res := block;
  res <-- getfront = PRE block /= [] THEN res := first(block) END
END
```

However, what I am trying to achieve is a model that prevents deletion from the list: once an element is added, it cannot be deleted. What should I add to the invariant to achieve this?
You cannot add such a property to your invariant because the invariant describes what a valid state looks like, not how it can be altered. The operations are responsible for specifying the behaviour of the model. If you want to prevent an element from ever being removed from the array, simply do not specify that as a possible behaviour. In your example above, just remove the operation `remove`. Every valid refinement is then not allowed to remove an element.
I've created a [Non-metric MultiDimensional Scaling (NMDS)](https://jonlefcheck.net/2012/10/24/nmds-tutorial-in-r/) ordination from a Bray-Curtis dissimilarity matrix. (The starting data were basal areas of various tree species across multiple research plots.) I'd like to determine correlations of various plot-level environmental variables (e.g., soil chemistry, topography, elevation, etc.) with my two NMDS ordination axes. Correlations between the ordination axes and environmental variables will be calculated with Pearson's $r^2$.

* For reference, I've chosen to do this using the [cor2m()](http://127.0.0.1:12048/library/ecodist/html/cor2m.html) and [vf()](http://127.0.0.1:12048/library/ecodist/html/vf.html) functions available in the ecodist package (vs vegan) in R.

My question: **Do I have to scale/standardize my environmental variables before calculating correlations with my NMDS ordination axes?** I ask because my variables cover multiple orders of magnitude: some of my variables have values in the 1000s while others have values in the hundre*dths*. If the answer is *yes*, what is the appropriate method? If the answer is *no*, why not?
Surveys are relatively expensive, which leads to a different balance between cost and value. A single Monte Carlo replication is extremely cheap, so you can keep repeating it until the marginal cost of one more simulation exceeds its marginal value.
Apparently, deep neural networks have been making an impact recently. The layer-by-layer training of these networks has made it feasible to construct complex, deep, and well-performing neural networks. Still, I feel that some applications of deep learning models might benefit from global optimization approaches that do not easily get stuck in local optima. I haven't seen any research in this direction, though. Couldn't evolutionary/biologically inspired algorithms (e.g., Particle Swarm Optimization, Differential Evolution) be used to make deep learning models more powerful? Or is the computing power necessary for this particular combination of techniques currently a limiting factor?
As Cagdas Ozgenc points out in his answer, there is a simple sufficient condition that renders this question moot: that the likelihood be concave in the parameter space (almost surely with respect to the sampling distribution of the data). That covers many interesting cases (e.g., the exponential family), but basically leaves everything else out. I don't have an answer here, but think that there are several ways this question could be refined or restated:

1. What properties does the MLE have in finite samples, and under what models? Although everyone likes the MLE, its usage (AFAIK) is predicated on asymptotic guarantees. I can't think of any finite-sample guarantees for it.
2. What properties, if any, does a local maximum have?

Bibliography
------------

"Evaluation of the Maximum-Likelihood Estimator where the Likelihood Equation has Multiple Roots", V. D. Barnett, 1966. **The Cauchy distribution with location parameter offers a canonical example of a likelihood with multiple roots (even asymptotically).**

["Testing for a Global Maximum of the Likelihood"](http://math.univ-lille1.fr/~biernack/index_files/testML_full_version.pdf), Christophe Biernacki, 2005. **A test for consistency of a root of the likelihood equation, based on comparing the observed maximized likelihood to its expected value under the putative argmax.**

["Eliminating Multiple Root Problems in Estimation"](http://projecteuclid.org/download/pdf_1/euclid.ss/1009213001), Small, Wang, Yang, 2000. **If you are going to read one paper, this is probably it. Discusses all of the above, also in the context of generalized estimating equations, and suggests smoothing or penalizing the likelihood to help resolve multiple roots.**
I am building a propensity model using logistic regression for a utility client. My concern is that, out of the total sample, my 'bad' accounts are just 5%, and the rest are all good. I am predicting 'bad'.

* Will the result be biased?
* What is the optimal 'bad to good' proportion to build a good model?
I disagreed with the other answers in the comments, so it's only fair I give my own. Let $Y$ be the response (good/bad accounts), and $X$ be the covariates. For logistic regression, the model is the following:

$\log\left(\frac{p(Y=1|X=x)}{p(Y=0|X=x)}\right)= \alpha + \sum\_{i=1}^k x\_i \beta\_i $

Think about how the data might be collected:

* You could select the observations randomly from some hypothetical "population".
* You could select the data based on $X$, and see what values of $Y$ occur.

Both of these are fine for the above model, as you are only modelling the distribution of $Y|X$. These would be called a *prospective study*. Alternatively:

* You could select the observations based on $Y$ (say 100 of each), and see the relative prevalence of $X$ (i.e. you are stratifying on $Y$). This is called a *retrospective* or *case-control study*.

(You could also select the data based on $Y$ and certain variables of $X$: this would be a stratified case-control study, and is much more complicated to work with, so I won't go into it here.)

There is a nice result from epidemiology (see [Prentice and Pyke (1979)](http://biomet.oxfordjournals.org/content/66/3/403.short)) that for a case-control study, the maximum likelihood estimates of $\beta$ can be found by logistic regression, that is, using the prospective model for retrospective data.

So how is this relevant to your problem? Well, it means that if you are able to collect more data, you could just look at the bad accounts and still use logistic regression to estimate the $\beta\_i$'s (but you would need to adjust the $\alpha$ to account for the over-representation). Say it costs \$1 for each extra account; then this might be more cost-effective than simply looking at all accounts. But on the other hand, if you already have ALL possible data, there is no point in stratifying: you would simply be throwing away data (giving worse estimates), and then be left with the problem of trying to estimate $\alpha$.
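As an aside, here is a minimal sketch of the kind of intercept adjustment mentioned above, assuming you down-sample the good accounts at a known rate. The data, rates and numbers are purely illustrative, and the correction shown is the standard log-of-sampling-ratio offset:

```
# A minimal sketch: fit on a case-control style subsample, then correct the intercept
# by the log of the sampling-rate ratio (all numbers here are illustrative).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(-3.0 + 1.0 * x)))           # true model: rare outcome (~5% "bad")
y = rng.binomial(1, p)

# Keep all bad accounts, but only 10% of the good ones.
keep = (y == 1) | (rng.random(n) < 0.10)
xs, ys = x[keep], y[keep]

fit = sm.Logit(ys, sm.add_constant(xs)).fit(disp=0)
alpha_hat, beta_hat = fit.params

# The slope is consistent; the intercept is shifted by log(rate_bad / rate_good).
alpha_corrected = alpha_hat - np.log(1.0 / 0.10)  # bad kept at rate 1, good at rate 0.10
print(beta_hat, alpha_corrected)                  # both should be close to 1.0 and -3.0
```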
Are there any algorithms/methods for taking a trained model and reducing its number of weights with as little negative effect as possible to its final performance? Say I have a very big (too big) model which contains X weights and I want to cut it down to have 0.9\*X weights with as little damage as possible to the final performance (or maybe even to the highest possible gain in some cases). Weight reduction occurs either by changing the model's basic architecture and removing layers or by reducing feature depth in said layers. Obviously after reduction some fine-tuning of the remaining weights will be required.
You might wanna check: <http://yann.lecun.com/exdb/publis/pdf/lecun-90b.pdf> And a more recent paper on the topic: <https://arxiv.org/pdf/1506.02626v3.pdf> However, I was not able to find an implementation of these two. So you will need to implement it yourself.
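While I don't know of a reference implementation either, the core idea of the second paper (magnitude-based pruning followed by fine-tuning) is simple enough to sketch; this is my own illustrative NumPy version, not code from either paper:

```
# A minimal sketch of magnitude-based pruning: zero out the smallest-magnitude weights,
# then fine-tune the survivors. Plain NumPy here; in practice you'd apply the mask to
# your framework's tensors.
import numpy as np

def magnitude_prune(weights, fraction=0.1):
    """Return (pruned_weights, mask) with roughly the smallest `fraction` of weights zeroed."""
    threshold = np.quantile(np.abs(weights).ravel(), fraction)  # cut-off below which weights go
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 50))
w_pruned, mask = magnitude_prune(w, fraction=0.1)
print(1 - mask.mean())                                          # ≈ 0.1 of the weights are now zero
```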
I have currently been tasked with designing an application that tracks several different measurements around the office, eg. the temperature, light, presence of people, etc. Having never really worked on data analysis before, I would like some guidance on how to store this data (which database design to use). What we're looking at currently are around 50 sensors that only send data when an event of interest occurs: if the temperature changes by 0.5 degrees or if the light turns on/off or if a room becomes occupied/vacant. So, the data will only be updated every few seconds. Also, in the future, I'd like to analyse some of the data. Hence, the data must be persistent in the database. What kind of technologies would you suggest to carry out this task?
The best choice of storage technology will depend largely on how much data (in terms of bytes) you expect to accumulate over the lifetime of your project, so the first thing I would do is try to get some sample data, or make some educated guesses (e.g. how many bytes one temperature recording takes up × how many change events you expect per day × how many temperature sensors × how many days' worth of data you want to store and analyse over time). Once you have a rough idea of how much data you need to store and analyse, you can use that to start narrowing down your choices.

There's no right answer, and others may disagree, but I would suggest that if you're dealing with anything less than terabytes of data, you don't need Hadoop (I noticed that's a tag in your question). Hadoop is not really a data storage solution (although it does have its own file system called HDFS or just DFS); it's more of a framework for processing and transforming huge quantities of data. Also, if you don't have thousands of events per second to record, you probably don't need NoSQL solutions either.

For storage of structured data, given that you've never really done data analysis before, SQL databases are probably the way to go if you have gigabytes or less, and SQL will be easier and more useful to learn: it's mature, has been around for ages and is still the go-to standard in most industries, so there are [plenty of learning resources](http://www.w3schools.com/sql/). Maybe try out MySQL Community Edition server (free, open source) as a start; I would also recommend MySQL Workbench to help you get started (a bunch of GUI tools you can use to mess around with SQL while learning).

PS I don't know anything about capturing signals from sensors, so maybe there are more appropriate technologies which I'm not aware of!
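To make the "structured SQL table" suggestion concrete, here is a minimal sketch of the kind of event table you might start with. It uses SQLite purely so the example is self-contained; the same schema carries over to MySQL, and all names and columns are illustrative:

```
# A minimal sketch of a sensor-event table (SQLite used for portability; schema is illustrative).
import sqlite3

conn = sqlite3.connect("sensors.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS sensor_events (
        id          INTEGER PRIMARY KEY,
        sensor_id   INTEGER NOT NULL,   -- which of the ~50 sensors fired
        recorded_at TEXT    NOT NULL,   -- ISO-8601 timestamp of the event
        kind        TEXT    NOT NULL,   -- 'temperature', 'light', 'occupancy', ...
        value       REAL    NOT NULL    -- e.g. 21.5 degrees, or 1/0 for on/off
    )
""")
conn.execute(
    "INSERT INTO sensor_events (sensor_id, recorded_at, kind, value) VALUES (?, ?, ?, ?)",
    (12, "2024-01-15T09:30:00", "temperature", 21.5),
)
conn.commit()

# Later analysis is plain SQL, e.g. average temperature per sensor:
for row in conn.execute(
    "SELECT sensor_id, AVG(value) FROM sensor_events WHERE kind = 'temperature' GROUP BY sensor_id"
):
    print(row)
```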
Real-world data sometimes has a natural number of clusters (trying to cluster it into a number of clusters smaller than some magic k causes a dramatic increase in the clustering cost). Today I attended a lecture by Dr. Adam Meyerson and he referred to that type of data as "separable data". What are some clustering formalizations, other than K-means, that could be amenable to clustering algorithms (approximations or heuristics) that would exploit natural separability in data?
One [recent model](http://portal.acm.org/citation.cfm?id=1496886) trying to capture such a notion is by Balcan, Blum, and Gupta '09. They give algorithms for various clustering objectives when the data satisfies a certain assumption: namely that if the data is such that any $c$-approximation for the clustering objective is $\epsilon$-close to the optimal clustering, then they can give efficient algorithms for finding an almost-optimal clustering, even for values of $c$ for which finding the $c$-approximation is NP-Hard. This is an assumption about the data being somehow "nice" or "separable." Lipton has [a nice blog post](http://rjlipton.wordpress.com/2010/05/17/the-shadow-case-model-of-complexity/) on this. Another similar type of condition about data given in [a paper](http://www.cs.huji.ac.il/~nati/PAPERS/stable_instance.pdf) by Bilu and Linial '10 is perturbation-stability. Basically, they show that if the data is such that the optimal clustering doesn't change when the data is perturbed (by some parameter $\alpha$) for large enough values of $\alpha$, one can efficiently find the optimal clustering for the original data, even when the problem is NP-Hard in general. This is another notion of stability or separability of the data. I'm sure there is earlier work and earlier relevant notions, but these are some recent theoretical results related to your question.
When searching graphs, there are two easy algorithms: **breadth-first** and **depth-first** (usually done by adding all adjacent graph nodes to a queue (breadth-first) or stack (depth-first)). Now, are there any advantages of one over the other? The ones I could think of:

* If you expect your data to be pretty far down inside the graph, *depth-first* might find it earlier, as you are going down into the deeper parts of the graph very fast.
* Conversely, if you expect your data to be pretty far up in the graph, *breadth-first* might give the result earlier.

Is there anything I have missed or does it mostly come down to personal preference?
Breadth-first and depth-first certainly have the same worst-case behaviour (the desired node is the last one found). I suspect this is also true for the average case if you don't have information about your graphs. One nice bonus of breadth-first search is that it finds shortest paths (in the sense of fewest edges), which may or may not be of interest. If your average node degree (number of neighbours) is high relative to the number of nodes (i.e. the graph is dense), breadth-first will have huge queues while depth-first will have small stacks. In sparse graphs, the situation is reversed. Therefore, if memory is a limiting factor, the shape of the graph at hand may have to inform your choice of search strategy.
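A minimal sketch of the queue-versus-stack point: the two searches can share one body, differing only in which end of the frontier they pop from (the toy graph is just for illustration):

```
# A minimal sketch of BFS vs DFS sharing one body; only the pop end differs.
from collections import deque

def search(graph, start, breadth_first=True):
    """graph: dict mapping node -> list of neighbours. Returns nodes in visit order."""
    frontier, seen, order = deque([start]), {start}, []
    while frontier:
        node = frontier.popleft() if breadth_first else frontier.pop()  # queue vs stack
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(neighbour)
    return order

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(search(g, "A", breadth_first=True))   # ['A', 'B', 'C', 'D'] -- level by level
print(search(g, "A", breadth_first=False))  # ['A', 'C', 'D', 'B'] -- dives deep first
```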
I read on Wikipedia and in lecture notes that if a lossless data compression algorithm makes a message shorter, it must make another message longer. E.g., in this set of notes, it says:

> Consider, for example, the 8 possible 3 bit messages. If one is compressed to two bits, it is not hard to convince yourself that two messages will have to expand to 4 bits, giving an average of 3 1/8 bits.

There must be a gap in my understanding, because I thought I could compress all 3 bit messages this way:

* Encode: If it starts with a zero, delete the leading zero.
* Decode: If the message is 3 bits, do nothing. If the message is 2 bits, add a leading zero.
* Compressed set: 00, 01, 10, 11, 100, 101, 110, 111

What am I getting wrong? I am new to CS, so maybe there are some rules/conventions that I missed?
You are missing an important nuance: how would you know whether the message is only 2 bits, or whether those 2 bits are part of a bigger message? To know that, you must also encode where the message starts and where it ends. Such a delimiter would have to be a new symbol, because 1 and 0 are already used. If you introduce such a symbol and then re-encode everything into binary, you will end up with an even longer code.
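A minimal sketch of the ambiguity, assuming the "drop the leading zero" scheme from the question and messages sent back-to-back as a single bit stream:

```
# A minimal sketch of the ambiguity: two different message sequences yield the same bits.
def encode(message):                   # message is a 3-bit string
    return message[1:] if message.startswith("0") else message

print(encode("010") + encode("110"))   # '10' + '110' -> '10110'
print(encode("101") + encode("010"))   # '101' + '10' -> '10110'
# The receiver cannot tell these apart, so extra information (a separator or a length)
# is needed -- and accounting for it removes the apparent saving.
```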
There is a large literature on "property testing" -- the problem of making a small number of black-box queries to a function $f\colon\{0,1\}^n \to R$ to distinguish between two cases:

1. $f$ is a member of some class of functions $\mathcal{C}$;
2. $f$ is $\varepsilon$-far from every function in class $\mathcal{C}$.

The range $R$ of the function is sometimes Boolean: $R = \{0,1\}$, but not always. Here, $\varepsilon$-far is generally taken to mean Hamming distance: the fraction of points of $f$ that would need to be changed in order to place $f$ in class $\mathcal{C}$. This is a natural metric if $f$ has a Boolean range, but seems less natural if the range is, say, real-valued. My question: does there exist a strand of the property-testing literature that tests for closeness to some class $\mathcal{C}$ with respect to other metrics?
Yes, there is! I will give three examples:

1. Given a set S and a "multiplication table" over S x S, consider the problem of determining whether the input describes an abelian group or whether it is far from one. [Friedl, Ivanyos, and Santha in STOC '05](http://doi.acm.org/10.1145/1060590.1060614) showed that there is a property tester with query complexity polylog(|S|) when the distance measure is with respect to the *edit distance* of multiplication tables, which allows addition and deletion of rows and columns from the multiplication table. The same problem was also considered in the Hamming distance model by [Ergun, Kannan, Kumar, Rubinfeld and Viswanathan (JCSS '00)](http://dx.doi.org/10.1006/jcss.1999.1692), where they showed query complexity $\tilde{O}(|S|^{3/2})$.
2. There is a large amount of work on testing graph properties where the graphs are represented using adjacency lists and there is a bound on the degree of each vertex. In this case, the distance model is not exactly Hamming distance but rather how many edges can be added or deleted while preserving the degree bound.
3. In the closely related study of testing properties of distributions, various notions of distance between distributions have been studied. In this model, the input is a probability distribution over some set and the algorithm gets access to it by sampling from the set according to the unknown distribution. The algorithm is then required to determine whether the distribution satisfies some property or is "far" from it. Various notions of distance have been studied here, such as $L\_1$, $L\_2$, and earthmover distance. Probability distributions over infinite domains have also been studied here ([Adamaszek-Czumaj-Sohler, SODA '10](http://www.siam.org/proceedings/soda/2010/SODA10_006_adamaszekm.pdf)).
I have an example of a reduction from 3CNF to Clique; there is one thing I don't get about it, and hopefully you can clarify it. The reduction works like this:

> Construct a graph G = (V, E) as follows:
>
> Vertices: Each literal corresponds to a vertex.
>
> Edges: All vertices are connected by an edge except the vertices of the same clause and vertices with negated literals.

Why is it important that negated literals are not connected? How would that affect the reduction?
Rather than your approach, I suggest you formulate this as an integer linear program and feed it to an off-the-shelf ILP solver. Alternatively, formulate it as a SAT problem and feed it to a SAT solver: you'll probably need to take the decision-problem version, where you ask whether there exists a subset of $k$ non-overlapping squares, and then use binary search on $k$. Those would be the first approaches I would try, personally. (See the ILP sketch after this answer.)

---

If you definitely want to try your approach based upon a "separating line", then I think the best way to answer your question is going to be to pick a representative set of problem instances, and try some different heuristics on them to see which seems to work best. My intuition suggests that the best way to select the separator line may be to pick the line $L$ that intersects as few squares as possible, without worrying about how balanced the division is (though if there is a tie among multiple lines that each intersect the same number of squares, you could always use "how balanced the division is" as a tie-breaker). The reason is that you get an exponential multiplicative increase in the running time each time you enumerate all subsets of the squares that intersect the line $L$. I think your prime consideration is going to be keeping that blowup down. But that's just my intuition, and my intuition might be wrong. I think you need to do the experiment to find out empirically what works best.

If you do apply your separating-line approach, you might consider using a branch-and-bound approach. Keep track of the best solution you've found so far (i.e., the largest set of non-overlapping squares you've been able to find so far); say it is of size $s$ at any point in time. Now, any time your search tree enters a subtree where you can prove that all solutions below the subtree will have size $\le s$, there is no need to explore that subtree. For instance, if you have a line $L$ and a subset $X$ where you know that the size of $X$, plus the size of the largest sub-collection of non-intersecting squares above $L$, plus the total number of squares below $L$, is $\le s$, then there is no need to recursively compute the largest sub-collection of non-intersecting squares below $L$, which saves you one recursive call. But if you use an off-the-shelf ILP solver, it will already implement this sort of branch-and-bound heuristic for you -- hence my advice to start by formulating this as an ILP problem and applying an off-the-shelf ILP solver.

---

Finally, the following paper apparently describes an $O(2^{\sqrt{n}})$ time algorithm to compute the exact solution to your problem. This is an improvement over the obvious algorithm that enumerates all possible subsets of squares, which takes $O(2^n)$ time.

An application of the planar separator theorem to counting problems. S.S. Ravi and H.B. Hunt III. Information Processing Letters, Volume 25, Issue 5, 10 July 1987, Pages 317–321. <http://www.sciencedirect.com/science/article/pii/0020019087902067>
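For concreteness, here is a minimal ILP sketch of the first suggestion, using the PuLP library with a toy instance (the square representation and overlap test are my own illustrative choices):

```
# A minimal ILP sketch: pick the largest subset of pairwise non-overlapping squares.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, PULP_CBC_CMD

squares = [(0, 0, 2), (1, 1, 2), (3, 0, 1), (3, 3, 2)]    # toy instance: (x, y, side)

def overlaps(a, b):
    (ax, ay, asz), (bx, by, bsz) = a, b
    return ax < bx + bsz and bx < ax + asz and ay < by + bsz and by < ay + asz

prob = LpProblem("max_nonoverlapping_squares", LpMaximize)
pick = [LpVariable(f"pick_{i}", cat="Binary") for i in range(len(squares))]

prob += lpSum(pick)                                        # maximise the number of squares kept
for i in range(len(squares)):
    for j in range(i + 1, len(squares)):
        if overlaps(squares[i], squares[j]):
            prob += pick[i] + pick[j] <= 1                 # can't keep two overlapping squares

prob.solve(PULP_CBC_CMD(msg=False))
print([i for i, v in enumerate(pick) if v.value() == 1])   # indices of a maximum subset
```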
We know that Maximum Independent Set (MIS) is hard to approximate within a factor of $n^{1-\epsilon}$ for any $\epsilon > 0$ unless P = NP. What are some special classes of graphs for which better approximation algorithms are known? What are the graphs for which polynomial-time algorithms are known? I know for perfect graphs this is known, but are there other interesting classes of graphs?
There is a truly awesome list of all known graph classes that have some nontrivial algorithms for MIS: [see this entry](http://www.graphclasses.org/classes/problem_Independent_set.html) in the graph classes website.
In a recitation video for [MIT OCW 6.006](http://www.youtube.com/watch?feature=player_embedded&v=P7frcB_-g4w) at 43:30, given an $m \times n$ matrix $A$ with $n$ columns and $m$ rows, the 2-D peak-finding algorithm, where a peak is any value greater than or equal to its adjacent neighbors, was described as:

*Note: If there is confusion in describing columns via $n$, I apologize, but this is how the recitation video describes it and I tried to be consistent with the video. It confused me very much.*

> 1. Pick the middle column $n/2$ // *Has complexity $\Theta(1)$*
> 2. Find the max value of column $n/2$ // *Has complexity $\Theta(m)$ because there are $m$ rows in a column*
> 3. Check the horizontal row neighbors of the max value; if it is greater, then a peak has been found, otherwise recurse with $T(n/2, m)$ // *Has complexity $T(n/2,m)$*

Then to evaluate the recursion, the recitation instructor says

$T(1,m) = \Theta(m)$ because it finds the max value

$$ T(n,m) = \Theta(1) + \Theta(m) + T(n/2, m) \tag{E1}$$

I understand the next part, at 52:09 in the video, where he says to treat $m$ like a constant, since the number of rows never changes. But I don't understand how that leads to the following product:

$$ T(n,m) = \Theta(m) \cdot \Theta(\log n) \tag{E2}$$

I think that, since $m$ is treated like a constant, it is thus treated like $\Theta(1)$ and eliminated in $(E1)$ above. But I'm having a hard time making the jump to $(E2)$. Is this because we are now considering the case of $T(n/2)$ with a constant $m$? I think I can "see" that the overall idea is that a $\Theta(\log n)$ operation is performed, at worst, for $m$ rows. What I'm trying to figure out is how to describe the jump from $(E1)$ to $(E2)$ to someone else, i.e. gain real understanding.
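One way to make the jump concrete (my own unrolling of the recurrence, treating $m$ as fixed and absorbing the $\Theta(1)$ term into $\Theta(m)$) is to expand $(E1)$ level by level: the recursion halves $n$ each time, so there are $\Theta(\log n)$ levels, and each level costs $\Theta(m)$:

$$\begin{align} T(n,m) &= \Theta(m) + T(n/2, m) \\ &= \Theta(m) + \Theta(m) + T(n/4, m) \\ &= \underbrace{\Theta(m) + \Theta(m) + \dots + \Theta(m)}\_{\Theta(\log n)\ \text{terms}} + T(1, m) \\ &= \Theta(m) \cdot \Theta(\log n). \end{align}$$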
Context: SysID and controls guy who got into ML. I think [user110686's answer](https://cs.stackexchange.com/a/43355/) does a fair job of explaining some differences. SysID is **necessarily** about dynamic models from input/output data, whereas ML covers a wider class of problems. But the biggest difference I see has to do with (a) memory (number of parameters) and (b) the end use of the "learned" model. System identification is very much a signal-processing approach, considering frequency-domain representations, time-frequency analysis, etc. Some ML folks call this "feature engineering".

**(a) Memory:** SysID became prominent long before ML as a research field took shape. Hence statistics and signal processing were the primary basis for the theoretical foundations, and computation was scarce. People therefore worked with very simple classes of models (bias-variance tradeoff) with very few parameters. We are talking about at most 30-40 parameters, and mostly linear models, even in cases where people clearly know the problem is non-linear. However, computation is now very cheap, but SysID hasn't come out of its shell yet. People should start realizing that we have much better sensors now and can easily estimate thousands of parameters with very rich model sets. Some researchers have attempted to use neural networks for SysID, but many seem reluctant to accept these as "mainstream" since there aren't many theoretical guarantees. For anything not linear, it's going to be hard getting guarantees anyway, so I am curious how the field will proceed.

**(b) End use of the learned model:** Now this is one thing SysID got very right, but many ML algorithms fail to capture. It is important to recognize that for the target applications, you are necessarily building models that can be used effectively for **online optimization**. These models will be used to propagate any control decisions made, and when setting this up as an optimal control problem, the models become constraints. So using an extremely complicated model structure makes the online optimization that much more difficult. Also note that these online decisions are made on the scale of seconds or less. An alternative proposal is to directly learn the value function in an off-policy manner for optimal control. This is basically reinforcement learning, and I think there is good synergy between SysID and RL.
I want to do a systematic review and meta-analysis, but I am facing a difficulty: how do I handle adjusted odds ratios (AOR) when studies classify the explanatory variables differently? For example, one study may report the AOR for education in three categories — "No education" as the reference category, "Primary education" with an AOR (CI), and "Secondary education" with an AOR (CI) — while another study might use "Primary education and below" as the reference and report "Secondary and above" with an AOR (CI). So the first study reports two adjusted odds ratios and the second only one. Is it possible to use the crude odds ratio, calculated manually, for the meta-analysis?
The gold standard when you have studies which have used different levels of the same explanatory variable is network meta-analysis (also known as multiple treatment comparison). If you do not want to delve into that level of complexity you have a number of more or less satisfactory ways of going forward. You could just select two levels and use them throughout while just ignoring all the other results. If you have all the raw frequencies you could, as you say, recompute unadjusted odds ratios for one comparison (say none versus more than none). If the unadjusted and adjusted ratios are close this might be convincing enough for your audience.
I have an idea of how a C program is turned into machine code by the compiler. I also know how the processor processes the instructions (<https://www.youtube.com/watch?v=cNN_tTXABUA> has a good introduction). But what I don't understand is how an operating system (usually written in C or some other low-level language) can run programs that are themselves written in C (or another low-level language). Does the OS read the code and then process it with some internal functions, or does it just load the machine code and hand it to the processor, which does the rest? In the second case, how does the OS control which instructions are allowed to be executed and which are not? (For example, I might write a program with an instruction that jumps to a forbidden part of RAM; how does the OS prevent that from happening?) I don't expect to understand it fully from this post's answers, but if you could give me an idea and then some books or tags to search, I'd be happy!
A very practical way to look at this is from the point of view of the shell you'd use after you log in to Linux. The shell itself will probably be written in C. The shell will call fork() to divide itself into two processes, and then call exec() from the second of those processes to replace the second process's code with that of the program you asked the shell to run. The shell's calls to fork() or exec() go to functions in the C runtime library, which provides all the functions specified as library functions in the C specification and/or POSIX specification.

In general, fork(), exec() and all the other C library functions that rely on the OS ultimately call syscall(), which transitions into the kernel. The kernel has its own memory layout but can carefully access memory on behalf of a user process. The parameters to syscall() specify which OS function should be run and are a version of the parameters passed to the C runtime function. The syscall() piece is optimized in some Linux implementations, but in others it is a software interrupt or trap, and the optimization implemented on, for example, the x86 architecture is pretty much just a speedier way of doing the exact same thing. This interrupt or trap causes the kernel to execute from some fixed address within the kernel code, which translates the parameters that syscall() was passed into a call to the appropriate C function in the kernel itself. If the parameters to the C runtime library function involve pointers to strings or buffers, those are passed across the syscall() call, and then the function in the kernel that implements the OS function will probably need to obtain versions of those pointers that are valid in the context of the kernel.

Your question also asks about memory access protection. Very generally speaking, a user process or the kernel itself has access to defined regions of memory, which are described in tables that the kernel programs into the CPU. A table entry in the CPU specifies a range of memory in terms of virtual addresses, the corresponding physical addresses, and whether that range should be accessible to the user program or just to the kernel. This table is known as the GDT on the x86 architecture. Additional features of such an entry can include whether reading, writing or executing is allowed for that chunk of memory. But when the CPU is running the user process, the only valid memory accesses are the ones specified for the user process. You have multiple user processes running and, to simplify, when the kernel switches from running one user process to running another, it switches out the old user process's memory map and switches in the new one's, so that the processes have no way to see each other's memory. If a process tries to violate the rules of the memory map, a trap handler runs in the kernel.

Within the memory-mapping system I described there are optimizations. One allows some sharing of code when programs are linked against the same dynamic shared object (libc.so being an important example of one that pays to share), among other things. Some of the interface between the CPU itself and the kernel involves special functions executing. These functions can be written in C as long as they are tagged with special attributes that cause the C compiler to generate the entry code of the function in a way that is compatible with how the CPU will call the function.
GCC supports the attribute "interrupt" as one way to code such functions but the details depend on the architecture.
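To make the fork()/exec() pattern concrete, here is a minimal sketch using Python's os module, which wraps the same underlying system calls the shell's C code would make (Unix only; the command run is just an example):

```
# A minimal fork()/exec() sketch (Unix only); Python's os module wraps the same syscalls.
import os

pid = os.fork()                      # duplicate the current process
if pid == 0:
    # Child: replace this process image with a new program (here, ls -l).
    os.execvp("ls", ["ls", "-l"])    # on success, this call never returns
else:
    # Parent (the "shell"): wait for the child, just as a shell waits for a command.
    _, status = os.waitpid(pid, 0)
    print("child exited with status", os.WEXITSTATUS(status))
```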
When considering some cdf $F\_X(x)$ — e.g. from [here](https://stats.stackexchange.com/questions/83538/distribution-function-applied-to-itself) — I’m having a hard time trying to understand what $F\_X(X)$ really means. Expanding gives $P(X \leq X)$, which at first glance should always equal $1$; the only other possibility I can think of is $1/2$, assuming that the two $X$’s are actually different (bad notation?). But then we’d have $W = F\_X(X) = 1$ (or $=1/2$) from the question linked above, which makes no sense. So how can we understand the quantity $F\_X(X)$?
This is a case where you are confusing yourself with incorrect use of notation. The function $F\_X: \mathbb{R} \rightarrow [0,1]$ describes the distribution of the random variable $X$, but it does not use this random variable as an implicit or explicit argument. From the probabilistic definition of the CDF, for all $x \in \mathbb{R}$ we can validly say that:

$$F\_X(x) = \mathbb{P}(X \leqslant x) \ \quad \quad \quad (\text{Valid equation}) \quad \ \ \ $$

However, it is not valid to bring in the random variable $X$ both as the descriptor in the probabilistic definition of the function and as its argument value. Doing so leads you to the erroneous equation:

$$F\_X(X) = \mathbb{P}(X \leqslant X) \quad \quad \quad (\text{Erroneous equation})$$

You are correct that $\mathbb{P}(X \leqslant X) = 1$,$^\dagger$ but you are incorrect to equate this expression to the CDF evaluated using the random variable as its input. A better way to proceed is to note that if we let $Y = F\_X(X)$ then for all $0 \leqslant y \leqslant 1$ we have:

$$\begin{align} F\_Y(y) &= \mathbb{P}(Y \leqslant y) \\[6pt] &= \mathbb{P}(F\_X(X) \leqslant y) \\[6pt] &= \mathbb{P}(X \leqslant F\_X^{-1}(y)) \\[6pt] &= F\_X(F\_X^{-1}(y)) \\[6pt] &= y, \\[6pt] \end{align}$$

which is the CDF of the [continuous uniform distribution on the unit interval](https://en.wikipedia.org/wiki/Continuous_uniform_distribution). (In this working I have assumed that $X$ is continuous, so that its distribution function is invertible. If $X$ is not continuous then the resulting distribution is not uniform. In this case the distribution has one or more discrete "lumps" corresponding to the discrete values of the distribution.)

---

$^\dagger$ Indeed, the reflexivity property of the [total order](https://en.wikipedia.org/wiki/Total_order) $\leqslant$ means that the statement $X \leqslant X$ is a tautology. This is an even stronger finding than saying that $\mathbb{P}(X \leqslant X) = 1$.
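A minimal simulation of this result, assuming a standard normal $X$ as an illustrative continuous distribution:

```
# A minimal simulation: F_X(X) is uniform on [0, 1] when X is continuous.
from scipy.stats import norm, kstest

x = norm.rvs(size=100_000, random_state=0)   # draws of a continuous X (standard normal)
u = norm.cdf(x)                              # Y = F_X(X)

print(u.mean(), u.var())                     # ≈ 0.5 and ≈ 1/12, matching Uniform(0, 1)
print(kstest(u, "uniform"))                  # KS test does not reject uniformity
```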
In support vector machines, why is obtaining the optimal separating hyperplane a quadratic programming problem rather than a linear programming problem? The book references I have found simply choose the quadratic formulation; my question is why.
In order to find an optimal separating hyperplane, the norm of the weight vector $||\overline{w}||$ should be minimized, subject to the constraints $y\_i(\overline{w} \cdot \varphi(x\_i) + b) ≥ 1 − \xi\_i$, $\xi\_i \geqslant 0, i=1,\dots, l$ (see [here](https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf)). While it is technically possible to minimize the $\ell^1$-norm $||\overline{w}|| = \sum\_i^n |w\_i|$ (i.e. to solve a linear programming problem) instead of the $\ell^2$-norm (a quadratic problem), the $\ell^1$ approach has [a number of disadvantages](https://en.wikipedia.org/wiki/Least_absolute_deviations#Contrasting_least_squares_with_least_absolute_deviations) compared to the $\ell^2$ one: (a) the solutions of the $\ell^1$-norm minimization problem lack stability, (b) the solution isn't unique, and (c) it's harder to provide a computationally efficient method for $\ell^1$-minimization than for $\ell^2$-minimization. On the other hand, while the solution of the $\ell^1$-minimization problem is more robust to outliers than that of the corresponding $\ell^2$ problem, this doesn't play a great role specifically for SVMs, since there's a very small chance for an outlier to become a support vector.
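For reference, the usual soft-margin primal (a standard formulation consistent with the constraints quoted above, with $C$ the usual penalty parameter) makes the quadratic structure explicit:

$$\min\_{\overline{w},\, b,\, \xi} \;\; \frac{1}{2}\|\overline{w}\|\_2^2 + C\sum\_{i=1}^{l}\xi\_i \quad \text{s.t.} \quad y\_i(\overline{w}\cdot\varphi(x\_i)+b) \geqslant 1-\xi\_i, \;\; \xi\_i \geqslant 0, \; i=1,\dots,l.$$

The objective is quadratic in $\overline{w}$ while every constraint is linear, which is exactly the definition of a quadratic program; replacing $\|\overline{w}\|\_2^2$ with $\|\overline{w}\|\_1$ would turn it into a linear program, with the trade-offs listed above.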
When training a model, it is possible to fit the TF-IDF on the corpus of only the training set, or on the test set as well. It seems not to make sense to include the test corpus when training the model, though since TF-IDF is unsupervised, it is also possible to fit it on the whole corpus. Which is better?
Usually, as this site's name suggests, you'd want to separate your train, cross-validation and test datasets. As @Alexey Grigorev mentioned, the main concern is having some certainty that your model can generalize to some **unseen** dataset.

In a more intuitive way, you'd want your model to be able to **grasp** the relations between each row's features and each row's prediction, and to apply them later to one or more different, unseen rows. These relations are at the row level, but they are learned in depth by looking at the **entire** training data. The challenge of generalizing is, then, making sure the model grasps a **formula**, rather than depending (over-fitting) on the specific set of training values.

I'd thus distinguish between two TF-IDF scenarios, depending on **how you consider your corpus**:

**1. The corpus is at the row level**

We have one or more text features that we'd like to TF-IDF in order to discern some term frequencies for **this row**. Usually it'd be a large text field, important by "itself", like an additional document describing a house-buying contract in a house-sale dataset. In this case the text features should be processed at the row level, like all the other features.

**2. The corpus is at the dataset level**

In addition to having a row context, there **is** meaning to the text feature of each row in the context of the **entire** dataset. Usually a smaller text field (like a sentence). The TF-IDF idea here might be calculating some "rareness" of words, but in a larger context. The larger context might be the entire text column from the train and even the test datasets, since the more corpus knowledge we have, the better we can ascertain the rareness. I'd even say you could use the text from the unseen dataset, or even an outer corpus. The TF-IDF here helps you do feature engineering at the row level, using outside (larger, lookup-table-like) knowledge.

Take a look at [HashingVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html "HashingVectorizer"), a "stateless" vectorizer, suitable for a mutable corpus.
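For the common train-only case, here is a minimal scikit-learn sketch of the usual "fit on train, transform both" pattern (the toy texts are purely illustrative):

```
# A minimal sketch: fit the TF-IDF vocabulary and IDF weights on the training corpus only.
from sklearn.feature_extraction.text import TfidfVectorizer

train_texts = ["the cat sat on the mat", "dogs chase cats"]   # toy training corpus
test_texts = ["the dog sat"]                                  # unseen rows

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)   # vocabulary and IDF learned from train only
X_test = vectorizer.transform(test_texts)         # test reuses them; unseen words are ignored

print(X_train.shape, X_test.shape)                # same number of columns for both
```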
This has never been made clear to me before, so I would love some help. Let's say I have 3 experimental groups of animals (A, B, C): A is the baseline control, B is treatment X, and C is treatment Y. I hypothesise that either treatment X or Y will have an effect. I collect data and find that ANOVA shows no group effects; however, if I just look at A vs the B treatment with a t-test, the result is highly significant. Now here is my question: Is it acceptable to exclude group C from the analysis and conclude that the B treatment has a real effect? If not, why? I understand the issue of type 1 errors from multiple testing on the same samples, but in this case these are independent groups, so why not just remove one from the hypothesis test? They are biologically independent groups, so isn't it true they are effectively like 2 different experiments (A vs B) and (A vs C)?

---

Thanks for the help everyone! I think I understand: So the issue with multiple testing is that for each sample A, B, C, the test states there is a 5% chance of seeing a statistical difference by chance. So we can't do multiple t-tests using sample A, for example, because we are basically multiplying this probability for sample A, therefore increasing the false positive error rate. Is this the idea? I'm trying to get a bit of a grip on how the mathematics of multiplying probabilities relates to the biology. So if we wanted to do several t-tests we would need several groups (e.g., A1 B1, A2 B2), is that right?
Here is an example in which a t test distinguishes between A and B, but a one-way ANOVA does not find any significant differences. The trouble is that group C has a large variance, which inflates the error term, preventing the ANOVA from finding differences.

Descriptive Statistics: A, B, C

```
Variable   N    Mean  SE Mean  StDev  Minimum      Q1  Median      Q3  Maximum
A         10  114.70     4.14  13.09    95.00  103.00  114.00  126.00   136.00
B         10  101.70     4.14  13.09    82.00   90.00  101.00  113.00   123.00
C         10   104.1     10.8   34.2     37.0    79.5   119.0   129.3    142.0
```

[![enter image description here](https://i.stack.imgur.com/lRvSi.png)](https://i.stack.imgur.com/lRvSi.png)

A two-sample t test finds a significant difference, at the 5% level, between A and B.

```
Pooled Two-sample T for A vs B

    N   Mean  StDev  SE Mean
A  10  114.7   13.1      4.1
B  10  101.7   13.1      4.1

Difference = μ (A) - μ (B)
T-Test of difference = 0 (vs ≠): T-Value = 2.22  P-Value = 0.039  DF = 18
```

However, a one-way ANOVA finds no differences. (Because no differences are found it is not appropriate to do *post hoc* comparisons. However, if you do Tukey's HSD procedure anyhow, it finds no significant differences with a family error rate of 5%.)

```
One-way ANOVA: A, B, C

Method

Null hypothesis         All means are equal
Alternative hypothesis  At least one mean is different

Equal variances were assumed for the analysis.

Analysis of Variance

Source  DF       SS     MS  F-Value  P-Value
Factor   2    957.1  478.5     0.95    0.399
Error   27  13607.1  504.0
Total   29  14564.2
```

*Notes:* (1) A Welch ANOVA (not assuming equal variances) also finds no significant differences. The effective Error DF is reduced to about 17 on account of heteroscedasticity. (2) Output is from Minitab, abridged for relevance. (3) Fake normal data with respective population means 115, 100, 105 and population standard deviations 15, 15, 25. So there really are differences, between all pairs of groups, which the ANOVA does not have the power to detect.
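If you want to reproduce this phenomenon without Minitab, here is a rough simulation sketch in Python using the population parameters from note (3); the exact p-values will of course vary from run to run and will differ from the output above:

```
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
A = rng.normal(115, 15, 10)
B = rng.normal(100, 15, 10)
C = rng.normal(105, 25, 10)   # the large SD here is what inflates the ANOVA error term

print(stats.ttest_ind(A, B))     # pooled two-sample t test of A vs B
print(stats.f_oneway(A, B, C))   # one-way ANOVA over all three groups
```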
I have researched multiple related questions ([here](https://stats.stackexchange.com/questions/89531/forecasting-daily-data-with-trend-yearly-day-of-the-week-and-moving-holiday-e), [here](https://stats.stackexchange.com/questions/144509/forecast-daily-data-with-weekly-and-monthly-seasonality-using-exponential-smooth/144569#144569)) but they lack detailed context and solutions. My goal is to improve my daily sales forecast accuracy after having incorporated a simple holiday dummy for lunar new year.

```
y <- msts(train$Sales, seasonal.periods=c(7,365.25))

# precomputed optimal fourier terms
bestfit <- list(i=3, j=20)
z <- fourier(y, K=c(bestfit$i, bestfit$j))
fit <- auto.arima(y, xreg=cbind(z,train_df$cny), seasonal=FALSE)

# forecasting
horizon <- length(test_ts)
zf <- fourier(y, K=c(bestfit$i, bestfit$j), h=horizon)
fc <- forecast(fit, xreg=cbind(zf,test_df$cny), h=horizon)

plot(fc, include=365, type="l", xlab="Days", ylab="Sales", main="Comparing arimax forecast and actuals")
lines(test_ts, col='green')
```

[![ARIMAX forecast](https://i.stack.imgur.com/4YFPe.png)](https://i.stack.imgur.com/4YFPe.png)

However, this does not reflect the lagged effect of the holiday.

[![Impact on sales around lunar new year](https://i.stack.imgur.com/2x5Ae.png)](https://i.stack.imgur.com/2x5Ae.png)

One approach would be to model the effects with a continuous variable (fitted to the effect curve above), but I would like to hear other suggestions.
There's several nice answers here already, but I think it's still pertinent to add another viewpoint, from the excellent paper > > Good Colour Maps: How to Design Them. Peter Kovesi. [arXiv:1509.03700](https://arxiv.org/abs/1509.03700) (2015). Software available [here](https://peterkovesi.com/projects/colourmaps/index.html). > > > which lays out in a very clear fashion the principles of colour-map design, and provides a really nice tool to analyze them for perceptual uniformity: > > [![enter image description here](https://i.stack.imgur.com/8HkpT.png)](https://i.stack.imgur.com/8HkpT.png) > > > This 'washboard' plot has a steady ramp from zero to one going left to right along the bottom, and the top of the plot has a sinusoidal modulation of uniform amplitude. For a properly-designed color map, all of the fringes at the top should show identical, or at least similar, contrast. However, when you put `jet` to the test, it is immediately obvious that this is not the case: > > [![enter image description here](https://i.stack.imgur.com/IMfsD.png)](https://i.stack.imgur.com/IMfsD.png) > > > In other words, there are a ton of fringes, in the red and particularly the green stretches of `jet`, that get completely nuked out and become completely invisible, because the colour map simply does not have any contrast there. When you apply this to your data, the contrast in those regions will go the same way as the fringes. Similarly, the sharp contrasts along the bottom, on what should be a smooth linear scale, represent places where the map is introducing features that are not really present in the data.
Has there been a successful implementation of Nash equilibrium in big data problems, like suggesting the best buy in a stock market, in traffic monitoring systems, or in crowd control systems? All the above-mentioned scenarios have competitive environments in which one needs to get the best possible solution, which seems well suited to Nash-equilibrium analysis.
This question isn't terribly clear. Data analysis and strategic modeling (game theory) are different tasks. Nash equilibrium is a way of understanding the incentives they have by assuming a set of players with assumed utility function and making **deductive** inferences about what they ought to do to maximize those utility functions given their interaction. Data analysis is an **inductive** process. There are a number of ways game theory and data analysis might interact, here are the easy top two: 1. Someone might use data to infer players' utility functions (I'm sure this exists in econometrics-land somewhere; also, political scientists have a technique called "ideal point estimation," to infer political preferences from voting behavior---which you can easily google to learn more); 2. Someone might use game theory to generate behavioral predictions which are testable by data. Thinking about the specific kinds of cases you mention, the obvious application would be in the stock market one. Suppose you have a ML model that can reliably predict the market behavior of other people at time T from a given feature set. Then the consumer of the ML model might have an optimal purchase at T-1, and finding that optimal purchase is going to be strategic. But combining the two approaches might just break the ML. This is really interesting to think about... musing out loud... Consider the simple case of a two-player market in one stock. Player 1 wants to buy at T-1 if player 2 will be buying at T (because the price will go up); player 1 wants to sell at T-1 if player 2 will sell at T (because the price will go down). The naive approach for player 1 is "use my ML model to predict what player 2 will do, then do it first at T-1." But, of course, P1's behavior at T-1 is itself observable by P2, and changes P2's behavior (the price has gone up); moreover, by definition P1's behavior at T-1 can't be a feature of the ML model used to predict P2's behavior at T, because it's behavior that is chosen on the basis of the ML prediction. All sorts of fun puzzles begin here, but none of them look real good...
Cellular automata are grid-based systems with progression rules, some of which are Turing-complete, on which we can encode arbitrarily complex structures. For example, this is a "glider gun" in Conway's Game of Life:

![gun](https://upload.wikimedia.org/wikipedia/commons/e/e5/Gospers_glider_gun.gif)

Due to Turing-completeness, with enough effort, one could encode fully-featured structures, such as machines that claimed territories on the grid and defended them against intruders. As much as that is possible by human design, complex structures will not emerge from the system on their own. That is, you can't start a Game of Life with random initial conditions, leave it running for years and hope that, when you come back, you'll be able to observe gliders and other complex structures that emerged naturally.

**My question is: is there any known computing system in which complex structures emerge naturally from just running it for long enough?**
It seems very hard to define the phrase "internal structures that defend their own existences" in a rigorous or precise way, so it is not clear that the question is well-defined. However, some very simple systems can admit behavior that might be described in these terms. For instance, consider [Conway's game of life](https://en.wikipedia.org/wiki/Conway's_Game_of_Life). It is known to allow for [replicators](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#Self-replication): i.e., there are self-replicating patterns which create a complete copy of themselves. Replication can be thought of as a "strategy" for "defending your own existence"; if you spawn many copies of yourself, then even if someone messes up one of the copies, the other copies will still exist. So, to the extent that the phrase "internal structures that defend their own existence" is well-defined, self-replicators in Conway's game of life might be considered a form of internal structure that will defend its own existence.

Conway's game of life is very simple. Another very simple example is [Rule 110](https://en.wikipedia.org/wiki/Rule_110), which is an extremely simple cellular automaton. It is known that Rule 110 is Turing complete, which means that it is possible to simulate Conway's game of life in a Rule 110 cellular automaton, which means that a Rule 110 cellular automaton can be argued to allow for "internal structures that defend their own existence". It's probably going to be hard to find a system that's much simpler than a Rule 110 cellular automaton.

---

In general, we should probably be careful about anthropomorphizing the behavior of computational systems like this. Just because they behave in ways that resemble the behavior of people or animals doesn't mean it's necessarily going to be super-meaningful to describe them as acting with a 'purpose' (like self-defense). Ascribing human motivations or emotions to them runs the risk of misleading our intuition.
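As an aside, to get a feel for just how simple Rule 110 is, here is a minimal sketch of one way to iterate it (a finite row with fixed zero boundaries, which is a simplifying assumption; the automaton proper lives on an infinite row):

```
# Rule 110 lookup: neighbourhood (left, centre, right) -> next state
RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
           (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(row):
    padded = [0] + row + [0]                       # fixed zero boundary cells
    return [RULE110[tuple(padded[i-1:i+2])] for i in range(1, len(padded) - 1)]

row = [0]*30 + [1]                                  # single live cell on the right
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```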
Let's say I have $N$ covariates in my regression model, and they explain 95% of the variation of the target set, i.e. $r^2=0.95$. Suppose there is multicollinearity among these covariates and PCA is performed to reduce the dimensionality. If the principal components explain, say, 80% of the variation (as opposed to 95%), then I have incurred some loss in the accuracy of my model. Effectively, if PCA solves the issue of multicollinearity at the cost of accuracy, is there any benefit to it, other than the fact that it can speed up model training and can reduce collinear covariates into statistically independent and robust variables?
Your question is implicitly assuming that reducing explained variation is necessarily a bad thing. Recall that $R^2$ is defined as: $$ R^2 = 1 - \frac{SS\_{res}}{SS\_{tot}} $$ where $SS\_{res} = \sum\_{i}{(y\_i - \hat{y}\_i)^2}$ is a residual sum of squares and $SS\_{tot} = \sum\_{i}{(y\_i - \bar{y})^2}$ is a total sum of squares. You can *easily* get an $R^2 = 1$ (i.e. $SS\_{res} = 0$) by fitting a curve that passes through all of the (training) points (though this, in general, requires a more flexible model than simple linear regression, as noted by Eric), which is a perfect example of [overfitting](https://en.wikipedia.org/wiki/Overfitting). So reducing explained variation isn't necessarily bad, as it could result in better performance on unseen (test) data. PCA can be a good preprocessing technique if there are reasons to believe that the dataset has an intrinsic lower-dimensional structure.
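A minimal scikit-learn sketch of this trade-off (synthetic data, arbitrary parameter choices); the point is only that a lower training $R^2$ after PCA does not automatically mean worse generalisation, which should be judged on held-out data:

```
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p = 200, 30
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)      # two nearly collinear covariates
y = X[:, 0] + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)                              # full covariate set
pcr = make_pipeline(PCA(n_components=10), LinearRegression()).fit(X_tr, y_tr)   # principal-component regression

print("OLS  train/test R^2:", ols.score(X_tr, y_tr), ols.score(X_te, y_te))
print("PCR  train/test R^2:", pcr.score(X_tr, y_tr), pcr.score(X_te, y_te))
```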
I'm looking for an algorithm to find the longest path between two nodes in an undirected (bidirectional), unweighted, cyclic graph. The path must not have repeated vertices (otherwise the path would be infinite, of course). Would someone point me to a good one (a site, or an explanation)? The graph will be sparse. Thanks for any help!
You can solve your problem in $O(n^2 2^n)$ on a graph with $n$ vertices by dynamic programming. Let $G=(V,E)$ be an undirected graph with edge weights $d\_{uv}$ (for your unweighted graph, just set $d\_{uv}=1$ for every edge). Let $L(v,S)$ be the length of the longest path from some fixed vertex $s$ to vertex $v$ which visits no vertex in $S$ (apart from $v$ itself). $L$ satisfies $$L(v,S)=\begin{cases} \max\_{w\in N(v)\setminus S} d\_{vw}+L(w,S\cup\{w\}) & v\neq s\\ 0 & \text{otherwise} \end{cases}\mathrm{,}$$ where $N(v)$ is the set of $v$'s neighbors. (Define the empty $\max$ to be $-\infty$.) Then the longest path between $s$ and $v$ has length $L(v,\{v\})$.
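A direct transcription of this recurrence into memoised code might look as follows (a sketch; the adjacency structure and vertex labels are placeholders, and the set $S$ of forbidden vertices is carried around exactly as in the recurrence):

```
from functools import lru_cache

def longest_path(adj, s, t):
    """adj: dict mapping each vertex to a {neighbour: weight} dict."""
    @lru_cache(maxsize=None)
    def L(v, S):                          # S: frozenset of forbidden vertices
        if v == s:
            return 0
        best = float("-inf")              # the empty max is -infinity
        for w, d in adj[v].items():
            if w not in S:
                cand = L(w, S | {w})
                if cand != float("-inf"):
                    best = max(best, d + cand)
        return best
    return L(t, frozenset({t}))

adj = {0: {1: 1, 2: 1}, 1: {0: 1, 2: 1}, 2: {0: 1, 1: 1}}   # a triangle, unit weights
print(longest_path(adj, 0, 2))                               # 2: the path 0-1-2
```

Because the number of states is exponential in the number of vertices, this is only practical for small graphs, which is exactly what the $O(n^2 2^n)$ bound says.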
A unipathic graph is a directed graph such that there is at most one simple path from any one vertex to any other vertex. Unipathic graphs can have cycles. For example, a doubly linked list (not a circular one!) is a unipathic graph; if the list has $n$ elements, the graph has $n-1$ cycles of length 2, for a total of $2(n-1)$ edges.

What is the maximum number of edges in a unipathic graph with $n$ vertices? An asymptotic bound would do (e.g. $O(n)$ or $\Theta(n^2)$).

Inspired by [Find shortest paths in a weighed unipathic graph](https://cs.stackexchange.com/questions/625/find-shortest-paths-in-a-weighed-unipathic-graph); in [my proof](https://cs.stackexchange.com/questions/625/find-shortest-paths-in-a-weighed-unipathic-graph/679#679), I initially wanted to claim that the number of edges was $O(n)$ but then realized that bounding the number of cycles was sufficient.
A unipathic graph can have $\Theta(n^2)$ edges. There's a well-known kind of graph that's unipathic and has $n^2/4$ edges. > > Consider a complete bipartite graph, with oriented edges $\forall (i,j) \in [1,m]^2, a\_i \rightarrow b\_j$. This graph is unipathic and has no cycle: all its paths have length $1$. It has $2m$ vertices and $m^2$ edges. > > > (Follow-up question: is this ratio maximal? Probably not, but I don't have another example. This example is maximal in the sense that any one edge that you add between existing nodes will break the unipathic property.)
From [Wikipedia](http://en.wikipedia.org/wiki/Computational_complexity_theory) > > **a computational problem** is understood to be a task that is in principle amenable to being solved by a computer (i.e. the problem can > be stated by a set of mathematical instructions). Informally, a > computational problem consists of **problem instances** and **solutions** to > these problem instances. For example, primality testing is the problem > of determining whether a given number is prime or not. The instances > of this problem are natural numbers, and the solution to an instance > is yes or no based on whether the number is prime or not. > > > ... A key distinction between analysis of algorithms and computational > complexity theory is that the former is devoted to **analyzing the > amount of resources needed by a particular algorithm to solve a > problem**, whereas the latter asks a more general question about **all > possible algorithms that could be used to solve the same problem**. > > > So a problem can be solved by multiple algorithms. I was wondering if an algorithm can solve different problems, or can only solve one problem? Note that I distinguish a problem and its instances as in the quote.
I think the question is more philosophical than scientific, and indeed, as @raphael mentioned, the problem is in the definition of a "Problem". Algorithm, in the simplest way, is a function (a mapping). It gets an input (instance) $in \in I$ and gives an output $out \in O$. For any instance $in$ there is only a single output $out=\mathsf{ALG}(in)$. Therefore, the algorithm solves only a single problem — that is, it defines only a single mapping from $I$ to $O$. True, if you have two problems (mappings), $P\_1 : I\_1 \to O\_1$ and $P\_2: I\_2 \to O\_2$ we can construct a "single" algorithm that solves "both". It gets as an input the pair $(in,type)$. If $type=1$ it returns $P\_1(in)$ and otherwise it returns $P\_2(in)$. But again, this fancy algorithm can be seen as solving a **single** problem from the domain $(I\_1\cup I\_2) \times \{1,2\}$ into $O\_1 \cup O\_2$.
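A toy sketch of that last construction, with two made-up problems packed into a single mapping over pairs (in, type):

```
def solve_p1(x):      # some problem P1, e.g. "is x even?"
    return x % 2 == 0

def solve_p2(x):      # some other problem P2, e.g. "square x"
    return x * x

def combined(instance):
    x, problem_type = instance          # an instance of the combined problem is a pair (in, type)
    return solve_p1(x) if problem_type == 1 else solve_p2(x)

print(combined((6, 1)), combined((6, 2)))   # True 36
```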
Different software implementations are available for the **lasso**. I know the Bayesian approach vs the frequentist approach has been discussed a lot in different forums. My question is very specific to the lasso - ***What are the differences or advantages of the Bayesian lasso vs the regular lasso***?

Here are two examples of implementation in the package:

```
# just example data
set.seed(1233)
X <- scale(matrix(rnorm(30),ncol=3))[,]
set.seed(12333)
Y <- matrix(rnorm(10, X%*%matrix(c(-0.2,0.5,1.5),ncol=1), sd=0.8),ncol=1)

require(monomvn)

## Lasso regression
reg.las <- regress(X, Y, method="lasso")

## Bayesian Lasso regression
reg.blas <- blasso(X, Y)
```

So when should I go for one or the other method? Or are they the same?
The standard lasso uses an [L1 regularisation penalty](http://www.youtube.com/watch?v=PKXpaLUigA8) to achieve sparsity in regression. Note that this is also known as Basis Pursuit (Chen & Donoho, 1994).

In the Bayesian framework, the choice of regulariser is analogous to the choice of prior over the weights. If a Gaussian prior is used, then the Maximum a Posteriori (MAP) solution will be the same as if an L2 penalty was used. Whilst not directly equivalent, the Laplace prior (which is sharply peaked around zero, unlike the Gaussian, which is smooth around zero) produces the same shrinkage effect as the L1 penalty. Park & Casella (2008) describe the Bayesian Lasso.

In fact, when you place a Laplace prior over the parameters, the MAP solution should be identical (not merely similar) to regularization with the L1 penalty, and the Laplace prior will produce a shrinkage effect identical to that of the L1 penalty. However, due to either approximations in the Bayesian inference procedure, or other numerical issues, solutions may not actually be identical.

In most cases, the results produced by both methods will be very similar. Depending on the optimisation method and whether approximations are used, the standard lasso will probably be more efficient to compute than the Bayesian version. The Bayesian approach automatically produces interval estimates for all of the parameters, including the error variance, if these are required.

Chen, S., & Donoho, D. (1994). Basis pursuit. In *Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers* (Vol. 1, pp. 41-44). IEEE. <https://doi.org/10.1109/ACSSC.1994.471413>

Park, T., & Casella, G. (2008). The Bayesian lasso. *Journal of the American Statistical Association, 103*(482), 681-686. <https://doi.org/10.1198/016214508000000337>
I have data from an experiment that I analyzed using t-tests. The dependent variable is interval scaled and the data are either unpaired (i.e., 2 groups) or paired (i.e., within-subjects). E.g. (within subjects): ``` x1 <- c(99, 99.5, 65, 100, 99, 99.5, 99, 99.5, 99.5, 57, 100, 99.5, 99.5, 99, 99, 99.5, 89.5, 99.5, 100, 99.5) y1 <- c(99, 99.5, 99.5, 0, 50, 100, 99.5, 99.5, 0, 99.5, 99.5, 90, 80, 0, 99, 0, 74.5, 0, 100, 49.5) ``` However, the data are not normal so one reviewer asked us to use something other than the t-test. However, as one can easily see, the data are not only not normally distributed, but the distributions are not equal between conditions: ![alt text](https://i.stack.imgur.com/5y0kx.png) Therefore, the usual nonparametric tests, the Mann-Whitney-U-Test (unpaired) and the Wilcoxon Test (paired), cannot be used as they require equal distributions between conditions. Hence, I decided that some resampling or permutation test would be best. **Now, I am looking for an R implementation of a permutation-based equivalent of the t-test, or any other advice on what to do with the data.** I know that there are some R-packages that can do this for me (e.g., coin, perm, exactRankTest, etc.), but I don't know which one to pick. So, if somebody with some experience using these tests could give me a kick-start, that would be ubercool. **UPDATE:** It would be ideal if you could provide an example of how to report the results from this test.
As this question popped up again, I may add another answer inspired by a recent [blog post](http://statmethods.wordpress.com/2012/05/21/permutation-tests-in-r/) via R-Bloggers from Robert Kabacoff, the author of [Quick-R](http://www.statmethods.net/) and [R in Action](http://www.amazon.de/Action-Data-Analysis-Graphics/dp/1935182390), using the `lmPerm` package.

However, this method produces sharply contrasting (and very unstable) results compared to the one produced by the `coin` package in the answer of @caracal (the p-value of the within-subjects analysis is `0.008`). The analysis takes the data preparation from @caracal's answer as well:

```
x1 <- c(99, 99.5, 65, 100, 99, 99.5, 99, 99.5, 99.5, 57, 100, 99.5, 99.5, 99, 99, 99.5, 89.5, 99.5, 100, 99.5)
y1 <- c(99, 99.5, 99.5, 0, 50, 100, 99.5, 99.5, 0, 99.5, 99.5, 90, 80, 0, 99, 0, 74.5, 0, 100, 49.5)

DV <- c(x1, y1)
IV <- factor(rep(c("A", "B"), c(length(x1), length(y1))))
id <- factor(rep(1:length(x1), 2))

library(lmPerm)

summary(aovp( DV ~ IV + Error(id)))
```

produces:

```
> summary(aovp( DV ~ IV + Error(id)))
[1] "Settings:  unique SS "

Error: id
Component 1 :
          Df R Sum Sq R Mean Sq
Residuals 19    15946       839

Error: Within
Component 1 :
          Df R Sum Sq R Mean Sq Iter Pr(Prob)
IV         1     7924      7924 1004    0.091 .
Residuals 19    21124      1112
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```

If you run this multiple times, the p-value jumps around between ~.05 and ~.1.

Although this is an answer to the question, allow me to pose a question at the end (I can move this to a new question if desired): any ideas of why this analysis is so unstable and produces p-values so divergent from the coin analysis? Did I do something wrong?
How would you know if your (high dimensional) data exhibits enough clustering so that the results from k-means or another clustering algorithm are actually meaningful?

For the k-means algorithm in particular, how much of a reduction in within-cluster variance should there be for the actual clustering results to be meaningful (and not spurious)?

Should clustering be apparent when a dimensionally-reduced form of the data is plotted, and are the results from k-means (or other methods) meaningless if the clustering cannot be visualized?
About k-means specifically, you can use the Gap statistics. Basically, the idea is to compute a goodness of clustering measure based on average dispersion compared to a reference distribution for an increasing number of clusters. More information can be found in the original paper: > > Tibshirani, R., Walther, G., and > Hastie, T. (2001). [Estimating the > numbers of clusters in a data set via > the gap statistic](http://www.stanford.edu/~hastie/Papers/gap.pdf). J. R. Statist. > Soc. B, 63(2): 411-423. > > > The answer that I provided to a [related question](https://stats.stackexchange.com/questions/9671/how-can-i-assess-how-descriptive-feature-vectors-are) highlights other general validity indices that might be used to check whether a given dataset exhibits some kind of a structure. When you don't have any idea of what you would expect to find if there was noise only, a good approach is to use resampling and study clusters stability. In other words, resample your data (via bootstrap or by adding small noise to it) and compute the "closeness" of the resulting partitions, as measured by [Jaccard](http://en.wikipedia.org/wiki/Jaccard_index) similarities. In short, it allows to estimate the frequency with which similar clusters were recovered in the data. This method is readily available in the [fpc](http://cran.r-project.org/web/packages/fpc/index.html) R package as `clusterboot()`. It takes as input either raw data or a distance matrix, and allows to apply a wide range of clustering methods (hierarchical, k-means, fuzzy methods). The method is discussed in the linked references: > > Hennig, C. (2007) [Cluster-wise > assessment of cluster stability](http://www.ucl.ac.uk/statistics/research/pdfs/rr271.pdf). > *Computational Statistics and Data Analysis*, 52, 258-271. > > > Hennig, C. (2008) [Dissolution point > and isolation robustness: robustness > criteria for general cluster analysis > methods](http://www.ucl.ac.uk/statistics/research/pdfs/rr272.pdf). *Journal of Multivariate > Analysis*, 99, 1154-1176. > > > Below is a small demonstration with the k-means algorithm. ``` sim.xy <- function(n, mean, sd) cbind(rnorm(n, mean[1], sd[1]), rnorm(n, mean[2],sd[2])) xy <- rbind(sim.xy(100, c(0,0), c(.2,.2)), sim.xy(100, c(2.5,0), c(.4,.2)), sim.xy(100, c(1.25,.5), c(.3,.2))) library(fpc) km.boot <- clusterboot(xy, B=20, bootmethod="boot", clustermethod=kmeansCBI, krange=3, seed=15555) ``` The results are quite positive in this artificial (and well structured) dataset since none of the three clusters (`krange`) were dissolved across the samples, and the average clusterwise Jaccard similarity is > 0.95 for all clusters. Below are the results on the 20 bootstrap samples. As can be seen, statistical units tend to stay grouped into the same cluster, with few exceptions for those observations lying in between. ![enter image description here](https://i.stack.imgur.com/Z6IGg.gif) You can extend this idea to any validity index, of course: choose a new series of observations by bootstrap (with replacement), compute your statistic (e.g., silhouette width, cophenetic correlation, Hubert's gamma, within sum of squares) for a range of cluster numbers (e.g., 2 to 10), repeat 100 or 500 times, and look at the boxplot of your statistic as a function of the number of cluster. 
Here is what I get with the same simulated dataset, but using Ward's hierarchical clustering and considering the cophenetic correlation (which assesses how well the distance information is reproduced in the resulting partitions) and the silhouette width (a combination measure assessing intra-cluster homogeneity and inter-cluster separation).

The cophenetic correlation ranges from 0.6267 to 0.7511 with a median value of 0.7031 (500 bootstrap samples). The silhouette width appears to be maximal when we consider 3 clusters (median 0.8408, range 0.7371-0.8769).

![enter image description here](https://i.stack.imgur.com/ex1Bo.png)
I am having difficulty finding information on the assumptions for the intraclass correlation. Can someone please tell me what they are?
There is no context to your request... here, I give an attempt to answer it in the context of regression models. More specifically, I shall refer to the usual linear mixed model. Let $y\_{ij}$ be the observation for subject $j \in \{ 1, \ldots{}, n\_i\}$ from group $i \in \{1, \ldots{}, g\}$. The model I shall consider takes the form $y\_{ij} = \underbrace{\mu\_{ij}}\_{\textrm{fixed part}} + \underbrace{u\_{i}}\_{\textrm{random part}} + e\_{ij}$, under the assumption that $u\_{i}$ is a realisation of a $N(0, \sigma^2\_u)$ random variable, $e\_{ij}$ is a realisation of a $N(0, \sigma^2)$ random variable, and under independence between these two random variables. Observe that all subjects from the $i$th group share the same value for the random effect. Therefore, the random effect accounts for the association between observations from the same group. Mathematically, this can be seen by computing $\textrm{Corr}(Y\_{ij}, Y\_{i'j'})$ (see, e.g., [here](https://stats.stackexchange.com/questions/19779/linear-hierarchical-model)). When $ i = i'$, $\textrm{Corr}(Y\_{ij}, Y\_{i'j'}) = \textrm{Corr}(Y\_{ij}, Y\_{ij'}) > 0$ is known as the *intraclass correlation*.
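For completeness, this correlation has a simple closed form under the stated assumptions (plus the usual assumption that the $e\_{ij}$ are also independent across observations), since $\textrm{Cov}(Y\_{ij}, Y\_{ij'}) = \textrm{Var}(u\_i) = \sigma^2\_u$ and $\textrm{Var}(Y\_{ij}) = \sigma^2\_u + \sigma^2$:

$$\textrm{Corr}(Y\_{ij}, Y\_{ij'}) = \frac{\sigma^2\_u}{\sigma^2\_u + \sigma^2}, \quad j \neq j'.$$

So the assumptions behind this version of the intraclass correlation are essentially those of the model: normally distributed random effects and errors, independence between them, a common $\sigma^2\_u$ and $\sigma^2$ across groups, and a shared random effect for all members of a group.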
I'm having problems understanding the concept of a random variable as a function. I understand the mechanics (I think) but I do not understand the motivation... Say $(\Omega, B, P) $ is a probability triple, where $\Omega = [0,1]$, $B$ is the Borel-$\sigma$-algebra on that interval and $P$ is the regular Lebesgue measure. Let $X$ be a random variable from $\Omega$ to $\{1,2,3,4,5,6\}$ such that $X([0,1/6)) = 1$, $X([1/6,2/6)) = 2$, ..., $X([5/6,1]) = 6$, so $X$ has a discrete uniform distribution on the values 1 through 6. That's all good, but I do not understand the necessity of the original probability triple... we could have directly constructed something equivalent as $(\{1,2,3,4,5,6\}, S, P\_x)$ where $S$ is the appropriate $\sigma$-algebra on that space, and $P\_x$ is a measure that assigns to each subset the measure (# of elements)/6. Also, the choice of $\Omega=[0,1]$ was arbitrary-- it could've been $[0,2]$, or any other set. So my question is, why bother constructing an arbitrary $\Omega$ with a $\sigma$-algebra and a measure, and define a random variable as a map from $\Omega$ to the real line?
If you are wondering why all this machinery is used when something much simpler could suffice--you are right, for most common situations. However, the measure-theoretic version of probability was developed by Kolmogorov for the purpose of establishing a theory of such generality that it could handle, in some cases, very abstract and complicated probability spaces. In fact, Kolmogorov's measure-theoretic foundations for probability ultimately allowed probabilistic tools to be applied far beyond their original intended domain of application into areas such as harmonic analysis.

At first it does seem more straightforward to skip any "underlying" sample space $\Omega$, and to simply assign probability masses to the events comprising the sample space directly, as you have proposed. Indeed, probabilists effectively do the same thing whenever they choose to work with the "induced measure" on the sample space defined by $P \circ X^{-1}$. However, things start getting tricky when you start getting into infinite dimensional spaces. Suppose you want to prove the Strong Law of Large Numbers for the specific case of flipping fair coins (that is, that the proportion of heads tends arbitrarily closely to 1/2 as the number of coin flips goes to infinity). You could attempt to construct a $\sigma$-algebra on the set of infinite sequences of the form $(H,T,H,...)$. But here you can find that it is much more convenient to take the underlying space to be $\Omega = [0,1)$, and then use the binary representations of real numbers (e.g. $0.10100...$) to represent sequences of coin flips (1 being heads, 0 being tails). An illustration of this very example can be found in the first few chapters of Billingsley's *Probability and Measure*.
Where can I find graphs relevant to real-life problems? Two repositories I know of are: * University of Florida's [Sparse Matrix Collection](http://www.cise.ufl.edu/research/sparse/matrices/) * Bodlaender's [TreewidthLib](http://people.cs.uu.nl/hansb/treewidthlib/index.php)
There are some real benchmark instances for the Frequency Assignment problem at: <http://fap.zib.de/problems/>
It's well known that the CPS (continuation-passing style) translation often employed in compilers corresponds to double negation translation under the Curry-Howard isomorphism. Though often the target language of a CPS translation is the same as the source language, sometimes it's a specialized language which only allows terms in CPS form (i.e., there are no direct style functions anymore). See, e.g., [this](https://www.microsoft.com/en-us/research/wp-content/uploads/2007/10/compilingwithcontinuationscontinued.pdf) or [this](https://www.cs.princeton.edu/%7Edpw/papers/tal-toplas.pdf). As an example, consider Thielecke's [CPS-calculus](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.41.7371&rep=rep1&type=pdf), where commands are defined as either jumps or bindings: $$b ::= x\langle\vec{x}\rangle\ |\ b\ \{\ x\langle\vec{x}\rangle = b\ \}$$ And one-hole contexts (commands with holes) are defined as follow: $$C ::= [-]\ |\ C \ \{\ x\langle\vec{x}\rangle = b\ \}\ |\ b\ \{\ x\langle\vec{x}\rangle = C\ \}$$ If we try to see these languages under the Curry-Howard isomorphism, we don't have implication anymore, but rather we use negations and products alone. The typing rules for such languages demonstrate we're trying to derive a contradiction: $$\frac{\color{orange}{\Gamma\vdash} k{:}\ \color{orange}{\neg\vec{\tau}}\quad\quad\color{orange}{\Gamma\vdash}\vec{x}{:}\ \color{orange}{\vec{\tau}}}{\color{orange}{\Gamma\vdash} k\langle\vec{x}\rangle}(J)$$ $$\frac{\color{orange}{\Gamma,}k{:}\ \color{orange}{\neg\vec{\tau}\vdash} b\quad\quad\color{orange}{\Gamma,}\vec{x}{:}\ \color{orange}{\vec{\tau}\vdash} c}{\color{orange}{\Gamma\vdash} b\ \{\ k\langle\vec{x}\rangle=c\ \}}(B)$$ (Note that these look similar to the (AXIOM) and (CUT) rules from linear logic, though on the other side of the sequent: we have a conjunction rather than a disjunction.) Reduction rules in intermediate languages such as the ones above allow jumps to be performed to bound continuations, immediately replacing arguments (hence the name "jump with arguments" sometimes employed). For the CPS-calculus, this can be represented by the following reduction rule: $$\frac{}{C[\color{blue}{k\langle \vec{x}\rangle}]\ \{\ k\langle\color{red}{\vec{y}}\rangle=\color{red}c\ \} \longrightarrow C[\color{red}{c[\color{blue}{\vec{x}}/\vec{y}]}]\ \{\ k\langle\color{red}{\vec{y}}\rangle=\color{red}c\ \}}$$ $$\frac{a\longrightarrow b}{C[a]\longrightarrow C[b]}$$ ...though similar languages have similar notions of jump. I'm not totally sure, but I believe that the reduction rule would corresponde to a cut inference rule similar to the following (quickly sketched): [![Cut rule](https://i.stack.imgur.com/iXClW.png)](https://i.stack.imgur.com/iXClW.png) ...where we're allowed to copy a bound proof tree and replace the jump subtree with it (in the example above, replacing subtree a with a copy of subtree b, though with a different context). I'm interested on how such an intermediate language could be seen by the Curry-Howard isomorphism. So, my actual question is twofold: 1. Has a similar implication-free subset of some logic (e.g., propositional logic) been studied somewhere? I mean, has a "logic without implication" been proposed? 2. What is the equivalent of a jump with arguments in logic? Assuming the cut rule I sketched above is correct (and it corresponds to the reduction rule), has something similar to it appeared elsewhere?
1. Such a logic of continuations (or a syntax of continuation that arose from logical considerations) would be Laurent's “polarised linear logic” (LLP): [Olivier Laurent, *Étude de la polarisation en logique* (2002)](https://tel.archives-ouvertes.fr/tel-00007884/en/). A good explanation of what is going on from a categorical perspective is given in [Melliès and Tabareau, *Resource modalities in tensor logic* (2010)](https://www.sciencedirect.com/science/article/pii/S0168007209001602). A detailed description of the correspondence between LLP and CPS along the lines of your question appears in [my PhD thesis (2013)](https://tel.archives-ouvertes.fr/tel-00918642/en/) (Chapter III, pp.91-95,153-199). (There are a lot of other references in this area; the bibliographies should provide you with a good starting point.) 2. The two rules (J) and (B) you wrote are derived as follows (in the notations of Laurent's PhD thesis): \begin{array}{c} \dfrac{\dfrac{\vdash\mathcal{N},!(P\_{1}^{\bot}\mathbin{⅋}\cdots\mathbin{⅋}P\_{n}^{\bot})\qquad\dfrac{\dfrac{\vdash\mathcal{N},P\_{1}\quad\cdots\quad\vdash\mathcal{N},P\_{n}}{\vdash\mathcal{N},\dots,\mathcal{N},P\_{1}\otimes\cdots\otimes P\_{n}}}{\vdash\mathcal{N},\dots,\mathcal{N},?(P\_{1}\otimes\cdots\otimes P\_{n})}}{\vdash\mathcal{N},\mathcal{N},\dots,\mathcal{N}}}{\vdash\mathcal{N}}\\ \dfrac{\dfrac{\vdash\mathcal{N},?(P\_{1}\otimes\dots\otimes P\_{n})\qquad\dfrac{\dfrac{\vdash\mathcal{N},P\_{1}^{\bot},\dots,P\_{n}^{\bot}}{\vdash\mathcal{N},P\_{1}^{\bot}\mathbin{⅋}\cdots\mathbin{⅋}P\_{n}^{\bot}}}{\vdash\mathcal{N},!(P\_{1}^{\bot}\mathbin{⅋}\cdots\mathbin{⅋}P\_{n}^{\bot})}}{\vdash\mathcal{N},\mathcal{N}}}{\vdash\mathcal{N}} \end{array} LLP is a sequent calculus and as such is finer-grained: it makes explicit structural rules and (what corresponds to) left-introduction of negation. Visible above, it also internalizes duality (all formulae on the right-hand side, like in linear logic). While Laurent does not cite Thielecke's works, Thielecke's PhD thesis previously suggested that having a “duality functor” could be useful to clarify the duality aspects of CPS. Laurent's graphical syntax will let you interpret linear substitution, but one would have to look at the details to see if it is exactly the same as the one you mention.
Feature extraction and feature selection essentially reduce the dimensionality of the data, but feature extraction also makes the data more separable, if I am right. Which technique would be *preferred* over the other and when? I was thinking, since feature selection does not modify the original data and it's properties, I assume that you will use feature selection when it's important that the features you're training on be unchanged. But I can't imagine why you would want something like this..
Adding to the answer given by Toros, these three (see the bullets below) are quite similar but have subtle differences (concise and easy to remember):

* **feature extraction and feature engineering**: transformation of raw data into features suitable for modeling;
* **feature transformation**: transformation of data to improve the accuracy of the algorithm;
* **feature selection**: removing unnecessary features.

To add an example of the same,

> Feature extraction and engineering (we can extract something from them)

* Texts (n-grams, word2vec, tf-idf, etc.)
* Images (CNNs, texts, Q&A)
* Geospatial data (lat, long, etc.)
* Date and time (day, month, week, year, rolling-based)
* Time series, web, etc.
* Dimensionality reduction techniques (PCA, SVD, eigenfaces, etc.)
* Maybe we can use clustering as well (DBSCAN, etc.)
* .....(and many others)

> Feature transformation (transforming them to make sense)

* Normalization and changing distribution (scaling)
* Interactions
* Filling in the missing values (median filling, etc.)
* .....(and many others)

> Feature selection (building your model on these selected features)

* Statistical approaches
* Selection by modeling
* Grid search
* Cross-validation
* .....(and many others)

Hope this helps... Do look at the links shared by others. They are quite nice...
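To make the distinction concrete in code, here is a minimal scikit-learn sketch (the dataset and the chosen numbers of features/components are arbitrary): selection keeps a subset of the original columns, extraction builds new derived columns, and transformation only rescales the existing ones.

```
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

X_selected = SelectKBest(f_classif, k=2).fit_transform(X, y)   # feature selection: keep 2 original columns
X_extracted = PCA(n_components=2).fit_transform(X)             # feature extraction: 2 new, derived columns
X_transformed = StandardScaler().fit_transform(X)              # feature transformation: same columns, rescaled

print(X_selected.shape, X_extracted.shape, X_transformed.shape)
```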
Computational complexity asks the following question: Given a problem $P$, what is the time-cost of the lowest time-cost machine $M^\*$ that solves $P$?

But this misses a certain aspect of the complexity of $P$, namely the complexity of *finding* $M^\*$ in the space of machines. The problem of finding $M^\*$ can be seen as an instance of the *meta-problem of finding, for some problem $P$ in a class $\mathcal P$, a machine, or the optimal machine (according to some criterion), that solves $P$*.

> The meta-problem $\mathcal P$ is: Given a problem $P\in \mathcal P$, find a (Turing) machine that solves $P$, optimized for some resource constraints $C(P)$.

We could turn the *set* of problems $\mathcal P$ into a single problem $\tilde {\mathcal P}$, where the specification of which $P\in \mathcal P$ we want to solve is defined within the information describing the *instances* of $\tilde {\mathcal P}$. However, an efficient machine $\tilde M$ that solves $\tilde {\mathcal P}$ can *not* necessarily be used to solve the meta-problem $\mathcal P$, since $\tilde M$ might not make use of specific possible optimizations for problems $P\in \mathcal P$.

For example, the solution to some specific problem $P\_i\in \mathcal P$ might be simply to always output $0$, in which case the solution to $\mathcal P$ for instance $P\_i$ is an $O(1)$-complexity machine $M\_i$ that ignores its input and outputs $0$. But $\tilde M$ might instead do all kinds of complex computations, that still are below the worst-case bound for *all* problems in $\mathcal P$, but don't make use of this specific feature of the instance $P\_i$.

Hence it may be that some problem $P$ has very low computational complexity, but high "meta-complexity" (i.e. for problems in the class of problems $\mathcal P$ that $P$ is a part of, it is hard to find an efficient algorithm).

**Is there a theory akin to this type of "meta-complexity"?**
It sounds to me that you're interested in complexity of preprocessing. I'll state your question in a different way. We have a problem $P$ consisting of pairs $(x,y)$ that satisfy some condition. We'd like an algorithm which (1) takes $x$, (2) can perform a lot of computation (preprocessing), (3) finally it's given $y$ and needs to output quickly whether $(x,y) \in P$. This is the setting of [parameterized complexity](https://en.wikipedia.org/wiki/Parameterized_complexity), a huge field.
Given a vector of points (on the 2D plane), what is the fastest algorithm to find all pairs of points at a distance of 1? Of course, I could use the $O(N^2)$ algorithm to check all pairs of points. However, I recently found out that there are at most $O(N^{4/3})$ points which satisfy the property, so I am wondering if a quicker algorithm exists. Thanks!
It depends on the particular instruction set and model of computation, but in some architectures, here is a valid program: EXIT In other words, it is just one instruction, which immediately causes the program to halt. If that fits into a single memory cell (and it could; there's no reason in principle why it can't), then you've got a program that fits in one memory cell.
I'm doing data analysis at the business I work for. 23 people were asked for input for two separate lists. For the first list, the answers could be any integer from 6 to 10. For the second list, the answers could be any integer from 0 to 15. After receiving all inputs, the average number chosen for each list was identical; for the averages to be identical, the sums had to be identical too. Is it possible to calculate the probability that the two sums were equal even though they were drawn from different ranges?

The original question was how much a prospective baby would weigh. 23 people answered with a guess of pounds and additional ounces. The pounds were limited to 6 to 10 and the ounces were limited to 0 to 15. There was no discernible connection between the guess of pounds and the guess of ounces.

Also, I need help with the calculation in addition to a yes-or-no answer. I would prefer a calculation over results obtained through simulation, but thank you for those results as well. I would comment on posts; however, I am not allowed to yet. Also, can you please try to answer in layman's terms.
**Yes it is possible**, and the other answers/comments have shown how you can calculate/approximate the probability. **But this is probably (I imagine) not what you want**. If you want to show the probability that people respond similarly or differently on two different scales then it is a bit simplistic to compare them to people that just randomly pick uniform numbers. We are not speaking of random dice rolls.

The current setup gives a low 1.5% probability that the two sets end up with the same sum (as answered and commented by others). So you might be tempted to assume that you have an extreme case of similarity (ie reject the hypothesis that the sums are not gonna end up the same). If you had a larger range for the pounds then you probably would still have ended up with equal sums (since few newborns are less than 6 or more than 10 pounds), but the theoretical naively calculated probability would be extremely low.

---

* **estimate of ounces**. This is probably gonna end up in the middle around 7.5 since not so many people know the precision and are just estimating like picking a uniformly distributed random number. (possibly a bit lower than 7.5, since people might tend to pick lower values for the ounces part or leave it out entirely)
* **estimate of the pounds**. Since the average newborn is close to between 7 and 8 you are very likely gonna end up with a mean somewhere close to 7. This makes it much more likely that the sum of ounces and sum of pounds are gonna match. (in comparison to the 1.5% based on uniform picks)

---

You could use your data to create estimates of the distributions of how people will predict birth weight (for the pounds I would model with a multinomial with the events 6,7,8,9,10, for the ounces I would model with a beta distribution). Based on those estimates of the distributions you could estimate the probability for any other new group of 23 people (from the same population) to again produce the same sum of the two different sets of random variables.
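For reference, the ~1.5% figure for purely uniform picks is easy to check with a quick simulation (a sketch; 23 guessers, pounds uniform on 6-10, ounces uniform on 0-15):

```
import numpy as np

rng = np.random.default_rng(0)
n_people, n_sim = 23, 100_000

pounds = rng.integers(6, 11, size=(n_sim, n_people)).sum(axis=1)   # upper bound is exclusive
ounces = rng.integers(0, 16, size=(n_sim, n_people)).sum(axis=1)

print((pounds == ounces).mean())   # should land somewhere around 0.015
```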
Here is a simple statistics question I was given. I'm not really sure I understand it.

> X = the number of acquired points in an exam (multiple choice and a right answer is one point). Is X binomially distributed?

The professor's answer was:

> Yes, because there are only right or wrong answers.

My answer:

> No, because each question has a different "success probability" p. As I understand it, a binomial distribution is just a series of Bernoulli experiments, each of which has a simple outcome (success or failure) with a given success probability p (and all are "identical" regarding p). E.g., flipping a (fair) coin 100 times is 100 Bernoulli experiments and all have p=0.5. But here the questions have different kinds of p, right?
The answer to this problem depends on the framing of the question and when information is gained. Overall, I tend to agree with the professor but think the explanation of his/her answer is poor and the professor's question should include more information up front. If you consider an infinite number of potential exam questions, and you draw one at random for question 1, draw one at random for question 2, etc. Then going into the exam: 1. Each question has two outcomes (right or wrong) 2. There are a fixed number of trials (questions) 3. Each trial could be considered independent (going into question two, your probability $p$ of getting it right is the same as when going into question one) Under this framework, the assumptions of a binomial experiment are met. Alas, ill-proposed statistical problems are very common in practice, not just on exams. I wouldn't hesitate to defend your rationale to your professor.
I'm using the median of response times (of users) as a feature in a machine learning context. As a second feature I want to use the standard deviation or variance of the response times. Of course, the variance is just the squared standard deviation, so from that point of view I think it should not matter which I choose. Does it really not matter if choosing the standard deviation or the variance?
The simple answer is that, all else being equal, if you have a scale and dichotomize it into just A or B, you will generally have less power than if you used the continuous scale (see e.g. Senn, Stephen. "Disappointing dichotomies." Pharmaceutical Statistics 2.4 (2003): 239-240. or Senn, Stephen. "Being efficient about efficacy estimation." Statistics in Biopharmaceutical Research 5.3 (2013): 204-210.). You should of course consider whether somehow all else is not equal. E.g. it could be possible that one particular way of asking a question (e.g. forcing people to make a clear choice) is somehow better at eliciting a clear preference. However, I would like to see some pretty compelling evidence for that before I would abandon a more nuanced outcome in favour of a dichotomized outcome.
You might often find cutting plane methods, variable propagation, branch and bound, clause learning, intelligent backtracking or even handwoven human heuristics in SAT solvers. Yet for decades the best SAT solvers have relied heavily on resolution proof techniques and use a combination of other things simply for aid and to direct resolution-style search. Obviously, it's suspected that ANY algorithm will fail to decide the satisfiability question in polynomial time in at least some cases. In 1985, Haken proved in his paper ["The intractability of resolution"](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V1G-48HRYBH-P&_user=10&_coverDate=12%2F31%2F1985&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1436167533&_rerunOrigin=scholar.google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=3eadc0120fda0dd052d1f9a8007b9f66) that the pigeonhole principle encoded in CNF does not admit polynomial sized resolution proofs. While this does prove something about the intractability of resolution-based algorithms, it also gives criteria by which cutting edge solvers can be judged - and in fact one of the many considerations that goes into designing a SAT solver today is how it is likely to perform on known 'hard' cases. Having a list of classes of Boolean formulas that provably require exponential-size resolution proofs is useful in the sense that it gives 'hard' formulas to test new SAT solvers against. What work has been done in compiling such classes together? Does anyone have a reference containing such a list and their relevant proofs? Please list one class of Boolean formula per answer.
**Hard instances for resolution**: 1. Tseitin's formulas (over expander graphs). 2. Weak ($ m $ to $ n$) pigeonhole principle (exponential in $n$ lower bounds, for any $ m>n $). 3. Random 3CNF's with $ n $ variables and $ O(n^{1.5-\epsilon})$ clauses, for $ 0<\epsilon<1/2 $. Good, relatively up-to-date, technical survey for proof complexity lower bounds, see: Nathan Segerlind: The Complexity of Propositional Proofs. Bulletin of Symbolic Logic 13(4): 417-481 (2007) available at: <http://www.math.ucla.edu/~asl/bsl/1304/1304-001.ps>
Let's say I have 2 series of data: * series A has 100 samples, all 0s * series B has 1M samples, out of which 1000 are "1", the rest are 0 I'm trying to use a statistical test to check if both series have the same mean or not. To me, these series could have the same mean (i.e. 0.001). Using scipy: ``` from scipy import stats A = [0.0]*100 B = [1.0]*1000 + [0.0]*(1000000-1000) stats.ttest_ind(A, B, equal_var=False) ``` Returns a pvalue=1.399064173683964e-219, which means that both series are different. It seems that the size of A is not taken into account, i.e. I get the same results if I switch A to be 1 million zeroes. Is this expected? To me, this p-value is inaccurate. Have I made an incorrect assumption somewhere?
Student's t-test is based on the normality of the data. You have manually generated binary data and I think it cannot be assumed they are normal. You should test the probability of being 0 or 1, instead: $$H\_0: p\_A = p\_B, \text{ } H\_A:p\_A \neq p\_B$$ You can estimate the probability of being 1 in the samples A and B as $$\hat{p}\_A = \frac{\text{number of ones}}{\text{sample size}} = 0$$ $$\hat{p}\_B = \frac{\text{number of ones}}{\text{sample size}} = 0.001$$ Using the *de Moivre-Laplace central limit theorem* you get that $$\frac{\hat{p}\_A - \hat{p}\_B}{\sqrt{\frac{\hat{p}\_A (1 - \hat{p}\_A)}{n\_A} + \frac{\hat{p}\_B (1 - \hat{p}\_B)}{n\_B}}}$$ is approximately $N(0,1)$ standard normal ($n\_A$ and $n\_B$ are the corresponding sample sizes). So calculating the above test statistic, if it is greater than $1.96$ or less than $-1.96$, you should reject $H\_0$ at significance level $0.05$ (the calculation is left as an exercise to the reader :-) ).
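For completeness, a small sketch of that calculation with the counts from the question (it uses the unpooled variance exactly as in the formula above; many texts would use a pooled estimate of $p$ instead, which changes the standard error slightly but not the conclusion):

```
import numpy as np

ones_A, n_A = 0, 100            # series A: 100 samples, all zeros
ones_B, n_B = 1000, 1_000_000   # series B: 1000 ones among one million samples

p_A, p_B = ones_A / n_A, ones_B / n_B
se = np.sqrt(p_A * (1 - p_A) / n_A + p_B * (1 - p_B) / n_B)
z = (p_A - p_B) / se
print(z)  # about -31.6, far outside [-1.96, 1.96], so H0 is rejected
```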
One big advantage of SAS over R is arguably its ability to produce quite complex reports with few statements; think of `PROC SUMMARY` or `PROC TABULATE` for instance. My heart goes to R because of its openness and vibrant community. But I must admit that SAS's PROCS are quite powerful out-of-the-box. To partially address those issues I wrote an R package titled [summarytools](http://cran.r-project.org/web/packages/summarytools/index.html) which provides ways to generate decent looking and translatable (thanks to pander, `Pandoc` implemented in R) simple reports (frequencies, univariate stats, codebook, for the essential part) to various formats like RTF, pdf, and markdown. However, even with the use of `by()` to stratify the stats (be it frequencies or univariate numerical stats), I feel I'm still miles away from generating as flexible and complete tables such as with `PROC TABULATE` or `PROC MEANS`. So my question is: what R packages do you find are "musts" for needs of extracting essential stats from dataframes, splitting on this variable and filtering on that other one. I hope this is not judged as too broad a question; I have made my homework and tried finding the answer to this question before posting here. I'm sure there are some really really well-made packages that address those issues, and I simply haven't seen them around ... yet.
The problem with R is that there are so many ways to construct great reports, and so many R packages that are helpful for this task. One approach, though getting out of date, is shown in <http://biostat.app.vumc.org/wiki/pub/Main/StatReport/summary.pdf> . Note that some of the functions there have been updated as shown in <http://hbiostat.org/R/Hmisc> [and really take note of the `tabulr` function]. That approach revolves around $\LaTeX$, and I believe you'll find that for producing advanced tables (including ones containing micrographics and footnotes), $\LaTeX$ has many advantages over the `markdown-pandoc` approach. *But* I believe that we should replace almost all tables with graphics. The new R `greport` ("graphical report") and `hreport` ("html report") packages take the philosophy that graphics should be used for the main presentation, and graphs should be hyperlinked to supporting tables that appear in an appendix to the pdf report. See <http://hbiostat.org/r>. These packages use new functions in the `Hmisc` package for graphing categorical data (i.e., translating tables to plots) and for showing whole distributions of continuous variables.
I found this question in a forum chat, and while looking at it I thought I could solve it using recursion. > > A group of friends is split into cells in a room in a random > arrangement of m X n cell locations in a rectangular or square form, > such that each person in a cell can see all the people in their cell > as well as the people in all the cells at the higher or equal position > in row or column number. You are required to find out how many persons > can be seen by each cell player from their respective cells i.e. you > have to print the view matrix. persons in cell [i,j] can see all the > players in cell [a,b], where a = i to m and b = j to n. > > > Let the matrix be = \begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix} The output must be =\begin{bmatrix}9&6&3\\6&4&2\\3&2&1\end{bmatrix} After hours of thinking about it, I realized that this question can be solved using dynamic programming. However, I don't know enough about dynamic programming, and after a few tutorials from the internet I am still confused. Can anyone refer me to a question exactly like this on the Internet? I am unable to find one, and I wish to learn about this topic by working through this question. Otherwise, you can also tell me how questions like this are done. I will keep editing as I come to understand how to do this question. Thanks. A few resources that convinced me this is a dynamic programming question: [What is dynamic programming about?](https://cs.stackexchange.com/questions/47216/what-is-dynamic-programming-about) [When can I use dynamic programming to reduce the time complexity of my recursive algorithm?](https://cs.stackexchange.com/questions/2057/when-can-i-use-dynamic-programming-to-reduce-the-time-complexity-of-my-recursive)
A problem is called solvable by Dynamic Programming if its solution can be calculated by solving its sub-problems. Let's see how the current state of the given problem depends on its previous states (aka sub-problems). [![enter image description here](https://i.stack.imgur.com/JJRJR.png)](https://i.stack.imgur.com/JJRJR.png) Let the answer to the block in colour C be denoted by Ans(C). We can clearly see that, **Ans(PINK) = 1 + Ans(BLUE) + Ans(ORANGE) - Ans(GREEN)** *Note: we are subtracting the GREEN block because it was added twice (once in ORANGE and once in BLUE).* Mathematically speaking, if **Ans(i,j) = answer to the block starting from (i,j) and ending at (n,m).** **Ans(i,j) = 1 + Ans(i+1,j) + Ans(i,j+1) - Ans(i+1,j+1)** This is known as the Dynamic Programming transition. The base cases are self-explanatory :) 2D Prefix Sums (generalised solution): <https://usaco.guide/silver/more-prefix-sums?lang=cpp>
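If it helps to see the transition in code, here is a minimal sketch (my own illustration) that fills the table bottom-up; it assumes, as in the example matrix above, that every cell holds exactly one person:

```
def view_matrix(m, n):
    # ans[i][j] = number of cells (a, b) with a >= i and b >= j,
    # computed with the transition Ans(i,j) = 1 + Ans(i+1,j) + Ans(i,j+1) - Ans(i+1,j+1).
    # The extra row and column of zeros serve as the base case.
    ans = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            ans[i][j] = 1 + ans[i + 1][j] + ans[i][j + 1] - ans[i + 1][j + 1]
    return [row[:n] for row in ans[:m]]

print(view_matrix(3, 3))  # [[9, 6, 3], [6, 4, 2], [3, 2, 1]]
```

If cells could contain more than one person, the `1` in the transition would simply be replaced by the count of people in cell `(i, j)`.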
Prove/disprove: if $L\in \text{coRE}$ then $L$ is mapping-reducible to $\text{PAL}\_{\text{TM}}$, where $\text{PAL}\_{\text{TM}} = \{~\langle M,w\rangle ~|~ M ~\text{is a TM and}~w~\text{is a palindrome}~\}$ It feels like there's not necessarily a mapping reduction from every language to a harder language, but I couldn't find a counter example or disprove the claim (How do I prove that a mapping doesn't exist?) I tried playing with languages that I know that are in $\text{coRE}$, and also tried to assume towards contradiction that such a mapping exists, but it didn't really lead me anywhere.
It is easy to disprove the claim that every $L$ in co-RE is mapping reducible to $\text{PAL}\_{\text{TM}}$. The key observation is that $\text{PAL}\_{\text{TM}}$ is decidable: given an input, check that it has the form $\langle M,w\rangle$ where $M$ is a TM, and then check whether $w$ is a palindrome; the machine $M$ never has to be simulated. If $L$ were mapping reducible to $\text{PAL}\_{\text{TM}}$ via a computable function $f$, then $L$ would be decidable too: on input $x$, compute $f(x)$ and run the decider for $\text{PAL}\_{\text{TM}}$ on it. However, co-RE contains undecidable languages, for example $\overline{A\_{\text{TM}}}$, the complement of the acceptance problem. Such a language cannot be mapping reducible to the decidable language $\text{PAL}\_{\text{TM}}$, so it is a counterexample to the claim.
**UPDATE** Now community wiki. My new version of the question is: let's make a big list of classes of polygons. We may be able to produce the most comprehensive list on the web, or in the literature. If there is community interest, after Jan 1st I will organize the information from all the answers into a post on the community blog. **ORIGINAL QUESTION BELOW** > > Could you recommend a source, either in print or online, for a menagerie of polygons? An extensive/exhaustive list of classes of polygon? > > > The Wikipedia article on polygons provides a [partial classification](http://en.wikipedia.org/wiki/Polygon#Classification), but I would like something more complete. Also, I am not concerned about how to name a 90-sided figure. Rather, I am trying to find a list of classes that contain infinitely many figures each (examples: star polygon, isothetic polygon). For example: actual words for "looks like a spider," "would look like a football if it were smoothed out," and so on. I've already checked [The Geometry Junkyard](http://www.ics.uci.edu/~eppstein/junkyard/) and several pages of a Google search, but perhaps I didn't know what to look for. Thanks very much.
Another class of polygons is called anthropomorphic: these polygons have exactly two ears and one mouth. Godfried T.Toussaint, "Anthropomorphic Polygons." American Mathematical Monthly 122, 31-35, 1991. Grunbaum wrote a long paper describing many classes of crossing polygons here: Branko Grunbaum, "Polygons," In The Geometry of Metric and Linear Spaces, L. M. Kelly, ed. Lecture Notes in Mathematics Number 490, pp. 147 - 184. Springer-Verlag, Berlin-Heidelberg-New York 1975.
How can I compute $P(Y > 3X \mid Y > 0)$ where $X$ and $Y$ are i.i.d. standard normal? The solution that I have is pretty unclear to me: "The key is that $N(0, 1)^2$ is cyclically symmetric. When plotting the distributions, the p.d.f. will be cyclically symmetric about the origin. Then, one can perform a geometric probability calculation to obtain an answer in terms of $\arctan$". I'm pretty confused, and I'm wondering if someone can please explain the solution. I tried using Bayes' Rule, which led me nowhere. I don't quite see how to visualize $N(0, 1)^2$, or even how it's related to the problem. I just drew this picture in Desmos for reference: [![enter image description here](https://i.stack.imgur.com/4JpGZ.png)](https://i.stack.imgur.com/4JpGZ.png)
Hint: *draw* (preferably on a piece of paper) a sketch of a circle centered at the origin. Mark (by shading) the region corresponding to the event $(Y > 0)$. Then, mark (by cross-hatching) the *sub*-region corresponding to the event $(Y > 0)\cap (Y > 3X)$. What *fraction* of the region $(Y > 0)$ is the region $(Y > 0)\cap (Y > 3X)$?
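If you want to sanity-check the fraction you read off the sketch, a quick Monte Carlo estimate (my own addition, not part of the hint) can be compared against the $\arctan$ expression you derive:

```
import numpy as np

rng = np.random.default_rng(0)
n = 10_000_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)

upper = y > 0                              # condition on the event Y > 0
print(np.mean(y[upper] > 3 * x[upper]))    # estimate of P(Y > 3X | Y > 0), about 0.602
```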
Today I revisited the topic of runtime complexity orders – big-O and big-$\Theta$. I finally fully understood what the formal definition of big-O meant but more importantly I realised that big-O orders can be considered sets. For example, $n^3 + 3n + 1$ can be considered an **element** of set $O(n^3)$. Moreover, $O(1)$ is a subset of $O(n)$ is a subset of $O(n^2)$, etc. This got me thinking about big-Theta which is also obviously a set. What I found confusing is how each big-Theta order relates to each other. i.e. I believe that $\Theta(n^3)$ **is not** a subset of $\Theta(n^4)$. I played around with Desmos (graph visualiser) for a while and I failed to find how each big-Theta order relates to other orders. A simple example [Big-Theta example graphs](https://i.stack.imgur.com/5Eq8Q.png) shows that although $f(n) = 2n$ is in $\Theta(n)$ and $g(n) = 2n^2$ is in $\Theta(n^2)$, the graphs in $\Theta(n)$ are obviously not in $\Theta(n^2)$. I kind of understand this visually, if I think about how different graphs and bounds might look like but I am having a hard time getting a solid explanation of why it is the way it is. So, my questions are: 1. Is what I wrote about big-O correct? 2. How do big-Theta sets relate to each other, if they relate at all? 3. Why do they relate to each other the way they do? The explanation is probably derivable from the formal definition of big-Theta (might be wrong here) and if someone could relate the explanation back to that definition it would be great. 4. Is this also the reason why big-O is better for analysing complexity? Because it is easier to compare it to other runtimes?
> > Is what I wrote about big-O correct? > > > Yes. > > How do big-Theta sets relate to each other, if they relate at all? > > > They are a partition of the space of functions. If $\Theta(f)\cap \Theta(g)\not = \emptyset$, then $\Theta(f)=\Theta(g)$. Moreover, $\Theta(f)\subseteq O(f)$. > > Why do they relate to each other the way they do? The explanation is probably derivable from the formal definition of big-Theta (might be wrong here) and if someone could relate the explanation back to that definition it would be great. > > > A function $f$ is in $\Theta(g)$ if and only if there are constants $c\_1,c\_2>0$ such that $c\_1 g(n)\leq f(n) \leq c\_2g(n)$ for all sufficiently large $n$. Seeing that the above relation holds is a simple case of doing some substitutions: Suppose there is some $a\in \Theta(f), a\in \Theta(g)$ and $b\in \Theta(f)$, then we know there exist constants such that (for sufficiently large $n$) $c\_1 f(n)\leq a(n) \leq c\_2f(n)$ $c\_3 g(n)\leq a(n) \leq c\_4g(n)$ $c\_5 f(n)\leq b(n) \leq c\_6f(n)$ then $c\_5 c\_3 g(n)/c\_2 \leq c\_5 a(n)/c\_2 \leq c\_5 f(n)\leq b(n)\leq c\_6f(n)\leq c\_6 a(n)/c\_1\leq c\_6c\_4g(n)/c\_1$ and thus $b\in \Theta(g)$. > > Is this also the reason why big-O is better for analysing complexity? Because it is easier to compare it to other runtimes? > > > It is not "better". You could say it is worse, because an algorithm being $\Theta(f)$ implies that it is $O(f)$ (but not vice-versa), so "$\Theta$" is a strictly stronger statement than "$O$". The reason "$O$" is more popular is because "$O$" expresses an upper bound on the speed of an algorithm, i.e., it is a guarantee it will run in at most a given time. "$\Theta$" also expresses the same upper bound, but, in addition, also expresses that this upper bound is the best possible upper bound for a given algorithm. E.g., an algorithm running in time $O(n^3)$ can actually turn out to also run in $O(n^2)$, but an algorithm running in time $\Theta(n^3)$ can not also run in $\Theta(n^2)$ time. From a practical perspective, if we want to know whether an algorithm is fast enough for a practical purpose, knowing it runs in $O(n^2)$ time is good enough to determine whether it is fast enough. The information that it runs in $\Theta(n^2)$ time is not really important to the practical use. If we have determined that an $O(n^2)$-time algorithm is fast enough for our application, then who cares if the algorithm that was claimed to be $O(n^2)$ is actually $\Theta(n)$? Obviously, if you are going to give an upper bound on the running time of an algorithm, you will endeavor to give the best possible upper bound (there is no sense in saying your algorithm is $O(n^3)$ when you could also say it is $O(n^2)$). For this reason, when people say "$O$" they often implicitly mean "$\Theta$". The reason people write "$O$" is because this is easier on a normal keyboard, is customary, conveys the most important information about the algorithm (the upper bound on the speed) and people often can't be bothered to formally prove the lower bound.
Suppose we had an algorithm that solved an **NP**-complete problem (SAT, TSP, etc.) in time $O(2^{N/B})$ where $B>2$ is an input to the algorithm, along with the instance to be solved. So for $B < N$, we have a reduced exponential growth runtime, but for $B \geq N$ we actually have a constant runtime. What would this say about **P** vs **NP**? Does this complexity class already exist or would it be a new one? Would this add anything significant to our current understanding of complexity theory?
It says nothing about the **P** vs **NP** question because no such algorithm can exist. If there is an algorithm that takes as its input a formula $\phi$ and an integer $B$ and determines whether $\phi$ is satisfiable in time $O(2^{N/B})$, where $N$ is the length of $\phi$, then we can decide SAT in constant time by calling the algorithm with $B=N$. But we know that SAT cannot be solved in constant time because, in $k$ steps of the computation, you can only read the first $k$ characters of the input. That means that you have no way of distinguishing between the formulas $X\land X\land\dots \land X$ and $X\land X\land\dots \land X\land\neg X$, where $X$ is repeated, say, $k$ times. One of these formulas is satisfiable and the other isn't, but an algorithm running in $k$ steps would have to return "satisfiable" or "unsatisfiable" before seeing the $\neg X$ (or lack of it) at the end.
I have been studying statistics from many books for the last 3 years, and thanks to this site I learned a lot. Nevertheless one fundamental question still remains unanswered for me. It may have a very simple or a very difficult answer, but I know for sure it requires some deep understanding of statistics. When fitting a model to data, be it a frequentist or a Bayesian approach, we propose a model, which may consist of a functional form for likelihood, a prior, or a kernel (non-parametric), etc. The issue is any model fits a sample with some level of goodness. One can always find a better or worse model compared to what's currently at hand. At some point we stop and start drawing conclusions, generalize to population parameters, report confidence intervals, calculate risk, etc. Hence, whatever conclusion we draw is always conditional on the model we decided to settle with. Even if we are using tools to estimate the expected KL distance such as AIC, MDL, etc., it doesn't say anything about where we stand on an absolute basis, but just improves our estimation on a relative basis. It seems there is no objectivity as the model error is completely ignored. Now suppose that we would like to define a step by step procedure to apply to any data set when building models. What should we specify as a stopping rule? Can we at least bound the model error which will give us an objective stopping point (this is different than stopping training using a validation sample, since it also gives a stopping point within the evaluated model class rather than w.r.t. the true DGP)?
Unfortunately, this question does *not* have a good answer. You can choose the best model based on the fact that it minimizes absolute error or squared error, maximizes likelihood, or does well on some criterion that penalizes likelihood (e.g. AIC, BIC), to mention just a few of the most common choices. The problem is that none of those criteria will let you choose the objectively best model, but only the best among those you compared. Another problem is that while optimizing you can always end up in some local maximum/minimum. Yet another problem is that your choice of criterion for model selection is *subjective*. In many cases you consciously, or semi-consciously, make a decision on what you are interested in and choose the criterion based on this. For [example](https://stats.stackexchange.com/questions/577/is-there-any-reason-to-prefer-the-aic-or-bic-over-the-other), using BIC rather than AIC leads to more parsimonious models, with fewer parameters. Usually, for *modeling* you are interested in more parsimonious models that lead to some general conclusions about the universe, while for *predicting* it doesn't have to be so, and sometimes a more complicated model can have better predictive power (but it does not have to, and often it does not). In yet other cases, more complicated models are preferred for *practical* reasons; for example, when estimating a Bayesian model with MCMC, a model with hierarchical [hyperpriors](https://stats.stackexchange.com/questions/133067/what-exactly-is-a-hyperparameter/133071#133071) can behave better in simulation than the simpler one. On the other hand, generally we are afraid of [overfitting](https://stats.stackexchange.com/questions/128616/whats-a-real-world-example-of-overfitting), and the simpler model has the lower risk of overfitting, so it is a safer choice. A nice example of this is automatic [stepwise model selection](https://stats.stackexchange.com/questions/20836/algorithms-for-automatic-model-selection/20856#20856), which is generally not recommended because it easily leads to overfitted and biased estimates. There is also a philosophical argument, [Occam's razor](https://en.wikipedia.org/wiki/Occam's_razor), that the simplest model is the preferred one. Notice also that we are discussing comparing different models here, while in real-life situations it can also be that using different statistical tools leads to different results - so there is an additional layer of choosing the method! All this leads to the sad, but entertaining, fact that we can never be sure. We start with uncertainty, use methods to deal with it, and we end up with uncertainty. This may be paradoxical, but recall that we use statistics because we *believe* that the world is uncertain and probabilistic (otherwise we would choose a career of prophets), so how could we possibly end up with certainty? There is no objective stopping rule; there are multiple possible models, all of them wrong (sorry for the cliché!) because they try to simplify the complicated (constantly changing and probabilistic) reality. We find some of them more useful than others for our purposes, and sometimes we *do* find different models useful for different purposes. You can go to the very bottom and notice that in many cases we make models of unknown $\theta$'s that in most cases can never be known, or may not even exist (does a *population* have any $\mu$ for age?).
Most models do *not* even try to describe reality, but rather provide abstractions and generalizations, so they cannot be "right" or "correct". You can go even deeper and find out that there is no such thing as "probability" in reality - it is just some approximation of the uncertainty around us, and there are also alternative ways of approximating it, e.g. fuzzy logic (see Kosko, 1993 for discussion). Even the very basic tools and theorems that our methods are grounded on are approximations, and they are not the only ones possible. We simply cannot be certain in such a setup. The stopping rule that you are looking for is always problem-specific and subjective, i.e. based on so-called professional judgment. By the way, there are lots of research examples showing that professionals are often no better, and sometimes even worse, in their judgment than laypeople (e.g. reviewed in papers and books by [Daniel Kahneman](https://en.wikipedia.org/wiki/Daniel_Kahneman)), while being more prone to [overconfidence](https://en.wikipedia.org/wiki/Overconfidence_effect) (this is actually an argument for why we should *not* try to be "sure" about our models). --- Kosko, B. (1993). Fuzzy thinking: the new science of fuzzy logic. New York: Hyperion.
The recent breakthrough circuit complexity lower-bound result of Ryan Williams provides a proof technique that uses upper-bound result to prove complexity lower-bounds. Suresh Venkat in his answer to this question, [Are there any counter-intuitive results in theoretical computer science?](https://cstheory.stackexchange.com/questions/2802/are-there-any-counterintuitive-results-in-theoretical-computer-science/2914#2914), provided two examples of establishing lower-bounds by proving upper-bounds. > > * What are the other interesting results for proving complexity lower-bounds that was obtained by proving complexity upper-bounds? > * Is there any upper-bound conjecture that would imply $NP \not\subseteq P/poly$ (or $P \ne NP$)? > > >
One could turn the question around and ask what lower bounds *aren't* proved by proving an upper bound. Almost all communication complexity lower bounds (and the streaming-algorithm and data-structure lower bounds that rely on communication complexity arguments) are proved by showing that a communication protocol can be constructively turned into an encoding scheme, with the length of the encoding depending on the communication complexity of the protocol, and the lower bound for the protocol follows from the fact that you cannot encode all n-bit messages using n-1 bits or fewer. The Razborov-Smolensky circuit lower bounds work by showing how to simulate bounded-depth circuits by low-degree polynomials. A couple of candidates for lower bounds that are not proved with an upper bound could be the time hierarchy theorem (although, to get the tightest bounds, one needs an efficient universal Turing machine, which is a non-trivial algorithmic task) and the proof of AC0 lower bounds using the switching lemma (but the cleanest proof of the switching lemma uses a counting/incompressibility/Kolmogorov-complexity argument).
From Wikipedia: In theoretical computer science, correctness of an algorithm is asserted when it is said that the algorithm is correct with respect to a specification. But the problem is that getting the "appropriate" specification is not a trivial task, and there is no 100% correct method (as far as I know) to get the right one; it is just an estimate. So, if we are going to take a predicate as the specification just because it "looks" like the right one, why not take the program as correct just because it "looks" correct?
First off, you're absolutely right: you're on to a real concern. Formal verification transfers the problem of confidence in program correctness to the problem of confidence in specification correctness, so it is not a silver bullet. There are several reasons why this process can still be useful, though. 1. Specifications are often simpler than the code itself. For instance, consider the problem of sorting an array of integers. There are fairly sophisticated sorting algorithms that do clever things to improve performance. But the specification is fairly simple to state: the output must be in increasing order, and must be a permutation of the input. Thus, it is arguably easier to gain confidence in the correctness of the specification than in the correctness of the code itself. 2. There is no single point of failure. Suppose you have one person write down a specification, and another person write the source code, and then formally verify that the code meets the spec. Then any undetected flaw would have to be present in *both* the spec *and* the code. In some cases, for some types of flaws, this feels less likely: it's less likely that you'd overlook the flaw when inspecting the spec *and* overlook the flaw when inspecting the source code. Not all, but some. 3. Partial specs can be vastly simpler than the code. For instance, consider the requirement that the program is free of buffer overrun vulnerabilities. Or, the requirement that there are no array index out-of-bounds errors. This is a simple spec that is fairly obviously a good and useful thing to be able to prove. Now you can try to use formal methods to prove that the entire program meets this spec. That might be a fairly involved task, but if you are successful, you gain increased confidence in the program. 4. Specs might change less frequently than code. Without formal methods, each time we update the source code, we have to manually check that the update won't introduce any bugs or flaws. Formal methods can potentially reduce this burden: suppose the spec doesn't change, so that software updates involve only changes to the code and not changes to the spec. Then for each update, you are relieved of the burden to check whether the spec is still correct (it hasn't changed, so there's no risk new bugs have been introduced in the spec) and of the burden to check whether the code is still correct (the program verifier checks that for you). You still need to check that the original spec is correct, but then you don't need to keep checking it each time a developer commits a new patch/update/change. This can potentially reduce the burden of checking correctness as code is maintained and evolves. Finally, remember that specs typically are declarative and can't necessarily be executed nor compiled directly to code. For instance, consider sorting again: the spec says that the output is increasing and is a permutation of the input, but there is no obvious way to "execute" this spec directly and no obvious way for a compiler to automatically compile it to code. So just taking the spec as correct and executing it often isn't an option. Nonetheless, the bottom line remains the same: formal methods are not a panacea. They simply transfer the (very hard) problem of confidence in code correctness to the (merely hard) problem of confidence in spec correctness. Bugs in the spec are a real risk, they are common, and they can't be overlooked. 
Indeed, the formal methods community sometimes separates the problem into two pieces: *verification* is about ensuring the code meets the spec; *validation* is about ensuring the spec is correct (meets our needs). You might also enjoy [Formal program verification in practice](https://cs.stackexchange.com/q/13785/755) and [Why aren't we researching more towards compile time guarantees?](https://cs.stackexchange.com/q/33127/755) for more perspectives with some bearing on this.
My dataset contains two (rather strongly correlated) variables $t$ (runtime of the algorithm) and $n$ (number of examined nodes, whatever). Both are strongly correlated by design, because the algorithm can manage roughly $c$ nodes per second. The algorithm was run on several problems, but it was terminated if a solution had not been found after some timeout $T$. So the data is right-censored on the time variable. I plot the estimated cumulative distribution function (or the cumulated count) of variable $n$ for the cases where the algorithm did terminate with $t<T$. This shows how many problems could be solved by expanding at most $n$ nodes and is useful for comparing different configurations of the algorithm. But in the plot for $n$, there are those funny tails at the top going right sharply, as can be seen in the image below. Compare the ecdf for variable $t$, on which the censoring was done. Cumulated Count of $n$ ---------------------- ![ecdf of n](https://i.stack.imgur.com/BlohK.png) Cumulated Count of $t$ ---------------------- ![ecdf of t](https://i.stack.imgur.com/V8EMk.png) Simulation ---------- I understand why this happens, and can reproduce the effect in a simulation using the following *R* code. It's caused by censoring on a strongly correlated variable under the addition of some noise. ``` qplot( Filter(function(x) (x + rnorm(1,0,1)[1]) < 5, runif(10000,0,10)), stat="ecdf",geom="step") ``` ![synthetic data](https://i.stack.imgur.com/Zjscp.png) **What is this phenomenon called?** I need to state in a publication that these fans are artifacts of the experiment and do not reflect the real distribution.
I suspect you have run into the family of stable non-symmetric distributions. First, plot your ecdf in a log-log plot. Adopt a parametric approach and assume a Pareto distribution: ![enter image description here](https://i.stack.imgur.com/wqX06.png) The cdf in your case translates to $F\_t(t)=1-(\frac{t\_{min}}{t})^{\alpha}$ for $t>t\_{min}$, where $t\_{min}$ is the minimum completion time of your algorithm, hence the threshold appearing on the left side of the ecdf graph. If you see a line in the log-log plot, then you are on the right path: run a linear regression on the log-transformed data to find $\hat{\alpha}$, the so-called Pareto index (a sketch is given below). The Pareto index must be greater than 1; it quantifies the heavy "tailness" of the distribution, i.e. how much of the data lies far out in the tail. The closer it is to 1, the more pathological the situation. In other words, $\alpha$ expresses the ratio of nodes that spent negligible time to nodes that spent excessive time before completion. A previous answer pinpointed the fact that you terminate your experiment abruptly; this introduces a complication, described as $\hat{\alpha}=\hat{\alpha}(T)$. I suggest you vary $T$ to explore this dependence. The heavy-tails phenomenon is common in computer science, particularly when nodes compete for shared resources in a random fashion, e.g. in computer networks.
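Here is a minimal sketch of that log-log regression on the empirical survival function (my own illustration with synthetic Pareto data; with real data you would replace `t` by your completion times, and you may prefer the Hill estimator, which is less biased than a least-squares fit to the ecdf):

```
import numpy as np

rng = np.random.default_rng(1)
t_min = 0.1
t = (rng.pareto(a=1.5, size=5000) + 1.0) * t_min   # synthetic completion times, true alpha = 1.5

t = np.sort(t)
surv = 1.0 - np.arange(1, len(t) + 1) / len(t)      # empirical survival function P(T > t)
keep = surv > 0                                      # drop the last point, log(0) is undefined

# Pareto tail: log P(T > t) = alpha*log(t_min) - alpha*log(t), so the slope is -alpha
slope, intercept = np.polyfit(np.log(t[keep]), np.log(surv[keep]), 1)
print("estimated Pareto index:", -slope)             # close to 1.5
```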
I'm working on a multiclass text classification task (5 classes). I've 2 types of datasets: 1. regular (~22000 samples) 2. dataset of duplicates (~19000 samples) I've written logic that labels them all. I've noticed that after adding the additional set of data (which is labeled using a different code path), the `val_accuracy` doesn't reach more than 67%, while using only the regular dataset I can easily reach 74%. A few questions: 1. Is working with only ~22000 samples sufficient for this kind of classification problem? 2. How come adding more samples damages the val\_accuracy (I was under the impression it should increase it)? **Some more info** I feel like my use case wasn't elaborated enough: My goal is to classify bugs to the relevant owner group (there are 5 of them). A duplicate bug is not nearly identical (text-wise) to the bug it duplicates, and so I thought adding it could improve the model's accuracy. Again, as mentioned, my logic takes care of labeling the duplicated bugs correctly (by finding the owner group of the original one). Once that is done, I add the duplicates to the dataset, shuffle it and **only then** split it into train and test. Another point to mention is that my dataset is indeed imbalanced and I use class weights to handle that (I also tried augmentation, but it took a lot of time and didn't change much).
"All models are wrong, some are useful" - [George Box](https://en.wikipedia.org/wiki/All_models_are_wrong) CART based trees are greedy. They may miss the best split. CART based trees do not look ahead or backfit. The hope is that after many splits, many trees, the final answer will be close enough to optimal. There is no guarantee with any algorithm though, i.e. [the no free lunch theorem](https://www.geeksforgeeks.org/what-is-no-free-lunch-theorem/). There are other types of trees besides CART. Bayesian trees, optimal trees, model based trees, and more, that may not be as greedy and MAY or MAY NOT make a "better" model for your use case. You can always try different algorithms.
I created a causal model in which $X$ causes $Y$ and $Z$, and $Y$ causes $Z$ in the following way: ``` set.seed(2021) N <- 10000 X <- purrr::rbernoulli(N) Y <- X + purrr::rbernoulli(N) Z <- 2*X + 3*Y + purrr::rbernoulli(N) ``` I created it this way so that the variables are discrete. That's it. The equivalent DAG would be the one below: [![enter image description here](https://i.stack.imgur.com/7iDqS.png)](https://i.stack.imgur.com/7iDqS.png) People working with causal inference are probably more used to unshielded triplets, in which case we would have no edge between $X$ and $Y$ and therefore a v-structure. In this hypothetical situation, $X$ and $Y$ are independent but become [spuriously] dependent when conditioning on $Z$, a collider. However, going back to the diagram I showed, there is direct association between $X$ and $Y$ and if we adjust on $Z$, we open a blocked path, in the sense that we add some spurious dependence between $X$ and $Y$ through $Z$. What's driving me nuts is that the mutual information between $X$ and $Y$ not only is larger than that between $X$ and $Y$ conditioned on $Z$, but the latter is $0$! That is, $I(X;Y) > I(X;Y|Z) = 0$. In R: ``` infotheo::condinformation(X,Y) infotheo::condinformation(X,Y,Z) ``` I tried changing the equations for $Y$ and $Z$ and yet, the zero is always there (almost always, check the end of the question) for $I(X;Y|Z)$, whenever it's a shielded triplet. Even if I do it with continuous variables and normal distributions for the noise, I still find the same thing. ``` set.seed(2021) N <- 10000 X <- rnorm(N, mean=10, sd=2) Y <- X + rnorm(N, mean=10, sd=2) Z <- X + Y + rnorm(N, mean=10, sd=2) miic::discretizeMutual(X,Y, plot=FALSE)$info miic::discretizeMutual(X,Y, matrix_u=matrix(Z), plot=FALSE)$info ``` But then, if I change a bit the structural equation for $Z$, I get something different from zero. ``` set.seed(2021) N <- 10000 X <- rnorm(N, mean=10, sd=2) Y <- X + rnorm(N, mean=10, sd=2) Z <- 2*X + 3*Y + rnorm(N, mean=10, sd=2) miic::discretizeMutual(X,Y, plot=FALSE)$info miic::discretizeMutual(X,Y, matrix_u=matrix(Z), plot=FALSE)$info ``` I also get a value different from zero if I make the distribution of the noise in $Z$ explicitly different from the noise in $X$ and $Y$. ``` set.seed(2021) N <- 10000 X <- rnorm(N, mean=10, sd=2) Y <- X + rnorm(N, mean=10, sd=2) Z <- X + Y + rnorm(N, mean=100, sd=10) miic::discretizeMutual(X,Y, plot=FALSE)$info miic::discretizeMutual(X,Y, matrix_u=matrix(Z), plot=FALSE)$info ``` I don't understand what's happening here. I tried to draw a few diagrams, see what paths would be blocked or opened, what would happen with correlation between the noises, but I can't think of a way that adjusting for the collider $Z$ would make $X$ and $Y$ independent. A hypothesis would be some sort of cancelling of effects between the two paths, but I changed the equations in ways that I didn't expect it to happen and still... The $0$ is there. Could you please explain to me what's happening here? Both analytically, if possible, and intuitively. By intuitively, I'm referring to the intuitive explanations for the unshielded triplets. Adjusting for $B$ in $A \rightarrow B \leftarrow C$ makes $A$ and $C$ dependent. Adjusting for $B$ in $A \rightarrow B \rightarrow C$ or $A \leftarrow B \rightarrow C$ makes $A$ and $C$ independent. Something along these lines :-)
Here you can find a formal example of a linear causal model that shares your DAG: [Which OLS assumptions are colliders violating?](https://stats.stackexchange.com/questions/525369/which-ols-assumptions-are-colliders-violating/525435#525435) I consider (there and here) the case where all noises are independent of each other. Moreover, here I consider the particular case where the noises are standard normal and, as in your example, I start by considering all three causal parameters equal to $1$. As shown in my example (in the link), the causal coefficient/effect of $X$ on $Y$ can be consistently estimated from the regression of $Y$ on $X$. No controls are needed; indeed, if we add a collider ($Z$) as a control, the regression coefficient of $X$ is no longer a consistent estimator of the causal coefficient of $X$ on $Y$; worse, it does not represent any causal parameter of the SCM. It is a useless regression coefficient. Actually, under the parametrization just suggested, the useless regression coefficient converges precisely to $0$; from the normality assumption we can say that $X$ and $Y$ are independent conditional on $Z$. Now you said that > > I tried changing the equations for $Y$ and $Z$ and yet, the zero is always > there (almost always, check the end of the question) for $I(X;Y|Z)$, > whenever it's a shielded triplet. Even if I do it with continuous > variables and normal distributions for the noise, I still find the > same thing. ... > But then, if I change a bit the structural equation for $Z$, I get something different from zero. ... > A hypothesis would be some sort of cancelling of effects between the two paths, but I changed the equations in ways that I didn't expect it to happen and still... The 0 is there. > > > This does not seem to me to be completely clear or true. Indeed, from my example, if the causal parameter in the structural equation for $Y$ is different from $1$, the useless regression coefficient becomes different from zero. For example, if the causal parameter in that structural equation is $2$, the claimed conditional independence does not hold; indeed, the useless regression coefficient converges to $0.5$. Moreover, following the alternative parameterization that you suggest ($2$ and $3$ as causal parameters in the structural equation for $Z$, retaining $1$ in the equation for $Y$), the useless regression coefficient converges to $-0.5$. In general, different values of the causal parameters can produce a null, positive or negative value for the useless regression coefficient. Moreover, even the distribution of the structural errors matters. Finally, > > Why does collider adjustment in a shielded triplet tend to cause > independence? > > > The independence you observed can happen in particular cases (parameter combinations) but in general it does not hold. Indeed, the general message is that controlling for a collider is a bad idea.
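A quick numerical check of those limiting values (a minimal sketch, assuming the linear SCM with independent standard-normal noises described above; the helper name is mine):

```
import numpy as np

rng = np.random.default_rng(2021)
N = 200_000

def coef_of_x_controlling_z(b_y, b1, b2):
    """Coefficient of X when Y is regressed on X and the collider Z."""
    X = rng.standard_normal(N)
    Y = b_y * X + rng.standard_normal(N)          # structural equation for Y
    Z = b1 * X + b2 * Y + rng.standard_normal(N)  # structural equation for Z (collider)
    design = np.column_stack([np.ones(N), X, Z])
    beta = np.linalg.lstsq(design, Y, rcond=None)[0]
    return beta[1]

print(coef_of_x_controlling_z(1, 1, 1))  # ~ 0   : the special case with conditional independence
print(coef_of_x_controlling_z(1, 2, 3))  # ~ -0.5: the alternative parameterization (2 and 3 for Z)
print(coef_of_x_controlling_z(2, 1, 1))  # ~ 0.5 : causal parameter 2 in the equation for Y
```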
Under a standard gaussian distribution (mean 0 and variance 1), the kurtosis is $3$. Compared to a heavy tail distribution, is the kurtosis normally larger or smaller?
If you go with a formal definition, such as [one in Wikipedia](https://en.wikipedia.org/wiki/Heavy-tailed_distribution#Relationship_to_fat-tailed_distributions), then the tails must be heavier than exponential distribution. Exponential distribution's excess kurtosis is 6. Student t distribution's excess kurtosis goes from infinite to zero as the degrees of freedom go from 4 to infinity, and Student t converges to normal. Also, some people, myself included, use a much simpler definition: positive excess kurtosis. So, the answer is yes, excess kurtosis will be positive for heavy tailed distributions. I can't say whether it is possible to construct a distribution that would satisfy formal requirements of heavy tailed distribution and has negative excess kurtosis. If it is possible, I bet it would be a purely theoretical construct that nobody uses to model heavy tails anyway.
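If you want to check these figures numerically, scipy exposes the (excess) kurtosis of the standard distributions directly; a small sketch:

```
from scipy import stats

print(stats.norm.stats(moments='k'))      # normal: excess kurtosis 0.0
print(stats.expon.stats(moments='k'))     # exponential: 6.0 (the boundary case in the formal definition)
print(stats.t.stats(df=5, moments='k'))   # Student t with 5 df: 6.0
print(stats.t.stats(df=30, moments='k'))  # Student t with 30 df: ~0.23, approaching the normal
```

Note that scipy reports Fisher (excess) kurtosis, i.e. kurtosis minus 3.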
CPU caches are used by exploiting temporal and spatial locality. My question is: who is responsible for managing these caches? Is it the operating system that identifies a particular access pattern and then manages the cache (i.e. stores the data in it) using low-level OS function calls?
The [CPU cache](http://en.wikipedia.org/wiki/CPU_cache) handles each and every access to memory, that is just too fast to be under software control. It is entirely built into the hardware, either on the CPU chip itself or on the motherboard.
I am working with a time series of **anomaly scores** (the background is anomaly detection in computer networks). Every minute, I get an anomaly score $x\_t \in [0, 5]$ which tells me how "unexpected" or abnormal the current state of the network is. The higher the score, the more abnormal the current state. Scores close to 5 are theoretically possible but occur almost never. Now I want to come up with an algorithm or a formula which automatically determines a **threshold** for this anomaly time series. As soon as an anomaly score exceeds this threshold, an alarm is triggered. The frequency distribution below is an example for an anomaly time series over 1 day. However, it is **not** safe to assume that every anomaly time series is going to look like that. In this special example, an anomaly threshold such as the .99-quantile would make sense since the few scores on the very right can be regarded as anomalies. ![enter image description here](https://i.stack.imgur.com/Ieo7b.jpg) And the same frequency distribution as time series (it only ranges from 0 to 1 since there are no higher anomaly scores in the time series): ![time series](https://i.stack.imgur.com/0ZPoe.jpg) Unfortunately, the frequency distribution might have shapes, where the .99-quantile is **not useful**. An example is below. The right tail is very low, so if the .99-quantile is used as threshold, this might result in many false positives. This frequency distribution **does not seem to contain anomalies** so the threshold should lie outside the distribution at around 0.25. ![enter image description here](https://i.stack.imgur.com/gZw9a.jpg) Summing up, the difference between these two examples is that the first one seems to exhibit anomalies whereas the second one does not. From my naive point of view, the algorithm should consider these two cases: * If the frequency distribution has a large right tail (i.e. a couple abnormal scores), then the .99-quantile can be a good threshold. * If the frequency distribution has a very short right tail (i.e. no abnormal scores), then the threshold should lie outside the distribution. /edit: There is also no ground truth, i.e. labeled data sets available. So the algorithm is "blind" against the nature of the anomaly scores. Now I am not sure how these observations can be expressed in terms of an algorithm or a formula. Does anyone have a suggestion how this problem could be solved? I hope that my explanations are sufficient since my statistical background is very limited. Thanks for your help!
Do you have any 'labeled' examples of what constitutes an anomaly? i.e. values associated with a network failure, or something like that? One idea you might consider applying is a ROC curve, which is useful for picking thresholds that meet a specific criterion, like maximizing true positives or minimizing false negatives. Of course, to use a ROC curve, you need to label your data in some way.
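If you do manage to label even a modest number of past incidents, the threshold choice becomes mechanical. Here is a small sketch (with made-up synthetic scores standing in for your anomaly scores) that picks the threshold maximizing Youden's J = TPR - FPR:

```
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# synthetic stand-ins: low scores for normal minutes, higher scores for labeled incidents
scores = np.concatenate([rng.beta(2, 20, 970), rng.beta(8, 4, 30)])
labels = np.concatenate([np.zeros(970), np.ones(30)])

fpr, tpr, thresholds = roc_curve(labels, scores)
best = thresholds[np.argmax(tpr - fpr)]   # threshold with the best TPR/FPR trade-off
print("suggested alarm threshold:", best)
```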
A program is an encoded Turing Machine. And a size optimizer of a program is a TM $M\_1$ such that: > > On any input $M$, $M\_1$ outputs $M\_{min}$ such that $M\_{min}$ is the shortest TM which is equivalent to $M$. > > > If size-optimization is not computable, the above $M\_1$ shouldn't exist. 1. How do we prove this? 2. Does this also mean a size-optimality decider can't exist? 3. How does this generalize to other kinds of optimization (speed-optimization, for example)?
In your example, you consider a size optimizer, that is a program that > > takes a program $P$, and returns an equivalent program $P'$ such that there is no other program $P''$ also equivalent to $P$ but strictly smaller than $P'$. > > > With such a perfect size optimizer, you could build a decision procedure for this question > > Is an input program $P$ equivalent to any program of at most $33$ characters? > > > But this is a non trivial property of program semantics. [Rice Theorem](http://en.wikipedia.org/wiki/Rice_theorem) says that such properties are undecidable.
What I want to do is turn a math problem I have into a Boolean satisfiability problem (SAT) and then solve it using a SAT solver. I wonder if someone knows a manual, guide or anything else that will help me convert my problem to a SAT instance. Also, I want to solve this in better than exponential time. I hope a SAT solver will help me.
[Chapter 2 of the SAT Handbook (by Steven Prestwich)](http://www.google.com/search?q=chapter%202%20cnf%20encodings%20handbook) covers how to turn discrete decision problems into CNF, in some depth. (Unfortunately, I don't think there is a draft version online -- probably best to consult your local library.) Several of the other references cited in Magnus Björk's quirky overview [*Successful SAT Encoding Techniques*](https://satassociation.org/jsat/index.php/jsat/article/download/153/118) are also useful. If your problems are continuous, or you are especially interested in systems of inequalities, then other kinds of solvers are more likely to be useful. As Kyle points out, SMT solvers (such as [Z3](http://z3.codeplex.com/), [Yices](http://yices.csl.sri.com/), or [OpenSMT](http://code.google.com/p/opensmt/)) might be useful, although traditionally SMT theories tend to be focused on verification of computer software, so SMT solvers usually have great support for things like expressions involving intervals of integers, but may perform poorly on injectivity constraints. For problems that are naturally expressed as systems of inequalities, [CPLEX](http://en.wikipedia.org/wiki/CPLEX) is the one to beat (it used to be available for academic use for free, and it might still be). For some combinatorial decision problems (like finding [packings of rectangles into a square](http://4c.ucc.ie/~hsimonis/almostsquarepacking.pdf)), constraint solvers such as [Minion](http://minion.sourceforge.net/) outperform SAT solvers and are often easier to use.
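As a toy illustration of what such an encoding looks like in practice (my own example, not from the references above), here is the constraint "exactly one of $x\_1, x\_2, x\_3$ is true" written as CNF clauses and printed in the DIMACS format that virtually all SAT solvers accept:

```
from itertools import combinations

variables = [1, 2, 3]
clauses = [variables[:]]                                      # at least one variable is true
clauses += [[-a, -b] for a, b in combinations(variables, 2)]  # no two variables are both true

# DIMACS CNF: a header line, then one clause per line terminated by 0
print(f"p cnf {len(variables)} {len(clauses)}")
for clause in clauses:
    print(" ".join(map(str, clause)), 0)
```

Real encodings (cardinality constraints, arithmetic, and so on) are built from many such blocks, which is exactly what the Prestwich chapter surveys.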
I use Cholesky decomposition to simulate correlated random variables given a correlation matrix. The thing is, the result never reproduces the correlation structure as it is given. Here is a small example in Python to illustrate the situation. ``` import numpy as np n_obs = 10000 means = [1, 2, 3] sds = [1, 2, 3] # standard deviations # generating random independent variables observations = np.vstack([np.random.normal(loc=mean, scale=sd, size=n_obs) for mean, sd in zip(means, sds)]) # observations, a row per variable cor_matrix = np.array([[1.0, 0.6, 0.9], [0.6, 1.0, 0.5], [0.9, 0.5, 1.0]]) L = np.linalg.cholesky(cor_matrix) print(np.corrcoef(L.dot(observations))) ``` This prints: ``` [[ 1. 0.34450587 0.57515737] [ 0.34450587 1. 0.1488504 ] [ 0.57515737 0.1488504 1. ]] ``` As you can see, the post-hoc estimated correlation matrix drastically differs from the prior one. Is there a bug in my code, or is there some alternative to using the Cholesky decomposition? **Edit** I beg your pardon for this mess. I didn't think there was an error in the code and/or in the way Cholesky decomposition was applied due to some misunderstanding of the material I had studied before. In fact I was sure that the method itself was not meant to be precise and I had been okay with that up until the situation that made me post this question. Thank you for pointing at the misconception I had. I've edited the title to better reflect the real situation as proposed by @Silverfish.
There's nothing wrong with the Cholesky factorization. There is an error in your code. See edit below. Here is MATLAB code and results, first for n\_obs = 10000 as you have, then for n\_obs = 1e8. For simplicity, since it doesn't affect the results, I don't bother with means, i.e., I make them zeros. Note that MATLAB's chol produces an upper triangular Cholesky factor R of the matrix M such that R' \* R = M. numpy.linalg.cholesky produces a lower triangular Cholesky factor, so an adjustment vs. my code is needed; but I believe your code is fine in that respect. ``` >> correlation_matrix = [1.0, 0.6, 0.9; 0.6, 1.0, 0.5;0.9, 0.5, 1.0]; >> SD = diag([1 2 3]); >> covariance_matrix = SD*correlation_matrix*SD covariance_matrix = 1.000000000000000 1.200000000000000 2.700000000000000 1.200000000000000 4.000000000000000 3.000000000000000 2.700000000000000 3.000000000000000 9.000000000000000 >> n_obs = 10000; >> Random_sample = randn(n_obs,3)*chol(covariance_matrix); >> disp(corr(Random_sample)) 1.000000000000000 0.599105015695768 0.898395949647890 0.599105015695768 1.000000000000000 0.495147514173305 0.898395949647890 0.495147514173305 1.000000000000000 >> n_obs = 1e8; >> Random_sample = randn(n_obs,3)*chol(covariance_matrix); >> disp(corr(Random_sample)) 1.000000000000000 0.600101477583914 0.899986072541418 0.600101477583914 1.000000000000000 0.500112824962378 0.899986072541418 0.500112824962378 1.000000000000000 ``` Edit: I found your mistake. You incorrectly applied the standard deviation. This is the equivalent of what you did, which is wrong. ``` >> n_obs = 10000; >> Random_sample = randn(n_obs,3)*SD*chol(correlation_matrix); >> disp(corr(Random_sample)) 1.000000000000000 0.336292731308138 0.562331469857830 0.336292731308138 1.000000000000000 0.131270077244625 0.562331469857830 0.131270077244625 1.000000000000000 >> n_obs=1e8; >> Random_sample = randn(n_obs,3)*SD*chol(correlation_matrix); >> disp(corr(Random_sample)) 1.000000000000000 0.351254525742470 0.568291702131030 0.351254525742470 1.000000000000000 0.140443281045496 0.568291702131030 0.140443281045496 1.000000000000000 ```
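For reference, here is what the fix looks like in the Python/numpy setting of the question: factor the *covariance* matrix and apply its Cholesky factor to *standard* normal draws, scaling and shifting only through that factor and the means. This is just a sketch mirroring the MATLAB code above:

```
import numpy as np

rng = np.random.default_rng(0)
n_obs = 1_000_000
means = np.array([1.0, 2.0, 3.0])
sds = np.array([1.0, 2.0, 3.0])
cor = np.array([[1.0, 0.6, 0.9],
                [0.6, 1.0, 0.5],
                [0.9, 0.5, 1.0]])

cov = np.diag(sds) @ cor @ np.diag(sds)   # covariance matrix
L = np.linalg.cholesky(cov)               # lower triangular, L @ L.T == cov
Z = rng.standard_normal((3, n_obs))       # iid N(0, 1) draws, one row per variable
samples = L @ Z + means[:, None]          # correlated draws with the desired means

print(np.corrcoef(samples))               # close to the target correlation matrix
```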
I'm aware that there must be something wrong with my reasoning, but I'm not sure what and neither are a few other CS people I've asked. So here goes: Take the following problem for example: Let $x[n]$ be a real valued discrete signal of length $N$. It is known that the signal is of the form $$x[n] = sin(an) + sin(bn) + sin(cn)$$ where $a,b,c$ are some real valued constants. We want to find $a,b,c$ , the 3 frequencies composing $x[n]$. Now lets assume a parallel universe which somehow doesn't know of the Fourier transform. The only way they could propose to solve the problem (that I can think of) would be some form of grid search over $a,b,c$ which would seem to be exponentially hard. Given the Fourier transform, one could solve the problem in poly-time. So my question is: could some problems we think are NP-hard just seem that way because we haven't yet found the "magic" transform to change them into a basis which bring the problem to be solvable in poly-time? I know that I haven't given the full formal formulation of the problem, and probably the catch is that even without Fourier the problem isn't NP hard.. but I'm still curious about how to refute the claim that there *could* exist a basis transform which would make an NP-hard problem solvable in poly time.
> > The only way they could propose to solve the problem (that I can think of) ... would seem to be exponentially hard. > > > That's your core fallacy, and "that I can think of" is the answer. The complexity of the problem did not change, but the transformation made finding an efficient algorithm easier (as a tool). Yes, it's certainly possible that we have yet to find a similar tool that, once we have it, allows even our feeble minds to easily find efficient algorithms for NP-hard problems. That would be a constructive and generalisable proof of P=NP, which is something like a holy grail.
This question is a follow-up to [this question](https://stackoverflow.com/questions/39263002/calling-fit-multiple-times-in-keras). My input data consists of tens of **.csv files**; I have already read the csv input data into the following format: ``` # train_x is data, train_y is label print(train_x.shape) # (2000000,10,100) 3D array print(train_y.shape) # (2000000,) labels ``` I can already fit & evaluate them using: ``` model.fit(train_x, train_y, batch_size=32, epochs=10) model.evaluate(train_x, train_y) ``` It works well if the dataset is **LESS than RAM size**. But if the dataset is too BIG, then the "**large dataset does not fit in memory**". Most online suggestions are to use [**fit\_generator( )**](https://keras.io/models/sequential/#fit_generator) instead of **fit( )** [(also suggested on the keras website)](https://keras.io/getting-started/faq/#how-can-i-use-keras-with-datasets-that-dont-fit-in-memory). ``` fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0) ``` How do I write a **generator** function (the 1st parameter of fit\_generator)? * I only know that the generator function aims to feed data batch by batch. > > As the name suggests, the .fit\_generator function assumes there is an underlying function that is generating the data for it. > > > * What should be included in this generator function? What should be returned? Any **related** example? *Remark: I have read several online examples (e.g., [this](https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly) and [this](https://www.pyimagesearch.com/2018/12/24/how-to-use-keras-fit-and-fit_generator-a-hands-on-tutorial/)). They use images as examples, which is not my case (csv data only), and they are not easy to understand.*
The problem you are describing is known as *wake word* detection or *trigger word* detection. I'm sure you could use a CNN to classify a chunked [Mel](https://en.wikipedia.org/wiki/Mel_scale)-spectrogram of your audio (see also [librosa](https://librosa.github.io/librosa/generated/librosa.feature.melspectrogram.html)). As training labels you would simply use `0` for timestamps with no wake word (no "pizza") and `1` for timestamps with the wake word. Instead of classifying all timestamps of one chunk, you could also train on just the center frame of each spectrogram chunk (which makes things easier). In any case, you will have to make sure that your dataset is at least mildly balanced, i.e., you'll have to have enough wake word and non-wake word instances. One way to achieve this is to *overlay* recordings of background noise with recordings of wake and non-wake words. There are some tutorials that detail how to do this, e.g. [this YouTube video](https://www.youtube.com/watch?v=Zqx_hbTmN6A), [this article](https://www.dlology.com/blog/how-to-do-real-time-trigger-word-detection-with-keras/) or [this GitHub repo](https://github.com/MycroftAI/mycroft-precise). Note that all these approaches use [RNN](https://en.wikipedia.org/wiki/Recurrent_neural_network)s for the task. However, it has been argued by [Bai et al.](https://arxiv.org/pdf/1803.01271.pdf) that a temporal convolutional network (TCN) architecture (in essence a [CNN](https://en.wikipedia.org/wiki/Convolutional_neural_network) with [skip connections](https://en.wikipedia.org/wiki/Residual_neural_network) and [dilation](https://towardsdatascience.com/understanding-2d-dilated-convolution-operation-with-examples-in-numpy-and-tensorflow-with-d376b3972b25)) may work equally well or better for tasks such as the one you describe and is probably easier to train. Hopefully this answer will give you some points to start from.
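To make the first part concrete, here is a minimal, heavily simplified sketch (my own; the shapes, hyperparameters and the 1-second chunking are arbitrary assumptions) of turning audio chunks into log-mel "images" with librosa and classifying them with a tiny Keras CNN as wake word vs. not:

```
import librosa
import numpy as np
import tensorflow as tf

def to_log_mel(path, sr=16000, n_mels=40):
    """Load ~1 s of audio and convert it to a log-mel spectrogram 'image'."""
    y, _ = librosa.load(path, sr=sr, duration=1.0)
    y = np.pad(y, (0, max(0, sr - len(y))))                   # pad short clips to 1 s
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(S)[..., np.newaxis]             # shape (n_mels, frames, 1)

# Tiny CNN: output 1 = chunk contains the wake word, 0 = it does not
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(40, 32, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X, y, ...) with X of shape (n_chunks, 40, 32, 1) and binary labels y
```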
I understand that Dynamic Time Warping is an algorithm to find a matching between two signals with different lengths and speeds. But is there a way to find the speed difference between the two signals being compared?

To clarify my question: I am working on a project to find the similarity between two motions. The data being compared is in the form of quaternions. Using DTW I was able to find the similarity/dissimilarity between these motion signals; however, I would like to know whether there is a way to find out if one motion signal is slower or faster than the other. For example: both motions are actually similar, but one of the motions is slower than the other.
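One idea I have been toying with (not sure if it is sound): the DTW warping path itself should encode the speed relation, since a path that climbs about two B-indices per A-index means B spends roughly twice as many frames on the same motion. A rough sketch of what I mean, where the warping path is assumed to be a list of index pairs from whatever DTW implementation is used:

```python
import numpy as np

def speed_ratio(path):
    """Estimate how 'stretched' signal B is relative to signal A,
    given a DTW warping path as a list of (i_A, i_B) index pairs."""
    i_a, i_b = np.array(path).T
    slope = np.polyfit(i_a, i_b, 1)[0]   # least-squares slope of the path
    # slope > 1: B takes more frames for the same motion (B is slower)
    # slope < 1: B takes fewer frames (B is faster)
    return slope

# toy example: B takes twice as many frames as A for the same motion
toy_path = [(i, 2 * i) for i in range(50)]
print(speed_ratio(toy_path))  # ~2.0
```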
Let us first go through the proof, and then review the further considerations you raise (i.e., whether the graph should be complete or not).

**Lemma** Given an undirected graph $G=(V,E)$, $|V|\geq 3$, and a cost function $c$ defined over $E$, if the triangle inequality is satisfied, then $c(u,v)\geq 0, \forall \langle u,v\rangle \in E$.

Note that the lemma does not require the graph to be complete ---a graph is *complete* if there is an edge for every pair of vertices, i.e., $\langle u, v\rangle\in E, \forall u, v\in V$.

To prove the statement above, take any three vertices $u, v, w\in V$ which are related to each other, i.e., that form a triangle. Let $\alpha=c(u,v), \beta=c(v,w), \gamma=c(u,w)$. Let us further assume, without loss of generality, that these edge costs are sorted in increasing order of cost: $\alpha\leq\beta\leq\gamma$ ---and this order necessarily exists, as discrete optimization problems are usually defined over *total orders*.

Now, let us prove the lemma by contradiction. There are three different cases (all related to each other; once you understand the first, the others are essentially analogous, but I write them down here for your convenience).

**Case 1** (*only one negative edge cost*)

Assume that $\alpha<0$. Because the triangle inequality is assumed by hypothesis: $\alpha+\beta\geq\gamma$. To make the signs apparent, take absolute values: $|\beta|-|\alpha|\geq |\gamma|$, where I used the fact that $\beta, \gamma\geq 0$. Obviously, $|\gamma|\geq |\gamma| - |\alpha|$, and this is true for all plausible values of $\gamma$ (also negative ones). Thus: $ |\beta|-|\alpha|\geq |\gamma|\geq |\gamma| - |\alpha| $, leading to $\beta\geq\gamma$ which, combined with our ordering assumption ($\beta\leq\gamma$), leaves only the frontier case $\beta=\gamma$. In this case, take the other relationship that results from the triangle inequality: $\alpha+\gamma\geq\beta$. Since $\beta=\gamma$, the preceding relationship simplifies to $\alpha\geq 0$ which contradicts our assumption, $\alpha<0$. Hence, $\alpha\geq 0$.

It is straightforward to prove that the same reasoning applies in case $\beta<0$ or, equivalently, $\gamma<0$. Note, however, that from the order we assumed over these costs, $\beta<0$ implies that there are two edge costs which are negative, and $\gamma<0$ implies that all of them are negative. These cases are examined next.

**Case 2** (*two negative edge costs*)

Still, there is a caveat in the preceding reasoning. When taking absolute values to make the signs apparent, I used $\beta, \gamma\geq 0$. You might think that the lemma could fail in case two edge costs are negative. Let us assume that both $\alpha, \beta<0$. Since the triangle inequality is assumed, $\alpha+\beta\geq\gamma$ and again, after taking absolute values, the following results: $-|\alpha|-|\beta|\geq|\gamma|$, which is clearly impossible, as the left-hand side is a negative number and the right-hand side is non-negative. This completes the proof for the second case.

If you are thinking of trying the same proof for $\beta, \gamma<0$ or $\alpha,\gamma<0$, note that all edge costs would then necessarily be negative, as they are sorted in increasing order of cost. This observation leads us to the third case.

**Case 3** (*all edge costs are negative*)

To conclude, what if all edge costs are negative, $\alpha, \beta, \gamma<0$?
Well, then it suffices to combine two expressions resulting from the triangle inequality: $\alpha+\beta\geq\gamma$ which, since all of these are negative numbers, reads $-|\alpha|-|\beta|\geq -|\gamma|$; but also $\alpha+\gamma\geq\beta$ which, again because all of these are negative numbers, reads $-|\alpha|-|\gamma|\geq-|\beta|$. Now, the observation is that the first inequality must still hold if $-|\beta|$ is replaced by anything larger, and the second inequality tells us that $-|\alpha|-|\gamma|\geq -|\beta|$. Substituting it into the first inequality gives $-|\alpha|+(-|\alpha|-|\gamma|)\geq -|\gamma|$, which leads to $-2|\alpha|\geq 0$. This is only possible if $\alpha=0$, but that contradicts our assumption that $\alpha<0$, and hence the third case is proven by contradiction as well.

*Note to the third case* - in case you find it hard to follow how the last inequality holds, transform all $\geq$ into $\leq$ by multiplying all inequalities by $-1$. Working only with positive numbers may make the reasoning easier to follow.

**Observations**

Nowhere in the preceding proof did I assume the graph to be complete and, indeed, there might well be a pair of vertices $u,v\in V$ which are not directly related to each other through an edge. What we actually proved is that, given that the triangle inequality is verified, every edge cost in a triangle must be non-negative. This does not prevent some edge costs from being negative. It suffices that a vertex incident to an edge with a negative cost is not connected to any other vertex, because if the degree of a vertex equals 1 then the triangle inequality cannot be applied to it. Equivalently, if the degree of each vertex is larger than or equal to two, the triangle inequality can always be checked and, in case it holds, all edge costs are necessarily non-negative.

Hope this helps,
On page 12 of [Bates' book on mixed effect models](http://lme4.r-forge.r-project.org/lMMwR/lrgprt.pdf), he describes the model as follows:

![Bates' mixed effect model](https://i.stack.imgur.com/Cry9S.png)

Near the end of the screenshot, he mentions the

> *relative covariance factor* $\Lambda\_{\theta}$, depending on the *variance-component parameter*, $\theta$

without explaining what exactly the relationship is. Say we are given $\theta$; how would we derive $\Lambda\_{\theta}$ from it?

On a related note, this is one of many instances in which I find Bates' exposition to be a bit lacking in detail. Is there a better text that actually goes through the optimization process of parameter estimation and the proof of the distribution of the test statistic?
The variance-component parameter vector $\theta$ is estimated iteratively to minimise the model deviance $\widetilde{d}$ according to eq. 1.10 (p. 14).

The relative covariance factor, $\Lambda\_\theta$, is a $q \times q$ matrix (dimensions are explained in the excerpt you posted). For a model with a simple scalar random-effects term (p. 15, Fig. 1.3), it is a multiple of $\theta$ and the identity matrix of dimensions $q \times q$:

$$\Lambda\_\theta = \theta \times {I\_q}$$

[![fm01ML](https://i.stack.imgur.com/JZA23.png)](https://i.stack.imgur.com/JZA23.png)

This is the simplest case; the structure of $\Lambda\_\theta$ changes with the number of random-effects terms and their covariance structure. For a model with two uncorrelated random-effects terms in a crossed design, as on pp. 32-34, it is block diagonal with two blocks, each of which is a multiple of $\theta$ and the identity (p. 34, Fig. 2.4):

[![fm03](https://i.stack.imgur.com/pZJcp.png)](https://i.stack.imgur.com/pZJcp.png)

The same holds with two nested random-effects terms (p. 43, Fig. 2.10, not shown here).

For a longitudinal (repeated-measures) model with a random intercept and a random slope which are allowed to correlate, $\Lambda\_\theta$ consists of triangular blocks representing both random effects and their correlation (p. 62, Fig. 3.2):

[![enter image description here](https://i.stack.imgur.com/pk0Bg.png)](https://i.stack.imgur.com/pk0Bg.png)

Modelling the same dataset with two uncorrelated random-effects terms (p. 65, Fig. 3.3) returns a $\Lambda\_\theta$ of the same structure as shown previously, in Fig. 2.4:

[![enter image description here](https://i.stack.imgur.com/kBlIk.png)](https://i.stack.imgur.com/kBlIk.png)

---

Additional notes:

$\theta\_i = \frac{\sigma\_i}{\sigma}$

where $\sigma\_i$ refers to the square root of the i-th random-effect variance, and $\sigma$ refers to the square root of the residual variance (compare with pp. 32-34).

The book version from June 25, 2010 refers to a version of `lme4` which has since been modified. One of the consequences is that in the current version 1.1-10 the random-effects model object class `merMod` has a different structure, and $\Lambda\_\theta$ is accessed in a different way, using the method `getME`:

```
image(getME(fm01ML, "Lambda"))
```
There are well-known techniques for proving lower bounds on the communication complexity of boolean functions, like fooling sets, the rank of the communication matrix, and discrepancy.

1) How do we use these techniques for lower-bounding partial boolean functions? More specifically, how do you count the rectangles in the communication matrix? Do you also count the "undefined part" of the function? Or do you leave it out?

Another thing: what are the known relations between the communication complexity of total and partial functions? For example, is there a function $f$ with a promise version of it, call it $f'$, such that $C(f)\neq C(f')$? (Here $C$ could be any communication complexity measure, like deterministic, probabilistic with all its flavors, quantum.)

A concrete example is the equality function EQ and its promise version defined in this [paper](http://arxiv.org/abs/quant-ph/9802040), denoted EQ'. It is known that $C(EQ)=n$ and $C(EQ')=\Omega(n)$, where $C$ is the bounded-error communication complexity (see the paper). There is no matching upper bound here, but they are asymptotically the same.

2) Is there a function defined in the same spirit as EQ and EQ', but with a different complexity?
(1) The way I like to think about partial functions is by defining a *total* function with three outputs, e.g., $f: \{0,1\}^n \times \{0,1\}^n \rightarrow \{0,1, \*\}$. The $\*$ values are where your *partial* function is undefined.

(2) You can still define a "monochromatic" rectangle in this case. But here, you allow $\*$'s to be in the rectangle. A rectangle $R$ is monochromatic if $f(R) \subseteq \{0,\*\}$ or $f(R) \subseteq \{1,\*\}$. From this, you can still use many of the standard lower bound techniques such as fooling sets.

(3) As Marc mentions, you can always define trivial partial functions where the communication complexity is much less than the original. For example, say the partial function TEQ is the EQ function restricted to $(x,y)$ pairs such that $x \neq y$.

A partial function that people might care about is the Gap-Hamming-Distance function. $GHD\_{n,g}$ takes two $n$-bit strings $x,y$ and returns $1$ if their Hamming distance is more than $n/2 + g$ and returns $0$ if their Hamming distance is less than $n/2 - g$. (The Hamming distance $\Delta(x,y)$ is the number of $i$ where $x\_i \neq y\_i$.)

People are particularly interested in the randomized communication complexity of $GHD$. It's not hard to show that the gapless version (Alice/Bob want to tell if $\Delta(x,y)$ is greater than $n/2$ or not) requires $\Omega(n)$ bits of communication. It turns out that when $g = O(\sqrt{n})$, you still need a linear amount of communication. However, when $g = \omega(\sqrt{n})$, you can get away with only $O((n/g)^2)$ bits. When $g = \Omega(n)$, you get a partial function with $O(1)$ communication complexity. The $g = \Theta(\sqrt{n})$ case seems to be the "important case".

Indyk and Woodruff introduced this problem, gave a lower bound for one-way randomized protocols, and used it to get lower bounds for streaming algorithms that estimate frequency moments. The state-of-the-art lower bound is $\Omega(n)$ bits for any randomized protocol and is due to Chakrabarti and Regev.

Piotr Indyk and David Woodruff. Tight Lower Bounds for the Distinct Elements Problem. [FOCS 2003](http://www.computer.org/portal/web/csdl/doi/10.1109/SFCS.2003.1238202).

Amit Chakrabarti and Oded Regev. An Optimal Lower Bound on the Communication Complexity of Gap-Hamming-Distance. <http://arxiv.org/pdf/1009.3460>.
I am reading a computer theory book and came across the following example converting a NFA to a DFA: [![enter image description here](https://i.stack.imgur.com/svY7I.png)](https://i.stack.imgur.com/svY7I.png) My question is, would you not need an edge labeled $b$ going from the $x\_4$ node on the right to the null collection state?
The standard definition of DFAs involves a transition function, that is total. That means that, from every state, there should be an arrow for each symbol. So, yes, there is a missing $b$ arrow from $q\_4$ to $\emptyset$ and also, missing $a$ and $b$ arrows from $\emptyset$ to itself. While we're at it, the diagram also has no indicated start state. Note, though, that some authors use the convention that, if there's no transition given for some symbol from some state then, when the automaton reads that symbol in that state, it goes to a "dead" state. The dead state is non-accepting and the automaton cannot leave it (reading any character keeps it in that state). The author of your book might be using that convention inconsistently: the dead state is $\emptyset$ but they've only shown some of the transitions that lead to it. I think that's bad style: in my opinion, it's clearest to either show all the transitions to the dead state, or none of them.
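To make the "dead state" convention concrete, here is a small sketch (in Python, with a made-up transition table standing in for the diagram) of how a partial DFA can be completed by routing every missing transition to a non-accepting dead state that loops to itself:

```python
def complete_dfa(states, alphabet, delta, dead="DEAD"):
    """Return a total transition function: any missing (state, symbol) pair
    goes to a dead state, and the dead state loops to itself on every symbol."""
    total = dict(delta)
    for q in list(states) + [dead]:
        for a in alphabet:
            total.setdefault((q, a), dead)
    return total

# hypothetical fragment of the diagram: q4 has an 'a' transition but no 'b'
delta = {("q4", "a"): "q1"}
total = complete_dfa({"q1", "q4"}, {"a", "b"}, delta)
print(total[("q4", "b")])     # DEAD (the missing transition)
print(total[("DEAD", "a")])   # DEAD (the dead state cannot be left)
```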
(First of all, just to confirm, an offset variable functions basically the same way in Poisson and negative binomial regression, right?) Reading about the use of an offset variable, it seems to me that most sources recommend including that variable as an option in statistical packages (exp() in Stata or offset() in R). Is that functionally the same as converting your outcome variable to a proportion if you're modeling count data and there is a finite number that the count could have happened? My example is looking at employee dismissal, and I believe the offset here would simply be log(number of employees). And as an added question, I am having trouble conceptualizing what the difference is between these first two options (including exposure as an option in the software and converting the DV to a proportion) and including the exposure on the RHS as a control. Any help here would be appreciated.
Recall that an offset is just a predictor variable whose coefficient is fixed at 1. So, using the standard setup for a Poisson regression with a log link, we have: $$\log \mathrm{E}(Y) = \beta' \mathrm{X} + \log \mathcal{E}$$ where $\mathcal{E}$ is the offset/exposure variable. This can be rewritten as $$\log \mathrm{E}(Y) - \log \mathcal{E} = \beta' \mathrm{X}$$ $$\log \mathrm{E}(Y/\mathcal{E}) = \beta' \mathrm{X}$$ Your underlying random variable is still $Y$, but by dividing by $\mathcal{E}$ we've converted the LHS of the model equation to be a *rate* of events per unit exposure. But this division also alters the variance of the response, so we have to weight by $\mathcal{E}$ when fitting the model. Example in R: ``` library(MASS) # for Insurance dataset # modelling the claim rate, with exposure as a weight # use quasipoisson family to stop glm complaining about nonintegral response glm(Claims/Holders ~ District + Group + Age, family=quasipoisson, data=Insurance, weights=Holders) Call: glm(formula = Claims/Holders ~ District + Group + Age, family = quasipoisson, data = Insurance, weights = Holders) Coefficients: (Intercept) District2 District3 District4 Group.L Group.Q Group.C Age.L Age.Q Age.C -1.810508 0.025868 0.038524 0.234205 0.429708 0.004632 -0.029294 -0.394432 -0.000355 -0.016737 Degrees of Freedom: 63 Total (i.e. Null); 54 Residual Null Deviance: 236.3 Residual Deviance: 51.42 AIC: NA # with log-exposure as offset glm(Claims ~ District + Group + Age + offset(log(Holders)), family=poisson, data=Insurance) Call: glm(formula = Claims ~ District + Group + Age + offset(log(Holders)), family = poisson, data = Insurance) Coefficients: (Intercept) District2 District3 District4 Group.L Group.Q Group.C Age.L Age.Q Age.C -1.810508 0.025868 0.038524 0.234205 0.429708 0.004632 -0.029294 -0.394432 -0.000355 -0.016737 Degrees of Freedom: 63 Total (i.e. Null); 54 Residual Null Deviance: 236.3 Residual Deviance: 51.42 AIC: 388.7 ```
I am training 3 different models, with varying parameters like the learning rate and regularization strength, but the number of epochs is the same for all. For this fixed number of epochs, model 2 has the highest validation loss value, yet its accuracy on the test set is the highest. How can this be possible? Or do I have a bug?
The accuracy of the learned model on the validation set is just a **proxy** for its accuracy in the real world. It is possible that the accuracy of a learned model in the real world (for example on the test set) is different from its estimated accuracy during training. If the difference between the accuracy of the model on the validation set and on the test set is significant, it means that your validation set and test set come from two different distributions. It is common advice for machine learning practitioners to try to choose validation and test sets that are drawn from the same distribution. On the other hand, there is a topic in machine learning called [domain adaptation](https://en.wikipedia.org/wiki/Domain_adaptation) which tries to develop learning methods that perform well on a different target distribution.
I want to test whether men or women are higher risk takers. In the data, I have a Likert scale for risk (strongly agree to take risk, agree to take risk, uncertain, disagree to take risk, and strongly disagree to take risk), as well as a gender variable. I want to know whether men or women are the higher risk takers. Which test should I use?
You might want to see the answers to a recent Cross Validated questions here: [CV Question](https://stats.stackexchange.com/questions/320938/compare-samples-with-grouped-data-can-t-test-be-used). A couple of potential solutions would be the Cochran-Armitage test or to use ordinal regression. I'll use the sample data from @BruceET, and present these in R. ``` Input =( "RiskTaking 1 2 3 4 5 Sex Men 3 9 12 14 12 Women 7 23 12 2 6 ") Tabla = as.table(read.ftable(textConnection(Input))) Tabla ``` As @BruceET mentioned, a permutation test could be used. In R, the `coin` package has an implementation for a table with one ordinal variable and one categorical variable. This is a different test than those described by @BruceET, and I think is supposed to be used like the Cochran-Armitage test. Note that the ordinal categories (`RiskTaking`) are specified as being equidistant with the `scores` option. ``` ### Adapted from: ### http://rcompanion.org/handbook/H_09.html library(coin) Test = chisq_test(Tabla, scores = list("RiskTaking" = c(-2, -1, 0, 1, 2))) Test #### Asymptotic Linear-by-Linear Association Test #### #### data: RiskTaking (ordered) by Sex (Men, Women) #### Z = 3.6366, p-value = 0.0002762 #### alternative hypothesis: two.sided ``` As @Gijs mentioned, a more *EDIT: flexible* approach is to use ordinal regression. In R, the `ordinal` package is a great tool for this. I'll recreate the data to keep things simple. Note that the dependent variable must be specified as an ordered factor variable. ``` Sex = c(rep("Men", 5), rep("Women", 5)) RiskTaking = rep(1:5,2) Count = c(3,9,12,14,12,7,23,12,2,6) Data = data.frame(Sex, RiskTaking, Count) Data$RiskTaking = factor(Data$RiskTaking, ordered=TRUE) str(Data) ``` The following conducts ordinal regression with the `ordinal` package. Here I'll use the anova function between two models. The `summary` function could be used instead. ``` ### Adapted from: ### http://rcompanion.org/handbook/G_02.html if(!require(ordinal)){install.packages("ordinal")} library(ordinal) model = clm(RiskTaking ~ Sex, data = Data, weight = Data$Count) model.null = clm(RiskTaking ~ 1, data = Data, weight = Data$Count) anova(model, model.null) #### Likelihood ratio tests of cumulative link models: #### #### formula: link: threshold: #### model.null RiskTaking ~ 1 logit flexible #### model RiskTaking ~ Sex logit flexible #### #### no.par AIC logLik LR.stat df Pr(>Chisq) #### model.null 4 315.85 -153.93 #### model 5 302.93 -146.46 14.924 1 0.0001119 *** ```
I've been reading the Wikipedia articles about various statistical tests, and trying to code up programs that perform them. However, [the article on the Kolmogorov-Smirnov test](http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) is rather unclear to me. Can somebody explain, in simple language, how I would write a program that takes a list of real numbers and a CDF, and computes a p-value for how closely they match?
[Here](https://stats.stackexchange.com/questions/27958/testing-data-against-a-known-distribution/27965#27965) is an answer to a related question that as part of the answer gives an example of computing the statistic and a graph demonstrating the general idea.
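In case a bare-bones illustration helps (this is not from the linked answer; it assumes NumPy/SciPy and uses a standard-normal null just for the example): the statistic is the largest vertical gap between the empirical CDF of your data and the hypothesized CDF, and an approximate p-value comes from the Kolmogorov limiting distribution.

```python
import numpy as np
from scipy import stats

def ks_statistic(data, cdf):
    """One-sample Kolmogorov-Smirnov statistic D for a fully specified CDF."""
    x = np.sort(np.asarray(data))
    n = len(x)
    F = cdf(x)                                # hypothesized CDF at the data points
    ecdf_hi = np.arange(1, n + 1) / n         # ECDF just after each data point
    ecdf_lo = np.arange(0, n) / n             # ECDF just before each data point
    return max(np.max(ecdf_hi - F), np.max(F - ecdf_lo))

data = np.random.normal(size=200)
D = ks_statistic(data, stats.norm.cdf)
p_approx = stats.kstwobign.sf(np.sqrt(len(data)) * D)   # asymptotic p-value

print(D, p_approx)
print(stats.kstest(data, "norm"))             # SciPy's built-in version, for comparison
```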
Is there a standard name for the operator that takes a function $f:X\rightarrow(Y\rightarrow Z)$ and returns the function $f':Y\rightarrow(X\rightarrow Z)$ that satisfies, for every $y \in Y$ and $x \in X$, $f'(y)(x) = f(x)(y)$?
The function $$\lambda f.\lambda x.\lambda y.f\;y\;x$$ of type $$\forall X. \forall Y. \forall Z.(X \to Y \to Z) \to Y \to X \to Z$$ is often called **flip**. This is the case in Haskell (see [here](https://hoogle.haskell.org/?hoogle=%28a+-%3E+b+-%3E+c%29+-%3E+b+-%3E+a+-%3E+c&scope=set%3Aincluded-with-ghc)), and in some OCaml libraries as well (see [here](https://stackoverflow.com/questions/16627444/is-there-a-flip-function-in-the-ocaml-standard-library)). According to [wikipedia](https://en.wikipedia.org/wiki/Combinatory_logic#Combinators_B,_C), people call this function (or combinator) $C$ in the context of combinatory logic (that name sounds pretty random though).
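For what it's worth, the same combinator is easy to write down in an untyped language; a curried Python sketch:

```python
def flip(f):
    """flip : (X -> Y -> Z) -> Y -> X -> Z, for curried functions."""
    return lambda y: lambda x: f(x)(y)

# example with a curried "pair" function
pair = lambda x: lambda y: (x, y)
print(flip(pair)("world")("hello"))   # ('hello', 'world')
```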
Oracles do not exist. If one did exist, then you would replace it with a subroutine with computational requirements and you would no longer need an "Oracle". Thus, Oracles do not exist almost by definition. I don't understand how one can make a mathematical argument based on something that does not exist (excluding unbounded-memory machines, of course). Have a problem? Ask the "magic" Oracle, problem solved in $O(1)$ time. Of course you can't say anything about what is happening inside the oracle, because it's "magic". From this point of view, Turing Machines (a bizarre fetish formalism if you ask me) with Oracles do not exist. Also, arguing that oracle arguments are valid because X historic proofs rely on oracles is itself a circular argument.
There are several applications to oracles. First, there is usage in proving lower bounds (i.e. Turing reductions): if you know that a problem $L$ cannot be solved within some complexity (or computability) class $C$, and you show that an oracle to $L'$ allows you to solve $L$ within $C$, then you can conclude that $L'$ is also not in $C$. Second, there is a pure-mathematical interest in adding oracles. For example, consider an oracle to $A\_{TM}$. The fact that even under this oracle there are problems that are undecidable leads to the beautiful concept of the arithmetical hierarchy. Moreover, there is an interesting connection to logic - the arithmetical hierarchy corresponds to definable sets.
Or more so "will it be"? [Big Data](http://en.wikipedia.org/wiki/Big_data) makes statistics and relevant knowledge all the more important but seems to underplay Sampling Theory. I've seen this hype around 'Big Data' and can't help wonder that "why" would I want to analyze **everything**? Wasn't there a reason for "Sampling Theory" to be designed/implemented/invented/discovered? I don't get the point of analyzing the entire 'population' of the dataset. Just because you can do it doesn't mean you should (Stupidity is a privilege but you shouldn't abuse it :) So my question is this: Is it statistically relevant to analyze the entire data set? The best you could do would be to minimize error if you did sampling. But is the cost of minimizing that error really worth it? Is the "value of information" really worth the effort, time cost etc. that goes in analyzing big data over massively parallel computers? Even if one analyzes the entire population, the outcome would still be at best a guess with a higher probability of being right. Probably a bit higher than sampling (or would it be a lot more?) Would the insight gained from analyzing the population vs analyzing the sample differ widely? Or should we accept it as "times have changed"? Sampling as an activity could become less important given enough computational power :) Note: I'm not trying to start a debate but looking for an answer to understand the why big data does what it does (i.e. analyze everything) and disregard the theory of sampling (or it doesn't?)
Cross validation is a specific example of sub-sampling which is quite important in ML/big data. More generally, big data is still usually a sample of a population, as other people here have mentioned. But I think the OP might be specifically referring to sampling as it applies to controlled experiments, versus observational data. Usually big data is thought of as the latter, but to me at least there are exceptions. I would think of randomized trials, A/B testing, and multi-armed bandits in e-commerce and social network settings as examples of "sampling in big data settings."
I am trying to teach myself data science by solving some of the problems available on the internet. Currently I am trying to predict a fraud event with the aid of 4 categorical variables. Each of the categorical variables has hundreds of levels. Some of the levels occur frequently, some not at all. Currently I have tried the following:

* Throwing the categorical variables directly into linear regression. I get memory-related errors.
* Dummy-encoding each categorical variable and throwing all of them into a linear regression. This also fails due to a memory error.
* Throwing the categorical variables into a Random Forest. This fails as Random Forest can only handle 52 distinct levels.

None of the approaches I tried above have worked. What else can I try? Any help is appreciated.

Update #1: The categorical variables are groupings like zip code, county, etc. In addition to these, the customers are grouped into different buckets based on the presence or absence of certain factors.

Update #2: I was able to use memory.limit() and increase the maximum memory size available to R. This has solved my memory issues.

Update #3: My current approach is to use linear regression with regularization to find the average fraud rate per factor level (for example the average fraud rate per zip code). Now the categorical variable with hundreds of factors has been reduced to a numeric variable. I follow the same approach to convert all 4 categorical variables into numeric ones, and then throw these 4 numeric variables into a random forest. My results with this approach are okay, with lots of room for improvement.

Update #4: I am using the R ecosystem.
Unless I'm missing something, it would make a lot of sense to try a generalized linear mixed model on this (e.g. `lme4` package in R). * it avoids the memory problem because the model matrix is coded as a sparse matrix (you can do this in any case to help with memory use, even if you choose to use some other approach -- e.g. see `Matrix::sparse.model.matrix` in R). * it automatically does regularization on the underlying parameters associated with each categorical predictor, with the degree of regularization selected automatically (by assuming that the variation among categories is Normally distributed on the link scale). This assumption might be stronger than you want to make, but it's theoretically and computationally convenient.
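To illustrate the sparse-matrix point outside of R (a rough Python/SciPy sketch with made-up data, showing roughly the same idea that `Matrix::sparse.model.matrix` relies on): a sparse one-hot encoding of a high-cardinality categorical keeps only one non-zero entry per row, so memory grows with the number of observations rather than observations times levels.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n_obs, n_levels = 1_000_000, 500            # e.g. a zip-code-like factor
codes = rng.integers(0, n_levels, size=n_obs)

# sparse one-hot design matrix: one '1' per row, in the column of that row's level
X = sparse.csr_matrix(
    (np.ones(n_obs), (np.arange(n_obs), codes)),
    shape=(n_obs, n_levels),
)

print(X.shape)                # (1000000, 500)
print(X.data.nbytes / 1e6)    # ~8 MB of non-zeros vs ~4 GB for a dense float64 matrix
```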
I've been looking into computer assisted music composition lately for my school project. While searching for literature I came across [GenJam](http://igm.rit.edu/~jabics/GenJam94/Paper.html), an interactive jazz improvisation software which uses genetic algorithms to produce musical phrases. I was wondering If anyone has done some work on computer generated music and could suggest term papers, books or other reading material I should look into.
see also eg * [Wikipedia evolutionary music](http://en.wikipedia.org/wiki/Evolutionary_music) * [A genetic algorithm for producing music](http://www.doiserbia.nb.rs/img/doi/0354-0243/2010/0354-02431001157M.pdf) Matic * [A fitness function for computer-generated music using genetic algorithms](http://arantxa.ii.uam.es/~alfonsec/docs/artint/music2b.pdf) Alfonseca * [Computer Models of Musical Creativity](http://rads.stackoverflow.com/amzn/click/0262033380) Cope / "asks whether computer programs can effectively model creativity -- and whether computer programs themselves can create" * [Virtual Music: Computer Synthesis of Musical Style](http://rads.stackoverflow.com/amzn/click/0262532611) Cope / "the author and a distinguished group of experts discuss many of the issues surrounding the program, including artificial intelligence, music cognition, and aesthetics." * [Machine Musicianship](http://rads.stackoverflow.com/amzn/click/0262681498) "This book explores the technology of implementing musical processes such as segmentation, pattern processing, and interactive improvisation in computer programs."
This is a problem from the practice session of the [Polish Collegiate Programming Contest 2012](http://main.edu.pl/en/archive/amppz/2012/dzi). Although I could find the solutions for the main contest, I can't seem to find the solution for this problem anywhere. The problem is: Given a set of $N$ distinct positive integers not greater than $10^9$, find the size $m$ of the smallest subset that has no common divisor other than 1. $N$ is at most 500, and a solution can be assumed to exist. I managed to show that $m \le 9$. My reasoning is: Suppose there exists a minimal subset $S$ of size $|S|=10$, with gcd = 1. Then all 9-subsets of $S$ must have gcd > 1. There are exactly 10 such subsets, and their gcds must be pairwise coprime. Let these gcds be $1 < g\_1 < g\_2 < ... < g\_{10}$, where $\gcd(g\_i,g\_j)=1$, for $i \neq j$. Then the maximum number in $S$ is $g\_2g\_3...g\_{10}$. But $g\_2g\_3...g\_{10} \ge 3\times5\times7\times11\times...\times29=3234846615 > 10^9$, a contradiction. However, even with this, a straightforward brute force is still too slow. Does anyone have any other ideas?
This problem is equivalent to the following, and it's trivial to construct reductions both ways.

Given a list of bit vectors, find the minimum number of them such that the bitwise `and` of all of them is the $0$ bit vector. $(\*)$

Then we show set cover reduces to $(\*)$. By set cover, I mean: given a list of sets $S\_1,\ldots,S\_k$, find the minimum number of sets that cover their union. We order the elements in the sets as $a\_1,\ldots,a\_n$. Let $f(S) = (1-\chi\_{a\_1}(S),\ldots,1-\chi\_{a\_n}(S))$, where $\chi\_x(S) = 1$ if $x\in S$, 0 otherwise. Note this function is a bijection, so it has an inverse.

Now, if we solve $(\*)$ on $f(S\_1),\ldots,f(S\_k)$ and the solution is $\{ f(S\_{b\_1}),\ldots,f(S\_{b\_m})\}$, then $\{ S\_{b\_1},\ldots,S\_{b\_m}\}$ is a solution to set cover.

Thus I would think this problem is testing one's ability to prune the search space.
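A tiny sketch of the encoding (Python, with a toy instance I made up) may make the correspondence clearer: AND-ing the complemented characteristic vectors gives the all-zeros vector exactly when the chosen sets cover every element of the universe.

```python
from itertools import combinations

universe = [1, 2, 3, 4]                      # a_1..a_n
sets = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]      # S_1..S_k (toy instance)

def f(S):
    """Complemented characteristic vector of S, as a tuple of bits."""
    return tuple(0 if a in S else 1 for a in universe)

vectors = [f(S) for S in sets]

def and_all(vs):
    return tuple(min(bits) for bits in zip(*vs))   # position-wise AND of 0/1 bits

# smallest sub-collection whose AND is all zeros == smallest cover of the universe
for m in range(1, len(sets) + 1):
    hit = [c for c in combinations(range(len(sets)), m)
           if and_all([vectors[i] for i in c]) == (0,) * len(universe)]
    if hit:
        print(m, [sets[i] for i in hit[0]])   # e.g. 2 [{1, 2}, {3, 4}]
        break
```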
I am not a Regular Expression expert, but my request is simple: I need to match any string that has at least 3 B characters and at most 2 A characters, so for example:

```
"ABABBB" => Accepted
"ABBBBBB" => Accepted
"BBB" => Accepted
"ABB" => Rejected
"AABB" => Rejected
"AA" => Rejected
```
A problem is said to be NP-complete if it is in both NP and NP-hard, so yes all NP-complete problems are also NP-hard.
I have **daily** time series data. I have to perform autocorrelation on it so as to find the correlation between weeks and find the minimum number of lags that have good correlation coefficients. That is, which weeks is week(t) correlated with? Is it well correlated with week(t-1), week(t-2), etc.? Autocorrelation on my data would give me correlations between days, not between weeks, which is what I want. That is, I want autocorrelation on groups of 7 days (1 week = 7 days). How can I do it?
You can first average the daily values to weekly (so that, for example, if you started out with 364 daily values, you'll have 52 weekly values) and then compute the autocorrelation of the weekly series.
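For instance, assuming the daily values live in a pandas `Series` with a `DatetimeIndex` (the toy series below is made up), a rough sketch would be:

```python
import numpy as np
import pandas as pd

# toy daily series: 364 days = 52 weeks
idx = pd.date_range("2023-01-02", periods=364, freq="D")
daily = pd.Series(np.random.randn(364).cumsum(), index=idx)

weekly = daily.resample("W").mean()          # average the 7 daily values of each week

# autocorrelation of the weekly series at lags of 1, 2, 3, ... weeks
for lag in range(1, 5):
    print(lag, round(weekly.autocorr(lag=lag), 3))
```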
For 100 companies, I have collected (i) `tweets` and (ii) corporate website `pageviews` for `148` days. The tweet volume and pageviews per day are two independent variables compared against the stock `trading volume` for each company, resulting in 100 x 148 = 14,800 observations. My data is structured like this:

```
company  date  tweetVol  pageviewVol  tradingVol
------------------------------------------------
1        1     200       150          2423325
1        2     194       152          2455343
1        3     214       199          3100429
.        .     .         .            .
.        .     .         .            .
1        148   205       233          2563463
2        1     752       932          7434124
2        2     932       2423         7464354
2        3     600       1435         5324323
.        .     .         .            .
.        .     .         .            .
.        .     .         .            .
100      148   3         155          32324
```

Because there is much difference in company size (some companies only receive 2 tweets per day, where others like Apple get over 10,000 per day), all variables are logged to smooth the distribution. (This is in line with previous research - this is for my thesis.)

I just performed a linear regression on this data, including both independent variables. R-squared is .411 but Durbin-Watson is only .141 (!) Without looking for the exact boundaries, I know this directly means my residuals are non-linear, e.g. auto-correlated, right?

My question is: how can I solve this? When I think about it, this data should not be autocorrelated, so I don't really understand. Is it due to this actually being a time series analysis? I wouldn't think that either, since for instance trading volume today is independent of yesterday's trading volume. Can somebody explain this to me?

P.S. At my university, we use SPSS/PASW without additional modules, so I am unable to perform a time series analysis on this like you could in STATA or R.
Autocorrelation has nothing to do with nonlinearity. The Durbin-Watson test is used to determine if the residuals from your model have significant autocorrelation. So you look at the p-value for the test and conclude that there is autocorrelation if the p-value is small (usually taken as less than 0.05). Is 0.141 the p-value for the test or the value of the test statistic? If it is the p-value, it is not low enough to conclude that there is significant autocorrelation. If it is the value of the test statistic, you need to find out what the corresponding p-value is. If you do have a p-value less than 0.05, a way to account for this would be to construct a model that includes a residual correlation structure, such as an autoregressive model for the residuals.
Given a huge database of allowed words (alphabetically sorted) and a word, find the word from the database that is closest to the given word in terms of Levenshtein distance. The naive approach is, of course, to simply compute the Levenshtein distance between the given word and all the words in the dictionary (we can do a binary search in the database before actually computing the distances). I wonder if there is a more efficient solution to this problem. Maybe some heuristic that lets us reduce the number of words to search, or optimizations to the Levenshtein distance algorithm. Links to papers on the subject welcome.
If you have a small number of mis-edits that you are going to tolerate, then you can try to use a [dotted suffix tree](https://doi.org/10.1007/11880561_27 "Coelho, L.P., Oliveira, A.L. (2006). Dotted Suffix Trees: A Structure for Approximate Text Indexing. In: Crestani, F., Ferragina, P., Sanderson, M. (eds) String Processing and Information Retrieval. SPIRE 2006. Lecture Notes in Computer Science, vol 4209. Springer, Berlin, Heidelberg"). Disclaimer: I wrote that paper, but it solves what you want: it has a high disk space cost, but queries are really fast. In general, it is better to look at it the other way around: you have an index of all of the words in the dictionary. Now, for an input word w, if it is in the dictionary, stop. Otherwise, generate all variations at distance 1 and look for those. If they are not there, look for variations at distance 2, and so on... There are several improvements to this basic idea.
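A rough sketch of the "generate variations at distance 1 and look them up" idea (Python; the alphabet and the dictionary are placeholders), in the spirit of the well-known spelling-corrector trick:

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings within Levenshtein distance 1 of `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    replaces = {a + c + b[1:] for a, b in splits if b for c in ALPHABET}
    inserts = {a + c + b for a, b in splits for c in ALPHABET}
    return deletes | replaces | inserts

def closest(word, dictionary, max_dist=2):
    """Search outward: distance 0, then 1, then 2, ..."""
    frontier = {word}
    for d in range(max_dist + 1):
        hits = frontier & dictionary
        if hits:
            return d, hits
        frontier = {e for w in frontier for e in edits1(w)}
    return None

dictionary = {"hello", "world", "held", "help"}      # stand-in for the real database
print(closest("helo", dictionary))                   # (1, {'hello', 'held', 'help'})
```

Note that the frontier grows quickly with the distance, which is why this only works for a small number of tolerated mis-edits.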
**I am interested in calculating area under the curve (AUC), or the c-statistic, by hand for a binary logistic regression model.** For example, in the validation dataset, I have the true value for the dependent variable, retention (1 = retained; 0 = not retained), as well as a predicted retention status for each observation generated by my regression analysis using a model that was built using the training set (this will range from 0 to 1). My initial thoughts were to identify the "correct" number of model classifications and simply divide the number of "correct" observations by the number of total observations to calculate the c-statistic. By "correct", if the true retention status of an observation = 1 and the predicted retention status is > 0.5 then that is a "correct" classification. Additionally, if the true retention status of an observation = 0 and the predicted retention status is < 0.5 then that is also a "correct" classification. I assume a "tie" would occur when the predicted value = 0.5, but that phenomenon does not occur in my validation dataset. On the other hand, "incorrect" classifications would be if the true retention status of an observation = 1 and the predicted retention status is < 0.5 or if the true retention status for an outcome = 0 and the predicted retention status is > 0.5. I am aware of TP, FP, FN, TN, but not aware of how to calculate the c-statistic given this information.
I would recommend Hanley’s & McNeil’s 1982 paper ‘[The meaning and use of the area under a receiver operating characteristic (ROC) curve](http://www.med.mcgill.ca/epidemiology/hanley/software/Hanley_McNeil_Radiology_82.pdf)’.

Example
=======

They have the following table of disease status and test result (corresponding to, for example, the estimated risk from a logistic model). The first number on the right is the number of patients with *true* disease status ‘normal’ and the second number is the number of patients with *true* disease status ‘abnormal’:

(1) Definitely normal: 33/3
(2) Probably normal: 6/2
(3) Questionable: 6/2
(4) Probably abnormal: 11/11
(5) Definitely abnormal: 2/33

So there are in total 58 ‘normal’ patients and 51 ‘abnormal’ ones. We see that when the predictor is 1, ‘Definitely normal’, the patient is usually normal (true for 33 of the 36 patients), and when it is 5, ‘Definitely abnormal’, the patient is usually abnormal (true for 33 of the 35 patients), so the predictor makes sense. But how should we judge a patient with a score of 2, 3, or 4? Where we set our cutoff for judging a patient as abnormal or normal determines the sensitivity and specificity of the resulting test.

Sensitivity and specificity
===========================

We can calculate the *estimated* sensitivity and specificity for different cutoffs. (I’ll just write ‘sensitivity’ and ‘specificity’ from now on, letting the estimated nature of the values be implicit.)

If we choose our cutoff so that we classify *all* the patients as abnormal, no matter what their test results say (i.e., we choose the cutoff 1+), we will get a sensitivity of 51/51 = 1. The specificity will be 0/58 = 0. Doesn’t sound so good.

OK, so let’s choose a less strict cutoff. We only classify patients as abnormal if they have a test result of 2 or higher. We then miss 3 abnormal patients, and have a sensitivity of 48/51 = 0.94. But we have a much increased specificity, of 33/58 = 0.57.

We can now continue this, choosing various cutoffs (3, 4, 5, >5). (In the last case, we won’t classify *any* patients as abnormal, even if they have the highest possible test score of 5.)

The ROC curve
=============

If we do this for all possible cutoffs, and then plot the sensitivity against 1 minus the specificity, we get the ROC curve.
We can use the following R code: ```r # Data norm = rep(1:5, times=c(33,6,6,11,2)) abnorm = rep(1:5, times=c(3,2,2,11,33)) testres = c(abnorm,norm) truestat = c(rep(1,length(abnorm)), rep(0,length(norm))) # Summary table (Table I in the paper) ( tab=as.matrix(table(truestat, testres)) ) ``` The output is: ```r testres truestat 1 2 3 4 5 0 33 6 6 11 2 1 3 2 2 11 33 ``` We can calculate various statistics: ```r ( tot=colSums(tab) ) # Number of patients w/ each test result ( truepos=unname(rev(cumsum(rev(tab[2,])))) ) # Number of true positives ( falsepos=unname(rev(cumsum(rev(tab[1,])))) ) # Number of false positives ( totpos=sum(tab[2,]) ) # The total number of positives (one number) ( totneg=sum(tab[1,]) ) # The total number of negatives (one number) (sens=truepos/totpos) # Sensitivity (fraction true positives) (omspec=falsepos/totneg) # 1 − specificity (false positives) sens=c(sens,0); omspec=c(omspec,0) # Numbers when we classify all as normal ``` And using this, we can plot the (estimated) ROC curve: ```r plot(omspec, sens, type="b", xlim=c(0,1), ylim=c(0,1), lwd=2, xlab="1 − specificity", ylab="Sensitivity") # perhaps with xaxs="i" grid() abline(0,1, col="red", lty=2) ``` ![AUC curve](https://i.stack.imgur.com/hE7UQ.png) Manually calculating the AUC ============================ We can very easily calculate the area under the ROC curve, using the formula for the area of a trapezoid: ```r height = (sens[-1]+sens[-length(sens)])/2 width = -diff(omspec) # = diff(rev(omspec)) sum(height*width) ``` The result is 0.8931711. A concordance measure ===================== The AUC can also be seen as a concordance measure. If we take all possible *pairs* of patients where one is normal and the other is abnormal, we can calculate how frequently it’s the abnormal one that has the highest (most ‘abnormal-looking’) test result (if they have the same value, we count that this as ‘half a victory’): ```r o = outer(abnorm, norm, "-") mean((o>0) + .5*(o==0)) ``` The answer is again 0.8931711, the area under the ROC curve. This will always be the case. A graphical view of concordance =============================== As pointed out by Harrell in his answer, this also has a graphical interpretation. Let’s plot test score (risk estimate) on the *y*-axis and true disease status on the *x*-axis (here with some jittering, to show overlapping points): ```r plot(jitter(truestat,.2), jitter(testres,.8), las=1, xlab="True disease status", ylab="Test score") ``` ![Scatter plot of risk score against true disease status.](https://i.stack.imgur.com/laXh2.png) Let us now draw a line between each point on the left (a ‘normal’ patient) and each point on the right (an ‘abnormal’ patient). The proportion of lines with a positive slope (i.e., the proportion of *concordant* pairs) is the concordance index (flat lines count as ‘50% concordance’). 
It’s a bit difficult to visualise the actual lines for this example, due to the number of ties (equal risk score), but with some jittering and transparency we can get a reasonable plot: ```r d = cbind(x_norm=0, x_abnorm=1, expand.grid(y_norm=norm, y_abnorm=abnorm)) library(ggplot2) ggplot(d, aes(x=x_norm, xend=x_abnorm, y=y_norm, yend=y_abnorm)) + geom_segment(colour="#ff000006", position=position_jitter(width=0, height=.1)) + xlab("True disease status") + ylab("Test\nscore") + theme_light() + theme(axis.title.y=element_text(angle=0)) ``` ![Scatter plot of risk score against true disease status, with lines between all possible observation pairs.](https://i.stack.imgur.com/16gPA.png) We see that most of the lines slope upwards, so the concordance index will be high. We also see the contribution to the index from each type of observation pair. Most of it comes from normal patients with a risk score of 1 paired with abnormal patients with a risk score of 5 (1–5 pairs), but quite a lot also comes from 1–4 pairs and 4–5 pairs. And it’s very easy to calculate the actual concordance index based on the slope definition: ```r d = transform(d, slope=(y_norm-y_abnorm)/(x_norm-x_abnorm)) mean((d$slope > 0) + .5*(d$slope==0)) ``` The answer is again 0.8931711, i.e., the AUC. The Wilcoxon–Mann–Whitney test ============================== There is a close connection between the concordance measure and the Wilcoxon–Mann–Whitney test. Actually, the latter tests if the probability of concordance (i.e., that it’s the abnormal patient in a *random* normal–abnormal pair that will have the most ‘abnormal-looking’ test result) is exactly 0.5. And its test statistic is just a simple transformation of the estimated concordance probability: ```r > ( wi = wilcox.test(abnorm,norm) ) Wilcoxon rank sum test with continuity correction data: abnorm and norm W = 2642, p-value = 1.944e-13 alternative hypothesis: true location shift is not equal to 0 ``` The test statistic (`W = 2642`) counts the number of concordant pairs. If we divide it by the number of possible pairs, we get a familar number: ```r w = wi$statistic w/(length(abnorm)*length(norm)) ``` Yes, it’s 0.8931711, the area under the ROC curve. Easier ways to calculate the AUC (in R) ======================================= But let’s make life easier for ourselves. There are various packages that calculate the AUC for us automatically. The Epi package --------------- The `Epi` package creates a nice ROC curve with various statistics (including the AUC) embedded: ```r library(Epi) ROC(testres, truestat) # also try adding plot="sp" ``` ![ROC curve from the Epi package](https://i.stack.imgur.com/bWAZT.png) The pROC package ---------------- I also like the `pROC` package, since it can smooth the ROC estimate (and calculate an AUC estimate based on the smoothed ROC): ![ROC curve (unsmoothed and smoothed) from the pROC package](https://i.stack.imgur.com/eSNV7.png) (The red line is the original ROC, and the black line is the smoothed ROC. Also note the default 1:1 aspect ratio. It makes sense to use this, since both the sensitivity and specificity has a 0–1 range.) The estimated AUC from the *smoothed* ROC is 0.9107, similar to, but slightly larger than, the AUC from the unsmoothed ROC (if you look at the figure, you can easily see why it’s larger). (Though we really have too few possible distinct test result values to calculate a smooth AUC). 
The rms package
---------------

Harrell’s `rms` package can calculate various related concordance statistics using the `rcorr.cens()` function. The `C Index` in its output is the AUC:

```r
> library(rms)
> rcorr.cens(testres,truestat)[1]
  C Index
0.8931711
```

The caTools package
-------------------

Finally, we have the `caTools` package and its `colAUC()` function. It has a few advantages over other packages (mainly speed and the ability to work with multi-dimensional data – see `?colAUC`) that can *sometimes* be helpful. But of course it gives the same answer as we have calculated over and over:

```r
library(caTools)
colAUC(testres, truestat, plotROC=TRUE)
             [,1]
0 vs. 1 0.8931711
```

![ROC curve from the caTools package](https://i.stack.imgur.com/DluDs.png)

Final words
===========

Many people seem to think that the AUC tells us how ‘good’ a test is. And some people think that the AUC is the probability that the test will correctly classify a patient. It is **not**. As you can see from the above example and calculations, the AUC tells us something about a *family* of tests, one test for each possible cutoff. And the AUC is calculated based on cutoffs one would never use in practice. Why should we care about the sensitivity and specificity of ‘nonsensical’ cutoff values? Still, that’s what the AUC is (partially) based on. (Of course, if the AUC is *very* close to 1, almost every possible test will have great discriminatory power, and we would all be very happy.)

The ‘random normal–abnormal’ pair interpretation of the AUC is nice (and can be extended, for instance to survival models, where we see if it’s the person with the highest (relative) hazard that dies the earliest). But one would never use it in practice. It’s a rare case where one *knows* one has *one* healthy and *one* ill person, doesn’t know which person is the ill one, and must decide which of them to treat. (In any case, the decision is easy; treat the one with the highest estimated risk.) So I think studying the actual *ROC curve* will be more useful than just looking at the AUC summary measure. And if you use the ROC together with (estimates of the) *costs* of false positives and false negatives, along with base rates of what you’re studying, you can get somewhere.

Also note that the AUC only measures *discrimination*, not calibration. That is, it measures whether you can discriminate between two persons (one ill and one healthy), based on the risk score. For this, it only looks at *relative* risk values (or ranks, if you will, cf. the Wilcoxon–Mann–Whitney test interpretation), not the absolute ones, which you *should* be interested in. For example, if you divide each risk estimate from your logistic model by 2, you will get exactly the same AUC (and ROC).

When evaluating a risk model, *calibration* is also very important. To examine this, you will look at all patients with a risk score of around, e.g., 0.7, and see if approximately 70% of these actually were ill. Do this for each possible risk score (possibly using some sort of smoothing / local regression). Plot the results, and you’ll get a graphical measure of *calibration*.

If you have a model with *both* good calibration and good discrimination, then you start to have a good model. :)
<https://www.medicaljournals.se/jrm/content/html/10.2340/16501977-2210>

I understand the difference between statistical significance and effect size, but I am having trouble interpreting the data in Table III of the article linked above. I am evaluating the WOMAC scores.

My first question is: can effect size confidence intervals cross zero and still be valid?

My second question is about the negative and positive values of Hedges' g. The first study scores the WOMAC in a way where the higher the score, the better the outcome. The other two studies score the WOMAC in a way where the lower the score, the better the outcome. Does this mean that in the first study a positive Hedges' g indicates a larger effect for the treatment group, and that in the other two studies a negative Hedges' g indicates a larger effect for the treatment group?
**Confidence Interval**

> ...can effect size confidence intervals cross zero and still be valid?

Yes. There is no rule about the validity of numbers in a confidence interval. All a confidence interval means is that *if we would repeat this experiment many times and construct a 95% CI every time, then 95% of those intervals contain the true value*. Whether such an interval contains zero is only meaningful in the context of hypothesis testing. If there is a hypothesis $\text{H}\_0: \mu\_1 - \mu\_2 = 0$, then at $\alpha = 0.05$, we cannot conclude a significant difference between $\mu\_1$ and $\mu\_2$ if the 95% confidence interval contains zero.

**Hedges' g**

> My second question is about the negative and positive values of Hedges' g. ...

Hedges' g is the observed difference divided by the pooled standard deviation: $\frac{\bar{y}\_1 - \bar{y}\_2}{s\_{\text{pooled}}}$. If it is positive, then $\bar{y}\_1>\bar{y}\_2$. If it is negative, then $\bar{y}\_1<\bar{y}\_2$.
I'm looking for an algorithm to construct a grammar which, given a set of words that can contain multiple identical symbols, represents a compressed version of this set; that is, I can generate only the words of the set, but the grammar will take less memory than the set itself. Besides, I'm looking for an algorithm which can update the grammar when I want to remove a word from the set. What type of algorithm is able to do that?

I'll give a concrete example. Consider a string S="abcdefghij", and then consider the finite set of words "cdhij", "acdef", "fghi", "bcfgij", "defi". I would like to construct a grammar which generates only this set of words (words which can be viewed as concatenations of various substrings of any length from the original string S). Finally, I would like to remove a word from the set and update the grammar accordingly. Thank you.
It is not clear what you mean by "words which have multiple identical symbols". It sounds like you mean certain symbols are interchangeable. In that case, just replace all occurrences of an interchangeable symbol with a single representative symbol.

As to the more general question, there is an area of CS, not so widely studied, called [grammar-based codes](http://en.wikipedia.org/wiki/Grammar-based_code), which is, roughly, focused on the idea of creating compressions of strings based on grammar expansions. In this way, classic compression algorithms such as e.g. [Lempel-Ziv](http://en.wikipedia.org/wiki/Lempel_Ziv) can be regarded as special cases of grammar compression.

Your problem is not so well defined unless you also specify the type of grammar, e.g. a CFG. If the grammar is generated by a recursive machine, then this problem is actually similar or almost identical to finding the [Kolmogorov complexity](http://en.wikipedia.org/wiki/Kolmogorov_complexity) of a string, which is undecidable in general.
> > Migrated from [stackoverflow](https://stackoverflow.com/questions/4914162/how-to-calculate-this-string-dissimilarity-function-efficiently). > > > Hello, I was looking for a string metric that have the property that moving around large blocks in a string won't affect the distance so much. So "helloworld" is close to "worldhello". Obviously Levenshtein distance and Longest common subsequence don't fulfill this requirement. Using Jaccard distance on the set of n-grams gives good results but has other drawbacks (it's a pseudometric and higher n results in higher penalty for changing single character). [original research] As I thought about it, what I'm looking for is a function f(A,B) such that f(A,B)+1 equals the minimum number of blocks that one have to divide A into (A1 ... An), apply a permutation on the blocks and get B: ``` f("hello", "hello") = 0 f("helloworld", "worldhello") = 1 // hello world -> world hello f("abba", "baba") = 2 // ab b a -> b ab a f("computer", "copmuter") = 3 // co m p uter -> co p m uter ``` This can be extended for A and B that aren't necessarily permutations of each other: any additional character that can't be matched is considered as one additional block. ``` f("computer", "combuter") = 3 // com uter -> com uter, unmatched: p and b. ``` Observing that instead of counting blocks we can count the number of pairs of indices that are taken apart by a permutation, we can write f(A,B) formally as: ``` f(A,B) = min { C(P) | P ⊆ |A|×|B|, P is a bijective function, ∀i∈dom(P) A[P(i)]=B[P(i)] } C(P) = |A| + |B| − |dom(P)| − |{ i | i,i+1∈dom(P) and P(i)+1=P(i+1) }| − 1 ``` The problem is... guess what... ... that I'm not able to calculate this in polynomial time. Can someone suggest a way to do this efficiently (possibly with minor modifications)? Or perhaps point me to already known metric that exhibits similar properties?
What you seem to be looking for is called the minimum common string partition, and it has been well studied. In particular, it's known to be NP-hard. There is also a closely related concept of edit distance with block moves, which would capture the extension you mention, where substitutions are allowed.
I'm having an argument with a colleague about the probability of a 4-digit number. The assumption is that the number is randomly and independently created. Consider the probability of generating the number 4109. Now, my colleague reckons that it's simply P(4109) = 1/10 \* 1/10 \* 1/10 \* 1/10 (a bit like simply tossing a coin). But I'm not interested in the probability of generating the numbers 0149, 9014, 4910 etc. Obviously the probability of that specific sequence, 4109, is a probability found by calculating the permutation, not a simple combination... Please confirm my belief or point me to some proper material to support this.
Your colleague is right. If your digits are randomly and independently selected (assuming equal likelihood for each digit), the chance that your number is 4109 is nothing more complicated than: * the chance that the first digit is 4 * the chance that the second digit is 1 * the chance that the third digit is 0 * the chance that the fourth digit is 9 all multiplied together. That's $(1/10)^4$, as your colleague says. Where would permutations come into it? You said in the question that you're "not interested in the probability of generating the numbers 0149, 9014, 4910 etc etc", i.e. the set of all 4-digit numbers where the digits are 0,1,4,9 in any order, but that isn't what $(1/10) ^ 4$ tells you - to calculate that you'd need: * the probability that the first digit is 0,1,4, or 9 = 4/10 * the probability that the second digit is one of the three not yet selected = 3/10 * the probability that the third digit is one of the two not yet selected = 2/10 * the probability that the final digit is the remaining one from the set of 0,1,4,9 = 1/10 i.e. $\prod\_{i=1}^{4} \frac{i}{10}$ You could *then* worry about permutations to pick 4109 specifically, but this is needless complication in comparison the way your colleague said.
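If it helps to convince anyone, a quick simulation (Python, purely illustrative) reproduces both numbers: about $1/10^4$ for the exact sequence 4109, and about $24/10^4$ for "the digits 0, 1, 4, 9 in any order":

```python
import random

trials = 1_000_000
exact = anagram = 0

for _ in range(trials):
    s = "".join(random.choice("0123456789") for _ in range(4))
    if s == "4109":
        exact += 1
    if sorted(s) == sorted("4109"):
        anagram += 1

print(exact / trials)    # ~0.0001  = 1/10^4
print(anagram / trials)  # ~0.0024  = 4!/10^4
```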
I have been reading about AVL trees; at the moment I'm trying to figure out how to determine the height of a tree and how to draw an AVL tree of a given height with the minimum number of elements. In a tutorial I found that this would be an AVL tree of height 7:

![enter image description here](https://i.stack.imgur.com/xc90l.jpg)

and this an AVL tree of height 4:

![enter image description here](https://i.stack.imgur.com/ZseOY.jpg)

This is really confusing: by the look of it, I would guess that both of them are of height 4. I'm fairly new to data structures and could not find a simple tutorial on this; most of what I found was about insertion/deletion in AVL trees. So is the top tree of height 7? If not, how would I draw a tree of height 7 with the minimal number of elements? I understand that each subtree would have to be balanced.
Generally [tree height is defined](http://en.wikipedia.org/wiki/Tree_height#Terminology) as the length of the longest path from the root to a leaf. Therefore you're right that the two trees have the same height, but they actually have height 3. As far as your second question, about how to draw the smallest possible AVL tree of a given height $n$, the trick is to think inductively. Construct a new tree of height $n$ by joining the smallest possible AVL trees of height $n-1$ and height $n-2$ as subtrees of a new common root. Using the definition of AVL trees, it's a straightforward exercise to see that (1) this new tree is a valid AVL tree and (2) it's the smallest possible AVL tree of height $n$. To get the height 7 tree in particular, find the base case minimal AVL trees of heights 0 and 1, and repeat the above construction 6 times.
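To make the induction concrete, here is a small sketch (illustration only) of the resulting recurrence for the minimum number of nodes: a minimal AVL tree of height $h$ is a root whose subtrees are minimal AVL trees of heights $h-1$ and $h-2$.

```
def min_avl_nodes(h: int) -> int:
    """Minimum number of nodes in an AVL tree whose height (longest root-to-leaf
    path, counted in edges) is exactly h."""
    if h == 0:
        return 1
    if h == 1:
        return 2
    # root + a minimal subtree of height h-1 + a minimal subtree of height h-2
    return 1 + min_avl_nodes(h - 1) + min_avl_nodes(h - 2)

print([min_avl_nodes(h) for h in range(8)])  # [1, 2, 4, 7, 12, 20, 33, 54]
```

So a minimal AVL tree of height 7 needs 54 nodes; each of these counts is one less than a Fibonacci number.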
Suppose I write a program as below:

```
int main()
{
    int a = 3;
    int b = 4;
    return 0;
}
```

Suppose the computer stores the address of 'a' as 0x00104. How does it associate 'a' with 0x00104?
> Suppose the computer stores the address of 'a' as 0x00104.

No, the computer does not store the address of 'a' anywhere. The compiler (or assembler) converts source code into object modules, and those object modules contain instructions and data. 'a' is a symbolic address, which the compiler binds to a relocatable address. The linkage editor or loader in turn binds the relocatable address to an absolute address. So there is no storage of the name, only binding.

Note that binding to an absolute address can happen at compile time, load time, or execution time, whereas the binding of the symbolic name to a relocatable address is always done by the compiler.
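As a purely illustrative sketch (a toy model, not how any real compiler is implemented), you can picture the compile-time side of this as a symbol table mapping each local name to an offset inside a not-yet-located stack frame; the name itself never makes it into the generated code:

```
# Toy model of a compiler's symbol table (illustration only).
# The names 'a' and 'b' live only in this table; emitted instructions refer to
# offsets relative to a frame base whose absolute address is unknown until run time.
symbol_table = {}
next_offset = 0

def bind(name: str, size: int) -> int:
    """Bind a symbolic name to the next free relocatable offset in the frame."""
    global next_offset
    offset = next_offset
    symbol_table[name] = offset
    next_offset += size
    return offset

bind("a", 4)  # int a -> offset 0
bind("b", 4)  # int b -> offset 4

print(symbol_table)  # {'a': 0, 'b': 4}
# Generated code would then read like "store 3 at frame_base + 0";
# the loader / operating system later decides what frame_base actually is.
```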
> Give an asymptotic upper bound for $$T(n) = \sqrt{n}\cdot T(\sqrt{n})+n+n/\log n.$$

How can I solve this recurrence relation, which involves square roots?
Dividing both sides by $n$ and then substituting $n = 2^m$, i.e. introducing $S(m):=\frac{T(2^m)}{2^m}$, yields

$$S(m)=S(m/2)+1+\frac{1}{m}.$$

It follows that

$$1<S(m)-S(m/2)\leq 2,$$

and further, for every $k$,

$$1<S(m/2^k)-S(m/2^{k+1})\leq 2.$$

Summing these for $k=0,\dots,\log(m)-1$ telescopes to

$$S(m)=\Theta(\log m),$$

and so

$$T(n)=n\cdot S(\log n)=\Theta(n\cdot\log\log n).$$
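A quick numerical sanity check of the transformed recurrence (a throwaway sketch; the base case $S(m)=0$ for $m\le 1$ is an arbitrary choice and doesn't affect the asymptotics):

```
import math

def S(m: float) -> float:
    # S(m) = S(m/2) + 1 + 1/m, with an assumed base case S(m) = 0 for m <= 1
    if m <= 1:
        return 0.0
    return S(m / 2) + 1 + 1 / m

for m in (2**10, 2**20, 2**40):
    print(m, S(m) / math.log2(m))  # ratio drifts towards 1, i.e. S(m) = Theta(log m)
```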
I was recently exploring CNNs and learned that the initial step consists of multiplying the pixels of the input image with the corresponding values in the kernel (a dot product of the kernel and an image patch). What exactly is the reason or intuition behind multiplying the individual pixel values, and why not add, subtract, or divide?
Note that two RVs can have equal $k$-th moments for all $k=1,2,...$ and still have different distributions, so moments don't always capture "the essence" of the distribution. There are many counterexamples online and also in Casella and Berger. However, if the [moment generating function](https://en.wikipedia.org/wiki/Moment-generating_function) is defined in an interval around $0$, then equality of moments does imply equality of distributions.
Current floating-point types (ANSI C float, double) allow us to represent only an **approximation** of a real number. Is there any way to **represent real numbers without errors**?

Here's an idea I had, which is anything but perfect. For example, 1/3 is 0.33333333... (base 10) or 0.01010101... (base 2), but also 0.1 (base 3). Would it be a good idea to implement this "structure":

```
base, mantissa, exponent
```

so that 1/3 could be stored as 1.0 * 3^-1:

```
{[11] = base 3, [1.0] mantissa, [-1] exponent}
```

Any other ideas?
It all depends on what you want to do. For example, what you show is a great way of representing rational numbers, but it still can't represent something like $\pi$ or $e$ exactly. In fact, many languages such as Haskell and Scheme have built-in support for rational numbers, storing them in the form $\frac{a}{b}$ where $a,b$ are integers. The main reason these aren't more widely used is performance: floating-point numbers are a bit imprecise, but their operations are implemented in hardware, whereas your proposed system allows for greater precision but requires several steps per operation, as opposed to a single operation that can be performed in hardware.

It's also known that some real numbers are uncomputable, such as [Chaitin's constant](http://en.wikipedia.org/wiki/Chaitin%27s_constant). There is no algorithm enumerating its digits, unlike $\pi$, where we can compute the $n$th digit as long as we're willing to wait long enough.

If you want exact handling of irrational or transcendental numbers, you'd likely need some sort of symbolic algebra system, which gives you a final answer in symbolic form that you can then approximate to any number of digits. Because of the undecidability problems outlined above this approach is necessarily limited, but it is still good for things like approximating integrals or infinite series.
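As a concrete illustration of the rational-number approach, Python's standard library ships an exact rational type, `fractions.Fraction` (shown here only as one example of the idea; the languages mentioned above have analogous types):

```
from fractions import Fraction

x = Fraction(1, 3)           # exactly one third, stored as numerator/denominator
print(x + Fraction(1, 6))    # 1/2, with no rounding error
print(0.1 + 0.2)             # 0.30000000000000004 -- the usual binary floating-point artefact
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

Every operation here is exact, at the cost of the numerator and denominator potentially growing without bound, which is precisely the performance trade-off described above.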