I am struggling with the following problem: Given a set of finite binary strings $S=\{s\_1,\ldots,s\_k\}$, we say that a string $u$ is a concatenation over $S$ if it is equal to $s\_{i\_{1}} s\_{i\_{2}} \cdots s\_{i\_{t}}$ for some indices $i\_1,\ldots, i\_t \in \{1,\ldots, k\}.$ Your friend is considering the following problem: given two sets of finite binary strings $A=\{a\_1,\ldots,a\_m\}$ and $B=\{b\_1,\ldots , b\_n \}$, does there exist any string $u$ so that $u$ is both a concatenation over $A$ and a concatenation over $B$? Your friend announces "at least the problem is in $\mathcal{NP}$, since I would just have to exhibit such a string $u$ in order to prove the answer is yes." You point out that this is an inadequate explanation. How do we know the shortest string $u$ doesn't have length exponential in the size of the input? **Prove the following: If there is a string $u$ that is a concatenation over both $A$ and $B$, then there is such a string whose length is bounded by a polynomial in the sum of the lengths of the strings in $A\cup B$.** Now, I have actually found a solution and the solution claims that the maximum length of $u$ is at most $n^2L^2$ where we assume $m \leq n$ and $L$ denotes the maximum length of any string in $A \cup B$. The solution then goes on to exhibit a proof by contradiction using the pigeonhole principle to show that if we assume $u$ has length greater than $n^2L^2$ we arrive at a contradiction. My question is: where does the bound $n^2L^2$ come from? I know that $n$ is the number of strings in set $B$ and $L$ is the maximum length of any string in $A \cup B$. But I feel as if the largest possible concatenation over both would be more like $2nL$ since we would need to include every string in both $A$ and $B$ and the longest string in either is $L$ and there are at most $n$ strings in each. What am I missing? I think I can handle the remainder of the proof once I understand where the $n^2L^2$ bound is coming from.
You can also prove this using automata theory. For a set $S = \{s\_1,\ldots,s\_k\}$ of strings over $\Sigma$, consider the following language over $\Sigma \cup \{1,\ldots,k\}$: $$ L\_S = (1s\_1+2s\_2+\cdots+ks\_k)^+. $$ This language is accepted by a DFA of size roughly $\|S\| := |s\_1|+\cdots+|s\_k|$. The exact size depends on the model of DFA - in our case it makes sense to ask for *at most* one transition out of each state with a given label (rather than *exactly* one transition, as is usually the case). Now we want to do the same, but for two sets of strings $A = \{a\_1,\ldots,a\_m\}$, $B = \{b\_1,\ldots,b\_n\}$. To this end, we consider languages $L\_A,L\_B$ which are defined as before, but with two differences: * $1,\ldots,k$ are replaced with $A\_1,\ldots,A\_m$ for $L\_A$ and $B\_1,\ldots,B\_n$ for $L\_B$. * The symbols $A\_1,\ldots,A\_m$ are ignored in $L\_B$, and the symbols $B\_1,\ldots,B\_n$ are ignored in $L\_A$. As an example, suppose that $A = \{a,ba\}$ and that $B = \{ab,a\}$. Then the following word will be in our language: $A\_1B\_1aA\_2bB\_2a$. As before, we can construct DFAs for $L\_A,L\_B$ having roughly $\|A\|,\|B\|$ states, respectively. Using the product construction, we get a DFA of size roughly $\|A\|\cdot\|B\|$ for the intersection. It is well-known (and can be proved using the pigeonhole principle) that if a DFA having $N$ states accepts any word, then it accepts some word of length less than $N$. This implies your claim. --- In terms of lower bounds, it is easy to refute your conjecture that $2nL$ is enough. Consider $A = \{ 0^L \}$ and $B = \{ 0^{L-1} \}$. It is not hard to check that the minimal solution has length $L(L-1) \approx L^2$. It's less clear what the correct dependence on $n$ is.
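As a rough illustration of where the polynomial bound comes from (a sketch only, not the exact tagged-DFA construction above; the function and variable names are mine), one can do a breadth-first search over "overhang" states: a state records how much of the current $A$-string and of the current $B$-string is still unmatched, so there are at most $(\|A\|+1)\cdot(\|B\|+1)$ reachable states, and the first time we return to the empty overhang we have a common concatenation.

```
from collections import deque

def shortest_common_concatenation(A, B):
    """BFS over 'overhang' states (ra, rb): the unmatched suffix of the
    current A-string and of the current B-string.  Every edge consumes one
    character; reaching ("", "") again means the word built so far is a
    concatenation over A and over B simultaneously."""
    queue = deque([("", "", "")])          # (ra, rb, word built so far)
    seen = {("", "")}
    while queue:
        ra, rb, word = queue.popleft()
        # A side that has finished its current string may start any new one.
        next_as = [ra] if ra else list(A)
        next_bs = [rb] if rb else list(B)
        for na in next_as:
            for nb in next_bs:
                if na[0] != nb[0]:
                    continue               # both sides must emit the same character
                state = (na[1:], nb[1:])
                new_word = word + na[0]
                if state == ("", ""):
                    return new_word        # both sides sit at a string boundary
                if state not in seen:
                    seen.add(state)
                    queue.append((state[0], state[1], new_word))
    return None                            # no common concatenation exists

print(shortest_common_concatenation(["0000"], ["000"]))        # "0" * 12, as in the lower bound
print(shortest_common_concatenation(["a", "ba"], ["ab", "a"])) # "a"
```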
Let's say I am calculating heights (in cm) and the numbers must be higher than zero. Here is the sample list:
```
0.77132064
0.02075195
0.63364823
0.74880388
0.49850701
0.22479665
0.19806286
0.76053071
0.16911084
0.08833981

Mean: 0.41138725956196015
Std: 0.2860541519582141
```
In this example, according to the normal distribution, 99.7% of the values must lie within ±3 standard deviations of the mean. However, even the mean minus twice the standard deviation is already negative:
```
mean - 2 x std = 0.41138725956196015 - 0.2860541519582141 x 2 = -0.160721044354468
```
However, my numbers must be positive. So they must be above 0. I can ignore negative numbers, but I doubt this is the correct way to calculate probabilities using the standard deviation. Can someone help me understand whether I am using this in the correct way? Or do I need to choose a different method? Well, to be honest, math is math. It doesn't matter if it is a normal distribution or not. If it works with signed numbers, it should work with positive-only numbers as well! Am I wrong? **EDIT1: Added histogram** To be more clear, I have added my real data's histogram [![enter image description here](https://i.stack.imgur.com/iLw3X.png)](https://i.stack.imgur.com/iLw3X.png) **EDIT2: Some values**
```
Mean: 0.007041500928135767
Percentile 50: 0.0052000000000000934
Percentile 90: 0.015500000000000047
Std: 0.0063790857035425025
Var: 4.06873389299246e-05
```
In one of the comments you say you used "random data" but you don't say from what distribution. If you are talking about heights of humans, they are roughly normally distributed, but your data are not remotely appropriate for human heights - yours are fractions of a cm! And your data are not remotely normal. I'm guessing you used a uniform distribution with bounds of 0 and 1. And you generated a very small sample. Let's try with a bigger sample:
```
set.seed(1234) #Sets a seed
x <- runif(10000, 0 , 1)
sd(x) #0.28
```
so, none of the data is beyond 2 sd from the mean, because that is beyond the bounds of the data. And the portion within 1 sd will be approximately 0.56.
As a mathematician/economist, I am not trained to think in terms of classification and regression tasks. This is why I wonder: is there a clear, widely accepted definition of regression and classification problems? E.g., [this paper](https://arxiv.org/pdf/1302.1545.pdf) says that *When $Y$ has a finite number of states we refer to the task as classification. Otherwise we refer to the task as regression.* Does this mean that count data with many counts are a classification problem (e.g., Poisson regression)? Or, if we model life expectancy, is this generally supposed to be a classification task? Is this definition generally accepted?
To muddy the waters further, classification can mean 1. trying to find distinct classes in a dataset from scratch, which has attracted many different names, including **mathematical** or **numerical taxonomy**, but **cluster analysis** seems the most durable and popular 2. assigning observations to classes already defined, which has other names too including **identification** and **discrimination**.
I am trying to train a deep network for twitter sentiment classification. It consists of an embedding layer (word2vec), an RNN (GRU) layer, followed by 2 conv layers, followed by 2 dense layers. Using ReLU for all activation functions. I have just started using tensorboard & noticed that I seemingly have extremely small gradients in my convolutional layer weights (see figure) [![enter image description here](https://i.stack.imgur.com/IdRAo.png)](https://i.stack.imgur.com/IdRAo.png) I believe I have vanishing gradients since the distribution of CNN filter weights does not seem to change & the gradients are extremely small relative to the weights (see figure). [NOTE: the figure shows layer 1, but layer 2 looked very similar] My questions are: 1) Am I interpreting the plots correctly that I do indeed have vanishing gradients & thus my convolutional layers aren't learning? Does this mean they are currently essentially worthless? 2) What can I do to remedy this situation? Thanks! **UPDATE 3/13/18** Few comments: 1) I have tried the network w/ just 1 layer and no layers (RNN-->FC), and having 2 layers does empirically improve performance. 2) I have tried Xavier initialization and it doesn't do much (the previous default initialization mean value of .1 was very close to the Xavier value) 3) By quick math, the gradients seem to change on the order of 1e-5, while the weights themselves are on the order of 1e-1. Thus at every iteration, the weights change 1e-5/1e-1\*100% = ~.01%. Is this to be expected? What is the threshold for how much the weights change until we consider them to have converged / consider the changes to be useless in the sense that they don't change the outcome?
Edit: Definitely try Xavier initialization first, as the other answerer said. In other cases, where you have to increase the gradient manually... Gradient means rate of change of your loss function. If your loss function is not changing much with respect to certain weights, then changing those weights doesn't change your loss function. The weights just determine the type of linear combination from the previous layer. If the loss doesn't change when you change the linear combination, then you need to amplify the effect of increasing or decreasing the linear combination. So what you want is, whenever the weights increase, you want the linear combination to increase by more, and whenever the weights decrease, you want the linear combination to decrease by more. Let's say you multiplied a weight by a constant *k*. Then if you increased that weight the linear combination would increase more, and if you decreased the weight then the linear combination would decrease more. So it must be that multiplying your weights by a constant *k* > **1** would increase the effects on the linear combination from changing the weights. If you multiply all your weights by a constant *k*, then that's the same thing as multiplying the entire linear combination by *k*. So you want to multiply the linear combination by *k* before squishing it with your activation function. Your new activation function then would be this: > > activation = ReLU( linear combination x *k* ) > > > or > > * activation = 0 x (linear combination x *k*), if linear combination <= 0 > * activation = 1 x (linear combination x *k*), if linear combination > 0 > > > Compare to regular ReLU, which is: > > * activation = 0 x linear combination, if linear combination <= 0 > * activation = 1 x linear combination, if linear combination > 0 > > > So you can see that no matter what the case is, you want to multiply the ReLU activation by *k*. Specifically, you want to multiply the ReLU activation by *k* > **1**. In practical terms, this increases the slope of the right side of the ReLU function.
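If it helps to see what that scaling does, here is a tiny numeric check (illustrative only, with an arbitrary *k* = 3 and made-up numbers): the scaled activation is *k* times more sensitive to the weight on the active side, which is the amplification described above.

```
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def scaled_relu(z, k=3.0):
    """ReLU applied to k times the pre-activation, as suggested above."""
    return relu(k * z)

# Finite-difference sensitivity of the activation to the weight, single neuron.
w, x, k, eps = 0.4, 2.0, 3.0, 1e-6
for name, act in [("relu", relu), ("scaled", lambda z: scaled_relu(z, k))]:
    dact_dw = (act((w + eps) * x) - act((w - eps) * x)) / (2 * eps)
    print(name, round(dact_dw, 3))   # 2.0 for plain ReLU, 6.0 for the scaled version
```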
Given a list of (small) primes $ (p\_0, p\_1, \dots, p\_{n-1})$, is there an (efficient) algorithm to enumerate, in order, all numbers that can be expressed as $ \prod\_{k=0}^{n-1} p\_k^{e\_k} $, where $e\_k \in \mathbb{Z}, e\_k \ge 0 $? What about in a certain interval, potentially at an exponential starting point? For example, if we had the set $(2,3,5)$, the first few numbers would be $(2, 3, 2^2, 5, 2 \cdot 3, 2^3, 3^2, 2 \cdot 5, \dots )$. Is there an algorithm to efficiently enumerate all the numbers *not* expressible as a product of powers of primes from the set? How about in an interval? Note: I just saw the Polymath paper on deterministic prime finding in an interval ( [Deterministic methods to find primes](http://arxiv.org/abs/1009.3956) ) and that's what inspired this question. I don't know if it's important that the set be a list of primes, but I'll keep it in there just in case. EDIT: I was unclear about what I meant by 'efficient'. Let me try making it more precise: Given a list of $n$ primes $(p\_0, p\_1, \dots p\_{n-1})$ and a bound, $B$, is it possible to find, in polynomial time with respect to $\lg(B)$ and $n$, the next integer $x$ such that $x > B$ and $x$ is expressible as a product of powers of primes from the list?
You can generalize the standard algorithm for enumerating Hamming numbers in increasing order by merging with a min-heap. The Hamming numbers are the numbers expressible with primes 2, 3 and 5. If you can enumerate the expressible numbers in order, the non-expressible numbers are easily found in the successive gaps. To solve the problem for an interval [i, j], find the greatest expressible number less than i and the least expressible number greater than j, and use that to initialize and terminate the algorithm. Edit: I forgot to mention how you might efficiently find the bounds for the interval problem. Modified bisection should work. You have an exponent sequence as your current guess. For each individual exponent split the difference as in bisection, yielding n variants, and then find the numerically smallest of the variants (or the largest, depending on whether you're searching for lower or upper bounds), and use that as your next guess. > > I don't know if it's important that the set be a list of primes, but I'll keep it in there just in case. > > > No, it doesn't matter. If you use the algorithm I suggested, the effect of allowing composites is that you generate duplicates when unique factorization fails; this happens when two or more numbers in the generating set aren't relatively prime. For generators 2 and 4, the sequence goes ``` 1 = 2^0 4^0, 2 = 2^1 4^0, 4 = 2^2 4^0, 4 = 2^0 4^1, ... ``` Any duplicates will occur in contiguous runs, so they are easily filtered out without keeping a black list. You could also kill them upon insertion in the min-heap.
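A minimal sketch of the min-heap merge (the code and names are mine, just to make the idea concrete): pop the smallest value, push its prime multiples, and skip anything already seen. The gaps between consecutive outputs are exactly the non-expressible numbers.

```
import heapq
from itertools import islice

def smooth_numbers(primes):
    """Yield, in increasing order, all products of powers of the given primes
    (starting from the empty product 1)."""
    heap, seen = [1], {1}
    while True:
        x = heapq.heappop(heap)
        yield x
        for p in primes:
            if x * p not in seen:
                seen.add(x * p)
                heapq.heappush(heap, x * p)

print(list(islice(smooth_numbers([2, 3, 5]), 10)))
# [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
```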
Parity and $AC^0$ are like inseparable twins. Or so it has seemed for the last 30 years. In the light of Ryan's result, there will be renewed interest in the small classes. Furst Saxe Sipser to Yao to Hastad are all parity and random restrictions. Razborov/Smolensky is approximate polynomial with parity (ok, mod gates). Aspnes et al use weak degree on parity. Further, Allender Hertrampf and Beigel Tarui are about using Toda for small classes. And Razborov/Beame with decision trees. All of these fall into the parity basket. 1) What are other natural problems (apart from parity) that can be shown directly not to be in $AC^0$? 2) Anyone know of a drastically different approach to lower bound on AC^0 that has been tried?
[Benjamin Rossman](http://www.mit.edu/~brossman/)'s result on the $AC^0$ lower bound for $k$-clique from STOC 2008. --- References: * Paul Beame, "[A Switching Lemma Primer](http://www.cs.washington.edu/homes/beame/papers/primer.ps)", Technical Report 1994. * Benjamin Rossman, "[On the Constant-Depth Complexity of k-Clique](http://www.mit.edu/~brossman/k-clique-stoc.pdf)", STOC 2008.
I don't really understand, when it comes to mixed models, how you know when to use a linear or a nonlinear model. For example, when using the R function `lmer` to build a linear mixed model, my model may look like this: ``` lmer( Y ~ X1 + X2 + X1*X2 + (1|Z) ) ``` where $Y$ is the response (from repeated-measures data), $X\_1$ and $X\_2$ are fixed effects and $Z$ is the random effect. Does this mean that when you look at these effects separately, like `Y~X1` and `Y~X2`, both have to be linear so that you can use a linear mixed model? What if `Y~X1` is nonlinear and `Y~X2` is linear? Should I use a nonlinear mixed model in that case?
It's not exactly about whether the relationships between Y and the various X are linear or not; a linear model is one that is linear in the *parameters* (just like the case with nonmixed models). So $Y = a + b\_1X\_1 + b\_2X\_2^2 + b\_3X\_3$ is linear, but if there are parameters (b) in the exponents, it is not. Usually, nonlinear mixed models are used when Y is not continuous. They are used for the mixed versions of logistic regression, count regression and so on.
I wanted to make a tool which minimizes the interference between antennas. Currently, the tool is very limited for prototyping reasons. It can only place an antenna every 1 meter. The available space in which the antennas can be placed is one-dimensional and is 15 meters wide. The user can only decide how many antennas will be used (e.g. 3 antennas). I decided to encode the setup as follows: a binary array where 1 represents an antenna and 0 is an empty space, because it seemed straightforward. The length of the array is thus the available space to place those antennas (= 15 meters). For example, here are all the antennas on the left: **11100 00000 00000** and here they are on the right: **00000 00000 00111**. My mutate function enforces that the offspring contains only 3 antennas. If that is not the case, the mutate function will try again. Same thing for the crossover. In this case, the total search space consists of 455 solutions since there are only 455 ways to place those antennas. I benchmarked the algorithm a little bit and I got the following results: * If I use 5% of the search space, thus a population of 7 with 4 generations (7 \* 4 = 28), then I get a solution which is better than 96% of the 455 solutions. * If I use a population of 10% of the search space, thus a population of 9 with 5 generations (9 \* 5 = 45), then I get a solution which is better than 98% of the 455 solutions. * The higher the mutation rate, the better the results. This one is very strange since the algorithm becomes more and more a random search. I thought that it should normally give worse results. The results are 1-2% better when using a mutation rate of e.g. 0.8 instead of 1 / n = 1 / 15 (where n is the length of the encoding). Finally, I have two questions: I got a good solution but never the best one, even by using 10% of the search space. Is this normal? Why does a higher mutation rate give me better results? Is it because I am working with a toy problem? Or is my encoding bad?
I'll start with the second question because it's easier. How could increasing the mutation rate improve the results? Genetic algorithms can beat random searches when they converge on the right answer. Similar to other machine learning algorithms, genetic algorithms have an "explore" phase and a "converge" phase. If the algorithm doesn't run long enough (enough rounds) to enter the converge phase, then a higher mutation rate will likely give you better results. The other culprit may be that the utility score of offspring may be independent of the utility score of the parents even though their parameters are very similar. If this is the case (especially around "good" results) then convergence may be impossible and random search would outperform genetic algorithms. The other question "Why didn't I ever find the best solution?" could also have a few explanations. First, you could have just been unlucky - especially if you didn't run your experiment very many times. Intuitively, a single run searching 10% of the space will have roughly a 10% chance of finding the best solution. Second, it may be that all antenna configurations which are similar to an optimal solution have low scores. This would make it less likely to produce the optimal solution genetically.
This is **not** a class assignment. It so happened that 4 team members in my group of 18 share the same birth month. Let's say June. What are the chances that this could happen? I'm trying to present this as a probability problem in our team meeting. Here is my attempt: * All possible outcomes: $12^{18}$ * 4 people chosen among 18: 18$C\_4$ * Common month can be chosen in 1 way: 12$C\_1$ So the probability of 4 people out of 18 sharing the same birth month is $\frac{18C\_4 \* 12C\_1}{12^{18}}$ = a very, very small number. Questions: 1. Is this the right way to solve this problem? 2. What is the probability that there are **exactly** 4 people sharing a birth month? 3. What is the probability that there are **at least** 4 people (4 or more people) sharing a birth month? Please note: I know that all months are not equal, but for simplicity let's assume all months have an equal chance.
You can see your argument is not correct by applying it to the standard birthday problem, where we know the probability is 50% at 23 people. Your argument would give $\frac{{23\choose 2}{365\choose 1}}{365^{23}}$, which is very small. The usual argument is to say that if we are going to avoid a coincidence we have $365-(k-1)$ choices for the $k$th person's birthday, so the probability of no coincidence in $K$ people is $\prod\_{k=1}^K \frac{365-k+1}{365}$ Unfortunately, there is no such simple argument for more than two coincident birthdays. There is only one way (up to symmetry) for $k$ people to have no two-way coincidence, but there are many, many ways to have no four-way coincidence, so the computation as you add people is not straightforward. That's why R provides `pbirthday()` and why it is still only an approximation. I'd certainly hope this wasn't a class assignment. The reason your argument is not correct is that it undercounts the number of ways you can get 4 matching months. For example, it's not just that you can choose any month of the 12 as the matching one. You can also relabel the other 11 months arbitrarily (giving you a factor of 11! ). And your denominator of $12^{18}$ implies that the ordering of the people matters, so there are more than $18\choose 4$ orderings that have 4 matches.
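If you just want a number to show in the team meeting, a quick Monte Carlo estimate is the easiest sanity check against `pbirthday(18, classes = 12, coincident = 4)`. The sketch below is only an illustration, assuming all months are equally likely as you did; the function name is mine.

```
import random
from collections import Counter

def p_at_least(n_people=18, n_months=12, k=4, trials=200_000):
    """Estimate P(some month is shared by at least k of the n people)."""
    hits = 0
    for _ in range(trials):
        counts = Counter(random.randrange(n_months) for _ in range(n_people))
        if max(counts.values()) >= k:
            hits += 1
    return hits / trials

print(p_at_least())   # prints the estimated probability for "4 or more share a month"
```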
I would like to give a mathematics talk on the [git](https://no.wikipedia.org/wiki/Git) revision control system. It is now widely used in mathematics as well as in the computer science industry. For example, the HoTT (Homotopy Type Theory) community uses it, and it is the go-to system for collaborative editing of text files, whether they be source code or LaTeX markup. I know git uses the notion of a directed acyclic graph, which is a start. However, a good mathematics talk mentions proofs and theorems. What theorem might I prove about git that is actually relevant for its use?
A git repository can be thought of as a partially ordered set of revisions (where one revision is earlier than another in the order if it is a direct or indirect successor of the earlier one). The partial orders that you get from git repositories tend to have low width (the size of the largest set of mutually independent revisions) because the width is directly related to the number of active developers and the number of different forks any individual developer might be working on. Based on this background, I would suggest [Dilworth's theorem](https://en.wikipedia.org/wiki/Dilworth%27s_theorem), which states that the width of any partial order equals the minimum number of chains (totally ordered subsets) needed to cover all of the versions. And to make it on-topic for this board, you could also mention the graph matching based algorithms for computing the width and finding a cover by a minimum number of chains in polynomial time. One way this could be relevant for actual use in Git is in a system for visualizing the version history of a system: most Git visualization systems that I've seen draw time on the vertical axis, and independent versions of the repository horizontally, so this would give you a way to organize the visualization into a small number of independent vertical tracks. Alternatively, if you want something more ambitious and advanced, try Demaine et al.'s [blame tree data structure](http://ezyang.com/papers/demaine13-blametrees.pdf), which is directly motivated by conflict resolution in git-like version control systems.
Wouldn't data be lost when mapping 6-bit values to 4-bit values in DES's S-Boxes? If so, how can we reverse it so the correct output appears?
See Chapter 5 of the textbook "Introduction to Modern Cryptography" by Katz and Lindell.
I have been given a project to implement an [SIRS model](http://en.wikipedia.org/wiki/Epidemic_model#The_SIRS_Model). While searching for how to do it, I found this site and a [question related to epidemic models](https://stats.stackexchange.com/q/16437/930). It is very much related to my project and is quite helpful. However, since I'm new to this topic, can you please help me with how to start implementing the SIRS model? I have to implement a simulator in Java. I don't have any idea how to start the implementation. I will be really grateful if you help me.
Let's go for the one-line solution: ``` replicate(1000, mean(rnorm(100, 69.5, 2.9)) - mean(rnorm(100, 63.9, 2.7))) ```
This is a basic question on Box-Jenkins MA models. As I understand, an MA model is basically a linear regression of time-series values $Y$ against previous error terms $e\_t,..., e\_{t-n}$. That is, the observation $Y$ is first regressed against its previous values $Y\_{t-1}, ..., Y\_{t-n}$ and then one or more $Y - \hat{Y}$ values are used as the error terms for the MA model. But how are the error terms calculated in an ARIMA(0, 0, 2) model? If the MA model is used without an autoregressive part and thus no estimated value, how can I possibly have an error term?
You say "the observation $Y$ is first regressed against its previous values $Y\_{t-1},...,Y\_{t-n}$ and then one or more $Y-\hat{Y}$ values are used as the error terms for the MA model." What I say is that $Y$ is regressed against two predictor series $e\_{t-1}$ and $e\_{t-2}$, yielding an error process $e\_t$ which will be uncorrelated for $t = 3, 4, \ldots, n$. We then have two regression coefficients: $\theta\_1$ representing the impact of $e\_{t-1}$ and $\theta\_2$ representing the impact of $e\_{t-2}$. Thus $e\_t$ is a white noise random series containing n-2 values. Since we have n-2 estimable relationships, we start with the assumption that $e\_1$ and $e\_2$ are equal to 0.0. Now for any pair of $\theta\_1$ and $\theta\_2$ we can estimate the n-2 residual values. The combination that yields the smallest error sum of squares would then be the best estimates of $\theta\_1$ and $\theta\_2$.
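A rough sketch of that search in code (illustrative only; in practice you would use a proper ARIMA routine, but this is the conditional-least-squares idea with the first two residuals set to zero):

```
import numpy as np
from scipy.optimize import minimize

def ma2_css(theta, y):
    """Conditional sum of squares for an MA(2) with e[0] = e[1] = 0."""
    t1, t2 = theta
    dev = y - y.mean()
    e = np.zeros_like(y)
    for t in range(2, len(y)):
        e[t] = dev[t] - t1 * e[t - 1] - t2 * e[t - 2]
    return np.sum(e[2:] ** 2)

# Simulate Y_t = eps_t + 0.6*eps_{t-1} + 0.3*eps_{t-2} and recover the thetas.
rng = np.random.default_rng(0)
eps = rng.normal(size=2000)
y = eps[2:] + 0.6 * eps[1:-1] + 0.3 * eps[:-2]
fit = minimize(ma2_css, x0=[0.0, 0.0], args=(y,))
print(fit.x)   # should land near (0.6, 0.3)
```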
```
int foo(int N) {
    if (N <= 1) {
        return 0;
    } else {
        return 1 + foo(N - 1);
    }
}
```
I can tell that the time complexity of this program is O(N), but I am unsure how to prove it mathematically. If I can get some hints I'd greatly appreciate it.
You can use induction on the input. For example, in your foo, to show that foo(N) uses exactly N comparisons: **Base case:** foo(1) uses 1 comparison. **Induction hypothesis:** foo(N) uses N comparisons. **Step case:** foo(N+1) does one comparison and then calls foo(N), thus, in total, it does N+1 comparisons. You can prove a similar statement for additions or any other operation, and then you can give the time complexity based on those operation counts.
I was reading a tutorial on marginal densities when I came across this example (rephrased). A person is crossing the street and we want to compute the probability that he gets hit by a passing car, depending on the color of the traffic light. Let H be whether the person gets hit or not, and L be the color of the traffic light. So $H = \{\text{hit, not hit} \}$ and $L = \{\text{red, yellow, green} \}$. The probability of getting hit given that the light is red can be written as: $P(H = \text{hit}| L = \text{red})$. Clearly this is a conditional probability. The probability of getting hit regardless of what the light is can be written as: $P(H = \text{hit})$. This is marginal, as I recently understood. But what about $P(H,L)$? This is a joint probability. How do you translate it into a layman's sentence? How is it different from "The probability of getting hit AND the light is red"? Thanks for your insights.
I have tried to explain this example with assumed values of Joint Probability: [![enter image description here](https://i.stack.imgur.com/313KR.png)](https://i.stack.imgur.com/313KR.png)
Given an $n$-vertex undirected graph, what is the best known runtime bound for *finding* a subgraph which is a $k\times k$-biclique? Are there faster parameterized algorithms than the $\binom{n}{k}\mbox{poly}(n)$ time algorithm of "guessing" one side of the biclique and seeing if there are at least $k$ other vertices adjacent to all of them?
Parameterized by degeneracy or arboricity, it's FPT. More specifically, $O(d^3 2^d n)$ where $d$ is the degeneracy (or $a^3 2^{2a}$ for arboricity). See: * Arboricity and bipartite subgraph listing algorithms. D. Eppstein. [Inf. Proc. Lett. 51:207-211, 1994](http://dx.doi.org/10.1016/0020-0190%2894%2990121-X). Another parameterized paper has just been [accepted to SWAT 2012](http://swat2012.helsinki.fi/programme/#papers), this time parameterized by longest induced path length: * Aistis Atminas, Vadim Lozin and Igor Razgon: Linear time algorithm for computing a small biclique in graphs without long induced paths. SWAT 2012, to appear. But my understanding is that whether this is FPT or not with the natural parameter (the size of the biclique) is a big open problem.
I have R scripts for reading large amounts of CSV data from different files and then performing machine learning tasks such as SVM for classification. Are there any libraries for making use of multiple cores on the server with R? Or what is the most suitable way to achieve that?
If it's on Linux, then the most straight-forward is [**multicore**](http://cran.r-project.org/web/packages/multicore/index.html). Beyond that, I suggest having a look at [MPI](http://www.stats.uwo.ca/faculty/yu/Rmpi/) (especially with the [**snow**](http://cran.r-project.org/web/packages/snow/index.html) package). More generally, have a look at: 1. The [High-Performance Computing view](http://cran.r-project.org/web/views/HighPerformanceComputing.html) on CRAN. 2. ["State of the Art in Parallel Computing with R"](http://cran.r-project.org/web/views/HighPerformanceComputing.html) Lastly, I recommend using the [foreach](http://cran.r-project.org/web/packages/foreach/index.html) package to abstract away the parallel backend in your code. That will make it more useful in the long run.
> > Which algorithms are used most often? > > > Please write a single algorithm per answer, try to keep your answer short (one or two lines).
[Quicksort](http://en.wikipedia.org/wiki/Quicksort)
Let's say that I know the following: * $P(A|B)$ is the probability that a storm is coming given it's cloudy. * $P(A|C)$ is the probability that a storm is coming given that the dogs bark. * $B$ and $C$ are independent. How do I compute the following: * $P(A|B,C)$, the probability of a storm coming given that it's cloudy AND the dogs are barking. In layman's terms, I know that there is some likelihood that a storm is coming if it's cloudy. And, I know that there is some likelihood that a storm is coming if the dogs are barking. Therefore, shouldn't I have more confidence that a storm is coming if it's cloudy AND the dogs are barking? How do I compute this? The reason I ask this question is that I am trying to combine measurements from two different sensors that measure the same thing. If I combine the measurements, shouldn't I expect greater confidence in my measurement? This [post](https://stats.stackexchange.com/questions/288260/what-does-pabpac-simplify-to) and this [post](https://stats.stackexchange.com/questions/318674/what-is-pa-b-c-when-b-c-are-both-independent) are related to my question, but the answers fall short in that I do not know the probabilities $P(B)$ and $P(C)$ needed to compute $P(A|B,C)$.
The answer is the conflation of probabilities, explained [here](https://stats.stackexchange.com/q/194884). In the syntax that answers my question, the equation becomes: $P(A|B,C)=\eta{P(A|B) P(A|C)}$ where $\eta$ is the normalization factor: $\eta=\left({P(A|B)P(A|C) + P(\overline A|B)P(\overline A|C)}\right)^{-1}$ My question could be changed to a scenario that is more familiar to us, *"How do I quantify the probability of having cancer if I got an opinion from two different doctors?"* Surely getting a second opinion will give us more confidence about the prognosis! If $A$ indicates you having cancer and $B$ and $C$ are two different doctors' prognosis (thanks @josliber), where $P(A|B)=0.75$ and $P(A|C)=0.75$, then applying those values in the equation above gives us: $P(A|B,C)=\frac{0.75 \* 0.75}{0.75 \* 0.75 + (1-0.75)\*(1-0.75)}=0.9$ Two doctors were 75% confident about their prognosis. Combining those prognoses gives us 90% confidence that I have cancer.
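The same calculation as a small helper function (just an illustration; it assumes the two opinions are conditionally independent, which is what the conflation formula requires):

```
def conflate(p_b, p_c):
    """Combine two conditional probabilities of the same binary event A
    using the normalized product formula above."""
    num = p_b * p_c
    return num / (num + (1 - p_b) * (1 - p_c))

print(conflate(0.75, 0.75))   # 0.9, the two-doctors example
print(conflate(0.75, 0.60))   # combining a 75% and a 60% confident opinion
```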
One of my friends asked me the following scheduling problem on trees. I find it very clean and interesting. **Is there any reference for it?** **Problem:** There is a tree $T(V,E)$, **each edge has a symmetric traveling cost of 1**. For each vertex $v\_i$, there is a task which needs to be done before its deadline $d\_i$. The task is also denoted as $v\_i$. Each task has the uniform value 1. **The processing time is 0 for each task**, i.e., visiting a task before its deadline equals finishing it. Without loss of generality, let $v\_0$ denote the root and assume there is no task located at $v\_0$. There is a vehicle at $v\_0$ at time 0. Besides, we assume that **$d\_i \ge dep\_i$ for every vertex**, where $dep\_i$ stands for the depth of $v\_i$. This is self-evident: a vertex with deadline less than its depth should be treated as an outlier. **The problem asks for a schedule which finishes as many tasks as possible.** **Progress:** 1. If the tree is restricted to a path, then the problem is in $\mathsf{P}$ via dynamic programming. 2. If the tree is generalized to a graph, then the problem is $\mathsf{NP}$-complete. 3. I have a very simple greedy algorithm which I believe is a 3-factor approximation. I have not proved it completely. Right now, I am more interested in the NP-hardness results. :-) Thanks for your advice.
*Not sure this is your answer (see below) but a bit too long for the comments.* I though your problem was something like: [$(P|tree;p\_i=1|\Sigma T\_i)$](http://www-desir.lip6.fr/~durrc/query/search.php?a1=P||&a2=|||a2&a4=|||a4&a3=|||a3&b1=|||b1&b3=|%3Bp_i%3D1|&b7=|||b7&b4=|%3Btree|&b5=|||b5&b6=||&b8=|||b8&c=||sum+T_i&problem=P|p_i%3D1%3Btree|sum+T_i), where: * $P$ stands for identical homogeneous processors, * "tree" stands for precedence constraint the form of a tree, * $p\_i=1$ stands for the weight of the tasks is equal to 1, and * $\Sigma T\_i$ stands for minimizing the sum of tardiness (i.e., the number of tasks that finish after their deadline). If this is the case, then your problem is NP-hard: you can see it as a generalization of [Minimizing total tardiness on a single machine with precedence constraints](http://joc.journal.informs.org/content/2/4/346.short). Indeed this paper states that for multiple linear chains, it is NP-hard on a single processor. The easy transformation is to take the trees of the form one root, and linear chains starting from the root. However I am surprised because you seem to say that for the case of a *single* linear chain, you would use Dynamic Programming. I don't see why you would need DP, since it seems to me that when scheduling a single linear chain you do not have much choice because of the precedence constraints: only a single choice. So maybe I misunderstood your problem.
Why do we use the natural logarithm (ln) rather than log to base 10 when specifying functions in econometrics?
I think the natural logarithm is used because the exponential often appears in interest/growth calculations. If you are in continuous time and you are compounding interest, you end up with a future value of a certain sum equal to $F(t)=N e^{rt}$ (where $r$ is the interest rate and $N$ the nominal amount of the sum). Since you end up with an exponential in the calculation, the best way to get rid of it is to use the natural logarithm, and if you do the inverse operation, the natural log will give you the time needed to reach a certain growth. Also, the good thing about logarithms (natural or not) is that you can turn multiplications into additions. As for the mathematical explanation of why we end up with an exponential when compounding interest, you can find it here: <http://en.wikipedia.org/wiki/Continuously_compounded_interest#Periodic_compounding> Basically, you need to take the limit of an infinite number of interest payments per period, which ends up being the definition of the exponential. Even though continuous time is not widely used in real life (you pay your mortgage with monthly payments, not every second), that kind of calculation is often used by quantitative analysts.
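The limit is easy to see numerically (illustrative numbers only): compounding a 5% rate more and more often approaches the continuous-time value $Ne^{rt}$.

```
import math

N, r, t = 1000.0, 0.05, 10            # principal, annual rate, years (made up)
for n in (1, 12, 365, 100_000):       # compounding periods per year
    print(n, round(N * (1 + r / n) ** (n * t), 2))
print("continuous limit:", round(N * math.exp(r * t), 2))
```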
(There’s no need to write the algorithm, I just need help with the greedy choice). Problem: you are given bottles numbered 1 to n. Each bottle i has a capacity of Ci and currently contains Li. We want to pour water between the bottles so that as many bottles as possible will be filled (Li = Ci) while moving a minimal amount of water. Write a greedy algorithm that will print instructions on how to do so (pour x liters from bottle i into bottle j). Prove correctness of your algorithm, and give its time complexity. I’m having trouble solving this problem. We need to write a greedy algorithm, and so the solution is of the type: “take bottle with certain property x and pour as much as you can (until it’s empty or until the other bottle is full) into bottle with certain property y”. But all of the simple properties I have plugged in don’t seem to work and can be refuted with a counterexample. Any ideas?
Assume you have 100 empty one litre bottles and 50 filled two litre bottles. So what is your optimal solution, having 50 filled bottles by doing nothing or having 100 filled bottles by pouring 100 litres into the empty bottles? I assume the latter. You have a fixed amount of water. To have as many filled bottles as possible, you sort the bottles by capacity and find the largest n such that the n smallest bottles can be filled. There may be some water left, but not enough to fill bottle #n+1. Now comes the hard part: You may have many choices to pick the filled bottles. Say you have bottles filled with 100.999 litres total, and the smallest 200 bottles have a capacity of 1 litre to 1.010 litres total, other bottles are 2 litres or more. You may have a huge number of choices which 100 bottles to fill. I think this is equivalent to the knapsack problem, therefore NP-complete.
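The first, easy step (how many bottles can be full at all) is just a prefix-sum check after sorting by capacity; here is a small sketch, with names of my own choosing:

```
def max_fillable(capacities, levels):
    """The total water is fixed, so the most bottles that can possibly be
    full is the largest n such that the n smallest capacities fit within
    the total.  Choosing *which* bottles (and minimizing the water moved)
    is the hard part discussed above."""
    total = sum(levels)
    filled = 0
    for c in sorted(capacities):
        if total < c:
            break
        total -= c
        filled += 1
    return filled

print(max_fillable([2.0] * 50 + [1.0] * 100, [2.0] * 50 + [0.0] * 100))  # 100
```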
This is more of a 'meta' question as I cannot give a precise formulation of my question. Consider for example the category of total quasi-orders: we can then distinguish between a 'strict' order (where no two elements are equivalent) and a 'weak order' (where some elements may belong to the same equivalence class). It doesn't seem possible to define an intermediate notion of 'mild order' in this setting, so do you have any suggestions for other structures allowing a natural notion of 'mildness' which is well-behaved? I'd like to add a self-evident comment: in science the qualifier 'mild' can be applied to certain notions, e.g. 'mild necessity', 'mild difficulty' or 'mild formalism', which describe well the kind of question I'm asking. OTOH I guess we can't speak of a 'mildly correct' statement without resorting to statistics?
Complexity theory implicitly makes use of "mild" orders all the time between complexity classes — where there is a relation which is known in one direction, and *unknown* in another. We might define a "hazy order" $\mathscr R$ to be a class of quasi-orders $\{R\_j\}\_{j \in J}$ together with a class of forbidden relations $F$. The set of quasi-orders is upward-closed "except where forbidden": that is, for quasi-orders $R, R'$ disjoint from $F$ such that $\forall j,k: j\mathbin{R}k \implies j\mathbin{R'}k$, we have $R\in\mathscr R \implies R'\in\mathscr R$. Such a hazy order can be used to describe a single, definite quasi-order $R \subseteq A \times A$ by taking $\mathscr R = \{ R \}$ and $F = (A \times A) \smallsetminus R$. For such a hazy order $(\mathscr R, F)$ on a set $A$, we write\* * $j \leqslant\_{\mathscr R} k$ if $\forall R \in \mathscr R: j \mathbin R k$, and * $j <\_{\mathscr R} k$ if furthermore $\forall R \in \mathscr R: \neg(k \mathbin R j)$. Thus there is room for fuzziness in relations, where $j \leqslant\_{\mathscr R} k$, but neither $k \leqslant\_{\mathscr R} j$ nor $j <\_{\mathscr R} k$ obtain. Because one cannot show $k \leqslant\_{\mathscr R} j$, one would tend to treat the relation as strict when describing upper and lower bounds, but because one cannot show $j <\_{\mathscr R} k$ one cannot *rely* on the inequivalence of $j$ and $k$. At the same time, the hazy order $\{ R \}$ given by any particular quasi-order $R$ satisfies $j <\_{\{R\}} k \iff (j \leqslant\_{\{R\}} k) \mathbin{\&} \neg(k \leqslant\_{\{R\}} j)$, so describing it as a hazy order does not lead to any difference from the quasi-order $R$ itself. Such a hazy order describes uncertain quasi-orders, such as our current state of knowledge of complexity classes: we know $\mathsf {P \subseteq NP}$, but our current formal knowledge is compatible with either $\mathsf {NP \subseteq P}$ or $\mathsf {NP \not\subseteq P}$ (whereas we do know that $\mathsf {EXP \not\subseteq P}$ for example). We act conventionally as though $\mathsf{P \subset NP \subset NP^{NP} \subset \cdots}$ but acknowledge that it is conceivable that this ordering is weak rather than strict as a quasi-order on complexity classes (or rather the labels of them — which we can take as representing their *intensional* identities, as opposed to their *extensions* as sets of problems). A "mild" linear order such as the sort you describe (neither strict nor weak) could be one in which we consider some strict linear order $L \subset A \times A$, take the forbidden relations $F$ to be a proper subset of $(A \times A) \smallsetminus L$ (and in particular: one whose transitive closure as a relation does not contain the opposite relation $L^{\mathrm{op}}$ as a subset)\* and take $\mathscr R$ to be the upward closure of $\{L\}$ (i.e. under subset containment, among quasi-orders, subject to avoidance of $F$). ( \* **N.B.** These descriptions have been edited to correct errors involving the forbidden subset.)
If I do PCA on the whole dataset, I get 7 components that can explain 90% of the variance. If I split the dataset into 2 (sorted by time), the number of significant components in the first half goes to 5 (with 15 variables present in one or more components) and in the second half goes to 8 (with 21 variables present in one or more components). Can we infer that some of these variables become more significant in the latter half compared to the first half?
What I've seen is an interpolation of the data in order to match the length (and even the sampling) of the series. This was particularly used (successfully, I must say) in the classification of variable stars using PCA, where the data were the actual light curves of the stars. For more details, see [the paper of Deb & Singh (2009)](http://adsabs.harvard.edu/abs/2009A&A...507.1729D). I must add, however, that if you are going to do PCA on interpolated data, maybe Functional Data Analysis techniques are more suitable (I've been thinking of writing a paper with this method, in fact, and comparing it with the PCA approach).
I've got a few categorical predictors (like gender, ...) and now I want to build regression models. So I've made the categorical predictors numeric by, for example, mapping "female" --> 1 and "male" --> 0. But when I use methods like nearest neighbors regression I have to standardize all the predictors (for example the weights). What do I do here with the categorical variables (that were made numeric)? Do these also have to be standardized? This seems so weird. Silke
An excellent introductory paper is [Chib, Siddhartha, and Edward Greenberg. “Understanding the Metropolis-Hastings Algorithm.” *The American Statistician*, vol. 49, no. 4, 1995, pp. 327–335.](https://www.jstor.org/stable/2684568) [Free download](https://biostat.jhsph.edu/%7Emmccall/articles/chib_1995.pdf) A masterful and concise discussion of the theory is [Tierney, Luke. “Markov Chains for Exploring Posterior Distributions.” *The Annals of Statistics*, vol. 22, no. 4, 1994, pp. 1701–1728.](https://www.jstor.org/stable/2242477) [Free download](http://stat.rutgers.edu/home/rongchen/papers/tierney.pdf)
I'd like to know if there have been conjectures in TCS that long remained unproven and were later proven as an implication of another theorem, one that may have been easier to prove.
[Erdös and Pósa](https://www.renyi.hu/~p_erdos/1965-05.pdf) proved that for any integer $k$ and any graph $G$, either $G$ has $k$ disjoint cycles or there is a set $S$ of at most $f(k)$ vertices such that $G\setminus S$ is a forest (in their proof $f(k) \in O(k \cdot \log k)$). The Erdös-Pósa property of a fixed graph $H$ is known as the following (not a formal definition): a class of graphs $\mathcal{C}$ admits the Erdös-Pósa property if there is a function $f$ such that for every graph $H\in \mathcal{C}$, for any $k \in \mathbb{Z}$ and for any graph $G$, either there are $k$ disjoint isomorphic copies (w.r.t. minors or subdivisions) of $H$ in $G$, or there is a set of vertices $S\subseteq V(G)$ such that $|S|\le f(k)$ and $G\setminus S$ has no isomorphic copy of $H$. After Erdös and Pósa's result that the class of cycles admits this property, it was an open question to characterize the proper class $\mathcal{C}$. In [Graph Minors V](http://www.sciencedirect.com/science/article/pii/0095895686900304) it was proved that every graph either has bounded tree-width or contains a big grid as a minor, and with the grid theorem in hand they showed that the Erdös-Pósa property holds (for minors) if and only if $\mathcal{C}$ is a class of planar graphs. The problem is still open for subdivisions, though. But the proof of the theorem w.r.t. minors is relatively simple, and to the best of my knowledge there is no proof without using the grid theorem. Recent [results for digraphs](http://arxiv.org/abs/1603.02504) provide answers to long-standing open questions in the analogous area for digraphs. E.g., one very basic question was whether there is a function $f$ such that for any graph $G$ and integers $k,l$, we can either find a set $S\subseteq V(G)$ of at most $f(k+l)$ vertices such that $G-S$ has no cycle of length at least $l$, or there are $k$ disjoint cycles of length at least $l$ in $G$. This is only a special case, but for $l=2$ it was known as Younger's conjecture. Before that, Younger's conjecture had been proven by Reed et al. with a quite complicated approach. It's worth mentioning that there are still some quite non-trivial cases in digraphs. E.g., Theorem 5.6 in the above paper is just a positive extension of Younger's conjecture to a small class of weakly connected digraphs, but with the knowledge and mathematical tools that we have it's not trivial (or maybe we don't know a simple argument for it). Perhaps by providing a better characterisation of those graphs, there will be an easier way to prove it.
I have a series of physicians' claims submissions. I would like to perform cluster analysis as an exploratory tool to find patterns in how physicians bill based on things like Revenue Codes, Procedure Codes, etc. The data are all polytomous, and from my basic understanding, a latent class algorithm is appropriate for this kind of data. I am trying my hand at some of R's cluster packages, & specifically `poLCA` & `mclust` for this analysis. I'm getting alerts after running a test model on a sample of the data using `poLCA`. ``` > library(poLCA) > # Example data structure - actual test data has 200 rows: > df <- structure(list(RevCd = c(274L, 320L, 320L, 450L, 450L, 450L, 636L, 636L, 636L, 450L, 450L, 450L, 301L, 305L, 450L, 450L, 352L, 301L, 300L, 636L, 301L, 450L, 636L, 636L, 307L, 450L, 300L, 300L, 301L, 301L), PlaceofSvc = c(23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L), TypOfSvc = c(51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L), FundType = c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), ProcCd2 = c(1747L, 656L, 656L, 1375L, 1376L, 1439L, 1623L, 1645L, 1662L, 176L, 1374L, 1376L, 958L, 1032L, 1368L, 1374L, 707L, 960L, 347L, 1662L, 859L, 1375L, 1654L, 1783L, 882L, 1440L, 332L, 332L, 946L, 946L)), .Names = c("RevCd", "PlaceofSvc", "TypOfSvc", "FundType", "ProcCd2"), row.names = c(1137L, 1138L, 1139L, 1140L, 1141L, 1142L, 1143L, 1144L, 1145L, 1146L, 1147L, 1945L, 1946L, 1947L, 1948L, 1949L, 1950L, 1951L, 1952L, 1953L, 1954L, 1955L, 1956L, 1957L, 1958L, 1959L, 2265L, 2266L, 2267L, 2268L), class = "data.frame") > clust <- poLCA(cbind(RevCd, PlaceofSvc, TypOfSvc, FundType, ProcCd2)~1, df, nclass = 3) ========================================================= Fit for 3 latent classes: ========================================================= number of observations: 200 number of estimated parameters: 7769 residual degrees of freedom: -7569 maximum log-likelihood: -1060.778 AIC(3): 17659.56 BIC(3): 43284.18 G^2(3): 559.9219 (Likelihood ratio/deviance statistic) X^2(3): 33852.85 (Chi-square goodness of fit) ALERT: number of parameters estimated ( 7769 ) exceeds number of observations ( 200 ) ALERT: negative degrees of freedom; respecify model ``` My novice assumption is that I need to run a greater number of iterations before I can get results that are robust? e.g. "...it is essential to run poLCA multiple times until you can be reasonably certain that you have found the parameter estimates that produce the global maximum likelihood solution." (<http://www.sscnet.ucla.edu/polisci/faculty/lewis/pdf/poLCA-JSS-final.pdf>). Alternatively, perhaps certain variables, particularly CPT & Revenue Codes, have too many unique values, and that I need to aggregate these variables into higher level categories to reduce the number of parameters? When I run the model using package `mclust`, which optimizes the model based on BIC, I don't get any such alert. ``` > library(mclust) > clustBIC <- mclustBIC(df) > summary(clustBIC, data = df) classification table: 1 2 141 59 best BIC values: VEV,2 VEV,3 EEV,3 -4562.286 -4706.190 -5655.783 ``` If anyone can shed a bit of light on the above alerts, it would be much appreciated. 
I was also planning on using the script found in the `poLCA` documentation to run multiple iterations of the model until the log-likelihood is maximized. However it's computationally intensive and I'm afraid the process will crash before I have a chance to post this. Sorry in advance if I've missed something obvious here; I'm new to cluster analysis.
It depends on what sense of a correlation you want. When you run the prototypical Pearson's product moment correlation, you get a measure of the strength of association and you get a test of the significance of that association. More typically however, the [significance test](http://en.wikipedia.org/wiki/Significance_testing) and the measure of [effect size](http://en.wikipedia.org/wiki/Effect_size) differ. **Significance tests:** * Continuous vs. Nominal: run an [ANOVA](http://en.wikipedia.org/wiki/Anova). In R, you can use [?aov](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/aov.html). * Nominal vs. Nominal: run a [chi-squared test](http://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test#Test_of_independence). In R, you use [?chisq.test](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/chisq.test.html). **Effect size** (strength of association): * Continuous vs. Nominal: calculate the [intraclass correlation](http://en.wikipedia.org/wiki/Intraclass_correlation). In R, you can use [?ICC](http://personality-project.org/r/html/ICC.html) in the [psych](http://cran.r-project.org/web/packages/psych/index.html) package; there is also an [ICC](http://cran.r-project.org/web/packages/ICC/index.html) package. * Nominal vs. Nominal: calculate [Cramer's V](http://en.wikipedia.org/wiki/Cram%C3%A9r%27s_V_%28statistics%29). In R, you can use [?assocstats](http://www.rdocumentation.org/packages/vcd/functions/assocstats) in the [vcd](http://cran.r-project.org/web/packages/vcd/index.html) package.
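If you want to see the same quantities computed outside R, here is a short Python illustration with made-up data (the formula used for Cramer's V is $\sqrt{\chi^2 / (n(\min(r,c)-1))}$):

```
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Continuous vs. nominal: one-way ANOVA as the significance test.
groups = [rng.normal(loc=m, scale=1.0, size=30) for m in (0.0, 0.3, 0.8)]
print(stats.f_oneway(*groups))

# Nominal vs. nominal: chi-squared test, with Cramer's V as the effect size.
table = np.array([[20, 15], [10, 30]])
chi2, p, dof, expected = stats.chi2_contingency(table)
v = np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))
print(p, v)
```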
It's well-known that there are tons of amateurs--myself included--who are interested in the P vs. NP problem. There are also many amateurs--myself still included--who have made attempts to resolve the problem. One problem that I think the TCS community suffers from is a relatively high interested-amateur-to-expert ratio; this leads to experts being inundated with proofs that P != NP, and I've read that they are frustrated and overwhelmed, quite understandably, by this situation. Oded Goldreich has [written](http://www.wisdom.weizmann.ac.il/~oded/p-vs-np.html) on this issue, and indicated his own refusal to check proofs. At the same time, speaking from the point of view of an amateur, I can assert that there are few things more frustrating for non-expert-level TCS enthusiasts of any level of ability than generating a proof that just *seems* right, but lacking both the ability to find the error in the proof yourself and the ability to talk to anyone who can spot errors in your proof. Recently, R. J. Lipton [wrote](http://rjlipton.wordpress.com/2011/01/08/proofs-by-contradiction-and-other-dangers/) on the problem of amateurs who try to get taken seriously. I have a proposal for resolving this problem, and my question is whether or not others think it reasonable, or if there are problems with it. I think experts should charge a significant but reasonable sum of money (say, 200 - 300 USD) in exchange for agreeing to read proofs in detail and find specific errors in them. This would accomplish three things: 1. Amateurs would have a clear way to get their proofs evaluated and taken seriously. 2. Experts would be compensated for their time and energy expended. 3. There would be a high enough cost imposed on proof-checking that the number of proofs that amateurs submit would go down dramatically. Again, my question is whether or not this is a reasonable proposal. Obviously, I have no ability to cause experts to adopt what I suggest; however, I'm hoping that experts will read what I've written and decide that it's reasonable.
For a few months at the end of my senior year of college and the beginning of my first year of graduate school I made $60/hour correcting an amateur's incorrect proofs of Fermat's Last Theorem. In this case the person was an academic in another field, so he had a reasonable understanding of the value of expert time. It was good experience all around, I made a thousand dollars at a time where I didn't have any other good sources of income, and he learned the errors he made in several drafts. I think for people who are making a genuine effort and who are willing to pay good money, it shouldn't be hard to find qualified undergraduates or young graduate students who need some money.
[This guy](http://www.johndcook.com/blog/2012/07/31/why-computers-were-invented/) asserts: > > I’ll say it — the computer was invented in order to help to clarify … a philosophical question about the foundations of mathematics. > (This problem being Entscheidungsproblem - The Decision Problem) > > > [The reference here](http://en.wikipedia.org/wiki/Entscheidungsproblem) states that the Church-Turing thesis was attempting to answer this question. My question is - is it true that modern computers are a byproduct of trying to solve 'The Decision Problem'? (My intuition told me that modern computers were more a [byproduct of trying to break Nazi encryption codes](http://en.wikipedia.org/wiki/Bletchley_Park#Recruitment)). (perhaps with some [pre-war German influence](http://en.wikipedia.org/wiki/Konrad_Zuse#Pre-World_War_II_work_and_the_Z1)).
I can see his point, but I think he's really (deliberately?) confusing computation (and the mathematics thereof) and computers. A computer is certainly a device for performing computation, but what Church and Turing created was a (well, two, but they're "the same") theoretical (read mathematical) model of the process of computation. That is, they define a mathematical system which (if you believe the Church-Turing thesis) captures what it is possible to compute on any machine that can perform mechanical computation (mechanical in the sense that it can be automated, and yes, that's a little hand wavy, but that's another story). Computers don't work like Turing Machines (or the Lambda calculus, which doesn't even pretend to be a machine). Bits of them look kind of similar, and indeed Turing does play an important role in the development of modern computers, but they're not a byproduct of the maths, any more than aeroplanes are a byproduct of the dynamics that describe airflow across their wings.
How can one prove that there exist two different programs A and B such that A prints the code of B and B prints the code of A, without giving actual examples of such programs?
This can be formulated as an instance of [minimum-cost flow problem](https://en.wikipedia.org/wiki/Minimum-cost_flow_problem). Have a graph with one vertex per agent, one vertex per task, and one vertex per category. Now add edges: * Add an edge from the source to each agent, with capacity 1 and cost 0. * Add an edge from each agent to each task, with capacity 1 and cost according to the cost of that assignment. * Add an edge from each task to the category it is part of, with capacity 1 and cost 0. * Add an edge from each category to the sink, with capacity given by the maximum number of tasks assignable in that category and cost 0. Now find the minimum-cost flow of size $t$, where $t$ is the number of tasks. There are polynomial-time algorithms for that.
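A sketch of that construction with an off-the-shelf solver (the input format here, `costs`, `category_of`, `category_cap`, is my own invention for illustration; integer costs are recommended for the network simplex routine):

```
import networkx as nx

def assign_tasks(costs, category_of, category_cap):
    """costs[a][t]: cost of giving task t to agent a;
    category_of[t]: the category task t belongs to;
    category_cap[c]: max number of tasks category c may receive."""
    agents, tasks = list(costs), list(category_of)
    G = nx.DiGraph()
    G.add_node("src", demand=-len(tasks))      # one unit of flow per task
    G.add_node("sink", demand=len(tasks))
    for a in agents:
        G.add_edge("src", ("agent", a), capacity=1, weight=0)
        for t in tasks:
            G.add_edge(("agent", a), ("task", t), capacity=1, weight=costs[a][t])
    for t in tasks:
        G.add_edge(("task", t), ("cat", category_of[t]), capacity=1, weight=0)
    for c, cap in category_cap.items():
        G.add_edge(("cat", c), "sink", capacity=cap, weight=0)
    flow = nx.min_cost_flow(G)                 # raises if no feasible assignment exists
    return [(a, t) for a in agents for t in tasks
            if flow[("agent", a)].get(("task", t), 0) == 1]
```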
Consider the language: $$CLIQUE = \left\{\langle G,k\rangle \ |\ \text{ $G$ is a graph containing a clique of size at least $k$ } \right\}$$

> Suppose there's a polynomial-time algorithm for $CLIQUE$. I need to show a polynomial-time algorithm for finding a clique of size $k$.

Now, the idea is pretty easy if there's only one clique in the graph - you remove each vertex $v\_i$ and query $CLIQUE(G\_i, k)$. If there are two cliques in the graph this algorithm *cannot* be applied, since no matter which vertex is removed there will always be a clique of size $k$. An alternative would be checking each one of the $\binom{m}{k}$ candidate vertex subsets, but if $k = n/2$, for example, that wouldn't be a polynomial-time algorithm anymore. So my question is, can we solve this problem for the general case where there might be multiple cliques?
Keep removing vertices until the graph no longer contains a clique of size $k$, and let $v$ be the last vertex that you removed. It follows that there is some $k$-clique which contains $v$. Remove all vertices from the graph other than neighbors of $v$ (so $v$ itself is also removed), and recursively find a $(k-1)$-clique in the new graph. Add $v$ to this clique to create the desired $k$-clique. The algorithm can also be formulated iteratively:

1. Let $C = \emptyset$ (this will be the clique).
2. Let $\ell = k$ (the current size of the clique).
3. Go over all vertices $v$ in the graph:
   * Check if after removing $v$ from the graph, the new graph still contains an $\ell$-clique.
   * If so, continue to the next vertex.
   * Otherwise, add $v$ to $C$, decrease $\ell$, and remove from the graph all vertices other than the neighbors of $v$.
4. Return $C$.
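A short sketch of this search, assuming `networkx` for the graph and a placeholder `has_clique(G, k)` standing in for the assumed polynomial-time CLIQUE oracle (here a brute-force stand-in, only for illustration; the input graph is assumed to contain a $k$-clique):

```python
from itertools import combinations
import networkx as nx

def has_clique(G, k):
    # Stand-in oracle: exhaustive search, purely for illustration.
    if k <= 0:
        return True
    return any(all(G.has_edge(u, v) for u, v in combinations(c, 2))
               for c in combinations(G.nodes, k))

def find_clique(G, k):
    G = G.copy()
    clique, ell = [], k
    while ell > 0:
        for v in list(G.nodes):
            H = G.copy()
            H.remove_node(v)
            if has_clique(H, ell):
                G = H                              # v is not needed, discard it
            else:
                clique.append(v)                   # v lies in every remaining ell-clique
                G = G.subgraph(G.neighbors(v)).copy()
                ell -= 1
                break
    return clique
```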
The algorithm takes in an integer $n$ and outputs the $n$th number in the Fibonacci sequence ($F\_n$). The sequence starts with $F\_0$. I am trying to prove the correctness assuming valid input:

```
int Fib(int n) {
    int i = 0;
    int j = 1;
    int k = 1;
    int m = n;
    while (m >= 3) {
        m = m - 3;
        i = j + k;
        j = i + k;
        k = i + j;
    }
    if (m == 0) {
        return i;
    } else {
        if (m == 1) {
            return j;
        } else {
            return k;
        }
    }
}
```

For a reminder, a loop invariant is a claim which holds every time just before the loop condition is checked. It holds even when the loop condition is false. I've established and proven one loop invariant: that $m \geq 0$. This helps me show that the algorithm terminates, because when the loop exits $m$ is either 0, 1, or 2. However, **I'm stuck on finding another loop invariant that would help me show that the algorithm produces the correct Fibonacci number**. One pattern I found is that whether i, j, or k is returned is determined by the initial value of $m \bmod 3$. Depending on the remainder, either i, j, or k is returned. I tried expanding this idea further but led myself to a dead end. I'm thinking I need to find some way to express $F\_n = F\_{n-1} + F\_{n-2}$ in terms of i, j and k in order to prove that the program outputs correctly. Am I on the right track with remainders or is there something I'm missing?
This is essentially a [Segment tree](https://en.wikipedia.org/wiki/Segment_tree), which is a data structure that augments an array with a binary tree as you describe such that:

* You have fast set and get at any index
* You have fast "aggregate" queries on ranges
* You can support fast update queries on ranges, for some combinations of updates and queries

The $j$th node at height $k$ in the tree "summarizes" a subarray $[j\*2^k, (j+1)\*2^k)$ of the original array. Since each element of the array appears in only logarithmically many such subarrays, we can do updates in $O(\log n)$ time. The range queries can use any associative operation. In your example the operation is $\max$, but other examples include sum, product, even standard deviation (via sum and sum of squares).

---

I originally called this a Fenwick Tree (aka Binary Indexed Tree), which is a similar structure but which compresses the tree into exactly $n$ storage with no overhead (but loses direct access to the original array).
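A minimal segment-tree sketch for range-max queries, along the lines described above (bottom-up iterative version; works for any array length):

```python
class SegmentTree:
    def __init__(self, data):
        self.n = len(data)
        self.tree = [float("-inf")] * (2 * self.n)
        self.tree[self.n:] = data                      # leaves hold the array itself
        for i in range(self.n - 1, 0, -1):             # internal node i summarizes its two children
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, value):                        # set data[i] = value in O(log n)
        i += self.n
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

    def query(self, lo, hi):                           # max of data[lo:hi] in O(log n)
        res = float("-inf")
        lo += self.n
        hi += self.n
        while lo < hi:
            if lo & 1:
                res = max(res, self.tree[lo]); lo += 1
            if hi & 1:
                hi -= 1; res = max(res, self.tree[hi])
            lo //= 2
            hi //= 2
        return res
```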
**The Question:** Are there any good examples of [reproducible research](http://reproducibleresearch.net/index.php/Main_Page) using R that are freely available online? **Ideal Example:** Specifically, ideal examples would provide: * The raw data (and ideally meta data explaining the data), * All R code including data import, processing, analyses, and output generation, * Sweave or some other approach for linking the final output to the final document, * All in a format that is easily downloadable and compilable on a reader's computer. Ideally, the example would be a journal article or a thesis where the emphasis is on an actual applied topic as opposed to a statistical teaching example. **Reasons for interest:** I'm particularly interested in applied topics in journal articles and theses, because in these situations, several additional issues arise: * Issues arise related to data cleaning and processing, * Issues arise related to managing metadata, * Journals and theses often have style guide expectations regarding the appearance and formatting of tables and figures, * Many journals and theses often have a wide range of analyses which raise issues regarding workflow (i.e., how to sequence analyses) and processing time (e.g., issues of caching analyses, etc.). Seeing complete working examples could provide good instructional material for researchers starting out with reproducible research.
The journal Biostatistics has an Associate Editor for Reproducibility, and all its articles are marked: > > **Reproducible Research** > > > Our reproducible research policy is for papers in the journal to be > kite-marked D if the data on which they are based are freely > available, C if the authors’ code is freely available, and R if both > data and code are available, and our Associate Editor for > Reproducibility is able to use these to reproduce the results in the > paper. Data and code are published electronically on the journal’s > website as Supplementary Materials. > > > <http://biostatistics.oxfordjournals.org/> How good an idea is that? <http://biostatistics.oxfordjournals.org/content/12/1/18.abstract> comes with an R package in the supplementaries that does the analysis - haven't tried it myself yet. Also, can't find out where the openness rating is specified. Am emailing the associate editor with some questions... [edit] Roger Peng the associate editor tells me there probably is no way of finding the reproducible papers without getting the PDF. He pointed me at this one which has a nice big 'R' on it (which does not mean R-rated like movies) for reproducibility: <http://biostatistics.oxfordjournals.org/content/10/3/409.abstract> Of course the journal itself isn't free... #fail Barry
Is there any programming language in which any equivalent program has a unique normal representation, and that normal representation is decidable? In other words, suppose A and B are programs written in that hypothetical language. Suppose, too, that for any input x, A(x) = B(x) - that is, those programs are equivalent. There should, then, be an algorithm Z for which Z(A) = Z(B). Finally, that language should be able to encode boolean logic. Is there such a language?
What you are asking for does not exist for a general-purpose programming language (by which we mean that the language can simulate Turing machines, and that Turing machines can simulate the language). Let me first recall the proof, and then turn the question around to discover something interesting. We have to make your question just a bit more precise. Let us suppose that when you speak of inputs and outputs you mean strings, and that your programs are total (defined on all inputs). Now suppose there were an algorithm $Z$ which maps programs to programs (that is, strings to strings) such that, given any two valid programs $A$ and $B$ which map strings to strings, $Z(A)$ and $Z(B)$ are defined and $$Z(A) = Z(B) \iff \forall x \in \mathtt{string} . A(x) = B(x).$$ Notice that I did not even require that $Z(A)$ be a program equivalent to $A$, it can be any string whatsoever; the important thing is that it maps $A$ and $B$ to the same string if, and only if, they represent equivalent programs. We can now decide the halting problem as follows. Let $A$ be a program which always outputs the string 0, i.e., $A(x) = 0$ for all $x \in \mathtt{string}$, and let $x\_0 = Z(A)$. Consider any Turing machine $M$ and an input $y$. Because we assumed our language is Turing-complete, from a description of $M$ and a given input $y$ we can construct a program $B\_{M,y}$, which computes as $$B\_{M,y}(n) = \begin{cases} 1 & \text{if $M(y)$ halts in fewer than $n$ steps of simulation}\\\\ 0 & \text{otherwise} \end{cases}$$ Notice that $B\_{M,y}$ and $A$ are equivalent if, and only if, $M(y)$ diverges. But now we can decide whether $M(y)$ halts: if $Z(B\_{M,y}) = x\_0$ then $M(y)$ does not halt, otherwise it halts. The above argument shows that programs of type $\mathtt{string} \to \mathtt{string}$ do not have canonical codes. How about other kinds of programs? Well, in some cases we obviously can produce canonical codes. For instance, a program $A$ of type $\mathtt{bool} \to \mathtt{bool}$ can be represented canonically by the list $[A(\mathtt{false}), A(\mathtt{true})]$, from which the corresponding $Z$ can be easily constructed. If we replace $\mathtt{bool}$ with some other finite datatype, we also obtain canonical codes by simply listing the values of $A$. But did you know that there are canonical codes for programs of type $(\mathtt{nat} \to \mathtt{bool}) \to \mathtt{bool}$? That is, given a program $A$ which takes as input infinite binary streams and outputs a bit, we *can* compute a corresponding canonical code $Z(A)$. See my blog post on [juggling double exponentials](http://math.andrej.com/2009/10/12/constructive-gem-double-exponentials/) where I explicitly construct $Z$. We could also ask whether it is possible to make Turing machines somehow more powerful so that we *can* compute canonical codes, and thereby solve the Halting problem. Well, adding an oracle will not help because exactly the same reasoning goes through. But if we use [infinite-time Turing machines](http://arxiv.org/pdf/math/9808093.pdf) (ITTMs), then canonical codes for maps $\mathtt{string} \to \mathtt{string}$ *are* computable. The ITTMs therefore *can* solve the Halting problem for ordinary Turing machines, but they still cannot solve their own halting problem (which is not reducible to comparison of two functions $\mathtt{string} \to \mathtt{string}$). See my paper on [embedding $\mathbb{N}^{\mathbb{N}}$ into $\mathbb{N}$](http://math.andrej.com/2011/06/15/constructive-gem-an-injection-from-baire-space-to-natural-numbers/) for details. P.S.
Apologies for blatant self-propaganda.
I need to choose a model for an unsupervised machine learning problem. There are 4 clusters in 3D space. These are my requirements:

* I will run the same model multiple times with different training data (it is for a real-time application).
* The size of the training data is expected to be around 400 points.
* I can assume that points for each of the clusters are drawn from a Gaussian distribution. This does not necessarily have to be built into the model.
* I need to get 4 points that represent the "centers" of the clusters.
* At prediction time, for each new point I need some kind of number for each cluster that represents the probability of belonging to that cluster.
* I will have a lot of outliers, assume around 30%.

I have tried a Gaussian mixture model, and it works very well when I don't have outliers. Unfortunately, this model is very sensitive to outliers. Any suggestions how to handle the outliers with a Gaussian mixture model? Or should I go with a completely different model?
Here are a couple suggestions, given that Gaussian mixture models work well for you in the absence of outliers. To increase robustness to outliers, you could use a trimmed estimator for Gaussian mixture models instead of fitting with the standard EM algorithm. Some relevant papers: * [Neykov et al. (2007)](https://pdfs.semanticscholar.org/5df1/e9277ef2b34c892a649fb805645870811758.pdf). Robust fitting of mixtures using the trimmed likelihood estimator. * [Gallegos and Ritter (2009)](http://sankhya.isical.ac.in/search/71a2/A08017-final.pdf). Trimmed ML Estimation of Contaminated Mixtures. Instead of Gaussian mixture models, you could also consider student T mixture models. This will give the same properties you want (e.g. ability to compute cluster centroids and membership probabilities). Student T distributions have heavier tails than Gaussians, which increases robustness to outliers. Some relevant papers: * [Peel and McLachlan (2000)](https://people.smp.uq.edu.au/GeoffMcLachlan/pm_sc00.pdf). Robust mixture modelling using the t distribution. * [Svensen and Bishop (2005)](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/bishop-robust-mixture-neurocomputing-04.pdf). Robust Bayesian Mixture Modelling. * [Archambeau and Verleysen (2007)](https://pdfs.semanticscholar.org/0863/8752f143a9f94ed638d8ba12f47f98c20b94.pdf). Robust Bayesian clustering.
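For a quick baseline, a crude trimming heuristic with scikit-learn's `GaussianMixture` (assumed available) sometimes helps: fit once, discard roughly the 30% of points with the lowest likelihood, and refit. This is only a rough stand-in for the trimmed-likelihood and t-mixture methods cited above, not an implementation of them.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_trimmed_gmm(X, n_components=4, trim_frac=0.3, seed=0):
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X)
    loglik = gmm.score_samples(X)                       # per-point log-likelihood
    keep = loglik >= np.quantile(loglik, trim_frac)     # drop the least likely points
    return GaussianMixture(n_components=n_components, random_state=seed).fit(X[keep])

# gmm.means_ gives the 4 cluster centers; gmm.predict_proba(new_points)
# gives the membership probabilities required at prediction time.
```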
I'm looking for an algorithm that can deduce a set of rules, based on a dataset of "training documents", that can be applied to classify a new unseen document. The problem is that I need these rules to be viewable by the user in the form of some string representation. For example, the algorithm found that documents have a minimum word count of 1000 and that there are 4 citations in each document. The key is that these rules must be deduced by an algorithm. An example of this in practice would be: **Document 1 contains 890 words and only 2 citations** I need it to return something like: **- You should add more words to make it better - Add more citations to prove your point**
A good rule of thumb is to look at the [level of measurement](https://en.wikipedia.org/wiki/Level_of_measurement) of the target/response variable. If the response is measured on a nominal scale, the problem is a classification problem. Values on a nominal scale are, for example, labels of categories where the categories have no natural order, like political parties in political science, species in biology, or parts-of-speech in grammar. If the response is measured on a ratio or interval scale, you have a regression problem. Values on an interval scale are values where you can compare the degree of difference between values, but not the ratio between them, for instance temperature (on Fahrenheit or Celsius scales, but not Kelvin), or date values in a calendar. Values on ratio scales can be compared both with regard to degree of difference and ratio, as with most physical quantities such as mass, velocity, or temperature on the Kelvin scale. Ordinal scales are more difficult to place in either corner. I would generally say that you have a ranking problem with an ordinal response. However, the ranking problem can be approached using both classification, for instance using comparators, and regression, like ordinal regression. Values on ordinal scales are ordered, or ranked, but you can't say anything meaningful about the degree of difference between any two values, for instance the ranking of racing drivers in a race.
A Vector Addition System (VAS) is a finite set of *actions* $A \subset \mathbb{Z}^d$. $\mathbb{N}^d$ is the set of *markings*. A run is a non-empty word of markings $m\_0 m\_1\dots m\_n$ s.t. $\forall i \in \{0, \dots, n-1\}, m\_{i+1}-m\_i \in A$. If such a word exists we say that $m\_n$ is *reachable* from $m\_0$. The problem of reachability for VASs is known to be decidable (but its complexity is an open problem). Now let us assume that a finite set of forbidden markings (the *obstacles*) is given. I would like to know if the problem of reachability is still decidable. Intuitively, the finite set of obstacles should interfere with paths only locally, so the problem should remain decidable. But it does not seem trivial to prove it. **EDIT**. I will keep @Jérôme's answer as the accepted one, but I would like to add a follow-up question: what if the set of markings is $\mathbb{Z}^d$?
The idea is based on a discussion I had with Grégoire Sutre this afternoon. The problem is decidable as follows. A Petri net $T$ is a finite set of pairs in $\mathbb{N}^d\times\mathbb{N}^d$ called transitions. Given a transition $t=(\vec{u},\vec{v})$, we denote by $\xrightarrow{t}$ the binary relation defined on the set of configurations $\mathbb{N}^d$ by $\vec{x}\xrightarrow{t}\vec{y}$ if there exists a vector $\vec{z}\in\mathbb{N}^d$ such that $\vec{x}=\vec{u}+\vec{z}$ and $\vec{y}=\vec{v}+\vec{z}$. We denote by $\xrightarrow{T}$ the one step reachability relation $\bigcup\_{t\in T}\xrightarrow{t}$. The reflexive and transitive closure of this relation is denoted by $\xrightarrow{T^\*}$. Let $\leq$ be the classical componentwise partial order over $\mathbb{N}^d$ defined by $\vec{u}\leq \vec{x}$ if there exists $\vec{z}\in\mathbb{N}^d$ such that $\vec{x}=\vec{u}+\vec{z}$. The upward closure of a set $\vec{X}$ of $\mathbb{N}^d$ is the set ${\uparrow}\vec{X}$ of vectors $\{\vec{v}\in\mathbb{N}^d \mid \exists \vec{x}\in\vec{X}.\,\vec{x}\leq\vec{v}\}$. The downward closure of a set $\vec{X}$ is the set ${\downarrow}\vec{X}$ of vectors $\{\vec{v}\in\mathbb{N}^d \mid \exists \vec{x}\in\vec{X}.\,\vec{v}\leq\vec{x}\}$. Notice that if $\vec{U}={\uparrow}\vec{B}$ for some finite set $\vec{B}$ of $\mathbb{N}^d$ and if $T$ is a Petri net, we can compute a new Petri net $T\_{\vec{B}}$ such that for every pair of configurations $\vec{x},\vec{y}$, we have $\vec{x}\xrightarrow{T}\vec{y}$ and $\vec{x},\vec{y}\in\vec{U}$ if, and only if, $\vec{x}\xrightarrow{T\_{\vec{B}}}\vec{y}$. In fact, if $t=(\vec{u},\vec{v})$ is a transition, then for each $\vec{b}\in\vec{B}$, let $t\_{\vec{b}}=(\vec{u}+\vec{z},\vec{v}+\vec{z})$ where $\vec{z}$ is the vector in $\mathbb{N}^d$ defined componentwise by $\vec{z}(i)=\max\{\vec{b}(i)-\vec{u}(i),\vec{b}(i)-\vec{v}(i),0\}$ for every $1\leq i\leq d$. Notice that $T\_{\vec{B}}=\{t\_{\vec{b}} \mid t\in T,\,\vec{b}\in\vec{B}\}$ satisfies the requirement. Now, assume that $T$ is a Petri net and $\vec{O}$ is the set of obstacles. We introduce the finite set $\vec{D}={\downarrow}\vec{O}$. Observe that we can compute effectively a finite set $\vec{B}$ of $\mathbb{N}^d$ such that ${\uparrow}\vec{B}=\mathbb{N}^d\backslash\vec{D}$. Let $R$ be the binary relation defined over $\mathbb{N}^d\backslash \vec{O}$ by $\vec{x} R \vec{y}$ if $\vec{x}=\vec{y}$, or there exist $\vec{x}',\vec{y}'\in \mathbb{N}^d\backslash \vec{O}$ such that $\vec{x}\xrightarrow{T}\vec{x}'\xrightarrow{T\_{\vec{B}}^\*}\vec{y}'\xrightarrow{T}\vec{y}$. Now, just observe that if there exists a run from the initial configuration $\vec{x}$ to the final one $\vec{y}$ that avoids the obstacles in $\vec{O}$, then there exists one that avoids the obstacles in $\vec{O}$ and that passes through configurations of $\vec{D}\backslash \vec{O}$ at most as many times as the cardinality of that set. Hence, the problem reduces to selecting non-deterministically distinct configurations $\vec{c}\_1,\ldots,\vec{c}\_n$ in $\vec{D}\backslash \vec{O}$, fixing $\vec{c}\_0$ as the initial configuration $\vec{x}$ and $\vec{c}\_{n+1}$ as the final one $\vec{y}$, and checking that $\vec{c}\_j R \vec{c}\_{j+1}$ for every $j$. This last problem reduces to classical reachability questions for Petri nets.
I saw a joke on twitter today that got me thinking about how to perform a time complexity analysis of this algorithm such that you can express that the worst case is dependent on the input **value** in addition to the input size. The joke algorithm was this *sleep sort* algorithm in JavaScript:

```javascript
const arr = [20, 5, 100, 1, 90, 200, 40, 29]
for (let item of arr) {
  setTimeout(() => console.log(item), item)
}
```

```
// Console Output
1
5
20
29
40
90
100
200
```

If we were to describe its Time Complexity and only took into consideration the size of the input, it would be `O(n)`. But from a practical standpoint that wouldn't be really accurate, as the worst-case time of the implementation is heavily dependent on the actual value of each array element, so is it possible to convey this in a time complexity notation? Is there such a thing as `O(max(n) + n)`, for example?
A simple example of an undecidable class of mathematical statements is whether multivariate integer polynomials have natural roots. This means that we have an expression $E(n\_0,\ldots,n\_k)$ built from natural number constants, natural number variables $n\_0,\ldots,n\_k$, addition, subtraction and multiplication. We then want to know whether a solution for $E(n\_0,\ldots,n\_k) = 0$ exists. A computer can in principle find a solution if there is one, just by exhaustively searching through all candidates, plugging them in and evaluating the expression. However, there is no general procedure to rule out the existence of a solution. Further reading: <https://en.wikipedia.org/wiki/Diophantine_set>
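A small illustrative sketch of the one-sided search: enumerate candidate tuples of naturals by increasing total sum and report a root if one is found. The polynomial used below is just an arbitrary example.

```python
from itertools import product

def find_root(E, k, max_total=50):
    # Enumerate k-tuples of naturals by increasing sum; stop if E evaluates to 0.
    for total in range(max_total + 1):
        for cand in product(range(total + 1), repeat=k):
            if sum(cand) == total and E(*cand) == 0:
                return cand
    return None          # no root found *so far* -- we can never conclude "no root exists"

print(find_root(lambda x, y: x * x - 2 * y - 4, k=2))   # finds (2, 0)
```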
I am designing an automaton which determines whether a binary number is divisible by 3: $$ \{x \in \{0, 1\}^\* \mid \text{$x$ represents a multiple of three in binary} \} $$ | | 0 | 1 | | --- | --- | --- | | 0F | 0 | 1 | | 1 | 2 | 0 | | 2 | 1 | 2 | [![](https://i.stack.imgur.com/yklT4.jpg)](https://i.stack.imgur.com/yklT4.jpg) This is the transition state diagram of the automaton, whose states are $0,1,2$; the states are drawn from left to right. But here the automaton reads the input from left to right. So at first I didn't think about the order (left to right or right to left); I tried to design an automaton which reads the binary from right to left but after much time I couldn't design any automaton which reads the binary right to left and tells whether it is divisible by 3. Can someone help me out to design such an automaton, or is it possible to design an automaton which reads from right to left and can tell whether its input is divisible by 3?
When you read a number $b\_0 \ldots b\_{n-1}$ from the LSB to the MSB, its remainder modulo 3 is \begin{align} &b\_0 + 2b\_1 + 4b\_2 + 8b\_3 + 16b\_4 + 32b\_5 + \cdots \bmod 3 \\ = \, &b\_0 - b\_1 + b\_2 - b\_3 + b\_4 - b\_5 + \cdots \end{align} In other words, when reading bits with even indices, the remainder modulo 3 increases by the bit read; and when reading bits with odd indices, the remainder modulo 3 decreases by the bit read. To implement this using a DFA, you need to keep track both of the remainder modulo 3 and of the parity of the index of the current bit. In total, you will need 6 states, 2 of which will be accepting. --- As Hendrik Jan mentions in the comments, in order to know whether the number is divisible by 3, we don't actually have to maintain the remainder modulo 3. Instead, we could compute the remainder of $(-1)^{|x|-1}x$, where $x$ is the input, since this remainder is zero iff the remainder of $x$ is zero. The advantage is that the new remainder is $$ (-1)^{n-1} (b\_0 - b\_1 + \cdots + (-1)^{n-1} b\_{n-1}) \bmod 3 = b\_{n-1} - b\_{n-2} + \cdots + (-1)^{n-1} b\_0 \bmod 3, $$ which is just the remainder of the *reverse* of the input. So the DFA actually works whichever way you read the input — LSB to MSB or MSB to LSB. --- More generally, the regular languages are closed under reversal. One way to see it is that if you have a DFA for your language, you can convert it to an NFA for the reversed language by changing the direction of the arrows and switching initial and final states; you can then determinize it to produce a DFA, if you so wish.
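A small sketch of the 6-state construction described above: track the pair (remainder mod 3, parity of the bit index) while reading the input LSB-first; the two states with remainder 0 are accepting. The check at the end is just a sanity test against ordinary arithmetic.

```python
def divisible_by_3_lsb_first(bits):        # bits[0] is the least significant bit
    rem, parity = 0, 0
    for b in bits:
        rem = (rem + b if parity == 0 else rem - b) % 3   # +b on even positions, -b on odd
        parity ^= 1
    return rem == 0

for x in range(64):
    lsb_first = [int(c) for c in bin(x)[2:]][::-1]
    assert divisible_by_3_lsb_first(lsb_first) == (x % 3 == 0)
```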
Written in English, does "the set S contains *only* members of set T" imply that S does contain some member of set T? How would this relationship be written formally?
My view is that the vernacular would consider that S is not empty, i.e. $\emptyset \neq S\subseteq T$, while mathematical language would consider that S can be empty, i.e. $\emptyset\subseteq S\subseteq T$. Lay people do not speak of empty sets, while mathematicians are aware of their role in their work. That means that the sentence is not ambiguous, but the meaning depends on the community where it is used. Of course, people aware of the two readings can play games with it. But that is yet something else. You may notice that I did not specifically mention English, because the same is probably true of many "languages". To think that English (or German, or French) is a well defined language is unrealistic. Actually, I doubt you can find two people who agree on what English is, or on the meaning, or the ambiguity, of its sentences. It might be interesting to ask the linguists.
For example, one way to view maximum weight matching is that each vertex $v$ gets a utility $f\_v= w(e\_v)$ that equals the weight of the edge it's matched on, and zero otherwise. Accordingly, a maximum weight matching could then be viewed as maximizing the objective $\sum\_v f\_v$. Have any generalizations of maximum weight matching been studied that consider more general objective functions using weighted, multivariate or nonlinear $f\_v$? Have other variants been studied that are generalizations in a different way? Please provide references if applicable!
Maximum weight matching on $G$ is equivalent to maximum weight independent set on the [line-graph](http://en.wikipedia.org/wiki/Line_graph) of $G$, and can be written as follows $$\max\_{\mathbf{x}} \prod\_{ij \in E} f\_{ij}(x\_i,x\_j)$$ Here $\mathbf{x}\in\{0,1\}^n$ is a vector of vertex occupations, and $f\_{ij}(x,y)$ returns 0 if x=y=1, 1 if x=y=0, and otherwise the weight of the node whose variable is 1. You can generalize by allowing other choices of $\mathbf{x}$ and $f$, for instance

* Largest proper coloring $\mathbf{x}\in \{1,\ldots,q\}^n, f(x,y)=\delta(x-y)$
* Ising model ground-state $\mathbf{x}\in \{1,-1\}^n, f(x,y)=\exp(J x y)$

If you allow arbitrary non-negative $f$, this becomes the problem of finding the most likely setting of variables in a Gibbs random field with $f$ representing edge interaction potentials. Generalizing further to hypergraphs, your objective becomes $$\max\_{\mathbf{x}} \prod\_{e \in E} f\_{e}(x\_e)$$ Here $E$ is a set of hyper-edges (tuples of nodes), and $x\_e$ is the restriction of $x$ to nodes in hyperedge $e$. Examples:

* Error correcting decoding, $\mathbf{x} \in \{1,\ldots,q\}^n, f(x\_e)=\exp{\text{parity} (x\_e)}$
* MAP inference in hypergraph structured probability model, $f$ arbitrary non-negative function

Generalizing in another direction, suppose instead of a single maximum matching, you want to find $m$ highest weighted maximum matchings. This is a special instance of finding $k$ most probable explanations in a probabilistic model. The objective can now be written as $$sort\_\mathbf{x} \prod\_{e \in E} f\_{e}(x\_e)$$ See [[Flerova,2010](http://www.ics.uci.edu/~dechter/publications/r177.html)] for the meaning of the objective above. More generally, instead of sort,$\prod$ or $\max,\prod$ over reals, we can consider a general $(\cdot,+)$ commutative semiring where $\cdot$ and $+$ are abstract operations obeying the associative and distributive laws. The objective we get is now $$\bigoplus\_x \bigotimes\_e f\_e(x)$$ Here, $\bigotimes$ is taken over all edges of some hypergraph $G$ over $n$ nodes, $\bigoplus$ is taken over $n$-tuples of values, each $f\_e$ maps $x\_e$ into $E$, and $(\bigotimes,\bigoplus,E)$ forms a commutative semi-ring. Examples:

* Partition function of spin-interaction models: use $(\*,+)$ instead of $(\max,+)$
* Fast Fourier Transform over Abelian groups: use abelian groups instead of $\mathbb{R}$

What brings all of these generalizations together is that the best known algorithm for specific instances of the problem above is often the same as the most general algorithm, sometimes called the "Generalized distributive law" [[Aji, 2000](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.8954&rep=rep1&type=pdf)], which works in $O(1)$ time for bounded tree-width hypergraphs. This puts exact solution of the problems above in a unified framework; however, such a framework for approximate solution is lacking (and I want to hear about it if you think otherwise)
I need a hint on the problem below. This is related to predictive analysis and chemical engineering. I don't have a background in chemical engineering, and that's why I am looking for some hints. I want to know if there's a technical term for the variable/problem I am working on. This would help me refine my search. Imagine I have a fluid streamline (a production line with chemicals). At the input of that line, we pour in some chemicals and change some parameters (temperature, pressure, etc.), and we want to predict the time necessary to see a change at the end of that line. We can only control the input parameters, and we want to know how long it would take to see changes at the end of the pipeline (that delay time is the target variable). I know some parts of this will depend on the velocity and length of the production line, but there will also be some reactions happening along the way. Does anything ring a bell here? I am trying to read some literature on this; if you have some useful information and know some helpful key words, please let me know.
Is the term you're looking for simply the **rate law** of the reaction? Every chemical reaction has an associated kinetic rate law, consisting of its **rate constant** and its **reaction orders**. Consider the reaction: $A + B \to C$ Due to the conservation of mass, we can effectively model the **rate of formation** of the product C as the **rate of consumption** of either reactant A or B: $d[C]/dt = -d[A]/dt = -d[B]/dt$ The brackets in this equation represent the *concentration* of each respective chemical in the solution. That is, how much of each chemical is in a particular volume of solution. This rate of formation is effectively the rate of the reaction itself: $v = d[C]/dt$ with v being the **reaction velocity**. As we can see, the reaction velocity is itself a function of the rate of product formation, which is itself directly related to reactant consumption. This, in turn, also means that an instantaneous reaction velocity is also a function of the concentrations of reactant in that instant. Typically, to determine the rate law, the initial rate is measured from initial concentrations, as they are the easiest to measure. But how do we know HOW they are related? In short: if we don't do an experiment, **we don't**. Without doing an experiment, we can only very generally describe the rate law of a reaction. One of the most common forms of rate laws is the following: $v = k[A]^x[B]^y$ where k is our **rate constant**, and x and y are our **reaction orders**. The mystery lies with these variables, and they are determined experimentally as they vary with every reaction. I am far from an expert in kinetics, but your problem must consider the kinetics of the *particular* reaction being done. Another quirk of your problem is that the rate constant k itself depends on temperature by a rather involved equation, for someone who doesn't know of, or particularly care for, the intricacies of chemical kinetics. Additionally, if I'm understanding your model correctly, you'll also need to consider how the flow of the system itself may or may not impact the concentration of each chemical species. As far as I understand, the initial addition of reactants will generate a gradient of concentration that, as a result, will impact the kinetics of the reaction as a function of its rate of diffusion through the volume of the reaction. One can reasonably conceive that the flow you're describing can affect this rate of diffusion. This is something that is far beyond my realm of expertise, and is more in the realm of fluid mechanics. Of course, for the purpose of this problem, you can intelligently cheese these things, especially the reaction rate issues, by looking up relevant values from well-documented processes. If you want to do some further reading on rudimentary kinetics, consider the following: <https://www.chemguide.co.uk/physical/basicratesmenu.html#top> <https://opentextbc.ca/chemistry/chapter/12-3-rate-laws/>
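A toy sketch only, to make the rate-law idea concrete for the "delay" question: integrate a second-order rate law $d[C]/dt = k[A][B]$ with made-up constants and report the time at which $[C]$ crosses a detection threshold. A real model would also need the temperature dependence of $k$ and the flow/diffusion effects discussed above.

```python
def time_to_threshold(A0, B0, k, threshold, dt=1e-3, t_max=1e4):
    # Simple forward-Euler integration of d[C]/dt = k[A][B], orders assumed to be 1.
    A, B, C, t = A0, B0, 0.0, 0.0
    while C < threshold and t < t_max:
        rate = k * A * B
        A -= rate * dt
        B -= rate * dt
        C += rate * dt
        t += dt
    return t if C >= threshold else None

print(time_to_threshold(A0=1.0, B0=1.0, k=0.05, threshold=0.1))
```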
I've understood the main concepts behind overfitting and underfitting, even though some reasons as to why they occur might not be as clear to me. But what I am wondering is: *isn't overfitting "better" than underfitting?* If we compare how well the model does on each dataset, we would get something like: Overfitting: Training: good vs. Test: bad Underfitting: Training: bad vs. **Test: bad** If we have a look at how well each scenario does on the training and test data, it seems that for the overfitting scenario, the model does at least well for the training data. The text in bold is my intuition that, when the model does badly on the training data, it will also do badly on the test data, which seems overall worse to me.
I liked the question, and the key concept to answer it is the [bias–variance tradeoff](https://towardsdatascience.com/understanding-the-bias-variance-tradeoff-165e6942b229). Both underfitted and overfitted models have some valid use cases. For example, see my answer here: [Is an overfitted model necessarily useless?](https://stats.stackexchange.com/questions/278882/is-an-overfitted-model-necessarily-useless/278975#278975)
I have been working on an independant project focusing on cellular automata and was wondering if this might be something that I could continue on in graduate school somewhere. I've seen some mathematics departments with symbolic dynamics as well as a few CS departments with researchers interested in CA. The list is small though. If anyone knows which departments to look at and/or any specific faculty/programs to look at then I would greatly appreciate it. Also, my project is sort of taking on a "mathematical" flavor and I was wondering if it would be better to look into mathematics over computer science departments, or is it generally pretty easy to do this type of thing in either.
The AUTOMATA workshop series focuses on cellular automata: <http://www.eng.u-hyogo.ac.jp/eecs/eecs12/automata2014/>
I want to calculate the parameter $\lambda$ of the exponential distribution $e^{-\lambda x}$ from a sample population taken out of this distribution under biased conditions. As far as I know, for a sample of n values, the usual estimator is $\hat{\lambda} = \frac{n}{\sum x\_i}$. However my sample is biased as follows: From a complete population of m elements drawn i.i.d from the exponential distribution, only the n smallest elements are known. How can I estimate the parameter $\lambda$ in this scenario? A bit more formally, if $\{x\_1,x\_2,x\_3,...,x\_m \}$ are iid samples drawn from $e^{-\lambda x}$, such that for every $i < j$ we have $x\_i \leq x\_j$, then how can I estimate $\lambda$ from the set $\{x\_1,x\_2,x\_3,...,x\_n\}$ where $n < m$. Thanks a lot! Michael
The maximum likelihood estimator for the parameter of the exponential distribution under type II censoring can be derived as follows. I assume the sample size is $m$, of which the $n < m$ smallest are observed and the $m - n$ largest are unobserved (but known to exist.) Let us assume (for notational simplicity) that the observed $x\_i$ are ordered: $0 \leq x\_1 \leq x\_2 \leq \cdots \leq x\_n$. Then the joint probability density of $x\_1, \dots, x\_n$ is: $f(x\_1, \dots, x\_n) = {m!\lambda^n \over {(m-n)!}}\exp\left\{-\lambda\sum\_{i=1}^nx\_i\right\}\exp\left\{-\lambda(m-n)x\_n\right\}$ where the first exponential relates to the probabilities of the $n$ observed $x\_i$ and the second to the probabilities of the $m-n$ unobserved $x\_i$ that are greater than $x\_n$ (which is just 1 - the CDF at $x\_n$.) Rearranging terms leads to: $f(x\_1, \dots, x\_n) = {m!\lambda^n \over {(m-n)!}}\exp\left\{-\lambda\left[\sum\_{i=1}^{n-1}x\_i+(m-n+1)x\_n\right]\right\}$ (Note the sum runs to $n-1$ as there is a "$+1$" in the coefficient of $x\_n$.) Taking the log, then the derivative w.r.t. $\lambda$ and so on leads to the maximum likelihood estimator: $\hat{\lambda} = n / \left[\sum\_{i=1}^{n-1}x\_i+(m-n+1)x\_n\right]$
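A small simulation sketch (numpy assumed) that generates type II censored data and compares the censored MLE above with the naive estimator $n/\sum x\_i$; the true rate, sample sizes, and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, m, n = 2.0, 1000, 400                     # true rate, full sample size, number observed

x = np.sort(rng.exponential(scale=1 / lam, size=m))[:n]   # keep only the n smallest values
naive = n / x.sum()
censored_mle = n / (x[:-1].sum() + (m - n + 1) * x[-1])

print(naive, censored_mle)                     # the censored MLE should be close to 2.0
```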
I have a graph. I need to visualise it with the nodes arranged in a circle. How can I know whether it is possible to arrange the nodes on a circle so that no edges intersect in the visualised graph?
If the edges are permitted to be laid both inside and outside the circle, then these are the 2-page graphs; if edges can only be laid inside the circle, these are the 1-page graphs, also known as the [outerplanar graphs](http://en.wikipedia.org/wiki/Outerplanar_graph). See the [book embedding](http://en.wikipedia.org/wiki/Book_embedding) entry in Wikipedia for more information. By your comment, I guess the term you're searching for is outerplanar, since the complete graph on 4 vertices is 2-page. Outerplanar graphs can be recognized in linear time; see

> Linear algorithms to recognize outerplanar and maximal outerplanar graphs, S.L. Mitchell, Information Processing Letters, 1979.
I have to prove that in a word $w$ the number of the letter d is always even. Let $L \subsetneq \Sigma^\*$ be a language over the alphabet $\Sigma = \{a,b,c,d\}$ such that a word $w$ is in $L$ if and only if it is $a$ or $b$ or of the form $w = ducvd$ where $u$ and $v$ are words of $L$. Examples: $dacad$, $ddacbdcad$, $dddbcbdcdbcbddcad$ I know there are three cases:

1. $|w| = 0$: the number of the letter d is zero, which is even
2. $|w| = 1$ and $w = a$ or $w = b$: same as number 1
3. $|w| \geq 5$ and $w = ducvd$: the number of the letter d is even

I know how induction works, but I don't know how to write the base, hypothesis and step. I think it works with the length of $w$. Does somebody have a hint?
The proof is by induction on the length of $w$. If $|w| \leq 2$ then necessarily $w = a$ or $w = b$, and in both cases $w$ doesn't contain any $d$'s. If $|w| \geq 3$ then necessarily $w$ is of the form $ducvd$, where $u,v \in L$ are shorter words. By induction, each of $u,v$ contains an even number of $d$'s. It follows that $w$ also contains an even number of $d$'s: if $u,v$ contain $2s,2t$ many $d$'s (respectively), then $w$ contains $2s+2t+2=2(s+t+1)$, which is also even.
I've read that the chi square test is useful to see if a sample is significantly different from a set of expected values. For example, here is a table of results of a survey regarding people's favourite colours (n=15+13+10+17=55 total respondents): ``` red, blue, green, yellow 15, 13, 10, 17 ``` A chi square test can tell me if this sample is significantly different from the null hypothesis of equal probability of people liking each colour. Question: Can the test be run on the proportions of total respondents who like a certain colour? Like below: ``` red, blue, green, yellow 0.273, 0.236, 0.182, 0.309 ``` Where, of course, $0.273 + 0.236 + 0.182 + 0.309=1$. If the chi square test is not suitable in this case, what test would be? **Edit**: I tried @Roman Luštrik answer below, and got the following output, why am I not getting a p-value and why does R say "Chi-squared approximation may be incorrect"? ```r chisq.test(c(0, 0, 0, 8, 6, 2, 0, 0), p = c(0.406197174, 0.088746395, 0.025193306, 0.42041479, 0.03192905, 0.018328576, 0.009190708, 0)) Chi-squared test for given probabilities data: c(0, 0, 0, 8, 6, 2, 0, 0) X-squared = NaN, df = 7, p-value = NA Warning message: In chisq.test(c(0, 0, 0, 8, 6, 2, 0, 0), p = c(0.406197174, 0.088746395, : Chi-squared approximation may be incorrect ```
Using the extra information you gave (being that quite some of the values are 0), it's pretty obvious why your solution returns nothing. For one, you have a probability that is 0, so : * $e\_i$ in the solution of Henry is 0 for at least one i * $np\_i$ in the solution of @probabilityislogic is 0 for at least one i Which makes the divisions impossible. Now saying that $p=0$ means that it is impossible to have that outcome. If so, you might as well just erase it from the data (see comment of @cardinal). If you mean highly improbable, a first 'solution' might be to increase that 0 chance with a very small number. Given : ```r X <- c(0, 0, 0, 8, 6, 2, 0, 0) p <- c(0.406197174, 0.088746395, 0.025193306, 0.42041479, 0.03192905, 0.018328576, 0.009190708, 0) ``` You could do : ```r p2 <- p + 1e-6 chisq.test(X, p2) Pearson's Chi-squared test data: X and p2 X-squared = 24, df = 21, p-value = 0.2931 ``` But this is not a correct result. In any case, one should avoid using the chi-square test in these borderline cases. A better approach is using a bootstrap approach, calculating an adapted test statistic and comparing the one from the sample with the distribution obtained by the bootstrap. In R code this could be (step by step) : ```r # The function to calculate the adapted statistic. # We add 0.5 to the expected value to avoid dividing by 0 Statistic <- function(o,e){ e <- e+0.5 sum(((o-e)^2)/e) } # Set up the bootstraps, based on the multinomial distribution n <- 10000 bootstraps <- rmultinom(n, size=sum(X), p=p) # calculate the expected values expected <- p*sum(X) # calculate the statistic for the sample and the bootstrap ChisqSamp <- Statistic(X, expected) ChisqDist <- apply(bootstraps, 2, Statistic, expected) # calculate the p-value p.value <- sum(ChisqSamp < sort(ChisqDist))/n p.value ``` This gives a p-value of 0, which is much more in line with the difference between observed and expected. Mind you, this method assumes your data is drawn from a multinomial distribution. If this assumption doesn't hold, the p-value doesn't hold either.
Let $L\_1 \in REG$ and $L\_2 \notin REG$. Prove or disprove: $\forall L\_1 ,L\_2 \text{ } $ $\text{ }L\_1^C \cup L\_2\in REG \lor L\_2\setminus L\_1\in REG$ I think that it may be disproved, but I found it very hard to disprove, because: if $L\_2 \subseteq L\_1$ then $L\_2\setminus L\_1 = \emptyset$, and if $L\_1 \subseteq L\_2$ then $ L\_1^C \cup L\_2 = (L\_1 \cap L\_2^C)^C = (L\_1 \setminus L\_2)^C = \Sigma ^\* $, and if $L\_1 \cap L\_2 = \emptyset$ then $ L\_1^C \cup L\_2 = L\_1^C$, and in any other scenario, any counter-example that I tried to construct was too complicated to disprove the regularity of both languages. I know that at least one of the two languages must be non-regular, and I have proved that.
I found the full answer in a video by [David Silver](https://www.youtube.com/watch?v=lfHX2hHRMVQ&list=PLqYmG7hTraZBiG_XpjnPrSNw-1XQaM_gB&index=2). The idea is easy enough. The underlying matrix equation is $$ v = R^\pi + \gamma P^\pi\_{s, s'} v $$ where $v$ is the value function, $R^\pi$ is the immediate reward under policy $\pi$, $\gamma$ is the discount factor, and $P^\pi\_{s, s'}$ is the transition matrix. I can write out the system of equations as below. *For the sake of convenience, I omit the $\pi$ and the $\{s, s'\}$ subscripts*. $$ \begin{bmatrix} v\_1 \\ v\_2 \\ \vdots \\ v\_n \end{bmatrix} = \begin{bmatrix} R\_1 \\ R\_2 \\ \vdots \\ R\_n \end{bmatrix} + \gamma \begin{bmatrix} p\_{11} & p\_{12} & \cdots & p\_{1n} \\ p\_{21} & p\_{22} & \cdots & p\_{2n} \\ \vdots \\ p\_{n1} & \cdots & \cdots & p\_{nn} \end{bmatrix} \begin{bmatrix} v\_1 \\ v\_2 \\ \vdots \\ v\_n \end{bmatrix} $$ I can solve the matrix equation with some simple linear algebra: $$ \begin{align\*} v &= R + \gamma P v \\ v - \gamma P v &= R \\ (I - \gamma P)v &= R \\ v &= (I - \gamma P)^{-1}R \end{align\*} $$ Now of course this solution involves inverting a matrix, and that is not easy to do except in very small cases. So we can use iterative methods to solve for $v$. But iterative methods just turn the above equation into an update rule: $$ v^{k+1} = R^\pi + \gamma P^\pi\_{s, s'} v^{k} $$
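A small numerical sketch (numpy assumed) with a made-up 3-state MRP: solve the Bellman equation directly and by iteration, and check that the two agree.

```python
import numpy as np

R = np.array([1.0, 0.0, 2.0])                 # immediate rewards, arbitrary example
P = np.array([[0.5, 0.5, 0.0],                # transition matrix, rows sum to 1
              [0.1, 0.6, 0.3],
              [0.0, 0.2, 0.8]])
gamma = 0.9

v_direct = np.linalg.solve(np.eye(3) - gamma * P, R)   # v = (I - gamma P)^{-1} R

v = np.zeros(3)                               # iterative evaluation v <- R + gamma P v
for _ in range(1000):
    v = R + gamma * P @ v

print(v_direct, v)                            # both should match up to convergence error
```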
Is support vector machine with linear kernel the same as a soft margin classifier?
I think that the main takeaway here is this: the mere fact that there are these different philosophies of statistics and disagreement over them implies that translating the "hard numbers" that one gets from applying statistical formulae into "real world" decisions is a non-trivial problem and is fraught with interpretive peril. Frequently, people use statistics to influence their decision-making in the real world. For example, scientists aren't running randomized trials on COVID vaccines right now for funsies: it is because they want to make real world decisions about whether or not to administer a particular vaccine candidate to the populace. Although it may be a logistical challenge to gather up 1000 test subjects and observe them over the course of the vaccine, the math behind all of this is well-defined whether you are a Frequentist or a Bayesian: You take the data you gathered, cram it through the formulae and numbers pop out the other end. However, those numbers can sometimes be difficult to interpret: Their relationship to the real world depends on many non-mathematical things – and this is where the philosophy bit comes in. The real world interpretation depends on how we went about gathering those test subjects. It depends on how likely we anticipated this vaccine to be effective a priori (did we pull a molecule out of a hat, or did we start with a known-effective vaccine-production method?). It depends on (perhaps unintuitively) how many other vaccine candidates we happen to be testing. It depends on etc., etc., etc. Bayesians have attempted to introduce additional mathematical frameworks to help alleviate some of these interpretation problems. I think the fact that the Frequentist methods continue to proliferate shows that these additional frameworks have not been super successful in helping people translate their statistical computations into real world actions (although, to be sure, Bayesian techniques have led to many other advances in the field, not directly related to this specific problem). To answer your specific questions: you don't need to align yourself with one philosophy. It may help to be specific about your approach, but it will generally be totally obvious that you are doing a Bayesian analysis the moment you start talking about priors. Lastly, though, you should consider all of this very seriously, because as a statistician it will be your ethical duty to ensure that the numbers that you provide people are used responsibly – because correctly interpreting those numbers is a hard problem. Whether you interpret your numbers through the lens of Frequentist or Bayesian philosophy isn't a huge deal, but interpretation of your numbers requires familiarity with the relevant philosophy.
The [3SUM problem](https://en.wikipedia.org/wiki/3SUM) has two variants. In one variant, there is a single array $S$ of integers, and we have to find three different elements $a,b,c \in S$ such that $a+b+c=0$. In another variant, there are three arrays $X,Y,Z$, and we have to find a number per array $a\in X, b\in Y, c\in Z$ such that $a+b+c=0$. Call the first variant 3SUMx1 and the second one 3SUMx3. **Given an oracle for 3SUMx3, can we solve 3SUMx1 in linear time**? One option that comes to mind is just to create 3 replicates of the input array $S$ and run: 3SUMx3($S,S,S$). However, this may return two copies of the same item. E.g. if $S$ has only two elements, 1 and -2, then the 3SUMx1 problem has no solution, but 3SUMx3($S,S,S$) will return the fake solution $(1,1,-2)$. I currently have a solution but it is not linear. To explain the solution, consider first the simpler problems 2SUMx1 and 2SUMx2, in which we only look for two numbers $a+b=0$ in either one array or two different arrays. Given an oracle for 2SUMx2, the problem 2SUMx1 can be solved in the following way. Let $n$ be the number of elements in the input array $S$.

```
For i = 1 to log(n):
   Partition the array S to two arrays, X and Y, based on bit i of the index. I.e.:
      X contains all elements S[k] where bit i of k is 0;
      Y contains all elements S[k] where bit i of k is 1.
   Run 2SUMx2(X,Y)
   If there is a solution, return it and exit.
If no solution were found for any i, return "no solution".
```

This algorithm works because, if there are two different items, $a,b\in S$, whose sum is 0, then their index must have at least one bit different, say bit $i$. So, in iteration $i$ they will be in different parts and will be found by 2SUMx2($X$,$Y$). The time complexity is $O(n \log n)$. This algorithm can be generalized to 3SUM by using [trits](https://en.wikipedia.org/wiki/Ternary_numeral_system) instead of bits, and checking all possible pairs of them. The time complexity is therefore $O(n \log^2 n)$. Is there a linear-time algorithm?
Randomized algorithms
---------------------

If you'll accept a randomized algorithm, yes, it can be done in linear time. There's a randomized algorithm whose expected running time is $O(n)$, and where the probability that it takes longer than $c \cdot n$ time is exponentially small in $c$. Here's the idea. Randomly permute the entries of $S$, then split it into three equal-sized pieces, each of length $n/3$: let $X$ be the first one-third, $Y$ the second one-third, and $Z$ the third one-third. Now call 3SUMx3(X, Y, Z). If this finds a solution, you have a solution to the 3SUMx1 problem. Moreover, I claim that if there is a solution to the 3SUMx1 problem, then with probability at least $6/27$, this procedure finds a solution. Therefore, the expected number of times you have to repeat it (before finding the first solution) is constant. The running time of each iteration is $O(n)$, and you do a constant number of iterations (on average), so the total expected running time is $O(n)$. The probability that you fail to find a solution after $k$ iterations is $(21/27)^k$, which is exponentially small in $k$. Where does $6/27$ come from? Suppose there's a solution to the 3SUMx1 problem, say, $S[i]+S[j]+S[k]=0$. What is the probability that $i,j,k$ all come from different thirds of the array? This probability is $3!/3^3 = 6/27$. So, if a solution to the 3SUMx1 problem exists, then each iteration of the above procedure has at least a probability $6/27$ of finding it.

Deterministic algorithms
------------------------

What about a deterministic algorithm? I don't have a specific proposal in mind, but I feel like it's plausible that this construction could be derandomized. For instance, to give you an idea of the sort of thing I'm talking about, suppose we split the array into thirds once (deterministically). Suppose there exists a solution $S[i]+S[j]+S[k]=0$. Then there are $3^3=27$ cases for which third each $i,j,k$ falls into. After you remove symmetries, there are only $1+6+3$ cases:

* Type A (1 case): All three indices fall in different thirds.
* Type B (6 cases): Two of the indices fall into the same third, and the other index falls into a different third.
* Type C (3 cases): All three indices fall into the same third.

You can test for a type-A solution in $O(n)$ time, with a single invocation of the oracle: 3SUMx3(X, Y, Z). Also, you can test for a type-C solution by making three recursive calls to the same procedure (one on X, one on Y, one on Z). This leaves only the question of how to test for a type-B solution. It seems like you might also be able to test for a type-B solution using a similar recursive algorithm, though I haven't tried to work out the detailed case analysis. It seems not impossible that perhaps some ideas along these lines could lead to a deterministic algorithm -- though I haven't worked out the details and I'm just speculating at this point.
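A sketch of the randomized reduction, with a placeholder `three_sum_x3` standing in for the assumed linear-time 3SUMx3 oracle (here a naive brute-force stand-in). Each try succeeds with probability roughly $6/27$ when a solution exists, so the chance of missing one after many tries is negligible.

```python
import random

def three_sum_x3(X, Y, Z):                      # placeholder oracle, brute force
    for a in X:
        for b in Y:
            for c in Z:
                if a + b + c == 0:
                    return (a, b, c)
    return None

def three_sum_x1(S, max_tries=100):
    S = list(S)
    third = len(S) // 3
    for _ in range(max_tries):
        random.shuffle(S)                       # random permutation, then split into thirds
        hit = three_sum_x3(S[:third], S[third:2 * third], S[2 * third:])
        if hit is not None:
            return hit                          # three elements from disjoint parts, hence distinct
    return None
```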
I would like to find (all) cliques in a given graph with 8,568 vertices and 12,726,708 edges. The vertex with the lowest degree has degree 2000; the vertex with the highest degree has degree 4007. The cliques should have exactly 17 vertices. The algorithm should be fast, because of the size of the graph. I had the following idea:

* Delete all vertices with a degree of 16 or less.
* Iterate over every vertex
  + try to add another vertex:
    - If the clique has exactly 17 vertices: Add it to a list
    - Else: Keep trying to add vertices
  + If no more vertices can be added: go back to the latest point where you had the choice to add some vertices and add one of those that could not be added before.

I didn't implement this algorithm, because I think it will be quite slow. Do you know better ones?
Eppstein and Strash (2011) show that for a graph $G$ with degeneracy $d$ all maximal cliques can be listed in time $O(dn3^{d/3})$, where the degeneracy is the smallest number $d$ such that every subgraph of $G$ contains at least one vertex of degree at most $d$. The degeneracy is usually small, and in their paper they provide experimental results on graphs of comparable size, so you may be able to solve your problem using their algorithm. D. Eppstein, D. Strash, [*Listing All Maximal Cliques in Large Sparse Real-World Graphs*](http://arxiv.org/abs/1103.0318), 10th International Conference on Experimental Algorithms, 2011.
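As a practical starting point, a sketch using `networkx` (assumed available): `nx.find_cliques` enumerates maximal cliques, and every 17-vertex clique is a 17-subset of some maximal clique of size at least 17. Be warned that expanding all 17-subsets can blow up combinatorially if the maximal cliques are much larger than 17.

```python
from itertools import combinations
import networkx as nx

def cliques_of_size(G, k=17):
    seen = set()
    for maximal in nx.find_cliques(G):          # Bron--Kerbosch-style maximal clique enumeration
        if len(maximal) >= k:
            for sub in combinations(sorted(maximal), k):
                seen.add(sub)                   # every k-subset of a clique is itself a k-clique
    return seen
```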
[Principal component analysis (PCA)](http://en.wikipedia.org/wiki/Principal_component_analysis) works like this: the first greatest variance on the first principal component, the second greatest variance on the second principal component, and so on. For me there is a problem with this iterative process. What if I know I only want two principal components in order to visualize my data in 2 dimensions? The two first PCs are not the best because the second one is the best 2nd PC but with the 1st PC they don't constitute the best couple of principal components. **Is there a way to find the "best couple" of principal components, meaning two-dimensional subspace with the greatest variance?**
The first two principal components together already form the best pair: the second one is chosen taking the first one into account, and together they span the two-dimensional subspace with the greatest variance.
In a standard algorithms course we are taught that **quicksort** is $O(n \log n)$ on average and $O(n^2)$ in the worst case. At the same time, other sorting algorithms are studied which are $O(n \log n)$ in the worst case (like **mergesort** and **heapsort**), and even linear time in the best case (like **bubblesort**) but with some additional needs of memory. After a quick glance at [some more running times](http://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_of_algorithms) it is natural to say that quicksort **should not** be as efficient as others. Also, consider that students learn in basic programming courses that recursion is not really good in general because it could use too much memory, etc. Therefore (and even though this is not a real argument), this gives the idea that quicksort might not be really good because it is a recursive algorithm. **Why, then, does quicksort outperform other sorting algorithms in practice?** Does it have to do with the structure of *real-world data*? Does it have to do with the way memory works in computers? I know that some memories are way faster than others, but I don't know if that's the real reason for this counter-intuitive performance (when compared to theoretical estimates). --- **Update 1:** a canonical answer is saying that the constants involved in the $O(n\log n)$ of the average case are smaller than the constants involved in other $O(n\log n)$ algorithms. However, I have yet to see a proper justification of this, with precise calculations instead of intuitive ideas only. In any case, it seems like the real difference occurs, as some answers suggest, at memory level, where implementations take advantage of the internal structure of computers, using, for example, that cache memory is faster than RAM. The discussion is already interesting, but I'd still like to see more detail with respect to memory-management, since it appears that *the* answer has to do with it. --- **Update 2:** There are several web pages offering a comparison of sorting algorithms, some fancier than others (most notably [sorting-algorithms.com](http://www.sorting-algorithms.com/)). Other than presenting a nice visual aid, this approach does not answer my question.
My experience working with real world data is that **quicksort is a poor choice**. Quicksort works well with random data, but real world data is most often not random. Back in 2008 I tracked a hanging software bug down to the use of quicksort. A while later I wrote simple implementations of insertion sort, quicksort, heap sort and merge sort and tested these. My merge sort outperformed all the others while working on large data sets. Since then, merge sort is my sorting algorithm of choice. It is elegant. It is simple to implement. It is a stable sort. It does not degenerate to quadratic behaviour like quicksort does. I switch to insertion sort to sort small arrays. On many occasions I have found myself thinking that a given implementation works surprisingly well for quicksort, only to find out that it actually isn't quicksort. Sometimes the implementation switches between quicksort and another algorithm, and sometimes it does not use quicksort at all. As an example, GLibc's qsort() function actually uses merge sort. Only if allocating the working space fails does it fall back to in-place quicksort, [which a code comment calls "the slower algorithm"](https://sourceware.org/git/?p=glibc.git;a=blob;f=stdlib/msort.c;h=4e17a8874736d7a653b3f589fbb7ad253225c939;hb=HEAD#l226). Edit: Programming languages such as Java, Python and Perl also use merge sort, or more precisely a derivative, such as Timsort or merge sort for large sets and insertion sort for small sets. (Java also uses dual-pivot quicksort which is faster than plain quicksort.)
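A sketch of the combination described above: merge sort, switching to insertion sort below a small cutoff. It is stable and $O(n \log n)$ in the worst case; the cutoff value is an arbitrary choice.

```python
def insertion_sort(a):
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def merge_sort(a, cutoff=32):
    if len(a) <= cutoff:
        return insertion_sort(list(a))          # small inputs: insertion sort on a copy
    mid = len(a) // 2
    left, right = merge_sort(a[:mid], cutoff), merge_sort(a[mid:], cutoff)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:                 # <= keeps equal elements in order (stability)
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```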
I'm looking for a good definition of sparse graphs. Is a sparse graph effectively a big one, with millions/billions of nodes? An example from real world is the Facebook graph. Or can sparse graphs be in small networks as well?
The common definition is that a *family* of graphs $G = (V, E)$ is sparse if $m \in o(n^2)$, with $n = |V|$ and $m = |E|$. Formally, that requires the family to be infinite, and we only know something *in the limit*. The definition does not apply to graphs of a fixed size per se. One would certainly want $m \ll n^2$ in some sense. For instance, if $m \leq 5n$ or $m \approx \log n$ you would probably say that your graph is "sparse". You can bridge this gap by requiring a bit more on the asymptotic side. Say we call a family of graphs sparse if $\qquad\displaystyle m \leq f(n) + O(1)$ with $f \in o(n^2)$; then we may want to call every graph with $m \leq f(n)$ sparse. The implicit assumption is that the error term hidden in $O(1)$ is well-behaved.
I need an idea for a new rating system. The problem with the ordinary one (just averaging the votes) is that it does not take into account how many votes there are. For example, consider these 2 cases:

3 people voted 5/5

500 people voted 4/5

The ordinary rating systems just take the average, which makes the first one come out better. However, I want the second one to get a higher rating, because many more people have voted for the second. Any help?
You could use a system like [reddit's "best" algorithm](https://web.archive.org/web/20210525100237/https://redditblog.com/2009/10/15/reddits-new-comment-sorting-system/) for sorting comments: > > This algorithm treats the vote count > as a statistical sampling of a > hypothetical full vote by everyone, > much as in an opinion poll. It uses > this to calculate the 95% confidence > score for the comment. That is, it > gives the comment a provisional > ranking that it is 95% sure it will > get to. The more votes, the closer the > 95% confidence score gets to the > actual score > > > So in the case of 3 people voting 5/5, you might be 95% sure the "actual" rating is at least a 1, whereas in the case of 500 people voting you might be 95% sure the "actual" rating is at least a 4/5.
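The algorithm referenced there is based on the lower bound of the Wilson score confidence interval for up/down votes. Here is a minimal sketch of the same idea in Python; the rescaling of 1–5 star ratings to a 0–1 "positive fraction" is my own crude adaptation, not part of reddit's algorithm:

```python
import math

def wilson_lower_bound(positive, n, z=1.96):
    """Lower bound of the 95% Wilson score interval for a proportion."""
    if n == 0:
        return 0.0
    phat = positive / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)

def rating_score(ratings, max_stars=5):
    # crude adaptation: treat each star rating as a fraction of the maximum
    n = len(ratings)
    positive = sum(r / max_stars for r in ratings)
    return wilson_lower_bound(positive, n)

print(rating_score([5, 5, 5]))    # 3 votes of 5/5 -> about 0.44
print(rating_score([4] * 500))    # 500 votes of 4/5 -> about 0.76, so it ranks higher
```

This matches the behaviour asked for in the question: the item with many 4/5 votes outranks the item with only three 5/5 votes.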
In [statistical inference](http://rads.stackoverflow.com/amzn/click/8131503941), problem 9.6b, a "Highest Density Region (HDR)" is mentioned. However, I didn't find the definition of this term in the book. One similar term is the Highest Posterior Density (HPD). But it doesn't fit in this context, since 9.6b doesn't mention anything about a prior. And in the suggested [solution](https://www.google.com.hk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CCAQFjAA&url=http%3A%2F%2Fexampleproblems.com%2FSolutions-Casella-Berger.pdf&ei=w4g9Vf-tCpS4ogTqhIC4DQ&usg=AFQjCNGIIhxHTMo-T9wh2U9xWMJ6hput7w&bvm=bv.91665533,d.b2w), it only says that "obviously $c(y)$ is a HDR". Or is the HDR a region containing the mode(s) of a pdf? *What is a Highest Density Region (HDR)?*
I recommend Rob Hyndman's 1996 article ["Computing and Graphing Highest Density Regions"](http://www.tandfonline.com/doi/abs/10.1080/00031305.1996.10474359?journalCode=utas20#) in *The American Statistician*. Here is the definition of the HDR, taken from that article: > > Let $f(x)$ be the density function of a random variable $X$. Then the > $100(1-\alpha)\%$ HDR is the subset $R(f\_\alpha)$ of the sample space > of $X$ such that > $$R(f\_\alpha) = \{x\colon f(x)\geq f\_\alpha\},$$ > where $f\_\alpha$ is the largest constant such that > $$P\big(X\in R(f\_\alpha)\big)\geq 1-\alpha.$$ > > > Figure 1 from that article illustrates the difference between the 75% HDR (so $\alpha=0.25$) and various other 75% Probability Regions for a mixture of two normals ($c\_q$ is the $q$-th quantile, $\mu$ the mean and $\sigma$ the standard deviation of the density): ![HDR](https://i.stack.imgur.com/Dy89t.png) The idea in one dimension is to take a horizontal line and shift it up (to $y=f\_\alpha$) until the area above it and under the density is $1-\alpha$. Then the HDR $R\_\alpha$ is the projection to the $x$ axis of this area. Of course, all this works with any density, whether Bayesian posterior or other. [Here is a link to R code, which is the `hdrcde`package (and to the article on JSTOR).](http://robjhyndman.com/papers/computing-and-graphing-highest-density-regions/)
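If it helps to see the "raise a horizontal line until the mass above it is $1-\alpha$" idea in action, here is a minimal numerical sketch of my own (it is not the `hdrcde` implementation), for a one-dimensional density evaluated on a grid:

```python
import numpy as np
from scipy.stats import norm

# Density of a mixture of two normals, evaluated on a fine grid
x = np.linspace(-6, 10, 20001)
dx = x[1] - x[0]
f = 0.6 * norm.pdf(x, 0, 1) + 0.4 * norm.pdf(x, 5, 1.5)

alpha = 0.25  # -> 75% HDR

# Keep the highest-density grid points until they hold probability 1 - alpha;
# the density of the last point kept is the cut-off f_alpha.
order = np.argsort(f)[::-1]
cum_mass = np.cumsum(f[order]) * dx
f_alpha = f[order][np.searchsorted(cum_mass, 1 - alpha)]

in_hdr = f >= f_alpha
print("f_alpha ~", float(f_alpha))
print("HDR mass ~", float(f[in_hdr].sum() * dx))   # about 0.75
# The HDR can be a union of disjoint intervals, e.g. one around 0 and one around 5.
```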
I am wondering: if we have an **`A = n*p`** matrix of samples and we run a PC decomposition on it, say the eigenvector matrix is **E**, then the samples in the eigenvector space should be

> B = A\*(E)^-1

So, I am wondering whether there is any rule that says the columns of B should be correlated? That is, should the first PC loadings and the second PC loadings be correlated? Thanks!
PCs are calculated based on the eigenvectors (e1 and e2 in your case) of the correlation or covariance matrix. Eigenvectors are orthogonal, so the PCs are uncorrelated. If you would like to check the proof, you can find it on page 432 of *Applied Multivariate Statistical Analysis*, 6th edition, written by Richard A. Johnson and Dean W. Wichern.
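A quick numerical illustration (my own sketch, using the convention that the columns of the eigenvector matrix are the orthonormal eigenvectors of the sample covariance matrix, so the projected samples are simply `A @ E`):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.multivariate_normal(mean=[0.0, 0.0, 0.0],
                            cov=[[4, 2, 1], [2, 3, 1], [1, 1, 2]],
                            size=5000)
A = A - A.mean(axis=0)              # center the columns

S = np.cov(A, rowvar=False)         # sample covariance matrix
eigvals, E = np.linalg.eigh(S)      # columns of E: orthonormal eigenvectors

B = A @ E                           # samples expressed in the eigenvector basis
print(np.round(np.corrcoef(B, rowvar=False), 3))
# Off-diagonal entries are essentially 0: the PC scores are uncorrelated.
```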
Suppose I have a two-phase experiment. The goal of the experiment will be to test if there are differences in proportions between two treatments. In phase one, I have no idea how many samples I will need as I have no prior information, so say I take 30 samples for each treatment and get proportions of 0.5 and 0.6. Now I want to use this information to calculate the number of samples I will need in phase two in order to have an 80% chance (power) to show a difference between the two treatments (assume alpha = 0.05). The sample size calculators I have found online do not apply, as they require you to enter the population proportions, but I don't have the population proportions, only estimated proportions based on the first sample. So I need to take the sampling variation into account somehow in the sample size calculation. Any help would be greatly appreciated.
In various statistical software programs (and, allegedly, in some online 'calculators') you can specify typical proportions that you'd like to be able to distinguish at the 5% level of significance and with power 80%. Specifically, if reasonable proportions for Treatments 1 and 2 are $p\_1 = 0.5$ and $p\_2 = 0.6,$ then these are the 'proportions' you enter. (Of course, you won't know the *exact* proportions, but the difference between them should be the size of difference you'd like to be able to detect.)

**Sample size computation from Minitab.** In particular, output from a 'power and sample size' procedure in a recent release of Minitab is shown below. For a two-sided test with the proportions guessed above, you'd need $n=388$ in each group for 80% power.

```
Power and Sample Size 

Test for Two Proportions 

Testing comparison p = baseline p (versus ≠)
Calculating power for baseline p = 0.5
α = 0.05

              Sample  Target
Comparison p    Size   Power  Actual Power
         0.6     388     0.8      0.800672

The sample size is for each group.
```

[![enter image description here](https://i.stack.imgur.com/09CNL.png)](https://i.stack.imgur.com/09CNL.png)

Often tests to distinguish between two binomial proportions are done in terms of approximate normal tests, which are quite accurate for sample sizes this large and for success probabilities not too near to $0$ or $1.$

**Example of test of two proportions.** Suppose that your results are $182$ in the first group and $241$ in the second. Then Minitab's version of the two-sided test shows a highly significant difference with a P-value near $0.$

```
Test and CI for Two Proportions 

Sample    X    N  Sample p
1       182  388  0.469072
2       241  388  0.621134

Difference = p (1) - p (2)
Estimate for difference:  -0.152062
95% CI for difference:  (-0.221312, -0.0828117)
Test for difference = 0 (vs ≠ 0):  Z = -4.30  P-Value = 0.000
```

***Similar test in R:*** For comparison, the version of the test implemented in the R procedure 'prop.test' gives the following result, also leading to rejection of the null hypothesis. (I use the version without continuity correction on account of the large sample size.)

```
prop.test(c(182,241), c(388,388), cor=F)

        2-sample test for equality of proportions 
        without continuity correction

data:  c(182, 241) out of c(388, 388)
X-squared = 18.091, df = 1, p-value = 2.106e-05
alternative hypothesis: two.sided
95 percent confidence interval:
 -0.22131203 -0.08281168
sample estimates:
   prop 1    prop 2 
0.4690722 0.6211340 
```

**Simulation of power.** The following simulation in R with 'prop.test' shows that the power of the test to distinguish between proportions $0.5$ and $0.6$ at the 5% level is roughly 80%.

```
set.seed(112)
pv = replicate(10^5, prop.test(rbinom(2,388,c(.5,.6)),c(388,388),cor=F)$p.val)
mean(pv <= .05)
[1] 0.79673
```
I have submitted a survey to a sample of artists. One of the questions asked them to indicate the percentage of their income derived from: artistic activity, government support, private pension, and activities not related to the arts. About 65% of the individuals replied in such a way that the percentages sum to 100. The others didn't: for example, some answered that 70% of their income derives from their artistic activities and 60% from government support, and so on. My question is: how should I treat these observations? Should I delete, modify or keep them? Thank you!
I cannot give you an answer for the general case of illogical responses. But for this specific type of question - been there, done that. Not only in a survey, but also in semistructured interviews, where I had a chance to observe how people come up with this kind of answer. Based on this, as well as some general experience in observing and analyzing cognitive processes, I would suggest: **normalize your data back to a sum of 100%**. The reason is that people seem to first go to the most salient category - in your case, that would be the largest income - give a gut-feeling estimate for it in percent, then start thinking of the next smaller categories and base their estimate relative to the anchor of the first category, plus that of further already mentioned categories. For example, a train of thought might go like: "My first source of income is certainly more than half. It makes what, 60%? No, that's too low, let's say 65%. The second is about a third of that, so that would be a bit more than 20%, uh, difficult to calculate it in my head, let's round up to 25%. The third also feels like a third of the first, but it is actually always a bit more than the second, so it should be 30%. Or even 35? No, let's go with 30. Oh, and I forgot that I have a fourth source, that only happens once a year, that should be really small compared to the others, so 5 or 10%? Probably 5 is closer, it isn't really that much". And so you end up with an answer of 65 + 25 + 30 + 5 = 125%. Because people tend to be more aware of the relative size of the income parts to each other than of each part to the total, I would say that normalizing them is in order here, if you want to run some kind of numeric analysis on the income. I would only work with the actual reported numbers if the difference between people's beliefs and statements about their income and objective reality is an important topic for your work, for example if you are a psychologist studying cognitive biases, or if you are more interested in the self-perception of artists than in their economic circumstances. Sadly, I don't have a good literature source to prove that it really works as I described it, it is just my personal empirical observation. But I don't think that reviewers will get caught up on this kind of decision, since, as the other answers said, there is no single "right" way to treat it. If anything, they will dismiss your whole data from this question as invalid due to a flawed querying technique. The best you can do is to preemptively acknowledge it and come up with arguments why your work is nevertheless useful and why the conclusions you are drawing are still good despite this specific source of inaccuracy in the data.
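In case it is useful, the normalization itself is a one-liner; the column names and numbers below are made up (the first row is the 65 + 25 + 30 + 5 = 125% example from above):

```python
import pandas as pd

# Hypothetical survey answers; each row should ideally sum to 100 but may not.
df = pd.DataFrame({
    "artistic":   [65, 50, 70],
    "government": [25, 30, 60],
    "pension":    [30, 15,  0],
    "other":      [ 5,  5, 10],
})

row_totals = df.sum(axis=1)
normalized = df.div(row_totals, axis=0) * 100   # each row now sums to 100
print(normalized.round(1))
```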
> The following article is a repost of [#3375492](https://math.stackexchange.com/questions/3375492/what-is-the-post-hoc-power-in-my-experiment) from math.stackexchange.com. It was recommended there that I ask this community instead.

**My motivation**

I often see claims that post-hoc power is nonsense. Editorials of this kind are mass-produced and published in many established journals, and I can easily access their definitions, but those definitions are not chunked down into formulas or code.

> **However, it is unclear what the post-hoc power they criticize actually is.**
> Certainly, they write definitions in words. However, the words are not chunked down into formulas or calculation code. Therefore, what they want to criticize is not identified, or at least not shared with me. (Both Code 1 and Code 2 below seem to meet their verbal definitions, yet they give different results in different ways.)

The strange thing is that even though post-hoc power has been criticized so much, "what is post-hoc power?" does not seem to be clear. Isn't it strange to be expected to understand opinions like "it doesn't make sense because it is determined once the other variables are set" or "it is circular reasoning" about an object whose calculation method is not shown? This looks like a fruitless battle fought under unclear premises. **Please give the calculation procedure before criticizing it!** (This probably applies to all the statistics-ethics editorials that have been mass-produced recently.)

The verbal explanations are already written in the mass-produced editorials; they are not what I want.

- Please show me formulas or code **instead of words**.
- Please chunk the words down into formulas.

**I am asking for explanations in formulas and code instead of words.**

I know that there is no "correct" post-hoc analysis, as is often shouted in the mass-produced editorials. The "correct post-hoc analysis" I refer to is synonymous with "the post-hoc analysis that many people criticize."

**My Question**

What is the post-hoc power in the following experiment?

> **Experiment**:
> We randomly divide 20 animals into two groups, Group A and Group B. Group A is fed Food A and Group B is fed Food B. After a certain period, body weight was measured, and the data were as follows.
>
> Group\_A: 40.2, 40.4, 40.6, 40.8, 41.0, 41.2, 41.4, 41.6, 41.8
>
> Group\_B: 30.1, 30.3, 30.5, 30.7, 30.9, 31.1, 31.3, 31.5, 31.7, 31.9, 32.1
>
> I would like to conduct a two-sided test with a significance level of 0.05 to see if there is a significant difference between the two groups.

I think it is one of the following. Both pieces of code are written in R. The R source code can be downloaded from the following [link](https://drive.google.com/drive/folders/1GIIZ7iK3Nycm6_-fcEeY0bLKnGevAMUl).
**The difference between Method 1 and Method 2 is whether the predetermined significance level (α = 0.05, in the code of Method 1) or the calculated p-value is used when calculating power.**

**Method 1**

Code01

```
#Load data
Group_A = c(40.2, 40.4, 40.6, 40.8, 41.0, 41.2, 41.4, 41.6, 41.8)
Group_B = c(30.1, 30.3, 30.5, 30.7, 30.9, 31.1, 31.3, 31.5, 31.7, 31.9, 32.1)

# Welch Two Sample t-test
t.test(Group_A,Group_B)

library(effsize)
library(pwr)

cd = cohen.d(Group_A, Group_B)
cd
pwr.t2n.test(n1 = 9, n2= 11, d = cd$estimate, sig.level = 0.05, power = NULL,
             alternative = c("two.sided"))
```

**Method 2**

Code02

```
# Load data
Group_A = c(40.2, 40.4, 40.6, 40.8, 41.0, 41.2, 41.4, 41.6, 41.8)
Group_B = c(30.1, 30.3, 30.5, 30.7, 30.9, 31.1, 31.3, 31.5, 31.7, 31.9, 32.1)

# Welch Two Sample t-test
twel=t.test(Group_A,Group_B)
twel
pwel=twel$p.value

library(effsize)
library(pwr)

cd = cohen.d(Group_A, Group_B)
cd
pwr.t2n.test(n1 = 9, n2= 11, d = cd$estimate, sig.level = pwel, power = NULL,
             alternative = c("two.sided"))
```

Which is the "correct" post-hoc power calculation code?

> Notes:
> If your R environment does not have the packages "effsize" and "pwr", you need to install them first. If the following commands are executed in R while connected to the Internet, the installation should start automatically.

```
install.packages("effsize")
install.packages("pwr")
```

**【Post-Hoc Notes】** (Added after 2019/10/06 00:56 (JST))

**(1) Relationship between effect size and power** (Based on Method 1)

Fig. PHN01 shows the relationship between effect size and power when using Code01 above, for p = 0.05, 0.025, 0.01, where n1 = 9, n2 = 11.

[![enter image description here](https://i.stack.imgur.com/aTHYx.png)](https://i.stack.imgur.com/aTHYx.png)

Fig. PHN01: Relationship between effect size and power

These were calculated in R in the same manner as the following code.

**Code PHN 01**

```
library(pwr)
pv=0.025
pwr.t2n.test(n1 = 9, n2= 11, d = 4, sig.level = pv, power = NULL,
             alternative = c("two.sided"))
```

**(2) Relationship between effect size and power** (Based on Method 2)

Fig. PHN02 shows the relationship between effect size and power when using Code02, where n1 = 9, n2 = 11.

[![enter image description here](https://i.stack.imgur.com/yypSZ.png)](https://i.stack.imgur.com/yypSZ.png)

Fig. PHN02: Relationship between effect size and power

**Code PHN 02**

```
library(effsize)
library(pwr)

offc=1.6
offc=0.1+offc
Group_A = c(30.2+offc, 30.4+offc, 30.6+offc, 30.8+offc, 31.0+offc, 31.2+offc,
            31.4+offc, 31.6+offc, 31.8+offc)
Group_B = c(30.1, 30.3, 30.5, 30.7, 30.9, 31.1, 31.3, 31.5, 31.7, 31.9, 32.1)
print(mean(Group_A)-mean(Group_B))

twel=t.test(Group_A,Group_B)
pwel=twel$p.value
cd = cohen.d(Group_A, Group_B)
pwr.t2n.test(n1 = 9, n2= 11, d = cd$estimate, sig.level = pwel, power = NULL,
             alternative = c("two.sided"))
```

**(3) Comment on Welch's correction**

There was a comment that "it is better to remove the Welch correction". Certainly, **R does not provide the functionality to calculate the power itself under the Welch correction for the n1 ≠ n2 case.** Please forget the following code.
**Code PHN 03**

```
library(effsize)

offc=1.6
offc=0.1+offc
Group_A = c(30.2+offc, 30.4+offc, 30.6+offc, 30.8+offc, 31.0+offc, 31.2+offc,
            31.4+offc, 31.6+offc, 31.8+offc)
Group_B = c(30.1, 30.3, 30.5, 30.7, 30.9, 31.1, 31.3, 31.5, 31.7, 31.9, 32.1)
print(mean(Group_A)-mean(Group_B))

#Option 1: var.equal
twel=t.test(Group_A,Group_B, var.equal=TRUE)
pwel=twel$p.value

#Option 2: hedges.correction, Option 3: var.equal=FALSE
cd = cohen.d(Group_A, Group_B, hedges.correction=FALSE, var.equal=FALSE)

sqrt((9+11)/(9*11))
cd$estimate/twel$statistic
```

**(4) The "correct" post-hoc power calculation method for the case where Welch's correction is not required**

This part has been split off into the following thread: [The calculation method of post-hoc power in t-test without welch's correction](https://stats.stackexchange.com/questions/431010/the-calculation-method-of-post-hoc-power-in-t-test-without-welchs-correction)

<https://gpsych.bmj.com/content/32/4/e100069>

I found a paper in which the "correct" post-hoc power calculation method is written out in mathematical formulas, although it covers only the case where the Welch correction is not necessary. **Here, "correct" means "the one criticized by the mass-produced editorials".**

Post-hoc power seems to be calculated by the following formula. Since α is given in advance here, this can be considered essentially the same as the method of Code 1. However, my setting is different, because it uses the Welch test.

[![enter image description here](https://i.stack.imgur.com/74TWP.png)](https://i.stack.imgur.com/74TWP.png) (PHN04-01)

Here,

[![enter image description here](https://i.stack.imgur.com/hkB8U.png)](https://i.stack.imgur.com/hkB8U.png) (PHN04-02)

[![enter image description here](https://i.stack.imgur.com/4x3P0.png)](https://i.stack.imgur.com/4x3P0.png) (PHN04-03)

And the following d is used for [![δ](https://i.stack.imgur.com/S83tg.png)](https://i.stack.imgur.com/S83tg.png),

[![enter image description here](https://i.stack.imgur.com/UYhiM.png)](https://i.stack.imgur.com/UYhiM.png) (PHN04-04)

> However, I could not work out the distribution of the following statistic. (Maybe a non-central t distribution, but what is the value of the non-centrality parameter?)

[![enter image description here](https://i.stack.imgur.com/yOwtL.png)](https://i.stack.imgur.com/yOwtL.png) (PHN04-05)

> What is this ${Z}\_{\alpha /2}$? The upper α point of which distribution is $Z\_\alpha$? Is it the upper α/2 point of a t-distribution?

And

> How can it be extended to Welch's case?

**【P.S.】** I'm not very good at English, so I'm sorry if I have used some impolite or unclear expressions. I welcome any corrections and English review. (You can edit my question and description to improve them.)
Let's examine the well-accepted statistical definitions of "power," "power analysis," and "post-hoc," using this site's tag information as a guide. [Power](https://stats.stackexchange.com/tags/power/info)

> is a property of a hypothesis testing method: the probability of rejecting the null hypothesis given that it is false, i.e. the probability of not making a type II error. The power of a test depends on sample size, effect size, and the significance (α) level of the test.

Let's ignore for now the post-hoc issue. From that definition you can see that either of your approaches to power could be considered "correct": Method 1 is based on a significance (α) level of 0.05, while Method 2 is based on the significance (α) level that you happened to find, about 0.17. For what is useful, however, consider [power analysis](https://stats.stackexchange.com/tags/power-analysis/info):

> An inquiry into the quality of a statistical test by calculating the power - the probability of rejecting the null hypothesis given that it is false - under certain circumstances. Power analysis is often used when planning a study to determine the sample size required to achieve a nominal level of power (e.g. 80%) for a given effect size.

In the design phase of a study, where the importance of power analysis is unquestioned, you attempt to estimate the number of cases needed to detect a "statistically significant" effect. This typically means basing the calculations on a significance (α) level of 0.05. It would be hard to come up with any rationale for choosing instead a level of 0.17. So for power analysis in the *a priori* design-phase of a study your Method 1 would be the only one to make sense. Now consider [post-hoc](https://stats.stackexchange.com/tags/post-hoc/info):

> "Post-hoc" refers to analyses that are decided upon after the data has been collected, as opposed to "a priori".

We need to distinguish 2 types of post-hoc analysis related to power calculations. One is to treat the just-completed study as a pilot study to inform the design of a more detailed study. You use the observed difference between the groups and the observed variance of the difference as estimates of the true population values. Based on those estimates, you determine the sample size needed in a subsequent study to provide adequate power (say, 80%) to detect a statistically significant difference (say, P < 0.05). That's quite appropriate. That is "post-hoc" in the sense of being based on already obtained data, but it is used to inform the design of the next study. In most cases, however, that is not how the phrase "post-hoc power analysis" is used or the way you are using the phrase. You (and many others) seek to plug into a formula to determine some type of "power" of the study and analysis you have already done. This type of "post-hoc power analysis" is fundamentally flawed, as noted for example by Hoenig and Heisey in [*The Abuse of Power*](https://doi.org/10.1198/000313001300339897). They describe two variants of such analysis. One is the "observed power," "that is, assuming the observed treatment effects and variability are equal to the true parameter values, the probability of rejecting the null hypothesis." (Note that this null hypothesis is typically tested at P < 0.05, your Method 1, and is based on the sample size at hand. This seems to be what you have in mind.)
Yet this "observed power" calculation adds nothing:

> Observed power can never fulfill the goals of its advocates because the observed significance level of a test ("*p* value") also determines the observed power; for any test the observed power is a 1:1 function of the *p* value.

That's the point that Jeremy Miles makes with his example calculations based on your two Methods. In this type of post-hoc analysis, neither Method adds any useful information. That's why you find both of us effectively saying that there is no "correct" post-hoc power calculation code. Yes, you can plug numbers correctly into a formula, but to call the analysis "correct" from a statistical perspective would be an abuse of terminology. There is a second (ab)use of power calculations post-hoc, which does not seem to be what you have in mind but which should be addressed for completeness: "finding the hypothetical true difference that would have resulted in a particular power, say .9." Hoenig and Heisey show that this approach can lead to nonsensical conclusions, based on what they call:

> the “power approach paradox” (PAP): higher observed power does not imply stronger evidence for a null hypothesis that is not rejected.

So the statistical advice (which is what one should expect from this site) is to refrain from post-hoc power tests in the sense that you wish to use them.
I am supposed to create a CFG for this language: $L= \{w : w \in \{a, b\}^\*, |w\_b| = 3k, k \geq 0 \}$ where $|w\_b|$ is the count of terminals $b$ in $w$.

**For example:**

aa - OK, no 'b'

abb - wrong, only 2 'b'

aaabbb - OK, 3 times 'b'

aababbb - wrong, 4 times 'b'

abbbbbaaa - wrong, 5 times 'b'

abababbbaaab - OK, 6 times 'b'

and so on...

I can't come up with any solution. Any advice?

---

My goal is to design a context-free grammar, **not an automaton or regular expression** (I don't know how to design automatons or REs yet). What about the CFG

```
G = {{S,A}, {a,b}, R, S}
```

where the rules R are:

```
1] S -> S A b A b A b A S
2] S -> A
3] S -> ε
4] A -> a A
5] A -> ε
```

Explanation:

rule 2] is for cases when there are no 'b' symbols in w

rule 3] is for the case of the empty string

rule 4] is for adding 'a' symbols between the 'b's, e.g. baaaabab, babaab

rule 5] is for cases when there are multiple 'b' next to each other, e.g. abbbaaa

---

Is this CFG ok?
**Hint**: A string with exactly one $a$ would look like this: $$ (\text{any number of } b's)\,a\,(\text{any number of } b's) $$ and a grammar to generate the language of these strings is $S\rightarrow AaA,\ \ A\rightarrow bA\mid \epsilon$. Generalize.
I have a question regarding hyperparameter optimization in scikit-learn. I am most familiar with tensorflow, where you first split your data into three sets: train, validation and test. Hyperparameters are optimized using the train and validation sets, and then the model is finally evaluated using the test set. All data is normalized using statistics from the training set, and as far as I have understood, the rationale behind this is that the model has not "seen" the data from validation and test and therefore can not use their stats in any way. Anyways, my question is related to the optimizers in scikit. As far as I can see they all use cross-validation. So I have two questions:

* Is there a way to set the validation set explicitly in the scikit optimizers?
* If not, how do I deal with the normalization? Should I just feed the optimizer with both the train and validation sets? Wouldn't that just mix the sets, and somehow mess up the rationale behind initially using the training set for normalization?

Side note: I want to be able to compare models made using both scikit and tensorflow, so I want to follow procedures that are as similar as possible when comparing models.
I think [sklearn.model\_selection.PredefinedSplit](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.PredefinedSplit.html#sklearn.model_selection.PredefinedSplit) will suit you. From the [user guide](https://scikit-learn.org/stable/modules/cross_validation.html#predefined-split):

> For some datasets, a pre-defined split of the data into training- and validation fold or into several cross-validation folds already exists. Using PredefinedSplit it is possible to use these folds e.g. when searching for hyperparameters. For example, when using a validation set, set the test\_fold to 0 for all samples that are part of the validation set, and to -1 for all other samples.

I can also recommend good libraries for optimizing hyperparameters: they are very easy to learn but have great functionality. They have algorithms for selecting hyperparameters not only by brute force or at random, and they also support conditional hyperparameters (for example, parameter1 only appears if parameter2 is present). These are [optuna](https://optuna.readthedocs.io/en/stable/index.html) and [hyperopt](http://hyperopt.github.io/hyperopt/). With them, you can specify the objective function more flexibly than with an sklearn pipeline and all its limitations.
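Here is a minimal sketch of how `PredefinedSplit` can be combined with a `Pipeline`, so that the hyperparameter search uses an explicit validation set while the scaler is still fit only on the training part of the split; the estimator, split sizes and parameter grid are just placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, PredefinedSplit
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

# First 200 samples: training only (-1 = never used for validation);
# last 100 samples: the explicit validation set (fold index 0).
test_fold = np.r_[np.full(200, -1), np.zeros(100, dtype=int)]
cv = PredefinedSplit(test_fold)

pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
search = GridSearchCV(pipe, {"clf__C": [0.1, 1, 10]}, cv=cv)
search.fit(X, y)   # during the search, the scaler is fit on the 200 training samples only
print(search.best_params_)
```

Because the scaler sits inside the pipeline, it is refit on the training portion for every candidate, which addresses the normalization concern in the question.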
Every paper says that Green's construction is the best 16-input sorting network known so far. But why does Wikipedia say: "Size, lower bound: 53"? I thought "lower bound" meant: "there exists at least one algorithm that can...". Am I wrong?
No, a lower bound means that somebody has proved that anything smaller than 53 is impossible. That doesn't mean that a 53-gate network is known or even necessarily possible; just that there cannot be a smaller one than that.
Can someone explain what a confidence interval is? Why should it be 95%? When is it used and what does it measure? I understand it's some kind of evaluation metric, but I can't seem to find a decent explanation that connects it to real-world examples. Any help would be greatly appreciated. Thanks!
**You might find it instructive to start with a basic idea: *the variance of any random variable cannot be negative.*** (This is clear, since the variance is the expectation of the square of something and squares cannot be negative.)

Any $2\times 2$ covariance matrix $\mathbb A$ explicitly presents the variances and covariances of a pair of random variables $(X,Y),$ but it also tells you how to find the variance of any linear combination of those variables. This is because whenever $a$ and $b$ are numbers,

$$\operatorname{Var}(aX+bY) = a^2\operatorname{Var}(X) + b^2\operatorname{Var}(Y) + 2ab\operatorname{Cov}(X,Y) = \pmatrix{a&b}\mathbb A\pmatrix{a\\b}.$$

Applying this to your problem we may compute

$$\begin{aligned} 0 \le \operatorname{Var}(aX+bY) &= \pmatrix{a&b}\pmatrix{121&c\\c&81}\pmatrix{a\\b}\\ &= 121 a^2 + 81 b^2 + 2c\,ab\\ &=(11a)^2+(9b)^2+\frac{2c}{(11)(9)}(11a)(9b)\\ &= \alpha^2 + \beta^2 + \frac{2c}{(11)(9)} \alpha\beta. \end{aligned}$$

The last few steps in which $\alpha=11a$ and $\beta=9b$ were introduced weren't necessary, but they help to simplify the algebra. In particular, what we need to do next (in order to find bounds for $c$) is *complete the square:* this is the process emulating the derivation of the quadratic formula to which everyone is introduced in grade school. Writing

$$C = \frac{c}{(11)(9)},\tag{\*}$$

we find

$$\alpha^2 + \beta^2 + \frac{2c}{(11)(9)} \alpha\beta = \alpha^2 + 2C\alpha\beta + \beta^2 = (\alpha+C\beta)^2+(1-C^2)\beta^2.$$

Because $(\alpha+C\beta)^2$ and $\beta^2$ are both squares, they are not negative. Therefore if $1-C^2$ also is non-negative, the entire right side is not negative and can be a valid variance. Conversely, if $1-C^2$ is negative, you could set $\alpha=-C\beta$ to obtain the value $(1-C^2)\beta^2\lt 0$ on the right hand side, which is invalid. You therefore deduce (from these perfectly elementary algebraic considerations) that

> *If* $A$ is a valid covariance matrix, then $1-C^2$ cannot be negative.

Equivalently, $|C|\le 1,$ which by $(\*)$ means $-(11)(9) \le c \le (11)(9).$

---

**There remains the question whether any such $c$ does correspond to an actual variance matrix.** One way to show this is true is to find a random variable $(X,Y)$ with $\mathbb A$ as its covariance matrix. Here is one way (out of many). I take it as given that you can construct independent random variables $A$ and $B$ having unit variances: that is, $\operatorname{Var}(A)=\operatorname{Var}(B) = 1.$ (For example, let $(A,B)$ take on the four values $(\pm 1, \pm 1)$ with equal probabilities of $1/4$ each.) The independence implies $\operatorname{Cov}(A,B)=0.$

Given a number $c$ in the range $-(11)(9)$ to $(11)(9),$ define random variables

$$X = \sqrt{11^2-c^2/9^2}A + (c/9)B,\quad Y = 9B$$

(which is possible because $11^2 - c^2/9^2\ge 0$) and compute that the covariance matrix of $(X,Y)$ is precisely $\mathbb A.$

---

Finally, if you carry out the same analysis for *any* symmetric matrix

$$\mathbb A = \pmatrix{a & b \\ b & d},$$

you will conclude three things:

1. $a \ge 0.$
2. $d \ge 0.$
3. $ad - b^2 \ge 0.$

These conditions characterize *symmetric, positive semi-definite* matrices. Any $2\times 2$ matrix satisfying these conditions indeed is a variance matrix. (Emulate the preceding construction.)
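None of this needs code, but if you want a quick numerical check of both the bound and the construction, here is a small sketch (the value of c is arbitrary; any |c| ≤ 99 should behave the same way):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 60.0                                  # any value with |c| <= 11*9 = 99

# 1) [[121, c], [c, 81]] should be positive semi-definite.
M = np.array([[121.0, c], [c, 81.0]])
print(np.linalg.eigvalsh(M))              # both eigenvalues are >= 0

# 2) The construction with independent unit-variance A and B.
n = 1_000_000
A = rng.choice([-1.0, 1.0], size=n)       # Var(A) = 1
B = rng.choice([-1.0, 1.0], size=n)       # Var(B) = 1, independent of A
X = np.sqrt(11**2 - c**2 / 9**2) * A + (c / 9) * B
Y = 9.0 * B
print(np.cov(X, Y))                       # close to [[121, c], [c, 81]]
```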
I have a CSV file with 4 million edges of a directed network representing people communicating with each other (e.g. John sends a message to Mary, Mary sends a message to Ann, John sends *another* message to Mary, etc.). I would like to do two things:

1. Find degree, betweenness and (maybe) eigenvector centrality measures for each person.
2. Get a visualization of the network.

I would like to do this on the command line on a Linux server, since my laptop does not have much power. I have R installed on that server along with the statnet library. I found [this 2009 post](http://www.cybaea.net/Blogs/Data/SNA-with-R-Loading-your-network-data.html) of someone more competent than me trying to do the same thing and having problems with it. So I was wondering if anyone else has any pointers on how to do this, preferably taking me step by step since I only know how to load the CSV file and nothing else. Just to give you an idea, this is what my CSV file looks like:

```
$ head comments.csv
"src","dest"
"6493","139"
"406705","369798"

$ wc -l comments.csv
4210369 comments.csv
```
From past experience with a network of 7 million nodes, I think visualizing your complete network will give you an uninterpretable image. I might suggest different visualizations using subsets of your data such as just using the top 10 nodes with the most inbound or outbound links. I second celenius's suggestion on using gephi.
I'm having trouble understanding some language notation, primarily what rules I can take away from it. The language is as follows: $\qquad L = \{a^n b^m b^p c^p b^{n-m} \mid n > 0, m < n, p > 2\}$ My goal is to create a context-free grammar from this, but I can't get my head around the rules in place here. In plain English, it's clear that any string for this language must have a positive number of a's in it, but what is going on with the multiple b's? Do I take this as meaning the string must be in some form like so: $\qquad aaa-b-bbb-ccc-bb$ (n=3, m=1, p=3) where the bunches of b's are just distinct sequences? Or is there something else I should be doing in order to satisfy this?
First of all, note that your quest for "rules in place" is probably doomed. This looks pretty much like an exercise problem you would pose in class; there is not necessarily an intuitive rule or semantics. It's just a playground for developing your skills. Therefore, it may be more helpful to strap on theory glasses and just look at the formal object given to you.

Now, for your problem. In order to get to a context-free grammar, it can be useful to identify "phases" a grammar can work through. A good first step is to identify which parts of the word are "connected" by the restrictions in place and which are independent of each other. If you find intersecting dependency arcs, you either have a non-context-free language (which you can [check](https://cs.stackexchange.com/questions/265/how-to-prove-that-a-language-is-not-context-free)) or you need to represent your language differently (sometimes restrictions are redundant or seem to imply dependencies that are not really there¹).

> Here, note that your words have the form
>
> $\qquad a^n b^m \dots b^{n-m}$.
>
> Why can I remove a part for an intuitive look? Because I cut away $b^p c^p$ which can clearly be generated by a standard grammar and independently of the rest. Now, the above is just
>
> $\qquad a^n b^n$
>
> if you remember to insert $b^p c^p$ at some point in $b^n$.
>
> Try to come up with a grammar along these lines. Take care about corner cases and the exact restrictions on $m,n,p$! And, of course, [prove correctness](https://cs.stackexchange.com/questions/11315/how-to-show-that-l-lg).

Another strategy that can sometimes work better is to build a PDA and convert it into a(n ugly) grammar by the standard construction.

---

1. A simple example: $a^nb^nc^n$ is not context-free, so $a^na^na^n$ is not. Right? Wrong: $a^na^na^n = a^{3n}$ is clearly regular.
> > Given array $A$ of length $n$, we call it *almost sorted* if there are at most $\log n$ indices satisfying $A[i] > A[i+1]$. > > > Find an algorithm that sorts the array in $O(n\log\log n)$. > > > My attempt: * Create an array $B$ of size $\log n + 1$. * Go through the array $A$, recognize the $\log n$ pairs, and insert them into $B$. * Sort $B$ using insertion sort in time $O(\log^2 n)$. At this point I am stuck.
Suppose that $A$ is an array with $m$ indices satisfying $A[i] > A[i+1]$. You can find these indices in $O(n)$. These $m$ indices split $A$ into $m+1$ nondecreasing arrays $B\_1,\ldots,B\_{m+1}$ of total length $n$. We now merge them according to the following strategy: at each step, take the two shortest arrays, and merge them. You can implement the choice mechanism in $O(m\log m) = O(n\log m)$ using a heap. When merging two arrays of length $a,b$, the running time is $O(a+b)$. Hence we would like to understand the total sum $S$ of $a+b$, where $(a,b)$ goes over all $m$ pairs of arrays being merged. To this end, we consider a *merge tree*, which is formed in the following way. We start with $m+1$ vertices corresponding to $B\_1,\ldots,B\_{m+1}$. When two arrays corresponding to vertices $x,y$ are merged, we create a new vertex with $x,y$ its only children. You can check that $$ S = \sum\_{i=1}^{m+1} |B\_i| \mathrm{depth}(B\_i). $$ Consider now a probability distribution $X$ on $[m+1]$ with $\Pr[X=i] = |B\_i|/n$. Then $S/n$ is the average codeword length of an optimal prefix code for $X$ (this is because we're essentially running Huffman's algorithm). Therefore $S/n < \log m + 1$, showing that the merging steps take $O(n\log m)$ time in total. Summarizing, the algorithm runs in time $O(n\log m)$. (Strictly speaking, to handle the cases $m=1$ and $m=0$, we should replace $m$ with $m+2$.)
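Here is a minimal sketch of this strategy, translating the argument into Python (the tie-breaking counter just prevents the heap from ever comparing the lists themselves):

```python
import heapq
from itertools import count

def almost_sorted_sort(A):
    # Split A into maximal nondecreasing runs B_1, ..., B_{m+1}.
    runs, start = [], 0
    for i in range(len(A) - 1):
        if A[i] > A[i + 1]:
            runs.append(A[start:i + 1])
            start = i + 1
    runs.append(A[start:])

    # Repeatedly merge the two shortest runs, chosen via a heap.
    tie = count()
    heap = [(len(r), next(tie), r) for r in runs]
    heapq.heapify(heap)
    while len(heap) > 1:
        la, _, a = heapq.heappop(heap)
        lb, _, b = heapq.heappop(heap)
        merged, i, j = [], 0, 0
        while i < la and j < lb:              # standard two-way merge, O(la + lb)
            if a[i] <= b[j]:
                merged.append(a[i]); i += 1
            else:
                merged.append(b[j]); j += 1
        merged.extend(a[i:]); merged.extend(b[j:])
        heapq.heappush(heap, (la + lb, next(tie), merged))
    return heap[0][2]

print(almost_sorted_sort([1, 4, 7, 2, 5, 9, 0, 3]))   # [0, 1, 2, 3, 4, 5, 7, 9]
```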
When encoding a logic into a proof assistant such as Coq or Isabelle, a choice needs to be made between using a *shallow* and a *deep* embedding. In a shallow embedding logical formulas are written directly in the logic of the theorem prover, whereas in a deep embedding logical formulas are represented as a datatype. * What are the advantages and limitations of the various approaches? * Are there any guidelines available for determining which to use? * Is it possible to switch between the two representations in any systematic fashion? As motivation, I would like to encode various security related logics into Coq and am wondering what the pros and cons of the different approaches are.
Roughly speaking, with a deep embedding of a logic, you (1) define a datatype representing the syntax for your logic, and (2) give a *model of the syntax*, and (3) prove that axioms about your syntax are sound with respect to the model. With a shallow embedding, you skip steps (1) and (2), and just start with a model, and prove entailments between formulas. This means shallow embeddings are usually less work to get off the ground, since they represent work you'd typically end up doing anyway with a deep embedding. However, if have a deep embedding, it is usually easier to write reflective decision procedures, since you are working with formulas which actually have syntax you can recurse over. Also, if your model is strange or complicated, then you usually don't want to work directly with the semantics. (For example, if you use biorthogonality to force admissible closure, or use Kripke-style models to force frame properties in separation logics, or similar games.) However, deep embeddings will almost certainly force you to think a lot about variable binding and substitutions, which will fill your heart with rage, since this is (a) trivial, and (b) a never-ending source of annoyance. The correct sequence you should take is: (1) try to get by with a shallow embedding. (2) When that runs out of steam, try using tactics and quotation to run the decision procedures you want to run. (3) If that also runs out of steam, give up and use a dependently-typed syntax for your deep embedding. * Plan to take a couple of months on (3) if this is your first time out. You *will* need to get familiar with the fancy features of your proof assistant to stay sane. (But this is an investment which will pay off in general.) * If your proof assistant doesn't have dependent types, stay at level 2. * If your object language is itself dependently typed, stay at level 2. Also, do not try to go gradually up the ladder. When you decide to go up the complexity ladder, take a full step at a time. If you do things bit-by-bit, then you will get lots of theorems which are weird and unusable (eg, you'll get multiple half-assed syntaxes, and theorems which mix syntax and semantics in strange ways), which you will eventually have to throw out. EDIT: Here's a comment explaining why going up the ladder gradually is so tempting, and why it leads (in general) to suffering. Concretely, suppose you have a shallow embedding of separation logic, with the connectives $A \star B$ and unit $I$. Then, you'll prove theorems like $A \star B \iff B \star A$ and $(A \star B) \star C \iff A \star (B \star C)$ and so on. Now, when you try to actually use the logic to prove a program correct, you'll end up having something like $(I \star A) \star (B \star C)$ and you'll actually want something like $A \star (B \star (C \star I))$. At this point, you'll get annoyed with having to manually reassociate formulas, and you'll think, "I know! I'll interpret a datatype of lists as a list of separated formulas. That way, I can interpret $\star$ as concatenation of these lists, and then those formulas above will be definitionally equal!" This is true, and works! However, note that conjunction is also ACUI, and so is disjunction. 
So you'll go through the same process in other proofs, with different list datatypes, and then you'll have three syntaxes for different fragments of separation logic, and you'll have metatheorems for each of them, which will inevitably be different, and you'll find yourself wanting a metatheorem you proved for separating conjunction for disjunction, and then you'll want to mix syntaxes, and then you'll go insane. It's better to target the biggest fragment you can handle with a reasonable effort, and just do it.
```
int sumHelper(int n, int a) {
    if (n==0) return a;
    else return sumHelper(n-1, a + n*n);
}

int sumSqr(int n) {
    return sumHelper(n, 0);
}
```

I am supposed to prove correct this piece of code, which uses tail recursion to sum up the squares of numbers. That is, I need to prove that for $n ≥ 1$, $sumSqr(n)=1^2+2^2+\dots+n^2$. I have figured out the base case but I am stuck at the induction step. Any hints or help will be appreciated.
Assuming $n ≥ 0$, let $$ P(n) \;≡\; ∀ a.\, sumHelper(n, a) = a + ∑\_{i = 0}^n i² $$ Then we prove this is always true by induction on $n$.

The base case, `n = 0`:

```
  sumHelper(n, a)
={ Case n = 0 }
  sumHelper(0, a)
={ Definition }
  a
={ Arithmetic }
  a + 0²
={ Arithmetic }
  a + ∑_{i = 0}^0 i²
={ Case n = 0 }
  a + ∑_{i = 0}^n i²
```

The induction step, assuming `P(n)` let us show `P(n+1)`,

```
  sumHelper(n+1, a)
={ Definition }
  if (n+1==0) a else sumHelper(n+1-1, a + (n+1)*(n+1))
={ Since we assumed n ≥ 0, we have n+1 ≠ 0. Hence we take the else-branch. }
  sumHelper(n+1-1, a + (n+1)*(n+1))
={ Arithmetic }
  sumHelper(n, a + (n+1)*(n+1))
={ Apply the inductive hypothesis with a ≔ a + (n+1)*(n+1) }
  (a + (n+1)*(n+1)) + ∑_{i = 0}^n i²
={ Arithmetic: Bringing the `n+1` term back into the sum }
  a + ∑_{i = 0}^{n+1} i²
```

We're done :-)
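Not part of the proof, but if you want a quick sanity check of the statement, here is a throwaway script that translates the C-like code literally and compares it with the closed form $n(n+1)(2n+1)/6$:

```python
def sum_helper(n, a):
    return a if n == 0 else sum_helper(n - 1, a + n * n)

def sum_sqr(n):
    return sum_helper(n, 0)

assert all(sum_sqr(n) == n * (n + 1) * (2 * n + 1) // 6 for n in range(200))
print("ok")
```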
In the picture below, I'm trying to figure out what exactly this NFA is accepting. [![enter image description here](https://i.stack.imgur.com/XHuWu.png)](https://i.stack.imgur.com/XHuWu.png) What's confusing me is the $\epsilon$ jump at $q\_0$. * If a $0$ is entered, does the system move to both $q\_0$ **and** $q\_1$ (the accept state)? * If a $1$ is entered, does the system move to both $q\_1$ and $q\_2$? * Does the system only move to $q\_1$ (accept state), if *no input* is given (empty string)?
Every time you are in a state which has an $\epsilon$ transition, it means you are automatically in BOTH states. To simplify this for you:

If the string is $\epsilon$, then your automaton ends up in both $q\_0$ and $q\_1$.

If your string is '0', it will again be in $q\_0$ and $q\_1$.

If your string is '1', it will only be in $q\_2$: looking from the point of view of $q\_0$, you have a '1' transition to $q\_2$; but you also have to look at the case where you are in $q\_1$ (if you were in $q\_0$, you were always in $q\_1$ as well), and from there there is no '1' transition, so this alternative path just "dies".

Just by looking at these cases it's easy to see that your automaton accepts $\epsilon$ and $0^\*$, and, going from $q\_0$ to $q\_1$, the only way to reach $q\_2$ is $0^\*11^\*1$. So this reduces your automaton's language to $\epsilon$, $0^\*$, $0^\*11^\*1$.

Hope this helped you; if you have any further doubts, just ask!
Combinatorics plays an important role in computer science. We frequently utilize combinatorial methods in both analysis as well as design in algorithms. For example one method for finding a $k$-vertex cover set in a graph might just inspect all $\binom{n}{k}$ possible subsets. While the binomial functions grows exponentially, if $k$ is some fixed constant we end up with a polynomial time algorithm by asymptotic analysis. Often times real-life problems require more complex combinatorial mechanisms which we may define in terms of recurrences. One famous example is the [fibonacci sequence](http://en.wikipedia.org/wiki/Fibonacci_number) (naively) defined as: $f(n) = \begin{cases} 1 & \text{if } n = 1 \\ 0 & \text{if } n = 0 \\ f(n-1) + f(n-2) & \text{otherwise} \end{cases} $ Now computing the value of the $n$th term grows exponentially using this recurrence, but thanks to dynamic programming, we may compute it in linear time. Now, not all recurrences lend themselves to DP (off hand, the factorial function), but it is a potentially exploitable property when defining some count as a recurrence rather than a generating function. Generating functions are an elegant way to formalize some count for a given structure. Perhaps the most famous is the binomial generating function defined as: $(x + y)^\alpha = \sum\_{k=0}^\infty \binom{\alpha}{k}x^{\alpha - k}y^k$ Luckily this has a closed form solution. Not all generating functions permit such a compact description. > > Now my question is this: how often are generating functions used in *design* of algorithms? It is easy to see how they may be exploited to understand the rate of growth required by an algorithm via analysis, but what can they tell us about a problem when creating a method to solve some problem? > > > If many times the same count may be reformulated as a recurrence it may lend itself to dynamic programming, but again perhaps the same generating function has a closed form. So it is not so evenly cut.
Generating functions are useful when you're designing counting algorithms. That is, not only when you're looking for the number of objects having a certain property, but also when you're looking for a way to enumerate these objects (and, perhaps, generate an algorithm to count the objects). There is a very good presentation in chapter 7 of [*Concrete Mathematics* by Ronald Graham, Donald Knuth, and Oren Patashnik](http://en.wikipedia.org/wiki/Concrete_Mathematics). The examples below are from that book (the mistakes and lack of clarity are mine).

Suppose that you're looking for the ways to make change with a given set of coins. For example, with common US denominations¹, the possible coins are $[1], [5], [10], [25], [100]$. To give ¢42 in change, one possibility is $[25][10][5][1][1]$; another possibility is $[10][10][10][10][1][1]$. We'll write $42 = \langle [25][10][5][1]^2 \rangle = \langle [10]^4 [1]^2 \rangle$. More generally, we can write a generating function for all the ways to give change:

$$H = \sum\_{h\ge0} \sum\_{q\ge0} \sum\_{d\ge0} \sum\_{n\ge0} \sum\_{p\ge0} [100]^h [25]^q [10]^d [5]^n [1]^p$$

In more technical terms, $H$ is a term in the space of power series over the five variables $[100], [25], [10], [5], [1]$. Define the valuation of a monomial in this space by

$$\langle [100]^h [25]^q [10]^d [5]^n [1]^p \rangle = 100 h + 25 q + 10 d + 5 n + p$$

Then the ways to give $v$ cents in change are the number of monomials whose valuation is $v$.

We can express $H$ in an incremental fashion, by first writing down the ways $P$ to give change in pennies only, then the ways $N$ to give change in pennies and nickels, and so on. ($I$ means no coin.)

$$\begin{gather\*} P = I + [1] + [1]^2 + [1]^3 + \ldots = \frac{I}{I - [1]} \\ N = (I + [5] + [5]^2 + [5]^3 + \ldots) P = \frac{P}{I - [5]} \\ D = (I + [10] + [10]^2 + [10]^3 + \ldots) N = \frac{N}{I - [10]} \\ Q = (I + [25] + [25]^2 + [25]^3 + \ldots) D = \frac{D}{I - [25]} \\ H = (I + [100] + [100]^2 + [100]^3 + \ldots) Q = \frac{Q}{I - [100]} \\ \end{gather\*}$$

If you want to count and not just enumerate the ways to give change, then there is a simple way to use the formal series we've obtained. Apply the homomorphism

$$S: \quad [1] \mapsto X, \quad [5] \mapsto X^5, \quad [10] \mapsto X^{10}, [25] \mapsto X^{25}, [100] \mapsto X^{100} $$

The coefficient of $X^v$ in $S(H)$ is the number of ways to give $v$ cents in change.

A harder example: suppose that you want to study all the ways to tile rectangles with 2×1 dominoes. For example, there are two ways to tile a 2×2 rectangle, either with two horizontal dominoes or with two vertical dominoes. Counting the number of ways to tile a $2 \times n$ rectangle is fairly easy, but the $3 \times n$ case quickly becomes nonobvious.
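Before turning to the harder domino example: the effect of the substitution $S$ is easy to imitate in code. The coefficient of $X^v$ is just the number of ways to reach value $v$, and it can be accumulated denomination by denomination, mirroring the way $H$ was built up from $P, N, D, Q$. A minimal sketch:

```python
def change_counts(coins, max_value):
    """coeff[v] = number of ways to give v cents with the given coin values."""
    coeff = [0] * (max_value + 1)
    coeff[0] = 1                   # the empty way corresponds to the monomial I
    for coin in coins:             # P, then N, then D, then Q, then H
        for v in range(coin, max_value + 1):
            coeff[v] += coeff[v - coin]
    return coeff

ways = change_counts([1, 5, 10, 25, 100], 100)
print(ways[42])    # 31 ways to give 42 cents
print(ways[100])   # 243 ways to give one dollar with these five denominations
```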
We can enumerate all the possible tilings of a horizontal band of height 3 by sticking dominoes together, which quickly yields repetitive patterns: $$\begin{cases} U = \mathsf{o} + \mathsf{L} V + \mathsf{\Gamma} \Lambda + \mathord{\equiv} U \\ V = \substack{\mathsf{I}\\\strut} U + \substack{=\\\:-} V \\ \Lambda = \substack{\strut\\\mathsf{I}} U + \substack{\:-\\=} \Lambda \\ \end{cases}$$ where the funny shapes represent elementary domino arrangements: $\mathsf{o}$ is no domino, $\mathsf{L}$ is a vertical domino on top of the left part of a horizontal domino, $\substack{\strut\\\mathsf{I}}$ is a vertical domino aligned with the bottom of the band of height 3, $\substack{\:-\\=}$ is a horizontal domino aligned with the top of the band plus two horizontal dominoes below it and one step to the right, etc. Here, multiplication represents horizontal concatenation and is not commutative, but there are equations between the elementary patterns that form variables in this power series. As before with the coins, we can substitute $X$ for every domino and get a generating series for the number of tilings of a $3 \times (2n/3)$ rectangle (i.e. the coefficient of $X^{3k}$ is the number of ways to tile a rectangle of area $6k$, which contains $3k$ dominoes and has the width $2k$). The series can also be used in more versatile ways; for example, by distinguishing vertical and horizontal dominoes, we can count the tilings with a given number of vertical and horizontal dominoes. Again, read *Concrete Mathematics* for a less rushed³ presentation. ¹ I know my list is incomplete; assume a simplified US suitable for mathematical examples.² ² Also, if it comes up, assume spherical coins. ³ And better typeset.
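If you only need the numbers rather than the series itself, the same tilings can be counted with a standard column-by-column "profile" dynamic program; this is my own sketch, not the construction from the book:

```python
from functools import lru_cache

def domino_tilings(height, width):
    """Count tilings of a height x width board by 2x1 dominoes."""

    @lru_cache(maxsize=None)
    def go(col, mask):
        # mask: cells of column `col` already covered by horizontal dominoes
        # protruding from column col - 1
        if col == width:
            return 1 if mask == 0 else 0

        def fill(row, cur, nxt):
            # cover the free cells of this column, row by row
            if row == height:
                return go(col + 1, nxt)
            if cur & (1 << row):                              # already covered
                return fill(row + 1, cur, nxt)
            total = fill(row + 1, cur, nxt | (1 << row))      # horizontal domino
            if row + 1 < height and not cur & (1 << (row + 1)):
                total += fill(row + 2, cur, nxt)              # vertical domino
            return total

        return fill(0, mask, 0)

    return go(0, 0)

print(domino_tilings(2, 2))   # 2
print(domino_tilings(3, 2))   # 3
print(domino_tilings(3, 4))   # 11
print(domino_tilings(3, 6))   # 41
```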
Which test would I use to analyze the relationship between two dichotomous outcomes (Yes/No) where I have a reported event (Yes/No) and a Response to the event (Yes/No)?
Since you have two dichotomous outcomes your data can be represented in a $2 \times 2$ contingency table where the rows represent the status of "event" and the columns the status of "response," and the value within the cell being the total count within your sample falling into this category. The most straightforward way to analyze such data would be using either a $\chi^2$ test or a $z$ test (the two being almost equivalent). The $\chi^2$ test is looking at whether or not the two factors are independent of one another (in your case, does the event have an effect on response?), which means the probability of belonging to cell $(i, j)$, which we'll call $p\_{ij}$, is equal to $p\_{i \cdot} p\_{\cdot j}$, or the *marginal* probability of belonging to row $i$ multiplied by the marginal probability of belonging to column $j$. These marginal probabilities are not known so they're instead estimated from the data using the sample proportions $\hat{p}\_{i \cdot}$ and $\hat{p}\_{\cdot j}$. If $n$ is the total sample size we then estimate the expected count within cell $(i, j)$ when the null hypothesis of independence is true as $n \hat{p}\_{i \cdot} \cdot \hat{p}\_{\cdot j}$. This expected count gets compared to the actual observed count $c\_{ij}$ using the following statistic $$ \chi^2 = \sum\_{i=1}^{2} \sum\_{j=1}^{2} \frac{(c\_{ij} - n \hat{p}\_{i \cdot} \cdot \hat{p}\_{\cdot j})^2}{n \hat{p}\_{i \cdot} \cdot \hat{p}\_{\cdot j}} . $$ When the null hypothesis is true this follows a $\chi^2$ distribution with one degree of freedom, and so you can use this distribution to calculate $p$-values. This test is equivalent to the two-sided $z$-test which looks directly at the proportions within one of the margins and tries to determine if they differ across the other (note that a standard normal random variable squared follows a $\chi^2\_1$ distribution).
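For reference, here is how such a test could be run in practice; the 2×2 counts below are made up purely for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: event yes/no; columns: response yes/no (hypothetical counts).
table = np.array([[30, 10],
                  [20, 40]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, df = {dof}, p-value = {p:.4f}")
print(expected)   # counts expected under independence, n * p_i. * p_.j
```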
If I have a regression model: $$ Y = X\beta + \varepsilon $$ where $\mathbb{V}[\varepsilon] = Id \in \mathcal{R} ^{n \times n}$ and $\mathbb{E}[\varepsilon]=(0, \ldots , 0)$, when would using $\beta\_{\text{OLS}}$, the ordinary least squares estimator of $\beta$, be a poor choice for an estimator?

I am trying to figure out an example where least squares works poorly. So I am looking for a distribution of the errors that satisfies the previous hypotheses but yields bad results. If the family of the distribution were determined by the mean and variance, that would be great. If not, it's OK too. I know that "bad results" is a little vague, but I think the idea is understandable.

Just to avoid confusion, I know least squares is not optimal, and that there are better estimators like ridge regression. But that's not what I am aiming at. I want an example where least squares would be unnatural. I can imagine things like the error vector $\epsilon$ living in a non-convex region of $\mathbb{R}^n$, but I'm not sure about that.

Edit 1: As an idea to help an answer (which I can't figure out how to take further): $\beta\_{\text{OLS}}$ is BLUE. So it might help to think about when a linear unbiased estimator would not be a good idea.

Edit 2: As Brian pointed out, if $X'X$ is badly conditioned, then $\beta\_{\text{OLS}}$ is a bad idea because the variance is too big, and ridge regression should be used instead. What I'm more interested in is knowing what distribution $\varepsilon$ should have in order to make least squares perform badly.

$\beta\_{\text{OLS}} \sim \beta+(X'X)^{-1}X'\varepsilon$

Is there a distribution with zero mean and identity variance matrix for $\varepsilon$ that makes this estimator not efficient?
Brian Borchers answer is quite good---data which contain weird outliers are often not well-analyzed by OLS. I am just going to expand on this by adding a picture, a Monte Carlo, and some `R` code. Consider a very simple regression model: \begin{align} Y\_i &= \beta\_1 x\_i + \epsilon\_i\\~\\ \epsilon\_i &= \left\{\begin{array}{rcl} N(0,0.04) &w.p. &0.999\\ 31 &w.p. &0.0005\\ -31 &w.p. &0.0005 \end{array} \right. \end{align} This model conforms to your setup with a slope coefficient of 1. The attached plot shows a dataset consisting of 100 observations on this model, with the x variable running from 0 to 1. In the plotted dataset, there is one draw on the error which comes up with an outlier value (+31 in this case). Also plotted are the OLS regression line in blue and the least absolute deviations regression line in red. Notice how OLS but not LAD is distorted by the outlier: ![OLS vs LAD with an outlier](https://i.stack.imgur.com/uQhdp.jpg) We can verify this by doing a Monte Carlo. In the Monte Carlo, I generate a dataset of 100 observations using the same $x$ and an $\epsilon$ with the above distribution 10,000 times. In those 10,000 replications, we will not get an outlier in the vast majority. But in a few we will get an outlier, and it will screw up OLS but not LAD each time. The `R` code below runs the Monte Carlo. Here are the results for the slope coefficients: ``` Mean Std Dev Minimum Maximum Slope by OLS 1.00 0.34 -1.76 3.89 Slope by LAD 1.00 0.09 0.66 1.36 ``` Both OLS and LAD produce unbiased estimators (the slopes are both 1.00 on average over the 10,000 replications). OLS produces an estimator with a much higher standard deviation, though, 0.34 vs 0.09. Thus, OLS is not best/most efficient among unbiased estimators, here. It's still BLUE, of course, but LAD is not linear, so there is no contradiction. Notice the wild errors OLS can make in the Min and Max column. Not so LAD. Here is the R code for both the graph and the Monte Carlo: ``` # This program written in response to a Cross Validated question # http://stats.stackexchange.com/questions/82864/when-would-least-squares-be-a-bad-idea # The program runs a monte carlo to demonstrate that, in the presence of outliers, # OLS may be a poor estimation method, even though it is BLUE. library(quantreg) library(plyr) # Make a single 100 obs linear regression dataset with unusual error distribution # Naturally, I played around with the seed to get a dataset which has one outlier # data point. set.seed(34543) # First generate the unusual error term, a mixture of three components e <- sqrt(0.04)*rnorm(100) mixture <- runif(100) e[mixture>0.9995] <- 31 e[mixture<0.0005] <- -31 summary(mixture) summary(e) # Regression model with beta=1 x <- 1:100 / 100 y <- x + e # ols regression run on this dataset reg1 <- lm(y~x) summary(reg1) # least absolute deviations run on this dataset reg2 <- rq(y~x) summary(reg2) # plot, noticing how much the outlier effects ols and how little # it effects lad plot(y~x) abline(reg1,col="blue",lwd=2) abline(reg2,col="red",lwd=2) # Let's do a little Monte Carlo, evaluating the estimator of the slope. # 10,000 replications, each of a dataset with 100 observations # To do this, I make a y vector and an x vector each one 1,000,000 # observations tall. The replications are groups of 100 in the data frame, # so replication 1 is elements 1,2,...,100 in the data frame and replication # 2 is 101,102,...,200. Etc. 
set.seed(2345432) e <- sqrt(0.04)*rnorm(1000000) mixture <- runif(1000000) e[mixture>0.9995] <- 31 e[mixture<0.0005] <- -31 var(e) sum(e > 30) sum(e < -30) rm(mixture) x <- rep(1:100 / 100, times=10000) y <- x + e replication <- trunc(0:999999 / 100) + 1 mc.df <- data.frame(y,x,replication) ols.slopes <- ddply(mc.df,.(replication), function(df) coef(lm(y~x,data=df))[2]) names(ols.slopes)[2] <- "estimate" lad.slopes <- ddply(mc.df,.(replication), function(df) coef(rq(y~x,data=df))[2]) names(lad.slopes)[2] <- "estimate" summary(ols.slopes) sd(ols.slopes$estimate) summary(lad.slopes) sd(lad.slopes$estimate) ```
I have a series of data to which I want to fit my model. The model predicts the probability of success at a given value of x. I have a single data point at a number of points in this space. As each point is either a pass or a fail with a certain probability, I believe I should fit using a Bernoulli maximum likelihood fit; is this correct? So I have a likelihood function which looks something like $$L(\theta,x) = \prod^{n}\_{i=1} \theta^{x\_i}(1-\theta)^{1-x\_i}$$ where $n$ is the number of data points? This is just different enough from the usual binomial likelihood case to have completely thrown me.
I think what you want is [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression) (or at least the likelihood that is maximized in logistic regression). If the wiki isn't enough, search for MLE fitting of logistic regression on Google.
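To make the connection concrete, here is a minimal sketch in Python (the data arrays `x` and `passed` are made up for the illustration): it writes down the Bernoulli log-likelihood from the question, with the success probability modelled through a logistic link, and maximizes it numerically. It is only meant to show the idea, not to replace a proper logistic-regression routine.

```
import numpy as np
from scipy.optimize import minimize

# hypothetical data: one pass/fail observation at each x value
x = np.array([0.2, 0.5, 0.9, 1.3, 1.8, 2.4, 3.0])
passed = np.array([0, 0, 1, 0, 1, 1, 1])

def neg_log_likelihood(params):
    a, b = params
    # success probability at each x, through a logistic link
    theta = 1.0 / (1.0 + np.exp(-(a + b * x)))
    # Bernoulli log-likelihood: sum of x_i*log(theta_i) + (1 - x_i)*log(1 - theta_i)
    return -np.sum(passed * np.log(theta) + (1 - passed) * np.log(1 - theta))

fit = minimize(neg_log_likelihood, x0=np.zeros(2))
print(fit.x)  # estimated intercept and slope of the logistic model
```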
To optimize the parameters of a model we minimize a loss function. To optimize metaparameters we look at a loss/metric on a validation set (or, if we are worried about overfitting on the validation set, we do cross-validation). (Then we check performance on the test set.) Why this difference? Couldn't we consider metaparameters and parameters on the same footing and optimize both of them on the training set? Is there some fundamental mistake with this approach? What would be the consequence of such a choice? My only explanation is that on the validation set we get an estimate of the generalization error, and it is best to use this to select a model with less overfitting. One could do the same with parameters, but it would simply be too expensive (e.g. it is too expensive to treat half of the weights of a neural net as parameters and the other half as hyperparameters). Opinions?
There are few different reasons for the distinction between parameters and hyperparameters (the more common term for what you refer to as meta parameters). 1. Some hyperparameters can not be considered "just another parameter" because doing so would lead us to overfit. As an obvious example, discussed [here](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)#Untrainable_parameters), is how it would be hazardous to allow the polynomial degree of a polynomial regression model to be a parameter because then the model could output a function with degree n+1 (where n is the number of data points in the training set) and achieve no training loss but generalize poorly, due to [overfitting](https://en.wikipedia.org/wiki/Overfitting). 2. Further, some hyperparameters could not be efficiently trained as parameters using traditional optimization techniques. As described [here](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)#Tunability), some hyperparameters could not be optimized efficiently using traditional optimization techniques such as [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent). For more details on optimization and why certain functions/relationships are harder to optimize see [this article](https://machinelearningmastery.com/introduction-to-function-optimization/) for a lay person's view or [here](https://doi.org/10.1007/978-3-319-09174-7_15) for a more academic view if you have institutional access. 3. The hyperparameter may be something we choose to influence the time needed to train, such as the [learning rate](https://en.wikipedia.org/wiki/Learning_rate) in Neural Networks. These are just three of the multiple reasons we differentiate hyperparameters and parameters. A full detailed explanation is outside the bounds of acceptable length for Stack Exchange. ### Edit: I will add some more information here in response to the following clarifying question and suggestion for more detail: > > For point 1, also parameter optimization leads to overfitting, but we do not bother too much... Any comment on this ? > > > > > Thank you. I will accept your answer. Feel free to add a more detailed explanation or links, if you feel like :) > > > The difference in treatment between parameters and hyperparameters with regard to overfitting is a good question. To start, I do not think it is particularly accurate to say that "we do not bother too much" with regard to overfitting with parameters. In fact, this fear of overfitting and focus on generalizability may very well be the most important distinction between traditional statistical modeling and data fitting compared with machine learning. Machine learning models employ various techniques to counter overfitting in their selection of parameters. To give just a few examples, we can use [regularization](https://towardsdatascience.com/8-simple-techniques-to-prevent-overfitting-4d443da2ef7d#d178), where we punish the "size" of the model as in how far the parameter values stray from the origin. Or, we can use "early stopping", where we stop the parameters from reaching the point where they would have the least possible loss on training data in order to avoid overfitting. For a discussion of "early stopping" see [this Wikipedia article](https://en.wikipedia.org/wiki/Early_stopping#:%7E:text=In%20machine%20learning%2C%20early%20stopping,training%20data%20with%20each%20iteration.). 
Putting that aside, the different treatment of overfitting in parameters, as opposed to hyperparameters, has to do with the expressiveness of the hypothesis space being examined. To speak not very rigorously, the expressiveness of a class of functions (which may make up the hypothesis space) has to do with how "complicated" they can be, which impacts how "complicated" a set of data points (and thus the relationship between input values) that they can accurately model. To see more rigorous discussions on expressiveness see [Vapnik–Chervonenkis dimension](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension#VC_dimension_of_a_set-family), [Shattered Sets](https://en.wikipedia.org/wiki/Shattered_set), and perhaps most importantly [Rademacher complexity](https://en.wikipedia.org/wiki/Rademacher_complexity). So, when it comes to parameters, we often attempt to prevent overfitting by instituting some sort of punishment over what we deem to be more expressive functions – but that does not necessarily make the class of functions in the hypothesis space less expressive, as we can still theoretically get expressive outputs that overfit the relationship between the inputs and outputs if we do not have a high enough regularization term. On the contrary, when we deal with overfitting with hyperparameters, we often take steps to explicitly limit the expressiveness of the functions in the hypothesis space, which may for example decrease the number of points they can [shatter](https://en.wikipedia.org/wiki/Shattered_set), which would directly lower the expressiveness of a binary classifier according to the [Vapnik–Chervonenkis dimension](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension#VC_dimension_of_a_set-family). To give an example, without loss of generality, imagine one model where we treat the degree of a polynomial as a parameter and attempt to prevent overfitting through a regularization term which in some way punishes the degree of the polynomial, perhaps by adding $-d\*a$ to the loss, where $d$ is the degree of the polynomial and $a$, is some real number $0<a<1$. In this example, it is still "possible" to have polynomials with very high degree, and further, it is difficult to know what to set as the value for $a$, because we most likely do not know the "true degree" of "true relationship" of data inputs and outputs. Therefore, this approach makes it difficult to appropriately prevent overfitting. Now compare this to a model where the degree of the polynomial is a hyperparameter, by using some hyperparameter tuning technique such as cross-validation we can attempt to estimate what degree polynomial is needed to represent the underlying relationship between inputs and outputs, and thus restrict the hypothesis space to only those polynomials with this degree (or likely this degree or less), and as a result, directly decrease the expressiveness of the function and also eliminate the "guesswork" of how much regularization is needed for the degree of the polynomial based on the perceived relationship between the variables.
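To make the polynomial-degree example concrete, here is a small sketch in Python with invented toy data: the degree is treated as a hyperparameter and chosen by k-fold cross-validation rather than by training loss (which is non-increasing in the degree and so cannot be used to pick it).

```
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)  # toy data

def cv_error(degree, folds=5):
    """Mean squared validation error of a degree-`degree` polynomial fit, via k-fold CV."""
    idx = np.arange(x.size)
    errors = []
    for f in range(folds):
        val = (idx % folds) == f
        coefs = np.polyfit(x[~val], y[~val], degree)
        errors.append(np.mean((np.polyval(coefs, x[val]) - y[val]) ** 2))
    return np.mean(errors)

degrees = range(1, 15)
print(min(degrees, key=cv_error))  # degree selected by validation error, not training error
```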
In 2005 id Software open-sourced the game Quake 3 Arena. When they did, an algorithm was discovered that was remarkably ingenious, and all it did was calculate the inverse of a square root. The easy way to calculate the inverse of a square root is ``` float y = 1 / sqrt(x); ``` But then again this functionality has already been figured out and can be used via the `#include <math.h>` directive. The code from the game is... ``` float InvSqrt(float x){ float xhalf = 0.5f * x; int i = *(int*)&x; i = 0x5f3759df - (i >> 1); x = *(float*)&i; x = x*(1.5f - xhalf*x*x); return x; } ``` So what exactly is the purpose of calculating the inverse of a square root in a non-standard way? Also, why would a game engine be interested in calculating the inverse of a square root anyway?
Inverse square root is used a lot for vector normalisation: $$x\_n = \frac{x}{\sqrt{x^2 + y^2 + z^2}}$$ which has many uses in computer graphics, such as calculating illumination. With a traditional FPU, even a good one, this is a very time-consuming operation: the multiplications and additions are fast (and some can be parallelised), but division is slow (tens of cycles) and square root is slower (or just as slow as division). $\frac{1}{\sqrt{x}}$ can be calculated directly (instead of a square root followed by a division) by using the Newton-Raphson root-finding method, which converges quite fast for this function. While floating-point units offer great precision, computer graphics doesn't need that much, as everything is constrained to a few thousand pixels and a few hundred colour hues. So there is a need for a quick algorithm that gives a good-enough value. And this is the brilliant solution. There are two parts: ``` float xhalf = 0.5f * x; int i = *(int*)&x; i = 0x5f3759df - (i >> 1); x = *(float*)&i; ``` This first part calculates an approximate value; it does floating-point calculations using integers, using some tricks. Floating-point numbers have an exponent part $2^N$ and a mantissa between 1.0 and 1.9999. An integer shift does something like a square root for the exponent part ($\frac{1}{\sqrt{2^N}} = 2^{-N/2}$): this is what the `-(i >> 1)` above does. The 0x5f3759df constant does a good-enough interpolation for the mantissa. ``` x = x*(1.5f - xhalf*x*x); return x; ``` The second part is an application of the Newton-Raphson algorithm, to add more precision to the result. It can be repeated for even more accurate results. But, again, for graphics, it's good enough.
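For illustration, here is a rough Python transcription of the same two steps (the bit-level first guess, then one Newton-Raphson refinement). Python has no pointer casts, so the float/int reinterpretation is emulated with `struct`; the numbers are only approximate.

```
import struct

def fast_inv_sqrt(x):
    """Illustrative transcription of the Quake III routine (positive x only)."""
    xhalf = 0.5 * x
    # reinterpret the 32-bit float as an integer (the *(int*)&x trick)
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5f3759df - (i >> 1)          # magic constant + exponent halving
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # one Newton-Raphson step for f(y) = 1/y**2 - x
    y = y * (1.5 - xhalf * y * y)
    return y

print(fast_inv_sqrt(2.0), 2.0 ** -0.5)  # roughly 0.7070 vs 0.70710678...
```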
I found that in the literature likelihood values are often used to compare different estimation methods for the same model. And I got the impression that this is the only way likelihood values are used. However, I wonder what else we can say about the likelihood function. For example, can we compare two totally different models' likelihood functions? What's the minimum of the likelihood (obviously not zero)? What's a good guess for the maximum without going through an iterative method of estimation? (Can we build up inequalities and claim bounds for the likelihood function?)
1. Can we compare two totally different models' likelihood functions? Not exactly, but you can compare those models using their likelihood functions indirectly, by using the Akaike information criterion (AIC), the Bayesian information criterion (BIC), or the Deviance information criterion (DIC). 2. What's the minimum of the likelihood (obviously not zero)? The likelihood function associated with [some models](https://stats.stackexchange.com/q/27911) can be lower bounded for some data sets, but this is not true in general. The only general lower bound is zero. More generally, the likelihood surface often contains flat ridges when the associated Fisher information matrix is singular at the true value of the parameters (see [this paper](http://biomet.oxfordjournals.org/content/84/1/187.short)). 3. What's a good guess for the maximum without going through an iterative method of estimation? There is no general rule: this is more of an optimisation problem than something related to the likelihood function. 4. Can we build up inequalities and claim bounds for the likelihood function? The exact likelihood is defined as a probability (see Chapter 9 of [this book](http://www.amazon.ca/Probability-Statistical-Inference-Volume/dp/0387961836) or [Wikipedia](http://en.wikipedia.org/wiki/Likelihood_function#Likelihoods_for_continuous_distributions)), so this object is upper bounded by $1$. The problem is that people typically use the continuous approximation of this function (see the aforementioned reference), which is not upper bounded in general and might have singularities. **Other interesting scenarios where the likelihood function is important:** * Hypothesis testing: the likelihood ratio test. * Some information criteria, for example AIC, BIC, and DIC. * Bayesian inference: the posterior distribution is proportional to $Likelihood\times Prior$. * The profile likelihood: used to obtain interval inferences, such as confidence intervals, for the parameters of interest.
I'm new to the CS field and I have noticed that in many of the papers that I read, there are no empirical results (no code, just lemmas and proofs). Why is that? Considering that Computer Science is a science, shouldn't it follow the scientific method?
In programming languages research many ideas for new programming language constructs or new type checking mechanisms stem from theory (perhaps informed by experience in practice, perhaps not). Often a paper is written about such mechanisms from a formal/theoretical/conceptual perspective. That's relatively easy to do. Next comes the first hurdle: implementing the new constructs in the context of an existing compiler and experimenting with it, in terms of efficiency or flexibility. This too is relatively easy. But can we then say that the programming construct constitutes an advance to the science of programming? Can we say that it makes writing programs easier? Can we say that it makes the programming language better? The answer is no. A proper empirical evaluation involving scores of experienced programmers over large periods of time would be needed to answer those kinds of questions. This research is hardly ever done. The only judge of the value of a programming language (and its constructs) is the popularity of the language. And for programming language purists, this goes against what our hypotheses tell us.
In " Gödel, Escher, Bach" Hofstadter introduces the programming languages [Bloop and Floop](https://en.wikipedia.org/wiki/BlooP_and_FlooP). Relevant here is mostly that Floop is Turing complete, while Bloop differs from Floop in one aspect: all loops must be bounded. So only for-loops, no while-loops. Hofstadter shows that not all computable functions are computable by a Bloop program (so that Bloop is apparently really weaker than floop) by exhibiting a function called BLUEDIAG that is not computable by a bloop program. While the function itself is quite explicit, the proof (reproduced below) of *why* this is not programmable in Bloop somehow seems to never use any other property of Bloop except that there are only countably infinitely many Bloop programs. In particular it is not clear to me *where in the proof it is used that loops must be bounded*. To make this point clearer, my question is this: > > If I would explicitly code a program to compute the BLUEDIAG function in Floop, where would I use an unbounded loop? > > > I will show where I got stuck in my own attempt. First a summary of the proof. (If my notation differs from Hofstadter's it is because I am translating back to English from a translated version of the book) BLUEPROGRAMS is a list of all Bloop programs that take one integer input and have one integer output for each input. The programs on the list are sorted by length and within that alphabetically so that there is no ambiguity about the order. BLUEPROGRAMS{N} denotes the Nth entry on this list. BLUEPROGRAMS{N}[M] denotes the output of the program BLUEPROGRAMS{N} when fed the input M. The function BLUEDIAG is defined by BLUEDIAG[N] = BLUEPROGRAMS{N}[N] + 1. Now if BLUEDIAG *were* computable by a Bloop program then this program should appear somewhere on the list BLUEPROGRAMS, say at position $X$. We then get the strange situation that BLUEPROGRAMS{X}[X] = BLUEDIAG[X] = BLUEPROGRAMS{X}[X] + 1, where the first equality represents the assumption that BLUEPROGRAMS{X} calculates BLUEDIAG and the second comes from the definition of BLUEDIAG. Since no number Y satisfies Y = Y + 1 the number BLUEPROGRAMS{X}[X] cannot exist, hence the program BLUEPROGRAMS{X} does not exist. So far so good. As far as proofs go this is really easy, but at the price of obfuscating what about all this is special to Bloop. Now for the quest of making a program that computes BLUEDIAG. My plan of attack is to make three programs. Program 1 takes as input a natural number and as output the code of the Nth blueprogram. Program 2 takes as in put the code of a blueprogram B and a number M and outputs the same number this program B would produce when given the input M Program 3 takes as input a number K and outputs the number K + 1. I convinced myself that program 1 can be written in Bloop: we can show that the Nth blue program has no more than N + 100 characters, so we can just loop through the huge but bounded list of gibberish strings of at most N + 100 characters and count how many of those are bloop programs. I also convinced myself that program 3 is in bloop. So this isolates the problem of needing unbounded loops to program 2. But here I get stuck. When I try to write a program that simulates a different program whose code it is given as input I don't see how to do it - not with unbounded loops and also not without it. 
It seems to require the program to 'break free' from its domain of well-defined software and start talking to the hardware in some sort of meta-language: 'stop treating this the way I treat it (as a string) and view it the same way you view me'. I'm not sure how Floop (or any language, really) would have this power, let alone how unbounded loops come into play. Any help is appreciated!
Program 2 can be a FlooP interpreter. When started in a state in which the source code of a FlooP program is in internal memory, it will execute it. It can do this by keeping its internal memory subdivided into three parts: 1. The FlooP program's source code (represented as a sequence of numbers according to some representation you have chosen, e.g. ASCII, or a more concise bytecode representation). 2. A sequence of numbers representing the current point of execution within that program (e.g., if ASCII is used, there might be an index into the sequence of ASCII characters, but for each loop we're in, we want to store the start of the loop to jump back to after an iteration has finished, and for each bounded loop we're in, we also need to count the number of iterations executed thus far). 3. The contents of the variables of the FlooP program being executed (`OUTPUT` and `CELL(i)` for each `i`) Part 1 is fixed, but 2 and 3 are variable in size and they can grow arbitrarily large (depending on the program being interpreted). This is not a blocking issue, e.g. they can be interleaved. The FlooP program implementing the interpreter will alternate between scanning statements in the source code (within memory area #1) and executing them (on memory area #3). For each type of statement, a function can be defined to execute that statement; for control structures, more than one function may be required. The interpreter will scan the source code, find a statement or control structure, and invoke the functions defined to execute them. I'm not filling in the functions, but I hope this will suffice to convince you that it's doable. (Doesn't Hofstadter explain this? I do not have a copy handy.)
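To make it concrete where the unbounded loop appears, here is a deliberately tiny sketch in Python for a made-up toy instruction set (not Hofstadter's actual FlooP syntax): the interpreter's own dispatch loop is the unbounded loop, because the interpreted program decides how long execution goes on.

```
def run(program, cell0=0):
    """Interpret a toy program: a list of instructions acting on numbered cells.
    Instruction set invented for this sketch: ('inc', i), ('dec', i),
    ('jmp_if_zero', i, target), ('halt',)."""
    cells = {0: cell0}          # part 3: the interpreted program's variables
    pc = 0                      # part 2: the current point of execution
    while True:                 # the unbounded loop: no bound can be fixed in advance
        op = program[pc]        # part 1: the (fixed) source code
        if op[0] == 'halt':
            return cells.get(0, 0)
        elif op[0] == 'inc':
            cells[op[1]] = cells.get(op[1], 0) + 1
        elif op[0] == 'dec':
            cells[op[1]] = max(0, cells.get(op[1], 0) - 1)
        elif op[0] == 'jmp_if_zero' and cells.get(op[1], 0) == 0:
            pc = op[2]
            continue
        pc += 1
```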
I have been working on a set of data which contains information on the width, age, and weight of statues and relates them to the price (I am not actually working on that, but I cannot disclose the topic of my work). I came up with the following regression: $\text{Price} = -9 - \text{width} + 4 \log\_{10}(\text{age}) + 8\,\text{height}$ The minimum-maximum interval for each variable is: ``` width = [0,1[ age = [200,4000] height = [0.65,0.89] ``` My price therefore varies between about 4 and 12. As the value of the constant is -9, which is of the same order of magnitude as my price range, I am wondering if this regression could be criticized for being too generic, with a lot of the price variation unaccounted for and "hidden" inside the constant. Am I trying to over-interpret my data here? Could my dataset be missing a crucial variable, for example the weight?
We rarely report or interpret the intercept in a linear regression model. In your case it is an extrapolation of the data. The intercept would be interpreted as the expected price for a product with width 0, an age of 1, and a height of 0. That is nonsense. A value of -9 is an artifact of the projection. If you want a more interpretable intercept, center the covariates. Then the intercept is the expected price when all covariates take their average values.
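A quick hedged illustration in Python (with data simulated to mimic the ranges in the question; the numbers are invented) of how centering the covariates changes the intercept into something interpretable while leaving the slopes untouched:

```
import numpy as np

rng = np.random.default_rng(1)
n = 200
width = rng.uniform(0, 1, n)
log_age = np.log10(rng.uniform(200, 4000, n))
height = rng.uniform(0.65, 0.89, n)
price = -9 - width + 4 * log_age + 8 * height + rng.normal(scale=0.5, size=n)

def ols(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])          # add the constant
    return np.linalg.lstsq(X1, y, rcond=None)[0]

X = np.column_stack([width, log_age, height])
print(ols(X, price)[0])                   # intercept near -9: an extrapolation
print(ols(X - X.mean(axis=0), price)[0])  # intercept near the mean price
```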
The task: given $n$, compute the sum of all positive integers up to $n$ whose decimal representation contains the digit $0$. For example, if n=14, the output should be 10; if n=22, the output should be 30=10+20; if n=102, the output is (10+20+...+100)+101+102=753. In this problem, n is smaller than $10^{18}$, and the algorithm should finish within 1 second.
Let $S(m)$ be the sum of all numbers with at most $m$ digits, i.e. $S(m)=10^{m}\cdot (10^{m} - 1) / 2$. Let $C(m)$ be the number of at-most-$m$-digit numbers which contain a $0$. Let $F(m)$ be the sum of the numbers with a zero in them that have at most $m$ digits. It is easy to find a recursive relation for $C(m)$ (hint: what is its dual, the zero-free numbers?); after that you can find a recursive relation between $C(m),F(m),S(m)$ as follows. The main problem is finding $F(m)$. Every at-most-$m$-digit number $y\ge 10$ with a zero in it is $10x+d$, where $x$ is its at-most-$(m-1)$-digit prefix and $d$ its last digit; either $x$ already contains a zero (and $d$ can be $0...9$), or $x$ is zero-free and the appended digit is $d=0$: $F(m) = [F(m-1) \cdot 100 + C(m-1) \cdot 45] + (S(m-1) - F(m-1))\cdot 10 \Rightarrow$ $F(m) = 90\cdot F(m-1)+S(m-1)\cdot 10+C(m-1)\cdot 45$ Now, to calculate the sum up to a specific number $n$, first find the number of its digits, write it digit by digit as a sum of powers of 10, and accumulate the contribution of each digit position, using $S, C, F$ for the complete blocks. For such a small range this runs in much less than a second (because it is $O(\log n)$); anyway, you can precompute the tables.
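A small sketch in Python of the block recursion above (for complete blocks $1 \ldots 10^m-1$ only; extending it to an arbitrary $n$ still needs the digit-by-digit scan described in the last paragraph). The brute-force check at the end is just there to confirm the recurrences on small cases.

```
def block_sums(m):
    """S[k], C[k], F[k] over the numbers 1 .. 10**k - 1 (at most k digits):
    S = sum of all of them, C = how many contain a 0, F = sum of those containing a 0."""
    S = [0] * (m + 1)
    C = [0] * (m + 1)
    F = [0] * (m + 1)
    for k in range(1, m + 1):
        S[k] = 10**k * (10**k - 1) // 2
        # zero-free numbers with at most k digits: 9 + 9**2 + ... + 9**k
        C[k] = (10**k - 1) - 9 * (9**k - 1) // 8
        # every y in [10, 10**k - 1] is 10*x + d with x in [1, 10**(k-1) - 1]
        F[k] = 100 * F[k - 1] + 45 * C[k - 1] + 10 * (S[k - 1] - F[k - 1])
    return S, C, F

def brute(k):
    return sum(y for y in range(1, 10**k) if '0' in str(y))

S, C, F = block_sums(5)
assert all(F[k] == brute(k) for k in range(1, 5))
```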
Consider the following local search approximation algorithm for the unweighted max cut problem: start with an arbitrary partition of the vertices of the given graph $G = (V,E) $, and as long as you can move 1 or 2 vertices from one side of the partition to the other, or switch between 2 vertices on opposite sides of the partition, in a way that improves the cut, do so. I know that this algorithm is a 2-approximation algorithm. I want to prove that this approximation is tight. That is, the algorithm is not an $\alpha$-approximation algorithm for any $\alpha < 2$ . I found an example for the tightness of the "regular" local search approximation algorithm of max-cut, where at each iteration, you can move only 1 vertex from one side of the partition to the other, and can't switch between two vertices on opposite sides. The example is of the complete bipartite graph $ K\_{2n,2n} $. If the initial cut includes $n$ vertices from each side of the graph in each side of the partition, the cut will include half of the edges, while the optimal cut will include all of them. The "regular" algorithm will not be able to improve the cut from this initial position. However, this example doesn't work for the algorithm described on top, because we can improve the cut by switching between two vertices from opposite sides of the partition. Can someone please describe an example or a give clue for an example that can prove that the approximation is tight? Thanks.
Let's denote by $u\_1,\ldots,u\_{2n}$ and $v\_1,\ldots,v\_{2n}$ the nodes in the two sides of $K\_{2n,2n}$ respectively. We remove the $2n$ edges $(u\_1,v\_1),(u\_2,v\_2),\ldots,(u\_{2n},v\_{2n})$ from $K\_{2n,2n}$. You can verify the resulting graph is an example showing the approximation ratio of your algorithm is at least $2-1/n$, when the initial cut is $\left(\{u\_1,\ldots,u\_n,v\_1,\ldots,v\_n\},\{u\_{n+1},\ldots,u\_{2n},v\_{n+1},\ldots,v\_{2n}\}\right)$.
I have a problem with the following question: > > Prove that the language $\{uw : |u|=2|w|\}$ is regular. > > > I tried to give the regular expression $(uw^2)^\*$ to solve it.
Any string of the form $uw$ where $|u| = 2|w|$ is a string whose length is a multiple of $3$; conversely, any string whose length is a multiple of $3$ can be split this way. We are therefore considering the language of strings over some alphabet $\Sigma = \{ a\_1, \ldots, a\_k \}$ (not specified in the question) whose length is a multiple of $3$. Let $\Sigma$ also denote the regular expression $a\_1 + \cdots + a\_k$. Then a regular expression describing the language in question is $$(\Sigma\Sigma\Sigma)^\*$$
Consider the equivalence relation $\sim$ on boolean matrices $A,B\in\{0,1\}^{m\times n}$ which is defined as follows: $A\sim B$ :iff there are permutation matrices $P\in\{0,1\}^{n\times n}, Q\in\{0,1\}^{m\times m}$, so that $B=QAP$ In other words two matrices are equivalent, if they are equal up to permutation of rows and columns. A ***canonisation function for $\sim$*** is any function $N$ on the set of all boolean matrices with $N(A)\in \{0,1\}^{m\times n}$ for all $m,n\geq 1$ and $A\in \{0,1\}^{m\times n}$ with the following two properties: 1. $N(A)\sim A$ for every $A$ 2. $A\sim B \Leftrightarrow N(A)=N(B)$ for all $A,B\in\{0,1\}^{m\times n}$ Now i want to find a canonisation function for $\sim$ that is most efficiently computable. One possible canonisation function is the function $\mathrm{MaxLex}$ which maps every matrix $A$ to the lexicographically largest $B$ that is equivalent to $A$. For this i first define a linear order on bitvectors as follows: For $b,c\in\{0,1\}^k$ $b<\_{llex} c$ iff: the number of ones in $c$ is larger than in $b$ or ($b$ and $c$ have the same number of ones and $b(i)>c(i)$ for the first index $i$ with $b(i)\neq c(i)$) Then a lexicographic order $<\_{lex}$ on matrices of equal dimension is defined as follows: $A<\_{lex}B$ iff $w\_A<\_{llex}w\_B$ where $w\_X=X(1,-)X(2,-)\ldots X(m,-)\in\{0,1\}^{mn}$ denotes the bit vector that results from the concatenation of the rows $X(i,-)$ of $X$. Then $\mathrm{MaxLex}(A):=\max\_{<\_{lex}} \{ B : A\sim B\}$ I have found a simple recursive algorithm that computes $\mathrm{MaxLex}$, which has however a worst case runtime of $\mathcal{O}(m!)$. Now my questions are: 1. Is there a more efficient algorithm that computes $\mathrm{MaxLex}$ than my $\mathcal{O}(m!)$ algorithm? 2. Is there a polynomial time algorithm that computes $\mathrm{MaxLex}$ 3. Is there a canonisation function for $\sim$ that is computable in polynomial time? 4. Is anything known about the computational complexity of this problem? I am thankful for any tips, pointers or comments. I have already googled this problem but couldn't find anything. The only thing i found which resembles this is the decision problem of whether a boolean matrix is equivalent to a triangle matrix, which according to [this posting](https://mathoverflow.net/questions/191963/transforming-a-binary-matrix-into-triangular-form-using-permutation-matrices) is in NP, but not known to be NP-hard nor in P.
This kind of problem has been studied, e.g. in the exploitation of symmetries in model-checking and in satisfaction constraint problems. The short answer is that it is $NP$-hard. I suggest this draft by Junttila as a starting point: [A note on the computational complexity of a string orbit problem](https://users.ics.aalto.fi/tjunttil/publications/orbit.ps). It addresses the complexity question (in the subcase of vectors), and references important related work.
To be specific, the problem is formalized as follows. Given a set of integers $\{a\_1,\ldots,a\_n\}$, determine whether there exist non-negative integers $x\_1,\ldots,x\_n$ such that $a\_1x\_1+\cdots+a\_nx\_n=0$ and at least one of $x\_1,\ldots,x\_n$ is positive. Note this is not a duplicate of [this problem](https://cs.stackexchange.com/q/24117/83244). In that problem, the target sum is given as input, while in our problem, the target sum is fixed to be $0$. I tried to follow the idea of the answer to that question to come up with a new reduction, but since a basic idea of that answer is to use big elements as well as big target to prevent an element from being chosen multiple times, while in our problem the target sum is $0$, I failed. Related question: [Is the following Subset Sum variant NP-complete?](https://cs.stackexchange.com/q/74661/83244)
The problem can be solved in polynomial time. If any of the $a\_i$ are zero, then the answer is "yes" (you can set the corresponding $x\_i$ to 1, and all other $x\_j$ to 0). So let's assume all of the $a\_i$ are non-zero. If all of the $a\_i$ are strictly positive, then the answer is "no" (if $x\_i$ is one that's set to something non-zero, then $x\_i>0$ and $a\_i>0$, so $a\_1 x\_1 + \dots + a\_n x\_n \ge a\_i x\_i > 0$). So we can assume at least one of the $a\_i$ is strictly negative. If all of the $a\_i$ are strictly negative, then the answer is "no" (if $x\_i$ is one that's set to something non-zero, then $x\_i>0$ and $a\_i<0$, so $a\_1 x\_1 + \dots + a\_n x\_n \le a\_i x\_i < 0$). So we can assume at least one of the $a\_i$ is strictly positive. If at least one of the $a\_i$ is strictly positive and at least one is strictly negative, say $a\_i>0$ and $a\_j<0$, then the answer is "yes" (you can take $x\_i=-a\_j$, $x\_j=a\_i$, and set all other $x$'s to zero; then $a\_1 x\_1 + \dots + a\_n x\_n = a\_i x\_i + a\_j x\_j = -a\_i a\_j + a\_i a\_j = 0$). These cases cover all the possibilities: at least one of the above cases must hold. So, we obtain a polynomial-time algorithm for the original problem. Credit: I got the key idea from [Discrete lizard](https://cs.stackexchange.com/questions/89194/is-subset-sum-problem-with-multiplicities-np-complete/89214#comment191798_89195).
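The whole case analysis fits in a few lines; here is a sketch in Python (written for this answer, not part of any library):

```
def nontrivial_zero_combination_exists(a):
    """Is there a non-negative integer combination of the a_i, not all zero, summing to 0?"""
    if 0 in a:
        return True                     # take that element once
    has_pos = any(v > 0 for v in a)
    has_neg = any(v < 0 for v in a)
    return has_pos and has_neg          # x_i = -a_j, x_j = a_i as in the last case

print(nontrivial_zero_combination_exists([3, 7, -5]))  # True: 5*3 + 3*(-5) = 0
print(nontrivial_zero_combination_exists([3, 7, 5]))   # False: all strictly positive
```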
What is the limit to the number of independent variables one may enter in a multiple regression equation? I have 10 predictors that I would like to examine in terms of their relative contribution to the outcome variable. Should I use a bonferroni correction to adjust for multiple analyses?
You need to think about what you mean by a "limit". There are limits, such as when you have more predictors than cases, you run into issues in parameter estimation (see the little R simulation at the bottom of this answer). However, I imagine you are talking more about soft limits related to statistical power and good statistical practice. In this case the language of "limits" is not really appropriate. Rather, bigger sample sizes tend to make it more reasonable to have more predictors and the threshold of how many predictors is reasonable arguably falls on a continuum of reasonableness. You may find the [discussion of rules of thumb for sample size in multiple regression](https://stats.stackexchange.com/questions/10079/rules-of-thumb-for-minimum-sample-size-for-multiple-regression) relevant, as many such rules of thumb make reference to the number of predictors. A few points * If you are concerned more with overall prediction than with statistical significance of individual predictors, then it is probably reasonable to include more predictors than if you are concerned with statistical significance of individual predictors. * If you are concerned more with testing a specific statistical model that relates to your research question (e.g., as is common in many social science applications), presumably you have reasons for including particular predictors. However, you may also have opportunities to be selective in which predictors you include (e.g., if you have multiple variables that measure a similar construct, you might only include one of them). When doing theory based model testing, there are a lot of choices, and the decision about which predictors to include involves close connection between your theory and research question. * I don't often see researchers using bonferroni corrections being applied to significance tests of regression coefficients. One reasonable reason for this might be that researchers are more interested in appraising the overall properties of the model. * If you are interested in assessing relative importance of predictors, I find it useful to examine both the bivariate relationship between the predictor and the outcome, as well as the relationship between the predictor and outcome controlling for other predictors. If you include many predictors, it is often more likely that you include predictors that are highly intercorrelated. In such cases, interpretation of both the bivariate and model based importance indices can be useful, as a variable important in a bivariate sense might be hidden in a model by other correlated predictors ([I elaborate more on this here with links](http://jeromyanglim.blogspot.com/2009/09/variable-importance-and-multiple.html)). --- A little R simulation --------------------- I wrote this little simulation to highlight the relationship between sample size and parameter estimation in multiple regression. ``` set.seed(1) fitmodel <- function(n, k) { # n: sample size # k: number of predictors # return linear model fit for given sample size and k predictors x <- data.frame(matrix( rnorm(n*k), nrow=n)) names(x) <- paste("x", seq(k), sep="") x$y <- rnorm(n) lm(y~., data=x) } ``` The `fitmodel` function takes two arguments `n` for the sample size and `k` for the number of predictors. I am not counting the constant as a predictor, but it is estimated. I then generates random data and fits a regression model predicting a y variable from `k` predictor variables and returns the fit. 
Given that you mentioned in your question that you were interested in whether 10 predictors is too much, the following function calls show what happens when the sample size is 9, 10, 11, and 12 respectively. I.e., sample size is one less than the number of predictors to two more than the number of predictors ``` summary(fitmodel(n=9, k=10)) summary(fitmodel(n=10, k=10)) summary(fitmodel(n=11, k=10)) summary(fitmodel(n=12, k=10)) ``` ### > summary(fitmodel(n=9, k=10)) ``` Call: lm(formula = y ~ ., data = x) Residuals: ALL 9 residuals are 0: no residual degrees of freedom! Coefficients: (2 not defined because of singularities) Estimate Std. Error t value Pr(>|t|) (Intercept) -0.31455 NA NA NA x1 0.34139 NA NA NA x2 -0.45924 NA NA NA x3 0.42474 NA NA NA x4 -0.87727 NA NA NA x5 -0.07884 NA NA NA x6 -0.03900 NA NA NA x7 1.08482 NA NA NA x8 0.62890 NA NA NA x9 NA NA NA NA x10 NA NA NA NA Residual standard error: NaN on 0 degrees of freedom Multiple R-squared: 1, Adjusted R-squared: NaN F-statistic: NaN on 8 and 0 DF, p-value: NA ``` Sample size is one less than the number of predictors. It is only possible to estimate 9 parameters, one of which is the constant. ### > summary(fitmodel(n=10, k=10)) ``` Call: lm(formula = y ~ ., data = x) Residuals: ALL 10 residuals are 0: no residual degrees of freedom! Coefficients: (1 not defined because of singularities) Estimate Std. Error t value Pr(>|t|) (Intercept) 0.1724 NA NA NA x1 -0.3615 NA NA NA x2 -0.4670 NA NA NA x3 -0.6883 NA NA NA x4 -0.1744 NA NA NA x5 -1.0331 NA NA NA x6 0.3886 NA NA NA x7 -0.9886 NA NA NA x8 0.2778 NA NA NA x9 0.4616 NA NA NA x10 NA NA NA NA Residual standard error: NaN on 0 degrees of freedom Multiple R-squared: 1, Adjusted R-squared: NaN F-statistic: NaN on 9 and 0 DF, p-value: NA ``` Sample size is the same as the number of predictors. It is only possible to estimate 10 parameters, one of which is the constant. ### > summary(fitmodel(n=11, k=10)) ``` Call: lm(formula = y ~ ., data = x) Residuals: ALL 11 residuals are 0: no residual degrees of freedom! Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.9638 NA NA NA x1 -0.8393 NA NA NA x2 -1.5061 NA NA NA x3 -0.4917 NA NA NA x4 0.3251 NA NA NA x5 4.4212 NA NA NA x6 0.7614 NA NA NA x7 -0.4195 NA NA NA x8 0.2142 NA NA NA x9 -0.9264 NA NA NA x10 -1.2286 NA NA NA Residual standard error: NaN on 0 degrees of freedom Multiple R-squared: 1, Adjusted R-squared: NaN F-statistic: NaN on 10 and 0 DF, p-value: NA ``` Sample size is one more than the number of predictors. All parameters are estimated including the constant. ### > summary(fitmodel(n=12, k=10)) ``` Call: lm(formula = y ~ ., data = x) Residuals: 1 2 3 4 5 6 7 8 9 10 11 0.036530 -0.042154 -0.009044 -0.117590 0.171923 -0.007976 0.050542 -0.011462 0.010270 0.000914 -0.083533 12 0.001581 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.14680 0.11180 1.313 0.4144 x1 0.02498 0.09832 0.254 0.8416 x2 1.01950 0.13602 7.495 0.0844 . x3 -1.76290 0.26094 -6.756 0.0936 . x4 0.44832 0.16283 2.753 0.2218 x5 -0.76818 0.15651 -4.908 0.1280 x6 -0.33209 0.18554 -1.790 0.3244 x7 1.62276 0.21562 7.526 0.0841 . x8 -0.47561 0.18468 -2.575 0.2358 x9 1.70578 0.31547 5.407 0.1164 x10 3.25415 0.46447 7.006 0.0903 . --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.2375 on 1 degrees of freedom Multiple R-squared: 0.995, Adjusted R-squared: 0.9452 F-statistic: 19.96 on 10 and 1 DF, p-value: 0.1726 ``` Sample size is two more than the number of predictors, and it is finally possible to estimate the fit of the overall model.
Can someone explain how FDR procedures are able to estimate an FDR without a model / assumption of the base rate of true positives?
I think that's a really good question; too many people use the Benjamini-Hochberg procedure (abbreviated BH; possibly the most popular procedure to control the FDR) as a black box. Indeed there is an underlying assumption it makes on the statistics and it is nicely hidden in the definition of the p-values! For a well-defined p-value $P$ it holds that $P$ is uniformly distributed ($P\sim U[0,1]$) under the null hypothesis. Sometimes it might even be that $\Pr[P\leq t] \leq t$, i.e. that $P$ is stochastically smaller than uniform, but this only makes the procedures more conservative (and therefore still valid). Thus, by calculating your p-values, using a t-test or really any test of your choice, you are providing the information about the distribution under the null hypothesis. But notice here that I kept talking about the null hypothesis; so what you mentioned about knowledge of the base rate of true positives is *not* needed, you only need knowledge of the base rate of false positives! Why is this? Let $R$ denote the number of all the rejected (positive) hypotheses and $V$ the false positives, then: $$ \text{FDR} = \mathbb E\left[\frac{V}{\max(R,1)}\right] \approx \frac{\mathbb E[V]}{\mathbb E[R]}$$ So to estimate the FDR you need a way of estimating $\mathbb E[R]$, $\mathbb E[V]$. We will now look at decision rules which reject all p-values $\leq t$. To make this clear in the notation I will also write $FDR(t),R(t),V(t)$ for the corresponding quantities/random variables of such a procedure. Since $\mathbb E[R(t)]$ is just the expectation of the total number of rejections, you can unbiasedly estimate it by the number of rejections you observe, so $\mathbb E[R(t)] \approx R(t)$, i.e. simply by counting how many of your p-values are $\leq t$. Now what about $\mathbb E[V]$? Well assume $m\_0$ of your $m$ total hypotheses are null hypotheses, then by the uniformity (or sub-uniformity) of the p-values under the null you get: $$\mathbb E[V(t)] = \sum\_{i \text{ null}} \Pr[P\_i \leq t] \leq m\_0 t$$ But we still do not know $m\_0$, but we know that $m\_0 \leq m$, so a conservative upper bound would just be $\mathbb E[V(t)] \leq m t$. Therefore, since we just need an upper bound on the number of false positives, it is enough that we know their distribution! And this is exactly what the BH procedure does. So, while Aarong Zeng's comment that "the BH procedure is a way to control the FDR at the given level q. It's not about estimating the FDR" is not false, it is also highly misleading! The BH procedure actually *does* estimate the FDR for each given threshold $t$. And then it chooses the biggest threshold, such that the estimated FDR is below $\alpha$. Indeed the "adjusted p-value" of hypothesis $i$ is essentially just an estimate of the FDR at the threshold $t=p\_i$ (up to isotonization). I think the standard BH algorithm hides this fact a bit, but it is easy to show the equivalence of these two approaches (also called the "equivalence theorem" in the Multiple testing literature). As a final remark, there do exist methods such as Storey's procedure which even estimate $m\_0$ from the data; this can increase power by a tiny bit. Also in principle you are right, one could also model the distribution under the alternative (your true positive base rate) to get more powerful procedures; but so far the multiple testing research has mainly focused on maintaining control of type-I error rather than maximizing power. 
One difficulty would also be that in many cases each of your true alternatives will have a different alternative distribution (e.g. different power for different hypotheses), while under the null all p-values have the same distribution. This makes the modelling of the true positive rate even more difficult.
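A small sketch in Python of the estimation view described above, on simulated p-values (the mixture of nulls and alternatives is invented for the example): for each candidate threshold $t = p\_{(i)}$ the FDR is estimated by $m\,t / R(t)$, and BH rejects at the largest threshold whose estimate is below $\alpha$.

```
import numpy as np

rng = np.random.default_rng(0)
m0, m1 = 900, 100
# nulls: uniform p-values; alternatives: p-values concentrated near 0 (toy choice)
p = np.concatenate([rng.uniform(size=m0), rng.beta(0.1, 1.0, size=m1)])
m, alpha = p.size, 0.1

order = np.argsort(p)
p_sorted = p[order]
# estimated FDR when rejecting every p-value <= p_(i): m * p_(i) / i
fdr_hat = m * p_sorted / np.arange(1, m + 1)
passing = np.nonzero(fdr_hat <= alpha)[0]
k = passing.max() + 1 if passing.size else 0     # largest threshold that passes
reject = np.zeros(m, dtype=bool)
reject[order[:k]] = True
print(reject.sum(), "rejections")
```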
Suppose in a given plane there is a fixed number of lines. A point P lies on one of the lines. How can we find which line passes through the point P? I am giving an example: ![enter image description here](https://i.stack.imgur.com/0CN2J.png) In the above graph, point P is on the line CE. We can determine this visually, but my problem is how to make the computer determine it. Is there an algorithm for this?
Any finite line is described by two points. Assuming we are talking 2D here, your graph can be described as a set of points, each having X and Y coordinates, and a set of edges $E$ describing which pairs of points are connected by a line. Elementary geometry tells us that the function expressing the points on such a line is $Y=(X-X\_0)\frac{Y\_1-Y\_0}{X\_1-X\_0}+Y\_0$ (where $(X\_0,Y\_0), (X\_1,Y\_1)$ denote the two points' coordinates). To check whether a point belongs to this line, insert the point's coordinates into the equation and see whether the equation holds. Since the line is not infinite, we should also check whether the point lies between the two points defining the line. If the equation holds, it is sufficient to check that one of the point's coordinates (say $X$) is between the corresponding coordinates of the two points defining the line ($X\_0$ and $X\_1$). So, for every line in the graph, perform the check explained above; if the criterion holds, the line contains P. Checking whether P is on a specific line takes a constant amount of computational effort, so the running time of this procedure is $O(E)$.
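Below is a hedged sketch of that check in Python. To sidestep the division (which breaks down for vertical segments, where $X\_1 = X\_0$), it uses the equivalent cross-product form of the same line equation, plus a bounding-box test for "between the two endpoints"; a small tolerance handles floating-point coordinates.

```
def on_segment(p, a, b, eps=1e-9):
    """True if point p lies on the segment from a to b (points given as (x, y))."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    # collinearity: the cross product of (b - a) and (p - a) must be (near) zero;
    # this is the line equation from the answer with the division cleared
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    if abs(cross) > eps:
        return False
    # the point must also lie between the two endpoints
    return (min(ax, bx) - eps <= px <= max(ax, bx) + eps and
            min(ay, by) - eps <= py <= max(ay, by) + eps)

segments = {"CE": ((1, 1), (4, 4)), "AB": ((0, 3), (5, 1))}
P = (2, 2)
print([name for name, (a, b) in segments.items() if on_segment(P, a, b)])  # ['CE']
```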
I solved this problem from CodeChef: [problem link](https://discuss.codechef.com/questions/60008/taqtree-editorial "problem link") and now I want to change it a bit. Instead of finding the distance between nodes $u$ and $v$, I want to answer $k$ queries of the form: find the node $u$ whose distance from node $v$ is maximum. But I'm not able to see a fast enough algorithm for it. I think that heavy-light decomposition is still my best option. Do you have any ideas? Finding the node at maximum distance naively is $O(n)$ per query, but the challenge here is that I have to answer $k$ queries, so the total complexity is $O(nk)$. I believe that this can be done in $O((n+k)\log n)$.
Start from any node and find (say, with a BFS) a farthest node from it; call it $u$. Then start from $u$ and find a farthest node from it; call it $v$. Nodes $u$ and $v$ achieve the absolute maximum distance in the tree. Now with 2 more BFS, find the distances of all nodes from $u$, and the distances of all nodes from $v$. At this point you are ready to answer the queries: for each queried node $z$, the most distant node is either $u$ or $v$, depending on which has the larger distance. This solves your problem in $O(n + k)$ time.
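Here is a sketch of that preprocessing and query step in Python, assuming an unweighted tree given as an adjacency list (for weighted edges the same idea works, with BFS replaced by a traversal that accumulates edge weights):

```
from collections import deque

def bfs_dist(adj, src):
    """Distance (in edges) from src to every node of the tree."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def preprocess(adj, start):
    d = bfs_dist(adj, start)
    u = max(d, key=d.get)         # farthest from an arbitrary node
    du = bfs_dist(adj, u)
    v = max(du, key=du.get)       # farthest from u, so (u, v) is a diameter
    dv = bfs_dist(adj, v)
    return u, v, du, dv

def farthest_from(z, u, v, du, dv):
    # the farthest node from any z is one of the two diameter endpoints
    return (u, du[z]) if du[z] >= dv[z] else (v, dv[z])
```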
Does a well-known algorithm exist for this problem? As input I have two series of timestamps recording "when the event was observed". Theoretically the recorded timestamps should be very well aligned. Here is the visualized ideal situation on two time lines "s" and "r" as recorded from the two devices: ![Ideal input situation](https://i.stack.imgur.com/0y3Wp.png) But more likely they will not be so nicely aligned, and there might be missing events from timeline s or r: ![Visualized timestamps of events in real world conditions](https://i.stack.imgur.com/wqFQQ.png) I am looking for an algorithm to match events from "s" and "r" like this: ![Matched events](https://i.stack.imgur.com/hBkxc.png) So that the result will be something like: (s1,r1); (s2,r2); (s3,null); (s4,r3); (null,r4)... Or something similar, maybe with some "confidence" rating. It feels like there should be some well-known algorithm for this problem, but I am a bit out of my element here. Thanks.
Here is a pragmatic approach: 1. Look for matching pairs. Consider a pair $(s,r)$ (where $s$ is from the first device and $r$ from the second device) to be a match if their timestamps are very close, i.e., $|s-r| \le \epsilon$ for some small fixed value of $\epsilon$ that you choose in advance. $\epsilon$ is a tolerance. To find matching pairs, for each $s$, you look for a value $r$ in the range $[s-\epsilon,s+\epsilon]$. You can quickly find all matching pairs by first sorting $s$ and $r$ individually, then doing a linear scan across them. 2. Count how many values don't have a match. Use the number of values that don't have a match to derive a confidence score. The more mismatches, the less confident you are. There are a number of elaborations possible: 1. For instance, if you've found $n$ matching pairs $(s\_1,r\_1),\dots,(s\_n,r\_n)$, you can use $|s\_1-r\_1|+\dots+|s\_n-r\_n|$ as an additional confidence measure: the higher this is, the worse a match it is. You could compute a weighted sum of $|s\_1-r\_1|+\dots+|s\_n-r\_n|$ and the number of mismatched pairs. 2. If you frequently find that many points can be matched to multiple other possible points (one $s$ has multiple $r\_i$'s it could match to), you could try framing this as a [bipartite graph matching problem](https://en.wikipedia.org/wiki/Matching_%28graph_theory%29). Basically, you have a vertex for each sample point, and an edge between $s$ and $r$ if $|s-r|\le \epsilon$; now find a maximum matching in this bipartite graph, and that will give you a way to find the maximum number of matched pairs. Then you can use the number of unmatched values as a confidence score. You can also use some kind of search over a range of $\epsilon$ values, if it's not clear how to choose $\epsilon$ a priori. However, this is probably overkill and unnecessary, if events are well-aligned (if the typical distance between two events from a same device is much larger than the typical alignment error in the timestamp for an event at one device vs the timestamp at the other device). If events are well-aligned, simply choosing a suitable threshold $\epsilon$ and applying the simple approach at the top of my answer will probably suffice. 3. If you expect systematic clock skew, where one device's clock is a bit ahead of the other device's clock (by a constant), you could correct for this in a number of ways: * One simple pragmatic way is to look for a set of matching pairs $(s\_1,r\_1),\dots,(s\_n,r\_n)$ in a first pass (this first pass doesn't have to be perfect, so you could use a larger value of $\epsilon$). Then, use the median of $s\_1-r\_1, \dots, s\_n-r\_n$ as your estimate of the clock skew; say the median is $\Delta$. Now replace each $s\_i$ with $s'\_i=s\_i-\Delta$. Finally, do a second pass where you match the (corrected) $s'\_i$'s to the $r\_j$'s, maybe with a smaller value of $\epsilon$. In practice this might work very well. * Alternatively, you could use [RANSAC](https://en.wikipedia.org/wiki/RANSAC) to fit a line $s=r+c$ to the data, where the constant $c$ is unknown. Normally RANSAC is described as a way to fit a line $y=ax+c$ (in this setting, $s=ar+c$), but in your application, we can fix $a=1$ (assuming both clocks run at the same rate and we care only about clock skew) and then apply RANSAC methods to find the best fit line, i.e., to find the constant $c$ that provides the best fit.
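Here is a rough sketch of step 1 in Python (a greedy two-pointer scan over the two sorted timestamp lists, with `eps` the tolerance). It is only one reasonable way to form the pairs; for the ambiguous cases the bipartite-matching variant mentioned above is the more principled option.

```
def match_events(s, r, eps):
    """Greedily pair timestamps from s and r that are within eps of each other.
    Returns the list of (s_value_or_None, r_value_or_None) pairs and the mismatch count."""
    s, r = sorted(s), sorted(r)
    i = j = 0
    pairs = []
    while i < len(s) and j < len(r):
        if abs(s[i] - r[j]) <= eps:
            pairs.append((s[i], r[j]))
            i, j = i + 1, j + 1
        elif s[i] < r[j]:
            pairs.append((s[i], None))   # event seen only by the first device
            i += 1
        else:
            pairs.append((None, r[j]))   # event seen only by the second device
            j += 1
    pairs += [(x, None) for x in s[i:]] + [(None, x) for x in r[j:]]
    mismatches = sum(1 for a, b in pairs if a is None or b is None)
    return pairs, mismatches

print(match_events([1.00, 2.10, 3.00, 4.20], [1.05, 2.00, 4.25, 5.00], eps=0.1))
```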
I am performing a meta-analysis. I have a subgroup made up of only 2 studies, and they show opposite results: the 1st has excellent outcomes, the 2nd very bad ones. Is it true that I am not allowed to combine the data from these two (different) studies under a random-effects model? I have been told that it is methodologically incorrect to perform a meta-analysis of 2 studies with such opposite results. They suggested that I report the raw data from these studies separately (without an overall outcome calculated by a random-effects model). Any suggestions? Or any place where I can find an explanation of this point?
Try to find the confounding variable. See if you get any interactions, or maybe run another test with a different methodology (and more variables) that may give you a clearer picture.
How would we solve the knapsack problem if we now have to fix the number of items in the knapsack to a constant $L$? This is the same problem (max weight of $W$, every item has a value $v$ and weight $w$), but you must add *exactly* $L$ item(s) to the knapsack (and obviously you still need to optimize the total value of the knapsack). Every way I've thought of implementing this so far (dynamic programming, brute force) has resulted in either failure or lifetime-of-the-universe computation times. Any help or ideas are appreciated. Edit: I am looking for pseudo-polynomial time solutions.
You can transform this problem into an instance of Knapsack. Let $n$ be the number of items, $V$ be the maximum value of an item and suppose that each item weighs at most $W$ (otherwise it can be discarded). To ensure that you select at least $L$ items: * Add $n(V+1)$ to the value of every item. * Now the problem is equivalent to that of maximizing the number of selected items, breaking ties in favor of set of items with largest original total value, subject to weight constraints. There is a solution that selects at least $L$ items iff the optimal solution has a value of at least $n(V+1)L$. To ensure that you select at most $L$ items: * Add $(W+1)$ to the weight of every item. * Add $L(W+1)$ to $W$. * Now every subset of more than $L$ items weighs at least $(L+1)(W+1) = L(W+1) + (W+1) > L(W+1) + W$, and is hence not feasible. Every subset of $L$ items that had an overall weight of $w$, now weighs $L(W+1) + w$, and hence is feasible iff $w \le W$. A subset with less than $L$ items might be feasible even if its overall weight is more than $W$, however its total value will always be smaller than $n(V+1)L$.
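For completeness, since the question explicitly asks for a pseudo-polynomial algorithm, here is a sketch of the direct dynamic program (an alternative to the reduction above, not an implementation of it): add an "exactly $l$ items" dimension to the usual knapsack table.

```
def knapsack_exactly_L(values, weights, W, L):
    """Max total value of a subset of exactly L items with total weight <= W.
    O(n * L * W) time; returns None if no such subset exists."""
    NEG = float('-inf')
    # dp[l][w]: best value using exactly l items of total weight exactly w
    dp = [[NEG] * (W + 1) for _ in range(L + 1)]
    dp[0][0] = 0
    for v, wt in zip(values, weights):
        for l in range(L, 0, -1):              # descending: each item used at most once
            for w in range(W, wt - 1, -1):
                if dp[l - 1][w - wt] != NEG:
                    dp[l][w] = max(dp[l][w], dp[l - 1][w - wt] + v)
    best = max(dp[L])
    return None if best == NEG else best

print(knapsack_exactly_L([10, 7, 4], [2, 3, 1], W=5, L=2))  # 17: the first two items
```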
I was taught to control for multiple comparisons, i.e. when I do more than one test at some significance level alpha, to lower alpha as given by some choice of a multiple comparisons procedure. Anyone who can answer my question knows what I am talking about. My question is: do I account for multiple comparisons

* per section of a paper - so that each part, or section, of a paper has a level of alpha;
* per paper - so that the whole paper has a level of alpha;
* per dataset - so that the whole analysis of the dataset has a level of alpha;
* per research question - which aims to answer a question and may incorporate multiple datasets and analyses of them, so that the answer to the question has a level of alpha, and may constitute multiple papers, or a single paper with addenda;
* or per individual - my whole lifetime - so that I have a level of alpha (this is clearly a joke, although to be able to say that I have a significance level of alpha would be both hilarious and impressive)?

Or something else I haven't thought of, or perhaps it seems to you that I misunderstand something.
Stirling's approximation gives $$\Gamma(z) = \sqrt{\frac{2\pi}{z}}\,{\left(\frac{z}{e}\right)}^z \left(1 + O\left(\tfrac{1}{z}\right)\right)$$ so $$\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})} = \dfrac{\sqrt{\frac{2\pi}{\frac{n+1}{2}}}\,{\left(\frac{\frac{n+1}{2}}{e}\right)}^{\frac{n+1}{2}}}{\sqrt{\frac{2\pi}{\frac{n}{2}}}\,{\left(\frac{\frac{n}{2}}{e}\right)}^{\frac{n}{2}}}\left(1 + O\left(\tfrac{1}{n}\right)\right)\\= {\sqrt{\frac{\frac{n+1}{2}}{e}}}\left(1+\frac1n\right)^{\frac{n}{2}}\left(1 + O\left(\tfrac{1}{n}\right)\right) \\= \sqrt{\frac{n}{2}} \left(1 + O\left(\tfrac{1}{n}\right)\right)\\ \sim \sqrt{\frac{n}{2}}$$ and you may have a slight typo in your question. In fact, when considering limits as $n\to \infty$, you should not have $n$ in the solution; instead you can say that the ratio tends to $1$, and it turns out here that the difference tends to $0$. Another point is that $\sqrt{\frac{n}{2}-\frac14}$ is a better approximation, in that not only does the difference tend to $0$, but so too does the difference of the squares.
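A quick numerical check of the two approximations, using log-gamma to avoid overflow (Python, standard library only):

```
import math

def gamma_ratio(n):
    return math.exp(math.lgamma((n + 1) / 2) - math.lgamma(n / 2))

for n in (10, 100, 10000):
    print(n, gamma_ratio(n), math.sqrt(n / 2), math.sqrt(n / 2 - 0.25))
# the last column tracks the ratio noticeably better than sqrt(n/2)
```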
With the advent of the internet (and common sense) there is more and more demand for open-access research. Several researchers (including me) find it frustrating that published peer-reviewed research articles are behind paywalls. I am looking for journals and conferences (related to theoretical computer science, graph theory, combinatorics, combinatorial optimization) that make all accepted publications freely available to everyone. Some such journals are [Theory of Computing](http://theoryofcomputing.org/), [The Electronic Journal of Combinatorics](http://www.combinatorics.org/), and [Logical Methods in Computer Science](https://lmcs.episciences.org/). If you know more such journals or conferences, please mention them in your answers. EDIT: As suggested by David Eppstein in his answer, I am adding one more constraint. Please list only those journals/conferences that do not charge authors exorbitant fees to publish.
Conferences with proceedings published in the [LIPIcs](http://www.dagstuhl.de/en/publications/lipics/) series: * STACS * FSTTCS * CCC (since 2015) * TQC (Theory of Quantum Computation, Communication and Cryptography) * ICALP * APPROX/RANDOM * SoCG * SWAT * ESA * MFCS
While trying to fix a bug in a library, I searched for papers on finding subranges of red-black trees, without success. I'm considering a solution using zippers and something similar to the usual *append* operation used in deletion algorithms for immutable data structures, but I'm still wondering whether there's a better approach I wasn't able to find, or even some lower bound on the complexity of such an operation. Just to be clear, I'm talking about an algorithm that, given a red-black tree and two boundaries, will produce a new red-black tree with all the elements of the first tree that fall within those boundaries. Of course, an upper bound for the complexity would be the complexity of traversing one tree and constructing the other by adding elements.
This answer combines some of my comments to the question and expand them. The subrange operation on red-black trees can be performed in worst-case O(log n) time, where n is the number of elements in the original tree. Since the resulting tree will share some nodes with the original tree, this approach is suitable only if trees are immutable (or trees are mutable but the original tree is no longer needed). First notice that the subrange operation can be implemented by two split operations. Here the *split* operation takes a red-black tree T and a key x and produces two trees L and R such that L consists of all the elements of T less than x and R the elements of T greater than x. Therefore, our goal now is to implement the split operation on red-black trees in worst-case O(log n) time. How do we perform the split operation on red-black trees in O(log n) time? Well, it turned out that there was a well-known method. (I did not know it, but I am no expert of data structures.) Consider the *join* operation, which takes two trees L and R such that every value in L is less than every value in R and produces a tree consisting of all the values in L and R. The join operation can be implemented in worst-case time O(|rL−rR|+1), where rL and rR are the ranks of L and R, respectively (that is, the number of black nodes on the path from the root to each leaf). The split operation can be implemented by using the join operation O(log n) times, and the total worst-case time is still O(log n) by considering a telescoping sum. Sections 4.1 and 4.2 of a book [Tar83] by Tarjan describe how to implement the join and the split operations on red-black trees in worst-case time O(log n). These implementations destroy original trees, but it is easy to convert them to immutable, functional implementations by copying nodes instead of modifying them. As a side note, the Set and the Map modules of [Objective Caml](http://caml.inria.fr/ocaml/) provide the split operation as well as other standard operations on (immutable) balanced binary search trees. Although they do not use red-black trees (they use balanced binary search trees with the constraint that the left height and the right height differ by at most 2), looking at their implementations might be useful, too. Here is [the implementation of the Set module](http://caml.inria.fr/cgi-bin/viewcvs.cgi/ocaml/trunk/stdlib/set.ml?view=log). References [Tar83] Robert Endre Tarjan. *Data Structures and Network Algorithms*. Volume 44 of *CBMS-NSF Regional Conference Series in Applied Mathematics*, SIAM, 1983.
This is my first post, so I apologize if I've missed something important. I'm preparing to pursue a PhD in statistics, and I've recently realized that my linear algebra knowledge is not "up to par." The class I took stopped at orthogonal subspaces, and I know that's just not sufficient. Does anyone have recommendations for a linear algebra textbook that would be suitable for self-study and get me to the level I need? I've heard good things about *Linear Algebra Done Right* and *Linear Algebra Done Wrong*. Thanks so much.
These spring to mind:

* Searle, S. R., & Khuri, A. I. (2017). *Matrix Algebra Useful for Statistics*. John Wiley & Sons.
* Banerjee, S., & Roy, A. (2014). *Linear Algebra and Matrix Analysis for Statistics*. CRC Press.
* Gentle, J. E. (2012). *Numerical Linear Algebra for Applications in Statistics*. Springer Science & Business Media.

Also, this is a classic, although not specific to statistics:

* Strang, G. (1993). *Introduction to Linear Algebra* (Vol. 3). Wellesley, MA: Wellesley-Cambridge Press.
Suppose I'm interested in $\Pr(g(x)=1)$, where $g:\mathbb{R}\rightarrow\{0,1\}$. In this context, $x$ is the realization of a random variable, $X$. I would like to emphasize this by rewriting the probability as: $$ \Pr(g(x)=1) = \Pr(g(X)=1 \mid X=x) $$ Is it kosher to write this? Or am I violating some rule of conditional probability?
The role of random variation in population statistics is a matter of some judgement. Many people believe that possible variation in complete counts, including the census, has no bearing on analysis.

I believe that if you just want to count and describe what happened, and aren't really interested in a model, in reasons for variation, or in likely future patterns of events, then you don't need to account for any random variation or, perhaps more accurately, deviation from a model. I have never had a problem like that. Nearly all analyses of census data need to account for variation over time in the population, particularly in small subpopulations.

If you are interested in making any statements that generalize in any way beyond the one-time experience of a population, then measuring and accounting for variation, often called random, is necessary. There are a few circumstances in which you are only interested in what did happen; if you are a tax collector, for example. But even a tax collector wants to make a budget for next year and has to take some account of possible change.
**Now I see it can't hold. Thank you for the counterexamples... You guys rule!**

**Thank you very much for your comments!** I added, however, some observations that were missing. Most important is the fact that we can assume there is a positive covariance between $X$ and $Y$. At first it seemed to me that this would be easy to demonstrate... but I still did not manage to solve this problem. Can you guys give me a hand?

Suppose we make use of **i)** a time series $X = [x_1,\ldots,x_N]$ containing only positive entries (i.e. $0 \leq x_i$ for all $i$), and **ii)** a vector of weights of the same length, $Y = [y_1,\ldots,y_N]$, where $0 \leq y_i \leq 1$ for all $i$, to build **iii)** a time series $Z = [z_1,\ldots,z_N]$ whose $i$th term is given by $z_i = x_i y_i$, i.e. $Z = [x_1 y_1,\ldots,x_N y_N]$.

Clearly, as $y_i \in [0,1]$, we have $0 \leq z_i \leq x_i$ for all $i$.

**Question)** Can we demonstrate that $\text{var}(Z) \leq \text{var}(X)$?

For example, if $X = [2, 6, 99, 12, 3, 1]$ and $Y = [0.34, 0.01, 0.2, 1, 0.3, 0.17]$, we have

$Z = [x_1 y_1,\ldots,x_N y_N] = [0.68, 0.06, 19.8, 12, 0.9, 0.17]$

$\widehat{\sigma}^{2}_{X} = 1494.70$

$\widehat{\sigma}^{2}_{Z} = 69.81$

**Important observations:**

1) $X$ and $Y$ are stationary, ergodic random processes.

2) $X$ is not a constant time series, in the sense that $\text{var}(X) > 0$.

3) It can be assumed that $\text{var}(X) \geq \text{var}(Y) \geq 0$.

4) There is a positive covariance between $X$ and $Y$.

* Possible implication of 4)? As $0 \leq Z \leq X$, we can define a time series $W \geq 0$ such that $Z + W = X$. Thus $\text{var}(X) = \text{var}(Z + W) = \text{var}(Z) + \text{var}(W) + 2\,\text{cov}(Z,W)$. Note that if $\text{cov}(Z,W) \geq 0$ then $\text{var}(X) \geq \text{var}(Z)$, because $\text{var}(W)$ is also nonnegative. Does the fact that $\text{cov}(X,Y) \geq 0$ imply that $\text{cov}(Z,W) \geq 0$? Is there any condition that guarantees $\text{cov}(Z,W) > 0$?

Why was I so convinced that $\text{var}(Z) \leq \text{var}(X)$? In the application I am interested in, I have observed that the relation $\text{var}(Z) \leq \text{var}(X)$ holds every time I run my algorithm. If I cannot demonstrate that $\text{var}(Z) \leq \text{var}(X)$ holds given observations 1) to 4), I would like to know what is forcing that relation; for example, $\text{cov}(Z,W) \geq 0$ as mentioned above.

Thanks again for the replies! Cheers
Clearly not. An easy counterexample (here done in R) that I think satisfies all your constraints:

```
set.seed(239843)
x=rnorm(100,100,1)
y=rep(c(0.01,0.99),times=50)
z=x*y
var(x)
[1] 0.8413043
var(y)
[1] 0.2425253
var(z)
[1] 2425.296
```

What's going on:

1. x is a series with mean 100 and sd 1.
2. y alternates between 0.01 and 0.99.
3. z=xy therefore alternates between (about) 1 and 99, but is always $< x$.

> Alternative [more general] question) Assuming finite variances, is it true that for any random variables $a$ and $b$ such that $0 \leq a \leq b$, we have $\text{var}(a) \leq \text{var}(b)$?

Even more clearly not; without the need for a "y"-like variable, it's pretty obvious: consider one set of values that alternates between 1 and 99, and a second one that alternates between 100 and 101.

---

Adding in the new condition that X and Y have positive covariance:

```
set.seed(239843)
oldx=rnorm(100,100,1)
y=rep(c(0.01,0.99),times=50)
x = oldx + y   # oldx and y are independent, so x and y now have +ve covariance
z=x*y
cov(x,y)       # sample covariance happens to be positive in this case also
[1] 0.2739745
var(x);var(y);var(z)
[1] 1.065326
[1] 0.2425253
[1] 2481.243
```

If you work out the answers for this case algebraically (compute the population variances and the relevant population covariance), you'll see this isn't just a numerical accident from a fortunate choice of seed.
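For what it's worth, here is a rough population-level version of that calculation, under the simplifying assumption (mine, not part of the simulation) that $Y$ is a fair coin on $\{0.01, 0.99\}$ independent of $X_{\text{old}} \sim N(100, 1)$, with $X = X_{\text{old}} + Y$ and $Z = XY$:

$$
\begin{aligned}
E[Y] &= 0.5, \quad E[Y^2] = \tfrac{0.01^2 + 0.99^2}{2} = 0.4901, \quad E[Y^3] \approx 0.4852, \quad E[Y^4] \approx 0.4803,\\
\text{var}(X) &= \text{var}(X_{\text{old}}) + \text{var}(Y) = 1 + 0.2401 = 1.2401, \quad \text{cov}(X,Y) = \text{var}(Y) = 0.2401 > 0,\\
E[Z] &= E[X_{\text{old}}]\,E[Y] + E[Y^2] \approx 50.49,\\
E[Z^2] &= E[X_{\text{old}}^2]\,E[Y^2] + 2\,E[X_{\text{old}}]\,E[Y^3] + E[Y^4] \approx 10001(0.4901) + 200(0.4852) + 0.48 \approx 4999.0,\\
\text{var}(Z) &= E[Z^2] - E[Z]^2 \approx 4999.0 - 50.49^2 \approx 2450 \;\gg\; \text{var}(X) \approx 1.24,
\end{aligned}
$$

which is in the same ballpark as the sample values above, so the positive covariance really doesn't rescue the conjecture.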
1. Are there any specific properties of the adjacency matrix when a graph is planar?

2. Is there anything special about computing the permanent of the adjacency matrix when a graph is planar?
Computing the determinant and the permanent of the adjacency matrix of a planar graph is as hard as computing them for general graphs: they are complete for **GapL** and **#P**, respectively. See this paper by [Datta, Kulkarni, Limaye, Mahajan](http://portal.acm.org/citation.cfm?id=1714450.1714453) for more details.