I wanted to recreate the model described in this paper: <https://arxiv.org/pdf/1610.09204v1.pdf>. I am using Keras with the TensorFlow backend, and a GTX 1050 Ti. I am an ML beginner, and thought this would be a good way to get a hands-on feel for things. However, my model is not converging (the loss is the same as in the first epoch). This is what I read from that paper:

> The first convolutional layer receives an input of 56px by 56px images with RGB channels. It uses 32 filters of size 5×5×3, stride 1 and then sampled with max pooling of size 2 × 2, stride 1. The second convolutional layer has 64 filters of size 5×5×32, stride 1 and a max pooling of size 2 × 2, stride 1. The results of the second max pooling provide the first fully-connected layer with a vector of length 12,544 (14 × 14 × 64) which are used by 512 neurons. The final fully-connected output layer uses a 20-wide softmax [21] which represents the probability of each respective 20 class labels. This architecture is similar to the LeNet model [3], but with using rectified linear unit (ReLU) [22] activation functions instead of sigmoid activation functions. We also use dropout [23], a technique to prevent overfitting, with a keep probability of 0.5 for the fully-connected layers.

and my code is:

```
model = Sequential()
model.add(Convolution2D(32, 5, 5, border_mode='same', input_shape=(70, 52, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Convolution2D(64, 5, 5, border_mode='same'))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(output_dim=512))
model.add(Activation("relu"))
model.add(Dense(output_dim=2))
model.add(Activation("softmax"))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, nb_epoch=70, batch_size=500, verbose=1)
```

The full code can be found here: <https://gist.github.com/harveyslash/5c98f9fdab0d53a2a48f477a52d8588d>. I scraped the data from Goodreads. Help appreciated!

**EDIT** I forgot to actually ask what I wanted. Since it's my first experiment, I would like to ask what are some things that I should do to make my model converge.
You may try the [Stochastic Gradient Descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) optimizer with learning rate decay and [Nesterov momentum](http://www.jmlr.org/proceedings/papers/v28/sutskever13.pdf). You can also try a different `batch_size`. Also, you are missing the dropout layers between the fully connected layers which the authors used. Try

```
from keras.optimizers import SGD

...
# flatten the conv layers
model.add(Flatten())
model.add(Dropout(0.5))

# fully connected 512
model.add(Dense(output_dim=512))
model.add(Activation("relu"))
model.add(Dropout(0.5))

# fully connected output layer
model.add(Dense(output_dim=2))
model.add(Activation("softmax"))

# compile
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=10e-4, decay=10e-6, momentum=0.99, nesterov=True),
              metrics=['accuracy'])

# train
model.fit(x_train, y_train, nb_epoch=70, batch_size=64, verbose=1)
```

In my experience this usually helps a lot. If you get this setting to converge, you may then try `rmsprop` and `adam`.
Consider a Poisson process with rate $\lambda$ and let $L$ be the time of the last arrival in the interval $[0,t]$, with $L=0$ if there was no arrival. How can I prove that $t-L$ has an exponential distribution with rate $\lambda$? I tried to prove it via the relation \begin{equation} P(t-L>x)=P(N(x)=0). \end{equation} This leads to the correct answer, but I think the relation cannot be true, because $P(N(x)=0)$ doesn't carry any information about $t$! Actually, we know that $t-L>x$ implies $N(x)=0$, but the reverse is not obvious. So all we can say is $P(t-L>x)\le P(N(x)=0)$. The purpose of this discussion is to find $E[t-L]$ from the distribution of $L$ or of $t-L$!
As I mentioned in comments, showing what minimizes $\sum (x\_i-\alpha)^2$ can be done in several ways, such as by simple calculus, or by writing $\sum (x\_i-\alpha)^2=\sum (x\_i-\bar{x}+\bar{x}-\alpha)^2$. Let's look at the second one: $\sum (x\_i-\alpha)^2=\sum (x\_i-\bar{x}+\bar{x}-\alpha)^2$ $\hspace{2.55cm}=\sum (x\_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2+2\sum(x\_i-\bar{x})(\bar{x}-\alpha)$ $\hspace{2.55cm}=\sum (x\_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2+2(\bar{x}-\alpha)\sum(x\_i-\bar{x})$ $\hspace{2.55cm}=\sum (x\_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2+2(\bar{x}-\alpha)\cdot 0$ $\hspace{2.55cm}=\sum (x\_i-\bar{x})^2+\sum(\bar{x}-\alpha)^2$ Now the first term is unaltered by the choice of $\alpha$ and the last term can be made zero by setting $\alpha=\bar{x}$; any other choice leads to a larger value of the second term. Hence that expression is minimized by setting $\alpha=\bar{x}$.
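For a quick numerical sanity check of this, here is a small Python sketch (the sample values are arbitrary): evaluating $\sum (x\_i-\alpha)^2$ over a grid of candidate $\alpha$ values puts the minimum at $\bar{x}$.

```
import numpy as np

x = np.array([2.0, 3.5, 1.0, 4.0, 2.5])          # arbitrary sample
alphas = np.linspace(x.min() - 1, x.max() + 1, 10001)

# sum of squared deviations for every candidate alpha
sse = ((x[:, None] - alphas[None, :]) ** 2).sum(axis=0)

print(alphas[np.argmin(sse)], x.mean())          # agree up to the grid resolution
```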
Suppose $L\_1$ is a regular language and $L\_2$ a non-regular one, then: is $L\_1\setminus L\_2$ REGULAR/NON REGULAR/BOTH OF THEM? is $L\_2\setminus L\_1$ REGULAR/NON REGULAR/BOTH OF THEM?
First, we know that $L$ is regular if and only if its complement is regular. On the other hand, $$L\_1\setminus L\_2=L\_1\cap L\_2^c.$$ Suppose $\Sigma=\{a,b\}$. Let $L\_1=\Sigma^\*$ and $L\_2=\Sigma^\*\setminus \{a^nb^n\}$; obviously $L\_2$ isn't regular, and $$L\_1\setminus L\_2=\{a^nb^n\},$$ so $L\_1\setminus L\_2$ can be non-regular. Now let $L\_1=\emptyset$ and $L\_2$ be any non-regular language, so $$L\_1\setminus L\_2=\emptyset;$$ consequently, $L\_1\setminus L\_2$ can also be regular. For the second proposition, let $L\_1=\emptyset$ and $L\_2$ be a non-regular language, so $L\_2\setminus L\_1=L\_2$ is non-regular; and if we set $L\_1=\Sigma^\*$ and let $L\_2$ be a non-regular language, then $L\_2\setminus L\_1=\emptyset$, which shows that $L\_2\setminus L\_1$ can be regular. **Note that the difference between a regular language and a non-regular one, in either order, can be regular or not.**
The reverse inclusion is obvious, as is the fact that any self-reducible NP language in BPP is also in RP. Is this also known to hold for non-self-reducible NP languages?
As with most questions in complexity, I'm not sure there will be a full answer for a very long time. But we can at least show that the answer is non-relativizing: there is an oracle relative to which inequality holds and one relative to which equality holds. It's fairly easy to give an oracle relative to which the classes are equal: any oracle which has $\mathrm{BPP} = \mathrm{RP}$ will work (eg any oracle relative to which "randomness doesn't help much"), as will any oracle which has $\mathrm{NP} \subseteq \mathrm{BPP}$ (eg any oracle relative to which "randomness helps a lot"). There are a lot of these, so I won't bother with the specifics. It's somewhat more challenging, though still fairly straightforward, to design an oracle relative to which we get $\mathrm{RP} \subsetneq \mathrm{BPP} \cap \mathrm{NP}$. The construction below actually does a bit better: for any constant $c$, there is an oracle relative to which there is a language in $\mathrm{coRP} \cap \mathrm{UP}$ which is not in $\mathrm{RPTIME}[2^{n^c}]$. I'll outline it below. We'll design an oracle $A$ that contains strings of the form $(x,b,z)$, where $x$ is an $n$-bit string, $b$ is a single bit, and $z$ is a bit string of length $2n^c$. We will also give a language $L^A$ which will be decided by a $\mathrm{coRP}$ machine and a $\mathrm{UP}$ machine as follows: * The $\mathrm{coRP}$ machine, on input $x$, guesses $z$ of length $2|x|^c$ randomly, queries $(x,\mathtt{0},z)$, and copies the answer. * The $\mathrm{UP}$ machine, on input $x$, guesses $z$ of length $2|x|^c$, queries $(x,\mathtt{1},z)$, and copies the answer. To make the above-specified machines actually meet their promises, we need $A$ to satisfy some properties. For every $x$, one of these two options must be the case: * **Option 1:** At most half of $z$ choices have $(x,\mathtt{0},z) \in A$ *and* zero $z$ choices have $(x,\mathtt{1},z) \in A$. (In this case, $x \not\in L^A$.) * **Option 2:** Every $z$ choice has $(x,\mathtt{0},z) \in A$ *and* precisely one $z$ choice has $(x,\mathtt{1},z) \in A$. (In this case, $x \in L^A$.) Our aim will be to specify $A$ satisfying these promises so that $L^A$ diagonalizes against every $\mathrm{RPTIME}[2^{n^c}]$ machine. To try to keep this already long answer short, I'll drop the oracle construction machinery and a lot of the unimportant details, and explain how to diagonalize against a particular machine. Fix $M$ a randomized Turing machine, and let $x$ be an input so that we have full control over the selection of $b$'s and $z$'s so that $(x,b,z) \in A$. We will break $M$ on $x$. * **Case 1:** Suppose there is a way to select the $z$'s so that $A$ satisfies the first option of its promise, and $M$ has a choice of randomness which accepts. Then we will commit $A$ to this selection. Then $M$ cannot simultaneously satisfy the $\mathrm{RP}$ promise and reject $x$. Nevertheless, $x \not\in L^A$. So we have diagonalized against $M$. * **Case 2:** Next, assume that the previous case did not work out. We will now show that then $M$ can be forced either to break the $\mathrm{RP}$ promise or to reject on some choice of $A$ satisfying the second option of its promise. This diagonalizes against $M$. We will do this in two steps: 1. Show that for every fixed choice $r$ of $M$'s random bits, $M$ must reject when all of its queries of the form $(x,\mathtt{0},z)$ are in $A$ and all of its queries of the form $(x,\mathtt{1},z)$ are not in $A$. 2. 
Show that we can flip an answer $(x,\mathtt{1},z)$ of $A$ for some choice of $z$ without affecting the acceptance probability of $M$ by much.

Indeed, if we start with $A$ from step 1, $M$'s acceptance probability is zero. $A$ doesn't quite satisfy the second option of its promise, but we can then flip a single bit as in step 2 and it will. Since flipping the bit causes $M$'s acceptance probability to stay near zero, it follows that $M$ cannot simultaneously accept $x$ and satisfy the $\mathrm{RP}$ promise. It remains to argue the two steps in Case 2:

1. Fix a choice of random bits $r$ for $M$. Now simulate $M$ using $r$ as the randomness and answering the queries so that $(x,\mathtt{0},z) \in A$ and $(x,\mathtt{1},z) \not\in A$. Observe that $M$ makes at most $2^{n^c}$ queries. Since there are $2^{2n^c}$ choices of $z$, we can fix the unqueried choices of $z$ to have $(x,\mathtt{0},z) \not\in A$, and have $A$ still satisfy the first option of its promise. Since we couldn't make Case 1 work for $M$, this means $M$ must reject on all its choices of randomness relative to $A$, and in particular on $r$. It follows that if we select $A$ to have $(x,\mathtt{0},z) \in A$ and $(x,\mathtt{1},z) \not\in A$ for every choice of $z$, then for every choice of random bits $r$, $M$ rejects relative to $A$.
2. Suppose that for every $z$, the fraction of random bits for which $M$ queries $(x,\mathtt{1},z)$ is at least $1/2$. Then the total number of queries is at least $2^{2n^c} 2^{2^{n^c}}/2$. On the other hand, $M$ makes at most $2^{2^{n^c}} 2^{n^c}$ queries across all its branches, a contradiction. Hence there is a choice of $z$ so that the fraction of random bits for which $M$ queries $(x,\mathtt{1},z)$ is less than $1/2$. Flipping the value of $A$ on this string therefore affects the acceptance probability of $M$ by less than $1/2$.
I just read the "[Is integer factorization an NP-complete problem?](https://cstheory.stackexchange.com/q/159/1800)" question ... so I decided to spend some of my reputation :-) asking another question $Q$ having $P(\text{Q is trivial}) \approx 1$: > If $A$ is an oracle that solves integer factorization, what is the power of $P^A$? > I think it makes RSA-based public-key cryptography insecure ... but apart from this, are there other remarkable results?
Obviously any decision problem that can be reduced to factoring can be solved with a factoring oracle. But since we're given the ability to make multiple queries, I tried to think of a non-trivial problem for which one would want to make multiple queries. The problem of computing the Euler totient function seems like such a problem. I don't know how to solve the decision version of this problem by a Karp-reduction to the decision version of factoring. But with Turing reductions, it's easy to reduce this to factoring.
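To make that Turing reduction concrete, here is a small sketch (Python). The `factor` function below is only a stand-in for the factoring oracle, implemented by trial division so the snippet actually runs; the point is that once the oracle hands you the prime factorization, the totient follows from $\varphi(n) = n\prod\_{p \mid n}(1 - 1/p)$ with a handful of oracle calls.

```
from collections import Counter

def factor(n):
    """Stand-in for the factoring oracle: return the prime factorization of n
    as a Counter {prime: exponent}. Trial division is used here only so that
    the sketch is runnable; the oracle would do this work for us."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def totient(n):
    # phi(n) = n * prod_{p | n} (1 - 1/p), computed with integer arithmetic
    phi = n
    for p in factor(n):
        phi = phi // p * (p - 1)
    return phi

print(totient(36))   # 12
print(totient(97))   # 96 (97 is prime)
```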
I am having trouble describing the specific role of the Turing machine. Can it solve all the algorithms that a digital computer (i.e., today's PCs) can solve?
The Turing machine is a theoretical computational model which is studied in undergraduate courses due to its simplicity and for historical reasons. Historically, the Turing machine was the first widely accepted definition of computation, and for many years, it was arguably the simplest definition to explain. Nowadays, however, we can define computation equivalently using our favorite programming language – one of the wonders of computability theory is that there are many models of computation which are completely equivalent in power, as long as we don't care too much about time and space usage. Turing machines are also polynomially equivalent to RAM machines, and so to imperative languages, and this means that you can define the classes P and NP using either Turing machines or (say) C programs, and obtain completely equivalent definitions. Turing machines have the advantage that their semantics are very simple to state and analyze. While it is still a big mess to construct a universal Turing machine (in this sense Turing machines are not better than C programs), other basic theorems lend themselves to an easy proof using the Turing machine model. The most obvious example is the Cook–Levin theorem, which states that SAT is NP-complete. Complexity classes with limited resources also have relatively simple definitions using Turing machines (for example, logspace), whereas a definition using C programs would have to be somewhat subtler. Modern computers are based not on the Turing machine but instead on ideas of von Neumann. In this sense the Turing machine is not a realistic model, and generally speaking the RAM model should be prefered. However, the RAM model has several different variants which are *not* equivalent, whereas Turing machines are more standardized, with different variants being mostly essentially equivalent (even when time and space usage are measured).
I have a sample size of 6. In such a case, does it make sense to test for normality using the Kolmogorov-Smirnov test? I used SPSS. I have a very small sample size because each observation takes time to obtain. If it doesn't make sense, what is the smallest sample size for which the test makes sense? *Note:* I did an experiment related to source code. The sample is the time spent coding in one version of the software **(version A)**. I actually have another sample of size 6, which is the time spent coding in **another** version of the software **(version B)**. I would like to do hypothesis testing using a **one-sample t-test** to test whether the time spent in code version A differs from the time spent in code version B (this is my H1). The precondition of the one-sample t-test is that the data to be tested have to be normally distributed. That is why I need to test for normality.
As @whuber asked in the comments, here is a validation for my categorical NO (edit: with the Shapiro test, as the one-sample KS test is in fact wrongly used here). Whuber is correct: for correct use of the Kolmogorov-Smirnov test, you have to specify the distributional parameters rather than extract them from the data. That is, however, what statistical packages like SPSS do for a one-sample KS test.

You are trying to say something about the distribution, and you want to check whether you can apply a t-test. So this test is done to **confirm** that the data does not depart from normality significantly enough to make the underlying assumptions of the analysis invalid. Hence, you are not interested in the type I error, but in the type II error.

Now one has to define "significantly different" to be able to calculate the minimum n for acceptable power (say 0.8). With distributions, that's not straightforward to define. Hence, I didn't answer that part of the question, as I can't give a sensible answer apart from the rule of thumb I use: n > 15 and n < 50. Based on what? Gut feeling, basically, so I can't defend that choice apart from experience. But I do know that with only 6 values your type II error is bound to be almost 1, making your power close to 0. With 6 observations, the Shapiro test cannot distinguish between a normal, Poisson, uniform, or even exponential distribution. With a type II error of almost 1, your test result is meaningless.

To illustrate normality testing with the Shapiro test:

```
shapiro.test(rnorm(6))      # test the normal distribution
shapiro.test(rpois(6,4))    # test a poisson distribution
shapiro.test(runif(6,1,10)) # test a uniform distribution
shapiro.test(rexp(6,2))     # test an exponential distribution
shapiro.test(rlnorm(6))     # test a log-normal distribution
```

The only one where about half of the p-values are smaller than 0.05 is the last one, which is also the most extreme case.

---

If you want to find out the minimum n that gives you a power you like with the Shapiro test, you can do a simulation like this:

```
results <- sapply(5:50,function(i){
  p.value <- replicate(100,{
    y <- rexp(i,2)
    shapiro.test(y)$p.value
  })
  pow <- sum(p.value < 0.05)/100
  c(i,pow)
})
```

which gives you a power analysis like this:

![enter image description here](https://i.stack.imgur.com/4JAeJ.png)

from which I conclude that you need roughly a minimum of 20 values to distinguish an exponential from a normal distribution in 80% of the cases.

Code for the plot:

```
plot(lowess(results[2,]~results[1,],f=1/6),type="l",col="red",
     main="Power simulation for exponential distribution",
     xlab="n",
     ylab="power"
)
```
Can epsilon be in the input alphabet of an FST?
The alphabet $\Sigma$ of an automaton can be *any* nonempty set of finite symbols. Surely Greek-speakers would be upset if we forbade $\{\alpha, \beta, \dots, \epsilon, \dots, \omega\}$ as a valid alphabet! If you want, you can use $\{\clubsuit,\diamondsuit,\heartsuit,\spadesuit\}$ as your alphabet. Or $\{A, \alpha, \clubsuit,8\}$. Or any other finite, nonempty set. However, certain symbols have special meanings and we tend not to use them in automaton alphabets. For example, if you want your alphabet to consist of the symbols $\}$ and $\{$, things get a bit awkward: you start writing $\Sigma = \{\},\{\}$ and, er, yeah, that doesn't quite work. Likewise, it's confusing if $\Sigma$ contains symbols such as ${}^\*$, $($, $)$, $+$ and so on: you can easily write $\Sigma = \{{}^\*, (, ), +\}$ but, now, when you try to write regular expressions over that alphabet, it's impossible to tell which characters in the regular expression are symbols from $\Sigma$ and which are operators in the regular expression. $\epsilon$ fits into this second class. When $\epsilon$ isn't in the alphabet, we use that symbol to denote the empty string. As such, if you include $\epsilon\in\Sigma$, it becomes unclear whether writing "$\epsilon$" means "the empty string" or "the string containing one symbol, which is the Greek equivalent of 'e'". So, as a practical matter, we usually prefer not to use $\epsilon$ as a symbol in the alphabet. If you do need to include $\epsilon$ in your alphabet, you should use some other symbol to denote the empty string and you should say so. Similar reasoning applies to all other symbols that could be confused in this and other ways.
If I have 39% of students at a school that exhibit a specific, objective, measurable behavior, can I extrapolate this and say that any student at that school has a 39% chance of exhibiting that behavior?
Logistic regression was invented by statistician DR Cox in 1958 and so predates the field of machine learning. Logistic regression is *not* a classification method, thank goodness. It is a direct probability model. If you think that an algorithm has to have two phases (initial guess, then "correct" the prediction "errors") consider this: Logistic regression gets it right the first time. That is, in the space of additive (in the logit) models. Logistic regression is a direct competitor of many machine learning methods and outperforms many of them when predictors mainly act additively (or when subject matter knowledge correctly pre-specifies interactions). Some call logistic regression a type of machine learning but most would not. You could call some machine learning methods (neural networks are examples) statistical models.
A long time ago I read a newspaper article where a professor of some sort said that in the future we will be able to compress data to just two bits (or something like that). This is of course not correct (and it could be that my memory of what he exactly stated is not correct). Understandably it would not be practical to compress ***any*** string of 0's and 1's to just two bits because (even if it was technically possible), too many different kind of strings would end up compressing to the same two bits (since we only have '01' and '10' to choose from). Anyway, this got me thinking about the feasibility of compressing an arbitrary length string of 0's and 1's according to some scheme. For this kind of string, is there a known relationship between the string length (ratio between 0's and 1's probably does not matter) and maximum compression? In other words, is there a way to determine what is the minimum (smallest possible) length that a string of 0's and 1's can be compressed to? (Here I am interested in the mathematical maximum compression, not what is currently technically possible.)
[Kolmogorov complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity) is one approach for formalizing this mathematically. Unfortunately, computing the Kolmogorov complexity of a string is an uncomputable problem. See also: [Approximating the Kolmogorov complexity](https://cs.stackexchange.com/q/3501/755). It's possible to get better results if you analyze the *source* of the string rather than *the string itself*. In other words, often the source can be modelled as a probabilistic process, that randomly chooses a string somehow, according to some distribution. The entropy of that distribution then tells you the mathematically best possible compression (up to some small additive constant). --- On the impossibility of perfect compression, you might also be interested in the following. * [No compression algorithm can compress all input messages?](https://cs.stackexchange.com/q/7531/755) * [Compression functions are only practical because "The bit strings which occur in practice are far from random"?](https://cs.stackexchange.com/q/40684/755) * [Is there any theoretically proven optimal compression algorithm?](https://cs.stackexchange.com/q/3316/755)
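Returning to the entropy bound mentioned above, here is a concrete illustration (a small Python sketch; the i.i.d. Bernoulli source is an assumption made just for this example): a source that emits 0/1 symbols independently with $P(1)=p$ cannot be compressed below $H(p) = -p\log\_2 p - (1-p)\log\_2(1-p)$ bits per symbol on average.

```
import math

def bernoulli_entropy(p):
    """Shannon entropy (bits per symbol) of an i.i.d. Bernoulli(p) bit source."""
    if p in (0.0, 1.0):
        return 0.0                     # a constant source carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n = 1_000_000                          # length of the bit string
for p in (0.5, 0.9, 0.99):
    h = bernoulli_entropy(p)
    print(f"p={p}: {h:.3f} bits/symbol, best average size ~{h * n:,.0f} bits for {n:,} input bits")
```

For p = 0.5 the bound is 1 bit per symbol, i.e. a uniformly random bit string is incompressible on average.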
So I currently have a text pattern detection challenge to solve at work. I am trying to make an outlier detection algorithm for a database, for string columns. For example, let's say I have the following list of strings:

```py
["abc123", "jkj577", "lkj123", "uio324", "123123"]
```

I want to develop an algorithm that detects common patterns in the list of strings and then indicates which strings are not in this format. For example, in the list above, I would like the algorithm to detect the following regular expression:

```py
r"[a-z]{3}\d{3}"
```

given that the majority of the entries in the list obey this pattern, except the last one, which should be marked as an outlier. The first idea that came to my mind was to use a genetic algorithm to find the regular expression pattern, where the fitness function is the number of entries in the list that match the pattern. I haven't worked out the details (crossover function, etc.), and there is already one difficulty: the pattern ".\*" matches everything, and hence always maximizes the fitness function. Has anybody already worked on a similar problem? What are my options here? Thank you!
The problem you face is part of what is called in literature [*grammar learning* or *grammar inference*](https://en.wikipedia.org/wiki/Grammar_induction) which is part of both Natural Language Processing and Machine Learning and in general is a very difficult problem. However for certain cases like [regular grammars/languages (ie learning regular expressions / DFA learning)](https://en.wikipedia.org/wiki/Induction_of_regular_languages) there are satisfactory solutions up to limitations. A survey and references on grammar inference and inference of regular grammars: [Learning DFA from Simple Examples](https://faculty.ist.psu.edu/vhonavar/Papers/parekh-dfa.pdf) > > Efficient learning of DFA is a challenging research problem in > grammatical inference. It is known that both exact and approximate > (in the PAC sense) identifiability of DFA is hard. Pitt, in his > seminal paper posed the following open research problem:“Are DFA > PAC-identifiable if examples are drawn from the uniform distribution, > or some other known simple distribution?”. We demonstrate that the > class of simple DFA (i.e., DFA whose canonical representations have > logarithmic Kolmogorov complexity) is efficiently PAC learnable > under the Solomonoff Levin universal distribution. We prove > that if the examples are sampled at random according to the > universal distribution by a teacher that is knowledgeable about the > target concept, the entire class of DFA is efficiently PAC learnable > under the universal distribution. Thus, we show that DFA are > efficiently learnable under the PACS model. Further, we prove > that any concept that is learnable under Gold’s model for > learning from characteristic samples, Goldman and Mathias’ > polynomial teachability model, and the model for learning from > example based queries is also learnable under the PACS model > > > [An $O(n^2)$ Algorithm for Constructing Minimal Cover Automata for Finite Languages](http://www.cs.smu.ca/%7Enic.santean/art/algorithm.pdf) > > Cover automata were introduced in [1] as an ecient representation of > finite languages. In [1], an algorithm was given to transforma DFA > that accepts a finite language to a minimal deterministic finite > cover automaton (DFCA) with the time complexity $O(n^4)$, where $n$ is > the number of states of the given DFA. In this paper, we introduce a > new efficient transformation algorithm with the time complexity > $O(n^2)$, which is a significant improvement from the previous > algorithm. > > > There are even libraries implementing algorithms for grammar-inference and DFA learning: 1. [libalf](http://libalf.informatik.rwth-aachen.de/) 2. [gitoolbox for Matlab](https://code.google.com/p/gitoolbox/) *source: [stackoverflow](https://stackoverflow.com/questions/15512918/grammatical-inference-of-regular-expressions-for-given-finite-list-of-representa)*
I've just started Wasserman's *All of Statistics* and he starts by saying: "The sample space $\Omega$, is the set of possible outcomes of an experiment. Points $\omega$ in $\Omega$ are called sample outcomes or realizations. Events are subsets of $\Omega$." So sample outcomes are just the one-element subsets of $\Omega$, while events are all subsets of $\Omega$?
Most posteriors prove to be difficult to optimize analytically (i.e. by taking a gradient and setting it equal to zero), and you'll need to resort to some numerical optimization algorithm to do MAP. As an aside: MCMC is unrelated to MAP. MAP - for *maximum a posteriori* - refers to finding a local maximum of something proportional to a posterior density and using the corresponding parameter values as estimates. It is defined as $$ \hat{\theta}\_{MAP} = \text{argmax}\_{\theta} \, p(\theta \, | \, D) $$ MCMC is typically used to *approximate expectations* over something proportional to a probability density. In the case of a posterior, that's $$ \hat{\theta}\_{MCMC} = n^{-1} \sum\_{i=1}^{n} \theta^{0}\_{i} \approx \int\_{\Theta}\theta \, p(\theta \, | \, D)d\theta $$ where $\{\theta^{0}\_{i}\}^{n}\_{i=1}$ is a collection of parameter space positions visited by a suitable Markov chain. In general, $\hat{\theta}\_{MAP} \neq \hat{\theta}\_{MCMC}$ in any meaningful sense. The crux is that MAP involves *optimization*, while MCMC is based around *sampling*.
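To make the contrast concrete, here is a small sketch (Python/NumPy/SciPy; the Beta-shaped toy posterior and the random-walk Metropolis settings are assumptions for illustration only): MAP numerically maximizes the log-posterior, while the MCMC estimate averages samples from it, and the two generally differ.

```
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Toy unnormalized log-posterior with a Beta(3, 2) shape:
# the true mode is 2/3 and the true mean is 0.6.
def log_post(theta):
    return 2.0 * np.log(theta) + 1.0 * np.log(1.0 - theta)

# MAP: optimize the log-posterior.
res = minimize_scalar(lambda t: -log_post(t), bounds=(1e-9, 1 - 1e-9), method="bounded")
theta_map = res.x

# MCMC: random-walk Metropolis, then average the samples.
theta, samples = 0.5, []
for _ in range(50_000):
    prop = theta + rng.normal(0, 0.1)
    if 0 < prop < 1 and np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
theta_mcmc = np.mean(samples[5_000:])   # drop burn-in

print(theta_map, theta_mcmc)   # ~0.667 (the mode) vs ~0.60 (the mean): not the same quantity
```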
I remember having encountered references to problems that have been proven to be solvable with a particular complexity, but with no known algorithm that actually achieves this complexity. I struggle to wrap my mind around how this can be the case, and what a non-constructive proof of the existence of an algorithm would look like. Do such problems actually exist? Do they have a lot of practical value?
Some early results from late 80s: * Fellows and Langston, "[Nonconstructive tools for proving polynomial-time decidability](http://dl.acm.org/citation.cfm?doid=44483.44491)", 1988 * Brown, Fellows, Langston, "[Polynomial-time self-reducibility: theoretical motivations and practical results](http://www.tandfonline.com/doi/abs/10.1080/00207168908803783)", 1989 From the abstract of the second item: > > Recent fundamental advances in graph theory, however, have made available powerful new nonconstructive tools that can be applied to guarantee membership in P. These tools are nonconstructive at two distinct levels: they neither produce the decision algorithm, establishing only the finiteness of an obstruction set, nor do they reveal whether such a decision algorithm can be of any aid in the construction of a solution. We briefly review and illustrate the use of these tools, and discuss the seemingly formidable task of finding the promised polynomial-time decision algorithms when these new tools apply. > > >
For a binary search tree (BST) the inorder traversal is always in ascending order. **Is the converse also true?**
Yes. The proof is straightforward. Assume you have a tree with a sorted in-order traversal, and assume the tree is not a BST. This means there must exist at least one node which breaks the BST assumption; let's call this node $v$. Now, there are two ways $v$ could break the BST assumption. One way is if there's a node in $v$'s left subtree with label greater than $v$. The other way is if there's a node in $v$'s right subtree with label less than $v$. But everything in $v$'s left subtree must come before it in the in-order traversal, and everything in its right subtree must come after it. And we assumed that the in-order traversal is sorted. Thus, there's a contradiction, and such a tree cannot exist. (*EDIT:* As RemcoGerlich points out, this is only true if your tree is known to be binary. But if it's not binary, an in-order traversal isn't defined, afaik.)
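For completeness, here is a small sketch (Python; the `Node` class and the example trees are assumed just for illustration) that uses exactly this fact: a binary tree is a BST if and only if its in-order traversal is sorted.

```
class Node:
    def __init__(self, label, left=None, right=None):
        self.label, self.left, self.right = label, left, right

def inorder(node):
    """Yield the labels of a binary tree in in-order (left, node, right)."""
    if node is not None:
        yield from inorder(node.left)
        yield node.label
        yield from inorder(node.right)

def is_bst(root):
    """A binary tree is a BST iff its in-order traversal is non-decreasing."""
    labels = list(inorder(root))
    return all(a <= b for a, b in zip(labels, labels[1:]))

good = Node(2, Node(1), Node(3))        # valid BST
bad  = Node(2, Node(3), Node(1))        # violates the BST property
print(is_bst(good), is_bst(bad))        # True False
```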
How can I transform my target variable (**Y**)? As it is a list, I can't use it for fitting the model, because I must use integers for fitting.
I'm fairly new to Pandas/Python but have 20+ years as a SQL Server DBA, architect, administrator, etc. I love Pandas and I'm pushing myself to always try to make things work in Pandas before returning to my comfy, cozy SQL world.

**Why RDBMS's are Better:** The advantage of RDBMS's is their years of experience optimizing query speed and data read operations. What's impressive is that they can do this while simultaneously balancing the need to optimize write speed and manage highly concurrent access. Sometimes these additional overheads tilt the advantage to Pandas when it comes to simple, single-user use cases. But even then, a seasoned DBA can tune a database to be highly optimized for read speed over write speed. DBA's can take advantage of things like optimizing data storage, strategic disk page sizing, page filling/padding, data controller and disk partitioning strategies, optimized I/O plans, in-memory data pinning, pre-defined execution plans, indexing, data compression, and many more. I get the impression from many Pandas developers that they don't understand the depth that's available there. What I think usually happens is that if a Pandas developer never has data that's big enough to need these optimizations, they don't appreciate how much time they can save you out of the box. The RDBMS world has 30 years of experience optimizing this, so if raw speed on large datasets is needed, RDBMS's can't be beat.

**Why Is Python/Pandas Better:** That said, speed isn't everything and in many use cases isn't the driving factor. It depends on how you're using the data, whether it's shared, and whether you care about the speed of the processing. RDBMS's are generally more rigid in their data structures and put a burden on the developer to be more deterministic with data shapes. Pandas lets you be more loose here. Also, and this is my favorite reason, you're in a true programming language. Programming languages give you infinitely more flexibility to apply advanced logic to the data. Of course there's also the rich ecosystem of modules and 3rd-party frameworks that SQL can't come close to. Being able to go from raw data all the way to web presentation or data visualization in one code base is VERY convenient. It's also much more portable. You can run Python almost anywhere, including public notebooks that can extend the reach of your results to get to people more quickly. Databases don't excel at this.

**My Advice?** If you find yourself graduating to bigger and bigger datasets, you owe it to yourself to take the plunge and learn how RDBMS's can help. I've seen million-row, multi-table-join, summed aggregate queries tuned from 5 minutes down to 2 seconds. Having this understanding in your tool belt just makes you a more well-rounded data scientist. You may be able to do everything in Pandas today, but someday you may have an assignment where an RDBMS is the best choice.
Let's use the Traveling Salesman Problem as the example, unless you think there's a simpler, more understandable one. My understanding of the P=NP question is that, given the optimal solution of a difficult problem, it's easy to check the answer, but very difficult to find the solution. With the Traveling Salesman Problem, given the shortest route, it seems just as hard to determine that it's the shortest route, because you have to calculate every route to ensure that solution is optimal. That doesn't make sense. So what am I missing? I imagine lots of other people encounter a similar error in their understanding as they learn about this.
Your version of the TSP is actually NP-hard, exactly for the reasons you state. It is hard to check that it is the correct solution. The version of the TSP that is NP-complete is the decision version of the problem (quoting Wikipedia): > > [The decision version of the TSP (where given a length L, the task is to decide whether the graph has a tour of at most L) belongs to the class of NP-complete problems.](https://en.wikipedia.org/wiki/Travelling_salesman_problem) > > > In other words, instead of asking "What is the shortest possible route through the TSP graph?", we're asking "Is there a route through the TSP graph that fits within my budget?".
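To see why the decision version is easy to verify, here is a small sketch (Python; the distance matrix and budget are made-up numbers): given a proposed tour as a certificate, checking that it visits every city exactly once and has total length at most L takes only polynomial time, with no need to examine any other routes.

```
def verify_tour(dist, tour, budget):
    """Check in polynomial time that `tour` visits every city exactly once
    and that its total length (returning to the start) is <= budget."""
    n = len(dist)
    if sorted(tour) != list(range(n)):
        return False
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return length <= budget

# Hypothetical 4-city instance (symmetric distance matrix).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(verify_tour(dist, [0, 1, 3, 2], budget=25))  # 2 + 4 + 8 + 9 = 23 <= 25 -> True
print(verify_tour(dist, [0, 2, 1, 3], budget=25))  # 9 + 6 + 4 + 10 = 29 > 25 -> False
```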
When I do my laundry I tend to make a pile of unmatched socks, putting new socks on the top of the pile and matching off pairs if two of the same sock are near the top of the stack. Since eventually socks will get buried deep in the pile I occasionally dump some of the sock pile back into the laundry pile. I started to wonder if there was an efficient way to choose when and how I return socks from the sock pile to the laundry pile. So I made up a formalism. We have two collections of socks, the first one $L$ represents the laundry pile and the second one $S$ represents the sock pile. We have perfect knowledge of the contents of both collections. We then have three actions: * Move the top sock from $S$ to $L$ * Move a random sock from $L$ to the top of $S$ * Remove the top two socks of $S$ iff they match. (*Make a pairing*) Each sock has exactly one match and at the beginning of execution all the socks are in $L$. Our goal is to empty both $L$ and $S$ so that all of the socks have been matched off in as little time as possible. I want to measure the efficiency of an algorithm as expected number of performed operations, as a function of the number $n$ of socks. What is the most efficient algorithm for this task? What is its asymptotic expected number of operations? --- ### My Algorithm Here's the best algorithm I was able to come up with. In the following, it should go without saying that if you ever encounter a pair on the top of $L$ you should remove it. We start with phase one. In phase one we will count the number of complete pairs in $L$ if there are any pairs in $L$ we will move an sock from $L$ to $S$, if there are none we will move an sock from $S$ to $L$. We repeat this process until there are exactly three socks, two of them constituting a pair, in $L$, then we begin phase two. In phase two we move one sock from $L$ to $S$ if it is not in the pair, we move the last two socks of $L$ to $S$ creating a pair, if it is in the pair we have two socks left in $L$ one that matches the top and one that does not. We keep moving socks from $L$ to $S$ moving them back if we do not create a pair. Once we have created a pair we move back to phase one. --- The idea for this question is similar to [this](https://cs.stackexchange.com/questions/16133/sock-matching-algorithm) question, however the actual models for sock matching are radically different.
Suppose there are $n$ socks, of $m$ types. It's easy to pair off all the socks with $O(n^2)$ expected running time. I will show an algorithm that achieves $O(nm)$ expected running time. I don't know whether this is optimal. Notation ======== Let $T$ denote the set of types of the socks, so $m=|T|$. I'll assume two socks can be paired iff they're of the same type. Let $p(t)$ denote the probability that, if you pick a sock uniformly at random from all of the available socks, its type will be $t$. Algorithm ========= Here's the algorithm. Define $t^\*$ to be the type of the most common sock (i.e., with maximal $p(\cdot)$ value). Start with an empty stack. Draw a random sock from $L$; if it isn't of type $t^\*$, throw it back and repeat until the stack contains a single sock of type $t^\*$. Now do that again: draw a random sock from $L$; if it isn't of type $t^\*$, throw it back and repeat until the stack contains two socks of type $t^\*$. Remove the pair. Now you have $n-2$ socks; recurse. What's the expected running time of this algorithm? It takes $1/p(t^\*)$ draws until you see the first sock of type $t^\*$ (multiply by two, to take into account throwing back the ones you didn't want). Then it takes another $1/p(t^\*)$ draws to get the second sock of type $t^\*$ (again, multiply by two for the same reason). So, it will take about $4/p(t^\*)$ draws to find one pair. How large could this be? In other words, how small could $p(t^\*)$ be? Well, it's easy to see that $p(t^\*) \ge 1/m$. Consequently, after at most $4m$ draws, we have removed one pair. We repeat until we've found all pairs, i.e., $n/2$ times. (At each stage, the number of types can only decrease so the value of $p(t^\*)$ can only increase.) So, the total number of operations is at most $4m \times n/2 = O(nm)$. --- Possible direction for improvement ================================== If you wanted to improve further, you could try experimenting with the following idea. Unfortunately, I don't know how to analyze its worst-case running time in terms of $n$ and $m$. Pick a threshold $q$. Define $T\_q = \{t \in T : p(t) \ge q\}$. Now replace the algorithm above with one that repeatedly draws from $L$ and throws back until the stack contains a single sock whose type is in $T\_q$; then repeatedly draw and throw back until you find another sock of the same type. Once you find a match, pair them off and recurse. How long will it take to find the first match? It will take $1/\sum\_{t \in T\_q} p(t)$ iterations to find the first sock, and (crudely) $\le 1/q$ iterations to find the second stock. More precisely, the total number of iterations will be (after some manipulation) $${1+|T\_q| \over \sum\_{t \in T\_q} p(t)}$$ You can now find the $q$ that minimizes that value, then apply the strategy above. I don't know whether this leads to any improvement in asymptotic running time, as a function of $n$ and $m$.
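If it helps, here is a small simulation sketch of the $O(nm)$ algorithm above (Python; the operation accounting and the one-pair-per-type test pile are assumptions of this toy model): it repeatedly recomputes the most common remaining type, draws random socks from $L$, throws back anything that does not match, and counts the moves.

```
import random
from collections import Counter

def pair_all(sock_types, rng=random):
    """Simulate the 'most common type first' strategy and count operations.
    sock_types: list in which each type appears an even number of times."""
    L = list(sock_types)
    ops = 0
    while L:
        target = Counter(L).most_common(1)[0][0]   # t*, the most common type
        stack = []
        while len(stack) < 2:
            i = rng.randrange(len(L))
            sock = L.pop(i)                        # move a random sock from L onto S
            stack.append(sock)
            ops += 1
            if sock != target:
                L.append(stack.pop())              # throw it back (top of S -> L)
                ops += 1
        ops += 1                                   # remove the matching pair from S
    return ops

rng = random.Random(0)
socks = [t for t in range(50) for _ in range(2)]   # 100 socks, 50 types, one pair each
print(pair_all(socks, rng))
```

With one pair per type ($m = n/2$), the counted operations grow roughly quadratically in $n$, matching the $O(nm)$ bound above.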
I am analyzing a dataset in Python for strictly learning purpose. In the code below that I wrote, I am getting some errors which I cannot get rid off. Here is the code first: ``` plt.plot(decade_mean.index, decade_mean.values, 'o-',color='r',lw=3,label = 'Decade Average') plt.scatter(movieDF.year, movieDF.rating, color='k', alpha = 0.3, lw=2) plt.xlabel('Year') plt.ylabel('Rating') remove_border() ``` I am getting the following errors: ``` 1. TypeError: 'str' object is not callable 2. NameError: name 'remove_border' is not defined ``` Also, the label='Decade Average' is not showing up in the plot. What confuses me most is the fact that in a separate code snippet for plots (see below), I didn't get the 1st error above, although `remove_border` was still a problem. ``` plt.hist(movieDF.rating, bins = 5, color = 'blue', alpha = 0.3) plt.xlabel('Rating') ``` Any explanations of all or some of the errors would be greatly appreciated. Thanks Following the comments, I am posting the data and the traceback below: decade\_mean is given below. ``` year 1970 8.925000 1980 8.650000 1990 8.615789 2000 8.378947 2010 8.233333 Name: rating, dtype: float64 ``` traceback: ``` TypeError Traceback (most recent call last) <ipython-input-361-a6efc7e46c45> in <module>() 1 plt.plot(decade_mean.index, decade_mean.values, 'o-',color='r',lw=3,label = 'Decade Average') 2 plt.scatter(movieDF.year, movieDF.rating, color='k', alpha = 0.3, lw=2) ----> 3 plt.xlabel('Year') 4 plt.ylabel('Rating') 5 remove_border() TypeError: 'str' object is not callable ``` I have solved remove\_border problem. It was a stupid mistake I made. But I couldn't figure out the problem with the 'str'.
It seems that `remove_border` is not defined: you have to define the function before it is used. Where the string error comes from is not clear to me; if you post the full traceback it will be clearer. Finally, your label is not shown because you have to call `plt.legend()`.
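For reference, a minimal corrected version of the plotting code might look like the sketch below (assuming `decade_mean` and `movieDF` are as in the question, and dropping the undefined `remove_border()` call). As an aside, `TypeError: 'str' object is not callable` on `plt.xlabel('Year')` usually means `plt.xlabel` was accidentally overwritten earlier in the session (e.g. `plt.xlabel = 'Year'` somewhere in the notebook); restarting the interpreter or kernel clears it.

```
import matplotlib.pyplot as plt

plt.plot(decade_mean.index, decade_mean.values, 'o-',
         color='r', lw=3, label='Decade Average')
plt.scatter(movieDF.year, movieDF.rating, color='k', alpha=0.3, lw=2)
plt.xlabel('Year')
plt.ylabel('Rating')
plt.legend()          # this is what makes the 'Decade Average' label appear
plt.show()
```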
I'm reading CLRS and there is something I don't understand regarding counting the number of parenthesizations. In the matrix-chain multiplication chapter, the book says:

> Denote the number of alternative parenthesizations of a sequence of $n$ matrices by $P(n)$. When $n$ = 1, we have just one matrix and therefore only one way to fully parenthesize the matrix product. When $n$ $\geq$ 2, a fully parenthesized matrix product is the product of two fully parenthesized matrix subproducts, and the split between the two subproducts may occur between the $k$th and ($k$ + 1)st matrices for any $k$ = 1, 2, ..., $n$ - 1.

What I don't understand is the part about the split. What exactly does it mean to split the product into two subproducts?
The split in a product is between the two outermost pairs of parentheses. For example, in $((a\*b)\*(c\*d))\*(e\*f)$, the split is between the $d$ and the $e$, because the last multiplication that is performed is between the results of the products $abcd$ and $ef$. The split tells you where the last multiplication is performed and lets you decompose the problem into two subproblems.
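Counting over all possible split positions gives the recurrence from CLRS, $P(1)=1$ and $P(n)=\sum\_{k=1}^{n-1} P(k)P(n-k)$ for $n \geq 2$, which a short sketch can evaluate directly (Python; the memoized function name is mine):

```
from functools import lru_cache

@lru_cache(maxsize=None)
def num_parenthesizations(n):
    """P(n): number of ways to fully parenthesize a chain of n matrices."""
    if n == 1:
        return 1
    # Try every split point k: the left part has k matrices, the right part n - k.
    return sum(num_parenthesizations(k) * num_parenthesizations(n - k)
               for k in range(1, n))

print([num_parenthesizations(n) for n in range(1, 8)])
# [1, 1, 2, 5, 14, 42, 132], the Catalan numbers C(n-1)
```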
I want to plot the bytes from a disk image in order to understand a pattern in them. This is mainly an academic task, since I'm almost sure this pattern was created by a disk testing program, but I'd like to reverse-engineer it anyway. I already know that the pattern is aligned, with a periodicity of 256 characters. I can envision two ways of visualizing this information: either a 16x16 plane viewed through time (3 dimensions), where each pixel's color is the ASCII code for the character, or a 256 pixel line for each period (2 dimensions). This is a snapshot of the pattern (you can see more than one), seen through `xxd` (32x16): ![Pattern to analyze](https://i.stack.imgur.com/zOFSK.gif) Either way, I am trying to find a way of visualizing this information. This probably isn't hard for anyone into signal analysis, but I can't seem to find a way using open-source software. I'd like to avoid Matlab or Mathematica and I'd prefer an answer in R, since I have been learning it recently, but nonetheless, any language is welcome. --- Update, 2014-07-25: given Emre's answer below, this is what the pattern looks like, given the first 30MB of the pattern, aligned at 512 instead of 256 (this alignment looks better): ![Graphical pattern](https://i.stack.imgur.com/4tDIA.png) Any further ideas are welcome!
I would use a visual analysis. Since you know there is a repetition every 256 bytes, create an image 256 pixels wide by however many deep, and encode the data using brightness. In (I)Python it would look like this:

```
import os, numpy, matplotlib.pyplot as plt
%matplotlib inline

def read_in_chunks(infile, chunk_size=256):
    while True:
        chunk = infile.read(chunk_size)
        if chunk:
            yield chunk
        else:
            # The chunk was empty, which means we're at the end
            # of the file
            return

fname = 'enter something here'
srcfile = open(fname, 'rb')
height = 1 + os.path.getsize(fname) // 256   # integer height for numpy.zeros
data = numpy.zeros((height, 256), dtype=numpy.uint8)
for i, line in enumerate(read_in_chunks(srcfile)):
    vals = list(map(int, line))
    data[i, :len(vals)] = vals
plt.imshow(data, aspect=1e-2);
```

This is what a PDF looks like:

![A PDF file visualized](https://i.stack.imgur.com/bicgF.png)

A 256-byte periodic pattern would have manifested itself as vertical lines. Except for the header and tail, it looks pretty noisy.
By my understanding, operating system is the abstraction layer above hardware. Which means that an operating system that supports two different CPU architectures can run the same code. But I still cant understand the details/steps of executing a given program. Suppose, I have a program that takes 2 numbers from the user, adds them and displays the answer. There are few steps in which the program does its work (Might be missing something or wrong somewhere, feel free to correct): 1) Double clicking the program file icon. (a) How do the GUI and the mouse (and RAM) interact to identify which icon is clicked? 2) Loading of program in the Main memory using the address of the icon clicked. (b) How is the os involved in finding that file from the disk? 3) Input of 2 numbers (CPU reads instructions of taking input from keyboard). (c) Will the example of 2 different keyboards (For eg: one with 'fn' key like in laptops and one of full size) be a good example for explaining the need of device controllers and drivers? 4) Adding of the 2 numbers (Arithmetic operations). (d) Is the os responsible for providing the CPU with the addresses of the operands and operators? 5) Displaying the output on the monitor. I understand that the question might be broad, but I am not able to piece together all these things just by reading books (like 'Operating System Concepts by Galvin'). Also, I like as much detail as possible as it makes things more clear.
At one level you have hardware: a computer with a CPU, RAM, hard drive, graphics card, monitor, keyboard and so on. Then on the lowest level of the operating system you have code that can talk to these devices. That code allows the operating system to read or write data from the hard drive, determine the location of the mouse, and so on. At a higher level of the operating system, the OS has code to assign address space to processes, start and kill processes, and allow these processes indirect access to the hardware. Above that, it is just code. You have a mouse reporting its location, you have graphics hardware that can display a cursor, so someone wrote code that keeps track of the mouse location and displays the cursor at the point corresponding to the mouse location. And then someone wrote code to display icons. And more and more and more code on top of that. And that's all, really.
I am currently estimating a stochastic volatility model with Markov Chain Monte Carlo methods. Thereby, I am implementing Gibbs and Metropolis sampling methods. Assuming I take the mean of the posterior distribution rather than a random sample from it, is this what is commonly referred to as *Rao-Blackwellization*? Overall, this would result in taking the mean over the means of the posterior distributions as parameter estimate.
> > Assuming I take the mean of the posterior distribution rather than a > random sample from it, is this what is commonly referred to as > Rao-Blackwellization? > > > I am not very familiar with stochastic volatility models, but I do know that in most settings, the reason we choose Gibbs or M-H algorithms to draw from the posterior, is because we don't know the posterior. Often we want to estimate the posterior mean, and since we don't know the posterior mean, we draw samples from the posterior and estimate it using the sample mean. So, I am not sure how you will be able to take the mean from the posterior distribution. Instead the Rao-Blackwellized estimator depends o the knowledge of the mean of the full conditional; but even then sampling is still required. I explain more below. Suppose the posterior distribution is defined on two variables, $\theta = (\mu, \phi$), such that you want to estimate the posterior mean: $E[\theta \mid \text{data}]$. Now, if a Gibbs sampler was available you could run that or run a M-H algorithm to sample from the posterior. If you can run a Gibbs sampler, then you know $f(\phi \mid \mu, data)$ in closed form and you know the mean of this distribution. Let that mean be $\phi^\*$. Note that $\phi^\*$ is a function of $\mu$ and the data. This also means that you can integrate out $\phi$ from the posterior, so the marginal posterior of $\mu$ is $f(\mu \mid data)$ (this is not known completely, but known upto a constant). You now want to now run a Markov chain such that $f(\mu \mid data)$ is the invariant distribution, and you obtain samples from this marginal posterior. The question is **How can you now estimate the posterior mean of $\phi$ using only these samples from the marginal posterior of $\mu$?** This is done via Rao-Blackwellization. \begin{align\*} E[\phi \mid data]& = \int \phi \; f(\mu, \phi \mid data) d\mu \, d\phi\\ & = \int \phi \; f(\phi \mid \mu, data) f(\mu \mid data) d\mu \, d\phi\\ & = \int \phi^\* f(\mu \mid data) d\mu. \end{align\*} Thus suppose we have obtained samples $X\_1, X\_2, \dots X\_N$ from the marginal posterior of $\mu$. Then $$ \hat{\phi} = \dfrac{1}{N} \sum\_{i=1}^{N} \phi^\*(X\_i), $$ is called the Rao-Blackwellized estimator for $\phi$. The same can be done by simulating from the joint marginals as well. **Example** (Purely for demonstration). Suppose you have a joint unknown posterior for $\theta = (\mu, \phi)$ from which you want to sample. Your data is some $y$, and you have the following full conditionals $$\mu \mid \phi, y \sim N(\phi^2 + 2y, y^2) $$ $$\phi \mid \mu, y \sim Gamma(2\mu + y, y + 1) $$ You run the Gibbs sampler using these conditionals, and obtained samples from the joint posterior $f(\mu, \phi \mid y)$. Let these samples be $(\mu\_1, \phi\_1), (\mu\_2, \phi\_2), \dots, (\mu\_N, \phi\_N)$. You can find the sample mean of the $\phi$s, and that would be the usual Monte Carlo estimator for the posterior mean for $\phi$.. Or, note that by the properties of the Gamma distribution $$E[\phi | \mu, y] = \dfrac{2 \mu + y}{y + 1} = \phi^\*.$$ Here $y$ is the data given to you and is thus known. The Rao Blackwellized estimator would then be $$\hat{\phi} = \dfrac{1}{N} \sum\_{i=1}^{N} \dfrac{2 \mu\_i + y}{y + 1}. $$ Notice how the estimator for the posterior mean of $\phi$ does not even use the $\phi$ samples, and only uses the $\mu$ samples. In any case, as you can see you are still using the samples you obtained from a Markov chain. This is not a deterministic process.
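To see the variance reduction numerically, here is a small sketch (Python/NumPy). It uses a simpler normal-normal toy model rather than the Gamma example above, because there i.i.d. draws from the joint are easy to generate: with $\mu \sim N(0,1)$ and $\phi \mid \mu \sim N(\mu, 1)$, we know $E[\phi \mid \mu] = \mu$, so the Rao-Blackwellized estimator of $E[\phi]$ averages the $\mu$ samples instead of the $\phi$ samples.

```
import numpy as np

rng = np.random.default_rng(1)

def two_estimators(n):
    """Return (plain MC estimate, Rao-Blackwellized estimate) of E[phi]."""
    mu = rng.normal(0.0, 1.0, size=n)          # mu ~ N(0, 1)
    phi = rng.normal(mu, 1.0)                  # phi | mu ~ N(mu, 1), so E[phi | mu] = mu
    return phi.mean(), mu.mean()               # the RB estimator averages E[phi | mu_i]

reps = np.array([two_estimators(500) for _ in range(2000)])
print("variance of plain estimator:", reps[:, 0].var())
print("variance of RB estimator:   ", reps[:, 1].var())   # roughly half as large
```

Both estimators target $E[\phi]=0$; the Rao-Blackwellized one has lower variance, which is the point of conditioning on $\mu$.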
I think, maybe some formalism could exist for the task which makes it significantly easier. My problem to solve is that I invented a reentrant algorithm for a task. It is relative simple (its pure logic is around 10 lines in C), but this 10 lines to construct was around 2 days to me. I am 99% sure that it is reentrant (which is not the same as thread-safe!), but the remaining 1% is already enough to disrupt my nights. Of course I could start to do that on a naive way (using a formalized state space, initial conditions, elemental operations and end-conditions for that, etc.), but I think some type of formalism maybe exists which makes this significantly easier and shorter. Proving the non-reentrancy is much easier, simply by showing a state where the end-conditions aren't fulfilled. But of course I constructed the algorithm so that I can't find a such state. I have a strong impression, that it is an algorithmically undecidable problem in the general case (probably it can be reduced to the halting problem), but my single case isn't general. I ask for ideas which make the proof easier. How are similar problems being solved in most cases? For example, a non-trivial condition whose fulfillment would decide the question into any direction, would be already a big help.
[Assembly language](https://en.wikipedia.org/wiki/Assembly_language) is a way to write instructions for the computer's [instruction set](https://en.wikipedia.org/wiki/Instruction_set), in a way that's slightly more understandable to human programmers. Different architectures have different instruction sets: the set of allowed instructions is different on each architecture. Therefore, you can't hope to have a write-once-run-everywhere assembly program. For instance, the set of instructions supported by x86 processors looks very different from the set of instructions supported by ARM processors. If you wrote an assembly program for an x86 processor, it'd have lots of instructions that are not supported on the ARM processor, and vice versa. The core reason to use assembly language is that it allows very low-level control over your program, and to take advantage of all of the instructions of the processor: by customizing the program to take advantage of features that are unique to the particular processor it will run on, sometimes you can speed up the program. The write-once-run-everywhere philosophy is fundamentally at odds with that.
**Edit:** It seems that I have made the question too general, so I will provide a specific example of the type of problem I am trying to solve. I have a database that contains every item that is sold at a grocery store, along with each defining feature of the items (i.e., price, country of origin, food category, producer...). I also have a database with customer purchases, so for each customer it lists the *n* items that were bought. For each customer I want to understand, **"as best as I can"**, the underlying reasoning for why they chose that group of *n* items, in a quantitative manner. A core caveat is that this is not being asked from an academic or theoretical viewpoint; it is purely practical.

**Original question:** When drawing a random sample from data, it is typically tested to check whether the sample is properly representative of the total population. Assume a scenario where a subset exists within a population, you know that it was not selected at random, and the individual points were chosen according to some sort of criteria. If the sample was not chosen at random, then it must have some distinguishable features and bias when compared to the population. Are there any specific quantitative methods for decomposing the differences between a subset and the population, beyond just plotting the distributions of one versus the other? Also, are there any Python packages or tools for this?

**In plain terms:** **I have a basket with a thousand items and I know the features/characteristics of each item. Someone comes and picks 10 items based on some preferences/characteristics/bias. I now want to understand the underlying reasoning for why they chose that group of 10 items, in a quantitative manner.**
You can do clustering and then select the subsets so you are sure that your subset has similar characteristics of main dataset and other subset. For the purpose of train-test split, I usually split main data into different clusters, and then split each cluster to 80-20 for training-test sets using sklearn train\_test\_split(... stratify=y\_clus). You can use my code; however, it's not always returning the best results and I may need to check different random\_state values to find the best model. In the first step, you need to encode your categorical variables and scale the numerical ones. ``` from sklearn import decomposition, datasets, model_selection, preprocessing, metrics from sklearn.preprocessing import StandardScaler, OneHotEncoder, MinMaxScaler, LabelEncoder from sklearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer categorical_features = ['gender', 'marital','province','agegroup','isdirector'] categorical_transformer = Pipeline(steps=[ ('onehot', OneHotEncoder(handle_unknown='ignore'))]) numeric_features = [col for col in df2.columns[1:-1] if col not in categorical_features] #numeric_features=[el for el in numeric_features if el!='age'] numeric_transformer = Pipeline(steps= ('scaler', StandardScaler()) ]) preprocessor = ColumnTransformer( transformers=[ ('num', numeric_transformer, numeric_features), ('cat', categorical_transformer, categorical_features)]) y_encoder = LabelEncoder() y = y_encoder.fit_transform(df2['sales']) X = df2[numeric_features + categorical_features] ``` and the second step is to call the dataset\_builder(). ``` _, y_train, _, y_test, _, y_val, X_train_sc, X_test_sc, X_val_sc = dataset_builder(X,y, do_clustering=True, singleclass=singcls,dataset_type='TVT', random_state=rnd_data) ``` The skipped variables ( \_ ) are X\_train, X\_test, X\_val for the unscaled (original) X. **BUT HOW IT WORKS????** The code use following function to do the clustering. I modified the code found on [SciPy Hierarchical Clustering and Dendrogram Tutorial](https://joernhees.de/blog/2015/08/26/scipy-hierarchical-clustering-and-dendrogram-tutorial/) ``` # hierarchical/agglomerative from scipy.cluster.hierarchy import dendrogram, linkage, fcluster import numpy as np import warnings def classclustering(X_sc,y=None, Z=None, nclusters=0, method='ward', metric='euclidean', maxdepth_show = 20,show_charts=True): """ Z: linkage matrix method: The linkage algorithm to use. Please check <scipy.cluster.hierarchy.linkage> single, complete, weighted,centroid, median, ward Methods ‘centroid’, ‘median’ and ‘ward’ are correctly defined only if Euclidean pairwise metric is used. metric: Pairwise distances between observations in n-dimensional space. Please check <scipy.spatial.distance.pdist> euclidean, minkowski, cityblock, seuclidean (standardized Euclidean), cosine, correlation, hamming, jaccard, chebyshev, canberra, braycurtis, mahalanobis, yule, matching, dice, kulsinski, rogerstanimoto, russellrao, sokalmichener, sokalsneath, wminkowski """ def performclustering(X_sc, Z=None, nclusters=0, method='ward', metric='euclidean', maxdepth_show = 20): linked=Z if linked is not None: # use previous linkage for custom number of clusters. 
if nclusters<2: raise Exception("nclus must be greater than 1 when linkage matrix (Z) has been used!") clus=fcluster(linked, nclusters, criterion='maxclust') else: # faster calculation by showing only the first 20 clusters, p=20 linked = linkage(X_sc, method, metric) labelList = range(1, 11) if show_charts: plt.figure(figsize=(10, 7)) dendrogram(linked, orientation='top', #labels=labelList, distance_sort='descending', truncate_mode='lastp', # show only the last p merged clusters p=maxdepth_show, # show only the last p merged clusters show_leaf_counts=True, # otherwise numbers in brackets are counts leaf_rotation=90., leaf_font_size=12., show_contracted=True # to get a distribution impression in truncated branches ) plt.show() # Elbow Method # calculating the best number of clusters. It's 4 or 6 for only numberical data, and 3 or 9 for all data last = linked[-20:, 2] last_rev = last[::-1] idxs = np.arange(1, len(last) + 1) acceleration = np.diff(last, 2) # 2nd derivative of the distances acceleration_rev = acceleration[::-1] k = acceleration_rev.argmax() + 2 # if idx 0 is the max of this we want 2 clusters if show_charts: plt.plot(idxs, last_rev) plt.xticks(np.arange(min(idxs), max(idxs)+1, 2.0)) plt.xlabel("Number of clusters") plt.plot(idxs[:-2] + 1, acceleration_rev) plt.show() if nclusters>0: print("\033[1;31;47m Warning....\n ncluster has been set. Optimal number of clusters (%s) has been disabled!\n"%k+'\033[0m') else: nclusters=k if show_charts: print ("clusters:", nclusters) clus=fcluster(linked, nclusters, criterion='maxclust') return clus, linked, nclusters if y is None: # single-class clustering if type(Z)==list: raise Exception("Multi-class clustering is not working with predefined Linkage Matrix (Z)!") else: clus,linked, nclus = performclustering(X_sc, Z, nclusters, method, metric, maxdepth_show) else: # perform multi-class clustering if Z is not None: raise Exception("Multi-class clustering is not working with predefined Linkage Matrix (Z)!") else: y_classes = set(y) #clus_y=[] linked=[] if show_charts: print("===========================") clus= np.zeros(X_sc.shape[0],dtype=int) tmpclus_old=[0] nclus=0 for cl in y_classes: if show_charts: print("Cluster analysis for class: %s"%cl) mask = y==cl # indices tmpclus, tmplinked, tmp_nclus = performclustering(X_sc[mask,:], Z, nclusters, method, metric, maxdepth_show) nclus += tmp_nclus #clus_y.append(tmpclus) linked.append(tmplinked) clus[mask]=tmpclus+max(tmpclus_old) tmpclus_old = tmpclus if show_charts: print("===========================") return clus,linked, nclus ``` To use the function, you just need to feed it with scaled data if you have categorical variables. The function can do clustering based on X only, or doing clustering for each calsses in y (clustering for YES, NO, ... separately). 
``` scaler = preprocessor.fit(X) X_sc = scaler.transform(X) # single-class clustering clus,Z,nclus= classclustering(X_sc,show_charts=True) # multi-class clustering #clus,Z, nclus = classclustering(X_sc, y, show_charts=True) ``` The output would be something like this: [![enter image description here](https://i.stack.imgur.com/kPFa7.png)](https://i.stack.imgur.com/kPFa7.png) and number of clusters is the peak in orange line: [![enter image description here](https://i.stack.imgur.com/WCE3h.png)](https://i.stack.imgur.com/WCE3h.png) Now, if you are going to split your data into training-test (dataset\_type='TT') or training-validation-test sets (dataset\_type='TVT'), use following function: ``` import imblearn.over_sampling as OverSampler X_labels = '' categorical_features_onehot = '' def dataset_builder(X,y, do_clustering=True, singleclass=True, dataset_type='TVT', random_state=2): X_train, X_val, X_test, y_train, y_val, y_test = [],[],[],[],[],[] dataset_type=dataset_type.lower() if dataset_type not in ['tt','tvt']: raise Exception("Unknown dataset_type!") if not do_clustering: if dataset_type=='tt': X_train, y_train, X_test, y_test, X_val,y_val = train_test_builder(X, y, validation_size=0, test_size=0.2, random_state=random_state) else: X_train, y_train, X_test, y_test, X_val,y_val = train_test_builder(X, y, validation_size=0.15, test_size=0.15, random_state=random_state) else: scaler = preprocessor.fit(X) X_sc = scaler.transform(X) if singleclass: # single-class clustering clus,Z,nclus= classclustering(X_sc,show_charts=False) else: # multi-class clustering clus,Z, nclus = classclustering(X_sc, y, show_charts=False) if dataset_type=='tt': for cl in set(clus): mask = clus==cl X_clus = X[mask] y_clus = y[mask] X_train_clus, y_train_clus, X_test_clus, y_test_clus, _, _ = train_test_builder(X_clus, y_clus, validation_size=0, test_size=0.2, random_state=random_state) X_train.append(X_train_clus) X_test.append(X_test_clus) y_train.append(y_train_clus) y_test.append(y_test_clus) # method 1.2, fastest X_train = np.concatenate(X_train,axis=0) X_test = np.concatenate(X_test,axis=0) y_train = np.concatenate(y_train,axis=0) y_test = np.concatenate(y_test,axis=0) # convert to dataframe X_train = pd.DataFrame(X_train,columns=X.columns) X_test = pd.DataFrame(X_test,columns=X.columns) else: for cl in set(clus): mask = clus==cl X_clus = X[mask] y_clus = y[mask] X_train_clus, y_train_clus, X_test_clus, y_test_clus, X_val_clus, y_val_clus = train_test_builder(X_clus, y_clus, validation_size=0.15, test_size=0.15, random_state=random_state) X_train.append(X_train_clus) X_val.append(X_val_clus) X_test.append(X_test_clus) y_train.append(y_train_clus) y_val.append(y_val_clus) y_test.append(y_test_clus) global xt,xv,xtt xt,xv,xtt = X_train,X_val,X_test # method 1.2, fastest X_train = np.concatenate(X_train,axis=0) X_val = np.concatenate(X_val,axis=0) X_test = np.concatenate(X_test,axis=0) y_train = np.concatenate(y_train,axis=0) y_val = np.concatenate(y_val,axis=0) y_test = np.concatenate(y_test,axis=0) # convert to dataframe X_train = pd.DataFrame(X_train,columns=X.columns) X_val = pd.DataFrame(X_val,columns=X.columns) X_test = pd.DataFrame(X_test,columns=X.columns) # preprocessing based on X_train: scaler = preprocessor.fit(X_train) X_train_sc, X_test_sc, X_val_sc = [],[],[] X_train_sc = scaler.transform(X_train) X_test_sc = scaler.transform(X_test) if len(X_val)>0: X_val_sc = scaler.transform(X_val) # dummy categorical vars name created by preprocessor ohe=scaler.named_transformers_['cat'] 
ohe=ohe.named_steps['onehot'] global categorical_features_onehot categorical_features_onehot = ohe.get_feature_names(categorical_features) global X_labels X_labels = numeric_features+list(categorical_features_onehot) return X_train, y_train, X_test, y_test, X_val, y_val, X_train_sc, X_test_sc, X_val_sc ``` My code uses some global variables such as preprocessor, categorical\_features\_onehot (the labels of the dummy variables) and X\_labels.
I've tried the following LP relaxation of maximum independent set $$\max \sum\_i x\_i$$ $$\text{s.t.}\ x\_i+x\_j\le 1\ \forall (i,j)\in E$$ $$x\_i\ge 0$$ I get $1/2$ for every variable for every cubic non-bipartite graph I tried. 1. Is this true for all connected cubic non-bipartite graphs? 2. Is there an LP relaxation that works better for such graphs? **Update 03/05**: Here's the result of the clique-based LP relaxation suggested by Nathan [![](https://i.stack.imgur.com/8CJTK.png)](https://i.stack.imgur.com/8CJTK.png) I've summarized the experiments [here](http://yaroslavvb.blogspot.com/2011/03/linear-programming-for-maximum.html) Interestingly, there seem to be quite a few non-bipartite graphs for which the simplest LP relaxation is integral.
Non-bipartite connected cubic graphs have the unique optimal solution $x\_i = 1/2$; in a bipartite cubic graph you have an integral optimal solution. --- *Proof:* In a cubic graph, if you sum over all $3n/2$ constraints $x\_i + x\_j \le 1$, you have $\sum\_i 3 x\_i \le 3n/2$, and hence the optimum is at most $n/2$. The solution $x\_i = 1/2$ for all $i$ is trivially feasible, and hence also optimal. In a bipartite cubic graph, each part has half of the nodes, and the solution $x\_i = 1$ in one part is hence also optimal. Any optimal solution must be tight, that is, we must have $\sum\_i 3 x\_i = 3n/2$ and hence $x\_i + x\_j = 1$ for each edge $\{i,j\}$. Thus if you have an odd cycle, you must choose $x\_i = 1/2$ for each node in the cycle. And then if the graph is connected, this choice gets propagated everywhere.
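If you want to check the claim numerically, here is a small sketch (my own, assuming SciPy is available; not part of the original argument) that solves this LP for $K_4$, the smallest connected cubic non-bipartite graph, and recovers $x_i = 1/2$ everywhere:

```python
# Sketch (my own, assumes SciPy): LP relaxation of maximum independent set on K4,
# the smallest connected cubic non-bipartite graph.
from itertools import combinations
from scipy.optimize import linprog

n = 4
edges = list(combinations(range(n), 2))          # K4: every pair of vertices is an edge

c = [-1.0] * n                                   # maximize sum x_i  <=>  minimize -sum x_i
A_ub = [[1.0 if v in e else 0.0 for v in range(n)] for e in edges]
b_ub = [1.0] * len(edges)                        # x_i + x_j <= 1 for every edge

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n, method="highs")
print(res.x)      # expected: [0.5 0.5 0.5 0.5]
print(-res.fun)   # expected LP optimum: 2.0 == n/2
```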
Given a directed graph and two vertices $S$ and $D$ (source and destination) such that each of its edges has a weight of the form $A\_i+B\_ix\_i = V$, where $A\_i$ is a non-negative integer, $B\_i$ is a positive integer, and $x\_i$ is a variable that can take non-negative integer values (each variable is used only once in the graph), find a positive value $V$ such that there exists a path from $S$ to $D$ in the graph for which $V$ satisfies all edges on that path. Of course the trivial method would be to test each positive integer value of $V$, but this might take exponential time. Is there something better we can do to achieve a solution in polynomial time, or is that the best we have?
Assume first that the $x\_i$ can take arbitrary integer values. Then the constraint $A\_i + B\_i x\_i = V$ just states that $V \equiv A\_i \pmod{B\_i}$. Two such constraints are contradictory if $A\_i\not\equiv A\_j \pmod{(B\_i,B\_j)}$, where $(B\_i,B\_j)$ is the GCD of $B\_i$ and $B\_j$. The Chinese Remainder Theorem should show (after some work) that any set of constraints in which no two are contradictory can be simultaneously satisfied. (Prove or refute! A small sketch of the pairwise compatibility check appears at the end of this answer.) The constraint that $x\_i \geq 0$ doesn't actually impose any further restrictions. Indeed, take a solution $V$ to the original problem. By adding to $V$ a large enough multiple of $\prod\_i B\_i$, we obtain another solution in which all $x\_i$ are non-negative. This reduces your problem to the following problem: > > Given a directed graph and a set of forbidden pairs of edges, does there exist a path from $s$ to $t$ that doesn't contain a forbidden pair of edges? > > > Unfortunately, this problem is NP-complete, by reduction from SAT. Given $m$ clauses $\varphi\_1,\ldots,\varphi\_m$, there will be $m+1$ vertices $v\_0,\ldots,v\_m$. For each literal $\ell \in \varphi\_i$, there is a corresponding edge from $v\_{i-1}$ to $v\_i$. Two complementary literals form a forbidden pair. A legal path from $v\_0$ to $v\_m$ exists iff the instance is satisfiable. There are now two options: * Either your particular instance has more structure, which you can employ to get a more efficient solution; * Or your original problem is also NP-hard.
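As promised, a tiny sketch (my own illustration; the helper names are made up) of the pairwise compatibility test for the modular part:

```python
# Sketch (my own) of the pairwise compatibility test described above.
# A constraint (A_i, B_i) means V = A_i (mod B_i).
from math import gcd

def compatible(c1, c2):
    (a1, b1), (a2, b2) = c1, c2
    return (a1 - a2) % gcd(b1, b2) == 0

def all_compatible(constraints):
    return all(compatible(c1, c2)
               for i, c1 in enumerate(constraints)
               for c2 in constraints[i + 1:])

# V = 1 (mod 4) forces V odd, V = 2 (mod 6) forces V even: contradictory.
print(all_compatible([(1, 4), (2, 6)]))   # False
# V = 1 (mod 4) and V = 3 (mod 6) agree modulo gcd(4, 6) = 2, e.g. V = 9 works.
print(all_compatible([(1, 4), (3, 6)]))   # True
```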
Suppose that we have continuous data $(X\_1,Y\_1),\dots,(X\_n,Y\_n)$. Suppose that $r\_{x,y}$ is the Karl-Pearson correlation coefficient between $X\_i$'s and $Y\_i$'s. For what range of values of $r\_{x,y}$, can we really decide that there may indeed be a linear relationship between $X\_i$'s and $Y\_i$' and proceed to predict $Y$ by using a linear regression? I'm sure the topic concerning this question should be a well-studied one. I did a little search here; couldn't find relevant posts. Any answers to the above question or pointers to such a study is greatly appreciated.
Often the term "significance" is used in the meaning "$\rho$ is statistically significantly different from zero". This is, however, not what most users of $\rho$ are interested in, because the null hypothesis that $\rho$ is exactly zero is almost certainly false. Hence even the tiniest deviation from zero becomes "significant" for a sample size that is large enough. It is generally of more interest whether a correlation is *strong*. What is considered a "strong" correlation depends on the field, but here is a rule of thumb taken from an introductory textbook ([here](https://methods.sagepub.com/base/download/DatasetStudentGuide/pearson-in-gho-2012) is an online reference for the same rule): \begin{eqnarray\*} |\rho|\leq 0.3: & & \mbox{weak correlation}\\ 0.3 < |\rho|\leq 0.7: & & \mbox{moderate correlation}\\ |\rho|> 0.7: & & \mbox{strong correlation}\\ \end{eqnarray\*} I would thus suggest, *not* to do a hypothesis test against $\rho=0$, but to report a confidence interval for $\rho$. You can find the formulas, [e.g., here](https://ncss-wpengine.netdna-ssl.com/wp-content/themes/ncss/pdf/Procedures/PASS/Confidence_Intervals_for_Pearsons_Correlation.pdf), and most statistical packages provide functions that compute it for you, for example `cor.test` in R. Then you can see how far this interval overlaps with the "weak" range.
I am training a model on a dataset, and all of the relevant algorithms I have tried converge to roughly the same accuracy score, meaning that none of them performs significantly better than the others. For example, if you're training a random forest and a neural network on MNIST, you'll observe an accuracy score of around 98% for both. Why is it that bottlenecks in performance seem to be dictated by the input data rather than by the choice of algorithm?
There's a lot of truth in what you say and that's certainly the argument in what some people have branded [data centric AI](https://datacentricai.org/). For a start, a lot of academic research looks at optimizing some measure (e.g. accuracy) on a fixed given dataset (e.g. ImageNet), which kind of makes sense to measure progress in algorithms. However, in practice, instead of tinkering with minute improvements in algorithms it is often better to just get more data (or label in different ways). Similarly, in Kaggle competitions there will often be pretty small differences between well-tuned XGBoost, LightGBM, Random Forrest and certain Neural Network architectures on tabular data (plus you can often squeeze out a bit more by ensembling them), but in practice you might be pretty happy with just using of these (never mind that you could be better by a few decimal points that for many applications might be irrelevant, or at least less important than the model running fast and cheaply). On the other hand, it is clear that some algorithms are just much better at certain tasks than others. E.g. look at the spread in performance on [ImageNet](https://en.wikipedia.org/wiki/ImageNet), results got better year by year and e.g. the error rate got halved from 2011 to 2012 when a convolutional neural network got used. You even see a big spread in neural network performance when assessed on a [newly created similar test set](https://arxiv.org/abs/1902.10811) ranging from below 70% to over 95%. That certainly is a huge difference in performance. Or, if you get a new image classification task and have just 50 to 100 images of some reasonable size (i.e. 100 or more pixels or so in each dimension) from each class, your first thought should really be transfer learning with some kind of neural network (e.g. convolutional NN or some vision transformer) picked based on trading off good performance on ImageNet with feasible size. In contrast, it's pretty unlikely that training a RF, XGBoost, or a neural network from scratch would come anywhere near that approach in performance. Additionally, let's not forget that often a lot is to be gained by creating the right features (especially in tabular data) or by representing the data in a good way (e.g. it turns out that you can turn audio data into spectrograms and then use neural networks for images on that, and that works pretty well). While, if one misses creating the right features or represents the data in a poor way, even a theoretically good model will struggle.
In college we have been learning about theory of computation in general and Turing machines more specifically. One of the great theoretical results is that at the cost of a potentially large alphabet (symbols), you can reduce the number of states down to only 2. I was looking for examples of different Turing Machines and a common example presented is the Parenthesis matcher/checker. Essentially it checks if a string of parentheses, e.g `(()()()))()()()` is balanced (the previous example would return 0 for unbalanced). Try as I may I can only get this to be a three state machine. I would love to know if anyone can reduce this down to the theoretical minimum of 2 and what their approach/states/symbols was! Just to clarify, the parentheses are "sandwiched" between blank tape so in the above example `- - - - - - - (()()()))()()() - - - - - - -` would be the input on the tape. The alphabet would include `(`,`)`,`1`,`0`,`-`, and the `*halt*` state does not count as a state. For reference the three state approach I have is as follows: Description of states: ``` State s1: Looks for Closing parenthesis State s2: Looks for Open parenthesis State s3: Checks the tape to ensure everything is matched Symbols: ),(,X ``` Transitions Listed as: ``` Action: State Symbol NewState WriteSymbol Motion ``` ``` // Termination behavior Action: s2 - *halt* 0 - Action: s1 - s3 - r //Transitions of TM Action: s1 ( s1 ( l Action: s1 ) s2 X r Action: s1 X s1 X l Action: s2 ( s1 X l Action: s2 X s2 X r Action: s3 ( *halt* 0 - Action: s3 X s3 X r Action: s3 - *halt* 1 - ``` Forgive the informal way of writing all this down. I am still learning the theoretical constructs behind this.
If looking for the key 60 we reach a number $K$ less than 60, we go right (where the larger numbers are) and we never meet numbers less than $K$. That argument can be repeated, so the numbers 10, 20, 40, 50 must occur along the search in that order. Similarly, if looking for the key 60 we reach a number $K$ larger than 60, we go left (where the smaller numbers are) and we never meet numbers larger than $K$. Hence the numbers 90, 80, 70 must occur along the search in that order. The sequences 10, 20, 40, 50 and 90, 80, 70 can then be shuffled together, as long as each subsequence stays intact. Thus we can have 10, 20, 40, 50, 90, 80, 70, but also 10, 20, 90, 40, 80, 70, 50. We can now compute the number of such orderings by choosing the positions of the large and small numbers. See the comment by Aryabhata. We have two sequences of 4 and 3 numbers. How many ways can I shuffle them? In the final 7 positions I have to choose 3 positions for the larger numbers (and the remaining 4 for the smaller numbers). I can choose these in $7 \choose 3$ ways. After fixing these positions we know the full sequence. E.g., my first example has positions S S S S L L L, the second has S S L S L L S. You ask for a generalization. In general, the $x$ numbers less than the key and the $y$ numbers larger than it are each fixed in their relative order. The smaller numbers must go up, the larger numbers must go down. The number is then $x+y \choose y$. PS (edited). Thanks to Gilles, who noted that 30 is not in the question.
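For completeness, a small sketch (my own) that enumerates the interleavings and confirms the count $\binom{x+y}{y} = \binom{7}{3} = 35$:

```python
# Sketch (my own): count and enumerate the valid search sequences by choosing
# the positions of the "larger" numbers among the x + y slots.
from itertools import combinations
from math import comb

small, large = [10, 20, 40, 50], [90, 80, 70]    # x = 4, y = 3
x, y = len(small), len(large)

sequences = []
for large_positions in combinations(range(x + y), y):
    seq, it_small, it_large = [], iter(small), iter(large)
    for pos in range(x + y):
        seq.append(next(it_large) if pos in large_positions else next(it_small))
    sequences.append(seq)

print(comb(x + y, y))    # 35
print(len(sequences))    # 35, and every sequence keeps both subsequences intact
```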
I am using LibSVM library for classification. For my problem I am using polynomial kernel and I need to select best parameters (`d` = degree of polynomial kernel, and `C` = soft margin constant). The [LibSVM guide](http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf) suggests for grid search for this job. In this library there exists a parameter selection tool (`grid.py`), but it seems to me this is implemented to tune parameters of RBF kernel (gamma). My questions are: 1. Is there any other good solution for selecting suitable parameters rather than Grid search? 2. Can anybody give me some hints / sample code for selecting parameters for polynomial kernel?
Grid search is a sensible procedure as @JohnSmith suggests, however it is not the only stable technique. I generally use the [Nelder-Mead simplex algortihm](http://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method), which I have found to be very reliable and more efficient than grid search as less time is spent investigating areas of hyper-parameter space that give poor models. If you are a MATLAB user, you can get my implementation of this method [here](http://theoval.cmp.uea.ac.uk/matlab/#optim). Nelder Mead simplex methods are attractive as they don't require gradient information (I suppose you can think of the gradient of the simplex as being an approximation of the local gradient) and is very easily implemented. Also gradient descent optimisation of the span bound is a good way of optimising the hyper-parameters, see [Chapelle *et al.*](http://olivier.chapelle.cc/pub/mlj02.pdf) (also investigate the other papers that Olivier has written, there are some real gems). One advantage of grid search however is that it is quite easy to over-fit the cross-validation error (or span bound etc.) when optimising the hyper-parameters, especially if there is little data and many hyper-parameters. This means you can get a very poor classifier if you tune the hyper-parameters to convergence, so a coarse grid search can often perform better (I have a paper in preparation on this).
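For a Python workflow (rather than the MATLAB implementation linked above), a minimal sketch of the Nelder-Mead idea might look like the following. This is my own illustration, not the answer's code: the parameter ranges, the log-scale trick, and the outer loop over the discrete degree are all my choices, and since cross-validated error is piecewise constant it should be read as a demonstration of the mechanics rather than a robust tuner.

```python
# Sketch: tune C and coef0 of a polynomial-kernel SVM with Nelder-Mead on the
# cross-validated error; the integer degree is handled by an outer loop.
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def cv_error(theta, degree=3):
    C, coef0 = np.exp(theta)          # optimise in log-space to keep parameters positive
    clf = SVC(kernel="poly", degree=degree, C=C, coef0=coef0, gamma="scale")
    return 1.0 - cross_val_score(clf, X, y, cv=5).mean()

best = None
for degree in (2, 3, 4):              # discrete hyper-parameter: outer loop
    res = minimize(cv_error, x0=np.log([1.0, 1.0]), args=(degree,),
                   method="Nelder-Mead")
    if best is None or res.fun < best[0]:
        best = (res.fun, degree, np.exp(res.x))
print(best)                           # (cv error, degree, [C, coef0])
```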
I'm trying to read and understand the paper [Attention is all you need](https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf) and in it, they used positional encoding with sin for even indices and cos for odd indices. In the paper (Section 3.5), they mentioned > > Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. > > > My question is that if there is no recurrence, why not use One Hot Encoding. What is the advantage of using a sinusoidal positional encoding?
Your entire code is correct except at the last point where you are equating with `df['columnname'].mode()`. The value here should have a dtype int or string but this has a dtype object. Just replace it with `df['columnname'].mode().values` and you are good to go. Also, I see a lot of stuff that is not required here. Since you are using `pd.loc`, you can simply write: ``` df.loc[df['columnname'] == "not known",'columnname'] = df['columnname'].mode().values ```
I have a dataset, let's say of counts of apples and oranges. I would like to have a metric that equally reflects the ratio of apples to oranges but if I simply use the ratio: apple count + 1 : orange counts +1 I get very large numbers when I have more apples than oranges but very small numbers when I have more oranges than apples. I would like a ratio that is more normally distributed so that when apples = oranges my ratio = 1 but my max value is 2 and min value is 0. Any suggestions on how to achieve this?
Let's forget apples and oranges and generalise painlessly to counts $n\_1 + n\_2 =: n$ and work with the proportions $p\_1 = n\_1/n, p\_2 = n\_2/n$. As your own answer implies, the problem with $\ln (n\_1/n\_2) = \ln n\_1 - \ln n\_2 = \ln p\_1 - \ln p\_2$ is that either count $n\_1, n\_2$ could be zero. Hence over many years there have been proposals of various fudges of the form $\ln [(n\_1 + c)/(n\_2 + k)] = \ln (n\_1 + c) - \ln (n\_2 + k)$ for positive $c, k$ where $c$ and $k$ need not be different. However, a fudge-free alternative is to use some folded power, say $p\_1^\lambda - p\_2^\lambda$, so that for example $\lambda = 1/2$ gives folded square roots and $\lambda = 1/3$ gives folded cube roots. Positive features of such folded powers include 1. symmetry of definition, so that folded powers for $\lambda > 0$ range from $-1$ to $1$ (and so could be translated to the interval from $0$ to $2$ simply by adding $1$). 2. being defined simply for $n\_1 = 0$ or $n\_2 = 0$ or both. There is no guarantee that the results will be normally distributed, nor is there for any transform of $n\_1$ and $n\_2$. However, the results are likely to be closer to a normal distribution, and indeed to symmetry, than the results of $n\_1/n\_2$ would be. More on folded powers at [What is the most appropriate way to transform proportions when they are an independent variable?](https://stats.stackexchange.com/questions/195293/what-is-the-most-appropriate-way-to-transform-proportions-when-they-are-an-indep) which in turn gives further references.
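A tiny sketch (my own; `folded_power` is a made-up name) of the transform, shifted onto $[0, 2]$ as the question asks:

```python
# Sketch (my own): folded-power transform of two counts, shifted from [-1, 1] onto [0, 2].
# Note: both counts equal to zero would give 0/0; handle that case as you see fit.
import numpy as np

def folded_power(n1, n2, lam=0.5):
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    total = n1 + n2
    p1, p2 = n1 / total, n2 / total
    return (p1 ** lam - p2 ** lam) + 1.0

print(folded_power(10, 10))   # 1.0 when the counts are equal
print(folded_power(10, 0))    # 2.0 (all apples)
print(folded_power(0, 10))    # 0.0 (all oranges)
```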
I was wondering if there's any good R libraries out there for deep learning neural networks? I know there's the `nnet`, `neuralnet`, and `RSNNS`, but none of these seem to implement deep learning methods. I'm especially interested in unsupervised followed by supervised learning, and [using dropout to prevent co-adaptation](http://arxiv.org/pdf/1207.0580.pdf). /edit: After a few years, I've found the [h20 deep learning package](https://cran.r-project.org/web/packages/h2o/index.html) very well-designed and easy to install. I also love the [mxnet package](https://github.com/dmlc/mxnet/tree/master/R-package), which is (a little) harder to install but supports things like covnets, runs on GPUs, and is really fast.
There's another new package for deep networks in R: [deepnet](http://cran.r-project.org/web/packages/deepnet/index.html) I haven't tried to use it yet, but it's already been incorporated into the [caret](http://cran.r-project.org/web/packages/caret/index.html) package.
Suppose that you're given a fair coin and you would like to simulate the probability distribution of repeatedly flipping a fair (six-sided) die. My initial idea is that we need to choose appropriate integers $k,m$, such that $2^k = 6m$. So after flipping the coin $k$ times, we map the number encoded by the k-length bitstring to outputs of the die by dividing the range $[0,2^k-1]$ into 6 intervals each of length $m$. However, this is not possible, since $2^k$ has two as its only prime factor but the prime factors of $6m$ include three. There should be some other simple way of doing this, right?
Another approach to simulate a roll of a dN using a dM (in the case of the specific question asked a d6 using a d2) is to partition the interval [0, 1) into N equal intervals of length 1/N, [0, 1/N), [1/N, 2/N), ..., [(N-1)/N, N). Use the dM to generate a base-M fraction, 0.bbbb..., in [0, 1). If that falls in [(i-1)/N, i/N), take i as the roll of the dN. Note that you only have to generate enough base-M digits of the fraction to determine which interval it is in.
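Here is a small sketch (my own) of that interval scheme for $M = 2$, $N = 6$. It uses floating point for brevity, which is fine for illustration; an exact implementation would track the interval with integers.

```python
# Sketch (my own): roll a d6 with a fair coin. Generate binary digits of a uniform
# fraction in [0, 1); stop as soon as the current dyadic interval [low, low + width)
# fits inside a single bin [(i-1)/6, i/6), and report that bin.
import random

def roll_d6(flip=lambda: random.randint(0, 1)):
    low, width = 0.0, 1.0
    while True:
        width /= 2.0
        if flip():
            low += width
        bin_idx = int(low * 6)                    # candidate bin [bin_idx/6, (bin_idx+1)/6)
        if low + width <= (bin_idx + 1) / 6.0:    # interval fully inside one bin
            return bin_idx + 1                    # faces 1..6

print([roll_d6() for _ in range(10)])
```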
After reading [this question](https://cstheory.stackexchange.com/questions/6770/how-do-you-decide-when-you-have-enough-research-results-to-write-a-paper-and-to-w) and David's answer I thought a more general question asking for tips would be useful. So > > how can authors make (and write) a paper stronger? > > >
This is sort of an unserious answer in that it's not what I consciously do most of the time, but: Think about what the next paper after yours might be — not the next paper that takes some small follow-on problem from your paper and solves that too, but the next paper that takes it to another level, finds tight upper and lower bounds, shows that it's an instance of a more general phenomenon, and generally makes your paper obsolete. Then write about the results in that paper instead of the ones you already have.
The [Bellman-Ford algorithm](http://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm) determines the shortest path from a source $s$ to all other vertices. Initially the distance between $s$ and all other vertices is set to $\infty$. Then the shortest path from $s$ to each vertex is computed; this goes on for $|V|-1$ iterations. My questions are: * Why does there need to be $|V|-1$ iterations? * Would it matter if I checked the edges in a different order? Say, if I first check edges 1,2,3, but then on the second iteration I check 2,3,1. MIT Prof. Eric said the order didn't matter, but this confuses me: wouldn't the algorithm incorrectly update a node based on edge $x\_2$ if its value was dependent on the edge $x\_1$ but $x\_1$ is updated after $x\_2$?
Consider the shortest path from $s$ to $t$, $s, v\_1, v\_2, \dots, v\_k, t$. This path consists of at most $|V|-1$ edges, because repeating a vertex in a shortest path is always a bad idea (or at least there is a shortest path which does not repeat vertices), if we do not have negative weight cycles. In round one, we know that the edge $(s, v\_1)$ will be relaxed, so the distance estimate for $v\_1$ will be correct after this round. Note that we have no idea what $v\_1$ is at this point, but as we've relaxed *all* edges, we must have relaxed this one as well. In round two, we relax $(v\_1, v\_2)$ at some point. We still have no idea what $v\_1$ or $v\_2$ are, but we know their distance estimates are correct. Repeating this, after some round $k+1$, we have relaxed $(v\_k, t)$, after which the distance estimate for $t$ is correct. We have no idea what $k$ is until the entire algorithm is over, but we know that it will happen at some point (assuming no negative weight cycles). So, the crucial observation is that after round $i$, the $i$-th node of the shortest path must have its distance estimate set to the correct value. As the path is at most $|V|-1$ edges long, $|V|-1$ rounds suffices to find this shortest path. If a $|V|$th round still changes something, then something weird is going on: all paths should already be 'settled' to their final values, so we must have the situation that some negative weight cycle exists.
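To make the argument concrete, here is a minimal sketch (my own, not from any lecture) showing the $|V|-1$ rounds, the fact that the edge order within a round is arbitrary, and the usual extra pass for negative cycles:

```python
# Minimal Bellman-Ford sketch (my own) illustrating the |V| - 1 relaxation rounds.
def bellman_ford(num_vertices, edges, source):
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):          # |V| - 1 rounds
        changed = False
        for u, v, w in edges:                  # relax every edge, in any order
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                        # distances already settled: stop early
            break
    for u, v, w in edges:                      # one more pass: a change means a negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from source")
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(bellman_ford(4, edges, 0))   # [0, 3, 1, 4]
```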
I am looking for an algorithm to convert a digraph (directed graph) to an undirected graph in a reversible way, ie the digraph should be reconstructable if we are given the undirected graph. I understand that this will come in expense of the undirected graph having more vertices but I do not mind. Does one know how to do this or can suggest any references? Thanks in advance. --- Update: Regarding the answer of AdrianN below. It might be a good starting point but I don't think it works in its current form. Here is an image of why I think it doesn't: ![enter image description here](https://i.stack.imgur.com/Kh4eq.png) --- Update after D.W.'s comment: I consider the vertices of the graphs to be unlabeled. If a solution involves labeling the vertices (like AdrianN's does), then it should give the same (isomorphic) undirected graph no matter how the labeling is done. My definition of "isomorphic" for graphs with labeled vertices is that there is a permutation of the labeling that relates the two graphs, but I am not sure of the exact definition for unlabeled graphs...
To convert a directed graph $D$ to an undirected graph $G$ one do the following: 1. Number the nodes of $D$ 2. Create two undirected graphs $G'$, $G''$ on the same vertex set as $D$ 3. For every edge $u$,$v$ in $D$ add the edge to $G'$ if $u<v$, else add the edge to $G''$ 4. G is the disjoint union of $G'$ and $G''$ When doing the disjoint union one must take care to make it reversible. ![Example](https://i.stack.imgur.com/ypNwH.png)
Generally speaking, what are the differences between an MLE and a MAP estimator? If I wanted to improve the performance of a model, how would these differences come into play? Are there specific assumptions about the model or the data that would cause one to be favored over the other?
MLE is informed entirely by the likelihood and MAP is informed by both prior and likelihood. Both methods return point estimates for parameters via calculus-based optimization. MLE comes from frequentist statistics where practitioners let the likelihood "speak for itself." Whereas MAP comes from Bayesian statistics where prior beliefs (usually informed by domain knowledge of parameters) effectively regularize the point estimate. Note: MAP, while Bayesian, is atypical of Bayesian philosophy. Bayesian statistics generally treats parameters, themselves, as distributions as opposed to point estimates. Sampling techniques such as MCMC, or newer methods like variational inference can help approximate the distribution.
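A toy numeric sketch (my own example, not part of the answer above): for a Bernoulli success probability with a Beta prior, MAP has a closed form and reduces to the MLE when the prior is flat.

```python
# Sketch (my own): MLE vs MAP for a Bernoulli parameter with a Beta(a, b) prior.
def mle(successes, trials):
    return successes / trials

def map_estimate(successes, trials, a=2.0, b=2.0):
    # mode of the Beta posterior: (successes + a - 1) / (trials + a + b - 2)
    return (successes + a - 1) / (trials + a + b - 2)

s, n = 9, 10                      # 9 heads out of 10 flips
print(mle(s, n))                  # 0.9   -- likelihood only
print(map_estimate(s, n))         # ~0.833 -- the prior pulls the estimate toward 0.5
print(map_estimate(s, n, 1, 1))   # 0.9   -- a flat prior recovers the MLE
```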
As this thread title gives away I need to prove $x^y$ to be a primitive recursive function. So mathematically speaking, I think the following are the recursion equations, well aware that I am assigning to $0^0$ the value $1$, which shouldn't be, since it is an "indeterminate" form. \begin{cases} x^0=1 \\ x^{n+1} = x^n\cdot x \end{cases} More formally I would write: \begin{cases} h(0) = 1 \\ h(x,y+1) = g(y,h(x,x),x) \end{cases} as $g(x\_1, x\_2, x\_3) = h\left(u^3\_2(x\_1, x\_2, x\_3),u^3\_3(x\_1, x\_2, x\_3)\right)$ and provided $h(x,y) = x \cdot y$ is primitive recursive. Is my proof acceptable? Am I correct, am I missing something or am I doing anything wrong?
### Counting in the general case The problem you are interested in is known as #SAT, or model counting. In a sense, it is the classical #P-complete problem. Model counting is hard, even for $2$-SAT! Not surprisingly, the exact methods can only handle instances with around hundreds of variables. Approximate methods exist too, and they might be able to handle instances with around 1000 variables. Exact counting methods are often based on DPLL-style exhaustive search or some sort of knowledge compilation. The approximate methods are usually categorized as methods that give fast estimates without any guarantees and methods that provide lower or upper bounds with a correctness guarantee. There are also other methods that might not fit the categories, such as discovering backdoors, or methods that insist on certain structural properties to hold on the formulas (or their constraint graph). There are practical implementations out there. Some exact model counters are CDP, Relsat, Cachet, sharpSAT, and c2d. The sort of main techniques used by the exact solvers are partial counts, component analysis (of the underying constraint graph), formula and component caching, and smart reasoning at each node. Another method based on knowledge compilation converts the input CNF formula into another logical form. From this form, the model count can be deduced easily (polynomial time in the size of the newly produced formula). For example, one might convert the formula to a binary decision diagram (BDD). One could then traverse the BDD from the "1" leaf back to the root. Or for another example, the c2d employs a compiler that turns CNF formulas into deterministic decomposable negation normal form (d-DNNF). If your instances get larger or you don't care about being exact, approximate methods exist too. With approximate methods, we care about and consider the quality of the estimate and the correctness confidence associated with the estimate reported by our algorithm. One approach by Wei and Selman [2] uses MCMC sampling to compute an approximation of the true model count for the input formula. The method is based on the fact that if one can sample (near-)uniformly from the set of solution of a formula $\phi$, then one can compute a good estimate of the number of solutions of $\phi$. Gogate and Dechter [3] use a model counting technique known as SampleMinisat. It's based on sampling from the backtrack-free search space of a boolean formula. The technique builds on the idea of importance re-sampling, using DPLL-based SAT solvers to construct the backtrack-free search space. This might be done either completely or up to an approximation. Sampling for estimates with guarantees is also possible. Building on [2], Gomes et al. [4] showed that using sampling with a modified randomized strategy, one can get provable lower bounds on the total model count with high probabilistic correctness guarantees. There is also work that builds on belief propagation (BP). See Kroc et al. [5] and the BPCount they introduce. In the same paper, the authors give a second method called MiniCount, for providing upper bounds on the model count. There's also a statistical framework which allows one to compute upper bounds under certain statistical assumptions. ### Algorithms for #2-SAT and #3-SAT If you restrict your attention to #2-SAT or #3-SAT, there are algorithms that run in $O(1.3247^n)$ and $O(1.6894^n)$ for these problems respectively [1]. There are slight improvements for these algorithms. 
For example, Kutzkov [6] improved upon the upper bound of [1] for #3-SAT with an algorithm running in time $O(1.6423^n)$. As is in the nature of the problem, if you want to solve instances in practice, a lot depends on the size and structure of your instances. The more you know, the more capable you are in choosing the right method. --- [1] [Vilhelm Dahllöf, Peter Jonsson, and Magnus Wahlström. Counting Satisfying Assignments in 2-SAT and 3-SAT. In Proceedings of the 8th Annual International Computing and Combinatorics Conference (COCOON-2002), 535-543, 2002.](http://cs5824.userapi.com/u11728334/docs/38f6e8064f19/Oscar_H_Ibarra_Computing_and_Combinatorics_8_co.pdf#page=547) [2] [W. Wei, and B. Selman. A New Approach to Model Counting. In Proceedings of SAT05: 8th International Conference on Theory and Applications of Satisfiability Testing, volume 3569 of Lecture Notes in Computer Science, 324-339, 2005.](http://www.cs.cornell.edu/selman/papers/pdf/05.sat.new-approach-model-counting.pdf) [3] [R. Gogate, and R. Dechter. Approximate Counting by Sampling the Backtrack-free Search Space. In Proceedings of AAAI-07: 22nd National Conference on Artificial Intelligence, 198–203, Vancouver, 2007.](http://sami.ics.uci.edu/papers/gogate_2.pdf) [4] [C. P. Gomes, J. Hoffmann, A. Sabharwal, and B. Selman. From Sampling to Model Counting. In Proceedings of IJCAI-07: 20th International Joint Conference on Artificial Intelligence, 2293–2299, 2007.](http://www.ijcai.org/papers07/Papers/IJCAI07-369.pdf) [5] [L. Kroc, A. Sabharwal, and B. Selman. Leveraging Belief Propagation, Backtrack Search, and Statistics for Model Counting. In CPAIOR-08: 5th International Conference on Integration of AI and OR Techniques in Constraint Programming, volume 5015 of Lecture Notes in Computer Science, 127–141, 2008.](http://www.cs.cornell.edu/~kroc/pub/bpminicountCPAIOR08slides.pdf) [6] [K. Kutzkov. New upper bound for the #3-SAT problem. Information Processing Letters 105(1), 1-5, 2007.](http://www.cc.ntut.edu.tw/~cmliu/Alg/NTUT_Alg_S08g/ALG08-papers/K07.pdf)
These two seem very similar and have almost an identical structure. What's the difference? What are the time complexities for different operations of each?
**Summary** ``` Type BST (*) Heap Insert average log(n) 1 Insert worst log(n) log(n) or n (***) Find any worst log(n) n Find max worst 1 (**) 1 Create worst n log(n) n Delete worst log(n) log(n) ``` All average times on this table are the same as their worst times except for Insert. * `*`: everywhere in this answer, BST == Balanced BST, since unbalanced sucks asymptotically * `**`: using a trivial modification explained in this answer * `***`: `log(n)` for pointer tree heap, `n` for dynamic array heap **Advantages of binary heap over a BST** * average time insertion into a binary heap is `O(1)`, for BST is `O(log(n))`. **This** is the killer feature of heaps. There are also other heaps which reach `O(1)` amortized (stronger) like the [Fibonacci Heap](https://en.wikipedia.org/wiki/Fibonacci_heap), and even worst case, like the [Brodal queue](https://en.wikipedia.org/wiki/Brodal_queue), although they may not be practical because of non-asymptotic performance: <https://stackoverflow.com/questions/30782636/are-fibonacci-heaps-or-brodal-queues-used-in-practice-anywhere> * binary heaps can be efficiently implemented on top of either [dynamic arrays](https://en.wikipedia.org/wiki/Dynamic_array) or pointer-based trees, BST only pointer-based trees. So for the heap we can choose the more space efficient array implementation, if we can afford occasional resize latencies. * binary heap creation [is `O(n)` worst case](https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap), `O(n log(n))` for BST. **Advantage of BST over binary heap** * search for arbitrary elements is `O(log(n))`. **This** is the killer feature of BSTs. For heap, it is `O(n)` in general, except for the largest element which is `O(1)`. **"False" advantage of heap over BST** * heap is `O(1)` to find max, BST `O(log(n))`. This is a common misconception, because it is trivial to modify a BST to keep track of the largest element, and update it whenever that element could be changed: on insertion of a larger one swap, on removal find the second largest. <https://stackoverflow.com/questions/7878622/can-we-use-binary-search-tree-to-simulate-heap-operation> (mentioned [by Yeo](https://stackoverflow.com/a/27074221/895245)). Actually, this is a *limitation* of heaps compared to BSTs: the *only* efficient search is that for the largest element. **Average binary heap insert is `O(1)`** Sources: * Paper: <http://i.stanford.edu/pub/cstr/reports/cs/tr/74/460/CS-TR-74-460.pdf> * WSU slides: [https://web.archive.org/web/20161109132222/http://www.eecs.wsu.edu/~holder/courses/CptS223/spr09/slides/heaps.pdf](https://web.archive.org/web/20161109132222/http://www.eecs.wsu.edu/%7Eholder/courses/CptS223/spr09/slides/heaps.pdf) Intuitive argument: * bottom tree levels have exponentially more elements than top levels, so new elements are almost certain to go at the bottom * heap insertion [starts from the bottom](https://en.wikipedia.org/wiki/Binary_heap#Insert), BST must start from the top In a binary heap, increasing the value at a given index is also `O(1)` for the same reason. But if you want to do that, it is likely that you will want to keep an extra index up-to-date on heap operations <https://stackoverflow.com/questions/17009056/how-to-implement-ologn-decrease-key-operation-for-min-heap-based-priority-queu> e.g. for Dijkstra. Possible at no extra time cost. 
**GCC C++ standard library insert benchmark on real hardware** I benchmarked the C++ `std::set` ([Red-black tree BST](https://stackoverflow.com/questions/2558153/what-is-the-underlying-data-structure-of-a-stl-set-in-c/51944661#51944661)) and `std::priority_queue` ([dynamic array heap](https://stackoverflow.com/questions/11266360/when-should-i-use-make-heap-vs-priority-queue/51945521#51945521)) insert to see if I was right about the insert times, and this is what I got: [![enter image description here](https://i.stack.imgur.com/2Kcl0.png)](https://i.stack.imgur.com/2Kcl0.png) * [benchmark code](https://github.com/cirosantilli/linux-kernel-module-cheat/blob/52a203a1e22de00d463be273d47715059344a94b/userland/cpp/bst_vs_heap_vs_hashmap.cpp) * [plot script](https://github.com/cirosantilli/linux-kernel-module-cheat/blob/52a203a1e22de00d463be273d47715059344a94b/bst-vs-heap-vs-hashmap.gnuplot) * [plot data](https://github.com/cirosantilli/media/blob/f5e3457835746c2a319664160a897ed264e16622/data/bst_vs_heap_vs_hashmap.dat) * tested on Ubuntu 19.04, GCC 8.3.0 in a Lenovo ThinkPad P51 laptop with CPU: Intel Core i7-7820HQ CPU (4 cores / 8 threads, 2.90 GHz base, 8 MB cache), RAM: 2x Samsung M471A2K43BB1-CRC (2x 16GiB, 2400 Mbps), SSD: Samsung MZVLB512HAJQ-000L7 (512GB, 3,000 MB/s) So clearly: * heap insert time is basically constant. We can clearly see dynamic array resize points. Since we are averaging every 10k inserts [to be able to see anything at all above system noise](https://stackoverflow.com/questions/51952471/why-do-i-get-a-constant-instead-of-logarithmic-curve-for-an-insert-time-benchmar/51953081#51953081), those peaks are in fact about 10k times larger than shown! The zoomed graph excludes essentially only the array resize points, and shows that almost all inserts fall under 25 nanoseconds. * BST is logarithmic. All inserts are much slower than the average heap insert. * BST vs hashmap detailed analysis at: <https://stackoverflow.com/questions/18414579/what-data-structure-is-inside-stdmap-in-c/51945119#51945119> **GCC C++ standard library insert benchmark on gem5** [gem5](http://gem5.org/) is a full system simulator, and therefore provides an infinitely accurate clock with with `m5 dumpstats`. So I tried to use it to estimate timings for individual inserts. [![enter image description here](https://i.stack.imgur.com/reK3u.png)](https://i.stack.imgur.com/reK3u.png) Interpretation: * heap is still constant, but now we see in more detail that there are a few lines, and each higher line is more sparse. This must correspond to memory access latencies are done for higher and higher inserts. * TODO I can't really interpret the BST fully one as it does not look so logarithmic and somewhat more constant. With this greater detail however we can see can also see a few distinct lines, but I'm not sure what they represent: I would expect the bottom line to be thinner, since we insert top bottom? Benchmarked with this [Buildroot setup](https://github.com/cirosantilli/linux-kernel-module-cheat/tree/7ccc1d3a8fa02967422cd2d25fe08e23d060db95#bst-vs-heap) on an aarch64 [HPI CPU](https://github.com/cirosantilli/linux-kernel-module-cheat/tree/7ccc1d3a8fa02967422cd2d25fe08e23d060db95#gem5-run-benchmark). **BST cannot be efficiently implemented on an array** Heap operations only need to bubble up or down a single tree branch, so `O(log(n))` worst case swaps, `O(1)` average. 
Keeping a BST balanced requires tree rotations, which can change the top element for another one, and would require moving the entire array around (`O(n)`). **Heaps can be efficiently implemented on an array** Parent and children indexes can be computed from the current index [as shown here](http://web.archive.org/web/20180819074303/https://www.geeksforgeeks.org/array-representation-of-binary-heap/). There are no balancing operations like BST. Delete min is the most worrying operation as it has to be top down. But it can always be done by "percolating down" a single branch of the heap [as explained here](https://en.wikipedia.org/w/index.php?title=Binary_heap&oldid=849465817#Extract). This leads to an O(log(n)) worst case, since the heap is always well balanced. If you are inserting a single node for every one you remove, then you lose the advantage of the asymptotic O(1) average insert that heaps provide as the delete would dominate, and you might as well use a BST. Dijkstra however updates nodes several times for each removal, so we are fine. **Dynamic array heaps vs pointer tree heaps** Heaps can be efficiently implemented on top of pointer heaps: <https://stackoverflow.com/questions/19720438/is-it-possible-to-make-efficient-pointer-based-binary-heap-implementations> The dynamic array implementation is more space efficient. Suppose that each heap element contains just a pointer to a `struct`: * the tree implementation must store three pointers for each element: parent, left child and right child. So the memory usage is always `4n` (3 tree pointers + 1 `struct` pointer). Tree BSTs would also need further balancing information, e.g. black-red-ness. * the dynamic array implementation can be of size `2n` just after a doubling. So on average it is going to be `1.5n`. On the other hand, the tree heap has better worst case insert, because copying the backing dynamic array to double its size takes `O(n)` worst case, while the tree heap just does new small allocations for each node. Still, the backing array doubling is `O(1)` amortized, so it comes down to a maximum latency consideration. [Mentioned here](https://stackoverflow.com/a/41338070/895245). **Philosophy** * BSTs maintain a global property between a parent and all descendants (left smaller, right bigger). The top node of a BST is the middle element, which requires global knowledge to maintain (knowing how many smaller and larger elements are there). This global property is more expensive to maintain (log n insert), but gives more powerful searches (log n search). * Heaps maintain a local property between parent and direct children (parent > children). The top note of a heap is the big element, which only requires local knowledge to maintain (knowing your parent). Comparing BST vs Heap vs Hashmap: * BST: can either be either a reasonable: + unordered set (a structure that determines if an element was previously inserted or not). But hashmap tends to be better due to O(1) amortized insert. + sorting machine. But heap is generally better at that, which is why [heapsort](https://en.wikipedia.org/wiki/Heapsort) is much more widely known than [tree sort](https://en.wikipedia.org/wiki/Tree_sort) * heap: is just a sorting machine. Cannot be an efficient unordered set, because you can only check for the smallest/largest element fast. * hash map: can only be an unordered set, not an efficient sorting machine, because the hashing mixes up any ordering. 
**Doubly-linked list** A doubly linked list can be seen as subset of the heap where first item has greatest priority, so let's compare them here as well: * insertion: + position: - doubly linked list: the inserted item must be either the first or last, as we only have pointers to those elements. - binary heap: the inserted item can end up in any position. Less restrictive than linked list. + time: - doubly linked list: `O(1)` worst case since we have pointers to the items, and the update is really simple - binary heap: `O(1)` average, thus worse than linked list. Tradeoff for having more general insertion position. * search: `O(n)` for both An use case for this is when the key of the heap is the current timestamp: in that case, new entries will always go to the beginning of the list. So we can even forget the exact timestamp altogether, and just keep the position in the list as the priority. This can be used to implement an [LRU cache](https://stackoverflow.com/a/34206517/895245). Just like [for heap applications like Dijkstra](https://stackoverflow.com/questions/14252582/how-can-i-use-binary-heap-in-the-dijkstra-algorithm), you will want to keep an additional hashmap from the key to the corresponding node of the list, to find which node to update quickly. **Comparison of different Balanced BST** Although the asymptotic insert and find times for all data structures that are commonly classified as "Balanced BSTs" that I've seen so far is the same, different BBSTs do have different trade-offs. I haven't fully studied this yet, but it would be good to summarize these trade-offs here: * [Red-black tree](https://en.wikipedia.org/wiki/Red%E2%80%93black_tree). Appears to be the most commonly used BBST as of 2019, e.g. it is the one used by the GCC 8.3.0 C++ implementation * [AVL tree](https://en.wikipedia.org/wiki/AVL_tree). Appears to be a bit more balanced than BST, so it could be better for find latency, at the cost of slightly more expensive finds. Wiki summarizes: "AVL trees are often compared with red–black trees because both support the same set of operations and take [the same] time for the basic operations. For lookup-intensive applications, AVL trees are faster than red–black trees because they are more strictly balanced. Similar to red–black trees, AVL trees are height-balanced. Both are, in general, neither weight-balanced nor mu-balanced for any mu < 1/2; that is, sibling nodes can have hugely differing numbers of descendants." * [WAVL](https://en.wikipedia.org/wiki/WAVL_tree). The [original paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.380.9360&rep=rep1&type=pdf) mentions advantages of that version in terms of bounds on rebalancing and rotation operations. **See also** Similar question: <https://stackoverflow.com/questions/6147242/heap-vs-binary-search-tree-bst>
As far as I know (despite some variations which provide empirical average-case improvements) quick-sort is worst case $O(n^2)$ with the original [Hoare partition scheme](https://en.wikipedia.org/wiki/Quicksort#Hoare_partition_scheme) having a particularly bad behavior for already sorted lists, reverse-sorted, repeated element lists. On the other hand a B-Tree has $O(\log n)$ insertions, meaning worst-case $O(n\log n)$ to process an array of $n$ elements. Also an easy optimization to memoize the memory-address of the lowest and highest nodes (would make it possible to process sorted / reverse-sorted / repeated-element lists in $O(n)$). While there are more favored sorting algorithms than quicksort now (e.g. timsort) what originally favored its use? Is it the susceptibility to parallelization (also in place swaps, lower memory complexity)? Otherwise why not just use a B-tree?
A B-tree has one significant disadvantage on modern machines with deep cache hierarchies: it depends on pointers. As the size grows, each access has a greater and greater risk of causing a cache miss or TLB miss. Effectively you get a constant factor of roughly z\*x, where x is the number of cache/TLB misses per access (L1-TLB misses are typically on the order of size of tree / total cache size) and z is approximately the access time of the smallest cache or memory level that can hold the entire tree. On the other hand, the "average case" quicksort streams memory at maximum prefetcher speed. The only drawback is that the average case also causes a stream to be written back, and after a few partitions the entire active set sits in caches and gets streamed even faster. Both algorithms suffer heavily from branch mispredictions, but quicksort just needs to back up a bit, while a B-tree additionally needs to read in a new address to fetch from, since it has a data dependency that quicksort doesn't. Few algorithms are implemented as their pure textbook versions. Nearly all have heuristics to fix their worst problems; Tim-sort is the exception only in the sense that it is built entirely out of such heuristics. Merge-sort and quicksort implementations often check for already-sorted ranges, just like Tim-sort. Both also switch to an insertion sort for small sets, typically fewer than 16 elements; Tim-sort is built up from these small sorted runs. The C++ `std::sort` is a quicksort hybrid with insertion sort, with an additional fallback for worst-case behaviour: if the partitioning depth exceeds twice the expected depth, it switches to heap-sort (the introsort scheme). The original quicksort used the first element of the array as the pivot; this was quickly abandoned for a (pseudo)random element, typically the middle one. Some implementations changed to median-of-three (of random elements) to get a better pivot; more recently a median-of-5-medians (of all elements) has been used, and the last I saw, in a presentation by Alexandrescu, was a median-of-3-medians (of all elements) to get a pivot close to the actual median (within 1/3 or 1/5 of the span). A sketch of the median-of-three plus insertion-sort-cutoff idea follows below.
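A minimal sketch (my own, not taken from any standard library) of those two heuristics: median-of-three pivot selection and an insertion-sort cutoff for small ranges. The cutoff value 16 matches the typical threshold mentioned above.

```python
# Sketch (my own): quicksort with median-of-three pivot and insertion-sort cutoff.
def insertion_sort(a, lo, hi):
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def quicksort(a, lo=0, hi=None, cutoff=16):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if hi - lo < cutoff:
            insertion_sort(a, lo, hi)          # small range: insertion sort wins
            return
        mid = (lo + hi) // 2
        # median-of-three: order a[lo], a[mid], a[hi]; the middle one becomes the pivot
        if a[mid] < a[lo]: a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]:  a[lo], a[hi] = a[hi], a[lo]
        if a[hi] < a[mid]: a[mid], a[hi] = a[hi], a[mid]
        pivot = a[mid]
        i, j = lo, hi
        while i <= j:                          # Hoare-style partition around the pivot value
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        quicksort(a, lo, j, cutoff)            # recurse on one half...
        lo = i                                 # ...and loop on the other (tail-call elimination)

import random
data = [random.randint(0, 99) for _ in range(50)]
quicksort(data)
print(data == sorted(data))   # True
```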
I'm reading about [SVM](https://en.wikipedia.org/wiki/Support_vector_machine) and I learned that we use a kernel function so the data become linearly separable in the high dimensional (vector?) space. But then I also learned that they use the soft-margin idea. But my question is why to use a soft-margin if the data is going to be linearly separable anyway in the high space? Or does that mean that even after mapping with the kernel it doesn't necessarily mean that it will become linearly separable?
You are conflating two different things. The classification algorithm used by SVM is always linear (e.g. a hyperplane) in some feature space induced by a kernel. Hard margin SVM, which is typically the first example you encounter when learning SVM, requires linearly separable data in feature space or there is no solution to the training problem. Typically, this first example works in input space but the same can be done in any feature space of your choosing. > > But my question is why to use a soft-margin if the data is going to be linearly separable anyway in the high space? > > > Soft-margin SVM does *not* require data to be separable, not even in feature space. This is the key difference between hard and soft margin. Soft-margin SVM allows instances to fall within the margin and even on the wrong side of the separating hyperplane, but penalizes these instances using hinge loss. > > Or does that mean that even after mapping with the kernel it doesn't necessarily mean that it will become linearly separable? > > > Using a nonlinear kernel gives no guarantee that a data set becomes linearly separable in the induced feature space, and it does not need to. The reason we use kernels is to map the data from input space onto a higher dimensional space, in which a (higher dimensional) hyperplane will be *better* at separating the data. That is all. If data is perfectly separable in feature space, your training accuracy is $1$ by definition. This is still rare even when using kernels. You *can* find kernels that make data linearly separable, but this usually requires very complex kernels which lead to results that generalize poorly. An example of this would be an RBF kernel with very high $\gamma$, which basically yields the unit matrix as kernel matrix (this is perfectly separable but will generalize badly on unseen data).
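A small scikit-learn sketch (my own illustration; the data set and parameter values are arbitrary choices) of the two points above: the soft-margin penalty C trades off margin violations against fit, and a very large $\gamma$ can make the training set separable while generalizing poorly.

```python
# Sketch (my own): soft-margin C trade-off and the high-gamma overfitting effect.
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.3, random_state=0)   # not linearly separable

for C in (0.1, 1, 1000):            # larger C approximates a hard margin
    rbf = SVC(kernel="rbf", gamma="scale", C=C)
    print(f"C={C:>6}: CV accuracy = {cross_val_score(rbf, X, y, cv=5).mean():.3f}")

# Very large gamma: (near-)separable in feature space, but poor generalization.
overfit = SVC(kernel="rbf", gamma=1000, C=1000)
print("high-gamma train accuracy:", overfit.fit(X, y).score(X, y))
print("high-gamma CV accuracy   :", cross_val_score(overfit, X, y, cv=5).mean())
```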
I'm studying and trying to implement convolutional neural networks, but I suppose this question applies to multilayer perceptrons in general. The output neurons in my network represent the activation of each class: the most active neuron corresponds to the predicted class for a given input. To consider a cross-entropy cost for training, I'm adding a softmax layer at the end of the network, so that each neuron's activation value is interpreted as a probability value. My question is: should the neurons in the output layer apply a non-linear function to the input? My intuition is that it is not necessary: * if the input to the $i$-th output neuron is the dot product $x^T\theta\_i$ between a vector $x$ (coming from the previous layer) and the weights $\theta\_i$ for that neuron, * and if I employ a monotonic non-linear function like the sigmoid or the ReLU * then the larger activation output will still correspond to the largest $x^T\theta\_i$, so from this point of view the non-linear function would not change the prediction. Is something wrong with this interpretation? Are there some training factors that I'm overlooking which make the output non-linearity necessary? And if I'm right, would anything change if instead of using the sigmoid function I use the ReLU $$\max(0,x^T\theta\_i)$$ function, which is not strictly monotonic? **EDIT** With reference to Karel's answer, whose answer basically was "it depends", here is a more detailed description of my network and doubt: Suppose I have N hidden layers, and my output layer is just a softmax layer over a set of neurons representing classes (so my expected output is the probability that the input data belongs to each class). Assuming the first N-1 layers have nonlinear neurons, what is the difference between using nonlinear vs linear neurons in the N-th hidden layer? ---
You should not use a non-linearity for the last layer before the softmax classification. The ReLU non-linearity (used now almost exclusively) will in this case simply throw away information without adding any additional benefit. You can look at the [caffe implementation](https://github.com/BVLC/caffe/blob/master/models/bvlc_alexnet/deploy.prototxt) of the well-known AlexNet for a reference of what's done in practice.
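As a minimal sketch of what that looks like in a Keras-style model (layer sizes and the use of `tf.keras` are assumptions for illustration), the hidden layers keep their ReLU, while the layer feeding the softmax outputs raw logits:

```python
# Sketch: the layer feeding the softmax outputs raw logits (no ReLU on it).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),  # hidden layers keep ReLU
    tf.keras.layers.Dense(10),   # logits: no non-linearity here
    tf.keras.layers.Softmax(),   # softmax turns the logits into class probabilities
])
```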
I have a model for predicting multiple count variables (multivariate count regression) given some covariates. Are there any publicly available datasets I could experiment with?
If you have money to pay for it, you can get exactly this kind of data from HLDI: <http://www.iihs.org/research/hldi/composite> I don't know how much they charge or what their T&Cs are, but I know that it's good data and it's very comprehensive :-)
Consider the recurrence $\qquad\displaystyle T(n) = \sqrt{n} \cdot T\bigl(\sqrt{n}\bigr) + c\,n$ for $n \gt 2$ with some positive constant $c$, and $T(2) = 1$. I know the Master theorem for solving recurrences, but I'm not sure as to how we could solve this relation using it. How do you approach the square root parameter?
If you write $n = 2^m$ (i.e. $m = \log n$) and set $S(m) = T(2^m)$, you get $S(m) = 2^{m/2}\cdot S(m/2) + c\cdot 2^m$. Now the recursion tree has height of order $O(\log m)$, and it is not hard to see that the total work on each level is $O(2^m)$, so the total running time is in $O((\log m) \cdot 2^m)$, which gives $O(n \cdot \log \log n)$ in terms of $n$. In general, when you see $\sqrt n$ or $n^{a/b}$ with $a<b$, it is a good idea to try a logarithmic substitution. P.S.: Of course a full proof should include more details, which I skipped.
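As a quick sanity check, a small sketch like the following evaluates the recurrence numerically (with $c=1$ and $\sqrt n$ rounded to an integer, both assumptions made just for the illustration) and compares it against $n\log\log n$; the ratio stays bounded as $n$ grows, consistent with $\Theta(n\log\log n)$.

```python
# Sketch: numerically compare T(n) = sqrt(n)*T(sqrt(n)) + n (c = 1) with n*log(log(n)).
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n <= 2:
        return 1.0
    r = math.sqrt(n)
    return r * T(round(r)) + n   # rounding sqrt(n) is an assumption for the sketch

for k in range(2, 7):
    n = 2 ** (2 ** k)            # n of the form 2^(2^k), so repeated square roots stay exact
    print(n, T(n) / (n * math.log(math.log(n))))
```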
A Boolean function $f : \{0, 1\}^n → \{0, 1\}$ is called monotone if changing any of the $n$ input bits $x\_1, \ldots , x\_n$ from $0$ to $1$ can only ever change the output $f(x\_1, \ldots ,x\_n)$ from $0$ to $1$, never from $1$ to $0$. I know how to do a simple proof by exhaustion for $f : \{0, 1\}^1$ and $f : \{0, 1\}^2$, but I do not know how to prove the following statement for $f:\{0,1\}^n$: any monotone Boolean function is computable by a circuit containing only AND and OR gates.
You can prove this by induction on $n$. We will construct a formula with constants, and then you can eliminate the constants (unless the function itself is constant), if you wish, using the simplification rules $x \land 0 = 0$, $x \land 1 = x$, $x \lor 0 = x$, $x \lor 1 = 1$. When $n = 0$, the function is just a constant. Given a monotone function $f(x\_1,\ldots,x\_n)$ of $n$ variables, we can always construct two functions of the first $n-1$ variables, $f\_0(x\_1,\ldots,x\_{n-1}) = f(x\_1,\ldots,x\_{n-1},0)$ and $f\_1(x\_1,\ldots,x\_{n-1}) = f(x\_1,\ldots,x\_{n-1},1)$. Both $f\_0$ and $f\_1$ are monotone (they are restrictions of the monotone function $f$), so by the inductive hypothesis there are well-formed AND/OR formulas representing $f\_0$ and $f\_1$, respectively. I claim that $$ f = f\_0 \lor (x\_n \land f\_1), $$ because substituting $x\_n = 0$ we just get $f\_0$, and substituting $x\_n = 1$ we get $f\_0 \lor f\_1$, which equals $f\_1$ since monotonicity of $f$ guarantees that whenever $f(\vec{x},0)=f\_0(\vec{x})$ is true, so is $f(\vec{x},1)=f\_1(\vec{x})$. The right-hand side uses only AND and OR gates (besides the formulas for $f\_0$ and $f\_1$), which finishes the inductive step.
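As a sketch of how mechanical this induction is, the following applies it to a monotone function given as a Python callable and prints a formula over AND/OR and the constants 0/1 (constant elimination is omitted for brevity):

```python
# Sketch: build an AND/OR formula (with constants) for a monotone Boolean
# function given as a callable on n bits, following the induction above.
def monotone_formula(f, n):
    if n == 0:
        return "1" if f(()) else "0"
    # f0 fixes the last variable to 0, f1 fixes it to 1
    f0 = lambda xs: f(tuple(xs) + (0,))
    f1 = lambda xs: f(tuple(xs) + (1,))
    left = monotone_formula(f0, n - 1)
    right = monotone_formula(f1, n - 1)
    return f"({left} OR (x{n} AND {right}))"

# Example: majority of three bits, which is monotone.
maj3 = lambda xs: int(sum(xs) >= 2)
print(monotone_formula(maj3, 3))
```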
[R-squared (coefficient of determination)](https://en.wikipedia.org/wiki/Coefficient_of_determination) is usually used to assess the goodness of fit of a regression model to the data. Here, I provide two simple datasets that I think their best-fit lines are equally good, but they got two different r-squared values. ``` x1 = [1,2,3] y1 = [1,2,3.5] x2 = [1,2,3] y2 = [2,4,6.5] ``` The best fit line to `x1,y1` got `r2=0.9868`, and the best fit line to `x2,y2` got `r2=0.9959`. While the r-squared values are different for these two best-fit lines, the residuals for different points are exactly the same for them: `[-0.083,0.167,-0.083]`. I think these two lines are equally good in fitting their respective data, while they get different r-squared values. What is wrong with my intuition about coefficient of determination.
WLOG you can focus on imbalance in a single factor, rather than a more nuanced concept of "data sparsity", or small cell counts. In statistical analyses *not* focused on learning, we are faced with the issue of providing adequate inference while controlling for one or more effects through adjustment, matching, or weighting. All of these have similar power and yield similar estimates to propensity score matching. Propensity score matching will balance the covariates in the analysis set. They all end up being "the same" in terms of reducing bias, maintaining efficiency because they block confounding effects. With imbalanced data, you may naively believe that your data are sufficiently large, but with a sparse number of people having the rarer condition: variance inflation diminishes power substantially, and it can be difficult to "control" for effects when those effects are strongly associated with the predictor and outcome. Therefore, at least in regression (but I suspect in all circumstances), the only problem with imbalanced data is that you effectively have *smaller sample size* than the $N$ might represent. If any method is suitable for the number of people in the rarer class, there should be no issue if their proportion membership is imbalanced.
I was trying out various question-generation projects available on GitHub, namely NQG, question-generation, and a lot of others, but I don't see good results from them: either the question formation is very bad or the generated questions are off-topic most of the time. I did find one project that actually generates good questions: [bloomsburyai/question-generation](https://github.com/bloomsburyai/question-generation). It accepts a context (paragraph) and an answer and generates the question. I am trying to validate the generated questions by passing each one, along with the paragraph, to AllenNLP's reading-comprehension demo ([Answer generation for a question](https://demo.allennlp.org/reading-comprehension)). Then, to make sure the generated answers are correct for the generated questions, I calculate sentence embeddings for both answers (the AllenNLP answer and the potential answer) using the [Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/2), compute the cosine distance to measure how similar the answers are, and keep the question with the smallest cosine distance. I wanted to know whether this is the best approach, or is there a state-of-the-art implementation for question generation? Please suggest.
For the first part of your question, as to which question-generation approaches are good: neural question generation is pretty popular (as of 2018/2019) among NLP enthusiasts, but not all systems are good enough to be used directly in production. However, here are a few recent ones which reported state-of-the-art performance in 2019 and have shared their code too: 1. <https://github.com/ZhangShiyue/QGforQA> 2. <https://github.com/PrekshaNema25/RefNet-QG> This one is from 2020 (now that NLP performance has improved with Transformers): 3. <https://github.com/patil-suraj/question_generation> Besides, if you want more control, so that you can understand and fix wrongly generated questions, I would suggest the more traditional rule-based approaches like the ones below, which are more reliable than the neural ones above and generate a larger number of question-answer pairs than the first two: 1. [http://www.cs.cmu.edu/~ark/mheilman/questions/](http://www.cs.cmu.edu/%7Eark/mheilman/questions/) 2. <https://bitbucket.org/kaustubhdhole/syn-qg/src/master/> To answer your second question: if your QG model is generating an answer, then it makes sense to use cosine similarity. Assuming your question generation is at the sentence level, you will mostly have short answer spans, and hence averaging GloVe or Paragram word vectors might give you better results than the Universal Sentence Encoder.
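As a rough sketch of that comparison step (the tiny 3-dimensional "embeddings" below are made up; in practice you would load real GloVe/Paragram vectors or an encoder), averaging word vectors and taking a cosine similarity looks like this:

```python
# Sketch: score two answer strings by cosine similarity of averaged word vectors.
# The tiny `vectors` dict is a stand-in for real pretrained embeddings.
import numpy as np

vectors = {                      # hypothetical 3-d word vectors, for illustration only
    "paris": np.array([0.9, 0.1, 0.0]),
    "france": np.array([0.8, 0.2, 0.1]),
    "capital": np.array([0.5, 0.5, 0.2]),
    "of": np.array([0.1, 0.1, 0.1]),
}

def embed(sentence):
    words = [vectors[w] for w in sentence.lower().split() if w in vectors]
    return np.mean(words, axis=0)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embed("Paris"), embed("capital of France")))
```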
Let A be a fixed array of size n. Q(i,j,k) is the number of elements from A[i] to A[j] which are less than k. Currently I am using a segment tree with each node containing a sorted array of its leaf elements. This answers the query in O((log n)^2) with O(n\*log n) space and pre-processing. Is there a way to answer this in O(log n) while keeping the pre-processing and space complexity the same?
This is essentially the 2D orthogonal range counting problem from computational geometry, which has attracted a lot of attention over the years. [Fractional cascading](http://en.wikipedia.org/wiki/Fractional_cascading) definitely could be applied to reduce the query time to logarithmic while preserving the other big-O bounds. In a nutshell, the idea is to make auxiliary data structures that relate the positions of elements in each parent array to positions in the child array, so that, given the results of a search in the parent, searching the child for the same key is constant-time.
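To make the bridging idea concrete, here is a small sketch for a single parent node and its two children in such a tree (the rest of the tree is ignored for brevity): one binary search is done in the parent's sorted array, and precomputed "bridge" arrays then give the corresponding positions in the children in constant time.

```python
# Sketch: fractional-cascading-style bridges between a parent's sorted array
# and its children's sorted arrays (the children partition the parent's elements).
from bisect import bisect_left

left_child  = sorted([1, 4, 9])
right_child = sorted([2, 6, 7])
parent      = sorted(left_child + right_child)

# bridge[i] = position of parent[i] in the child; one extra entry for "past the end"
bridge_left  = [bisect_left(left_child,  v) for v in parent] + [len(left_child)]
bridge_right = [bisect_left(right_child, v) for v in parent] + [len(right_child)]

def count_less_than(k):
    """Count elements < k in each child using ONE binary search (in the parent)."""
    i = bisect_left(parent, k)                # the only O(log n) step
    return bridge_left[i], bridge_right[i]    # O(1) lookups via the bridges

print(count_less_than(5))   # -> (2, 1): {1, 4} on the left, {2} on the right
```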
The question is from [DP tiling a 2xN tile with L shaped tiles and 2x1 tiles?](https://cs.stackexchange.com/questions/85611/dp-tiling-a-2xn-tile-with-l-shaped-tiles-and-2x1-tiles). I would like an explanation of this question or of the underlying theory.
* Let $f[m]$ be the number of ways to cover the shape shown below, an $m$ by $2$ rectangle. Our ultimate goal is $f[n]$. ``` ┌───────────┐ 2 │ │ └───────────┘ m ``` * Let $g[m]$ be the number of ways to cover the first shape shown below, an $m$ by $2$ rectangle with an extra 1x1 square at the top-right corner. By symmetry, $g[m]$ is also the number of ways to cover the second shape shown below. ``` m+1 m ┌─────────────┐ ┌───────────┐ 2 │ ┌─┘ 2 │ └─┐ └───────────┘ └─────────────┘ m m+1 ``` To find the recurrence relation, try covering the space at the rightmost boundary of the above shapes in all possible ways. Consider $f[m]$. We have the following 4 ways to cover the rightmost space. ``` ┌─────────┬─┐ ┌──────┬────┐ ┌───────┬───┐ ┌─────────┬─┐ │ │ │ │ ├────┤ │ └─┐ │ │ ┌─┘ │ └─────────┴─┘ └──────┴────┘ └─────────┴─┘ └───────┴───┘ What is left: (m-1)x2 (m-2)x2 (m-2)x2+1 (m-2)x2+1 ``` So, we have $\quad\quad f[m] = f[m - 1] + f[m - 2] + g[m - 2] \cdot 2 $ for $m\ge2$. Consider $g[m]$. We have the following 2 ways to cover the rightmost space of the first shape. ``` m+1 m+1 ┌─────────┬───┐ ┌─────────┬───┐ │ │ ┌─┘ │ └─┬─┘ └─────────┴─┘ └───────────┘ What is left: (m-1)x2 (m-1)x2+1 ``` So we have $\quad\quad g[m] = f[m - 1] + g[m-1]$ for $m\ge1$. Applying the above two recurrence equations, we can compute all $f[m]$ and $g[m]$, in order of increasing $m$, starting from $m=2$, given the initial conditions, $f[0]=1$, $f[1]=1$, $g[0]=0$ and $g[1]=1$. ```python # Python program to compute the first n+1 values of f def show_number_of_ways(n): f = [0] * (n+1) g = [0] * (n+1) f[0] = f[1] = g[1] = 1 g[0] = 0 for m in range(2, n+1): f[m] = f[m - 1] + f[m - 2] + 2 * g[m - 2] g[m] = f[m - 1] + g[m - 1] print(f) show_number_of_ways(10) # [1, 1, 2, 5, 11, 24, 53, 117, 258, 569, 1255] ``` --- Here is a way to derive a simpler recurrence relation that involves $f$ only. [Glorfindel's answer](https://cs.stackexchange.com/a/126239) explains how to compute the number of patterns by "cutting the rightmost elementary block." To recap, there are one elementary block of size $1\times2$, one elementary block of $2\times2$ and two elementary blocks of $n\times2$ for $n\ge3$. Let $f(n)$ be the number of patterns for $2\times n$. we have the following base cases and recurrence relation, $$f(0)=1,\ f(1)=1,\ f(2)=2,$$ $$f(n)=f(n-1)+f(n-2)+2f(n-3)+2f(n-4)+\cdots+2f(0),\text{ for }n\ge3 $$ The above formulas leads to an algorithm that computes $f(n)$ with $O(n^2)$ time-complexity and $O(n)$ space-complexity. We can do better. Replacing $n$ with $n-1$, we have $$f(n-1)=f(n-2)+f(n-3)+2f(n-4)+2f(n-5)+\cdots+2f(0),\text{ for }n\ge4 $$ Subtracting the above two equations, we get $$f(n)-f(n-1)=f(n-1)+f(n-3)$$ So we have for $n\ge4$, $$f(n)=2f(n-1)+f(n-3)\tag{simple}$$ Since $f(3)=5=2f(2)+f(0)$, the above recurrence relation holds for all $n\ge3$. This leads to an algorithm that computes $f(n)$ with $O(n)$ time-complexity and $O(1)$ space-complexity. ```python # Python program to compute the first n+1 values of f def show_number_of_ways(n): f = [0] * (n+1) f[0] = f[1] = 1 f[2] = 2 for i in range(3, n+1): f[i] = 2 * f[i - 1] + f[i - 3] print(f) show_number_of_ways(10) # [1, 1, 2, 5, 11, 24, 53, 117, 258, 569, 1255] ``` --- We can also derive the simple recurrence relation directly from the first two mutual recursive relations between $f$ and $g$. The equality $g[m] = f[m - 1] + g[m-1]$ tells us $f[m-1] = g[m]-g[m-1]$, and, hence, $f[m] = g[m+1]-g[m]$ and $f[m-2] = g[m-1]-g[m-2]$. 
Applying them to eliminate $f$ from $f[m] = f[m - 1] + f[m - 2] + g[m - 2] \cdot 2$, we get $g[m+1]=2g[m]+g[m-2]$. Since $f[m]$ is a linear combination of $g[m+1]$ and $g[m]$, $f$ satisfies the same kind of recurrence relation, i.e., $f[m]=2f[m-1]+f[m-3]$. Checking the valid range of the index $m$, we see that it holds for all $m\ge3$.
Problem statement: Given a graph G(V,E) which is not acyclic and may have negative edge weights (and thus may possibly have negative-length cycles), how does one detect if the graph has a zero-length cycle, and no negative-length cycles? Background information: The question came up when I tried to implement in code the solution to what is called the "tramp steamer" problem: Given a graph G(V,E) in which each node is associated with a cost $c\_i$ and each edge is associated with a time (number of days) $t\_i$, find the cycle with the smallest ratio of cost to time $\frac{\sum\_i(c\_i)}{\sum\_i(t\_i)}$ (i.e. minimal cost/day cycle). One solution is to do a binary search over the range of possible ratios, trying to identify the minimum cost-to-time ratio $\mu$. In each iteration of the binary search in the range $[left...right]$, you "guess" a value of $\mu = \frac{left+right}{2}$ and then run Bellman-Ford: * if a negative cycle exists then the value of $\mu$ is too high; the range is reset to $[left...\mu]$ * if all cycles are positive then the value of $\mu$ is too low; the range is reset to $[\mu...right]$ * if there is a zero-length cycle (with all other cycles being positive), then we have found the best value for $\mu$ and we can stop, returning the zero-length cycle as the answer How exactly is the third case detected? Assume the graph A->B->C->D->{A,B} with two cycles (from D back to A or B) where the cycle B->C->D->B is the optimal one and yields zero length for some selection of $\mu$ for which the other cycle is positive. Suppose we happen to try this particular value of $\mu$ (let's assume it's during the very first iteration because we got lucky). If I am using vertex A as the fixed point from which I run Bellman-Ford on each iteration, it will complete successfully without detecting a negative cycle. But how would the zero-length cycle be identified? Currently I only handle the first 2 cases and my implementation keeps iterating until the $[left...right]$ range becomes too small, so that floating point precision can't handle a smaller range. At that point I detect that $\frac{left+right}{2}$ is equal to one of the range limits (either left or right) and stop the binary search. How would I go about detecting that a particular value of $\mu$ has produced a zero-length cycle?
Adding to the discussion, you can also detect negative or zero-length cycles using Floyd-Warshall in $O(n^3)$ by setting the initial value of the minimum distance of each node to itself as infinity. Also, there is this [problem](https://oj.uz/problem/view/APIO17_merchant) (APIO 2017 Travelling Merchant) that is slightly similar to your problem, with a solution available [here](https://usaco.guide/problems/apio-2017traveling-merchant/solution) that also uses binary search and cycle detection.
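A minimal sketch of that Floyd–Warshall variant (with integer weights; with the binary-searched $\mu$ you would test the diagonal against zero with a small tolerance rather than exactly):

```python
# Sketch: detect negative- and zero-length cycles with Floyd-Warshall.
# dist[i][i] starts at infinity, so afterwards dist[i][i] is the length of the
# shortest cycle through i (infinity if i lies on no cycle).
INF = float("inf")

def cycle_lengths(n, edges):                # edges: list of (u, v, w)
    dist = [[INF] * n for _ in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    diag = [dist[i][i] for i in range(n)]
    has_negative = any(d < 0 for d in diag)
    has_zero = (not has_negative) and any(d == 0 for d in diag)
    return has_negative, has_zero

# Example: cycle 0->1->2->0 with total weight 0, so (False, True) is expected.
print(cycle_lengths(3, [(0, 1, 2), (1, 2, -3), (2, 0, 1)]))
```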
I have a question regarding multilevel page tables. As far as I know, in most 64 bit systems only 48 bits are used in page tables which would allow for 256TB of virtual memory to be addressed. For personal computers, I don't understand how they would ever be able to support that amount of memory being virtualized. If the processes currently running on your computer think they have a total of 256TB available, if they do end up using all that memory, the contents would eventually need to be flushed back to disk. This is not possible since your disk can't possibly hold that much memory. I'm thinking on an analogy to a bank that tells its customers they can each take out loans of 1,000 gold bars. They give their customers a note saying they have 1,000 gold bars worth of funds. Now say that the customers have used their loans to obtain different goods which they want to store in the bank. The bank would not be able to hold all those goods, and the people (processes) wouldn't be able to hold those goods either since once a computer is shut off all the memory not on disk is volatile. In a sense the bank is offering more than it can really hold. For a more practical example, say I have 1GB of disk memory available and my virtual memory system is able to virtualize 3GB of memory. Now say I open 3 text files and start typing stuff into them, eventually reaching that 3GB limit. Now I want to save those files, but I only have 1GB available on disk! Isn't virtual memory unreliable from a process point of view since it's saying "I can store all this stuff for you" but at the end of the day it doesn't have the resources to do so?
Who says that the virtual memory is restricted to mapping the local file devices? When DEC introduced the Alpha, the first mass-market 64-bit processor chip, one idea was to map the whole of any database into VM. I believe that Oracle did/does so. The key to its viability is density of references, so that a good proportion of any realised page is useful. Similar considerations apply to the page tables themselves. You want/need associative tables that tell you **fast** where in real memory every active realised virtual page lives. Anything else is a waste of space. No computer has unlimited amounts of anything. And managing virtual memory needs code: inherently imperfect. Your argument explains why it is better to write applications in C than in Java - all that unreliable JVM memory management taking up space and time. Hang on! JVM memory management is an order of magnitude more reliable than anything you or I might write in an application. Similarly, the simplicity of modelling that a large virtual memory can provide may well lead to *improved* reliability. If it didn't have some such benefits, it wouldn't be there.
Suppose that a certain disease ($D$) has a prevalence of $\dfrac3{1000}$. Also suppose that a certain symptom ($S$) has a prevalence (in the general population, i.e. people with the disease $D$ and people without it [possibly with some other disease, but that is not important here]) of $\dfrac5{1000}$. In a previous research study, it was discovered that the conditional probability $P(S|D) = 30\%$ (the probability of having the symptom $S$, given the disease $D$, is $30\%$). **First question**: Could $P(S|D)$ be interpreted as equivalent to the prevalence of the symptom $S$ in the group of people having the disease $D$? **Second question**: I want to create in R a dataset which shows that: $$P(D|S) = \frac{P(S|D)P(D)} {P(S)}$$ With my fictional data, we can compute $P(D|S)=0.18$, which is interpreted in this way: given a patient with the symptom $S$, the probability that he has the disease $D$ is $18\%$. How to do this? If I simply use the `sample` function, my dataset lacks the information that $P(S|D)=30\%$: ``` symptom <- sample(c("yes","no"), 1000, prob=c(0.005, 0.995), rep=T) disease <- sample(c("yes","no"), 1000, prob=c(0.003, 0.997), rep=T) ``` So my question is: how to create a good dataset, including the conditional probability I desire? *EDIT*: I posted the same question also on stackoverflow.com (<https://stackoverflow.com/questions/7291935/how-to-create-a-dataset-with-conditional-probability>), because, in my opinion, my question pertains to the R language, but also to statistical theory.
You know the following marginal probabilities ``` Symptom Total Yes No Disease Yes a b 0.003 No c d 0.997 Total 0.005 0.995 1.000 ``` and that `a/(a+b) = 0.3` so this becomes ``` Symptom Total Yes No Disease Yes 0.0009 0.0021 0.003 No 0.0041 0.9929 0.997 Total 0.005 0.995 1.000 ``` and indeed `a/(a+c) = 0.18` as you stated. So in R you could code something like ``` diseaserate <- 3/1000 symptomrate <- 5/1000 symptomgivendisease <- 0.3 status <- sample(c("SYDY", "SNDY", "SYDN", "SNDN"), 1000, prob=c(diseaserate * symptomgivendisease, diseaserate * (1-symptomgivendisease), symptomrate - diseaserate * symptomgivendisease, 1 - symptomrate - diseaserate * (1-symptomgivendisease)), rep=TRUE) symptom <- status %in% c("SYDY","SYDN") disease <- status %in% c("SYDY","SNDY") ``` though you should note that 1000 is a small sample when one of the events has a probability of 0.0009 of happening.
[This answer](https://cs.stackexchange.com/a/53190/49261) says: > > We can have uncountable languages only if we allow words of infinite length. > > > So does that mean any (finite / infinite) language, or any (finite / infinite) set of languages, over any (finite / infinite) alphabet will be uncountable if words have infinite length? Let me simplify this by asking specifically about the simplest language of infinite-length strings: > > Is a language with a "finite" number of words, each of which can be of "infinite" length, over a "finite" alphabet, uncountable? > > >
I will answer the question "is there a language which is countable and contains a string of infinite length?" The answer is yes. Consider the symbols $\{0, 1\}$ and the language consisting of strings which do not contain the symbol $1$. The string of infinitely many $0$s and no $1$s is in the language, but there are still countably many strings in the language (match the empty string with the natural number 0, match the string of infinitely many $0$s with the natural number 1, match the string $0$ with 2, the string $00$ with 3, $000$ with 4, and so on.) Now, given an infinite string over this set of symbols it is only semi-decidable if the string belongs to this language. That is, there is no TM which will tell you if an infinite string contains no $1$s, but a TM can tell you if a string does contain a $1$. Since your question seems to indicate that you view the cardinality of a set and the decidability of set membership as being related, you may be interested in apartness relations and, more generally, in constructive mathematics: <https://en.wikipedia.org/wiki/Apartness_relation>
After reading this [article](https://www.analyticsvidhya.com/blog/2016/03/practical-guide-principal-component-analysis-python/#comment-129382) I have a question about PCA. The author was talking about whether to use the test set while computing PCA. > > But, few important points to understand: > > > 1) We should not combine the train and test set to obtain PCA components > of whole data at once. Because, this would violate the entire > assumption of generalization since test data would get ‘leaked’ into > the training set. In other words, the test data set would no longer > remain ‘unseen’. Eventually, this will hammer down the generalization > capability of the model. > > > 2) We should not perform PCA on test and train > data sets separately. Because, the resultant vectors from train and > test PCAs will have different directions ( due to unequal variance). > Due to this, we’ll end up comparing data registered on different axes. > Therefore, the resulting vectors from train and test data should have > same axes. > > > The author mentioned "Because, this would violate the entire assumption of generalization since test data would get ‘leaked’ into the training set. In other words, the test data set would no longer remain ‘unseen’." As I understand it, the test data can only affect which PC(s) we choose (because of the changed variance), but data can't really leak (let's say only metadata leaks). For each observation we still have data only from its own predictor variables, just in lower dimensionality (projected). Am I right? Or can data from the test set leak in during the calculation of the matrices?
The author is right. You are wrong. There's no such thing as "only metadata leak" or "can't leak really". It's like saying "not pregnant really" -- either you're pregnant, or you're not. Same here -- either data can leak, or it can't. In this case, data can leak. Maybe only partial data, but it's hard to know just how partial and just how bad the impact of that might be. It might be very bad, or it might not; but since you can't know, testing with a methodology where you know the results might be meaningless isn't a good idea.
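As a concrete sketch of the leak-free workflow (the model and the number of components are arbitrary choices): fit PCA on the training fold only, then apply that same fitted transformation to the test fold.

```python
# Sketch: PCA is fit on the training data only; the test data is merely
# transformed with the already-fitted components, so nothing leaks.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = make_pipeline(StandardScaler(), PCA(n_components=5), LogisticRegression(max_iter=1000))
pipe.fit(X_tr, y_tr)              # scaler and PCA see the training fold only
print(pipe.score(X_te, y_te))     # the test fold is only transformed, never fitted on
```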
To my knowledge, lots of languages can be classified as undecidable by applying Rice's theorem, for example {"M" | L(M) is regular}. But what I am not sure about is how to determine whether such a language is enumerable. I reckon we can examine whether M halts on some specific input, i.e. test whether L(M) is the empty set. Am I right? Is there any principled way to determine the enumerability of a language (composed of lots of Turing machines)?
You can use the fact that if $L$ is undecidable and $\overline{L}$ is enumerable then $L$ is not enumerable (since if both $L$ and $\overline{L}$ are enumerable, then $L$ is decidable). In order to determine whether an undecidable language is enumerable or not, you can try to construct an enumerator for the language or its complement.
I have developed a recent interest in propensity scores. I have been using the SPSS tool created by Dr. F. Thoemmes to calculate propensity scores using bivariate "treatment" variables (e.g., depression) and several covariates (e.g., age, sex, persons in household). I am then given a resulting propensity score, but am left wondering what to do with it. I have read that what is typical would be to match two individuals who have nearly identical propensity scores (e.g., two people get group number "17") but who actually differ on your treatment variable (e.g., depression), and then do a paired t-test based on group number and dependent variables (e.g., household income). In this example, we would see how two individuals with all of the same propensities (e.g., age, sex, persons in household) but differing on your treatment variable actually differ on your dependent. This idea makes sense to me, but the software actually does not do matching based on propensity scores, and I don't know how to match them using SPSS or Excel, and I don't want to currently bother to learn how to do so in another program/language (e.g, R). This laziness, lets call it, has forced me to do more research. Two authors state: "After the matching is completed, the matched samples may be compared by an unpaired t-test. (“Matching” erroneously suggests that the resulting data should be analyzed as if they were matched pairs. The treated and untreated samples should be regarded as independent, however, because there is no reason to believe that the outcomes of matched individuals are correlated in any way)." (Schafer & hang, 2008). Other research seems to suggest people often input propensity scores in logistic regressions next to their independent variable of interest, and see how the independent variable predicts while "propensity" is controlled for. Although this line of research is interesting, I must admit I am slightly lost regarding what methods are possible/best in terms of conducting quantitative analyses AFTER propensity score calculations. Any guidance on this matter would be appreciated. I will likely have follow-up questions, too! EDIT: I want to highlight that I am concerned about what types of inferential analyses to do AFTER I get the propensity scores calculated for each individual. For instance, perhaps I could calculate propensity score of being depressed (yes,no) based on covariates (age, number of people in household, smoking, sex, state). The program calculates a propensity score as a new variable for each individual. AFTER this, I am interested in seeing if depression is associated with household income while "controlling for"/"considering"/"matching" (Chose word based on what method you suggest, perhaps) the effect associated with propensity.
You may want to consider other strategies based on propensity scores, like including them as model covariates, or very similar concepts, like Inverse-Probability-of-Treatment weights. These might work in situations where you can't, or don't want, to deal with matching. [This](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1407370/) seems like a decent overview.
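A rough sketch of the inverse-probability-of-treatment weighting idea (the simulated covariates, the treatment model, and the effect size are all made up, and in practice you would also want robust or bootstrap standard errors):

```python
# Sketch: estimate propensity scores with logistic regression, then compare
# outcomes with inverse-probability-of-treatment weights (IPTW).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(40, 10, n)
female = rng.integers(0, 2, n)
# hypothetical treatment ("depression") depends on the covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.03 * age + 0.4 * female))))
income = 50 + 0.5 * age + 2 * female - 5 * treat + rng.normal(0, 5, n)

X = np.column_stack([age, female])
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]   # propensity scores

w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))                   # IPTW weights
ate = (np.average(income[treat == 1], weights=w[treat == 1])
       - np.average(income[treat == 0], weights=w[treat == 0]))
print(f"weighted difference in means (ATE estimate): {ate:.2f}")  # ~ -5 here
```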
Is it generally true that for random variables $X$ and $Y$, regardless of being dependent or independent, that $E[X] = E[X \mid Y \le a] + E[X \mid Y>a]$ ?
Not quite, if we use the [law of total expectation](http://en.wikipedia.org/wiki/Law_of_total_expectation) we would have that $$ E(X) = E(X| Y \le a)P(Y \le a) + E(X|Y > a) P(Y>a) $$
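A quick numerical illustration of the difference (the bivariate setup and the cutoff $a$ are arbitrary choices):

```python
# Sketch: E(X) equals the probability-weighted sum of the two conditional means,
# not their plain sum.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, 1_000_000)
y = 0.5 * x + rng.normal(0.0, 1.0, 1_000_000)   # X and Y are dependent
a = 0.0

p = (y <= a).mean()
lhs = x.mean()
rhs = x[y <= a].mean() * p + x[y > a].mean() * (1 - p)
print(lhs, rhs)                              # close to each other
print(x[y <= a].mean() + x[y > a].mean())    # the unweighted sum is quite different
```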
Lots of people use the term *big data* in a rather *commercial* way, as a means of indicating that large datasets are involved in the computation, and therefore potential solutions must have good performance. Of course, *big data* always carry associated terms, like scalability and efficiency, but what exactly defines a problem as a *big data* problem? Does the computation have to be related to some set of specific purposes, like data mining/information retrieval, or could an algorithm for general graph problems be labeled *big data* if the dataset was *big enough*? Also, how *big* is *big enough* (if this is possible to define)?
Data becomes "big" when a single [commodity computer](http://en.wikipedia.org/wiki/Commodity_computing) can no longer handle the amount of data you have. It denotes the point at which you need to start thinking about building supercomputers or using clusters to process your data.
I have several collections of discrete datasets of integer values, say, from 1 to 10, inclusive. I am interested in characterizing the various distributions in these datasets, and it is important for my purposes whether each distribution is shaped unimodally or multimodally. I am not interested in the explicit discrete mode in the sense that, for example, 8 is the most common value. Rather, I am interested in whether each distribution is shaped unimodally or multimodally. For example, what I would call a unimodal shape: ![Unimodal shape](https://i.stack.imgur.com/yjEgB.png) And a bimodal shape: ![Bimodal shape](https://i.stack.imgur.com/amdzi.png) Obviously, the above two distributions exhibit very different shapes. And, as you can see, some of the datasets are very large, containing tens or hundreds of thousands of values. The discrete nature of the datasets is somewhat problematic because the most common test for unimodality, the Hartigan-Hartigan Dip test, assumes a continuous distribution. I am not a statistician, but it appears as if this is a rather rigid assumption. I tested an R implementation of the Dip test on what appeared to be some (artificial) perfectly unimodal discrete data and detected an incredibly small *p*-value, suggesting that the data was actually multimodal. The two other commonly mentioned tests for unimodality appear to be the Silverman test and the excess mass test. I know little about the latter, but [this explanation of the former](http://adereth.github.io/blog/2014/10/12/silvermans-mode-detection-method-explained/) seemed to hint that the Silverman test applies to discrete data, although it was not said explicitly. So, my questions: 1) Am I thinking about this in the right way? Is "unimodality" the correct term here? 2) What is/are the best statistical test(s) to use for my data? Is the Silverman test an appropriate choice? 3) As I am not a statistician, where, if possible, might I find an already working implementation of the above statistical test (ideally Python or R)?
The implication of the question is that these datasets tabulate counts of values drawn independently from a discrete distribution defined on an ordered set of values such as $1,2,\ldots, 10.$ When that is the case, these counts have a multinomial distribution. If by "mode" we mean a strict local maximum height in the graph (padding the left and right of the graph with zeros), or something like that, and if the counts are all relatively large (more than 5 or so ought to do), then an attractive method to assess the number of modes in the underlying distribution is with bootstrapping. The problem this solves is that the number of modes in the distribution might differ from the number of modes in the data. By reconstructing the experiment from the distribution *defined by the data*, we can see to what extent the number of modes might vary. This is "bootstrapping." Carrying out the bootstrapping is easy: write a function to compute the number of modes in a graph and another one to repeatedly sample from the graph's data and apply that function to the sample. Tabulate its results. Example`R` code is below. When given a dataset like the second one in the question, it plots this chart of the bootstrapped mode frequencies: [![Figure](https://i.stack.imgur.com/Mi0mB.png)](https://i.stack.imgur.com/Mi0mB.png) In 676 of 1000 bootstrap samples there were two modes; in 293 there were three; and in 31 there were four. This indicates the data are consistent with an underlying distribution with two or perhaps three modes. There is some possibility of four. The likelihood of more than four is tiny. These results intuitively make sense, because in the dataset the frequencies of the values $8,9,10$ were close and relatively small. It is possible the true frequency of $9$ is less than those of either $8$ or $10,$ causing there to be modes at $1,8,$ and $10.$ The bootstrapping gives us a sense of how much variation in modes is likely based on the random variation implied by the assumed sampling scheme. The results for the first set of data are always two modes. That is because the variation among counts in the thousands or tens of thousands is so small that it is extremely unlikely these data came from a distribution with any other modes besides the obvious ones at $1$ and $8.$ ``` # # Compute strict modes. # Input consists of the counts in the data, in order, including any zeros. # n.modes <- function(x) { n <- length(x)+1 i <- c(0, x) < c(x, 0) sum(i[-n] & !i[-1]) } # # Bootstrap the mode count in a dataset. # n.modes.boot <- function(x, n.boot=1e3) tabulate(apply(rmultinom(n.boot, sum(x), x), 2, n.modes), ceiling(length(x)/2+1)) # # Plot the bootstrap results. # library(ggplot2) n.modes.plot <- function(f) { X <- data.frame(Frequency=f / sum(f)) X$Count <- factor(1:nrow(X)) X <- subset(X, Frequency > 0) ggplot(X, aes(Count, Frequency, fill=Count)) + geom_col(show.legend=FALSE) } # # Show some examples. # x <- c(70, 30,20,40,60,70,110,170,180,165) f <- n.modes.boot(x) print(n.modes.plot(f)) ```
I have trouble proving the following fact in my econometrics homework. The lecturer said that I should merely look at my statistics books, but I cannot seem to find it anywhere! Thus, sorry if it is (too) ignorant a question. Suppose that random variables $\varepsilon\_{1t}$, $\varepsilon\_{2t} \sim IIN(0,\Sigma)$ (i.e. identically independently normally distributed with a vector of means equal to $0$ and a variance-covariance matrix $\Sigma$). How can I then show that $\varepsilon\_{1t}=\lambda\varepsilon\_{2t}+u\_t$, where $\lambda = \frac {\sigma\_{12}} {\sigma\_{22}}$ and $Var(u\_t)=\sigma\_{11}-\frac{\sigma\_{12}^2}{\sigma\_{22}}$ and $u\_t$ is a disturbance term? ($\sigma\_{ij}$ denotes the corresponding element of the variance-covariance matrix). All help is greatly appreciated. :) --- Clarification: I just wanted to add that this question comes from a time-series context. Thus, IIN means that the $\varepsilon$'s are independent over time (i.e. no autocorrelation) and that the distribution does not change. However, there is contemporaneous correlation between the $\varepsilon$'s as they come from a bivariate distribution.
The [Extreme Optimization](http://www.extremeoptimization.com) .NET library has an ARIMA model - <http://www.extremeoptimization.com/QuickStart/CSharp/ArimaModels.aspx> . The library isn't free but there is a free trial period. I haven't used this part of it myself - but I've used other parts and they worked quite well for what I needed. * reddal
I am a beginner in ML so apologize in advance if this sounds silly. I did a logistic regression on a real data set and I am having problems measuring how well my model fits. I still don't understand how to apply the F1 score in my case. After performing the error analysis on the cross validation set I got the following values: Precision: 0.8642534 Recall: 0.8488889 Accuracy: 0.8222222 F1 score: 0.8565022 Are those good values? What do I compare them with? Is this a sign of a strong relationship between my predictor variables and the response variable? The classes on the response variable are not skewed (I am predicting a gender). Any help ... I would much appreciate. Thanks a lot
Logistic regression is **not** a classifier. It is a probability estimator. Any classification that you do is completely outside the scope of logistic modeling. Some good ways to judge the quality of predictions from logistic regression include high-resolution nonparametric calibration plots, Brier score, and $c$-index (concordance probability; ROC area). The R `rms` package's `lrm, calibrate, validate` functions make these easy to do, and `calibrate` and `validate` correct for overfitting (bias/optimism) using the bootstrap (by default) or cross-validation.
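Those functions are from R's `rms` package; as a rough Python sketch of the same quality measures (Brier score, c-index/ROC area, and a calibration curve) using scikit-learn — an illustration only, without the bootstrap-based overfitting correction that `validate`/`calibrate` perform:

```python
# Sketch: judge predicted probabilities rather than hard classifications.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

p = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

print("Brier score:", brier_score_loss(y_te, p))          # lower is better
print("c-index (ROC area):", roc_auc_score(y_te, p))      # discrimination
frac_pos, mean_pred = calibration_curve(y_te, p, n_bins=10)
print(np.column_stack([mean_pred, frac_pos]))             # calibration: columns should be ~equal
```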
What is the difference between Outlier and Anomaly in the context of machine learning. My understanding is that both of them refer to the same thing.
An anomaly is a result that can't be explained given the base distribution (an impossibility if our assumptions are correct). An outlier is an unlikely event given the base distribution (an improbability).
I ran an OLS regression, and the p-value of one variable is 0.065, which is above the usual threshold for statistical significance. However, it's not too bad either, as my data has only 300+ data points. Therefore I think the finding needs to be taken with a grain of salt. How would you explain such a finding to non-stats people?
1. Use "words", do not talk to non-technical people about p-values. They won't understand. 2. Use your domain knowledge. It might be on the borderline of the 5% significance level, but your domain knowledge might tell you that it is an important factor. 3. If your domain knowledge tells you that the coefficient *must* be positive (or *must* be negative), then you can do a one-sided test. 4. It is still significant at the 10% level anyway. Edit: To address @whuber's comment: My answer was intended to focus on domain knowledge, not not obscuring the fact that a statistical test was failed. In short, see @whuber's comment.
It is a very standard situation that we have a class of problems $P$, and a subclass $\tilde P\subset P$ which has a certain restricted property. For example, $P$ could be the problem of searching through an arbitrary list of people for someone with height $1.8$ meters, while $\tilde P$ could be the same problem, except that it is known that the list is ordered by height. Searching through an ordered list is far more efficient than through an arbitrary list. **My question is: what would be the term for an algorithm using/relying on a certain particular structure of the problem in order to run more efficiently than would be needed in the general case?** **Or alternatively, what is the term for "omitting calculations" that are necessary to do in the general case but that can be omitted in the restricted problem case?**
The concept you are looking for is very close to that of **[promise problems](https://en.wikipedia.org/wiki/Promise_problem)**. Although promise problems are usually related to decision problems, you could easily generalize the concept to search problems. The algorithm solving a promise problem is allowed to have an unspecified behavior on instances which do not fulfil the promise. Hence, you could set the promise to be the property you are relying on and then analyze the complexity of algorithms which must work correctly only for inputs that satisfy the promise. --- (Adding an alternative answer in response to criticism to my other answer. This seems to better address the penultimate question in the OP (i.e, relying on properties of the input), while the other one (i.e., "reductions") better suits "omitting calculations", that is, the last question in the OP.)
Let $A = [ a\_1 \dots a\_n ]$ be a *sorted* list of integers of length $n$. By a simple algorithm that works in-place and in linear time, we can remove all duplicates and output a sorted list of the unique integers, that is, precisely the integers which appear in $A$. More generally, it suffices to have a list with the promise that all identical integers are grouped together to apply the same idea. Now let $A$ be a list of integers, neither sorted nor even grouped as above. Let us call the problem of outputting a not-necessarily sorted list of integers that are precisely the integers appearing in $A$ the "uniquing problem". An algorithm to solve the uniquing problem might sort the list and remove repeated entries in the obvious way, thus requiring a running time of O(n log(n)). Can we do better? I don't think so, although there might be a speed-up because we do not actually require any sorting on the output list, so this algorithm does more work than necessary. Best, Martin
In practice you can solve this problem easily in linear expected time with a hash table: just hash everything and eliminate the duplicates when they collide with each other. See, e.g. [this Python recipe](http://code.activestate.com/recipes/52560-remove-duplicates-from-a-sequence/) which primarily uses hashing but falls back on other strategies for objects that can't be hashed. Whether you accept this as an acceptable theoretical answer depends somewhat on your model of computation.
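A minimal sketch of the hashing approach (it mirrors the idea of the linked recipe rather than reproducing it): a set records which values have been seen, giving expected linear time overall.

```python
# Sketch: expected-linear-time "uniquing" via a hash set, preserving first occurrences.
def unique(seq):
    seen = set()
    out = []
    for x in seq:
        if x not in seen:      # expected O(1) hash lookup
            seen.add(x)
            out.append(x)
    return out

print(unique([3, 1, 3, 2, 1, 5]))   # [3, 1, 2, 5]
```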
I'm trying to design a statistical test that will confirm whether an event ***x*** causes another event ***y***. Or more plainly, whether a specific marketing action actually increases telephone calls from potential customers. Let's say there's a list of data like this: [![enter image description here](https://i.stack.imgur.com/7gzQo.png)](https://i.stack.imgur.com/7gzQo.png) The data contain around 100 data points, each recording a telephone call event *(y)*, the date of the call, and the days since the last marketing event *(x)*. The set of hypotheses is as follows: ``` H0: Regardless of days since last marketing event, calls do not occur more frequently. H1: Calls occur most frequently between 2 and 4 days since the last marketing event. ``` In the full data set (+100 data points) the number of data points in the group (Let's call it group A: 2<=x<=4) is larger than that of the group (group B: 2>x>4). But I need a test to help me be sure that the difference is statistically significant. I mean I could just look at the two sums of each group (ex. A=65, B=44) but of course that's not convincing or appropriate. What test would best fit these data? **Updates / Clarifications:** * 'days\_since\_marketing\_event' can be any whole number. * the core question I'm trying to answer is does x cause y.
In the list of data points you provide I see the following: * There are $N=14$ observations. * Each observation of your variable is a single integer value. * The empirical probability mass function of your variable *days since last marketing call* (let's call that variable $x$, Ok? Ok.) is: + $P(x=0) = 2/14$ + $P(x=1) = 1/14$ + $P(x=2) = 6/14$ + $P(x=3) = 3/14$ + $P(x=4) = 2/14$ Now let's look at your null and alternative hypotheses for a moment. You wrote: > > $H\_0$: Regardless of days since last marketing event, calls do not occur more frequently. > > > and > > $H\_{1}$: Calls occur most frequently between 2 and 4 days since the last marketing event. > > > Typically, when performing frequentist hypothesis tests the alternative hypothesis represents the complement (or negation) of the null… that is, if the null hypothesis says $H\_{0}$: *The world looks like **this***, then the alternative hypothesis would be $H\_{1}$: *It **is not the case that** the world looks like **this.*** $H\_{1}$ is literally just a negation (complement) of $H\_{0}$. So I think your alternative hypothesis could be rewritten with that in mind. But before we rewrite $H\_{1}$, let's look at your null hypothesis, which has, if your will pardon my saying so, a funky complex grammatical construction: * You begin with a dependent clause `Regardless of days since last marketing event` * You conclude with an independent clause that is an incomplete comparison: `calls do not occur more frequently` ("more frequently" ***than what?***) Let me suggest (and feel free to write in comments and correct me if I am wrong) that you are trying to express the simple idea **no value for *days since last marketing call* ($x$) is more or less likely than any other value**, or put even more simply **$x$ has a discrete uniform distribution.** This would imply that (assuming $x$ can take only values between 0 and 4, inclusive) that $P(x=2) = \frac{1}{5} = 0.2$ because there are five equally likely values. There are a few ways you could pose a null and alternative hypothesis around this view of your data: $H\_{0}: P(x=2)=0.2$ $H\_{1}: P(x=2) \ne 0.2$ In order to reject $H\_{0}$ or not, you would need to answer the question *Assuming $x$ **is** distributed discrete uniform from 0 to 4, how likely is it that I would have observed $x=2$ when $N=14$?* The binomial distribution can help us with the arithmetic: under $H\_{0}$ the probability of observing 6 data points where $x=2$ out of 14 observations equals $\frac{14!}{6!(14-6)!}0.2^{6}(1-.2)^{14-6} = 0.0323$. If that probability is smaller than your willingness to make a Type I error (i.e. smaller than your $\alpha$, then you would reject $H\_{0}$ and conclude that $P(x=2) \ne 0.2$, and therefore your data are unlikely to be discretely uniformly distributed at the $\alpha$ level. Of course, my reframing of your null hypotheses may not be quite right. For example, perhaps values of $x>4$ are possible? Or perhaps you are interested in testing your empirical PMF versus a theoretical discrete uniform PMF (the one-sample Kolmogorov-Smirnof test may prove useful here)? Critically: what is the specific question about $x$ you want to answer?
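For reference, the probability quoted above can be reproduced in a couple of lines (using SciPy is an assumption; any binomial PMF would do), together with the more conventional tail probability $P(X\ge 6)$:

```python
# Sketch: P(exactly 6 of 14 observations land on x=2) under the uniform null p=0.2,
# plus the tail probability P(X >= 6).
from scipy.stats import binom

print(binom.pmf(6, 14, 0.2))   # ~0.0323, as in the answer
print(binom.sf(5, 14, 0.2))    # P(X >= 6) ~ 0.044
```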
Computational complexity involves large amounts of combinatorics and number theory, some ingredients from stochastics, and an emerging amount of algebra. However, being an analyst, I wonder whether there are applications of analysis to this field, or maybe ideas inspired by analysis. All I know that comes close to this is the Fourier transform on finite groups. Can you help me?
Flajolet and Sedgewick published the book "Analytic Combinatorics" <http://algo.inria.fr/flajolet/Publications/AnaCombi/anacombi.html>. I don't know much about that topic, but people in the field use tools from complex analysis. So far, their applications seem more on analysis of algorithms, not on computational complexity, as far as I see.
Imagine you have a study with two groups (e.g., males and females) looking at a numeric dependent variable (e.g., intelligence test scores) and you have the hypothesis that there are no group differences. **Question:** * What is a good way to test whether there are no group differences? * How would you determine the sample size needed to adequately test for no group differences? **Initial Thoughts:** * It would not be enough to do a standard t-test because a failure to reject the null hypothesis does not mean that the parameter of interest is equal or close to zero; this is particularly the case with small samples. * I could look at the 95% confidence interval and check that all values are within a sufficiently small range; perhaps plus or minus 0.3 standard deviations.
Following Thylacoleo's answer, I did a little research. The [equivalence](http://cran.r-project.org/web/packages/equivalence/equivalence.pdf) package in R has the `tost()` function. See Robinson and Froese (2004) "[Model validation using equivalence tests](http://research.eeescience.utoledo.edu/lees/papers_PDF/Robinson_2004_EcolModell.pdf)" for more info.
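A minimal sketch of the TOST ("two one-sided tests") idea for the group-difference setting in the question (the equivalence margin of 0.3 SD and the simulated data are arbitrary; in practice a dedicated implementation such as `equivalence::tost` is preferable):

```python
# Sketch: equivalence via two one-sided t-tests (TOST).
# H0: |mu1 - mu2| >= margin; conclude equivalence only if BOTH one-sided tests
# reject. Requires scipy >= 1.6 for the `alternative` argument.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
males = rng.normal(100, 15, 200)
females = rng.normal(101, 15, 200)

margin = 0.3 * 15          # equivalence margin: 0.3 "population" SDs (an assumption)

p_lower = ttest_ind(males, females - margin, alternative="greater").pvalue  # tests diff > -margin
p_upper = ttest_ind(males, females + margin, alternative="less").pvalue     # tests diff < +margin
p_tost = max(p_lower, p_upper)
print(f"TOST p-value: {p_tost:.4f}  (equivalence at alpha=0.05: {p_tost < 0.05})")
```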
A property-casualty insurance company issues automobile policies on a calendar year basis only. Let $X$ be a random variable representing the number of accident claims reported during calendar year 2005 on policies issued during calendar year 2005. Let $Y$ be a random variable representing the total number of accident claims that will eventually be reported on policies issued during calendar year 2005—the ultimate total claim count. The probability that an individual accident claim on a 2005 policy is reported during calendar year 2005 is $d$. Assume that the reporting times of individual claims are mutually independent. Assume also that $Y$ has the negative binomial distribution, with fixed parameters $r$ and $p$, given by $$ \mathbb{P}[Y=y]=\binom{r+y-1}{y}p^{r}(1-p)^{y} $$ for $y=0,1,\ldots$. Calculate $\mathbb{P}[Y=y|X=x]$, the probability that the total number of claims reported on 2005 policies is $y$, given that $x$ claims have been reported by the end of the calendar year. ***Remark:*** I know that the solution requires the use of Bayes' Theorem, the Theorem of Total Probability, and the identity $\binom{y}{x}\binom{r+y-1}{y}=\binom{r+x-1}{x}\binom{(r+x)+(y-x)-1}{y-x}$. I have not been able to correctly describe $X$ or include $d$ in the analysis. I need your help to understand this better.
**Hint:** It might assist you to look at a well-known relationship between the [geometric distribution](https://en.wikipedia.org/wiki/Geometric_distribution) and the [negative-binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution#Related_distributions). (Have a look at their moment generating functions and see if you notice anything about them.)
I want to predict whether people will renew their yearly subscriptions. I want to make this prediction though for each user on every day of their subscription up until the day before the subscription ends (i.e., make predictions for each user on day 1, 2, 3... 364). A simple approach is to obtain training data by randomly sampling users at some point in their history (e.g., look at the information we knew about user 1 when they were 300 days away from renewal; look at what we knew about user 2 when they were 23 days away from renewal, etc) and then build a model with a "days away" variable. One issue with this approach is that unless you have a ton of data, the model won't perform as well on any given "days away" (e.g., 30 days before) as a model in which all data was trained with the same `days_away` (all users data was limited to what we knew about them exactly 30 days before they were up for renewal). I'm curious now about other people's thoughts about how to best train a model(s) that can update its prediction each day as the prediction event gets closer in time. Are there better ways to approach it than the approach outline above?
I am not sure most answers consider the fact that splitting categorical variables is quite complex. Consider a predictor/feature that has "q" possible values; then there are ~$2^q$ possible splits, and for each split we can compute a Gini index or any other metric. It is conceptually easier to say that "every split is performed greedily based on a metric (MSE for a continuous outcome and e.g. the Gini index for a categorical one)", but it is important to address the fact that the number of possible splits for a given feature is exponential in the number of categories. It is a correct observation that CART handles this without exponential complexity, but the algorithm it uses to do so is highly non-trivial, and one should acknowledge the difficulty of the task.
You may all know this simple matrix multiplication algorithm: ``` for(i = 0; i < n; i++) for(j = 0; j < n; j++) for(k = 0; k < n; k++) C[i,j] += A[i,k] * B[k,j] end for end for end for ``` When we want to parallelize it, we can, at first, parallelize the outer loop, which is the easiest thing to understand, since it clearly has no data dependencies across its iterations. Most tutorials on the web stop here, as if we could not parallelize any further. But it seems to me that even the second loop (the one in the middle): ``` for(j = 0; j < n; j++) ``` can be parallelized. What about the inner loop? Is it possible to parallelize all three loops of a matrix multiplication algorithm, or must we stop after parallelizing only the first two loops?
You're starting from a completely wrong point. The execution time of matrix multiplication does not come from the number of multiplications and additions; it's the number of uncached memory accesses that kills you. Reading a number that's not in any processor cache takes about 100 times longer than a multiplication. So the first step is rearranging the order of operations to perform as many operations as possible using only data that is present in the fastest processor cache. That's your first step before you even think about doing things in parallel. The next step is adding multiple sums in parallel, still in one thread. Instead of accumulating only C[i,j], for example, you accumulate six sums C[i, j], C[i,j+1], C[i, j+2], C[i+1, j], C[i+1,j+1], C[i+1, j+2] in parallel. This means you are limited by the throughput of operations, not the latency. The next step is using SIMD instructions. Your processor quite likely has instructions that perform 2, 4 or 8 floating-point operations just as fast as a single floating-point operation. Once you have done all this, especially the first part where the work is divided up into "blocks", you can easily process multiple blocks on multiple threads. What you need to be careful about is that you have enough cache memory to do so.
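A rough sketch of the blocking step (written in Python/NumPy purely to show the loop structure — a real implementation would do this in a low-level language or simply call a tuned BLAS): the matrices are processed in tiles chosen so that the three tiles involved fit in cache together.

```python
# Sketch: cache-blocked (tiled) matrix multiplication.
# The block size is a tuning parameter; 64 is just an illustrative guess.
import numpy as np

def blocked_matmul(A, B, block=64):
    n = A.shape[0]
    C = np.zeros((n, n))
    for i0 in range(0, n, block):
        for j0 in range(0, n, block):
            for k0 in range(0, n, block):
                # All three tiles below are small enough to stay in cache together.
                C[i0:i0+block, j0:j0+block] += (
                    A[i0:i0+block, k0:k0+block] @ B[k0:k0+block, j0:j0+block]
                )
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(blocked_matmul(A, B), A @ B))   # True
```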
Deep learning methods are often said to be very data-inefficient, requiring 100-1000 examples per class, where a human needs 1-2 to reach comparable classification accuracy. However, modern datasets are huge (or can be made huge), which begs the question of whether we really need data-efficient algorithms. Are there application areas where a data-efficient machine learning algorithm would be very useful, despite making trade-offs elsewhere, *e.g.* training or inference efficiency? Would an ML algorithm that is, say, 100x more data-efficient, while being 1000x slower, be useful? People who work on data-efficient algorithms often bring up robotics for "motivation". But even for robotics, large datasets can be collected, as is done in this data-collection factory at Google: [![enter image description here](https://i.stack.imgur.com/frkuJ.jpg)](https://i.stack.imgur.com/frkuJ.jpg) Basically, my concern is that while data-efficient algorithms exist (*e.g.* ILP, graphical models) and could be further improved, their practical applicability is squeezed between common tasks, where huge datasets exist, and rare ones, that may not be worth automating (leave something for humans!).
> > Would an ML algorithm that is, say, 100x more data-efficient, while being 1000x slower, be useful? > > > You have almost answered your own question. There are multiple factors at play here: * The cost of gathering a data point * The cost of training a model with an additional data point * The cost of making the model learn more from a data point * The benefit gained from training the model with an additional data point You are seeking to maximize the expression (benefits - costs). If you measure or estimate these factors accurately enough, and convert to comparable units (such as monetary equivalents perhaps), you'll find it easy to determine what to improve most easily. As others have said, there are various applications with completely different such factors.
I'm building a model and trying to get the relationship between two multi-level categorical variables. For example, I want to know the relationship between race and the likelihood of graduating from college; we have 5 races and YES or NO for graduation. How can I find which races have a higher likelihood and which have a lower one? Thanks
Traditional statistics like Chi-squared tests and Cramer's V can be used to determine relationship between two categorical features.
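A short sketch of both statistics on a race-by-graduation contingency table (the counts are made up and SciPy is assumed to be available):

```python
# Sketch: chi-squared test of independence plus Cramer's V for a 5x2 table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([   # hypothetical counts: rows = 5 races, columns = graduated yes/no
    [120,  80],
    [ 90, 110],
    [ 60,  40],
    [ 30,  70],
    [100, 100],
])

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2={chi2:.1f}, p={p:.3g}, Cramer's V={cramers_v:.3f}")
```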
I've seen "residuals" defined variously as being either "predicted minus actual values" or "actual minus predicted values". For illustration purposes, to show that both formulas are widely used, compare the following Web searches: * [residual "predicted minus actual"](https://www.google.com/search?q=residual%20%22predicted%20minus%20actual%22) * [residual "actual minus predicted"](https://www.google.com/search?q=residual%20%22actual%20minus%20predicted%22) In practice, it almost never makes a difference, since the signs of the individual residuals don't usually matter (e.g. if they are squared or the absolute values are taken). However, my question is: **is one of these two versions (prediction first vs. actual first) considered "standard"?** I like to be consistent in my usage, so if there is a well-established conventional standard, I would prefer to follow it. However, if there is no standard, I am happy to accept that as an answer, if it can be convincingly demonstrated that there is no standard convention.
The residuals are always actual minus predicted. The models are: $$y=f(x;\beta)+\varepsilon$$ Hence, the residuals $\hat\varepsilon$, which are estimates of the errors $\varepsilon$: $$\hat\varepsilon=y-\hat y\\\hat y=f(x;\hat\beta)$$ I agree with @whuber that the sign doesn't really matter mathematically. It's just good to have a convention, though. And the current convention is as in my answer.

Since the OP challenged my authority on this subject, I'm adding some references:

* [(2008) Residual. In: The Concise Encyclopedia of Statistics. Springer, New York, NY](https://link.springer.com/referenceworkentry/10.1007/978-0-387-32833-1_353), which gives the same definition.
* Fisher's "Statistical Methods for Research Workers" (1925) has the same definition too; see Section 26 in [this 1934 version](http://www.haghish.com/resources/materials/Statistical_Methods_for_Research_Workers.pdf). Despite its unassuming title, this is an important work in its historical context.
I understand that in programming languages in general, function routines have what I can generally call a parameter set (or "list"), usually symbolized with parentheses; if it is not empty, it is also utilized in function calls. In JavaScript:

```js
function myFunction (passed_parameter_1, passed_parameter_2) {
    // Some code...
}

myFunction(corresponding_passed_argument_1, corresponding_passed_argument_2);
```

My problem
----------

I never learned in programming courses:

* What is the name of that feature?
* Why, in programming languages in general, a user *must* have this feature, even if the parameter set in the `routine` and the corresponding argument set in the `call` are both empty.
* Why not let a user define a group of parameters (or not) as needed, only inside a routine's scope instead?

Interim note
------------

* One could claim it is needed for a compiler to know what is a function, but a `keyword` such as `function` reinforces the opinion that it isn't.
* One could claim it is needed for separation of `internal implementation` and `external interface`, but such separation could be done as two different sub-scopes of the main scope (`{}`), for example:

> 
> ```
> function myFunction {
>     par myPar1
>     par myPar2
> 
>     // some code
> 
> }
> 
> myFunction {1,2}
> 
> ```
> 

My question
-----------

Why, in programming languages in general, must function declarations be followed by a parameter set, external to the subroutine scope?
In my opinion, your question is formed on the basis of working with a small number of programming languages and extrapolating conclusions that may not be valid. Here are some assumptions I have recognised that may not be valid, which I will later explore in greater detail:

* All function calls must use parentheses.
* All programming languages use these syntactic notations to mean the same thing.
* There is some common underlying nature that requires things to be like this.

I further understand you hinted that you have no significant background in Computer Science theory, but some of the background as to why functions are usually written like this lies in the theoretical evolutionary route that most current languages came from; so let's review some of that, as it may help readers (such as students of Computer Science and programming per se) understand your question and our answers more fully.

Consider an early computer language, such as [Fortran](https://en.wikipedia.org/wiki/Fortran), which was devised in 1954. It had functions and used parentheses. Actually, at that time the character set available on computers was so limited that it had no other bracketing symbols, and only had upper case letters. It did not have access to the `[]{}<>` symbols. Originally all comparisons were made to zero so that comparison operators were not needed, but when they were later introduced they had to be written as .GT. and .LT. due to the lack of characters. The result of this was that we could write:

```
FUNCTION B
B=1.0
RETURN

FUNCTION C(D)
C = D
RETURN

DIMENSION A(10)
A(1) = C(1) + B
STOP
END
```

You can see that we have used parentheses for `A(1)` (which is an array) and `C(1)` which is a function call, but not for `+ B` which is also a function call. We have used parentheses for `A(10)` which is an array declaration and also for `C(D)` which is a function declaration with a formal parameter. The design of the language is such that the compiler can work out what is intended and instruct the computer in its machine code to take appropriate actions. However, sometimes this is not convenient for the compiler writer, as it may require multiple passes (or readings) of the program to determine which use of parentheses was intended before determining the semantics of that fragment of code.

[Algol 60](https://en.wikipedia.org/wiki/ALGOL_60) also has a similar situation, shown in this example:

> 
> **integer** **function** one;
> 
> **begin**
> 
> one := 1
> 
> **end**;
> 
> **integer** **function** two (**integer** p);
> 
> **begin**
> 
> two := p
> 
> **end**;
> 
> **begin**
> 
> **integer** **array** a [ 1: 10 ];
> 
> a[one] := two(1)
> 
> **end**
> 

In the above example `one` is a function that uses no parentheses in its declaration or invocation, whereas `two` is a function that uses parentheses to indicate the formal parameter list (**integer** p) and also the actual argument `(1)`. This time they are not confused with arrays at all, because arrays exclusively use `[]` in both their declaration and application, thus invalidating your assumption about what `()` might mean.

We also get some symbol overloading in more modern advanced languages. Take [Algol 68](https://en.wikipedia.org/wiki/ALGOL_68) for example. 
It also uses parentheses for arrays and functions (and almost everything else), but, like Algol 60 that preceded it, it also uses square brackets `[]`:

> 
> ( [] **int** a := (1,2,3);
> 
> print((a[1])) )
> 

Here `[]` is used to indicate the array declaration, whereas `(1,2,3)` indicates the array constant used to initialise it. The print statement is even more complex. The inner parentheses `(a[1])` indicate an array constant containing one element and the outer `print()` indicates a function call, and the `[1]` indicates an array selection operation. The all-encompassing `()` are used for the program *begin* and *end*.

One more historical language is important here, which is [LISP](https://en.wikipedia.org/wiki/Lisp_(programming_language)). Using LISP we can explore functions as first class objects, lambda functions and other aspects of functions and the notation associated with them. In LISP we can store a function in a named variable like this:

```
(setq double (function (lambda (x) (+ x x)) ))
```

Notice that everything is using parentheses; that is the nature of LISP. LISP is constructed entirely of lists denoted by the tuples in parentheses like this:

```
(x x x x)
```

Those lists can be either or both data and functions; it makes no difference. Now that the name `double` contains a function we can do various things with it. We can **invoke** the function like this:

```
(funcall double 11)
```

This leads into coverage of the notation used in command languages such as the Windows/DOS command prompt and Unix shells. In them, each line is essentially a function call:

```
cd path/file
```

Function calls can be nested by using *backtick notation*:

```
cd `which file`
```

Here no parentheses have been used at all, yet it is a notation familiar to many experienced computer users. I have not really covered other uses of functions and their notations, such as functions as first class objects, passing functions as arguments, partial parameterisation, lazy evaluation and many, many more concepts that could shine a light onto why function notation is as it is. We have to have a notation to ultimately say: this name is a function and I want it to be invoked at this point.

Some of what we are exploring here is the separation between programming language syntax and its semantics: how much of the symbols indicate structure, how much they communicate meaning (semantics), and how this can be determined by automated means. An introduction to a deeper theoretical topic can be started by looking at the [Chomsky Language Hierarchy](https://en.wikipedia.org/wiki/Chomsky_hierarchy). Algol 60's nice separation of `[]` from `()` meant that a *Context Free Grammar* could be used to describe the syntax of the language. Algol 68, in contrast, by its use of symbol overloading (and for other reasons) meant that a more complex *Context Sensitive Grammar* (known as a W-grammar) needed to be used. The more complex the grammar needed to express the language, the more complex the compiler implementation. Language designers wanted both a powerful and expressive language, but also one that could be implemented without undue complexity in the compiler. This often resulted in some stereotyping of language features in ways that were known to be implemented more easily. In the evolution of languages they developed into different families as they grew from one to another. 
To prevent the overloading of `()` meaning *begin* and *end*, the use of the symbols `{}` became commonplace, and the use of `[]` for arrays was considered sensible. However, many languages use `()` to indicate both formal parameters and actual arguments, but this is by no means a universal characteristic, and there are exceptions to everything. When we notice that many languages seem to use the `name()` notation for function invocation, what we are really seeing is that many languages in common use today inherit a set of notations from the **C** family of languages through the Algol route. That is, all we are seeing is the genealogy of language notations.

I hope that with this brief tour of the history of the use of parentheses in the evolution of computer languages, and its relation to computer science theory, you have gained an insight into what you are trying to understand.

---

And if it matters: yes, I have worked on Fortran, Algol 60, Algol 68 and other esoteric language compilers.
I have some questions about the AIC and hope you can help me. I applied model selection (backward, or forward) based on the AIC to my data, and some of the selected variables ended up with p-values > 0.05. I know that people say we should select models based on the AIC instead of the p-value, so it seems that the AIC and the p-value are two different concepts. Could someone tell me what the difference is? What I understand so far is that:

1. For backward selection using the AIC, suppose we have 3 variables (var1, var2, var3) and the AIC of this model is AIC\*. If excluding any one of these three variables would not end up with an AIC which is significantly lower than AIC\* (in terms of the chi-square distribution with df=1), then we would say these three variables are the final results.
2. A significant p-value for a variable (e.g. var1) in a three-variable model means that the standardized effect size of that variable is significantly different from 0 (according to a Wald or t-test).

What's the fundamental difference between these two methods? How do I interpret it if there are some variables having non-significant p-values in my best model (obtained via the AIC)?
AIC and its variants are closer to variations on $R^2$ than to the p-values of each regressor. More precisely, they are penalized versions of the log-likelihood.

You don't want to test differences of AIC using chi-squared. You could test differences of the log-likelihood using chi-squared (if the models are nested). For AIC, lower is better (in *most* implementations of it, anyway). No further adjustment needed.

You really want to avoid automated model selection methods, if you possibly can. If you must use one, try LASSO or LAR.
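As a hedged illustration of the difference (simulated data and hypothetical models, just to show the mechanics), compare an AIC comparison with a likelihood-ratio test of nested OLS models in Python/statsmodels:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 1 + 2 * x1 + 0.5 * x2 + rng.normal(size=n)   # x3 is pure noise

X_full = sm.add_constant(np.column_stack([x1, x2, x3]))
X_red = sm.add_constant(np.column_stack([x1, x2]))

full = sm.OLS(y, X_full).fit()
red = sm.OLS(y, X_red).fit()

# AIC: just compare, lower is better -- no significance test involved
print("AIC full:", full.aic, "AIC reduced:", red.aic)

# Likelihood-ratio test (only valid because the models are nested)
lr = 2 * (full.llf - red.llf)
print("LR test p-value:", chi2.sf(lr, df=1))
```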
Wikipedia's [Minimum Spanning Tree](https://en.wikipedia.org/wiki/Minimum_spanning_tree) reads:

> 
> **Minimum-cost edge**
> 
> If the minimum cost edge e of a graph is unique, then this edge is included in any MST.
> 
> Proof: if e was not included in the MST, removing any of the (larger cost) edges in the cycle formed after adding e to the MST, would yield a spanning tree of smaller weight.
> 
> 

So far so good, but what about the **two** minimum cost edges? Since I cannot think of any counter-example for which it doesn't hold, I have written the following Python script. It generates a random graph, weights it with consecutive integers (1, 2, 3, ...), calculates its (unique) MST, and repeats until the second lightest weight of the MST is not 2.

```python
import networkx as nx

n = 8
p = 0.3

while True:
    G = nx.erdos_renyi_graph(n, p)
    for (w, (x, y)) in enumerate(G.edges(), 1):
        G[x][y]["weight"] = w
    edges = list(nx.minimum_spanning_edges(G, data=False))
    weights = sorted(G[x][y]["weight"] for (x, y) in edges)
    # print(weights)
    if len(weights) > 1 and weights[1] != 2:
        print(weights)
        break
```

You can convince yourself that this script works as intended by uncommenting the first `print`, or by replacing `weights[1] != 2` by `weights[2] != 3` (which makes it stop when the MST doesn't include the edge of weight 3). However, it doesn't terminate, which seems to indicate that Wikipedia's proposition could be extended to the **two** lowest cost edges.

There is a possibility that such graphs are rare, that the parameters of my random generator are biased against them, or that my consecutive weights are wrong somehow. What do you think? Can you prove this extension, or produce a counter-example?
If the MST does not contain at least one of the two minimum-cost edges, add that missing edge to the MST. You get a graph with a single cycle, and the cycle has length 3 or more, so it contains an edge that is neither of the two minimum-cost edges. Remove that larger-cost edge from the cycle (it costs more than the one you just introduced), and you are left with a tree; this tree is spanning and costs less than the original one. Proved by contradiction!
I understand k-fold cross validation has been discussed many times here and I have read some of the discussions. I understand how k-fold works, but what confuses me is the following: cross validation, in my understanding, is used for model validation, which validates an existing model - for example, a model built on the full dataset. But what I read about k-fold is that it will build 5 different models based on 4/5 of the full data set (sampled 5 times) and test each on the remaining 1/5 of the data, if k = 5. It seems this has nothing to do with validating an existing model. Could anyone kindly comment on this?
Cross-validation estimates the performance of a *method* of producing a model, not of the model itself. The performance of the model can reasonably be expected to be consistent with the performance of models generated by this method, but the indirect inference there is usually not stated.
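As a small sketch of that distinction (using scikit-learn and a built-in toy dataset purely for illustration): the cross-validation scores describe the *procedure*, while the model you keep is trained on all the data by that same procedure.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
method = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# estimates how well *this way of building a model* generalises
scores = cross_val_score(method, X, y, cv=5)
print(scores.mean(), scores.std())

# the model you actually use is then trained on all of the data
final_model = method.fit(X, y)
```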
Recently I did some analysis of the effects of reputation on upvotes (see the [blog-post](http://stats.blogoverflow.com/2011/08/04/does-jon-skeet-have-mental-powers-that-make-us-upvote-his-answers-the-effect-of-reputation-on-upvotes/)), and subsequently I had a few questions about possibly more enlightening (or more appropriate) analysis and graphics. So a few questions (and feel free to respond to any one in particular and ignore the others):

1. In its current incarnation, I did not mean-center the post number. I think what this does is give the false appearance of a negative correlation in the scatterplot, as there are more posts towards the lower end of the post count (you see this doesn't happen in the Jon Skeet panel, only in the mortal-users panel). Is it inappropriate not to mean-center the post number (since I mean-centered the score by each user's average score)?
2. It should be obvious from the graphs that score is highly right skewed (and mean centering did not change that any). When fitting a regression line, I fit both linear models and a model using the Huber-White sandwich errors (via [`rlm` in the MASS R package](http://www.ats.ucla.edu/stat/r/dae/rreg.htm)) and it did not make any difference in the slope estimates. Should I have considered a transformation of the data instead of robust regression? Note that any transformation would have to take into account the possibility of 0 and negative scores. Or should I have used some other type of model for count data instead of OLS?
3. I believe the last two graphics, in general, could be improved (and this is related to improved modelling strategies as well). In my (jaded) opinion, I would suspect that if reputation effects are real they would be realized quite early on in a poster's history (I suppose, if true, these may be considered "you gave some excellent answers so now I will upvote all of your posts" effects instead of "reputation by total score" effects). How can I create a graphic to demonstrate whether this is true, while accounting for the over-plotting? I thought maybe a good way to demonstrate this would be to fit a model of the form;

$$Y = \beta\_0 + \beta\_1(X\_1) + \alpha\_1(Z\_1) + \alpha\_2(Z\_2) \cdots \alpha\_k(Z\_k) + \gamma\_1(Z\_1\*X\_1) \cdots \gamma\_k(Z\_k\*X\_1) + \epsilon $$

where $Y$ is the `score - (mean score per user)` (the same as in the current scatterplots), $X\_1$ is the `post number`, and the $Z\_1 \cdots Z\_k$ are dummy variables representing some arbitrary range of post numbers (for example $Z\_1$ equals `1` if the post number is `1 through 25`, $Z\_2$ equals `1` if the post number is `26 through 50`, etc.). $\beta\_0$ and $\epsilon$ are the grand intercept and error term respectively. Then I would just examine the estimated $\gamma$ slopes to determine if reputation effects appeared early on in a poster's history (or graphically display them). Is this a reasonable (and appropriate) approach?

It seems popular to fit some type of non-parametric smoothing line to scatterplots like these (such as loess or splines), but my experimentation with splines did not reveal anything enlightening (any evidence of positive effects early on in poster history was slight and sensitive to the number of splines I included). Since I have a hypothesis that the effects happen early on, is my modelling approach above more reasonable than splines? 
Also note that although I've pretty much dredged all of this data, there are still plenty of other communities out there to examine (and some, like superuser and serverfault, have similarly large samples to draw from), so it is perfectly reasonable to suggest that in future analysis I use a hold-out sample to examine any relationship.
This is a brave try, but with these data alone, **it will be difficult or impossible to answer your research question** concerning the "effect of reputation on upvotes." The problem lies in separating the effects of other phenomena, which I list along with brief indications of how they might be addressed. * **Learning effects**. As reputation goes up, experience goes up; as experience goes up, we would expect a person to post better questions and answers; as their quality improves, we expect more votes per post. Conceivably, *one way to handle this in an analysis would be to identify people who are active on more than one SE site*. On any given site their reputation would increase more slowly than the amount of their experience, thus providing a handle for teasing apart the reputation and learning effects. * **Temporal changes in context.** These are myriad, but the obvious ones would include + *Changes in numbers of voters over time*, including an overall upward trend, seasonal trends (often associated with academic cycles), and outliers (arising from external publicity such as links to specific threads). *Any analysis would have to factor this in when evaluating trends in reputation for any individual*. + *Changes in a community's mores over time*. Communities, and how they interact, evolve and develop. Over time they may tend to vote more or less often. *Any analysis would have to evaluate this effect and factor it in*. + *Time itself.* As time goes by, earlier posts remain available for searching and continue to garner votes. Thus, *caeteris paribus*, *older* posts ought to produce more votes than newer ones. (This is a *strong* effect: some people consistently high on the monthly reputation leagues have not visited this site all year!) This would mask or even invert any actual positive reputation effect. *Any analysis needs to factor in the length of time each post has been present on the site*. * **Subject popularity.** Some tags (e.g., [r](/questions/tagged/r "show questions tagged 'r'")) are far more popular than others. Thus, changes in the kinds of questions a person answers can be confounded with temporal changes, such as a reputation effect. Therefore, *any analysis needs to factor in the nature of the questions being answered.* * **Views** [added as an edit]. Questions are viewed by different numbers of people for various reasons (filters, links, etc.). It's possible the number of votes received by answers are related to the number of views, although one would expect a declining proportion as the number of views increases. (It's a matter of how many people who are truly interested in the question actually view it, not the raw number. My own--anecdotal--experience is that roughly half the upvotes I receive on many questions come within the first 5-15 views, although eventually the questions are viewed hundreds of times.) Therefore, *any analysis needs to factor in the number of views,* but probably not in a linear way. * **Measurement difficulties.** "Reputation" is the sum of votes received for different activities: initial reputation, answers, questions, approving questions, editing tag wikis, downvoting, and getting downvoted (in descending order of value). Because these components assess different things, and not all are under the control of the community voters, *they should be separated for analysis*. A "reputation effect" presumably is associated with upvotes on answers and, perhaps, on questions, but should not affect other sources of reputation. 
*The starting reputation definitely should be subtracted* (but perhaps could be used as a proxy for some initial amount of experience). * **Hidden factors.** There can be many other confounding factors that are impossible to measure. For example, there are various forms of "burnout" in participation in forums. What do people do after an initial few weeks, months, or years of enthusiasm? Some possibilities include focusing on the rare, unusual, or difficult questions; providing answers only to unanswered questions; providing fewer answers but of higher quality; etc. Some of these could mask a reputation effect, whereas others could mistakenly be confused with one. *A proxy for such factors might be changes in rates of participation by an individual*: they could signal changes in the nature of that person's posts. * **Subcommunity phenomena.** A hard look at the statistics, even on very active SE pages, shows that a relatively small number of people do most of the answering and voting. A clique as small as two or three people can have a profound influence on the growth of reputation. A two-person clique will be detected by the site's built-in monitors (and one such group exists on this site), but larger cliques probably won't be. (I'm not talking about formal collusion: people can be members of such cliques without even being aware of it.) *How would we separate an apparent reputation effect from activities of these invisible, undetected, informal cliques?* Detailed voting data could be used diagnostically, but I don't believe we have access to these data. * **Limited data.** To detect a reputation effect, you will likely need to focus on individuals with dozens to hundreds of posts (at least). That drops the current population to less than 50 individuals. With all the possibility of variation and confounding, that is far too small to tease out significant effects unless they are very strong indeed. *The cure is to augment the dataset with records from other SE sites*. Given all these complications, it should be clear that the exploratory graphics in the blog article have little chance of revealing anything unless it is glaringly obvious. Nothing leaps out at us: as expected, the data are messy and complicated. It's premature to recommend improvements to the plots or to the analysis that has been presented: **incremental changes and additional analysis won't help until these fundamental issues have been addressed**.
I'm reading about operating systems, but one concept confuses me.

**What doesn't confuse me:** When an interrupt, system call or processor exception occurs and the processor switches from user mode to kernel mode, the operating system uses the PCB (process control block) to save the needed state: the hardware completes the previous instructions, then the program counter, stack pointer, registers, execution state and so on are saved. Each process has a separate process control block. This seems logical.

**What confuses me:** A few subchapters later, I read that when an interrupt, system call or processor exception occurs, the operating system needs to save some information - program counter, stack pointer, registers, execution state and so on. But this time it saves it not in the PCB but on the "interrupt stack". It also said that each process needs a separate "kernel stack". This raises a few questions:

* Are the "interrupt stack" and the "kernel stack" the same thing, just named differently?
* If the operating system already has a PCB, why does it need to use a "kernel stack"? Yes, each process has its own PCB, and its own location in the kernel stack.
* Does the operating system use both the kernel stack and the PCB? If it does, why not just one of these?
* What is the difference between the PCB and the kernel stack, if they are used for the same tasks?

P.S. The book is Operating System Principles and Practice.
To be in NPC, a problem C has to be in NP, and every problem in NP has to be polynomial-time reducible to C. If P=NP, every problem in NP can be solved in polynomial time. This means that if P=NP, then for almost every problem C in P we can define the following reduction from any problem in NP to C: solve the NP problem in polynomial time; if the instance you are looking at is a yes-instance, reduce it to an input on which we know C returns true, otherwise reduce it to an input on which we know C returns false.

For example, take C to be the problem of determining whether a number is even, and suppose we want to reduce the Hamiltonian path problem to it. We solve the Hamiltonian path problem, which we can do in polynomial time since we assume P=NP. If we find that a Hamiltonian path exists, we select 2 as the input to C; if we find that no Hamiltonian path exists, we select 1.

This of course fails if C always returns true or always returns false, so those two problems, as noted by Tom, while part of P, cannot be part of NPC. However, all other problems in P will be in NPC if P=NP.
The AlexNet architecture uses zero-padding as shown in the picture. However, there is no explanation in the paper of why this padding is introduced. [![enter image description here](https://i.stack.imgur.com/oJVjf.png)](https://i.stack.imgur.com/oJVjf.png) The Stanford CS 231n course teaches that we use padding to preserve the spatial size: [![enter image description here](https://i.stack.imgur.com/dtybe.png)](https://i.stack.imgur.com/dtybe.png) **I am curious whether that is the only reason for zero padding. Can anyone explain the rationale behind zero padding? Thanks!** **Reason I am asking** Let's say I don't need to preserve the spatial size. Can I just remove the padding then without loss of performance? I know it results in a very fast decrease in spatial size as we go to deeper layers, but I can trade that off by removing pooling layers as well.
It seems to me the most important reason is to preserve the spatial size. As you said, we can trade off the decrease in spatial size by removing pooling layers. However, many recent network structures (like [residual nets](https://arxiv.org/abs/1512.03385), [inception nets](https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf), [fractal nets](https://arxiv.org/abs/1605.07648)) operate on the outputs of different layers, which requires a consistent spatial size between them. Another thing is, if there is no padding, the pixels in the corner of the input only affect the pixels in the corresponding corner of the output, while the pixels in the centre contribute to a neighbourhood in the output. When several no-padding layers are stacked together, the network sort of ignores the border pixels of the image. Just some of my understanding; I believe there are other good reasons.
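As a rough back-of-the-envelope check (plain Python, assuming the standard output-size formula and the 56 px input from the question), you can see how quickly stacked unpadded 5×5 convolutions eat the feature map:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution: (W + 2P - K) // S + 1."""
    return (size + 2 * pad - kernel) // stride + 1

size_valid = size_same = 56
for layer in range(1, 11):
    size_valid = conv_out(size_valid, kernel=5, pad=0)  # no padding: shrinks by 4 per layer
    size_same = conv_out(size_same, kernel=5, pad=2)    # zero padding: size preserved
    print(f"layer {layer}: valid={size_valid}, same={size_same}")
```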
Given two **increasing** functions $f$ and $g$ with values in the **natural numbers**, is it always the case that either $f\in O(g)$ or $g\in O(f)$?

If the statement is true, can anyone provide a counterexample for when either of the two conditions in bold is relaxed? If the statement is false, can someone provide a counterexample to show why?

I believe that the statement is true, and my attempt at the proof is by assuming without loss of generality that $f\notin O(g)$:

$f\notin O(g)\implies\neg(\exists N\in\mathbb{N}$ s.t $f(n)\leq Cg(n)$, for some $C>0$ and $\forall n>N)$

$\implies\forall N\in\mathbb{N}\space,\space f(n)>Cg(n)$, for some $C>0$ and $\forall n>N$

Have I negated the expression correctly? I can't see how I can get $g\in O(f)$ from this expression.
Here is a simple counterexample: $$ \begin{align\*} &f(2m) = 2^{2^{4m}} & &g(2m) = 2^{2^{4m+1}} \\ &f(2m+1) = 2^{2^{4m+3}} & &g(2m+1) = 2^{2^{4m+2}} \end{align\*} $$ The first few values (starting at $0$) are $$ f = 2^{2^0}, 2^{2^3}, 2^{2^4}, 2^{2^7}, 2^{2^8}, 2^{2^{11}}, \ldots \\ g = 2^{2^1}, 2^{2^2}, 2^{2^5}, 2^{2^6}, 2^{2^9}, 2^{2^{10}}, \ldots $$ When $n$ is even, $g(n)$ is much larger than $f(n)$. When $n$ is odd, $f(n)$ is much larger than $g(n)$. Hardy (*Orders of infinity*) showed that if $f,g$ are both *logarithmo-exponential* functions then exactly one of the following holds: $|f| = o(|g|), |f|=\Theta(|g|), |f| = \omega(|g|)$ (here $|f|$ is the norm of $f$, which can be complex-valued). In particular, either $|f| = O(|g|)$ or $|g| = O(|f|)$. Here logarithmo-exponential functions are those that can be formed using the four arithmetic operations and the two functions $\log$ and $\exp$; complex constants can be used, so you can represent, for example $\sin$ (though not $\arcsin$).
In preparation for an exam on algorithms and complexity, I am currently solving old exercises. One concept I have been struggling with since I first encountered it is amortised analysis. What is amortised analysis and how do I do it? In our lecture notes, it is stated that "amortised analysis gives bounds for the "average time" needed for certain operations, and it can also give a bound for the worst case". That sounds really useful, but when it comes to examples, I have no idea what I have to do, and even after having read the sample solution, I have no idea what they are doing.

> 
> Let's add up 1 in base 2, i.e. 0, 1, 10, 11, 100, 101, 110, 111, 1000, ... Using amortised analysis, show that in each step only amortised constantly many bits need to be changed.
> 
> 

(the exercise originally is in German, so I apologise for my maybe not perfectly accurate translation)

Now the standard solution first defines $\phi(i) := c \cdot \# \{\text{1-bits in the binary representation}\}$ for some constant $c > 0$. I think this is what is called the potential function, which somehow corresponds to the excessive units of time (but I have no idea why I would come up with this particular definition). Assume that we have to change $m$ bits in the $i$-th step. Such a step is always of the form $$\dots \underbrace{0 1 \dots 1}\_m \to \dots \underbrace{1 0 \dots 0}\_m.$$ This statement is understandable to me; however, again I fail to see the motivation behind it. Then, out of nowhere, they come up with what they call an "estimate" $$a(i) = m + c(\phi(i) - \phi(i-1)) = m + c(-m + 2)$$ and they state that for $c=1$, we get $a(i)=2$, which is what we had to show. What just happened? What is $a(i)$? Why can we choose $c=1$? In general, if I have to show that in each step only amortised constantly many "units of time" are needed, does that mean that I have to show that $a(i)$ is constant?

There are a few other exercises regarding amortised analysis and I don't understand them either. I thought if someone could help me out with this one, I could give the other exercises another try and maybe that'll help me really grasp the concept. Thanks a lot in advance for any help.
Let $a\_i$ be the amortized costs of operation $i$, $c\_i$ be the actual costs of operation $i$, and $D\_i$ the data structure after operation $i$. The amortized costs of an operation are **defined** as $$a\_i:=c\_i+ \Phi(D\_i) -\Phi(D\_{i-1}).$$ You usually assume the following:

1. The potential is always positive, that is $\forall i\colon \Phi(D\_i)\ge 0$,
2. In the beginning the potential is 0, that is $\Phi(D\_0)=0$.

These two assumptions are not always necessary, but they are commonly used and make life easier. Now you can think of the potential as costs that have already been paid by previous operations. To justify this definition, consider the accumulated costs of all operations. We get the following telescoping sum $$ \sum\_{i=1}^n a\_i = \sum\_{i=1}^n (c\_i +\Phi(D\_i) -\Phi(D\_{i-1})) = \Phi(D\_n) -\Phi(D\_{0}) + \sum\_{i=1}^n c\_i \ge \sum\_{i=1}^n c\_i. $$ So you see that the sum of the amortized costs is at least the total actual cost. Hence the amortized costs give an upper bound.

I have not seen the use of the $c$ in your formula before. But the $c$ can simply be absorbed into the potential function $\Phi$. So think of the $c$ as a factor that weights the potential; by redefining the function $\Phi$ you can get rid of it.

So to answer your concrete questions: $a(i)$ denotes the amortized costs of the $i$th operation, that is, the actual costs in a sequence of operations charged to operation $i$. The $c$ is some positive factor you can pick. And the $a(i)$s are not constant in general (they are in your example) - you don't have to show anything for the $a(i)$ costs; the values $a(i)$ are the result of your analysis. Since you seem to speak German, you might also have a look at my [German class notes](http://cs.uni-muenster.de/u/schulz/WS10/potential.pdf) on the potential function.
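To make the binary-counter example concrete, here is a small Python sketch (my own illustration, with $c=1$ and $\Phi$ = number of 1-bits) that checks that the amortised cost is exactly 2 at every step:

```python
def increment(bits):
    """Increment a little-endian binary counter in place; return the actual bit flips."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0          # flip trailing 1s to 0
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(0)       # counter grows by one bit
    bits[i] = 1              # flip the first 0 to 1
    flips += 1
    return flips

bits = [0]
phi_prev = 0                 # Phi(D_0) = number of 1-bits = 0
for step in range(1, 1001):
    cost = increment(bits)          # actual cost c_i
    phi = sum(bits)                 # Phi(D_i)
    amortized = cost + phi - phi_prev
    assert amortized == 2           # matches a(i) = 2 from the exercise
    phi_prev = phi
```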
Maybe my question doesn't make sense because I lack some more thorough understanding, but I was curious whether arithmetic is Turing complete. As I understand it, a "model of computation" is a mechanism where you can compute outputs from inputs. Thus a "computation" is just a mapping from inputs to outputs. So if, say, the universe of possible inputs and outputs is 1 and 2, these would be all possible computations:

```
1 -> 1
1 -> 2
1 -> 1,2
1 -> 2,1
2 -> 1
2 -> 2
2 -> 1,2
2 -> 2,1
1,2 -> 1
1,2 -> 2
1,2 -> 1,2
1,2 -> 2,1
2,1 -> 1
2,1 -> 2
2,1 -> 1,2
2,1 -> 2,1
```

Now, I think this isn't even technically the full set, because the full set would be infinite, since I could have repeated inputs and outputs like `1,1 -> 2,2,1,1`. But at least this is the general gist which I understand. And in my "model of computation", I should be able to say, apply the computation X to some inputs, where X is one of the above mappings, and get back the corresponding outputs. So from this, I understand that the Turing model is proven to be able to map all inputs to outputs over the universe of non-complex numbers. So my question is, would arithmetic be Turing complete in its ability to map inputs to outputs? Or are there some mappings that cannot be formulated using arithmetic, but can be using the Turing model?
It depends what you mean by "arithmetic". It's a fairly well-known result that [Peano arithmetic](https://en.wikipedia.org/wiki/Peano_axioms) (PA) is powerful enough to model Turing machines. There are other models of arithmetic, such as Presburger arithmetic (which is strictly weaker; it's essentially PA without multiplication) and [real closed fields with a partial order](https://en.wikipedia.org/wiki/Real_closed_field), which are known to be decidable.
I am trying to minimize the function $$ f(x\_1, …x\_n)=\sum\limits\_{i}^n-a\_i\cos(4(x\_i-b\_i)) +\sum\limits\_{ij}^\text{edge}- \cos(4(x\_i-x\_j)), \quad x\_i,b\_i\in (-\pi, \pi)$$ where $\sum\limits\_{ij}^{edge}$ only sums over predefined edges (the edges are sparse; typically there are 4 edges for each $i$). The quantities $a\_i, b\_i$ are constants, $a\_i \in (0,1)$; many $a\_i$s are 0. The gradient at each iteration can be computed directly.

I observe that gradient descent converges very fast, but produces very bad results (it very easily falls into a local optimum). I have no idea whether the conjugate gradient algorithm or the BFGS algorithm would make a significant difference. General-purpose optimizers such as simulated annealing and genetic algorithms do not use the gradient information (they seem powerful but are blind to the local derivative), so I am looking for techniques that make good use of the gradient information.
One alternative is to use [Newton's method](https://en.wikipedia.org/wiki/Newton's_method_in_optimization). Newton's method uses information from the first and second derivative (i.e., the Hessian matrix), so it can converge in fewer steps than gradient descent (which uses only information from the first derivative) -- but the cost is that each single step is slower, because it requires more complex computations. If Newton's method is too slow, there are faster variants, such as BFGS and other [quasi-Newton methods](https://en.wikipedia.org/wiki/Quasi-Newton_method) or the conjugate gradient method. Another approach is to continue using gradient descent, but make some adjustments. One simple thing you can try is to apply it many times, each time with a different random starting point. Also there are various tuning knobs you can tweak, such as adjusting the step size or using momentum. A third possibility could be to try to make a "smarter" choice of starting point for gradient descent: you can approximate each cosine by a quadratic function (using the Taylor series expansion at a particular point), obtain a function $f^\*$ that is a quadratic approximation to $f$, and then find the exact minimum for $f^\*$ using least-squares method. This basically amounts to applying one step of Newton's method and then continuing with gradient descent from there. Trying several different methods shouldn't be too hard if you use the right software, as there are many software packages that let you specify the function and then make it easy to try multiple different optimization methods.
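As an illustration of the "quasi-Newton method with many random starting points" idea, here is a hedged Python/SciPy sketch; the $a\_i$, $b\_i$ values and the edge list below are made-up stand-ins for your actual data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 50
a = rng.uniform(0, 1, n)                              # hypothetical a_i
b = rng.uniform(-np.pi, np.pi, n)                     # hypothetical b_i
edges = [(i, (i + 1) % n) for i in range(n)]          # hypothetical sparse edges

def f(x):
    val = -np.sum(a * np.cos(4 * (x - b)))
    val -= sum(np.cos(4 * (x[i] - x[j])) for i, j in edges)
    return val

def grad(x):
    g = 4 * a * np.sin(4 * (x - b))
    for i, j in edges:
        s = 4 * np.sin(4 * (x[i] - x[j]))
        g[i] += s
        g[j] -= s
    return g

best = None
for _ in range(20):                                   # random restarts
    x0 = rng.uniform(-np.pi, np.pi, n)
    res = minimize(f, x0, jac=grad, method="L-BFGS-B",
                   bounds=[(-np.pi, np.pi)] * n)
    if best is None or res.fun < best.fun:
        best = res
print(best.fun)
```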
I am working on a problem to identify subgroups within a population. After writing some code to get my data into the correct format, I was able to use the apriori algorithm for association rule mining. When I look at the results I see something like the following:

```
rule 1 0.3 0.7 18x0 -> trt1
rule 2 0.4 0.7 17x0 -> trt1
rule 3 0.3 0.7 16x1 -> trt1
```

The variables in the group come from how I discretized the data and can be read as follows:

```
(variable name = 17x)(value = 1).
```

I want to make sure that I'm reading this correctly: someone who got a response of 0 in group 17 or someone who got a response of 0 in group 18 would fall into the trt1 group, but not people who got a response of 0 in both categories, since there is no

```
18x0 17x0 -> trt1
```

rule.
When interpreting traditional (frequency-confidence) association rules, it is important to note that the discoveries do not necessarily express positive statistical dependence; they may also express negative dependence, independence, or statistically insignificant positive dependence (that doesn't hold in the population). So, the absence of a certain rule does not mean that the rule antecedent and consequent are not positively associated. If you want to find positive dependencies in the population, you need to use other search criteria.

In principle, it is possible to search first for all association rules with such small minimum frequency and confidence thresholds that no true associations are missed, and then filter the results with other measures that estimate the strength and/or significance of the dependence (e.g. leverage, lift, chi^2, mutual information, Fisher's p, etc.). Some apriori implementations even offer this option, but the choice of measures may be limited. However, this approach is often infeasible, because the number of rules explodes exponentially (the total number of all possible rules is O(2^k), where k is the number of attributes).

There are also efficient algorithms that find only a condensed representation of all frequent and confident association rules (e.g. all rules where the antecedent is a closed set) and that can be used with minimal thresholds. However, they may be difficult to interpret, because they impose extra criteria on which association rules are presented, and you still need to do the filtering afterwards.

A better approach is to use algorithms that search directly with statistical goodness measures, without any (or with at most minimal) minimum frequency requirements. Such methods are nowadays getting more popular and you can find free source code on the internet (note that the patterns may also be called classification rules or dependency rules). For a short review of such methods (and a detailed description of one algorithm), see e.g.

Hämäläinen, W.: Kingfisher: an efficient algorithm for searching for both positive and negative dependency rules with statistical significance measures. Knowledge and Information Systems: An International Journal (KAIS) 32(2):383-414, 2012 (also <https://pdfs.semanticscholar.org/59ff/5cda9bfefa3b188b5302be36e956b717e28e.pdf>)
I've done a bit of reading and I'm more confused than when I started. What is the correct way to build a (binary) classification model that doesn't give overly optimistic (or pessimistic) results? Suppose I have a data set of 7000 samples with around 700-800 features. The classes are about 70/30 biased towards the positive class. I've been using an SVM whose parameters I preset, and doing 10-fold CV. I then take the mean and variance of the false positive rate and the false negative rate as my model performance metric. I now would like to do a grid search on the parameters (which will likely entail another inner cross validation). I don't think doing it on the entire set first is valid since it breaks the cross validation independence, but if I do it for each of the folds I'll have 10 different models. What is the correct workflow for training classifiers when the sample size isn't big enough to split into multiple pieces?
> 
> I don't think doing it on the entire set first is valid since it breaks the cross validation independence
> 
> 

That thought is correct: you want to look into [nested cross validation](https://stats.stackexchange.com/search?q=nested+cross+validation). (In principle you can nest all kinds of validation schemes: nesting of single splits leads to the typical training + optimization/hyperparameter tuning (aka validation set) + validation of the final model (aka test set) setup.)

> 
> but if I do it for each of the folds I'll have 10 different models.
> 
> 

Yes and no. Yes, in the outer cross validation you generate a number of tuned surrogate models. But if all is well with your models, they should end up with the same hyperparameters, and there really shouldn't be any decisions or choice involved: one of the key assumptions for cross validation is that the modeling is *stable*, leading to equivalent (if not equal) surrogate models - which are in turn assumed to be equivalent to the "final" or "big" model trained (using the same tuning routine) on the whole data set. If you observe instability already among the surrogate models, extrapolation of performance characteristics to the final model is a shot into the dark.

So if you can show that the tuned surrogate models are stable, including their hyperparameters (i.e. the tuned surrogate models have the same hyperparameters although the tuning was done separately for each surrogate model), you're fine. A nice side effect of the cross validation is that you can check whether the tuning is in fact stable: if it isn't, you have deeper problems and need to rethink your modeling approach (constraints/regularization/fixing hyperparameters externally).

\* Things get more difficult if you can have different but equivalent sets of hyperparameters, i.e. the hyperparameter space has several equivalent minima.
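For concreteness, here is a minimal scikit-learn sketch of nested cross validation (synthetic data standing in for your 7000×800 set, an arbitrary parameter grid, and with an SVM at this size it will be slow - it is only meant to show the structure):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=7000, n_features=800,
                           weights=[0.7, 0.3], random_state=0)

param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.001, 0.0001]}

# inner loop: hyperparameter tuning on the training part of each outer fold only
inner = GridSearchCV(make_pipeline(StandardScaler(), SVC()),
                     param_grid, cv=StratifiedKFold(5))

# outer loop: estimates the performance of the whole tuning-plus-training procedure
outer_scores = cross_val_score(inner, X, y, cv=StratifiedKFold(10))
print(outer_scores.mean(), outer_scores.std())
```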
Many machine learning classifiers (e.g. support vector machines) allow one to specify a kernel. What would be an intuitive way of explaining what a kernel is? One aspect I have been thinking of is the distinction between linear and non-linear kernels. In simple terms, I could speak of 'linear decision functions' and 'non-linear decision functions'. However, I am not sure if calling a kernel a 'decision function' is a good idea. Suggestions?
Kernel is a way of computing the dot product of two vectors $\mathbf x$ and $\mathbf y$ in some (possibly very high dimensional) feature space, which is why kernel functions are sometimes called "generalized dot product". Suppose we have a mapping $\varphi \, : \, \mathbb R^n \to \mathbb R^m$ that brings our vectors in $\mathbb R^n$ to some feature space $\mathbb R^m$. Then the dot product of $\mathbf x$ and $\mathbf y$ in this space is $\varphi(\mathbf x)^T \varphi(\mathbf y)$. A kernel is a function $k$ that corresponds to this dot product, i.e. $k(\mathbf x, \mathbf y) = \varphi(\mathbf x)^T \varphi(\mathbf y)$. Why is this useful? Kernels give a way to compute dot products in some feature space without even knowing what this space is and what is $\varphi$. For example, consider a simple polynomial kernel $k(\mathbf x, \mathbf y) = (1 + \mathbf x^T \mathbf y)^2$ with $\mathbf x, \mathbf y \in \mathbb R^2$. This doesn't seem to correspond to any mapping function $\varphi$, it's just a function that returns a real number. Assuming that $\mathbf x = (x\_1, x\_2)$ and $\mathbf y = (y\_1, y\_2)$, let's expand this expression: $\begin{align} k(\mathbf x, \mathbf y) & = (1 + \mathbf x^T \mathbf y)^2 = (1 + x\_1 \, y\_1 + x\_2 \, y\_2)^2 = \\ & = 1 + x\_1^2 y\_1^2 + x\_2^2 y\_2^2 + 2 x\_1 y\_1 + 2 x\_2 y\_2 + 2 x\_1 x\_2 y\_1 y\_2 \end{align}$ Note that this is nothing else but a dot product between two vectors $(1, x\_1^2, x\_2^2, \sqrt{2} x\_1, \sqrt{2} x\_2, \sqrt{2} x\_1 x\_2)$ and $(1, y\_1^2, y\_2^2, \sqrt{2} y\_1, \sqrt{2} y\_2, \sqrt{2} y\_1 y\_2)$, and $\varphi(\mathbf x) = \varphi(x\_1, x\_2) = (1, x\_1^2, x\_2^2, \sqrt{2} x\_1, \sqrt{2} x\_2, \sqrt{2} x\_1 x\_2)$. So the kernel $k(\mathbf x, \mathbf y) = (1 + \mathbf x^T \mathbf y)^2 = \varphi(\mathbf x)^T \varphi(\mathbf y)$ computes a dot product in 6-dimensional space without explicitly visiting this space. Another example is Gaussian kernel $k(\mathbf x, \mathbf y) = \exp\big(- \gamma \, \|\mathbf x - \mathbf y\|^2 \big)$. If we Taylor-expand this function, we'll see that it corresponds to an infinite-dimensional codomain of $\varphi$. Finally, I'd recommend an online course ["Learning from Data"](https://work.caltech.edu/telecourse.html) by Professor Yaser Abu-Mostafa as a good introduction to kernel-based methods. Specifically, lectures ["Support Vector Machines"](http://www.youtube.com/watch?v=eHsErlPJWUU&hd=1), ["Kernel Methods"](http://www.youtube.com/watch?v=XUj5JbQihlU&hd=1) and ["Radial Basis Functions"](http://www.youtube.com/watch?v=O8CfrnOPtLc&hd=1) are about kernels.
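A quick numeric check of the polynomial-kernel example above (just an illustration in NumPy): the kernel value computed in $\mathbb R^2$ equals the explicit dot product in the 6-dimensional feature space.

```python
import numpy as np

def phi(v):
    """The explicit feature map for the kernel (1 + x^T y)^2 in two dimensions."""
    x1, x2 = v
    return np.array([1, x1**2, x2**2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, np.sqrt(2) * x1 * x2])

x = np.array([0.3, -1.2])
y = np.array([2.0, 0.7])

kernel = (1 + x @ y) ** 2        # computed in R^2
explicit = phi(x) @ phi(y)       # computed in R^6
assert np.isclose(kernel, explicit)
```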
I am a microbiologist and I am currently self-studying machine learning from some open lecture videos. I am finding it pretty difficult to understand proofs that are somewhat "obvious"; my poor mathematical knowledge is to blame for that. Can you please recommend a textbook that deals with the mathematical proofs of the theorems? Or one that approaches machine learning from the mathematical side?
MOOC course on Machine Learning: <https://www.coursera.org/course/ml>

Guide to Mathematical Proofs from Berkeley: <https://math.berkeley.edu/~hutching/teach/proofs.pdf>

Enjoy!
In consecutive throws of an ordinary die, which of the following two possibilities is more likely to happen first: a) two successive occurrences of 5, or b) three successive appearances of numbers divisible by 3?

I thought that the "time to event" random variable follows a geometric distribution. In the first case the probability of the event would be $p\_1=1/36$, so the waiting time would be $E(X\_1)=\frac{1-p\_1}{p\_1}=35$, plus 2 (for the successive occurrences of 5) = 37. Similarly, the probability of the second event would be $p\_2=\frac{2^3}{6^3}$ and the waiting time for the second event would be $E(X\_2)=\frac{1-p\_2}{p\_2}=26$, plus 3 = 29.

As I was not sure about the validity of the above considerations, I decided to try some Monte Carlo simulations (the code can be found here: <http://ideone.com/TbLdDe>). According to the results, the second event will indeed happen first. But the expected values are larger than the ones calculated above (about 42 and 39, respectively). What is the right way to calculate the waiting time?
Generalizing the problem, let $X$ be the random variable for the number of trials it takes to achieve $r$ successes in a row, with $p=$ the probability of success. The probability of having *no* successes in a row at time $t$, for $t\ge 1$, is the probability of a failure at time $t$ preceded by a sequence to time $t-1$ that has not yet had $r$ successes, which is:

$$(1-p)(1-P(X\le t-1))$$

The probability of having $r$ successes at time $t+r$ following no successes at time $t$ is:

\begin{align}
P(X=t+r) &= p^r(1-p)(1-P(X\le t-1)) \\
P(X=t+r+1) &= p^r(1-p)(1-P(X\le t))
\end{align}

Then taking sums from $t=0$ to $\infty$:

\begin{align}
\sum\_{t=0}^\infty P(X=t+r+1) &= \sum\_{t=0}^\infty p^r(1-p)(1-P(X\le t))\\
\sum\_{t=0}^\infty P(X=t) - \sum\_{t=0}^r P(X=t) &= p^r(1-p)\sum\_{t=0}^\infty (1-P(X\le t))\\
\end{align}

Finally using the facts that:

* the sum of a probability distribution is 1: $\sum\_{t=0}^\infty P(X=t) = 1$
* the probabilities of seeing $r$ successes in the first $r$ trials are trivially $P(X=t)=0$ for $t<r$ and $P(X=r)=p^r$
* the expectation of a random variable taking non-negative values can be written in terms of its cumulative distribution function: $E[X] = \sum\_{x=0}^\infty (1-F(x))$

we have $$E(X) = \frac1{p^r(1-p)}(1-p^r)$$

For $r=1$, as expected this simplifies to the same formula as for the [geometric distribution](http://en.wikipedia.org/wiki/Geometric_distribution): $E(X) = 1/p$. The cases

* $p=1/6, r=2 \implies E(X)=6^3/5\times(1-1/6^2)=42$, and
* $p=1/3,r=3 \implies E(X)= 3^4/2\times(1-1/3^3)=39$

confirm your simulations.
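A quick Monte Carlo sanity check of the closed form (my own sketch in Python):

```python
import random

def waiting_time(p, r, trials=100_000):
    """Average number of rolls until r successes in a row, success probability p."""
    total = 0
    for _ in range(trials):
        run = steps = 0
        while run < r:
            steps += 1
            run = run + 1 if random.random() < p else 0
        total += steps
    return total / trials

print(waiting_time(1/6, 2))   # formula: (1 - p^r) / (p^r (1 - p)) = 42
print(waiting_time(1/3, 3))   # formula: 39
```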
Let $X\_1$, $X\_2$, ..., $X\_n$ be iid RV's with range $[0,1]$ but unknown distribution. (I'm OK with assuming that the distribution is continuous, etc., if necessary.) Define $S\_n = X\_1 + \cdots + X\_n$. I am given $S\_k$, and ask: What can I infer, in a Bayesian manner, about $S\_n$? That is, I am given the sum of a sample of size $k$ of the RV's, and I would like to know what I can infer about the distribution of the sum of all the RV's, using a Bayesian approach (and assuming reasonable priors about the distribution). If the support were $\{0,1\}$ instead of $[0,1]$, then this problem is well-studied, and (with uniform priors) you get beta-binomial compound distributions for the inferred distribution on $S\_n$. But I'm not sure how to approach it with $[0,1]$ as the range... **Full disclosure**: I already [posted this on MathOverflow](https://mathoverflow.net/questions/90580/bayesian-inference-on-sum-of-random-variables), but was told it would be better posted here, so this is a re-post.
Forgive the lack of measure theory and abuses of notation in the below... Since this is Bayesian inference, there must be some prior on the unknown in the problem, which in this case is the distribution of $X\_1$, an infinite-dimensional parameter taking values in the set of distributions on $[0, 1]$ (call it $\pi$). The data distribution $S\_k|\pi$ converges to a normal distribution, so if $k$ is large enough ([Berry-Esseen theorem](http://en.wikipedia.org/wiki/Berry-Esseen_theorem)) we can just slap in that normal as an approximation. Furthermore, if the approximation is accurate the only aspect of the prior $p(\pi)$ that matters in practical terms is the induced prior on $(\text{E}\_\pi(X\_1),\text{Var}\_\pi(X\_1))=(\mu,\sigma^2)$. Now we do standard Bayesian prediction and put in the approximate densities. ($S\_n$ is subject to the same approximation as $S\_k$.) $p(S\_n|S\_k) = \int p(\pi|S\_k)p(S\_n|\pi,S\_k)d\pi$ $p(S\_n|S\_k) = \int \frac{p(\pi)p(S\_k|\pi)}{p(S\_k)}p(S\_n|\pi,S\_k)d\pi$ $p(S\_n|S\_k) \approx \frac{\int p(\mu,\sigma^2)\text{N}(S\_k|k\mu,k\sigma^2)\text{N}(S\_n|(n-k)\mu + S\_k, (n-k)\sigma^2) d(\mu,\sigma^2)}{\int p(\mu,\sigma^2)\text{N}(S\_k|k\mu,k\sigma^2) d(\mu,\sigma^2)}$ For the limits of the integral, $\mu \in [0, 1]$, obviously; I think $\sigma^2 \in [0,\frac{1}{4}]$? Added later: no, $\sigma^2 \in [0,\mu(1-\mu)].$ This is nice -- the allowed values of $\sigma^2$ depend on $\mu$, so info in the data about $\mu$ is relevant to $\sigma^2$ too.
I have a dataset size of ~500000 with input dimension 46. I am trying to use Pybrain to train the network but the training is extremely slow for the whole dataset. Using batches of 50000 data points, each batch takes more than 2 hours for training. What are caveats of optimizing the network design so that the training is faster?
Here are some of the things that influence your training speed: * Number of weights in your network * Speed of your CPU * Package you are using (mostly engine it is working on, in PyLearn this is Theano) * If all your data fits in memory or you are reading from disk in between batches With regards to network design the only thing you can really do is make the network more shallow to reduce the number of weights. To reduce the number of epochs there might be other options like adding residual connections but that will not decrease the training time of 1 epoch. Without more information it is unclear where the bottleneck is, but 20 hours for one epoch seems a bit high. The easiest and biggest improvement you will be able to get is to use a good GPU, which should be possible using pylearn since it is built on top of Theano.
How to efficiently ¹⁾ choose from a set of numbers $S$, a given number $n$ of disjoint subsets, each with a given sum $K$ of chosen elements? ¹⁾ Not as in $P$, I just want something smarter than $O(n^{|S|})$. --- **Ex.** Let’s say we want $n=3$ subsets with the sum of $K=3$ chosen from $S=[1,1,1,2,2,2]$. The correct solution is $[1,2], [1,2], [1,2]$. --- **Ex.** Let’s say we want $n=2$ subsets with the sum of $K=5$ chosen from $S=[5,4,3,2,2,2,2,1]$. One correct solution is $[5], [1,4]$. Another one would be $[2,2,1], [2,3]$. --- Etc. Just *a* correct solution, regardless of which one.
Let us define a predicate $$T(i,k\_1,...,k\_n)\in\{True, False\}$$ where $T(i,k\_1,...,k\_n)$ means "using the $i$ first values of $S$, we can find $n$ disjoint subsets with sums $k\_1, ... k\_n$". The answer you are looking for is $T(|S|,K,..,K)$ where $K$ appears $n$ times. We have the following recurrence formula: $$T(i,k\_1,...,k\_n) = \left\{ \begin{matrix} T(i-1,k\_1,...,k\_n) & \text{// we don't use value $x\_i$} \\ \vee T(i-1,k\_1-x\_i,...,k\_n) \text{ if $x\_i \leq k\_1$} &\text{// we put value $x\_i$ in set 1}\\ \vdots \\ \vee T(i-1,k\_1,...,k\_n-x\_i) \text{ if $x\_i \leq k\_n$} &\text{// we put value $x\_i$ in set n}\\ \end{matrix}\right.$$ This formula is a big boolean "or"; I don't know how I could make it look better. We also have the following initialisation: $\forall i, T(i,0,...,0) = True$. You can see it as a big $n+1$-dimensional array, where the first dimension has length $|S|+1$ and the others have length $K+1$, which gives $O(K^n|S|)$ values. In that array, all the values can be computed from the other values in time $O(n)$ (the big "or" is over $n+1$ values). Therefore, the worst case complexity is $O(nK^n|S|)$. About implementation, it might be easier to do it recursively with memoization. However, you're gonna blow up the stack really fast, so you might want to think of something iterative. To get the actual values in each set, run a backtracking algorithm once you have that truth array.
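For illustration, here is a hedged Python sketch of the same recurrence, written recursively with memoization (my own illustration; it also reconstructs one solution directly instead of running a separate backtracking pass). For large inputs you would still prefer the iterative table to avoid deep recursion.

```python
from functools import lru_cache

def find_subsets(S, n, K):
    """Return n disjoint subsets of S, each summing to K, or None if impossible."""
    S = list(S)

    @lru_cache(maxsize=None)
    def T(i, targets):
        # base cases of the recurrence
        if all(t == 0 for t in targets):
            return tuple(() for _ in targets)          # n empty subsets suffice
        if i == 0:
            return None
        x = S[i - 1]
        skip = T(i - 1, targets)                       # don't use S[i-1]
        if skip is not None:
            return skip
        for j, t in enumerate(targets):
            if x <= t:                                  # put S[i-1] into set j
                rest = targets[:j] + (t - x,) + targets[j + 1:]
                sub = T(i - 1, rest)
                if sub is not None:
                    return sub[:j] + (sub[j] + (x,),) + sub[j + 1:]
        return None

    return T(len(S), (K,) * n)

print(find_subsets([1, 1, 1, 2, 2, 2], n=3, K=3))       # three disjoint subsets summing to 3
print(find_subsets([5, 4, 3, 2, 2, 2, 2, 1], n=2, K=5))  # two disjoint subsets summing to 5
```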
Can we algorithmically infer the number of comets that orbit the Earth, based upon periodic observations of them, if we cannot tell the comets apart? In more detail:

### The problem

* There is an unknown, fixed number of comets, `numComets`, that pass by Earth and are visible from the ground.
* Each comet takes a fixed number of years to orbit Earth.
* Based only on this knowledge and the dates on which a comet was sighted in the night sky, determine `numComets`, **AND** determine the orbital period of each comet. You can use an unlimited amount of observation data (time), but the solution in the shortest amount of time is preferable.

### Assumptions you can make

* All comets look identical from the ground; there is no way to visually identify them.
* No other object will be mistaken for a comet, and all comets will be seen.
* No comet takes more than 100 years to orbit Earth, and no new comets will be introduced.

As current answers have pointed out, the problem as it stands is unsolvable. Would a solution be possible if `numComets` were known? (Could we even roughly get the right answer?) What modifications to the problem would have to be made otherwise, to still encapsulate the spirit of the problem and make it solvable? (Or is it dead, with no hope of solving it accurately?)

### Further assumption

All comets start at a different point in their orbit. I.e. they do not all start their orbit at the same place like racehorses coming out of a gate.

### Clarification

Comets are only recorded once per year, so there is no difference between a comet that passes at the beginning of a year and one that passes at the end. The problem could just as easily have been worded in days.
Based on the restriction you gave, that you cannot tell one comet from another, there is no definitive method of calculation.

Here is a proposed situation: there are 2 comets following the same trajectory, each with an orbital period of exactly ten years, so each one is seen every ten years; they are, however, exactly 5 years apart in their orbits. From the ground, if we cannot differentiate between these two comets, we only see a comet appear overhead regularly every 5 years. There is no way to tell whether this is one comet appearing every 5 years, or two comets each appearing every ten years but 5 years apart.

It is therefore impossible to deterministically say how many comets there are, so no algorithm can give you `numComets` under the assumptions that you have specified.
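A tiny Python sanity check of this aliasing argument (a sketch under the question's own simplification that the only data is a list of years in which some comet was seen; the representation here is my own choice):

```python
def sighting_years(comets, horizon=100):
    """comets: list of (period, first_year) pairs.
    Returns the sorted years in which *some* comet is seen; identities
    are not observable, so this is all the data an observer ever gets."""
    return sorted(year
                  for period, first in comets
                  for year in range(first, horizon, period))

one_comet = sighting_years([(5, 0)])             # one comet, 5-year period
two_comets = sighting_years([(10, 0), (10, 5)])  # two comets, 10-year periods, 5 years apart
print(one_comet == two_comets)  # True: the two worlds are observationally identical
```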
When designing an algorithm for a new problem, if I can't find a polynomial time algorithm after a while, I might try to prove it is NP-hard instead. If I succeed, I've explained why I couldn't find the polynomial time algorithm. It's not that I know for sure that P != NP, it's just that this is the best that can be done with current knowledge, and indeed the consensus is that P != NP. Similarly, say I've found a polynomial-time solution for some problem, but the running time is $O(n^2)$. After a lot of effort, I make no progress in improving this. So instead, I might try to prove that it is 3SUM-hard instead. This is usually a satisfactory state of affairs, not because of my supreme belief that 3SUM does indeed require $\Theta(n^2)$ time, but because this is the current state of the art, and a lot of smart people have tried to improve it, and have failed. So it's not my fault that it's the best I can do. In such cases, the best we can do is a hardness result, in lieu of an actual lower bound, since we don't have any super-linear lower bounds for Turing Machines for problems in NP. Is there a uniform set of problems that can be used for all polynomial running times? For example, if I want to prove that it is unlikely that some problem has an algorithm better than $O(n^7)$, is there some problem X such that I can show it is X-hard and leave it at that? **Update**: This question originally asked for families of problems. Since there aren't that many families of problems, and this question has already received excellent examples of individual hard problems, I'm relaxing the question to any problem that can be used for polynomial-time hardness results. I'm also adding a bounty to this question to encourage more answers.
Yes, the best known algorithm for $k$-SUM runs in $O(n^{\lceil k/2 \rceil})$ time, so it's very possible that you could argue some $n^7$ problem is difficult, because if it were solvable in $n^{6.99}$ time then you could solve $14$-SUM faster.

Note the $k$-SUM problem gets "easier" as $k$ increases: given an improved algorithm for $k$-SUM, it's fairly easy to get an improved algorithm for $2k$-SUM: take all $O(n^2)$ pairs of the $n$ numbers in your given $2k$-SUM instance, replace each pair with the sum of the two, and look for $k$ of those pair-sums that add up to $0$. Then, an $O(n^{k/2-\varepsilon})$ algorithm for $k$-SUM implies an $O(n^{k-2\varepsilon})$ algorithm for $2k$-SUM. Put another way, a tight lower bound for $2k$-SUM is a stronger assumption than a tight lower bound for $k$-SUM.

Another candidate for a hard problem is $k$-Clique. [See my $O(\log n)$-Clique answer for more on that](https://cstheory.stackexchange.com/questions/609/what-are-the-best-known-upper-bounds-and-lower-bounds-for-computing-olog-n-cliq/614#614). If you can show (for example) that a better algorithm for your problem implies an $O(n^2)$ algorithm for $3$-clique, then a super-breakthrough would be required to improve on your algorithm. Parameterized complexity gives many examples of other problems like this: $k$-Clique is hard for the class $W\[1\]$, and $k$-SUM is hard for $W\[2\]$.

Let me warn you that although problems like this are very convenient to work with, problems like $3$-SUM are *not* among the "hardest" in $TIME[n^2]$; e.g., it is very unlikely that every problem in $TIME[n^2]$ can actually be linear-time reduced to $3$-SUM. This is because $3$-SUM can be solved with $O(\log n)$ bits of nondeterminism in linear time, so if everything in quadratic time could be reduced to $3$-SUM, then $P \neq NP$ and other fantastic consequences would follow. More on this point can be found in the article ["How hard are $n^2$-hard problems?"](http://portal.acm.org/citation.cfm?id=181465) (At some point, "3SUM-hard" was called "$n^2$-hard"; this SIGACT article rightly complained about that name.)
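For reference, here is the classic quadratic $3$-SUM algorithm in Python that such hardness results take as their baseline (a standard sort-plus-two-pointer sketch, not tied to any particular paper):

```python
def three_sum_zero(a):
    """Return indices (i, j, k) with a[i] + a[j] + a[k] == 0, or None.
    Sort once, then for each smallest element scan with two pointers:
    O(n log n + n^2) = O(n^2) time overall."""
    idx = sorted(range(len(a)), key=lambda i: a[i])
    vals = [a[i] for i in idx]
    n = len(vals)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = vals[i] + vals[lo] + vals[hi]
            if s == 0:
                return (idx[i], idx[lo], idx[hi])
            if s < 0:
                lo += 1   # sum too small: move the lower pointer up
            else:
                hi -= 1   # sum too large: move the upper pointer down
    return None

print(three_sum_zero([8, -25, 4, 10, 21, -3, -1]))  # e.g. indices of (4, 21, -25)
```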
I'll propose this question by means of an example. Suppose I have a data set, such as the boston housing price data set, in which I have continuous and categorical variables. Here, we have a "quality" variable, from 1 to 10, and the sale price. I can separate the data into "low", "medium" and "high" quality houses by (arbitrarily) creating cutoffs for the quality. Then, using these groupings, I can plot histograms of the sale price against each other. Like so: [![housing quality and sale price](https://i.stack.imgur.com/9doGl.png)](https://i.stack.imgur.com/9doGl.png) Here, "low" is $\leq 3$, and "high" is $>7$ on the "quality" score. We now have a distribution of the sale prices for each of the three groups. It is clear that there is a difference in the center of location for the medium and high quality houses. Now, having done all this, I think "Hm. There appears to be a difference in center of location! Why don't I do a t-test on the means?". Then, I get a p-value that appears to correctly reject the null hypothesis that there is no difference in means. Now, suppose that I had nothing in mind for testing this hypothesis *until* I plotted the data. *Is this data dredging?* Is it still data dredging if I thought: *"Hm, I bet the higher quality houses cost more, since I am a human that has lived in a house before. I'm going to plot the data. Ah ha! Looks different! Time to t-test!"* Naturally, it is not data-dredging if the data set were collected with the intention of testing this hypothesis from the get-go. But often one has to work with data sets given to us, and are told to "look for patterns". How does someone avoid data dredging with this vague task in mind? Create hold out sets for testing data? Does visualization "count" as snooping for an opportunity to test a hypothesis suggested by the data?
Briefly disagreeing with/giving a counterpoint to @ingolifs's answer: yes, visualizing your data is essential. But visualizing before deciding on the analysis leads you into Gelman and Loken's [garden of forking paths](http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf). This is not the same as data-dredging or p-hacking, partly through intent (the GoFP is typically well-meaning) and partly because you may not run more than one analysis. But it *is* a form of snooping: because your analysis is data-dependent, it can lead you to false or overconfident conclusions.

You should in some way determine what your *intended* analysis is (e.g. "high quality houses should be higher in price") and write it down (or even officially preregister it) before looking at your data (it's OK to look at your *predictor* variables in advance, just not the response variable(s), but if you really have no *a priori* ideas then you don't even know which variables might be predictors and which might be responses); if your data suggest some different or additional analyses, then your write-up can state both what you meant to do initially and what you ended up doing (and why).

If you are really doing pure exploration (i.e., you have no *a priori* hypotheses, you just want to see what's in the data):

* Your thought about holding out a sample for confirmation is good.
  + In my world (I don't work with huge data sets) the loss of resolution due to having a lower sample size would be agonizing.
  + You need to be a bit careful in selecting your holdout sample if your data are structured in any way (geographically, time series, etc.). Subsampling as though the data are iid leads to overconfidence (see Wenger and Olden, *Methods in Ecology and Evolution*, 2012), so you might want to pick out whole geographic units to hold out (see DJ Harris, *Methods in Ecology and Evolution*, 2015, for an example); see the short code sketch at the end of this answer.
* You can admit that you're being purely exploratory. Ideally you would eschew p-values entirely in this case, but at least telling your audience that you are wandering in the GoFP lets them know that they can take the p-values with enormous grains of salt.

My favorite reference for "safe statistical practices" is Harrell's *Regression Modeling Strategies* (Springer); he lays out best practices for inference vs. prediction vs. exploration in a rigorous but practical way.
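To illustrate the grouped-holdout point, here is a minimal sketch using scikit-learn's `GroupShuffleSplit`; the data frame and its `region` column are hypothetical placeholders of mine, not anything from the question's data set:

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical data: `region` marks the geographic unit each row belongs to.
df = pd.DataFrame({
    "region": ["north", "north", "south", "south", "east", "east"],
    "x":      [1.2, 0.7, 3.1, 2.9, 0.4, 0.8],
    "y":      [10, 11, 25, 24, 7, 8],
})

splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=42)
train_idx, holdout_idx = next(splitter.split(df, groups=df["region"]))
train, holdout = df.iloc[train_idx], df.iloc[holdout_idx]

# Every region lands entirely in `train` or entirely in `holdout`, so the
# confirmation set is not quietly correlated with the exploration set.
assert set(train["region"]).isdisjoint(holdout["region"])
```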
Let's say you have pixel bitmaps that look something like this: [![example snake](https://i.stack.imgur.com/vAI2n.png)](https://i.stack.imgur.com/vAI2n.png) From this I can easily extract a contour, which will be a concave polygon defined by a set of 2D points. The question is what is the fastest algorithm to pick, from the set of polygon points, the ones that are closest to the natural "endings" of the contour (i.e. the two tips at the end of the U in this case) - so there are $2$ points in the output. Somehow it looks like it should be related to curvature, although the algorithm should support a large variety of possible shapes, including S, W and other largely curved shapes so I'm hesitant to set any kind of threshold on curvature. I've tried convex hull methods as well as a couple of variations of the [rotating calipers](http://en.wikipedia.org/wiki/Rotating_calipers) method but still I've found nothing that will convince me that I can reliably and quickly identify the endings/tips of any curved thin line. It's always impressive how humans can pick up on these natural features so fast!
Run a flood fill (BFS) from any point on the curve; the farthest points reached in the two directions along the curve are your result. If the distance in one of the directions is zero, the starting point itself was one of the tips.

Exploiting the very same idea, you can look for pixels with the smallest number of neighbouring foreground pixels and run a small, depth-limited BFS flood fill from each of them to see where it can go: at a true endpoint there is only one way out.

You can also combine this with curvature: take high-curvature points as candidates and apply the same local check to tell the extra bends apart from the real tips.
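A rough Python sketch of the double-BFS variant of this idea (BFS from any foreground pixel to its farthest pixel, which is one tip, then BFS again from that tip to get the other); the 8-connectivity choice and the plain list-of-lists mask are assumptions of mine:

```python
from collections import deque

def curve_endpoints(mask):
    """Find the two 'tips' of a thin curve in a binary image.
    mask: 2D list/array of 0/1 with at least one foreground pixel."""
    h, w = len(mask), len(mask[0])
    start = next((r, c) for r in range(h) for c in range(w) if mask[r][c])

    def farthest(src):
        # plain BFS over 8-connected foreground pixels, tracking the farthest one
        dist = {src: 0}
        far = src
        q = deque([src])
        while q:
            r, c = q.popleft()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < h and 0 <= nc < w and mask[nr][nc]
                            and (nr, nc) not in dist):
                        dist[(nr, nc)] = dist[(r, c)] + 1
                        if dist[(nr, nc)] > dist[far]:
                            far = (nr, nc)
                        q.append((nr, nc))
        return far

    tip1 = farthest(start)   # farthest pixel from an arbitrary start = one tip
    tip2 = farthest(tip1)    # farthest pixel from that tip = the other tip
    return tip1, tip2
```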
I was given the following question (please don't mind the programming language semantics, it's a language-agnostic question):

Given a list of `Person`s, and two arbitrary `Person`s out of that list, we need to find the minimum *nth-degree* relationship between them. Here are the definitions of `Person` and a "relationship":

* A `Person` is defined as having 2 properties: `Name` and `Age`:

```
class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
```

* A relationship between two `Person`s is defined as follows:

  1. Two `Person`s are considered to be in a *1st-degree* relationship if they have *either* the same name or the same age.
  2. Two `Person`s are considered to be in a *nth-degree* relationship if they have n people of 1st-degree connecting them.

Example input: Given the following list of `Person`s:

```
persons = [{ Name = "John", Age = 60 },
           { Name = "John", Age = 50 },
           { Name = "Ted",  Age = 50 }]
```

Then:

1. The two `John`s have a **1st degree** relationship (because they have the same name).
2. The second `John` and `Ted` have a **1st degree** relationship (because they have the same age).
3. Hence, the first `John` and `Ted` have a **2nd degree** relationship (because the second `John` connects them).

Now, I understand that it's a simple Dijkstra's algorithm question, but what I don't know is **how should we build the graph of `Person`s**? I'm looking for an algorithm, but preferably code, that can build the graph in a time complexity which is better than $O(|V|^2)$.

If you think this question can be solved without building a graph (e.g., using BFS as mentioned in the comments), please let me know how this can be done, but I still want to know how to build the graph.
Since your only parameters are name and age, you can make two sorted copies of the list and then construct the graph as an adjacency list:

$n \gets$ size of the set $\mathbf{persons}$
$V \gets \{\ \}$  // set of vertices
$\mathit{adj} \gets [\ ]$  // adjacency list
**for** $i = 1 \to n$ **do**
  $\mathbf{persons}[i].\mathit{id} \gets i$
  $\mathit{adj}[i] \gets [\ ]$  // adjacency list of vertex $v\_i$
**endfor**
$A \gets \mathbf{persons}$ sorted by $\mathit{name}$
$B \gets \mathbf{persons}$ sorted by $\mathit{age}$
**for** $i = 2 \to n$ **do**
  **if** $A\_i.\mathit{name} == A\_{i-1}.\mathit{name}$ **then**
    $Q \gets \{A\_{i-1}.\mathit{id}\}$
    $j \gets i$
    **while** $j \leq n$ **and** $A\_j.\mathit{name} == A\_{i-1}.\mathit{name}$ **do**
      $Q \gets Q \cup \{A\_j.\mathit{id}\}$
      $j \gets j+1$
    **endwhile**
    add\_clique($Q$)  // adds edges between all pairs in $Q$, in time proportional to the number of edges added
    $i \gets j$
  **endif**
**endfor**
**for** $i = 2 \to n$ **do**
  **if** $B\_i.\mathit{age} == B\_{i-1}.\mathit{age}$ **then**
    $Q \gets \{B\_{i-1}.\mathit{id}\}$
    $j \gets i$
    **while** $j \leq n$ **and** $B\_j.\mathit{age} == B\_{i-1}.\mathit{age}$ **do**
      $Q \gets Q \cup \{B\_j.\mathit{id}\}$
      $j \gets j+1$
    **endwhile**
    add\_clique($Q$)
    $i \gets j$
  **endif**
**endfor**

The above takes $O(n \log n)$ time to sort and $O(n + m)$ time to create the adjacency list, where $m$ is the number of edges added by add\_clique; this is less than $O(n^2)$ unless the number of edges is itself $\Theta(n^2)$.
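For what it's worth, here is a compact Python sketch of the same idea (grouping with hash maps instead of sorting, which has the same effect here; the dict-based `Person` representation is just for illustration), together with the BFS that replaces Dijkstra since all edges have weight 1:

```python
from collections import defaultdict, deque

def build_graph(persons):
    """Group person indices by name and by age, then connect everyone
    inside each group. Grouping is O(n) expected time; edge creation is
    proportional to the number of edges actually added."""
    groups = defaultdict(list)
    for i, p in enumerate(persons):
        groups[("name", p["Name"])].append(i)
        groups[("age", p["Age"])].append(i)
    adj = defaultdict(set)
    for members in groups.values():
        for i in members:
            for j in members:
                if i != j:
                    adj[i].add(j)
    return adj

def relationship_degree(persons, a, b):
    """BFS distance in edges matches the question's degree (John #1 -> Ted is 2)."""
    adj = build_graph(persons)
    dist = {a: 0}
    q = deque([a])
    while q:
        v = q.popleft()
        if v == b:
            return dist[v]
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return None  # not related at any degree

persons = [{"Name": "John", "Age": 60},
           {"Name": "John", "Age": 50},
           {"Name": "Ted",  "Age": 50}]
print(relationship_degree(persons, 0, 2))  # 2
```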
Can you give some real-world examples of graph algorithms that people are actually using in applications? Given a complicated graph, say a social network, what properties/quantities do people want to know about it?

It would be great if you could give some references. Thanks.
Chances are high you found your way to the Stack Exchange network via Google or another search engine. Google uses the [PageRank algorithm](https://en.wikipedia.org/wiki/PageRank), which models webpages and links in them as a directed graph. The algorithm itself is perhaps more linear algebra than graph theory (it looks for an eigenvector for the graph's adjacency matrix), but given that the majority of the Earth population uses it on a daily/weekly basis, it should definitely count as an important real world application of graphs.
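To make the eigenvector remark concrete, here is a toy power-iteration PageRank in Python (an illustrative sketch only; the production system is of course far more elaborate, and the 0.85 damping factor is simply the commonly cited value):

```python
def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank on a directed graph given as
    {page: [pages it links to]}; every linked page must appear as a key."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}   # random-jump mass
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)   # split rank over out-links
                for q in outs:
                    new[q] += share
            else:                                       # dangling page: spread uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```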