Through backward elimination I get a ranking of features over multiple datasets. For example, in dataset 1 I have the following ranking, with the feature at the top being the most important: 1. feat. 1, 2. feat. 2, 3. feat. 3, 4. feat. 4, ..., whereas for dataset 2 I have, for example, the following ranking: 1. feat. 3, 2. feat. 1, 3. feat. 2, 4. feat. 4. I want to filter out those features which end up at the top of the rankings most often (incorporating that finishing at the top is better than finishing in, say, 3rd place). Which kind of ranking metric can I use for this problem?
Sounds like a job for [Dynamic Time Warping](https://en.wikipedia.org/wiki/Dynamic_time_warping); there are implementations in Python and R.
I am given the following decision problem: A program $ \Pi $ takes as input a pair of strings and outputs either $true$ or $false$. It is guaranteed that $\Pi$ terminates on any input. Does there exist a pair ($I\_1,I\_2$) of strings such that $\Pi$ terminates on ($I\_1,I\_2$) with output value $true$? It is clear that this problem is semi-decidable, and to prove this I am asked to give a semi-decision procedure. However, how do I enumerate *all* possible pairs of strings? Or how do I enumerate all possible (single) strings in general? Of course, such a program may never terminate, but that is no problem because I am only asking for semi-decidability. EDIT2: [Solution (Java)](http://pastebin.com/7BBCbdq9)
Asking how you can study computer science without computers is a bit like asking how you can study cosmology without telescopes. Sure, it's nice to be able to look at the things you're studying and it's often very helpful to be able to play around with things. But there's a whole lot you can do without access to a computer: *in extremis*, you could probably do almost all of an undergrad course with no computers. In practical terms, access to computers helps reinforce a lot of what you learn in a computer science course. Programming courses are, obviously, much more natural with access to a computer. On the other hand, being forced to write code on paper does encourage people to think about their code and make sure it really works, rather than just running it through a compiler again and again until it compiles and then running trivial test cases again and again until the obvious bugs go away. The topics that would be most natural without computers are the more mathematical ones: all the background mathematics, such as combinatorics and probability; computability, formal languages, logic, complexity theory, algorithm design and analysis, information and coding theory; and anything to do with quantum computation!
As we know, Graph Isomorphism is in NP but it is not known to be NP-complete or P-complete. I was wondering: are there any problems that are known to be in PSPACE but not known to be PSPACE-complete and not known to lie in PH?
Copying my comment: If you meant to ask for a problem analogous to GI, then perhaps you're asking for a problem that's not in PH and not PSPACE-complete. Problems complete for any class not known to be contained in PH, but contained in PSPACE, will work as an example. So take any problem complete for BQP, QMA, PP, etc.
How do I get the expected time complexity of the while loop below?

```
while true:
    case 1: return 0 with probability p(1 - p)
    case 2: return 1 with probability p(1 - p)
    case 3: otherwise repeat the loop until it returns 0 or 1
```

I understand that the probability that this loop runs only one time is $2p(1 - p)$, but I cannot work out what the expected running time is. Can anyone explain what it is and why?
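For reference, here is the computation being asked about, sketched under the assumption that the iterations are independent and each one stops with probability $q = 2p(1-p)$, so the number of iterations $T$ is geometric:

$$q = 2p(1-p), \qquad \Pr[T = k] = (1-q)^{k-1} q, \qquad \mathbb{E}[T] = \sum\_{k \ge 1} k\,(1-q)^{k-1} q = \frac{1}{q} = \frac{1}{2p(1-p)}.$$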
A simple re-look at the terms used in the question will provide the answer. A process is a program in execution. Often a process consists of multiple software threads, and the work of the process is divided among the threads. If the work done by the threads is relatively independent, they can execute concurrently on the available processor cores. Most modern processors consist of multiple cores, where each core is capable of executing **at least** one software thread. So if the processor consists of n cores, n different software threads can execute concurrently on the n cores. It is not necessary that all n threads belong to the same process. The number of software threads **can be higher than n (the number of cores)** if the threads perform a lot of relatively slow memory accesses and/or I/O operations. So more than one thread can share a core, running alternately, giving the user the impression that they are running concurrently. They are actually executing in a time-sliced manner: one thread runs for a few cycles, then it is removed from the processor and another thread executes for a few cycles, and so on. No parallel execution of a multi-threaded process is possible unless threads of the process execute concurrently.
I was wondering what tools that people in this field (theoretical computer science) use to create presentations. Since a great deal of computer science is not just writing papers but also giving presentations I thought that this would be an important soft question. This is inspired by the previous question [what tools do you use to write papers](https://cstheory.stackexchange.com/questions/2255/what-tools-do-you-use-to-write-papers). The most common that I have seen are as follows. * [Beamer](https://bitbucket.org/rivanvx/beamer/wiki/Home) * [Microsoft PowerPoint](http://en.wikipedia.org/wiki/Microsoft_PowerPoint) * [LaTeX](http://www.latex-project.org/) * [GraphViz](http://www.graphviz.org/) I was wondering if there are any other tricks that I am missing?
[Keynote](http://www.apple.com/iwork/keynote/) is one of the popular options, though I use PowerPoint.
It's my understanding that when you XOR something, [the result is the sum of the two numbers mod $2$](https://en.wikipedia.org/wiki/Exclusive_or#Properties). Why then does $4 \oplus 2 = 6$ and not $0$? $4+2=6$, $6 \bmod 2$ doesn't equal $6$. I must be missing something about what "addition modulo 2" means, but what?

> 100 // 4
>
> 010 // XOR against 2
>
> 110 = 6 // why not zero if xor = sum mod 2?
The confusion here stems from a missing word. A correct statement is "The result of XORing two **bits** is the same as that of adding those two **bits** mod 2." For example, $(0+1)\bmod 2 = 1\bmod 2 = 1=(0\text{ XOR }1)$ and $(1+1) \bmod 2= 2\bmod 2 = 0 =(1\text{ XOR }1)$
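A quick R sanity check of the per-bit statement (a small sketch; `intToBits()` returns the bits least-significant first):

```
a <- 4L; b <- 2L
bits <- function(x, n = 8) as.integer(intToBits(x))[1:n]  # bits, least-significant first
xor_bits <- (bits(a) + bits(b)) %% 2                      # add the corresponding bits mod 2
sum(xor_bits * 2^(0:7))                                   # 6, the same as ...
bitwXor(a, b)                                             # ... the built-in bitwise XOR
```

No carries move between bit positions, which is why the per-bit sums mod 2 reassemble to 6 rather than to $(4+2) \bmod 2$.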
What is the difference between a single processing unit of a CPU and a single processing unit of a GPU? Most places I've come across on the internet cover the high-level differences between the two. I want to know what instructions each can perform, how fast they are, and how these processing units are integrated into the complete architecture. It seems like a question with a long answer, so lots of links are fine. In the CPU, the FPU runs real-number operations. How fast are the same operations in each GPU core? If they are fast, why are they fast? I know my question is very generic, but my goal is to have such questions answered.
These are not real numbers as in $\mathbb{R}$, but floating-point numbers: at this point a CPU has double-precision floating-point units, a GPU has a very low number of units processing doubles, and floats on a GPU are often *halfs*. This is due to graphics (which was the main goal before general parallel processing), where results are rounded for display, so the speed-vs-accuracy tradeoff went that way. GPU core frequencies are lower than CPUs', the set of operations is very limited on a GPU (boosted by the video decoder), and there is a huge difference in branch prediction - a CPU has very long and complex prediction, while a GPU only recently got it added. A single "core" on a GPU is a streaming multiprocessor (there are about 4-16 per card); it includes CUDA cores (about 32-64), and these work in lock-step, so it differs from CPU threads (which are not locked). It is hard to compare like this, but in short: a single core on a GPU is still a parallel unit working slower than a CPU core, with less memory, fewer registers and instructions than a CPU, very short branch prediction, and a preference for *half* floats - nowadays normal floats, but with only about one or two processing units for double precision. Some time ago integer operations were slower on a GPU (not only because of frequency), but this changed recently. The same operations on floats are slower on a GPU than on a CPU due to frequency. You might be interested in the [AMD architecture](http://developer.amd.com/resources/documentation-articles/developer-guides-manuals/), [Nvidia architecture](https://developer.nvidia.com/key-technologies) and [Intel architecture](http://www.intel.eu/content/www/eu/en/processors/architectures-software-developer-manuals.html) documentation to compare instruction sets and hardware differences further.
I'll explain my problem with an example. Suppose you want to predict the income of an individual given some attributes: {Age, Gender, Country, Region, City}. You have a training dataset like so ``` train <- data.frame(CountryID=c(1,1,1,1, 2,2,2,2, 3,3,3,3), RegionID=c(1,1,1,2, 3,3,4,4, 5,5,5,5), CityID=c(1,1,2,3, 4,5,6,6, 7,7,7,8), Age=c(23,48,62,63, 25,41,45,19, 37,41,31,50), Gender=factor(c("M","F","M","F", "M","F","M","F", "F","F","F","M")), Income=c(31,42,71,65, 50,51,101,38, 47,50,55,23)) train CountryID RegionID CityID Age Gender Income 1 1 1 1 23 M 31 2 1 1 1 48 F 42 3 1 1 2 62 M 71 4 1 2 3 63 F 65 5 2 3 4 25 M 50 6 2 3 5 41 F 51 7 2 4 6 45 M 101 8 2 4 6 19 F 38 9 3 5 7 37 F 47 10 3 5 7 41 F 50 11 3 5 7 31 F 55 12 3 5 8 50 M 23 ``` Now suppose I want to predict the income of a new person who lives in City 7. My training set has a whopping 3 samples with people in City 7 (assume this is a lot) so I can probably use the average income in City 7 to predict the income of this new individual. Now suppose I want to predict the income of a new person who lives in City 2. My training set only has 1 sample with City 2 so the average income in City 2 probably isn't a reliable predictor. But I can probably use the average income in Region 1. Extrapolating this idea a bit, I can transform my training dataset as ``` Age Gender CountrySamples CountryIncome RegionSamples RegionIncome CitySamples CityIncome 1: 23 M 4 52.25 3 48.00 2 36.5000 2: 48 F 4 52.25 3 48.00 2 36.5000 3: 62 M 4 52.25 3 48.00 1 71.0000 4: 63 F 4 52.25 1 65.00 1 65.0000 5: 25 M 4 60.00 2 50.50 1 50.0000 6: 41 F 4 60.00 2 50.50 1 51.0000 7: 45 M 4 60.00 2 69.50 2 69.5000 8: 19 F 4 60.00 2 69.50 2 69.5000 9: 37 F 4 43.75 4 43.75 3 50.6667 10: 41 F 4 43.75 4 43.75 3 50.6667 11: 31 F 4 43.75 4 43.75 3 50.6667 12: 50 M 4 43.75 4 43.75 1 23.0000 ``` So, the goal is to somehow combine the average CityIncome, RegionIncome, and CountryIncome while using the number of training samples for each to give a weight/credibility to each value. (Ideally, still including information from Age and Gender.) What are tips for solving this type of problem? I prefer to use tree based models like random forest or gradient boosting, but I'm having trouble getting these to perform well. UPDATE ====== For anyone willing to take a stab at this problem, I've generated sample data to test your proposed solution [here](https://github.com/ben519/MLPB/tree/master/Projects/AverageIncome/Data).
Given that you only have two variables and straightforward nesting, I would echo the comments of others mentioning a hierarchical Bayes model. You mention a preference for tree-based methods, but is there a particular reason for this? With a minimal number of predictors, I find that linearity is often a valid assumption that works well, and any model mis-specification could easily be checked via residual plots. If you did have a large number of predictors, the RF example based on the EM approach mentioned by @Randel would certainly be an option. One other option I haven't seen yet is to use model-based boosting (available via the [mboost package in R](https://cran.r-project.org/web/packages/mboost/index.html)). Essentially, this approach allows you to estimate the functional form of your fixed-effects using various base learners (linear and non-linear), and the random effects estimates are approximated using a ridge-based penalty for all levels in that particular factor. [This](https://epub.ub.uni-muenchen.de/12754/1/tr120.pdf) paper is a pretty nice tutorial (random effects base learners are discussed on page 11). I took a look at your sample data, but it looks like it only has the random effects variables of City, Region, and Country. In this case, it would only be useful to calculate the Empirical Bayes estimates for those factors, independent of any predictors. That might actually be a good exercise to start with in general, as maybe the higher levels (Country, for example), have minimal variance explained in the outcome, and so it probably wouldn't be worthwhile to add them in your model.
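As a concrete starting point for those empirical Bayes estimates, here is a minimal sketch with nested random intercepts via `lme4` (assuming the `train` data frame from the question; a full hierarchical Bayes or `mboost` fit would be specified analogously):

```
library(lme4)
# Random intercepts for City nested in Region nested in Country, no other predictors
fit <- lmer(Income ~ 1 + (1 | CountryID / RegionID / CityID), data = train)
fixef(fit)   # overall mean income
ranef(fit)   # shrunken (empirical Bayes) deviations for each Country, Region and City
```

The shrinkage toward the overall mean is stronger for levels with few observations, which is exactly the "weight by number of samples" behaviour asked for in the question.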
What is the simplest example of a rewriting system from binary strings to binary strings $$f:\Sigma^\*\rightarrow\Sigma^\*\qquad\Sigma=\{0,1\}$$ that can perform universal computation? Binary string rewriting systems in general can compute any computable function, but I have trouble finding *particular* instances that can by themselves compute any computable function given an appropriate input. I've seen statements that a class of rewriting systems (e.g., the set of cyclic tag systems) is Turing-complete, but I'm looking for a single rewriting system that is universal. I was thinking a [self-modifying bitwise cyclic tag system](https://esolangs.org/wiki/Bitwise_Cyclic_Tag#Self_BCT) might be a candidate, but I'm not sure how to interpret the output of such a system.
[Rule 110](https://en.wikipedia.org/wiki/Rule_110) is a binary rewriting system that can perform universal computation, i.e., it has been [proven](https://en.wikipedia.org/wiki/Rule_110#Interesting_properties) to be universal. It can be implemented by a finite-state transducer: it needs only finite state. However, Rule 110 is not a tag system or a cyclic tag system, so this does not provide an instance of a specific binary tag system that is known to be universal. It might be that examining the proof of universality of Rule 110 could yield such a system, as apparently the proof involves a reduction that goes by way of cyclic tag systems -- though personally I've never read the proof, so this is only speculation. A side note: From Rule 110, you can construct a particular [queue automaton](https://en.wikipedia.org/wiki/Queue_automaton) that is universal: the queue alphabet is $\{0,1,\$\}$ and contains the state of the cellular automaton (a binary string representing the contents of each cell, followed by $\$$). I don't know whether it'd be possible to use this to construct a specific tag system that is universal (e.g., if you can find a way to use a tag system to emulate a queue automaton).
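For concreteness, here is a minimal sketch of the Rule 110 update itself in R (periodic boundary; this only illustrates the local rewriting rule, not the universality construction):

```
rule110_step <- function(cells) {                # cells: vector of 0/1 values
  left  <- c(tail(cells, 1), head(cells, -1))    # left neighbour (wrap around)
  right <- c(tail(cells, -1), head(cells, 1))    # right neighbour (wrap around)
  idx <- 4 * left + 2 * cells + right            # neighbourhood coded as 0..7
  c(0, 1, 1, 1, 0, 1, 1, 0)[idx + 1]             # Rule 110 outputs for 000, 001, ..., 111
}
x <- c(0, 0, 0, 0, 1, 0, 0, 0)
rule110_step(x)   # 0 0 0 1 1 0 0 0
```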
I have a dataset comprised of proportions that measure "activity level" of individual tadpoles, therefore making the values bound between 0 and 1. This data was collected by counting the number of times the individual moved within a certain time interval (1 for movement, 0 for no movement), and then averaged to create one value per individual. My main fixed effect would be "density level". The issue I am facing is that I have a factor variable, "pond" that I would like to include as a random effect - I do not care about differences between ponds, but would like to account for them statistically. One important point about the ponds is that I only have 3 of them, and I understand it is ideal to have more factor levels (5+) when dealing with random effects. If it is possible to do, I would like some advice on how to implement a mixed model using `betareg()` or `betamix()` in R. I have read the R help files, but I usually find them difficult to understand (what each argument parameter really means in the context of my own data AND what the output values mean in ecological terms) and so I tend to work better via examples. On a related note, I was wondering if I can instead use a `glm()` under a binomial family, and logit link, to accomplish accounting for random effects with this kind of data.
The package [glmmTMB](https://cran.r-project.org/web/packages/glmmTMB/glmmTMB.pdf) may be helpful for anyone with a similar question. For example, if you wanted to include pond from the above question as a random effect, the following code would do the trick: ``` glmmTMB(y ~ 1 + (1|pond), df, family=list(family="beta",link="logit")) ```
Probabilities of a random variable's observations are in the range $[0,1]$, whereas log probabilities transform them to the log scale. What then is the corresponding range of log probabilities, i.e. what does a probability of 0 become, and is it the minimum of the range, and what does a probability of 1 become, and is this the maximum of the log probability range? What is the intuition for this being of any practical use compared to $[0,1]$? I know that log probabilities allow for stable numerical computations such as summation, but besides arithmetic, how does this transformation make applications any better compared to the case where raw probabilities are used instead? A comparative example for a continuous random variable before and after logging would be good.
The log of $1$ is just $0$ and the limit as $x$ approaches $0$ (from the positive side) of $\log x$ is $-\infty$. So the range of values for log probabilities is $(-\infty, 0]$. The real advantage is in the arithmetic. Log probabilities are not as easy to understand as probabilities (for most people), but every time you multiply together two probabilities (other than $1 \times 1 = 1$), you will end up with a value closer to $0$. Dealing with numbers very close to $0$ can become unstable with finite precision approximations, so working with logs makes things much more stable and in some cases quicker and easier. Why do you need any more justification than that?
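For a concrete illustration of the numerical stability point, here is a small R sketch (the underflow threshold for double precision is roughly $10^{-308}$):

```
p <- rep(1e-3, 200)   # 200 independent events, each with probability 0.001
prod(p)               # 0: the true value 1e-600 underflows in double precision
sum(log(p))           # -1381.551..., the log of the true product, computed stably
```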
> You have a Turing machine which has its memory tape unbounded on the right side, which means that there is a leftmost cell and the head cannot move left beyond it since the tape is finished. Unfortunately, you also find that on execution of a head-move-left instruction, rather than moving to the adjacent left cell, the head moves all the way back to the initial leftmost cell of the tape. Now figure out whether you can still use this TM effectively. The Turing machine with left initialize is similar to an ordinary Turing machine, but the transition function has the form
>
> $$\delta \colon Q × Γ → Q × Γ × \{R, \mathit{INIT}\}.$$
>
> If $\delta(q, a) = (r, b, \mathit{INIT})$, when the machine is in state $q$ reading an $a$, the machine's head jumps to the left-hand end of the tape after it writes $b$ on the tape and enters state $r$. Show that you can program this TM such that it simulates a standard TM.

I can't figure out how to simulate this as a standard TM. One thought I have is to copy the contents of the tape that come after the position of the left move to the starting point of the tape before making a left move. Any further help would be appreciated.
When you want to move one position to the left, execute the following algorithm: * Mark the current position as special. * Move to the origin. * Translate the tape one cell to the right, while keeping the special marker in place. * Move to the origin. * Scan until reaching the special marker. * If you move to the right, don't forget to erase the special marker. (This simulates a Turing machine with a single *two-sided* tape.)
Assume we have some Poisson process that produces events. In a given year we have counted $N$ of these events. Further assume that for some reason we need to report a monthly rate instead of this yearly number and also the (estimated) standard deviation in this monthly rate. Clearly, the monthly rate is $N/12$. Now, the question is: **What is the standard deviation in this monthly number?** We have two contradicting views on this. **Alice** maintains that, since the monthly number ($X$) is just a scaled version of the yearly figure ($Y$), one could just apply the [scaling rule for variances](https://en.wikipedia.org/wiki/Variance#Basic_properties). Then, with $X = Y/12$ it follows that $\rm{Var}(X) = \frac{1}{12^2}\rm{Var}(Y)$ and hence the standard deviation of the monthly figure is 1/12 of the standard deviation of the yearly figure. The latter standard deviation is $\sqrt{N}$ as this is a Poisson process. So, we have $\sigma\_{X}=\sqrt{N}/12$. **Bob**, on the other hand, argues that the results for each month are generated by a Poisson process with a *parameter* that is scaled by 12. This follows from the [rule w.r.t. the sums of Poisson distributed variables](https://en.wikipedia.org/wiki/Poisson_distribution#Sums_of_Poisson-distributed_random_variables). So, with $Y\sim \rm{Pois(N)}$ it follows that $X\sim \rm{Pois(N/12)}$. Clearly, $\sigma\_{X}$ is just the standard deviation of such a Poisson process, which is the square root of its rate parameter. Therefore, $\sigma\_{X}=\sqrt{N/12}$. Although the means resulting from Alice's and Bob's reasoning are the same, we've got a factor of $\sqrt{12}$ between their respective standard deviations. Who is right here, Alice or Bob? Note: The standard deviation of this monthly number is to be understood as the (theoretical) standard deviation of future determinations of this monthly number generated by the same, assumed Poisson process.
You have for the number of counts:

* Counts per year: $$Y \sim Pois(\lambda)$$
* Counts per month: $$X \sim Pois(\lambda/12)$$

But...

* Counts per month (average over 12 months): $$Y/12 \nsim Pois(\lambda/12)$$ or $$\frac{X\_1+X\_2+...X\_{12}}{12} \nsim Pois(\lambda/12) $$

If you divide the counts over a year by twelve then you do *not* get a variable that corresponds to the counts for a particular individual month, but instead you get an average over twelve months.

---

The Poisson distribution is only to be used for the *raw number of counts*. It is not true for any derived (scaled) number. So a term like 'counts per T' should be used very carefully. The Poisson distribution describes 'counts' and not 'counts per T'.

---

Bob was right in stating that $\text{Var}(X) = \frac{\lambda}{12}$. However, when you *take the mean of twelve of those variables* (which is what Alice computed) then you will get: $$\text{Var} \left( \frac{1}{12} (X\_1+X\_2+...X\_{12}) \right) = \frac{1}{12}\frac{\lambda}{12} = \frac{\lambda}{12^2}$$ and the standard deviation, $\sigma = \frac{\sqrt{\lambda}}{12}$, corresponds to Alice's number.

---

> for some reason we need to report a monthly rate instead of this yearly number and also the (estimated) standard deviation in this monthly rate

You can report

* an estimate for the mean monthly rate and the estimated standard error for that estimate.

But note that this will be different from

* the standard deviation of that monthly rate.

The variance of a distribution and the variance of an estimate for the mean of that distribution [are not the same](https://stats.stackexchange.com/questions/32318/difference-between-standard-error-and-standard-deviation). (It occurs very often that people report figures with very tiny error bars. That makes it [appear](https://doi.org/10.1093/bja/aeg087) as if the difference between two cases is very small. But all those people did is show that they can estimate the means very precisely and show that those means are different; it does not mean that the differences between the groups are large. Often it is also [confusing/ambiguous](http://doi.org/10.1136/bmj.331.7521.903) what the reported variation/error bars mean.)
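A quick simulation sketch of the two variances being contrasted (assuming, say, $\lambda = 120$ events per year):

```
set.seed(1)
lambda <- 120
months <- matrix(rpois(12 * 1e5, lambda / 12), ncol = 12)  # 1e5 simulated years, 12 months each
var(months[, 1])        # ~ lambda/12  = 10   : variance of a single month's count (Bob)
var(rowMeans(months))   # ~ lambda/144 = 0.83 : variance of the yearly count divided by 12 (Alice)
```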
What is an R-trivial language? What is an R-trivial monoid? Context: formal languages. Afaik, the R-trivial languages are a subset of the star-free languages. I mostly have background in formal languages and automata theory but not so much with the syntactic monoid characterization. So it would be nice to give a basic definition with maybe a small example of such a language.

---

(In order to support multiple QA-sites, because I don't want to have any QA-site stay behind and want that question also represented there, I have also posted this question on these other sites: [stackoverflow.com](https://stackoverflow.com/questions/4346608/formal-languages-what-does-r-trivial-mean), [math.stackexchange.com](https://math.stackexchange.com/questions/12921/formal-languages-what-does-r-trivial-mean), [mathoverflow.net](https://mathoverflow.net/questions/48181/formal-languages-what-does-r-trivial-mean). In general I am against cross-posting, but in this case, as they all have the same goal of being a complete reference of questions in the specific area, having the question cross-posted is the best thing you can do.)
A semigroup $S$ is $R\text{-trivial}$ iff $a \: R \: b \Rightarrow a = b$ for all $a, b \in S$ where $R$ is [Green's relation](http://en.wikipedia.org/wiki/Green%27s_relations#The_L.2C_R.2C_and_J_relations) $a \: R \: b \Leftrightarrow aS^1 = bS^1$. The set of $R\text{-trivial}$ monoids forms a variety which can be ultimately defined by the equations $(xy)^n x = (xy)^n$. A language is $R\text{-trivial}$ if its [syntactic monoid](http://en.wikipedia.org/wiki/Syntactic_monoid) is $R\text{-trivial}$. This variety of languages is alternatively defined as the set of all languages that can be written as a disjoint union of languages of the form $A\_0^\* a\_1 A\_1^\* a\_2 \ldots a\_n A\_n^\*$ where $n \geq 0$, $a\_1, \ldots, a\_n \in A$, $A\_i \subseteq A \setminus \{a\_{i+1}\}$ for $0 \leq i \leq n-1$. Another definition given in [[Pin]](http://www.liafa.jussieu.fr/~jep/Resumes/Varietes.html), which I'm not familiar with, uses the so-called *automates extensifs* ("extensive automata"). You can find more results about those languages in [[Pin]](http://www.liafa.jussieu.fr/~jep/Resumes/Varietes.html). There is an English version of this book, I've never read it but I'm pretty sure that you can find the same content. For the sake of completeness, here is an example of an (artificial) $R\text{-trivial}$ language: $\{b\}^\* a \{a,c\}^\* b \{a\}^\* b \{a,b,c\}^\* \cup \{d\}^\* \cup abcd$. You can build other examples with the previous definitions. Note that all of the properties of varieties of languages hold for $R\text{-trivial}$ languages, therefore they are closed under union, intersection and complementation. It should help to build more complicated languages.
```
sum = 0;
for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i*i; j++)
        if (j % i == 0)
            for (int k = 0; k < j; k++)
                sum++;
```

I am trying to find the time complexity of the program above. The first for loop will run n times. The second for loop will execute about n^3 times overall. The innermost loop will execute only when j is a multiple of i, which happens exactly i times. Please help me find the overall time complexity of this program.
The number of times that the `if` statement is executed is $$ \sum\_{i=1}^n i^2 = \Theta(n^3). $$ The number of times that `sum` is incremented is $$ \sum\_{i=1}^n \sum\_{j'=1}^i ij' = \sum\_{i=1}^n i \sum\_{j'=1}^i j' = \sum\_{i=1}^n \Theta(i^3) = \Theta(n^4). $$ Here $j' = j/i$, and the reason we are allowed to do this is that the inner loop gets executed only when $j'$ is integral. We get that overall, the running time is $\Theta(n^4)$.
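If it helps to sanity-check the $\Theta(n^4)$ bound empirically, here is a direct R count of the increments; the ratio to $n^4$ settles toward a constant (it tends to $1/8$):

```
count_increments <- function(n) {
  total <- 0
  for (i in 1:n)
    for (j in seq(i, i * i, by = i))   # only the j that are multiples of i pass the if
      total <- total + j               # the inner loop adds j increments to sum
  total
}
n <- c(10, 20, 40, 80)
sapply(n, count_increments) / n^4      # ~0.170, 0.147, 0.136, 0.130 -> tends to 1/8
```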
If we go by the [book](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf) (or any other version of the language specification if you prefer), how much computational power can a C implementation have? Note that “C implementation” has a technical meaning: it is a particular instantiation of the C programming language specification where implementation-defined behavior is documented. A C implementation doesn't have to be able to run on an actual computer. It does have to implement the whole language, including every object having a bit-string representation and types having an implementation-defined size. For the purpose of this question, there is no external storage. The only input/output you may perform is `getchar` (to read the program input) and `putchar` (to write the program output). Also any program that invokes *undefined* behavior is invalid: a valid program must have its behavior defined by the C specification plus the implementation's description of implementation-defined behaviors listed in appendix J (for C99). Note that calling library functions that are not mentioned in the standard is undefined behavior. My initial reaction was that a C implementation is nothing more than a finite automaton, because it has a limit on the amount of addressable memory (you can't address more than `sizeof(char*) * CHAR_BIT` bits of storage, since distinct memory addresses must have distinct bit patterns when stored in a byte pointer). However I think an implementation can do more than this. As far as I can tell, the standard imposes no limit on the depth of recursion. So you can make as many recursive function calls as you like, only all but a finite number of calls must use non-addressable (`register`) arguments. Thus a C implementation that allows arbitrary recursion and has no limit on the number of `register` objects can encode deterministic pushdown automata. Is this correct? Can you find a more powerful C implementation? Does a Turing-complete C implementation exist?
As noted in the question, standard C requires that there exists a value UCHAR\_MAX such that every variable of type `unsigned char` will always hold a value between 0 and UCHAR\_MAX, inclusive. It further requires that every dynamically-allocated object be represented by a sequence of bytes which is identifiable via pointer of type `unsigned char*`, and that there be a constant `sizeof(unsigned char*)` such that every pointer of that type be identifiable by a sequence of `sizeof(unsigned char *)` values of type `unsigned char`. The number of objects that can be simultaneously dynamically allocated is thus rigidly limited to $UCHAR\\_MAX ^{sizeof(unsigned\ char\*)}$. Nothing would prevent a theoretical compiler from assigning the values of those constants so as to support more than $10^{10^{10}}$ objects, but from a theoretical perspective the existence of any bound, no matter how large, means something isn't infinite. A program could store an unbounded quantity of information on the stack *if nothing that is allocated on the stack ever has its address taken*; one could thus have a C program that was capable of doing some things which could not be done by any finite automaton of any size. Thus, even though (or perhaps because) access to stack variables is much more limited than access to dynamically-allocated variables, it turns C from being a finite automaton into a push-down automaton. There is, however, another potential wrinkle: it is required that if a program examines the underlying fixed-length sequences of character values associated with two pointers to different objects, those sequences must be unique. Because there are only $UCHAR\\_MAX ^{sizeof(unsigned\ char\*)}$ possible sequences of character values, any program that created a number of pointers to distinct objects in excess of that could not comply with the C standard *if code ever examined the sequence of characters associated with those pointers*. It would be possible in some cases, however, for a compiler to determine that no code was ever going to examine the sequence of characters associated with a pointer. If each "char" was actually capable of holding any finite integer, and the machine's memory was a countably-infinite sequence of integers [given an unlimited-tape Turing machine, one could emulate such a machine although it would be *really* slow], then it would indeed be possible to make C a Turing-complete language.
I attempted a mock paper for finite automata. I was asked to create a DFA which accepts the language over the inputs a, b in which the number of b's is divisible by 3. The first image is my answer, for which I was not given marks, but when I checked the answer key, the second image was given as the correct answer. Both DFAs accept the language (number of b's divisible by 3), so can anyone tell me how the second DFA is better than the first one, or where I went wrong? [![DFA where number of b's divisible by 3](https://i.stack.imgur.com/YhJZ5.jpg)](https://i.stack.imgur.com/YhJZ5.jpg)
The first DFA does not accept $\epsilon$, which has 0 $b$'s, and 0 is divisible by 3. So the first answer is incorrect, whereas the second is correct.
I'm trying to understand the approximation ratio for the [Kenyon-Remila](http://mor.journal.informs.org/content/25/4/645.full.pdf) algorithm for the 2D cutting stock problem. The ratio in question is $(1 + \varepsilon) \text{Opt}(L) + O(1/\varepsilon^2)$. The first term is clear, but the second doesn't mean anything to me and I can't seem to figure it out.
This seems to be a looser variant of a polynomial-time approximation scheme ($PTAS$). If $\epsilon$ is not small, you can achieve an approximation ratio very close to $1+\epsilon$, because $\mathcal O(\epsilon^{-2}) \le c \epsilon^{-2}$ is small. ($c$ is a fixed positive real number independent of any other variable.) If $\epsilon$ is small, the 2nd term gets larger. However, $OPT(L)$ is usually much larger than a constant, and no matter how large $\mathcal O(\epsilon^{-2})$ becomes, it is still a constant (since $\epsilon$ is a given target real number for the approximation ratio). So the Kenyon-Remila theorem means: "the constructed solution is $\le (1+\epsilon) OPT +\mathcal O(1)$ for any given approximation ratio $1+\epsilon$, where the $\mathcal O(1)$ term is a constant depending on $\epsilon$; it is actually $\mathcal O(\epsilon^{-2})$."
I am wondering if my solution is correct or whether I am on the right track. I have searched online and found a paper about [the Java type system being unsound](https://dl.acm.org/doi/10.1145/2983990.2984004), but that doesn't really answer the progress and preservation issue. Question: Give a non-obvious problem in the language definition that would prevent “progress” from being true. Give a non-obvious problem in the language definition that would prevent “preservation” from being true. My attempt: Progress: If $· \vdash e : τ$ then either $e \to e'$ for some $e'$ or $e$ is a value.

```java
class A {
    String name;
    String getName() { return this.name; }
}

A a = null;
String b = a.getName(); // we don't get to another evaluation step nor do we get to a value
```

I think this is correct because the expression doesn't evaluate to another expression nor do we get a value ... and the program crashes. Preservation: If $· \vdash e : τ$ and $e \to e'$ then $· \vdash e' : τ$

```java
class A {
    String name;
    String getName() { return (String)((Object)12); }
}

A a = new A();
String b = a.getName(); // after evaluating the RHS, the assignment is not really valid
```

This one I am not so confident about. The up-cast to `Object` and down-cast to `String` cause the evaluated result to no longer match the declared type.
For $A \in \mathsf{NP}$ you have $ A \le\_p 3SAT \le\_p \overline{3SAT} \in \mathsf{co{\text -}NP}$, which implies $A \in \mathsf{co{\text -}NP}$ and hence $\mathsf{NP} \subseteq \mathsf{co{\text -}NP}$. Symmetrically, for $A \in \mathsf{co{\text -}NP}$, you have $A \le\_p \overline{3SAT} \le\_p 3SAT \in \mathsf{NP}$, which implies $A \in \mathsf{NP}$ and hence $\mathsf{co{\text -}NP} \subseteq \mathsf{NP}$. From $\mathsf{NP} \subseteq \mathsf{co{\text -}NP} \subseteq \mathsf{NP}$, it follows that $\mathsf{NP} = \mathsf{co{\text -}NP}$.
Let's say the results of an experiment give a p-value of 0.0354234. Why is it necessary to fix a threshold before doing the experiment and then report significance? Why do I not simply report the p-value given above? An additional advantage would be that I could repeat the experiment and give summary statistics of the p-value like mean, median, min, max, and standard deviation. Why is this not common practice?
It is considered bad practice to pick the significance level (or $\alpha$) after the fact. Two reasons for picking the significance level beforehand:

> 1. The significance level is one criterion often used in deciding on an appropriate sample size. See e.g. [here](https://www.ma.utexas.edu/users/mks/statmistakes/power.html).
> 2. The analyst is not tempted to choose a cut-off on the basis of what he or she *hopes* is true. *[Source](https://www.ma.utexas.edu/users/mks/statmistakes/errortypes.html)*

Jim Frost phrases the second point nicely: ***"It protects you from choosing a significance level because it conveniently gives you significant results!"***. For a graphical example and further elaboration see [his post](http://blog.minitab.com/blog/adventures-in-statistics-2/understanding-hypothesis-tests%3A-significance-levels-alpha-and-p-values-in-statistics). So reporting the p-values for **relevant** parameters makes sense and should be done (after picking the significance level for your study/case). But just always adding them "because you can" doesn't make sense; consider what information is gained from the reported parameters. Here is some more background on significance levels and reasons to consider them carefully for each case:

> "No scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas.” - Fisher (1956 in *Statistical Methods and Scientific Inference*, p. 42)

---

**The University of Texas at Austin** has some nice [webpages](https://www.ma.utexas.edu/users/mks/statmistakes/errortypes.html) on the topic, from which I will quote: It is important to consider the implications and possible consequences of Type I and Type II errors before beginning with data analysis. So it is also a consideration of practical versus statistical significance. Consider the following example for the difference between the two.

> A large clinical trial is carried out to compare a new medical treatment with a standard one. The statistical analysis shows a statistically significant difference in lifespan when using the new treatment compared to the old one. But the increase in lifespan is at most three days, with average increase less than 24 hours, and with poor quality of life during the period of extended life. Most people would not consider the improvement practically significant.

Now back to Type I and Type II errors and why their consideration is so important. (Here is a small recap on Type I and Type II errors from [datasciencedojo.com](https://datasciencedojo.com/type-i-and-type-ii-error-smoke-detector-and-the-boy-who-cried-wolf/):)

[![enter image description here](https://i.stack.imgur.com/aWnf3.gif)](https://i.stack.imgur.com/aWnf3.gif)

Again, this can probably best be explained with examples (also from Texas University).

> * **If the consequences of a Type I error are serious or expensive, then a very small significance level is appropriate.**
>
> *Example 1:* Two drugs are being compared for effectiveness in treating the same condition. Drug 1 is very affordable, but Drug 2 is extremely expensive. The null hypothesis is "both drugs are equally effective," and the alternate is "Drug 2 is more effective than Drug 1." In this situation, a Type I error would be deciding that Drug 2 is more effective, when in fact it is no better than Drug 1, but would cost the patient much more money.
> That would be undesirable from the patient's perspective, so a small significance level is warranted.

---

> * **If the consequences of a Type I error are not very serious (and especially if a Type II error has serious consequences), then a larger significance level is appropriate.**
>
> *Example 2:* Two drugs are known to be equally effective for a certain condition. They are also each equally affordable. However, there is some suspicion that Drug 2 causes a serious side-effect in some patients, whereas Drug 1 has been used for decades with no reports of the side effect. The null hypothesis is "the incidence of the side effect in both drugs is the same", and the alternate is "the incidence of the side effect in Drug 2 is greater than that in Drug 1." Falsely rejecting the null hypothesis when it is in fact true (Type I error) would have no great consequences for the consumer, but a Type II error (i.e., failing to reject the null hypothesis when in fact the alternate is true, which would result in deciding that Drug 2 is no more harmful than Drug 1 when it is in fact more harmful) could have serious consequences from a public health standpoint. So setting a large significance level is appropriate.
From what I have learned the answer is no, but I am finding it difficult to absorb, for the following reason. We say that for every language we have a grammar, since a language without a grammar makes no sense even in a general scenario. I have taken this argument as the basis for my conclusion, as I found it somewhere on Stack Exchange; please correct me if I am wrong.

Coming to the point: since there is a grammar for every language, this also indicates the existence of a set of rules for the formation of the language. Taking this further, I wonder: if we can devise a set of rules, which is indeed a process to derive or generate strings, then why can't we form an automaton for the same language?

Adding to this point, we know that the most powerful class of automata is the Turing machines, capable of accepting many languages but not all, and we know that the languages not accepted by Turing machines still have grammars (as said, every language has a grammar). That would mean there are machines/automata that accept some (though not all) of the languages not accepted by Turing machines; using this, we would say there are classes of machines which are not Turing machines but accept some languages not accepted by Turing machines. If this is not true, then there must exist languages that do not have any grammar, but as said, every language has a grammar.

Adding more woes to this argument, we have not been able to identify any machine more powerful than a Turing machine, so some of the arguments made here must be wrong. But I am finding it difficult to understand which one of them is wrong, so please let me know.
Prolog does not support arbitrary first-order logic but only a fragment of it known as [Horn clauses](https://en.wikipedia.org/wiki/Horn_clause). These are statements of the form $$\forall x\_1, \ldots, x\_n \,.\, P(x\_1, \ldots, x\_n) \Rightarrow q(x\_1, \ldots, x\_n)$$ where $P$ is built from atomic predicates and conjunctions, and $q$ is an atomic predicate. Not every statement in logic can be converted to this form. You are suggesting use of `foreach`. Note that this is not properly a *logic quantifier* in Prolog, but rather a special-purpose routine which operates on lists. Pure prolog does not have any of this. If you are willing to use lists and to limit attention to only quantification over finite lists of elements, then you can just implement everything easily enough in Prolog using lists and functions on them. But that misses the point of logic programming, does it not?
Does the Kleene star distribute over each element? Is this true: $(0+1)^\* = (0^\* + 1^\*)$?
You can verify that $010$ is in $(0+1)^\*$ but not in $(0^\* + 1^\*)$. Therefore, $(0 + 1)^\* \neq (0^\* + 1^\*)$.
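If you want to check this mechanically, the corresponding anchored regular expressions in R (a small sketch) give:

```
grepl("^(0|1)*$", "010")    # TRUE : 010 is in (0+1)*
grepl("^(0*|1*)$", "010")   # FALSE: 010 is not in 0* + 1*
```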
I have read that the SMOTE package is implemented for binary classification. In the case of n classes, it creates additional examples for the smallest class. Can I balance all the classes by running the algorithm n-1 times?
I am pretty sure that the SMOTE package in Python can be used for multi-class problems as well. You can check its usage on the iris dataset here: <http://contrib.scikit-learn.org/imbalanced-learn/stable/auto_examples/plot_ratio_usage.html> Please correct me if I am wrong.
We have a dataset where the non-exposed group has follow up to 5 years but the exposed group has follow up only to 1 year (>1 year not possible in the dataset). Analysis is with Cox regression. The question is whether we should censor the non-exposed patients at 1 year to match the maximum follow up in the exposed group, or not. Would the coefficient for the exposure be different using the full follow-up time vs. the 1-year censored follow-up time for the non-exposed? Any advice much appreciated.
If you only have exposure and no other covariates it makes no difference. The Cox partial likelihood compares observations at the *same time*, so when you have no observations still at risk in one group, those in the other group provide no information. In R ``` > library(survival) > set.seed(2020-6-28) > z<-rep(1:2,each=100) > x<-rexp(200,z/2) > c<-ifelse(z==1,5,1) > t<-pmin(c,x) > d<-x<=c > table(z,d) d z FALSE TRUE 1 3 97 2 37 63 > coxph(Surv(t,d)~factor(z)) Call: coxph(formula = Surv(t, d) ~ factor(z)) coef exp(coef) se(coef) z p factor(z)2 0.5748 1.7768 0.2011 2.859 0.00425 Likelihood ratio test=8.4 on 1 df, p=0.003757 n= 200, number of events= 160 ``` Now re-do the censoring at 1 for both groups ``` > c.early<-rep(1,200) > t.early<-pmin(c.early,x) > d.early<-x<c.early > table(z,d.early) d.early z FALSE TRUE 1 59 41 2 37 63 > coxph(Surv(t.early,d.early)~factor(z)) Call: coxph(formula = Surv(t.early, d.early) ~ factor(z)) coef exp(coef) se(coef) z p factor(z)2 0.5748 1.7768 0.2011 2.859 0.00425 Likelihood ratio test=8.4 on 1 df, p=0.003757 n= 200, number of events= 104 ``` Precisely no change in the Cox model, as claimed. If you have other covariates the results will not be identical. The question then is whether you expect the relationship between the other covariates and survival to stay the same after 1 year or not. If it does stay about the same, you'll get a better estimate of it (and so potentially better adjustment) by using the whole data. If it changes too much, you will get an estimate that's averaged over the whole time and so is biased for the one-year period where you have information on exposure. The censoring itself won't introduce a bias (or rather, it's a basic assumption of survival analysis that it doesn't and there's no fix if it does).
I want to do a simulation study in R and I have already some empirical data, that gives me a hint about the variance parameters to set. But what should I use for the error variance? Here's an example of what I mean: ``` > a <- aov(terms(yield ~ block + N * P + K, keep.order=TRUE), npk) > anova(a) Analysis of Variance Table Response: yield Df Sum Sq Mean Sq F value Pr(>F) block 5 343.29 68.659 4.3911 0.012954 * N 1 189.28 189.282 12.1055 0.003684 ** P 1 8.40 8.402 0.5373 0.475637 N:P 1 21.28 21.282 1.3611 0.262841 K 1 95.20 95.202 6.0886 0.027114 * Residuals 14 218.90 15.636 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 > var(residuals(a)) [1] 9.517536 ``` So would I use 15.6 or 9.5 as my empirical error variance?
The simplest and least error-prone approach would be [rejection sampling](https://en.wikipedia.org/wiki/Rejection_sampling): generate uniformly distributed points in the *square* around your circle, and only keep those that are in the circle. [![circle](https://i.stack.imgur.com/6mUJY.png)](https://i.stack.imgur.com/6mUJY.png) ``` nn <- 1e4 radius <- 1 set.seed(1) # for reproducibility foo <- cbind(runif(nn,-radius,radius),runif(nn,-radius,radius)) plot(foo[rowSums(foo^2)<radius^2,],pch=19,cex=0.6,xlab="x",ylab="y") ``` Of course, you will only keep a fraction of your generated data points, around $\frac{\pi}{4}$ (which is the ratio of the areas of the circumscribed square to the disk). So you can either start with $\frac{4n}{\pi}$ points, or generate points until you keep your target number $n$ of them.
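A sketch of the second option (sampling in batches until exactly $n$ points are kept; the batch size $\lceil 4n/\pi \rceil$ is just the expected number needed):

```
sample_disk <- function(n, radius = 1) {
  kept <- matrix(numeric(0), ncol = 2)
  while (nrow(kept) < n) {                        # keep drawing batches until enough points
    m <- ceiling(4 * n / pi)                      # expected ~n acceptances per batch
    batch <- cbind(runif(m, -radius, radius), runif(m, -radius, radius))
    kept <- rbind(kept, batch[rowSums(batch^2) < radius^2, , drop = FALSE])
  }
  kept[seq_len(n), ]
}
head(sample_disk(1e4))
```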
Inspired by Peter Donnelly's talk at [TED](http://www.ted.com/talks/peter_donnelly_shows_how_stats_fool_juries.html), in which he discusses how long it would take for a certain pattern to appear in a series of coin tosses, I created the following script in R. Given two patterns 'hth' and 'htt', it calculates how long it takes (i.e. how many coin tosses) on average before you hit one of these patterns. ``` coin <- c('h','t') hit <- function(seq) { miss <- TRUE fail <- 3 trp <- sample(coin,3,replace=T) while (miss) { if (all(seq == trp)) { miss <- FALSE } else { trp <- c(trp[2],trp[3],sample(coin,1,T)) fail <- fail + 1 } } return(fail) } n <- 5000 trials <- data.frame("hth"=rep(NA,n),"htt"=rep(NA,n)) hth <- c('h','t','h') htt <- c('h','t','t') set.seed(4321) for (i in 1:n) { trials[i,] <- c(hit(hth),hit(htt)) } summary(trials) ``` The summary statistics are as follows, ``` hth htt Min. : 3.00 Min. : 3.000 1st Qu.: 4.00 1st Qu.: 5.000 Median : 8.00 Median : 7.000 Mean :10.08 Mean : 8.014 3rd Qu.:13.00 3rd Qu.:10.000 Max. :70.00 Max. :42.000 ``` In the talk it is explained that the average number of coin tosses would be different for the two patterns; as can be seen from my simulation. Despite watching the talk a few times I'm still not quite getting why this would be the case. I understand that 'hth' overlaps itself and intuitively I would think that you would hit 'hth' sooner than 'htt', but this is not the case. I would really appreciate it if someone could explain this to me.
**I like to draw pictures.** ![enter image description here](https://i.stack.imgur.com/uMGVl.png) These diagrams are [finite state automata](http://en.wikipedia.org/wiki/Finite-state_machine) (FSAs). They are tiny children's games (like [Chutes and Ladders](http://en.wikipedia.org/wiki/Snakes_and_ladders)) that "recognize" or "accept" the HTT and HTH sequences, respectively, by moving a token from one node to another in response to the coin flips. The token begins at the top node, pointed to by an arrow (line *i*). After each toss of the coin, the token is moved along the edge labeled with that coin's outcome (either H or T) to another node (which I will call the "H node" and "T node," respectively). When the token lands on a terminal node (no outgoing arrows, indicated in green) the game is over and the FSA has accepted the sequence. Think of each FSA as progressing vertically down a linear track. Tossing the "right" sequence of heads and tails causes the token to progress towards its destination. Tossing a "wrong" value causes the token to back up (or at least stand still). *The token backs up to the most advanced state corresponding to the most recent tosses.* For instance, the HTT FSA at line *ii* stays put at line *ii* upon seeing a head, because that head could be the initial sequence of an eventual HTT. It does *not* go all the way back to the beginning, because that would effectively ignore this last head altogether. After verifying that these two games indeed correspond to HTT and HTH as claimed, and comparing them line by line, it should now be obvious that **HTH is harder to win**. They differ in their graphical structure only on line *iii*, where an H takes HTT back to line *ii* (and a T accepts) but, in HTH, a T takes us all the way back to line *i* (and an H accepts). **The penalty at line *iii* in playing HTH is more severe than the penalty in playing HTT.** **This can be quantified.** I have labeled the nodes of these two FSAs with the *expected number of tosses needed for acceptance.* Let us call these the node "values." The labeling begins by

> (1) writing the obvious value of 0 at the accepting nodes.

Let the probability of heads be p(H) and the probability of tails be 1 - p(H) = p(T). (For a fair coin, both probabilities equal 1/2.) Because each coin flip adds one to the number of tosses,

> (2) the value of a node equals one plus p(H) times the value of the H node plus p(T) times the value of the T node.

**These rules determine the values**. It's a quick and informative exercise to verify that the labeled values (assuming a fair coin) are correct. As an example, consider the value for HTH on line *ii*. The rule says 8 must be 1 more than the average of 8 (the value of the H node on line *i*) and 6 (the value of the T node on line *iii*): sure enough, 8 = 1 + (1/2)\*8 + (1/2)\*6. You can just as readily check the remaining five values in the illustration.
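If you prefer to check those values by brute force, rule (2) is just a small linear system. Here is a sketch for the HTH game with a fair coin, using the three non-accepting states (start, "H seen", "HT seen"):

```
# v = 1 + p*v[H-successor] + (1-p)*v[T-successor], with value 0 at the accepting node
p <- 1/2                            # probability of heads
A <- rbind(c(      p,   -p,      0),   # start: v0 = 1 + p*v1 + (1-p)*v0
           c(      0,  1-p, -(1-p)),   # "H"  : v1 = 1 + p*v1 + (1-p)*v2
           c( -(1-p),    0,      1))   # "HT" : v2 = 1 + p*0  + (1-p)*v0
solve(A, rep(1, 3))                 # 10 8 6 -- the HTH labels on lines i, ii, iii
```

The analogous system for HTT gives 8, 6, 4, matching the claim that HTH takes 10 tosses on average against 8 for HTT.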
I'm having trouble imagining what variance and deviation mean with a series of die rolls. That is, a fair die will fall with a flat distribution on all its values 1-6 in 6 bins (1, 2, 3, 4, 5, 6) over time (as n goes towards infinity). Firstly, does the concept of *variance* really make sense on such a question? [Edit: only if I provide some data on bin outcomes. Say n=36, and the die lands as follows: 1 (6 times), 2 (5x), 3 (5x), 4 (7x), 5 (7x), 6 (6x).] The average outcome will be n/6 over time for each of the six bins [Edit: My prior writeup was confusing, as I had said the mean was 3.5 -- but this mean face-value is irrelevant to the question.] Is this question even valid? It seems a perfectly flat distribution (as n-> infinity), with no other hidden variables, *has no variance* (or *shouldn't* have any), but then what should one make of the results when n is finite?
If $X$ is the value of the die we already know $\text{E}(X) = 21 / 6$ so we only need to find $\text{E}(X^2)$ since $\text{Var}(X) = \text{E}(X^2) - \text{E}(X)^2$. We can just directly calculate \begin{align} \text{E}(X^2) &= \sum\_{k=1}^{6} \frac{k^2}{6} \\ &= \frac{1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2}{6} \\ &= \frac{91}{6} \end{align} which after some arithmetic gives us $\text{Var}(X) = 105 / 36$.
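A quick numerical check in R (note that `var(1:6)` would give the sample variance with an $n-1$ denominator, which is not what we want here):

```
x <- 1:6
mean(x^2) - mean(x)^2   # 2.9167 = 105/36, the variance of a single fair-die roll
```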
I have fitted a negative binomial regression in R using `glm.nb` from the MASS package. I have two questions and would be very thankful if you could answer any of them: 1a) Can I use the Anova (type II, car package) to analyse which explanatory variables are significant? Or should I use the `summary()` function? However, the summary uses a z-test, which requires a normal distribution if I am not mistaken. When looking at examples in books and websites, mostly summary has been used. I get completely different outcomes for the Anova test and summary. Based on visualisation of the data I feel that Anova is more accurate. (I only get different outcomes when I have included an interaction.) 1b) When using the Anova, an F-test, a chi-square test and anova (type I) give different (but pretty similar) results - is there any of these tests that is preferred for a negative binomial regression? Or is there any way to find out which test represents the most likely results? 2) When looking at the diagnostic plots, my qq-plot looks kind of off. I am wondering if this is fine - since the negative binomial is different from the normal distribution? Or should the residuals still be normally distributed? [![diagnostic plots](https://i.stack.imgur.com/p9AdK.jpg)](https://i.stack.imgur.com/p9AdK.jpg)
*1(a)* `Anova()` can be easier to understand in terms of evaluating the significance of a predictor in your model, even though there is nothing wrong with the output from `summary()`. The usual R `summary()` function reports something that can appear quite different from `Anova()`. A `summary()` function typically reports whether the estimated value for each coefficient is significantly different from 0. `Anova()` (with what it calls Type II tests) examines whether a particular predictor, including all of its levels and interactions, adds significantly to the model. So if you have a categorical predictor with more than 2 levels `summary()` will report whether each category other than the reference is significantly different *from the reference level*. Thus with `summary()` you can get different apparent significance for the individual levels depending on which is chosen as the reference. `Anova()` considers all levels together. With interactions, as you have seen, `Anova()` and `summary()` can seem to disagree for a predictor included in an interaction term. The problem is that `summary()` reports results for a reference situation in which both that predictor and the *predictor included in its interaction* are at their reference levels (categorical) or at 0 (continuous). With an interaction, the choice of that reference situation (change of reference level, shift of a continuous variable) can determine whether the coefficient for a predictor is significantly different from 0 *at that reference situation*. As you probably don't want to have "significance" for a predictor depend on what reference situation you chose, `Anova()` results can be easier to interpret. *1(b)* I would avoid Type I tests even if they seem to be OK in your data set. In particular, results depend on the order of entry of the predictors into your model if you don't have what's called an [orthogonal design](https://stats.stackexchange.com/q/228797/28500). See [this classic answer](https://stats.stackexchange.com/a/20455/28500) for an explanation of the different Types of ANOVA. [This answer](https://stats.stackexchange.com/a/144608/28500) nicely illustrates the 3 different types of statistical tests that are typically reported for models fit by maximum likelihood, like your negative binomial model. All of these tests make assumptions about distributions (normality or the related $\chi^2$), but these are assumptions about distributions of calculated statistics, not about the underlying data. Those assumptions have reasonable theoretical bases. As the answer linked in this paragraph puts it: > > As your $N$ [number of observations] becomes indefinitely large, the three different $p$'s should converge on the same value, but they can differ slightly when you don't have infinite data. > > > Likelihood-ratio tests would probably be considered best, but any could be acceptable so long as you are clear about which test you used (and you didn't choose one because it was significant and the others weren't). *2* *Diagnostics* There is no reason to expect deviance residuals to be distributed normally in a negative binomial or other count-based model; see [this answer](https://stats.stackexchange.com/a/248035/28500) and its link to another package that you might find useful for diagnostics. The other answers on [that page](https://stats.stackexchange.com/q/70558/28500), and [this page](https://stats.stackexchange.com/q/25440/28500), might also help.
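For concreteness, a minimal sketch of how the two kinds of output are produced (`fit`, `dat`, `y`, `x1`, `x2` are placeholders, not your actual variables):

```
library(MASS)   # glm.nb
library(car)    # Anova
fit <- glm.nb(y ~ x1 * x2, data = dat)
summary(fit)                                    # Wald z-tests per coefficient, at the reference situation
Anova(fit, type = "II", test.statistic = "LR")  # likelihood-ratio test per predictor, all levels together
```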
Background ========== Suppose I collected a data set of the latitude and longitude of moose tracks within an irregular polygon, and also took a compass bearing of the direction that the hooves pointed in. ![](https://static.inaturalist.org/photos/192605048/original.jpeg) *Image Credit: © Galen Seilis 2022* (used with permission) Also suppose that the spatial sampling intensity is approximately uniform. The study area is small enough to safely ignore the curvature of the Earth, if desired. This gets us very close to having a vector field over $\mathbb{R}^2$, only that there is not a clear notion of magnitude. To account for this I define each observation $\vec{v}\_i$ to be the normalized gradient at that point in space of some hypothetical field $u : \mathbb{R}^2 \mapsto \mathbb{R}$. [![enter image description here](https://i.stack.imgur.com/jhpjP.png)](https://i.stack.imgur.com/jhpjP.png) I would like to take on the assumption that there exist no sources or sinks in the gradient of the field $u$. This is due to the fact that moose come and go from the potential study area. While of course moose are born and die *somewhere*, I want to assume that these events are rare enough to be ignored in my model. Visually this means that neither of the following two patterns occur at any point: [![enter image description here](https://i.stack.imgur.com/xCZKK.png)](https://i.stack.imgur.com/xCZKK.png) Because of how I defined the observations to be the normalized gradient, I was willing to assume that the gradient isn't the zero vector anywhere anyways. This precludes other field patterns as well. --- Other Considerations ==================== * At the moment I do not have a [well-posed](https://en.wikipedia.org/wiki/Well-posed_problem) problem. * The goal is to produce plausible patterns of flow of moose through a small area. * Sometimes moose tracks are plausibly from the same individual due to being aligned and close together, but usually the tracks are relatively isolated. * There are plenty of machine learning approaches that I could use to estimate a map $\mathbb{R}^2 \mapsto \mathbb{R}^2$, but I would rather use differential equations for understandability. * At a boundary point the gradient of $u$ could be perpendicular, parallel, or neither to boundary itself. * I am not modeling time dependence because estimating the age of a track is quite difficult. * I have started with assuming it is a function rather than a [multivalued function](https://en.wikipedia.org/wiki/Multivalued_function), but a random vector is a reasonable way to go. The former might work if in practice even partially overlapping tracks are not *exactly* on top of each other. But the latter makes sense in that a given moose might go different directions from the same point depending on unmodeled details of its environment, or that distinct moose could have different brains or perceptions and consequently decide to walk different ways from the same point. Frank's point about crossing paths is excellent: namely that the likely existence of crossing paths precludes the existence of a single-valued vector field. * I have not decided what is reasonable to assume about the curl; I will think more on it. As Frank pointed out, the curl of the gradient of a field *must* be zero. * Random walks on $\mathbb{R}^2$ might be fruitful. The more explainable the better, but I don't mind sprinkling in a little bit of noise from a stochastic process. 
* **The ultimate goal is to estimate likely paths that the moose are taking into and then out of the bounded region.** * Whuber raises a good point that specific paths are followed by the moose. In theory there should be no vectors where the moose did not go. The difficulty is we do not know where the moose have gone, and wish to infer it. * SextusEmpiricus suggested that a [flux](https://en.wikipedia.org/wiki/Flux#General_mathematical_definition_(transport)) formulation is promising for resolving the problem of crossing paths. * My guess is that there are probably zero moose in the bounded region on a given day. What I suspect happens is that moose occasionally pass through the area as they browse. * Sometimes it is possible to tell if tracks are 'extremely' fresh, but in general track ages are not reliably guessed (by me anyway). --- Question ======== What model (and boundary conditions if applicable) would be suitable for modeling the flow of moose through a bounded region?
This approach of finding best-fit vector paths through a bounded volume with directed point measurements is the overall principle of Diffusion Tensor Imaging. There is a large volume of methodology and mathematics around finding paths under these constraints. An example of an introduction article: <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3163395/> explains the general principle of anisotropic measurements in voxels (or pixels, in your case). More [advanced approaches](https://www.frontiersin.org/articles/10.3389/fnins.2019.00492/full) can account for crossing fibers, which is more likely in your case since you only have two dimensions. These methods are generally validated against real-world physical brains, so you can have some confidence that they have validity within their constraints. I hope you can adapt these principles to your moose-tracking problem.
Suppose that I am interested in predicting an outcome (say, the arrival delay [in seconds] of a flight) based upon a set of features. One of these features is a nominal variable - `carrier` - that specifies the airline carrier of the flight. This feature has 16 different values. After investigation of how arrival delay is distributed across each carrier, it appears that some carriers could be collapsed into one value (e.g., "AS" and "HA" or "WN" and "B6").

```
install.packages("nycflights13")
library(nycflights13)

boxplot(
  formula = arr_delay ~ with(flights, reorder(carrier, -arr_delay, median, na.rm = TRUE)),
  data = flights,
  horizontal = TRUE,
  las = 2,
  plot = TRUE
)
```

[![Boxplots of Arrival Delay (in seconds) by Carrier.](https://i.stack.imgur.com/Cgt5o.png)](https://i.stack.imgur.com/Cgt5o.png) In general, are there well-known methods for reducing the dimensionality **within a feature**?
I think you are looking for a method that groups the 16 nominal categorical values. If you run the regression problem through a tree-based algorithm, say rpart, it will give you splits that you can use to aggregate levels and reduce the number of categorical values. For example, the tree-based algorithm may suggest a split of carrier IN (AS, HA, VC) vs. NOT IN (AS, HA, VC). This would effectively reduce the number of distinct values to 2. You might want to consider more than one split to take interactions into account. Overall, this approach reduces the number of distinct values in a categorical variable.
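If you would rather do this in Python, here is a minimal sketch of the same tree-based idea with scikit-learn instead of rpart; it assumes a data frame `df` with the `carrier` and `arr_delay` columns from the question, and the choice of four leaves is arbitrary.

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Drop missing delays, then mean-encode the carrier so a regression tree
# can split the 16 levels along a single numeric axis.
d = df.dropna(subset=["carrier", "arr_delay"]).copy()
means = d.groupby("carrier")["arr_delay"].mean()
X = d["carrier"].map(means).to_frame("carrier_mean_delay")
y = d["arr_delay"]

# A shallow tree: each leaf becomes one collapsed group of carriers.
tree = DecisionTreeRegressor(max_leaf_nodes=4, random_state=0).fit(X, y)
d["leaf"] = tree.apply(X)
print(d.groupby("leaf")["carrier"].unique())  # e.g. one leaf containing ['AS', 'HA', ...]
```

Each resulting group can then replace the original 16-level factor in downstream models.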
I'm looking for an explanation or reference on the implementation of computer clock. To keep the question at the level of logical abstractions: say, we put together some combinational and sequential logic from basic gates. The role of the clock (an oscillator of some sort, but that's going into more detail than needed for my abstraction) is to synchronise the unorderly chorus of inputs and outputs passing through the gates. Exactly how does the clock synchronize the signals? To render things more concrete... Take a two-way NAND gate. Say, I set the two inputs (`U1`) to high signal, and obtain (after some inherent delay) a stable low signal at the other end (`U2`). Now, to add basic sequential logic to this inverter, let's add a Data Flip-Flop (DFF). The effect is that we can be certain that the `Q` end of the DFF will broadcast the low signal "definitively" at the start of the next clock cycle. What happens within the cycle is not to be trusted. The clock period is set such that the other circuitry (NAND gate in my case) has the time to stabilize during the cycle. This is the contract. But how is it achieved? The metaphor in my mind is that the clock acts as a sluice. But the comparison is misleading, in that the signal entering and exiting the DFF is *not* truncated at any point. We could physically measure the signal within the tick-tock cycle of the clock at the `Q` end of the DFF. Another high-flown metaphor would be the warm-up motions of an orchestra before the rehearsal gradually transforming into an attuned performance, with the conductor setting the beat. How is the *proper* signal (attuned to the clock's oscillations) distinguished from the *noise* propagating through gate circuitry at all times? I realize, there may be missing parts in my picture, so that question should be framed in constructive terms: **How can one implement this basic circuit to allow the logic to distinguish between signal and noise?** [![gate](https://i.stack.imgur.com/SmwSj.png)](https://i.stack.imgur.com/SmwSj.png) --- **Edit:** Judging by responses to this question, my original question must be poorly phrased. I understand the reasons behind the choice of frequency for the system clock to ensure the circuit is stable on the tick of the clock. The responses from [@Ran G.](https://cs.stackexchange.com/users/157/ran-g) and [@slebetman](https://cs.stackexchange.com/users/23207/slebetman) emphasize the "contractual" side of things. My question is really about why the contract holds. In retrospect, what I was getting at is the trivial fact that measuring instruments (system clock) must be selected based on the degree of precision required -- in a typical case discussed here, for our human sluggish attention span. To illustrate, here's a graph of the clock pulse against data-in and Qa output for a DFF. [![DFF](https://i.stack.imgur.com/uwsTw.png)](https://i.stack.imgur.com/uwsTw.png) Say, in the 2nd clock cycle, Captain Marvel sends a pulse on the data-in line and -- oblivious to the clock period -- expects to read it off immediately (mid-phase). With his lightning speed, there's no way he can make sense of the output of this circuit because the clock cycle is geological time to him. Billy Watson, on the other hand, can read it just fine. Neither Captain M. nor Billy W. is synchronized with the system clock. Not in the sense that the gate circuitry is. But for Billy's experience of time, the clock's time scale is sufficiently precise.
You have a good understanding of the clocking mechanism and how flip-flops (registers really; they can be implemented using any clocked memory, not just flip-flops) are used to get a "final" reading after all signal propagation has stabilized. But your question:

> 
> The clock period is set such that the other circuitry (NAND gate in my case) has the time to stabilize during the cycle. This is the contract. But how is it achieved?
> 
> 

is perhaps overthinking it. It is never achieved. Rather, it is specified. Basically you read the user manual (or in engineering it's usually the data sheet). If the user manual says the maximum clock is 100MHz then you don't supply it with a 200MHz clock. That's the basic mechanism of how it's "achieved".

So, I can already see the next question forming: How do the designers know to specify 100MHz? It can't be arbitrary, can it? The basic way it's done is to calculate the timing of all propagation. Say you have this circuit:

```
output = A && (B || C || (D && E))
```

Let's say all OR and AND gates have the same propagation time: 1ns. Let's also re-arrange the circuit above to make things clearer:

```
output = && --- A
          \
           || --- B
            \
             || --- C
              \
               && --- D
                \
                 E
```

So, in the above circuit, the longest path to output is the input from D and E. It passes through four gates (assuming gates can only have two inputs; you can do three levels if you use a three-input OR gate). Since each gate takes 1ns to stabilize, the circuit above can be sampled at a rate of every 4ns, or 250MHz.

The calculations above are simplified of course. They assume wires have zero propagation time and that inputs arrive simultaneously. Real-world CAD software can calculate propagation time of wires/traces and can even lengthen traces if necessary to ensure signals arrive at the same time. As for the simultaneity of the inputs, that's the user's (the engineer using your component) problem. If the outputs from his circuit take time to stabilize before going into your circuit, he has to take that into account and use a slower clock to allow the signals to stabilize.

There is also the dirty way to do the above calculations: overclocking. You keep increasing the clock frequency to your system until it fails, then back off a bit until it works again, then back off a bit more to allow for some overhead.

There's also a third question, and it is part of the assumption of almost every digital designer: when we clock, how are we SURE the inputs have stabilized? We've only accounted for the outputs of our gates, not the inputs to them. The answer is that the inputs to our circuit come from another circuit in our system. They synchronize by using the same clock. Since they were clocked at the end of the previous clock cycle, we assume they're stable at the beginning of this clock cycle. Which is why we only consider the propagation of the gates as the limiting factor for the stability of signals.

All non-internal signals, or all signals that don't share our clock, must be sampled. That's part of the reason that external signals can never be as fast as our internal clock - it's to allow for them to be stable in a register somewhere before signaling to the internal circuits that they're ready to enter our system.

So in general, in terms of signal stability, we assume noise only exists between clock pulses and all the signals in our entire system should stabilize before the next clock pulse. That effectively defines our maximum clock rate.
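As a back-of-the-envelope illustration of that timing budget (not how real CAD tools do it), here is a sketch that takes the per-path gate counts of the example circuit, multiplies by the assumed 1ns gate delay, and turns the worst case into a maximum clock frequency:

```python
GATE_DELAY_NS = 1.0  # assumed propagation delay of every 2-input gate

# Gates each input passes through in: output = A && (B || C || (D && E))
gates_on_path = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 4}

critical_path_ns = max(gates_on_path.values()) * GATE_DELAY_NS
max_clock_mhz = 1000.0 / critical_path_ns  # period in ns -> frequency in MHz

print(critical_path_ns, max_clock_mhz)  # 4.0 ns, 250 MHz
```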
We know that Maximum Independent Set (MIS) is hard to approximate within a factor of $n^{1-\epsilon}$ for any $\epsilon > 0$ unless P = NP. What are some special classes of graphs for which better approximation algorithms are known? What are the graphs for which polynomial-time algorithms are known? I know for perfect graphs this is known, but are there other interesting classes of graphs?
I don't have a good overview of this problem, but I can give some examples. A simple approximation algorithm is to fix some order of the nodes and greedily add a node to the independent set if none of its previous neighbors has been selected. If the graph has degeneracy $d$, then using the degeneracy ordering gives a $d$-approximation; hence for graphs of degeneracy $n^{1-\epsilon}$ we have a good enough approximation. There are a couple of other approximation techniques that work too, but I don't know them well. See: <http://en.wikipedia.org/wiki/Baker%27s_technique> and <http://courses.engr.illinois.edu/cs598csc/sp2011/Lectures/lecture_7.pdf> For polynomial algorithms solving the problem exactly, the link Suresh gave is the best. Which graph classes are more interesting is hard to say. One class you won't find in that list is the complements of $k$-degenerate graphs: since max clique can be solved in $O(2^k n)$ time on graphs of degeneracy $k$ (see <http://en.wikipedia.org/wiki/Bron%E2%80%93Kerbosch_algorithm>, especially the work of Eppstein), independent set is polynomial on $G$ if the complement of $G$ has degeneracy $O(\log n)$.
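For concreteness, here is a small sketch of that greedy procedure (compute a degeneracy ordering by repeatedly removing a minimum-degree vertex, then pick each vertex whose previously picked neighbors are absent); the graph is just a dict of adjacency sets, and the toy example at the end is made up.

```python
def degeneracy_order(adj):
    """Repeatedly remove a minimum-degree vertex; return the removal order."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # current minimum degree
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order

def greedy_independent_set(adj):
    chosen = set()
    for v in degeneracy_order(adj):
        if not (adj[v] & chosen):                     # no previously chosen neighbor
            chosen.add(v)
    return chosen

# Toy example: a 4-cycle with a pendant vertex attached to node 2.
graph = {0: {1, 3}, 1: {0, 2}, 2: {1, 3, 4}, 3: {0, 2}, 4: {2}}
print(greedy_independent_set(graph))
```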
I am looking for a method to detect sequences within univariate discrete data without specifying the length of the sequence or the exact nature of the sequence beforehand (see e.g. [Wikipedia - Sequence Mining](http://en.wikipedia.org/wiki/Sequence_mining)) Here is example data

```
x <- c(round(rnorm(100)*10), c(1:5), c(6,4,6), round(rnorm(300)*10), c(1:5), round(rnorm(70)*10), c(1:5),
       round(rnorm(100)*10), c(6,4,6), round(rnorm(200)*10), c(1:5), round(rnorm(70)*10), c(1:5), c(6,4,6),
       round(rnorm(70)*10), c(1:5), round(rnorm(100)*10), c(6,4,6), round(rnorm(200)*10), c(1:5),
       round(rnorm(70)*10), c(1:5), c(6,4,6))
```

The method should be able to identify the fact that x contains the sequence 1,2,3,4,5 at least eight times and the sequence 6,4,6 at least five times ("at least" because the random normal part can potentially generate the same sequence). I have found the `arules` and `arulesSequences` packages but I couldn't make them work with univariate data. Are there any other packages that might be more appropriate here? I'm aware that only eight or five occurrences for each sequence is not going to be enough to generate statistically significant information, but my question was to ask if there was a good method of doing this, assuming the data repeated several times. Also note the important part is that the method is done without knowing beforehand that the structure in the data had the sequences `1,2,3,4,5` and `6,4,6` built into it. The aim was to find those sequences from `x` and identify where they occur in the data. Any help would be greatly appreciated! **P.S** This was put up here upon suggestion from a stackoverflow comment... **Update:** perhaps, due to the computational difficulty arising from the number of combinations, the length of the sequence can have a maximum of, say, 5?
**Finding the high-frequency sequences is the hard part:** once they have been obtained, basic matching functions will identify where they occur and how often. Within a sequence of length `k` there are `k+1-n` `n`-grams, whence for n-grams up to length `n.max`, there are fewer than `k * n.max` n-grams. Any reasonable algorithm shouldn't have to do much more computing than that. Since the longest possible n-gram is `k`, *every* possible sequence could be explored in $O(k^2)$ time. (There may be an implicit factor of $O(k)$ for any hashing or associative tables used to keep track of the counts.) **To tabulate all n-grams,** assemble appropriately shifted copies of the sequence and count the patterns that emerge. To be fully general we do not assume the sequence consists of positive integers: we treat its elements as factors. This slows things down a bit, but not terribly so: ``` ngram <- function(x, n) { # Returns a tabulation of all n-grams of x k <- length(x) z <- as.factor(x) y <- apply(matrix(1:n, ncol=1), 1, function(i) z[i:(k-n+i)]) ngrams <- apply(y, 1, function(s) paste("(", paste(s, collapse=","), ")", sep="")) table(as.factor(ngrams)) } ``` (For pretty output later, the penultimate line encloses each n-gram in parentheses.) **Let's generate the data** suggested in the question: ``` set.seed(17) f <- function(n) c(round(rnorm(n, sd=10)), 1:5, c(6,4,6)) x <- unlist(sapply(c(100,300,70,100,200,70,70,100,200,70), f)) ``` We will want to **look only at the highest frequencies:** ``` test <- function(y, e=0, k=2) { # Returns all extraordinarily high counts in `y`, which is a table # of n-grams of `x`. "Extraordinarily high" is at least `k` and # is unusual for a Poisson distribution of mean `e`. u <- max(k, ceiling(e + 5 * sqrt(e))) y[y >= u] } ``` **Let's do it!** ``` n.alphabet <- length(unique(x)) # Pre-compute to save time n.string <- length(x) # Used for computing `e` below n.max <- 9 # Longest subsequence to look for threshold <- 4 # Minimum number of occurrences of interesting subsequences y <- lapply(as.list(1:n.max), function(i) test(ngram(x,i), e=(n.string+1-i) / n.alphabet^i, k=threshold)) ``` **This calculation took 0.22 seconds** to find all high-frequency n-grams, with `n`=1, 2, ..., 9 within a string of length 1360. Here is a compact list (the corresponding frequencies can also be found in `y`: just print it out, for instance): ``` > temp <- lapply(as.list(1:n.max), function(i) {cat(sprintf("%d-grams:", i), names(y[[i]]), "\n")}) 1-grams: (-1) (-3) (-4) (-7) (0) (1) (2) (3) (4) (5) (6) 2-grams: (-1,-1) (-1,0) (-1,1) (-11,0) (-3,-7) (-3,-8) (-3,3) (-4,-4) (-6,-1) (-6,0) (-7,-3) (-7,-5) (-7,-7) (-7,-9) (-8,3) (-9,0) (-9,9) (0,5) (0,9) (1,2) (1,4) (10,6) (12,-7) (2,-5) (2,-7) (2,3) (3,-1) (3,-2) (3,2) (3,4) (4,-5) (4,-9) (4,4) (4,5) (4,6) (5,-2) (5,-4) (5,6) (6,-4) (6,1) (6,3) (6,4) (6,5) (6,7) (7,6) (8,-7) (8,14) 3-grams: (1,2,3) (2,3,4) (3,4,5) (4,5,6) (5,6,4) (6,4,6) 4-grams: (1,2,3,4) (2,3,4,5) (3,4,5,6) (4,5,6,4) (5,6,4,6) 5-grams: (1,2,3,4,5) (2,3,4,5,6) (3,4,5,6,4) (4,5,6,4,6) 6-grams: (1,2,3,4,5,6) (2,3,4,5,6,4) (3,4,5,6,4,6) 7-grams: (1,2,3,4,5,6,4) (2,3,4,5,6,4,6) 8-grams: (1,2,3,4,5,6,4,6) 9-grams: ```
I thought about this problem a long time ago, but have no ideas about it. The generating algorithm is as follows. We assume there are $n$ discrete nodes numbered from $0$ to $n - 1$. Then for each $i$ in $\{1, \dotsc, n - 1\}$, we make the $i$th node's parent in the tree be a random node in $\{0, \dotsc, i - 1\}$. Iterate through each $i$ in order so that the result is a random tree with root node $0$. (Perhaps this is not random enough but this doesn't matter.) What is the expected depth of this tree?
I think there is a concentration result about $e \log n$, but I haven't filled in the details yet. We can get an upper bound for the probability that node $n$ has $d$ ancestors not including $0$. For each possible complete chain of $d$ nonzero ancestors $(a\_1,a\_2,...,a\_d)$, the probability of that chain is $(\frac{1}{a\_1})(\frac{1}{a\_2})\cdots (\frac{1}{a\_d}) \times \frac{1}{n}$. This corresponds to $\frac{1}{n}$ times a term of $(1+\frac{1}{2} + \frac{1}{3}+ \cdots \frac{1}{n-1})^d$ where the terms are ordered. So, an upper bound for this probability is $\frac{1}{n (d!)} H\_{n-1}^d$ where $H\_{n-1}$ is the $n-1$st harmonic number $1 + \frac{1}{2} + ... + \frac{1}{n-1}$. $H\_{n-1} \approx \log (n-1) + \gamma$. For fixed $d$ and $n \to \infty$, the probability that node $n$ is at depth $d+1$ is at most $$\frac{(\log n)^d}{n (d!)} \left(1+o(1)\right)$$ By Stirling's approximation we can estimate this as $$ \frac{1}{n\sqrt{2\pi d}} \left( \frac{e \log n}{d} \right)^d. $$ For $d$ much larger than $e \log n$, the base of the exponential is small, so this bound is small, and we can use the union bound to say that the probability that there is at least one node with $d$ nonzero ancestors is small. --- See [Luc Devroye, Omar Fawzi, Nicolas Fraiman. "Depth properties of scaled attachment random recursive trees."](http://arxiv.org/abs/1210.7168) [B. Pittel. Note on the heights of random recursive trees and random m-ary search trees. Random Structures and Algorithms, 5:337–348, 1994.](https://doi.org/10.1002/rsa.3240050207) The former claims the latter showed that the maximum depth is $(e+o(1))\log n$ with high probability, and offers another proof.
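A quick Monte Carlo check of the $e \log n$ scaling is easy to run (a sketch, not part of the argument above); it builds the tree exactly as described in the question and records the maximum depth.

```python
import math
import random

def max_depth(n):
    depth = [0] * n                      # depth[0] = 0 for the root
    for i in range(1, n):
        parent = random.randrange(i)     # uniform over {0, ..., i-1}
        depth[i] = depth[parent] + 1
    return max(depth)

n = 100_000
trials = [max_depth(n) for _ in range(20)]
print(sum(trials) / len(trials), math.e * math.log(n))
# The two numbers are of the same order; the ratio approaches 1 only slowly,
# since lower-order terms of the height are not negligible at this n.
```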
If MLE (Maximum Likelihood Estimation) cannot give a proper closed-form solution for the parameters in Logistic Regression, why is this method discussed so much? Why not just stick to Gradient Descent for estimating parameters?
Maximum likelihood is a method for estimating parameters. Gradient descent is a numerical technique to help us solve equations that we might not be able to solve by traditional means (e.g., we can't get a closed-form solution when we take the derivative and set it equal to zero). **The two can coexist.** In fact, when we use gradient descent to minimize the crossentropy loss in a logistic regression, we are solving for a maximum likelihood estimator of the regression parameters, as minimizing crossentropy loss and maximizing likelihood are equivalent in logistic regression. In order to descend a gradient, you have to have a function. If we take the negative log-likelihood and descend the gradient until we find the minimum, we have done the equivalent of finding the maximum of the log-likelihood and, thus, the likelihood.
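To make the equivalence concrete, here is a small sketch (plain NumPy, made-up data) that descends the gradient of the average negative log-likelihood of a logistic regression; the point it converges towards is the maximum likelihood estimate of the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])   # intercept + 3 features
beta_true = np.array([-0.5, 1.0, -2.0, 0.75])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

beta = np.zeros(X.shape[1])
learning_rate = 0.1
for _ in range(5000):
    mu = 1 / (1 + np.exp(-X @ beta))     # predicted probabilities
    grad = X.T @ (mu - y) / n            # gradient of the mean negative log-likelihood
    beta -= learning_rate * grad         # gradient descent step

print(beta)   # close to the MLE (and, with enough data, to beta_true)
```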
If I can prove that for an estimator $\hat{\theta}$ I can write: $$\frac{\partial l(X\_1, \dots , X\_n)}{\partial \theta} = a(n, \theta)(\hat{\theta} - \theta)$$ Am I sure that the estimator is unbiased? And consistent? NB: * $l$ is the log-likelihood * $X\_1$ is generated from a regular model * $\hat{\theta}$ is the estimator for $\theta$ * $a(\cdot,\cdot)$ is a function of $n$ and $\theta$ (without any particular meaning, I guess)
Estimators that are asymptotically efficient are not necessarily unbiased, but they are asymptotically unbiased and consistent. An estimator that is efficient for a finite sample is unbiased. Since efficient estimators achieve the Cramér-Rao lower bound on the variance, and that bound goes to 0 as the sample size goes to infinity, efficient estimators are consistent.
So I have a background in computer programming and a little in machine learning in general. What I would like to do is create a fun project in A.I. with deep learning. I have a dataset that has a whole bunch of stock prices at a certain date, with a bunch of features for each entry to go with it. I also have some "experts" who made predictions on whether the stock will go up or down. As my dataset grows I can evolve the game to make selections from multiple stocks...etc Essentially what I would love to do is create an A.I. app that will be fed the same data that the "experts" had and see if I can create something more accurate and beat them at it. Is this a viable approach?
> > Essentially what I would love to do is create an A.I. app that will be fed the same data that the "experts" had and see if I can create something more accurate and beat them at it. Is this a viable approach? > > > Sure, you can use one or more supervised learning techniques to train a model here. You have features, a target variable and ground truth for that variable. In addition to applying ML you have learned, all you need to do to test your application fairly is reserve some of the data you have with expert predictions for comparison as test data (i.e. do not train using it). I would caveat that with some additional thoughts: * You haven't really outlined an "approach" here, other than mentioning use of ML. * Be careful not to leak future data back into the predictive model when building a test version. * Predicting stock and markets is hard, because they react to their own predictability and many professional organisations trade on the slightest advantage they can calculate, with experienced and highly competent staff both gathering and analysing data. --- Not directly part of the answer, but to anyone just starting out and discovering machine learning, and finding this Q&A: Please don't imagine rich rewards from predicting markets using stats at home, it doesn't happen. If you think that this is a route to "beating the market" be aware that you are far from the first to think of doing this, and such a plan can be summarised like this: 1. Market Data + ML 2. ??? 3. Profit You can fill in the ??? by learning *loads* about financial markets - i.e. essentially by becoming one of the experts. ML is not a short-cut, but it might be a useful tool if you are, or plan to be, a market analyst.
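Coming back to the evaluation setup itself, here is a minimal sketch of a fair comparison against the experts; the file name, feature columns, and the `went_up` / `expert_prediction` columns are hypothetical stand-ins for whatever your dataset contains, and the split is chronological precisely to avoid leaking future information into training.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("stocks.csv", parse_dates=["date"]).sort_values("date")
features = ["feat_a", "feat_b", "feat_c"]          # the same inputs the experts saw
cutoff = int(len(df) * 0.8)
train, test = df.iloc[:cutoff], df.iloc[cutoff:]   # chronological split, no shuffling

model = GradientBoostingClassifier().fit(train[features], train["went_up"])
model_acc = accuracy_score(test["went_up"], model.predict(test[features]))
expert_acc = accuracy_score(test["went_up"], test["expert_prediction"])
print(model_acc, expert_acc)
```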
Following the post [What Books Should Everyone Read](https://cstheory.stackexchange.com/questions/3253/what-books-should-everyone-read), I noticed that there are recent books whose drafts are available online. For instance, the [Approximation Algorithms](https://cstheory.stackexchange.com/questions/3253/what-books-should-everyone-read/3466#3466) entry of the above post cites a 2011 book (yet to be published) titled [The design of approximation algorithms](http://www.designofapproxalgs.com/). I think knowing recent works is really useful for whoever wants to get a taste of TCS trends. When drafts are available, one can check the books before actually buying them. So, > > What are the recent TCS books whose drafts are available online? > > > Here, by "recent", I mean something that's no older than ~5 years.
[Spectra of Graphs](http://www.win.tue.nl/~aeb/2WF02/spectra.pdf) by [Brouwer](http://www.win.tue.nl/~aeb/) and [Haemers](http://lyrawww.uvt.nl/~haemers/home.html). I came to this book by way of [Chapter 16](http://cs-www.cs.yale.edu/homes/spielman/PAPERS/SGTChapter.pdf) (written by Spielman) in [Combinatorial Scientific Computing](http://www.crcpress.com/product/isbn/9781439827352).
I have a skewed distribution that looks like this: [![picture of a skewed distribution](https://i.stack.imgur.com/09kmt.jpg)](https://i.stack.imgur.com/09kmt.jpg) How can I transform it to a Gaussian distribution? The values represent ranks, so modifying the values does not cause information loss as long as the order of values remains the same. I'm doing this to experiment if different distributions change the behavior of my ML models. I'm working with Python/NumPy/Pandas/scikit-learn. Edit: I should clarify that I have a lot of features and I'm looking to automatically transform all feature distributions. I was able to find a reasonable transformation for a single feature with a lot of experimentation, but it doesn't generalize to other features: `normalize(np.log(0.30 + original))`. \*\* here would be image `i.stack.imgur.com/uzorK.jpg` but I don't have enough rep to post more than 2 images \*\* `normalize(np.log(0.17 + another_feature_distribution))`. [![enter image description here](https://i.stack.imgur.com/IAWJo.jpg)](https://i.stack.imgur.com/IAWJo.jpg) In this image the purple bars represent the original distribution of another feature, green bars represent the transformed distribution. No matter how much I tweak the constant, I don't get the high green bar on the left extreme to disappear. Also, I don't have time to manually find a formula for each feature. Not sure if these are bell-shaped enough anyway?
For contemporary viewers, scikit-learn now includes `PowerTransformer` in its API, providing a neat way of including these transforms in the workflow. See [Preprocessing Transformers](https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-transformer).
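For example, a minimal sketch (`X` stands in for your skewed feature matrix; both transformers below live in `sklearn.preprocessing`):

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer, QuantileTransformer

X = np.random.exponential(size=(1000, 3))       # stand-in for your skewed features

# Yeo-Johnson power transform toward normality.
pt = PowerTransformer(method="yeo-johnson", standardize=True)
X_gauss = pt.fit_transform(X)

# For rank-valued features, mapping through the normal quantile function
# also works and preserves the ordering exactly.
qt = QuantileTransformer(output_distribution="normal", n_quantiles=1000)
X_gauss2 = qt.fit_transform(X)
```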
Deep learning, now one of the most popular approaches in artificial neural networks, has shown great promise in terms of its accuracy on data sets. How does it compare to spiking neural networks? Recently Qualcomm unveiled its Zeroth processor based on SNNs, so I was wondering whether there would be any difference if deep learning were used instead.
**Short answer:** Strictly speaking, "Deep" and "Spiking" refer to two different aspects of a neural network: "Spiking" refers to the activation of individual neurons, while "Deep" refers to the overall network architecture. Thus in principle there is nothing contradictory about a spiking, deep neural network (in fact, the brain is arguably such a system). However, in practice the current approaches to DL and SNN don't work well together. Specifically, Deep Learning as currently practiced typically relies on a differentiable activation function and thus doesn't handle discrete spike trains well. **Further details:** Real neurons communicate via discrete spikes of voltage. When building hardware, spiking has some advantages in power consumption, and you can route spikes like data packets (Address Event Representation or AER) to emulate the connectivity found in the brain. However, spiking is a noisy process; generally a single spike doesn't mean much, so it is common in software to abstract away the spiking details and model a single scalar spike rate. This simplifies a lot of things , especially if your goal is machine learning and not biological modeling. The key idea of Deep Learning is to have multiple layers of neurons, with each layer learning increasingly-complex features based on the previous layer. For example, in a vision setting, the lowest level learns simple patterns like lines and edges, the next layer may learn compositions of the lines and edges (corners and curves), the next layer may learn simple shapes, and so on up the hierarchy. Upper levels then learn complex categories (people, cats, cars) or even specific instances (your boss, your cat, the batmobile). One advantage of this is that the lowest-level features are generic enough to apply to lots of situations while the upper levels can get very specific. The canonical way to train spiking networks is some form of [Spike Timing Dependent Plasticity (STDP)](http://en.wikipedia.org/wiki/Spike-timing-dependent_plasticity), which locally reinforces connections based on correlated activity. The canonical way to train a Deep Neural Network is some form of [gradient descent](http://en.wikipedia.org/wiki/Gradient_descent) back-propagation, which adjusts all weights based on the global behavior of the network. Gradient descent has problems with non-differentiable activation functions (like discrete stochastic spikes). If you don't care about learning, it should be easier to combine the approaches. One could presumably take a pre-trained deep network and implement just the feed-forward part (no further learning) as a spiking neural net (perhaps to put it on a chip). The resulting chip would not learn from new data but should implement whatever function the original network had been trained to do.
I'm trying to wrap my head around the result of Bayes Theorem applied to the classic mammogram example, with the twist of the mammogram being perfect. That is, Incidence of cancer: $.01$ Probability of a positive mammogram, given the patient has cancer: $1$ Probability of a positive mammogram, given the patient does not have cancer: $.01$ By Bayes: P(cancer | mammogram+) = $\dfrac {1 \cdot .01}{(1 \cdot .01) + (.01 \cdot .99)}$ $ = .5025$ So, if a random person from the population takes the mammogram, and obtains a positive result, there is a 50% chance they have cancer? I'm failing to intuitively understand how the tiny 1% chance of a false positive in 1% of the population can trigger a 50% result. Logically, I would think a perfectly true positive mammogram with a tiny false positive rate would be much more accurate.
I will answer this question both from a medical and a statistics standpoint. It has received a lot of attention in the lay press, particularly after the best-seller *The Signal and the Noise* by Nate Silver, as well as a number of articles in publications such as [The New York Times](http://www.newyorker.com/books/page-turner/what-nate-silver-gets-wrong) explaining the concept. So I'm very glad that @user2666425 opened this topic on CV.

First off, let me please clarify that the $p\,(+|C) = 1$ is not accurate. I can tell you that this figure would be a dream come true. Unfortunately there are a lot of *false negative* mammograms, particularly in women with dense breast tissue. The estimated figure can be [$20\%$ or higher](http://www.cancer.gov/types/breast/mammograms-fact-sheet), depending on whether you lump all different types of breast cancers into one (invasive v DCIS), and other factors. This is the reason why other modalities based on sonographic or MRI technology are also applied. A difference between $0.8$ and $1$ is critical in a screening test.

Bayes theorem tells us that $\small p(C|+) = \large \frac{p(+|C)}{p(+)}\small\* p(C)$, and has recently gotten a lot of attention as it relates to mammography *in younger, low risk women*. I realize this is not exactly what you are asking, which I address in the final paragraphs, but it is the most debated topic. Here is a taste of the issues:

1. The *prior* (or probability of having cancer based on prevalence) in *younger* patients, say from 40 - 50 years of age, is rather small. According to the [NCI](http://www.cancer.gov/types/breast/risk-fact-sheet) we can round it to $\sim 1.5\%$ (see table below). This relatively low pre-test probability in itself reduces the post-test conditional probability of having cancer given that the mammogram was positive, regardless of the *likelihood* or data collected.
2. The probability of a *false positive* becomes a very significant issue in a screening procedure that will be applied to thousands and thousands of a priori healthy women. So, although the false positive rate of $7 - 10\%$ (which is much higher if you focus on the [cumulative risk](http://www.ncbi.nlm.nih.gov/pubmed/22972811)) may not sound so bad, it is actually an issue of colossal psychological and economic costs, particularly given the low pre-test probability in younger, low-risk patients. Your figure of $1\%$ is wildly off the mark - the reality is that "scares" are incredibly common due to many factors, including the medicolegal concerns.

So, recalculating and very importantly, for ***younger women without risk factors***:

$p(C|+) = \frac{p(+|C)}{p(+)}\small\* p(C) = \frac{p(+|C)}{p(+|C)\,\*\,p(C)\, +\, p(+|\bar C)\,\*\,p(\bar C)}\small\* p(C) = \large \frac{0.8}{0.8\*0.015\, +\, 0.07\*0.985}\small\*\, 0.015 = 0.148$.

The probability of having cancer when a screening mammogram has been read as positive can be as low as $15\%$ in young, low-risk women. As an aside, mammographic readings come with an indirect estimate of the confidence in the diagnosis the radiologist has (it is called BI-RADS), and this Bayesian analysis would change radically as we progress from a BI-RADS 3 to a BI-RADS 5 - all of them "positive" tests in the broadest sense.
This figure can logically be changed depending on what estimates you consider in your calculation, but the truth is that the recommendations for the starting age to enter a screening mammography program have [recently been pushed up from age $40$ to $45$](http://www.cancer.org/cancer/news/news/american-cancer-society-releases-new-breast-cancer-guidelines). In older women the prevalence (and hence the pre-test probability) increases with age. According to the current report, [the risk that a woman will be diagnosed with breast cancer during the next 10 years](http://www.cancer.gov/types/breast/risk-fact-sheet), starting at the following ages, is as follows:

```
Age 30 . . . . . . 0.44 percent (or 1 in 227)
Age 40 . . . . . . 1.47 percent (or 1 in 68)
Age 50 . . . . . . 2.38 percent (or 1 in 42)
Age 60 . . . . . . 3.56 percent (or 1 in 28)
Age 70 . . . . . . 3.82 percent (or 1 in 26)
```

This results in a life-time cumulative risk of approximately $10\%$:

![](https://i.stack.imgur.com/0ecde.png)

The calculation in older women with a prevalence of $4\%$ would be: $p(C|+)=\large \frac{0.8}{0.8\*0.04\, +\, 0.07\*0.96}\small\*\, 0.04 = 0.32 \approx 32\%$, which is still lower than what you calculated. I can't overemphasize how many "scares" there are even in older populations. As a screening procedure, a mammogram is simply the first step, so it makes sense for a positive mammogram to be basically interpreted as *there is a possibility that the patient has breast cancer, warranting further work-up with ultrasound, additional (diagnostic) mammographic testing, follow-up mammograms, MRI or biopsy.* If the $p(C|+)$ were very high, we wouldn't be dealing with a ***screening test***; it would be a ***diagnostic test***, such as a biopsy.
Fortunately, the prevalence (although very high compared with other less curable cancers) is low enough that the probability of randomly encountering cancer is low, ***even if the results are "positive"***, and especially in young women. On the other hand, if there were no false positives (i.e. $p(+|\bar C)=0$), then $\frac{p(+|C)}{p(+|C)\,\*\,p(C)\, +\, p(+|\bar C)\,\*\,p(\bar C)}\small\* p(C) = \frac{p(+|C)}{p(+|C)\,\*\,p(C)}\small\* p(C) = 1$, much as the probability of having hit a meteor fragment if our metal detector went off would be $100\%$ independent of the area we happened to be exploring, ***if*** instead of a regular metal detector we were using a perfectly accurate instrument to detect outer-space amino acids in the meteor fragment (made-up example). It would still be more likely to find a fragment in the Arizona desert than in New York City, *but* ***if*** the detector happened to beep, we'd know we had found a meteor. Since we never have a perfectly accurate measuring device or system, the term $p(+|\bar C)\,\*\,p(\bar C)$ in the denominator is never zero, so the posterior $\frac{p(+|C)}{p(+|C)\,\*\,p(C)\, +\, p(+|\bar C)\,\*\,p(\bar C)}\small\* p(C)$ falls short of $1$, and the more imperfect the test (the larger $p(+|\bar C)$), the further short of certainty it falls. If we settle on a particular type of detector, the likelihood fraction acts roughly as a constant in a linear relation of the form $\text{posterior} = \alpha \* \text{prior}$ (the approximation is good when the prior is small), so the smaller the prior, the proportionally smaller the posterior. This is referred to as the dependence on *prevalence* of the ***positive predictive value (PPV)***: the probability that subjects with a positive screening test truly have the disease.
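The arithmetic in this answer (and the corrected version of the calculation in the question) is easy to reproduce; a minimal sketch:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(cancer | positive mammogram) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

print(posterior(prior=0.015, sensitivity=0.8, false_positive_rate=0.07))  # ~0.148, younger low-risk women
print(posterior(prior=0.04,  sensitivity=0.8, false_positive_rate=0.07))  # ~0.32, older women
print(posterior(prior=0.01,  sensitivity=1.0, false_positive_rate=0.01))  # ~0.50, the idealized test in the question
```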
I have documents of pure natural language text. Those documents are rather short; e.g. 20 - 200 words. I want to classify them. A typical representation is a bag of words (BoW). The drawback of BoW features is that some features might always be present / have a high value, simply because they are an important part of the language. Stopwords like the following are examples: is, are, with, the, a, an, ... One way to deal with that is to simply define this list and remove them, e.g. by looking at the most common words and just deciding which of them don't carry meaning for the given task. Basically by gut feeling. Another way is TF-IDF features. They weight the words by how often they occur in the training set overall vs. how often they occur in the specific document. This way, even words which might not directly carry meaningful information might be valuable. The last part is my question: Should I remove stopwords when I use TF-IDF features? Are there any publications on this topic? (I'm pretty sure I'm not the first one to wonder about this question)
From the way the TF-IDF score is set up, there shouldn't be any significant difference from removing the stopwords. The whole point of the IDF is to down-weight words that appear across most of the corpus and thus carry little semantic value. If you do keep the stopwords, the IDF should largely take care of them. However, working without the stopwords in your documents will make the number of features smaller, which might have a slight computational advantage.
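If you want to check this empirically on your own corpus, the comparison is quick to run; a sketch with scikit-learn, where `load_my_corpus()` is a hypothetical stand-in for however you load your documents and labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

docs, labels = load_my_corpus()   # lists of raw document strings and class labels

for stop_words in (None, "english"):
    pipe = make_pipeline(TfidfVectorizer(stop_words=stop_words),
                         LogisticRegression(max_iter=1000))
    scores = cross_val_score(pipe, docs, labels, cv=5)
    print(stop_words, scores.mean())
```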
What are the restrictions on the fields of application for association-rule mining (finding frequent itemsets)? All examples I came across cover the topic of 'true' basket-analysis in the sense of using a list of products which a sample of customers purchased, with the goal of finding rules such as 'when one buys bread, it is likely butter is bought, too'. What about **more abstract applications?** I mean finding **any** rules in a dataset. *EXAMPLE*. Let's assume I have a huge dataset with tourist-trip prices from the year 2013. The data includes trip-price and trip-features (such as country of destination, days the travel lasts, accommodation condition elements, means of transport, extracurricular activities etc.). I want to find different associations between price and other trip features. My idea is to categorize the price variable and find frequent itemsets among these trips (e.g. *air conditioning*=true, *5\* hotel*=true and *Australia*=true **=>** *high price*=true). * Is this a good way to work with such problems? * Would you suggest any other general way of dealing with searching for any types of associations in different data sets?
If you want to be able to discover easily human-quantifiable "rules" in your data which predict which features are associated with high vs. low prices, then it sounds like what you want to do is run a [decision tree](http://en.wikipedia.org/wiki/Decision_tree_learning)-based [statistical classifier](http://en.wikipedia.org/wiki/Statistical_classification) on your data set, treating the "trip-price" field as the class variable. A good example of a tree-based classifier is the [C4.5](http://en.wikipedia.org/wiki/C4.5_algorithm) algorithm or its successor, C5.0. The C5.0 algorithm is implemented in the [R](http://www.r-project.org/) package [C50](http://cran.r-project.org/web/packages/C50/C50.pdf), and a slightly modified version of C4.5 is also implemented in [Weka](http://www.cs.waikato.ac.nz/ml/weka/) as [J48](http://weka.sourceforge.net/doc.dev/weka/classifiers/trees/j48/package-summary.html) (confusingly named, I know, but if you dig around a little bit you'll discover that it is indeed derived essentially from C4.5). A short video course covering use of the GUI version of Weka can be found [here](http://www.cs.waikato.ac.nz/ml/weka/mooc/dataminingwithweka/), and in particular, a 10 minute video (section 3.4 in the course syllabus) which introduces J48 specifically can be found [here](https://www.youtube.com/watch?v=l7R9NHqvI0Y). There is also the [CART](http://en.wikipedia.org/wiki/Predictive_analytics#Classification_and_regression_trees) (Classification And Regression Tree) algorithm as well; a version of [that](http://scikit-learn.org/stable/modules/tree.html#tree-algorithms-id3-c4-5-c5-0-and-cart) is [implemented](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier) in the Python [scikit-learn](http://scikit-learn.org/stable/) package. Lastly, it's worth noting that tree-based classifiers are only needed if you actually want to have human-understandable predictive rules that you easily write down on a sheet of paper. If you just want a computer to be able to accurately predict class outcomes (i.e., was it an expensive vacation or a cheap one) based on trip features (air conditioning, brand of hotel, country location, etc.) and you don't really care *why* the computer made a particular prediction, there are many other types of [supervised machine learning](http://en.wikipedia.org/wiki/Supervised_learning) algorithms that can help you as well.
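A minimal scikit-learn version of that decision-tree workflow might look like the sketch below; the file and column names are hypothetical stand-ins for the trip features described in the question, and the depth and leaf-size settings are arbitrary.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("trips.csv")
X = pd.get_dummies(df[["country", "days", "hotel_stars", "air_conditioning", "transport"]])
y = df["price_category"]                 # e.g. "low" / "medium" / "high"

clf = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
print(export_text(clf, feature_names=list(X.columns)))   # human-readable if/else rules
```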
I've never seen an algorithm with a log in the denominator before, and I'm wondering if there are any actually useful algorithms with this form? I understand lots of things that might cause a log factor to be multiplied in the run time, e.g. sorting or tree based algorithms, but what could cause you to divide by a log factor?
The Rubik's Cube is a very natural (and to me, unexpected) example. An $n\times n\times n$ cube requires $\Theta(n^2/\log n)$ steps to solve. (Note that this is theta notation, so that's a tight upper and lower bound). This is shown in [this paper](http://erikdemaine.org/papers/Rubik_ESA2011/) [1]. It may be worth mentioning that the complexity of solving specific instances of the Rubik's cube is ~~open, but conjectured to be NP-hard (discussed [here](https://cstheory.stackexchange.com/questions/783/is-optimally-solving-the-nnn-rubiks-cube-np-hard) for example)~~ [NP hard](https://arxiv.org/pdf/1706.06708.pdf) [2]. The $\Theta(n^2/\log n)$ algorithm guarantees a solution, and it guarantees that all solutions are asymptotically optimal, but it may not solve specific instances optimally. Your definition of useful may or may not apply here, as Rubik's cubes are generally not solved with this algorithm ([Kociemba's algorithm](http://kociemba.org/cube.htm) is generally used for small cubes as it gives fast, optimal solutions in practice). [1] Erik D. Demaine, Martin L. Demaine, Sarah Eisenstat, Anna Lubiw, and Andrew Winslow. Algorithms for Solving Rubik's Cubes. Proceedings of the 19th Annual European Symposium on Algorithms (ESA 2011), September 5–9, 2011, pages 689–700 [2] Erik D. Demaine, Sarah Eisenstat, and Mikhail Rudoy. Solving the Rubik's Cube Optimally is NP-complete. Proceedings of the 35th International Symposium on Theoretical Aspects of Computer Science (STACS 2018), February 28–March 3, 2018, pages 24:1-24:13.
I am using an MLP neural network. My question is about splitting the data for training and testing: how much of a split is needed? Is there any rule that I always have to split the data 70% for training and 30% for testing? When I did this, my accuracy was not as good as when I split it into 10% for training and 90% for testing, where I got higher accuracy... Is this valid?
The Malley (2012) is available here: <http://dx.doi.org/10.3414%2FME00-01-0052>. A full reference is in the references part in the ranger documentation. In short, each tree predicts class probabilities and these probabilities are averaged for the forest prediction. For two classes, this is equivalent to a regression forest on a 0-1 coded response. In contrast, in `randomForest` with `type="prob"` each tree predicts a class and probabilities are calculated from these classes. In the example here I tried to use the uniform distribution instead of the normal distribution to generate the probabilities, and here the other approach seems to perform better. I wonder if these probabilities are really the truth? By the way, the same results as in the `randomForest` example above can be achieved with ranger by using classification and manual probability computation (use `predict.all=TRUE` in prediction).
**Questions** (about the common claim that decision trees have low bias and high variance) 1. Does it depend on whether the tree is shallow or deep? Or can we say this irrespective of the depth/levels of the tree? 2. Why is the bias low and the variance high? Please explain intuitively and mathematically
A bit late to the party, but I feel that this question could use an answer with concrete examples. I will write a summary of this **excellent** article: [bias-variance-trade-off](https://machinelearningmastery.com/gentle-introduction-to-the-bias-variance-trade-off-in-machine-learning/), which helped me understand the topic. The prediction error for any machine learning algorithm can be broken down into three parts: * Bias Error * Variance Error * Irreducible Error **Irreducible error** As the name implies, this is an error component that we cannot correct, regardless of the algorithm and its parameter selection. Irreducible error is due to complexities which are simply **not** captured in the training set. These could be attributes which we don't have in the learning set but which affect the mapping to the outcome regardless. **Bias error** Bias error is due to our assumptions about the target function. The more assumptions (restrictions) we make about the target function, the more bias we introduce. Models with high bias are less flexible because we have imposed more rules on the target function. **Variance error** Variance error is the variability of a target function's form with respect to different training sets. Models with small variance error will not change much if you replace a couple of samples in the training set. Models with high variance might be affected even by small changes in the training set. Consider simple linear regression: ``` Y = b0 + b1x ``` Obviously, this is a fairly restrictive definition of a target function and therefore this model has a high bias. On the other hand, due to its low variance, if you change a couple of data samples, it's unlikely that this will cause major changes in the overall mapping the target function performs. In contrast, algorithms such as k-nearest-neighbors have high variance and low bias. It's easy to imagine how different samples might affect the K-N-N decision surface. Generally, parametric algorithms have a high bias and low variance, and vice versa. One of the challenges of machine learning is finding the right balance of bias error and variance error. **Decision tree** Now that we have these definitions in place, it's also straightforward to see that decision trees are an example of a model with low bias and high variance. The tree makes almost no assumptions about the target function but is highly susceptible to variance in the data. There are ensemble algorithms, such as bootstrap aggregation and random forest, which aim to reduce variance at a small cost in bias for decision trees.
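A small simulation makes the contrast visible: refit a linear model and an unpruned tree on many resampled training sets and look at how much their predictions at a fixed point move around. This is only a sketch with made-up data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def sample(n=200):
    x = rng.uniform(-3, 3, size=(n, 1))
    y = np.sin(x[:, 0]) + rng.normal(scale=0.3, size=n)
    return x, y

x0 = np.array([[1.0]])                       # fixed query point; true value is sin(1)
preds = {"linear": [], "tree": []}
for _ in range(200):
    X, y = sample()
    preds["linear"].append(LinearRegression().fit(X, y).predict(x0)[0])
    preds["tree"].append(DecisionTreeRegressor().fit(X, y).predict(x0)[0])

for name, p in preds.items():
    print(name, "mean:", np.mean(p), "variance:", np.var(p))
# The unpruned tree is close to sin(1) on average (low bias) but its predictions
# jump around across training sets (high variance); the linear model is the reverse.
```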
I was wondering what tools that people in this field (theoretical computer science) use to create presentations. Since a great deal of computer science is not just writing papers but also giving presentations I thought that this would be an important soft question. This is inspired by the previous question [what tools do you use to write papers](https://cstheory.stackexchange.com/questions/2255/what-tools-do-you-use-to-write-papers). The most common that I have seen are as follows. * [Beamer](https://bitbucket.org/rivanvx/beamer/wiki/Home) * [Microsoft PowerPoint](http://en.wikipedia.org/wiki/Microsoft_PowerPoint) * [LaTeX](http://www.latex-project.org/) * [GraphViz](http://www.graphviz.org/) I was wondering if there are any other tricks that I am missing?
Two tools I can mention, and I guess it's an answer to both questions (tools for presentations and tools for papers). The first is [Xfig](http://xfig.org/), an ugly yet very powerful program for making figures, available for several platforms. I usually include $\LaTeX$ code and export as Combined PS/PDF/LaTeX, which allows me to compile with `(xe)latex` or `pdflatex` without having to change the input for the figures every time. Others may prefer to write code for their figures, but I've found that Xfig is powerful enough, and making figures is quite fast once you are used to the functionalities and the keyboard shortcuts. The second is the book [Trees, maps and theorems](http://www.treesmapsandtheorems.com/), by Jean-luc Doumont. It is not a tool to make or give a presentation, but to make or give a good one. I am all for simplicity when it comes to presentations. We all know, or understand, that a slide full of text might not be the best means of communicating a message or a series of messages, but this book goes beyond that kind of basic, "common sense" advice, giving guidelines on how to write comprehensible text in scientific papers (and slides), how to make figures that are visually attractive and easy to understand, and even how to structure a document (you'd be surprised when you realize how "primitive" the intro-content-conclusions structure is).
I am starting to read a book about Computational Complexity and Turing Machines. Here is a quote:

> 
> An algorithm (i.e., a machine) can be represented as a bit string once we decide on some canonical encoding.
> 

This assertion is provided as a simple fact, but I can't understand it. For example, if I have an algorithm which takes $x$ as input and computes $(x+1)^2$, for instance:

```
int function (int x){
    x = x + 1;
    return x**2;
}
```

How can this be represented as a string using the alphabet $\{0, 1\}^\*$?
You already have a representation of that function as text. Convert each character to a one-byte value using the ASCII encoding. Then the result is a sequence of bytes, i.e., a sequence of bits, i.e., a string over the alphabet $\{0,1\}^\*$. That's one example encoding.
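For instance, a sketch of one such canonical encoding of the program text above:

```python
source = """
int function (int x){
    x = x + 1;
    return x**2;
}
"""
bits = "".join(f"{byte:08b}" for byte in source.encode("ascii"))
print(bits[:64], "...")   # a string over {0, 1} encoding the program
```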
How do these four types of gradient descent functions differ from each other? * GD * Batch GD * SGD * Mini-Batch SGD
Gradient Descent is an optimization method used to optimize the parameters of a model using the gradient of an objective function (the loss function in a NN). It optimizes the parameters until the value of the loss function is at its minimum (i.e. we've reached the **minimum of the loss function**). In the context of neural networks it is often referred to as **back propagation**. All the below methods are variants of Gradient Descent. You can learn more from this [video](https://www.youtube.com/watch?v=nhqo0u1a6fw). **Batch Gradient Descent:** The samples from the whole dataset are used to optimize the parameters, i.e. to compute the gradients for a single update. For a dataset of 100 samples, updates occur only once. **Stochastic Gradient Descent:** Stochastic GD computes the gradient for each and every sample in the dataset and hence makes an update for every sample in the dataset. For a dataset of 100 samples, updates occur 100 times. **Mini Batch Gradient Descent:** This is meant to capture the good aspects of Batch and Stochastic GD. Instead of a single sample (Stochastic GD) or the whole dataset (Batch GD), we take small batches or chunks of the dataset and update the parameters accordingly. For a dataset of 100 samples, if the batch size is 5, we have 20 batches. Hence, updates occur 20 times.

> 
> All the above methods use gradient descent for optimization. The main difference is how many samples the gradients are calculated on. Gradients are averaged in Mini-Batch and Batch GD.
> 

You can refer to these blogs/posts: [Batch gradient descent versus stochastic gradient descent](https://stats.stackexchange.com/q/49528) [Gradient Descent Algorithm and Its Variants](https://towardsdatascience.com/gradient-descent-algorithm-and-its-variants-10f652806a3)
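The difference is easiest to see in code. Below is a sketch of one epoch of mini-batch gradient descent; batch GD is the special case `batch_size = len(X)`, stochastic GD is `batch_size = 1`, and `grad` stands for whatever gradient your loss function defines.

```python
import numpy as np

def run_epoch(params, X, y, grad, learning_rate=0.01, batch_size=5):
    """One pass over the data, updating the parameters after every mini-batch."""
    idx = np.random.permutation(len(X))          # shuffle once per epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        g = grad(params, X[batch], y[batch])     # gradient averaged over the batch
        params = params - learning_rate * g
    return params
```

With 100 samples and `batch_size=5` this performs the 20 updates per epoch described above.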
Is there consensus in the field of statistics that one book is the absolute best source, completely covering every aspect of GLMs - detailing everything from estimation to inference?
The closest thing I've found to a GLM Bible is Applied Linear Statistical Models by Kutner, Nachtsheim, Neter, and Li. It's over 1400 pages and covers linear regression and GLMs. Virtually anything involving GLMs can be found in that book.
I would like to learn about Parametrized Complexity (both on the algorithmic side and on the hardness side). What books/lecture notes can I read on this subject?
See <http://fpt.wikidot.com/books-and-survey-articles>. I also prefer Flum and Grohe, especially for the hardness part, whereas the book by Niedermeier is more focused on the algorithmic side. Note that there are some technical differences between the two, for instance the definition of a parameter as polynomial time computable function in the book of Flum and Grohe, which has to be altered if one likes to consider smaller parameterized space classes (see [this article](http://link.springer.com/chapter/10.1007/978-3-642-33293-7_20) by Elberfeld, Stockhusen and Tantau).
**Deriving life expectancy from FLIPI index data for FLIPI(3) High Risk** I completely rewrote my very terribly simplistic initial question (currently erased) to take into account two key elements of advice from the original answer. **(1) Median years of survival is a good measure of life expectancy** for patients with typical disease progression within a FLIPI score. **(2) Survival rates have drastically improved in the Rituximab era.** ![](https://i.stack.imgur.com/aj1rS.jpg) The above is from Figure 4 from this paper: [Follicular Lymphoma International Prognostic Index 2004](https://ashpublications.org/blood/article/104/5/1258/18907/Follicular-Lymphoma-International-Prognostic-Index) The **6.16 years median life expectancy** referenced by the answer below may be a very reliable measure of typical outcomes for those with typical FLIPI(3) disease progression. Based on the advice in the answer I found a sufficiently large sample that recalibrated the overall survival rates for the Rituximab era: [Follicular lymphoma in the modern era: survival, treatment outcomes, and identification of high-risk subgroups 2020](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7366724/) ![](https://i.stack.imgur.com/OKZOC.png) Because the number at risk for 10 years had 58% of the subjects drop out of the study we may not be able to trust the graph's 69% survival rate. Only 9% of the subjects dropped out at the 5 year mark so we can probably trust its 82% survival rate. This is a 55% improvement over the original FLIPI data 53%. **The graph uses Kaplan-Meier to adjust for missing data.** When we simply multiply the above 6.18 year life expectancy by the 55% increased life expectancy provided by the Rituximab era **we derive a 9.56 year life expectancy for patients with a FLIPI index score of 3** (eventually treated with Rituximab or equivalent) having typical disease progression.
Those calculations aren't correct. You can't reliably turn overall-survival percentages at any given year into a survival estimate at other times unless you know the shape of the survival curve over time and make certain assumptions. Sometimes there are sharp survival drops at early times followed by a plateau of long survival times, representing a situation in which some are "[cured](https://stats.stackexchange.com/q/300733/28500)." The [paper you cite](https://doi.org/10.1182/blood-2003-12-4434), however, shows survival curves for 3 groups based on FLIPI, in its Figure 4 reproduced below: [![Survival vs FLIPI](https://i.stack.imgur.com/thcy9.png)](https://i.stack.imgur.com/thcy9.png) An index of 3 is in the "High" group. Although it's not strictly a "life expectancy," I think that a measure of survival like the median survival time (time at which half have died, survival probability of 0.5) is generally more informative.\* That's just a bit beyond 60 months after diagnosis for the "High" group. That's only a crude estimate for several reasons, and is unlikely to apply to you personally. First, FLIPI doesn't take into account the details of the different risk factors (Age, Ann Arbor Stage, hemoglobin, LDH, and number of involved sites), just the number of those adverse features (Table 4 of the cited paper). Second, survival curves often depend heavily on the particular patient population that was evaluated. The cited paper was based on an international study; the details of survival might well depend on residence location. Third, and *most important*, that paper is now nearly 20 years old and is based on initial diagnoses from 30 to 37 years ago. Therapy and outcomes have improved substantially since then, particularly due to the development of [rituximab](https://en.wikipedia.org/wiki/Rituximab), which was only approved in 1997 and thus was not available to most patients in the study you cite. See Freedman and Jacobsen, [Follicular lymphoma: 2020 update on diagnosis and management](https://doi.org/10.1002/ajh.25696), American Journal of Hematology, Volume 95, Issue 3, pages 316-327, for a more recent discussion. This type of thing is always hard when you are personally involved. Make sure to discuss your concerns with your own clinicians, who are in the best position to evaluate your specific personal risks. --- \*For example, in a "cure" model the average survival time (life expectancy) can be quite high because of the numbers of individuals who survive the disease in question and die much later from other causes. The median survival time is when there's a 50/50 chance of having survived that long.
I don't mean a TI-84 type calculator, and I also do not mean a simple calculator app on your phone. Does a basic scientific calculator require an operating system?
No. Operating systems have several purposes, such as interfacing with the hardware, managing multiple concurrent or sequential applications, and providing protection between different users. A calculator has only one user and only one application, so there's nothing to manage there. And the hardware is simple enough that the single application may as well just talk directly to it, rather than going through the intermediate layer of an operating system.
I was going through [the following slides](http://fsl.cs.illinois.edu/images/7/75/CS522-Fall-2018-Lambda-slides.pdf) and I wanted to show the following: $$ \lambda x. x \equiv\_{\alpha} \lambda y . y$$ formally. They define a an $\alpha$-conversion on page 15 as follows: $$ \lambda x . E = \lambda z.(E[x \leftarrow z])$$ however, I wasn't sure how to formally show the statement I am trying to show. Essentially I guess I don't know how to formally show in a proof that two distinct objects actually belong to this same equivalence class. The intuition and idea is clear, but how do I know if I've shown the statement? In fact if someone can show me how to do the more complicated one too that would be really helpful too: $$ \lambda x.x (\lambda y . y) \equiv\_{\alpha} \lambda y . y (\lambda x. x)$$ how do I know if I've shown what is being asked? --- Actually I think page 16 is the one thats confusing me most: > > Using the equation above, one has now the possibility to prove > $\lambda$-expressions "equivalent". To capture this provability > relation formally, we let $E \equiv\_{\alpha} E^\prime$ denote the fact > that the equation $E = E^\prime$ can proved using standard equational > deduction form the equational axioms above (($\alpha$) plus those for > substitution). > > > **Exercise 3** *Prove the following equivalences of $\lambda$-expressions:* > > > * $\lambda x.x \equiv\_{\alpha} \lambda y.y$, > * $\lambda x.x (\lambda y.y) \equiv\_{\alpha} \lambda y.y (\lambda x.x)$, > * $\lambda x.x(\lambda y.y) \equiv\_\alpha \lambda y.y(\lambda y.y)$. > > > what does: > > can be proved using standard equational deduction from the > equational axioms above > > > mean? --- Since there is already an answer that is not helping (because I don't understand the notation) I will add what I thought was the answer but I'm not sure: I would have guessed that: $$ \lambda x. x \equiv\_{\alpha} \lambda y . y$$ if and only if there is a variable such that if we plug it into the lambda functions evaluates to the same function with the same variables. i.e. $$ \lambda x. x \equiv\_{\alpha} \lambda y . y \iff \exists z \in Var : \lambda x . x = \lambda z. ( (\lambda y . y)[y \leftarrow z] )$$ if we set $z = x$ we get: $$\lambda z. ( (\lambda y . y)[y \leftarrow z] )$$ $$\lambda x. ( (\lambda y . y)[y \leftarrow x] )$$ $$\lambda x. (\lambda x .x )$$ which I assume the last line is the same as $\lambda x .x$ but I am not sure. If that were true then I'd show I can transform $\lambda y . y$ to $\lambda x . x$ which is what I assume the equivalence class should look like. Where did I go wrong?
By definition of substitution we have $$x [x \leftarrow z] = z$$ therefore $$\lambda z . x [x \leftarrow z] = \lambda z . z \tag{1}$$ because $\lambda$-abstraction is a congruence (it preserves equality). By the definition of $\alpha$-equality we have $$\lambda x . x = \lambda z . x [x \leftarrow z] \tag{2}.$$ By transitivity of equality we get from (1) and (2) that $$\lambda x . x = \lambda z . z$$ If you require more details than this, you should use a computer proof assistant to check the details.
I've been looking into the math behind converting from any base to any base. This is more about confirming my results than anything. I found what seems to be my answer on mathforum.org but I'm still not sure if I have it right. I have converting from a larger base to a smaller base down okay, because it is simply: take the first digit, multiply by the base you want, add the next digit, repeat. My problem comes when converting from a smaller base to a larger base. When doing this, they talk about how you need to convert the larger base you want into the smaller base you have. An example would be going from base 4 to base 6: you need to convert the number 6 into base 4, getting 12. You then just do the same thing as you did when you were converting from large to small. The difficulty I have with this is that it seems you need to know what one number is in the other base. So I would have needed to know what 6 is in base 4. This creates a big problem in my mind because then I would need a table. Does anyone know a way of doing this in a better fashion? I thought a base conversion would help but I can't find any that work. And from the site I found, it seems to allow you to convert from base to base without going through base 10, but you first need to know how to convert the first number from base to base. That makes it kinda pointless. Commenters are saying I need to be able to convert a letter into a number. If so, I already know that. That isn't my problem however. My problem is that in order to convert a big base to a small base I need to first convert the base number I have into the base number I want. In doing this I defeat the purpose, because if I have the ability to convert these bases to other bases I've already solved my problem. Edit: I have figured out how to convert from bases less than or equal to 10 into other bases less than or equal to 10. I can also go from a base greater than 10 to any base that is 10 or less. The problem starts when converting from a base greater than 10 to another base greater than 10. Or going from a base smaller than 10 to a base greater than 10. I don't need code, I just need the basic math behind it that can be applied to code.
You can convert from base n to any other base without converting through an intermediate base. To convert from base n to base 9, for example, you take the algorithm for conversion to base 10, and replace “10” with “9”. Same for any other base.
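To make this concrete, here is a sketch (Python, just for illustration; the function name is mine) of Horner's rule carried out directly in the target base: the running total is kept as a list of target-base digits, so no intermediate base (not even 10) is ever used, and every step involves only a single digit times a small number plus a carry, exactly as you would do it by hand.

```
def convert(digits, base_from, base_to):
    """Horner's rule evaluated in the target base: repeatedly set
    acc = acc * base_from + d, where acc is stored as a list of
    base_to digits (most significant first)."""
    acc = [0]                                     # the number 0 in base_to
    for d in digits:                              # most significant digit first
        carry = d
        for i in range(len(acc) - 1, -1, -1):     # multiply-and-add with carries
            carry, acc[i] = divmod(acc[i] * base_from + carry, base_to)
        while carry:                              # carry may spill into new digits
            carry, low = divmod(carry, base_to)
            acc.insert(0, low)
    return acc

print(convert([2, 1, 3], 4, 6))   # 213 in base 4 = 39 = [1, 0, 3] in base 6
```

For example, 213 in base 4 is 2·4² + 1·4 + 3 = 39, which the routine returns as [1, 0, 3], i.e. 103 in base 6.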
I am running `glmnet` for the first time and I am getting some weird results. My dataset has n = 139; p = 70 (correlated variables) I am trying to estimate the effect of each variable for both, inference and prediction. I am running: ``` > cvfit = cv.glmnet(X, Y,family = c('gaussian'),alpha = 0.5,intercept = T,standardize = T,nlambda=100,type = "mse") > coef(cvfit, s = "lambda.min") ``` From all the 70 estimates, two caught my attention: ``` 4 0.5731999 14 5.419356829 ``` What bugs me is the fact that: ``` > cor(X[,4],Y) [1,] 0.674714 > cor(X[,14],Y) [1,] -0.01742419 ``` In addition, if I standardize `X` myself (using `scale(X)`) and run it again: ``` > cvfit = cv.glmnet(scale(X), Y,family = c('gaussian'),alpha = 0.5,intercept = T,standardize = F,nlambda=100,type = "mse") > coef(cvfit, s = "lambda.min") ``` I now get that 4 has the highest effect and variable "14" is about 5 times smaller. I couldn't find a good description about the normalization process in glmnet. Any clue as to why this is happening (I don't think its a bug, I just would like to understand why and which one is right)? PS: I ran this many times, so I know it is not an effect of the sampling during the cross-validation.
I tracked down the standardization process of glmnet and documented it on the Thinklab platform [here](https://thinklab.com/discussion/computing-standardized-logistic-regression-coefficients/205#5). This includes a comparison of the different ways to use standardization with glmnet. Long story short, if you let glmnet standardize the coefficients (by relying on the default `standardize = TRUE`), glmnet performs the standardization behind the scenes and reports everything, including the plots, in the "de-standardized" way, i.e., in the coefficients' natural metrics.
Is there a way to use logistic regression to classify multi-labeled data? By multi-labeled, I mean data that can belong to multiple categories simultaneously. I would like to use this approach to classify some biological data.
In principle, yes - I'm not sure that these techniques are still called logistic regression, though. Actually your question can refer to two independent extensions to the usual classifiers: 1. You can require the sum of all memberships for each case to be one ("closed world" = the usual case) or drop this constraint (sometimes called "one-class classifiers"). This could be trained by multiple independent LR models, although one-class problems are often ill-posed (this class vs. all kinds of exceptions which could lie in all directions) and then LR is not particularly well suited. 2. partial class memberships: each case belongs with membership $\in [0, 1]^{n\_{classes}}$ to each class, similar to memberships in fuzzy cluster analysis: Assume there are 3 classes A, B, C. Then a sample may be labelled as belonging to class B. This can also be written as membership vector $[A = 0, B = 1, C = 0]$. In this notation, the partial memberships would be e.g. $[A = 0.05, B = 0.95, C = 0]$ etc. * different interpretations can apply, depending on the problem (fuzzy memberships or probabilities): + fuzzy: a case can belong half to class A and half to class C: [0.5, 0, 0.5] + probability: the reference (e.g. an expert classifying samples) is 80 % certain that it belongs to class A but says a 20 % chance exists that it is class C while being sure it is not class B (0 %): [0.8, 0, 0.2]. + another probability: expert panel votes: 4 out of 5 experts say "A", 1 says "C": again [0.8, 0, 0.2] * for prediction, e.g. the posterior probabilities are not only possible but actually fairly common * it is also possible to use this for training * and even validation * The whole idea of this is that for borderline cases it may not be possible to assign them unambiguously to one class. * Whether and how you want to "harden" a soft prediction (e.g. posterior probability) into a "normal" class label that corresponds to 100% membership to that class is entirely up to you. You may even return the result "ambiguous" for intermediate posterior probabilities. Which choice is sensible depends on your application. In R, e.g. `nnet::multinom` (from the nnet package that accompanies MASS) does accept such data for training. An ANN with a logistic sigmoid and without any hidden layer is used behind the scenes. I developed the package `softclassval` for the validation part. One-class classifiers are nicely explained in [Richard G. Brereton: Chemometrics for Pattern Recognition, Wiley, 2009.](http://onlinelibrary.wiley.com/book/10.1002/9780470746462) We give a more detailed discussion of the partial memberships in this paper: [Claudia Beleites, Kathrin Geiger, Matthias Kirsch, Stephan B Sobottka, Gabriele Schackert & Reiner Salzer: Raman spectroscopic grading of astrocytoma tissues: using soft reference information. Anal Bioanal Chem, 2011, Vol. 400(9), pp. 2801-2816](http://dx.doi.org/10.1007/s00216-011-4985-4)
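For point 1, here is a minimal sketch of the "multiple independent LR models" idea in Python with scikit-learn (my own choice of tooling; the answer above uses R, and the data below are made up). One binary logistic regression is fitted per label, and each returns its own membership probability, so the probabilities need not sum to one.

```
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
# Multi-label targets: each sample can carry any subset of the 3 labels.
Y = (X @ rng.normal(size=(4, 3)) + rng.normal(scale=0.5, size=(200, 3))) > 0

# One independent binary logistic regression per label.
models = [LogisticRegression().fit(X, Y[:, k]) for k in range(Y.shape[1])]

x_new = rng.normal(size=(1, 4))
memberships = np.array([m.predict_proba(x_new)[0, 1] for m in models])
print(memberships)   # one probability per label; they need not sum to 1
```

Thresholding each probability separately gives a hard multi-label prediction, or you can keep the soft memberships, as discussed under point 2.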
This question is inspired by an [existing question](https://cstheory.stackexchange.com/questions/2562/one-stack-two-queues "existing question") about whether a stack can be simulated using two queues in amortized $O(1)$ time per stack operation. The answer seems to be unknown. Here is a more specific question, corresponding to the special case in which all PUSH operations are performed first, followed by all POP operations. How efficiently can a list of $N$ elements be reversed using two initially empty queues? The legal operations are: 1. Enqueue the next element from the input list (to the tail of either queue). 2. Dequeue the element at the head of either queue and enqueue it again (to the tail of either queue). 3. Dequeue the element at the head of either queue and add it to the output list. If the input list consists of elements $[1,2,...,N-1,N]$, how does the minimum number of operations required to generate the reversed output list $[N,N-1,...,2,1]$ behave? A proof that it grows faster than $O(N)$ would be especially interesting, since it would resolve the original question in the negative. --- Update (15 Jan 2011): The problem can be solved in $O(N \log N)$, as shown in the submitted answers and their comments; and a lower bound of $\Omega(N)$ is trivial. Can either of these bounds be improved?
If N is a power of two, I believe O(N log N) operations suffice, even for a somewhat more restricted problem in which all items start on one of the queues and must end up in reverse order on one of the queues (without the input and output lists). In O(N) steps it is possible to start with all elements on one queue, play "one for you one for me" to split them into alternating subsets on the other queue, and then concatenate them all back into one queue. In terms of the binary representations of the positions of the items, this implements a rotate operation. In O(N) steps it is also possible to pull off pairs of elements from one queue, swap them, then put them back, reversing all pairs. In terms of the binary representations of the positions of the items, this complements the low order bit of the position. By repeating O(log N) times an unshuffle and a pairwise swap, we can complement all the bits of the binary representations of the positions — which is the same thing as reversing the list.
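For concreteness, a small simulation of that scheme (Python, using `collections.deque` and only enqueue/dequeue at the allowed ends; this is just a sanity check of the argument above, not part of the original answer). Each round costs O(N) queue operations and there are log2(N) rounds, giving O(N log N) overall.

```
from collections import deque

def reverse_with_two_queues(items):
    """Reverse a sequence of N = 2^k items using only queue operations:
    log2(N) rounds of (unshuffle, then pairwise swap)."""
    n = len(items)
    assert n > 0 and n & (n - 1) == 0, "this sketch assumes N is a power of two"
    q1, q2 = deque(items), deque()

    for _ in range(n.bit_length() - 1):            # log2(N) rounds
        # Unshuffle ("one for you, one for me"): even positions go to q2,
        # odd positions cycle to the back of q1; then concatenate q2 + q1.
        for i in range(n):
            (q2 if i % 2 == 0 else q1).append(q1.popleft())
        while q1:
            q2.append(q1.popleft())
        q1, q2 = q2, q1                            # everything back on one queue

        # Pairwise swap: complements the low-order bit of each position.
        for _ in range(n // 2):
            a, b = q1.popleft(), q1.popleft()
            q1.append(b)
            q1.append(a)

    return list(q1)

print(reverse_with_two_queues([1, 2, 3, 4, 5, 6, 7, 8]))   # [8, 7, ..., 1]
```

After the log2(N) rounds every bit of each item's position has been complemented exactly once, which maps position p to N-1-p, i.e. the list comes out reversed.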
**EDIT STARTS** After seeing the comments and answers, I believe I started in a wrong direction. I have a set of rectangles, which I want to cluster as shown below. ![enter image description here](https://i.stack.imgur.com/V5vOs.png). The approach I took was to consider central points of each rectangle as a data point in $R^2$ and cluster them using Euclidean distance (K-means, K-mediods approach or any other method). I am **not** trying to discover "shapes" in the data, as I know the best shape would be rectangles. This is the reason I was trying to know if there is any way to include "shape priors" in the clustering methods. There might be algorithmic formulations of this problem, I would be glad if someone can point me to. I hope it is clear now what I want to do, I will be glad if someone can point me to right directions. **EDIT ENDS** Suppose, we have a set of points in $R^d$ which we want to cluster. We do have some idea about the shape of the clusters, i.e., we know the clusters should be spherical, ellipsoidal or rectangular. Is there a way to include this "shape prior" in the clustering process? (it would be more useful for me if this has been done in some well known algorithms like k-means) I did find some papers like [this one](http://research.microsoft.com/pubs/147122/PPC_nips.pdf) and [this one](http://www.cs.utexas.edu/~ml/papers/semi-thesis-05.pdf) which talks about imposing some constraint into clustering, but the constraints are primarily in the form of "pairwise constraints" i.e. specifying whether two instances should be in same or different clusters.
There is the frequent claim that k-means "prefers" spherical clusters. Mathematically, it produces Voronoi cells, but there exists a close relationship between Voronoi cells, nearest neighbors and Euclidean spheres. In R, when you look at the `Mclust` function, you can specify which model to use. If you allow varying variances, it will model the data using spheres of different sizes; if you add in covariances, the clusters can also be rotated (ellipsoids). I'm not aware of an approach preferring rectangular clusters in any meaningful way. You could however use an R\*-tree index, and use the index pages as clusters. The question you didn't answer is: *what are you trying to do*? If you have requirements such as rectangles, you are most likely *not* trying to *discover structure*, but instead you want to squeeze your data into a predefined model. At which point you are in the domain of constrained *optimization* and data modelling, not structure discovery (clustering). When specifying such constraints, you should also talk about how to handle model overlaps and similar issues.
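As an aside (and an assumption on my part, since the answer above is about R's `Mclust`): scikit-learn's `GaussianMixture` exposes the analogous model choice through its `covariance_type` argument, which constrains cluster shapes to spheres, axis-aligned ellipsoids, or freely rotated ellipsoids.

```
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two elongated blobs of made-up data, one of them rotated.
A = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.5, 0.4]])
B = rng.normal(size=(200, 2)) @ np.array([[0.4, 0.0], [0.0, 3.0]]) + [10.0, 0.0]
X = np.vstack([A, B])

for cov in ["spherical", "diag", "full"]:
    gm = GaussianMixture(n_components=2, covariance_type=cov, random_state=0).fit(X)
    print(cov, "average log-likelihood:", round(gm.score(X), 3))
# 'spherical' forces ball-shaped clusters, 'diag' allows axis-aligned ellipsoids,
# 'full' allows rotated ellipsoids and should fit these blobs best.
```

None of these options gives rectangles either, which is in line with the point above: rectangular regions are a modelling constraint rather than something standard clustering models express.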
Let's say there's a single process running on the machine which utilizes all available processors (whether physical or logical processors). The process uses 100% processor utilization most of the time on all processors. Which machine will perform better? * Machine with 2 cores, 1 thread per core (2 physical processors); or the * Machine with 2 cores, 2 threads per core (4 logical processors) This is assuming everything (memory, clock speed, etc.) is the same and the only thing that differs is the number of threads. My initial thought would be that the former -- the machine with 2 raw cores (1 thread each) -- will perform better because it doesn't have context switching, whereas the latter would have context-switching overhead. Which machine will perform better?
There's no real way to answer this without a lot of details about both the code to be executed, and the micro-architecture of the CPU executing the code. A CPU that can execute multiple threads per core will typically start with a pool of instructions, decoded and ready to execute. As instructions enter the pool, a scoreboard1 gets updated to reflect inputs to an instruction, execution resource(s) needed by that instruction, and the output of the instruction. So you might have an instruction like `add r0, r1, r2`, meaning add register 0 to register 1 and put the result in register 2. The inputs are registers 0 and 1, the execution resource needed is an integer adder, and the output is register 2. Then there's a scheduler that looks at the resource usage and tries to find instructions that don't conflict, and each clock cycle, tries to find as many of those as possible to execute. As it does so, it updates the scoreboard so the next clock cycle, it'll know what resources are in use now. That all leads to the basic question: given the parallel resources provided by this CPU, and the instructions in the streams to be executed, what is the average level of conflict between instructions? The drastically simplified answer is that more often than not, executing two independent streams of instructions will result in more instructions that can execute with fewer conflicts, so overall speed will increase much more often than not. It's rarely a question of which is faster--only of how much speed you'll gain by executing more threads in parallel, and whether that gain is enough to justify the extra labor to make that work, or whether you'd have gained more by expending more the CPU budget on other things that could have made an even bigger difference (e.g., bigger cache, better branch prediction, etc.) ### Goal There's also room for a bit of question about your goal here. A single core executing two threads will typically execute each thread a *little* slower than that thread would execute on a core by itself. On the other hand, the overall throughput of the system will be somewhat *greater* than if you ran only one thread at a time. For example, if you had a CPU that was split almost perfectly between two threads, each might run at 60% of the speed it would if it was the only thread running on a core. So, if you really care almost exclusively about the performance of one thread, then running only one thread per core may well improve the speed of a thread when it is running. At the same time, if you have two threads each running at 60% the speed it would on its own, your overall speed is 120% of what it would be with each thread running on its own, meaning you're getting about 20% more work done per unit time. In a game (for one example) it's pretty common to have one thread that really matters, so you really want to optimize performance for one thread. If that's your case, you probably only want one thread per core. On the other hand, with something like a server you often have *lots* of requests happening all the time, and one major concern is maximizing overall throughput. In such a case, more threads per core can make a lot more sense (but also note that total throughput isn't usually the *only* concern even in this case). ### CPU Utilization Since "CPU Utilization" was mentioned in a comment, I'll comment on it a bit as well. Generally speaking, when an operating system reports something like percentage of CPU utilization, it's basically irrelevant to questions like this. 
An OS is basically just reporting what percentage of CPUs have threads assigned to execute on them at any given time. For example, if a single-threaded CPU executes an instruction to read from main memory, it could easily take 50-100 clock cycles to execute a single instruction. According to the OS, that's keeping the CPU 100% busy for those clock cycles--but most of the CPU's actual resources would be available, so it could perfectly well execute other instructions during that time. --- 1. Well, the early versions were called "scoreboards". More recently there are more complex structures with other names, but I'm going to call all of them scoreboards to keep things simple.
I'm trying to fit a 4 parameter boltzmann sigmoid and get an error: "Error in nls(y ~ a0 + (a1 - a0)/(1 + exp((a2 - x)/a3)), start = list(a0 = max(y), : singular gradient" I have figured out that the code runs if the a3 parameter is set to .9 or greater, or -.9 or less. Does anyone have the reason this is? I want to provide a starting parameter for a3 as the slope according to the description on this website: <http://www.originlab.com/doc/Origin-Help/Boltzmann-FitFunc> . That is why I have the linear fit coefficient a3.s, but the result is < .9 and I get the error. Is there a way to estimate a3.s prior to use as starting parameter for nls? I am simply using a linear fit of the midpoint of the sigmoid +/- 10 x units - is that the correct interpretation of the a3 parameter? Here is my code: ``` #fit boltzman sigmoid a0.s=max(y); a1.s=min(y); a2.i=which.min( abs(((a0.s+a1.s)/2) - y) ); a2.s=x[a2.i] lin.x.i=x<a2.s+10 & x>a2.s-10 a3.s=unname(coef(lm(y[lin.x.i]~so[lin.x.i]))[2]) fit <- nls(y ~ a0 + (a1-a0)/(1+exp((a2-x)/a3)), start=list(a0=max(y), a1=min(y), a2=a2.s,a3=.9) , trace=TRUE) params=coef(fit) curve(params[1]+(params[2]-params[1])/(1+exp((params[3]-x)/params[4])), 1,100,col='black',add=T,type='l') ``` Here is the data: ``` x=c( 75, 40, 90, 55, 15, 100, 10, 70, 90, 50, 15, 5, 5, 70, 100, 20, 60, 65, 20, 50, 30, 85, 60, 80, 55, 40, 45, 95, 10, 55, 60, 10, 35, 80, 75, 25, 30, 5, 35, 50, 100, 40, 30, 80, 20, 45, 25, 25, 95, 95, 65, 35, 90, 85, 70, 15, 75, 45, 85, 65); y=c(4.673686, 0.034781, 5.014355, 0.843847, 0.013337, 4.214557, 0.015299, 5.017280, 4.327815, 0.041139, 0.008704, 0.007437, 0.005125, 4.725786, 3.869776, 0.018725, 4.514051, 3.232932, 0.012979, 0.257651, 0.028170, 4.723512, 2.676991, 5.018232, 0.633399, 0.040133, 0.051864, 5.019395, 0.006505, 0.642376, 2.752317, 0.010827, 0.029303, 4.050711, 3.698887, 0.018385, 0.029491, 0.013894, 0.032034, 0.053761, 5.029349, 0.038272, 0.032619, 5.030450, 0.022356, 0.053421, 0.025370, 0.024763, 4.948973, 3.254528, 1.149153, 0.038530, 4.612227, 4.048692, 4.809153, 0.016246, 5.014711, 0.062841, 5.026961, 2.951881) ``` Related to this question: the formula on the linked to website has a slight variation in the equation, where the the a2 parameter is used in the form "x-a2", while the equation I provided, and got from my data acquisition software's curve fitting function is the one I provided in the code with "a2-x". Which form of the Boltzmann is correct? Does the difference matter?
Why are you reinventing the wheel? Use the self-starting model function `SSfpl` for your model. ``` fitr <- nls(y ~ SSfpl(x, a1, a0, a2, ma3)) ``` The parameter `ma3` is `-a3` in your notation, but otherwise the parametrization is identical, and you get slightly better convergence. You should probably be using weighted least squares, since ordinary least squares assumes the variability of $y-E(y)$ does not depend on the value of $x$, which is clearly violated in your data.
I have customer data for around 400,000 customers where 270,000 of them are current customers and 130,000 of them are past customers who churned, what I am doing is classifying them as 0 (non-churn) and 1 (churner) to come up with probabilities for likelihood of churning. I am using random forests in R. What I want to know is can I use the full training set (splitting 80/20 for train and test sets) then use the entire current customer list to output the probabilities or will using the same data as the training/testing data affect the final output? Should I instead take a sample of current and past customers and not include that in the final output of the model? I need to use some current customer data to train the model but can I still use that same data to output the churn risk?
In contemporary usage the p-value refers to the cumulative probability of the data given the null hypothesis being at or greater than some threshold, i.e. $P(D|H\_0)\le\alpha$. I think that $H\_0$ tends to be a hypothesis of 'no effect', usually proxied by a comparison to the probability of a satisfactorily unlikely random result in some number of trials. Depending on the field it varies from 5% down to 0.1% or less. However, $H\_0$ does not have to be a comparison to random. 1. It implies that 1/20 results may reject the null when they should not have. If science based its conclusions on single experiments then the statement would be defensible. Otherwise, if experiments were repeatable it would imply that 19/20 would not be rejected. The moral of the story is that experiments should be repeatable. 2. Science is a tradition grounded in "objectivity", so "objective probability" naturally appeals. Recall that experiments are supposed to demonstrate a high degree of control, often employing block design and randomisation to control for factors outside of the study. Thus, comparison to random does make sense because all other factors are supposed to be controlled for except for the ones under study. These techniques were highly successful in agriculture and industry prior to being ported to science. 3. I'm not sure if a lack of information was ever really the problem. It's notable that for many in the non-mathematical sciences, statistics is just a box to tick. 4. I'd suggest a general read about decision theory, which unites the two frameworks. It simply comes down to using as much information as you have. Frequentist statistics treat model parameters as fixed but unknown values. Bayesians assume parameters in models come from distributions conditioned by what we know. If there is enough information to form a prior and enough information to update it to an accurate posterior then that's great. If there isn't then you may end up with worse results.
The exercises in a textbook I studied ask about the best case for Shell sort. I scribbled a derivation for it in the margins almost two years ago. Basically I don't know if this was my own derivation or one copied from an authoritative source. I have elaborated upon it below. Could you let me know if the reasoning is right here? * The least number of comparisons occurs when the data is completely sorted. * For a particular value of the increment, say $h\_i$, each of the $h\_i$ sub-sequences requires at most one less comparison than the number of elements in the sub-sequence (as insertion sort is used), which is ${N \over h\_i} - 1$, where $N$ is the total number of data items. * For the given data, in this situation $h\_i \times \left( {N \over h\_i} - 1 \right) = N - h\_i$ comparisons are needed, as there are $h\_i$ sub-sequences. * If the increment sequence selected has $k$ increments (such that $h\_k = 1$), the total number of comparisons required would be $C(N) \ge (N - h\_1) + (N - h\_2) + ... + (N - h\_k) = kN - \sum h\_i = O(N)$
So finally this should be the correct solution: The variable ordering is $x\_{0} < x\_{5} < x\_{1} < x\_{4} < x\_{2} < x\_{3}$. The BDD is: ![enter image description here](https://i.stack.imgur.com/2ZSDJ.png)
If you go to [FFFFOUND!](http://ffffound.com/) and click on some image you will notice that on the new page, under the image, there is a section called "You may like these images." which suggests 10 images that look similar to the original. What would be a good algorithm to achieve this functionality for a collection of images? Any documentation, books, etc. related to such algorithms is very appreciated. Also algorithms for finding similar images that yield better results than those seen on FFFFOUND! website are also welcome.
You could take a collective intelligence approach and try to determine similarity between various users based on their tastes in other images. If I like a bunch of images and another user likes the majority of the same images, then it could be said that I will most likely like pictures the other user likes. You might want to try and find information on Collaborative Filtering and Euclidean distance/Pearson correlation (within the context of social networks).
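A minimal sketch of that idea (Python, with made-up like/dislike data; the variable names are mine): represent each user as a vector of likes over the image catalogue, score user similarity with Pearson correlation, and recommend the images that the most similar user liked but you have not marked yet.

```
import numpy as np

# Rows = users, columns = images; 1 = liked, 0 = not liked (toy data).
likes = np.array([
    [1, 1, 0, 1, 0, 0],   # me
    [1, 1, 0, 1, 1, 0],   # user with very similar taste
    [0, 0, 1, 0, 0, 1],   # user with very different taste
])

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

me = likes[0]
sims = np.array([pearson(me, other) for other in likes[1:]])
best = 1 + int(np.argmax(sims))                   # most similar other user
candidates = np.where((likes[best] == 1) & (me == 0))[0]
print("similarities:", np.round(sims, 2), "-> recommend image(s)", candidates)
```

Image-content features (colour histograms, tags, and so on) could be mixed in the same way, but the purely behavioural signal above is often already enough for a "you may like these" list.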
A tree is a special kind of graph. However, I came across a data structure which is like a rooted tree, but where nodes are allowed to have direct links to any of their descendants. Shortcuts, if you will. This is not a tree anymore. This is a specialized DAG that has more restrictions; that is, it has a single root (or source). Does this type of graph have a name? CLARIFICATION: By 'tree', I'm referring to the [tree data structure](https://en.wikipedia.org/wiki/Tree_(data_structure)), not a [tree in graph theory](https://en.wikipedia.org/wiki/Tree_(graph_theory)). It seems crazy to me that such related objects have the same name even though they don't mean the same thing. The tree data structure is always rooted but there are rooted DAGs that are not trees. {1→2, 2→3, 1→3, 4→2} doesn't qualify because 4→2 is an edge toward an ancestor, not a descendant. {1→2, 1→3, 2→4, 3→4} doesn't qualify. {1→2, 2→3, 1→3} and {1→2, 1→3, 2→4, 3→5, 3→6, 1→4, 1→6} both qualify.
Let us actually use the master theorem. Define $S(n) = T(e^n)$ for all $n$. Then $$S(n) = T(e^n) = 2T(\sqrt{e^n}) + \log(e^n) = 2T(e^{n/2}) + n = 2S(n/2) + n$$ Now we can apply the second case of [the master theorem](https://cs.stackexchange.com/a/2823/91753) to $S(n)$ for $a = b = 2$ and $f(n) = n$ to obtain $$ S(n) = \Theta(n\log n)$$ So for $n\gt0$, $$ T(n) = S(\log n) = \Theta(\log n \log\log n)$$
I totally understand what big $O$ notation means. My issue is when we say $T(n)=O(f(n))$, where $T(n)$ is the running time of an algorithm on input of size $n$. I understand the semantics of it. But $T(n)$ and $O(f(n))$ are two different things. $T(n)$ is an exact number, but $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n)$ ***equals*** $O(f(n))$. If one asks you what's the ***value*** of $O(f(n))$, what would be your answer? There is no answer.
Strictly speaking, $O(f(n))$ is a *set* of functions. So the value of $O(f(n))$ is simply *the set of all functions that grow asymptotically not faster than $f(n)$*. The notation $T(n) = O(f(n))$ is just a conventional way to write that $T(n) \in O(f(n))$. Note that this also clarifies some caveats of the $O$ notation. For example, we write that $(1/2) n^2 + n = O(n^2)$, but we never write that $O(n^2)=(1/2)n^2 + n$. To quote Donald Knuth (The Art of Computer Programming, 1.2.11.1): > > The most important consideration is the idea of *one-way equalities*. [...] If $\alpha(n)$ and $\beta(n)$ are formulas that involve the $O$-notation, then the notation $\alpha(n)=\beta(n)$ means that the set of functions denoted by $\alpha(n)$ is *contained* in the set denoted by $\beta(n)$. > > >
I came across the following explanation of the log-linear regression model. Given the model $\log(Y\_i) = β\_0 + β\_1X\_i + u\_i$, the expected value of $\log(Y)$ given $X$ is $β\_0 + β\_1X$. So far, so good. But then it says: > > 'When $X$ is $X+ΔX$, the expected value is given by $\log(Y+ΔY)$'. > > > I don't see why this is necessarily the case. Could someone explain why $\log(Y+ΔY) = β\_0 + β\_1(X+ΔX)$?
Your question is incomplete without defining what $\Delta Y$ is. Regardless, a tricky thing about log-linear regression is that if you want to get the expected value after the anti-log transformation, a bias has to be corrected. Details are given in many classical papers (e.g., Beauchamp, J.J. and Olson, J.S., 1973. Corrections for bias in regression estimates after logarithmic transformation. Ecology, 54(6), pp.1403-1407; Sprugel, D. G. "Correcting for bias in log-transformed allometric equations." Ecology 64 (1983): 209-210; Newman MC. Regression analysis of log‐transformed data: Statistical bias and its correction. Environmental Toxicology and Chemistry: An International Journal. 1993 Jun;12(6):1129-33.) Here is the gist: suppose that $Z=\log(Y)=\beta\_0+\beta\_1 X+u$. The expected value of $Z$, or $\log(Y)$, is simply $E[Z]=E[\log(Y)]=\beta\_0+\beta\_1 X$, but the expectation of $Y$ is not $\exp(\beta\_0+\beta\_1 X)$; the correct expression is $E(Y)=\exp(\beta\_0+\beta\_1 X+{\sigma}^2/2)$. Whatever you are trying to derive, if it involves the anti-log transformation and taking an expectation, this bias correction should not be ignored.
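A small simulation of the point (Python, with made-up parameters): back-transforming the fitted log-linear predictor with a plain exponential underestimates $E[Y \mid x]$, while multiplying by $\exp(\hat\sigma^2/2)$, the correction discussed in the papers above, gets close to the true mean.

```
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
beta0, beta1, sigma = 1.0, 0.3, 0.5
x = rng.uniform(0, 10, n)
y = np.exp(beta0 + beta1 * x + rng.normal(0, sigma, n))   # log-normal errors

# OLS fit of log(y) on x.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
s2 = np.var(np.log(y) - X @ coef, ddof=2)                 # residual variance

x0 = 5.0
naive = np.exp(coef[0] + coef[1] * x0)                    # biased back-transform
corrected = naive * np.exp(s2 / 2)                        # bias-corrected estimate
true_mean = np.exp(beta0 + beta1 * x0 + sigma**2 / 2)
print(round(naive, 2), round(corrected, 2), round(true_mean, 2))
```

With these numbers the naive back-transform is off by a factor of about $\exp(\sigma^2/2) \approx 1.13$, and the gap grows quickly with the residual variance.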
I have recently been doing a two-sample analysis and have a little trouble interpreting the resulting $p$-value. The $p$-value was 0.12, so at the 5% level I cannot say the samples are different. **However, does $p=0.12$ still mean I can say there is a detectable difference between samples, just not as obvious?** My current intuition is that until $p<0.5$ the interpretation that the datasets are different is more plausible than that they are the same, yet of course at e.g. 0.4 the risk of that claim is very high and thus maybe not useful.
The Wikipedia article on the p-value should give you a precise definition. Essentially, it is the probability of obtaining data at least as inconsistent with the null hypothesis as yours, assuming the null hypothesis is true. With a p-value of 0.12, you would need a large significance level to reject your null hypothesis. * You can think of it as: there may be some difference between the samples, but the evidence against the null is not as strong as you might have thought. If you have no or very little difference, your p-value should be close to 1. * Do you mean p < 0.05? 0.5 is simply too big. * You should never think your groups are `different` or `same`, because they will never be the same. If they were the same, you wouldn't have to do a statistical test at all. You should ask yourself: is a p-value of 0.12 enough to convince you that the two groups are statistically different enough that you can call the difference significant? The answer is no; we usually reject the null hypothesis when the p-value is less than 0.05. Your results indicate your groups are not statistically significantly different unless you accept a significance level of 12%.
Most papers are now written collaboratively, and collaborators are often located in different places. I have always used version control systems for my documents and code, and have also found version control critical for collaborative software projects, but it seems many researchers in theory avoid their use for writing joint papers. To convince my collaborators that version control (revision control) is a good idea for working together, there seem to be some prerequisites. *It is not possible to force everyone to worry about a specific set of conventions for line breaks and paragraphs, or to avoid tab/space conversions.* > > Does someone offer free hosting of small shared document repositories, with text-document-friendly version control that can handle word-level diffs (***not*** line-based)? > > > If not, then I would welcome other suggestions that are based on experience (let's avoid speculation, please). I was thinking of Git, Subversion, Mercurial, darcs, or Bazaar, set up to handle word-level differences with wdiff, together with a simple way of setting up access secured by public keys (for instance via ssh). However, none of the version control providers that I looked at seem to offer anything like this. For scientific collaboration the "enterprise" features stressed by many of these companies are not very important (lots of branches, integration with trac, auditing by third parties, hierarchical project teams). But word-level diffs seem critical yet unsupported. In my experience, with line-level diffs for text files, everyone has to avoid reformatting paragraphs and editors that change tabs to spaces or vice versa cause problems; there also seem to be many spurious edit conflicts. I have used wdiff and latexdiff quite successfully to help with manual merging of changes previously, so I am hoping a word-level diff would reduce such problems. See related question at MO about [tools for collaboration](https://mathoverflow.net/questions/3044/tools-for-collaborative-paper-writing), and related questions over at TeX.SE, about [version control for LaTeX documents](https://tex.stackexchange.com/questions/1118/what-are-the-advantages-of-using-version-control-git-cvs-etc-in-latex-document) and [LaTeX packages for version control](https://tex.stackexchange.com/questions/161/latex-packages-for-use-with-revision-control). See also the [SVN Hosting Comparison Review Chart](http://www.svnhostingcomparison.com/) for a large list of hosting providers, for just one of the main version control systems. --- *Edit:* Jukka Suomela's answer to the TeX.SE question "[Best LaTeX-aware diff and merge tools for subversion](https://tex.stackexchange.com/questions/4182/best-latex-aware-diff-and-merge-tools-for-subversion)" seems to be the best suggestion so far, covering how to interpret the deltas on a word level. Moreover, Jukka has explained how the differences between successive versions on the repository end are separate from the user-level differences used for conflict detection and merging of changes. Jukka's answer at TeX.SE explicitly excludes simultaneous edits and merging, relying instead on the traditional atomic edit token to avoid edit conflicts. Clarifying (and modifying) my original question, is there a way to ensure that edit conflicts can be resolved on a word difference basis, rather than on a line difference basis? 
In other words, can `wdiff` or similar tools be integrated into the *conflict detection* part of the version control tools, similar to the way end-of-line differences and differences in whitespace can be ignored?
While reading your great post and looking around for a solution myself, I stumbled upon the option to **colorize changes at word level in gitk**. The gitk parameter seems to be a new and/or undocumented feature, since the auto-completion does not offer it and the [gitk man page](ftp://ftp.kernel.org/pub/software/scm/git/docs/gitk.html) does not list it. Here are the options that I found: ``` gitk --word-diff=plain gitk --word-diff=porcelain gitk --word-diff=color ``` You can find several discussions on that topic by searching for ["diff --color-words" gitk](http://tinyurl.com/3sh79dw). **Edit:** This is what it looks like ... ![Differences colored at a word-level using gitk](https://i.stack.imgur.com/9Ans7.png)
Is there a better term for "complete k-partite graph" in the case where k is not fixed? If I say "complete k-partite graph", people tend to assume "for some particular k". In other words, what's a term for any graph for whom each connected component in the complement graph is a clique? I asked this before, but it was as part of another question, so it was ignored.
I believe the most standard term is [complete multipartite graph](http://en.wikipedia.org/wiki/Glossary_of_graph_theory#Independence).
Frankly I'm very uncomfortable with the material right now. There are some things I can understand, but many I still do not. My first assignment is asking me in one question (which I do know how to do) to give a full description of a TM that accepts a language $L = \{ x \in \{0,1\}^\* \mid x \text{ is divisible by } 4 \}$. I know that any binary string ending with $00$ is divisible by 4, so $\{00,100,1100,1000,11100,11000,10100,10000,\dots\}$ is the language that this TM accepts. But on the topic of (un)decidability: I know that a language is decidable if there exists a TM that accepts all strings in, and only strings from that language — and that same TM rejects all strings and only strings not in that language. Which leads to the question: What is the difference between a Turing machine *accepting* and *deciding* a language?
In situations like these, where you are wondering about the difference between two phrases, the correct thing to do is to look up some definitions and think about them. You will discover several variations, perhaps something like this: 1. A machine "eats" a language $L$ if on input $x$ it halts and outputs $1$ if $x \in L$, and $0$ otherwise. 2. A machine "drinks" a language $L$ if on input $x$ it halts in an `accept` state when $x \in L$ and in a `reject` state if $x \not\in L$. 3. A machine "sniffs" a language $L$ if on input $x$ it halts in an `accept` state when $x \in L$, and never terminates if $x \not\in L$. 4. A machine "gobbles" a language $L$ if on input $x$ it reaches an `OK` state when $x \in L$ and never enters the `OK` state otherwise. 5. A machine "chews" a language $L$ if on input $x$ it halts in state `0` when $x \in L$, and either never terminates or halts in state `1` when $x \not\in L$. These all look the same but are all slightly different. You should expect some definitions to be slightly broken (because you got them from Wikipedia or your classmate's notes), or phrased in a weird way that suits the needs of a particular textbook, etc. The most important thing to do is to figure out what the definitions are really trying to convey. If they are basic definitions that everyone refers to, they are likely to tell you something important. (In a research paper people sometimes make definitions to numb the readers' brains.) Once the *concepts* are clear, it does not matter what they are called. Also, in case of terminological confusion, you will always be able to quickly resolve possible misunderstanding because you already know what concepts to expect. In the present case, there are really two possible notions. One in which a machine always halts and signals acceptance/non-acceptance in some way, for example by halting in certain states, or by outputting certain symbols. The other concept is when a machine does not always halt, and it uses halting to indicate acceptance and non-halting to indicate non-acceptance. A nice exercise is to figure out which of the five definitions given above corresponds to which of the two concepts (and which definition is a bit unclear, if any). But even before the exercise you need to know why these are two different concepts! You need to be aware of a language which falls under one but not the other concept. The trouble with learning math is that you have to simultaneously learn new phrases *and* their meaning, *and* figure out why people invented them.
As my data has a lot of outliers, using mean to standardize data doesn't seem to be optimal. I'm experimenting with using median to classify outliers and stumbled upon robStandardize function from robustHD package. What I struggle is how to intuitively explain the numbers produces by this kind of standardization. For example, it's quite intuitive if we scale by mean and sd and the corresponding z-score tells us how far deviated is a value from its population mean, with standard deviation as a unit. Although with median and median absolute deviation (mad), it's harder to interpret it. Based on the standardization technique using median and mad, I want to classify outliers and potentially exclude them from regression analysis. EDIT: As suggested by mkt in the comment, it'd be helpful to have some context. I'm dealing with e-commerce transaction prices. So each record will have the product SKU along with the transacted price. Throughout the year, the price of the different SKUs may vary over time which can be due to many events such as promotional period, clearing out inventory, etc. I would expect the price variation is anchored around the normal price. For example, in the case of promotional period, sellers would at most give say 30% discount. For some reasons, prices may also increase should the sellers feel increasing price doesn't have impact on sales and profit. My end objective is to calculate the demand curve based on number of sales or transactions at different price point. I would then use this demand curve to derive the optimal price for which they can maximize profit. I have hundreds of SKUs and each SKU has their own price range. The challenge is that some of these prices are really just errors which I consider outliers. The outliers will have impact on the demand curve, and therefore may not capture accurate information. For example, based on the demand curve I may calculate the price-demand elasticity of the SKU. However if these outliers were to be introduced, they inevitably produce unexpected results. Example below, I have profit-price curves for each different segments (represented by color lines) for the same product. Business users would expect, while there maybe some variations in optimal recommended prices for each segment due to different demand curves, the variations shouldn't be varied by too much. In this case, we have optimal price for segment E (blue line) to be at USD269 but for segment B (green line) at USD118. Segment E has high outliers while segment B has low outliers. The expected range of the price should have been around USD180-USD200. Notice that segment A (red line) and segment D (yellow line) are somewhat ok as there're no erroneous transacted prices for these two segments. [![enter image description here](https://i.stack.imgur.com/EjSjT.png)](https://i.stack.imgur.com/EjSjT.png) Thanks
First, the median divided by the MAD is really no less intuitive than the mean divided by the SD. It's a little less common but "how far from the median" is just as clear as "how far from the mean". Neither SD nor MAD is very intuitive, but, if anything, I'd say MAD is more so. No squaring and square rooting to deal with. Second, unless the method of recording prices of SKUs is really terrible (in which case there could be lots of errors) if you have a "lot" of outliers then I don't think they are really outliers. I think they are an indication that you are missing something. And, if the method of recording prices is so bad, then you have other problems. There will be other errors in the data that just aren't so clear. Third, rather than deleting outliers (especially a lot of them) I think you should use methods that deal with them well. There is a whole field of robust statistics. Fourth, the exception is when the data are clearly wrong. Some values are impossible. There are no 4 meter tall human beings, for instance. But, unless you have some information that you haven't put in your question, all you have is a sort of suspicion that prices shouldn't vary as much as yours do. You are surprised. One definition of an outlier is a "surprising point". That leads to: Fifth: My favorite professor in grad school used to say "if you're not surprised, you haven't learned anything". Don't throw the surprise away. Figure out where it comes from.
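For what it's worth, a minimal sketch of the median/MAD standardization in question (Python, with toy prices; I am assuming `robStandardize` follows the usual convention of scaling the MAD by 1.4826 so that it is consistent with the standard deviation under normality, as R's `mad()` does). As argued above, use the resulting scores to flag surprising prices for inspection rather than to delete them automatically.

```
import numpy as np

def robust_standardize(x):
    # (x - median) / (1.4826 * MAD): a robust analogue of the z-score.
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))
    return (x - med) / mad

prices = np.array([100, 105, 98, 102, 99, 101, 500.0])   # one suspicious price
z = robust_standardize(prices)
print(np.round(z, 2))
print("flag for inspection:", prices[np.abs(z) > 3.5])   # e.g. a cutoff of 3.5
```

The interpretation is the same as for an ordinary z-score, just with robust ingredients: how many (robust) spreads away from the (robust) centre is this price.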
I got into a debate with my supervisor over a recent paper. The test of correlation in a sample of 77 participants yielded a p-value smaller than 0.05. After removing a few participants (because later we found out they are underaged), the p-value is 0.06 (r = 0.21). Then my supervisor says, 'you should report there are no correlations between these two variables, the p-value is not significant.' Here's what I reply: It makes no sense to tell people that the result is not significant in a sample of 71, but it’s significant in a sample of 77. It is important to link the results to the findings in the literature when interpreting a trend. Although we find a weak trend here, this trend aligns with numerous studies in the literature that finds significant correlations in these two variables. Here is what my supervisor reply: I would argue the other way: If it’s no longer significant in the sample of 71, it’s too weak to be reported. If there is a strong signal, we will see it in the smaller sample, as well. Shall I not report this 'not significant' result?
For the purpose of this answer I'm going to assume that excluding those few participants was fully justified, but I agree with Patrick that this is a concern. --- There's no meaningful difference between p ~ 0.05 or p = 0.06. The only difference here is that the convention is to treat the former as equivalent to 'true' and the latter as equivalent to 'false'. This convention is *terrible* and is unjustifiable. The debate between you and your professor amounts to how to form a rule of thumb to deal with the arbitrariness of the p = 0.05 boundary. In a saner world, we would not put quite so much stock into tiny fluctuations of a sample statistic. Or to put it more colourfully: > > ...surely, God loves the .06 nearly as much as the .05. > Can there be any doubt that God views the strength of evidence for or > against the null as a fairly continuous function of the magnitude of > p?” > > > -Rosnow, R.L. & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284. So go ahead and report that p = 0.06. The number itself is fine, it's how it is subsequently described and interpreted that is important. Keep in mind that 'significant' and 'non-significant' are misleading terms. You will have to go beyond them to describe your results accurately. Furthermore, I recommend you read the answers to [What is the meaning of p values and t values in statistical tests?](https://stats.stackexchange.com/questions/31/what-is-the-meaning-of-p-values-and-t-values-in-statistical-tests)
I've been using this technique in 'black-box' form for a little while as a physics student. I have been struggling to understand what's happening under the hood for some time and I think I almost have it - but I'm mixed up about several things. I'm trying to fit together the *prior distribution, proposal distribution, and Bayes Theorem* What is the difference between the *prior* $\mathcal{P}$ and the *jumping distribution* $\mathcal{Q}$ in the simple MH algorithm? Are these the same thing? I know the jumping distribution is used to draw a candidate point $X\_c$ that is 'near' the current point $X\_i$. Often this is something like a Gaussian function centered at $X\_i$, therefore producing $X\_c$ that is usually locally 'close' to $X\_i$. That's what I keep finding, using resources such as [this youtube video](https://www.youtube.com/watch?v=h1NOS_wxgGg). Elsewhere, like in [this amazing tutorial](http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter3_MCMC/Chapter3.ipynb), I see that the prior is a surface above the parameter space in question shaped in a way that makes sense given the problem, and this surface is used to weight the posterior in conjunction with the appropriate likelihood function. For instance maybe we believe in advance that scores on an particular exam are Gaussian distributed, with known mean. So the posterior is weighted by this assumption and the MCMC will return samples with a higher density in that region for that particular parameter. That makes sense with Bayes Theorem, as the posterior is proportional to the product of the prior and the likelihood. $P(A|B) \propto P(B|A)P(A)$ I am not seeing where this happens into the MH algorithm. I know these three things (prior, proposal, and Bayes Theorem) are all wrapped up in the 'acceptance ratio' part of the algorithm. Can someone break it down for me like I'm five years old and show me exactly where and how those pieces fit together? Finally I'm having a lot of trouble understanding how we define likelihood for an arbitrary statistical model. One explanation from the first video is it's an exponential function that literally evaluates the difference between the proposed fit and the real data: $\mathcal{L} = e^{-(Data - Model)}$ Okay, so this function takes a higher value in regions of parameter space that produce better fits. But how can we say this is exactly the actual likelihood (i.e. the $P(B|A)$ part of Bayes Theorem)? This doesn't seem like it's guaranteed to be at all points proportional to what we want... Any help, guidance, or resources greatly appreciated! Thanks so much for reading to this point and bearing with me.
As you say, the three elements used in MH are the proposal (jumping) probability, the prior probability, and the likelihood.

Say that we want to estimate the posterior distribution of a parameter $\Theta$ after observing some data $\mathbf x$, that is, $p(\Theta|\mathbf{x})$. Assume that we know the prior distribution $p(\Theta)$, that summarizes our beliefs about the value of $\Theta$ *before* we observe any data.

Now, it is usually impossible to compute the posterior distribution analytically. Instead, an ingenious method is to create an abstract Markov chain whose states are values of $\Theta$, such that the *stationary distribution* of such a chain is the desired posterior distribution. Metropolis-Hastings (MH) is a schema (not the only one, e.g. there's Gibbs sampling) to construct such a chain, that requires us to **carefully** select a *jumping* (or *proposal*) distribution $q(\Theta|\theta)$.

In order to go from one value of $\Theta$, denoted as $\theta$, to the next, say $\theta'$, we apply the following procedure:

1. Sample a candidate (or proposed) $\theta^\*$ as the next value, by sampling from $q(\Theta|\theta)$, where $\theta$ is the current value.
2. Accept the candidate value with a probability given by the *MH acceptance ratio*, given by the formula:

$$ \alpha(\theta,\theta^\*) = \min\left[1,\frac{p(\theta^\*|\mathbf{x})\;q(\theta|\theta^\*)}{p(\theta|\mathbf{x})\;q(\theta^\*|\theta)} \right]. $$

By applying Bayes' rule to the posterior probability terms in the formula above, we get:

$$ \alpha(\theta,\theta^\*) = \min\left[1,\frac{p(\theta^\*)\;p(\mathbf{x}|\theta^\*)\;q(\theta|\theta^\*)}{p(\theta)\;p(\mathbf{x}|\theta)\;q(\theta^\*|\theta)} \right]. $$

After iterating this process "enough" times, we are left with a collection of points that approximates the posterior distribution.

A counterintuitive thing about the formula above is that the proposal probability of the candidate value appears at the denominator, while the "reverse" proposal probability (i.e. going from the proposed to the original value) is at the numerator. This is so that the overall transition distribution resulting from this process ensures a necessary property of the Markov chain called **detailed balance**. I found this [paper](http://www.ics.uci.edu/~johnsong/papers/Chib%20and%20Greenberg%20-%20Understanding%20the%20Metropolis-Hastings%20Algorithm.pdf) quite helpful on this topic.

Now, it is perfectly possible to use the prior distribution itself as the proposal distribution: $q(\Theta|\theta)=p(\Theta)$. Note that in this case the proposal distribution is not conditional on the current value of $\Theta$, but that is not a problem in theory. If we substitute this in the formula for $\alpha$ above, and carry out some simplifications, we obtain:

$$ \alpha(\theta,\theta^\*) = \min\left[1,\frac{p(\mathbf{x}|\theta^\*)}{p(\mathbf{x}|\theta)} \right]. $$

What is left is just the ratio of the likelihoods. This is a very simple approach and usually not very efficient, but may work for simple problems.

Regarding the likelihood, I think it really depends on what your model is. Regarding the formula you write, I don't really understand what is going on. What are $Data$ and $Model$ in there?
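To make the role of each ingredient concrete, here is a minimal MH sketch in Python. Everything in it is an assumption made purely for illustration (a Normal(theta, 1) likelihood, a Normal(0, 10) prior on theta, and a symmetric Gaussian random-walk proposal, for which the two $q$ terms cancel); it is not tied to any particular model from the question.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=50)        # observed data (known sigma = 1)

def log_prior(theta):                               # p(theta): Normal(0, 10)
    return -0.5 * (theta / 10.0) ** 2

def log_likelihood(theta):                          # p(x | theta): Normal(theta, 1)
    return -0.5 * np.sum((x - theta) ** 2)

def metropolis_hastings(n_iter=5000, step=0.5):
    theta = 0.0                                     # arbitrary starting point
    samples = []
    for _ in range(n_iter):
        theta_star = theta + rng.normal(0.0, step)  # proposal q: symmetric random walk
        # log of the acceptance ratio; the symmetric q cancels out of the ratio
        log_alpha = (log_prior(theta_star) + log_likelihood(theta_star)
                     - log_prior(theta) - log_likelihood(theta))
        if np.log(rng.uniform()) < log_alpha:       # accept with probability min(1, alpha)
            theta = theta_star
        samples.append(theta)
    return np.array(samples)

samples = metropolis_hastings()
print(samples[1000:].mean())    # posterior mean estimate, close to the data mean here
```

The prior and the likelihood enter only through the acceptance ratio, exactly as in the second formula above, while the proposal is only used to generate candidate moves.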
I need a list of $\Sigma\_2^p$ complete languages. There are two such problems listed in the [Complexity Zoo](https://complexityzoo.uwaterloo.ca/Complexity_Zoo%3aS), namely:

* Minimum equivalent DNF. Given a DNF formula F and integer k, is there a DNF formula equivalent to F with k or fewer occurrences of literals?
* Shortest implicant. Given a formula F and integer k, is there a conjunction of k or fewer literals that implies F?

Another basic $\Sigma\_2^p$ complete problem:

* $\Sigma\_i \text{SAT}$. Given a quantified boolean formula $\varphi$ of the form $\varphi = \exists \vec{u} \forall \vec{v}\, \phi(\vec{u}, \vec{v})$, is $\varphi$ valid?

However, I am hoping to find a problem which makes use of graphs (e.g. a clique-related problem).
Marcus Schaefer and Chris Umans have [a nice Garey-and-Johnson-esque survey](http://ovid.cs.depaul.edu/documents/phcom.pdf) of complete problems in the polynomial hierarchy.
I am relatively new to data science and have an exercise task. This consists of the classifications of excerpts of texts. However, the texts are obfuscated such that one cannot read the words, spaces etc. But the "patterns" are preserved. I have a training set of the following form. One .txt file with the text excerpts looking like shdbcjhbjhbefbhbwhbkbehbwbwbfhwb / wbhbwtjnwkjbrfbqlenk / wjnfkjebrkbrghkbibgibib / tberbtewtwbkwtjbrkbwkbtwrbt / . . . and one .txt file with the labels of the texts looking like 1 / 4 / 11 / 0 / . . . This means that each string of symbols belongs to a certain text and one knows to which one. The task is now to set up a classifier, to train it on above training data and to test it on test data which also consists of text excerpts. My basic idea was to interpret the data as images and to set up a deep learning NN with e.g. TensorFlow. However, I am not sure about the shape of the data which is needed to feed the data to the NN. Do I have to create label-folders containing all texts with a certain class-label or is there a more direct way (so far I created vectors in R consisting of the texts and the labels)? Must the excerpts all have the same length? How do I have to manipulate the data until I can feed it to the NN?
The text is obfuscated for you, but to the computer it is no more incomprehensible than, say, English. When classifying text, you always first vectorize your input, meaning that your text becomes a vector. This vector typically represents the frequency of the tokens in your vocabulary in that particular string (bag-of-words representation).

What you can do is represent each document/string as a vector representing the frequencies of characters, character n-grams, or even words. Once you have your input, you can deal with it as any classification task. Naive Bayes has been shown to be a good lower bound for these types of tasks.
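As a rough sketch of that pipeline (scikit-learn is just one convenient choice here, and the two example strings and label values are taken from the question only as placeholders for the real training files):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["shdbcjhbjhbefbhbwhbkbehbwbwbfhwb", "wbhbwtjnwkjbrfbqlenk"]  # one string per excerpt
labels = [1, 4]                                                        # the matching class labels

# character n-grams survive the obfuscation, since the "patterns" are preserved
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MultinomialNB(),
)
clf.fit(texts, labels)
print(clf.predict(["tberbtewtwbkwtjbrkbwkbtwrbt"]))

# with the full training set you would check this properly, e.g.
# from sklearn.model_selection import cross_val_score
# print(cross_val_score(clf, all_texts, all_labels, cv=5).mean())
```

Note that the excerpts can have different lengths; the vectorizer turns each one into a fixed-size sparse vector, so no per-class folders are needed.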
I am currently fitting a mixed effects model to some experimental data as follows: ``` model <- lmer(Y ~ X + M + (1+X+M|Subject), data=mydata) ``` The meaning of the variables is not so important here, but $X$ is the predictor of interest while $M$ is a (suspected) mediating variable. All variables are continuous and measured within-subjects. Now the question concerns the random slopes in this model. The above syntax specifies fully correlated random effects. However, I would like to remove the correlation between the two random slopes ($X$ and $M$) without removing the correlation between the random slopes and the random intercept. Initially, I attempted the following code: ``` model <- lmer(Y ~ X + M + (1+X|Subject) + (1+M|Subject), data=mydata) ``` This does produce uncorrelated random slopes but lmer() now estimates a random subject intercept *both* for $X$ and $M$. I am not sure this is correct (or what I require), because I am now forced to introduce an extra variance parameter (simply for removing another one). Is there any way to specify a single subject intercept *and* uncorrelated random slopes for $X$ and $M$?
I think what you want is not directly achievable. The best option seems to be your second one (i.e., two random intercepts, but no correlation between the slopes). Depending on the number of levels in `X` and `M`, this should decrease the number of parameters overall. As in the following example:

```
require(lme4)

# use data with two within variables:
data(obk.long, package = "afex")

# the full model
m1 <- lmer(value ~ phase + hour + (phase + hour|id), data = obk.long)
print(m1, corr = FALSE) # has correlations between the slopes.

# the alternative model with two intercepts:
m2 <- lmer(value ~ phase + hour + (hour|id) + (phase|id), data = obk.long)
print(m2, corr = FALSE) # has no correlations between the slopes but two intercepts:

# m2 has fewer parameters overall (29 vs 36):
anova(m1, m2)
```
In (intuitionistic) linear logic the usual rules for the storage modality $!$ are promotion, dereliction, contraction, and weakening: $$\frac{!\Gamma\vdash A}{!\Gamma\vdash !A}(prom) \qquad \frac{\Gamma,A \vdash C}{\Gamma,!A\vdash C}(der) \qquad \frac{\Gamma,!A,!A\vdash C}{\Gamma,!A\vdash C}(ctr) \qquad \frac{\Gamma\vdash C}{\Gamma,!A\vdash C}(wk)$$ Lafont's [soft linear logic](https://www.sciencedirect.com/science/article/pii/S0304397503005231) is obtained by observing that these rules are equivalent to soft promotion, multiplexing, and digging: $$\frac{\Gamma\vdash A}{!\Gamma\vdash !A}(sp) \qquad \frac{\Gamma,\overbrace{A,\dots,A}^n\vdash C}{\Gamma,!A\vdash C}(mpx) \qquad \frac{\Gamma,!!A\vdash C}{\Gamma,!A\vdash C}(dig)$$ and then removing the digging rule. Soft linear logic (SLL) is interesting because it corresponds closely to polynomial-time computation in various precise senses. For instance, Lafont showed that polytime functions are precisely those representable in second-order SLL in a certain sense, while [McKinley](https://www.sciencedirect.com/science/article/pii/S1567832608000179) showed that they are also those representable in a naive set theory based on SLL, and [lambda calculi](https://www.sciencedirect.com/science/article/pii/S1571066108001710) based on SLL can also characterize polytime computation (plus other complexity classes when augmented with additional features). My question is what happens if in SLL we replace the multiplexing rule (which is actually a rule schema consisting of one rule for each external natural number $n$) with the following "strong multiplexing" or "weak contraction" rule? $$\frac{\Gamma,!A,A\vdash C}{\Gamma,!A \vdash C}\qquad (\*)$$ (and also weakening, which is a special case of ordinary multiplexing but not of $(\*)$) Ordinary contraction says that we can use $!A$ as two $!A$'s — hence any number of $!A$'s, and hence any number of $A$'s — while ordinary multiplexing says that we can use $!A$ as any number of $A$'s as long as we do it all at once. So, for instance, if we have $!A$ and two branches of a derivation both need some $A$'s, then ordinary contraction says we can give them both a whole $!A$, while ordinary multiplexing says we need to decide right away before doing the branch how many $A$'s each branch needs. The rule $(\*)$ seems to be in between: it says that from $!A$ we can extract any number of $A$'s and keep the $!A$ around, but we can't get more than one $!A$. So if we have $!A$ and two branches of a derivation both need some $A$'s, then we can give *one* of them a whole $!A$ but we need to decide right away how many $A$'s the other branch needs. The question "what happens" is admittedly a bit vague, but it could be made precise in various ways. Is rule $(\*)$ conservative over ordinary multiplexing? Is contraction conservative over rule $(\*)$? Does SLL+$(\*)$ still correspond to polytime computation, in any or all of the ways that ordinary SLL does? Or does it correspond to some larger complexity class? I would be interested in answers to any of these questions. (My reason for asking is partly curiosity, but also partly practical: it could be inconvenient for some purposes that ordinary multiplexing is a rule schema indexed by external natural numbers, whereas $(\*)$ is a single rule.)
Your rule $(\ast)$ is sometimes referred to as "absorption". I think the first who considered it was Jean-Marc Andreoli in his [paper on *focusing proofs*](https://pdfs.semanticscholar.org/b9dd/97a9ed29263923a2d7da195f1f7e790242d1.pdf). Indeed, it makes a lot of sense in proof search: read bottom-up, it says that when you have $?A$ (on the right), you may extract a copy of $A$ and try and do something with it, while keeping $?A$ for future use.

Of course, as you know, absorption plus weakening and promotion (the usual one, i.e., functoriality of $!(-)$ + comonad comultiplication) gives a system equivalent to full linear logic. You are asking about the system obtained by also taking promotion to be strictly functorial (what Lafont calls "soft promotion"). Let me call it $\mathbf{PL}^-$, for reasons I'll clarify momentarily.

As you rightly observe, $\mathbf{PL}^-$ is much more powerful than $\mathbf{SLL}$. Indeed, and this is essentially a computational reformulation of your answer, the untyped term calculus underlying $\mathbf{PL}^-$ is Turing-complete, whereas the untyped calculus underlying $\mathbf{SLL}$ has exactly the same expressiveness as $\mathbf{SLL}$: its terms strongly normalize in $O(s^d)$ steps, where $s$ is the size and $d$ the exponential depth of the term being normalized (this is *the* key feature of so-called "light" logics: their normalization does not rely on the logical complexity of formulas. Types/formulas are there only to ensure that the normal forms are of a prescribed shape: integers, strings, booleans, etc.).

Now I must apologize for invoking my own work, but it is truly relevant to your question: $\mathbf{PL}^-$ is actually the intersection of linear logic and a system I call *parsimonious logic* ($\mathbf{PL}$), which is a variant of linear logic based on the fact that the exponential modality obeys the isomorphism $$!A \cong (A\&1)\,\otimes\,!A$$ and nothing more (except functoriality, of course). With Kazushige Terui, we refer to this iso as *Milner's law*, because of its similarity with the structural equality $!P\equiv P\mathrel{|}\,!P$ in the $\pi$-calculus (especially if one considers the *affine* version of parsimonious logic, which is most frequent in applications).

Milner's law does not hold in $\mathbf{LL}$, so $\mathbf{PL}$ is not a subsystem of linear logic. On the other hand, the $!(-)$ of linear logic is a comonad and has contraction, which is not the case in $\mathbf{PL}$, so this latter is not a supersystem of $\mathbf{LL}$. Categorically, a model of $\mathbf{PL}$ is just a SMCC with products (in fact, free copointed objects suffice) and with a monoidal endofunctor $!(-)$ satisfying Milner's law (naturally in $A$). Comparatively, models of $\mathbf{LL}$ are much more complex to describe.

As you see, the implication from left to right of Milner's law gives you precisely weakening and absorption (composing with the projections of $\&$, which is the product as usual). So the system you are asking about is $\mathbf{PL}$ with only "half" of Milner's law, and in fact $\mathbf{PL}^-=\mathbf{PL}\cap\mathbf{LL}$.

Not unlike $\mathbf{SLL}$, it turns out that $\mathbf{PL}$ has quite interesting complexity properties.
However, quite unlike $\mathbf{SLL}$ and Girard's *light linear logic*, complexity is not controlled by the structure of the logical rules (they are Turing-complete) but by the types: simply-typed $\mathbf{PL}$ captures $\mathsf L$ (deterministic logspace), whereas $\mathbf{PL}$ with linear polymorphism (i.e., where the $X$ in $\forall X.A$ may only be instantiated with a $!$-free formula) captures $\mathsf P$ (deterministic polytime). I also conjecture that "parsimonious system F" ($\mathbf{PL}$ with full polymorphism) captures primitive recursion. When I say "captures" above I mean that one defines types $\mathsf{Str}$ of binary strings and $\mathsf{Bool}$ of booleans (more or less as usual) and looks at the class of problems decidable by terms of type $\mathsf{Str}\multimap\mathsf{Bool}$ (or instantiations of them when polymorphism is absent/limited). The side of Milner's law that's missing from $\mathbf{PL}^-$ is necessary for the completeness with respect to logspace and polytime (for instance, without it you cannot type the predecessor with simple types $\mathsf{Nat}\multimap\mathsf{Nat}$). So simply-typed (resp. linear polymorphic) $\mathbf{PL}^-$ does not correspond to any class I know of: you will only be able to solve logspace (resp. polytime) problems in it, but you'll be missing some of those. However, when you have full second order (or when you are untyped), $\mathbf{PL}^-$ seems to be (is) just as expressive as $\mathbf{PL}$. In particular, one may play around with "parsimonious OCaml", which is a (fictional, I haven't implemented it :-) ) variant of OCaml following the type discipline prescribed by $\mathbf{PL}^-$. The funny thing about such a language is that it does not have general fixpoints, only *linear* fixpoints: when you write $$\mathtt{letrec}\ \mathtt f\ \mathtt x = \mathtt{code}$$ $\mathtt{code}$ may only contain *one* occurrence of $\mathtt f$ (in every branch of an $\mathtt{if\ldots then\ldots else}$ statement). Since while loops are exactly linear (tail) recursions, parsimonious Ocaml is Turing-complete. However, there's lots of "natural" OCaml programs you'll have to non-trivially rewrite for them to run in parsimonious OCaml! (Pretty much every structural recursion on binary trees for example). In a sense that I would like (but don't know how) to make precise, parsimonious logic (and hence the logic you are asking about) is the "logic of while loops". I think that this is the ultimate reason why it behaves so well from the complexity viewpoint. I'm going to stop here and give you some pointers, in case you are more curious. There is a series of papers (by myself and co-authors) on parsimonious logic and complexity, all available on my web page. However, things are a bit scattered here and there and I think that the most readable account is in Chapter 3 of my [*habilitation* thesis](http://www-lipn.univ-paris13.fr/~mazza/papers/Habilitation.pdf), so I suggest you go there instead.
In the [Wikipedia article on Rogers' theorem](http://en.wikipedia.org/wiki/Kleene%27s_recursion_theorem#Rogers.27_fixed-point_theorem), it is stated that all total computable functions have a fixed point. The notation is a little hard for me to understand; a symbol is used to denote "semantic equivalence." I do not know what semantic equivalence is; I would appreciate it if someone could shed some light on what a fixed point is in this context, and on what semantic equivalence is in this context.
The notation is explained in the Wikipedia entry (though regrettably after its first use): for partial functions $f,g$, we say that $f\simeq g$ if for all inputs $x$, $f$ halts on $x$ iff $g$ halts on $x$, and if both halt on an input $x$, then $f(x) = g(x)$. In other words, $f$ and $g$ compute the same partial function, and so they are *semantically equivalent*: they result in the same outcome.

The idea behind *semantic equivalence* is that two programs might be different but could be equivalent in the sense that they compute the same function. For example, if there are two statements $x \gets 0; y \gets 1$ and we switch their order to $y \gets 1; x \gets 0$ then the resulting programs are different *syntactically* but equivalent *semantically*.

Rogers' theorem shows that for every total computable function $P$, which we think of as a transformation rule for programs, there is some program $e$ that is equivalent to $P(e)$. The rest of the Wikipedia entry explains why this is useful.
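For example, the swap above can be written out as a toy Python illustration (not part of the theorem itself): the two functions below are syntactically different programs, but they halt on the same inputs and return the same values, so they are semantically equivalent.

```python
def f(n):
    x = 0
    y = 1
    return x + y * n

def g(n):
    y = 1
    x = 0
    return n * y + x

# different syntax, same partial function
assert all(f(n) == g(n) for n in range(100))
```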
Suppose I have a text like the one below, which usually has 2-3 sentences and 100-200 characters.

*Johny bought milk of 50 dollars from walmart. Now he has left only 20 dollars.*

I want to extract

Person Name: Johny

Spent: 50 dollars

Money left: 20 dollars.

Spent where: Walmart.

I have gone through lots of material on Recurrent neural networks. I watched the cs231n video on RNNs and understood next-character prediction. In those cases we have a set of 26 characters that we can use as output classes to find the next character using probability. But here the problem seems entirely different, because we don't know the output classes. The output depends on the words and numbers in the text, which can be any random word or number.

I read on Quora that convolutional neural networks can also extract features from text. I am wondering whether that can also solve this particular problem.
The problem you pose here is called named entity recognition (NER), or named entity extraction. There are multiple technologies (not necessarily neural networks) that can be used for this problem, and some of them are quite mature. See e.g. [this repo](https://github.com/philipperemy/Stanford-NER-Python) for an easy-to-plug-in solution, or try to apply the `ne_chunk_sents` function from the `NLTK` module in Python.
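A minimal NLTK sketch (the sentence is the one from the question; note that `ne_chunk` expects POS-tagged tokens, and the NLTK data packages have to be downloaded once - package names can differ slightly between NLTK versions):

```python
import nltk

# one-time downloads of the required NLTK data
for pkg in ["punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"]:
    nltk.download(pkg, quiet=True)

sentence = "Johny bought milk of 50 dollars from Walmart. Now he has left only 20 dollars."
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
tree = nltk.ne_chunk(tagged)                 # tree with PERSON / ORGANIZATION / ... chunks

for subtree in tree.subtrees():
    if subtree.label() != "S":
        print(subtree.label(), " ".join(word for word, tag in subtree.leaves()))
```

Fields like "Spent" and "Money left" are not standard entity types, so you would extract the amounts with an additional rule (e.g. a regular expression over number-plus-currency patterns) or train a custom sequence labeller.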
Here is an experiment I did: 1. I bootstrapped a sample $S$ and stored the results as empirical distribution under the name $S\_1$. 2. Then I bootstrapped $i=10000$ times in a row the same sample $S$ and compare the resulting empirical distributions $S\_i$ with $S\_1$ using Kolmogorov-Smirnov test . Results from the experiment: The comparisons return different $p$-values (from $0.01$ to $0.99$) and different $D$ values (from $0.02$ to $0.06$). Is that expected? If I bootstrap the same sample 1000 times isn't it expected that all 1000 empirical distributions to be from the same distribution? If yes then should I try to establish the distribution of the empirical distributions ($S\_1$, $S\_i$)? For instance: Three empirical distributions $S\_1$, $S\_2$, $S\_3$ bootstrapped from the same initial sample $S$: ``` S1: 1,2,3,4,5,6 S2: 1,3,4,5,6,7 S3: 2,4,5,6,7,8 ``` If I add them up I get: ``` 1,1,2,2,3,3,4,4,4,5,5,5,6,6,6,7,7,8 ```
The thing to recognize here is that all of your bootstrap samples come from the same population. That is, the null hypothesis obtains here. Bear in mind that under the null hypothesis, the $p$-value is distributed as a uniform. So it sounds like everything worked fine (although I don't know if that is what you were trying to do).
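A quick way to convince yourself of this is a small simulation (a sketch independent of the original experiment; with bootstrap resamples there are ties, so the Kolmogorov-Smirnov p-values are only approximately uniform):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
sample = rng.normal(size=200)        # stand-in for the original sample S

pvals = []
for _ in range(2000):
    b1 = rng.choice(sample, size=sample.size, replace=True)   # one bootstrap sample
    b2 = rng.choice(sample, size=sample.size, replace=True)   # another bootstrap sample
    pvals.append(ks_2samp(b1, b2).pvalue)

# under the null, the p-values spread out roughly uniformly over (0, 1)
print(np.quantile(pvals, [0.1, 0.25, 0.5, 0.75, 0.9]))
```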
I'm interested in why natural numbers are so beloved by the authors of books on programming languages theory and type theory (e.g. J. Mitchell, Foundations for programming languages and B. Pierce, Types and Programming Languages). Descriptions of the simply-typed lambda-calculus, and in particular of the PCF programming language, are usually based on Nat's and Bool's. For the people using and teaching general-purpose industrial PL's it is a great deal more natural to treat integers instead of naturals.

Can you mention some good reasons why PL theorists prefer nat's, besides the fact that they are a little less complicated? Are there any fundamental reasons, or is it just to honour tradition?

**UPD** For all those comments about “fundamentality” of naturals: I'm quite aware of all those cool things, but I'd rather see an example where it is really vital to have those properties in type theory or PL theory. E.g. the widely mentioned induction: when we have any sort of logic (which simply typed LC is), like basic first-order logic, we do really use induction — but induction on the derivation tree (which we also have in lambda).

My question basically comes from people in industry who want to gain some fundamental theory of programming languages. They are used to having integers in their programs, and without concrete arguments and applications to the theory being studied (type theory in our case) showing why one studies languages with only nat's, they feel quite disappointed.
Short answer: the naturals are the first limit ordinals. Hence they play a central role in axiomatic set theory (eg, the axiom of infinity is the assertion they exist) and logic, and PL theorists tend to share foundational preoccupations with logicians. We want to have access to the principle of induction to prove total correctness, termination, and similar properties, and the naturals are an (er) natural choice of well-order. I don't want to imply that finite-width binary integers are any less cool objects, though. They are representations of the p-adics, and permit us to use power series methods in number theory and combinatorics. This means that their significance becomes more visible in algorithmics than PL, since this is when we start caring more about complexity rather than termination.
I've been searching the literature for examples of **distances defined on the set of DFAs (or on the set of minimal DFAs) over a given alphabet sigma**. Since the languages they describe (regular languages) can potentially be infinite, defining a distance is not a trivial matter. Nevertheless, having a distance on these objects can be useful in order to fit them into metric spaces, which allows for a range of things (in my case, to assess the performance of an algorithm).

My only consistent idea so far is to create a distance similar to the edit distance on labeled graphs, applied to the minimized DFAs. Has anyone heard of other distances?
There are many possible distance metrics, and without any criteria, we have no basis to choose. Here are two plausible ones.

Let $L\_1,L\_2$ be the languages of the two DFAs. Let $L$ be the symmetric difference of those languages (i.e., $(L\_1 \setminus L\_2) \cup (L\_2 \setminus L\_1)$).

One dissimilarity measure is $d(L\_1,L\_2) = 2^{-n}$ where $n=\min\{|x| : x \in L\}$, i.e., $n$ is the length of the shortest word in $L$. This is not a distance metric, because it does not satisfy the triangle inequality.

Another possibility is to use the density of $L$, i.e.,

$$d(L\_1,L\_2) = \sum\_{n=0}^\infty {|L \cap \Sigma^n| \over 2^n |\Sigma|^n}.$$

This one *is* a distance metric.

These can be computed given DFAs for $L\_1,L\_2$. You can easily compute a DFA for $L$. Then, it is easy to find the length of the shortest word in $L$ using breadth-first search.

Computing the density is a bit trickier, but can be done efficiently using a little more math. Let $A$ denote the transition matrix of the DFA for $L$, i.e., $A\_{ij}$ is the number of input symbols that move the DFA from state $i$ to state $j$ (so $A\_{ij}=0$ if there is no such transition). Let $s$ denote the one-hot vector indicating the start state, i.e., $s\_i=1$ iff $i$ is the start state, and $f$ denote the one-hot vector indicating the final states, i.e., $f\_i=1$ iff $i$ is an accepting state. Now $|L \cap \Sigma^n| = s^\top A^n f$, so the distance is given by

$$d(L\_1,L\_2) = \sum\_{n=0}^\infty {s^\top A^n f \over 2^n |\Sigma|^n} = s^\top C f$$

where

$$C = \sum\_{n=0}^\infty B^n = (I - B)^{-1}$$

where $B={1 \over 2|\Sigma|} A$ and $I$ represents the identity matrix. This can be computed in time proportional to the cube of the number of states in the DFA for $L$ using standard algorithms for matrix inversion.
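A sketch of that computation in Python (the 3-state automaton below is a made-up example standing in for the DFA of the symmetric difference $L$):

```python
import numpy as np

alphabet_size = 2
# A[i, j] = number of input symbols moving state i to state j (complete DFA, 2 symbols)
A = np.array([[0, 2, 0],
              [0, 1, 1],
              [0, 0, 2]], dtype=float)
s = np.array([1.0, 0.0, 0.0])        # one-hot start state
f = np.array([0.0, 0.0, 1.0])        # one-hot accepting states

B = A / (2 * alphabet_size)
# the series sum_n B^n converges because every row of B sums to 1/2
C = np.linalg.inv(np.eye(len(A)) - B)
print(s @ C @ f)                      # the density-based distance d(L1, L2)
```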
I am currently writing my thesis on deep learning models, for which I train a VGG-like model. I always trained my model with the Early Stopping function from Keras, where it stopped training after approximately 100 epochs. My professor asked why I stop after 100 epochs and said that this is very few epochs. He said I should also try 500, 700, 800 epochs and see if my model is overfitting or does a better job. After training my model for 800 epochs, this is what comes out:

[![enter image description here](https://i.stack.imgur.com/ArBgJ.png)](https://i.stack.imgur.com/ArBgJ.png)

Looking at the Validation Accuracy and Validation Loss values it looks quite good: Acc: 1.00; Val Acc: 0.9805; Loss: 0.0019; Val Loss 0.00

But the first thing that comes to my mind is: my professor always told us that in the real world a realistic model should never have more than, let's say, 94% accuracy (given it is a somewhat more complex task and not just: is this image black or white). Looking at the image I also see there is a lot of noise in Val Loss. Does that mean my model is overfitting, or what can I understand from this?

For more information: I used "save best model" with the parameter Val Loss because it's the only parameter that stagnates so much every time. I have 2 classes with around 8000 images. My learning rate is 0.0001, my validation split is 0.35 and my batch size is 32 (because a bigger batch size causes a GPU memory error).
I would suggest this:

* After 200 epochs, lower your learning rate and lower your batch size.

Why? On some epochs you get a low loss and on some a high loss; this usually means that there is a problem in the final "fine-tuned" convergence. A reference for why learning rate and batch size are connected is [here](https://miguel-data-sc.github.io/2017-11-05-first/#:%7E:text=For%20the%20ones%20unaware%2C%20general,descent%20(batch%20size%201).). See the sketch below for one way to set this up.

Good luck!
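One way to set that up in Keras is with a learning-rate schedule (a sketch only: the cut-off of 200 epochs and the factor of 10 are the suggestion above, the initial rate is the 0.0001 mentioned in the question, and `model`, `x_train`, `y_train` are assumed to be the ones from your existing script):

```python
from keras.callbacks import LearningRateScheduler

def schedule(epoch):
    initial_lr = 1e-4                          # the learning rate from the question
    return initial_lr if epoch < 200 else initial_lr / 10.0

# `model`, `x_train`, `y_train` come from your existing training script
history = model.fit(
    x_train, y_train,
    validation_split=0.35,
    epochs=800,
    batch_size=16,                             # half of the previous batch size
    callbacks=[LearningRateScheduler(schedule)],
)
```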
I want to prove the nondeterministic space hierarchy theorem. Let $f(n),g(n)\geq\log n$ be space-constructible functions such that $f(n)=o(g(n))$.

**Prove**: $$NSPACE(f(n))\subsetneq NSPACE(g(n))$$

I feel that the standard approach of constructing a TM that takes a TM as input, simulates that machine on itself, and then flips the output won't work here, because the input may be a nondeterministic TM. Can someone suggest a hint?
> 
> But every problem in NP has a polynomial time verification algorithm, so then does that not mean that I can also verify X in polynomial time because every NP problem is reducible to X?
> 
> 

No. If a problem $P$ is **polynomial-time** reducible to $X$, it means that a solution for $X$ can be used to solve problem $P$ (in no more time than is required to find the solution for $X$, plus a polynomial overhead), and not necessarily that a solution for $P$ may be used to solve problem $X$ (unless $X$ is also polynomial-time reducible to $P$). Provided, clearly, that those problems require at least a polynomial-time algorithm to find their solution.

In other words, $X$ is at least as hard as $P$; that is, we have an upper bound on the hardness of $P$. You need to think about the reduction as a mapping: whenever you have a solution for $X$, you also have a solution for $P$.

A more accurate definition of what a reduction is can be found [here](https://en.wikipedia.org/wiki/Reduction_(complexity)#Definition).
I have a self-study question that goes as follows:

Let $X$ be one observation from a $N(0, \sigma^2)$ population. Is $|X|$ a sufficient statistic?

My question is - since you KNOW the $\mu$ parameter here is $0$, should this question be read as: is $|X|$ a sufficient statistic for $\sigma^2$, the variance? A lot of the questions in my book (Casella) proceed this way, without identifying which parameter the statistic is sufficient for.
I found the information I was talking about in my comment. In [van Buuren's book](http://rads.stackoverflow.com/amzn/click/1439868247), page 31, he writes:

"Several tests have been proposed to test MCAR versus MAR. These tests are not widely used, and their practical value is unclear. See [Enders](http://rads.stackoverflow.com/amzn/click/1606236393) (2010, pp. 17–21) for an evaluation of two procedures. It is not possible to test MAR versus MNAR since the information that is needed for such a test is missing."
[This](https://cs.stackexchange.com/q/11263/8660) link provides an algorithm for finding the diameter of an undirected tree **using BFS/DFS**. Summarizing: > > Run BFS on any node s in the graph, remembering the node u discovered last. Run BFS from u remembering the node v discovered last. d(u,v) is the diameter of the tree. > > > Why does it work ? Page 2 of [this](http://courses.csail.mit.edu/6.046/fall01/handouts/ps9sol.pdf) provides a reasoning, but it is confusing. I am quoting the initial portion of the proof: > > Run BFS on any node s in the graph, remembering the node u discovered last. Run BFS from u remembering the node v discovered last. d(u,v) is the diameter of the tree. > > > Correctness: Let a and b be any two nodes such that d(a,b) is the diameter of the tree. There is a unique path from a to b. Let t be the first node on that path discovered by BFS. If the paths $p\_1$ from s to u and $p\_2$ from a to b do not share edges, then the path from t to u includes s. So > > > $d(t,u) \ge d(s,u)$ > > > $d(t,u) \ge d(s,a)$ > > > ....(more inequalities follow ..) > > > ![](https://i.stack.imgur.com/AeruG.png) The inequalities do not make sense to me.
All parts of proving the claim hinge on 2 crucial properties of trees with undirected edges: * 1-connectedness (ie. between any 2 nodes in a tree there is exactly one path) * any node can serve as the root of the tree. Choose an arbitrary tree node $s$. Assume $u, v \in V(G)$ are nodes with $d(u,v) = diam(G)$. Assume further that the algorithm finds a node $x$ starting at $s$ first, some node $y$ starting at $x$ next. wlog $d(s,u) \geq d(s,v)$. note that $d(s,x) \geq d(s,y)$ must hold, unless the algorithm's first stage wouldn't end up at $x$. We will see that $d(x,y) = d(u,v)$. The most general configuration of all nodes involved can be seen in the following pseudo-graphics ( possibly $s = z\_{uv}$ or $s = z\_{xy}$ or both ): ``` (u) (x) \ / \ / \ / ( z_uv )---------( s )----------( z_xy ) / \ / \ / \ (v) (y) ``` we know that: 1. $d(z\_{uv},y) \leq d(z\_{uv},v)$. otherwise $d(u,v) < diam(G)$ contradicting the assumption. 2. $d(z\_{uv},x) \leq d(z\_{uv},u)$. otherwise $d(u,v) < diam(G)$ contradicting the assumption. 3. $d(s,z\_{xy}) + d(z\_{xy},x) \geq d(s,z\_{uv}) + d(z\_{uv},u)$, otherwise stage 1 of the algorithm wouldn't have stopped at $x$. 4. $d(z\_{xy},y) \geq d(v,z\_{uv}) + d(z\_{uv},z\_{xy})$, otherwise stage 2 of the algorithm wouldn't have stopped at $y$. 1) and 2) imply $\, \\ d(u,v) = d(z\_{uv},v) + d(z\_{uv},u) \\ \qquad\geq d(z\_{uv},x) + d(z\_{uv},y) = d(x,y) + 2\, d(z\_{uv}, z\_{xy}) \\ \qquad\qquad\geq d(x,y)$. 3) and 4) imply $\, \\ d(z\_{xy},y) + d(s,z\_{xy}) + d(z\_{xy},x) \\ \qquad\geq d(s,z\_{uv}) + d(z\_{uv},u) + d(v,z\_{uv}) + d(z\_{uv},z\_{xy}) \qquad\qquad\qquad\qquad \\ \, $ equivalent to $\, \\ d(x,y) = d(z\_{xy},y) + d(z\_{xy},x) \\ \qquad\geq 2\*\,d(s,z\_{uv}) + d(v,z\_{uv}) + d(u,z\_{uv}) \\ \qquad\qquad\geq d(u,v)$. therefore $d(u,v) = d(x,y)$. analogue proofs hold for the alternative configurations ``` (u) (x) \ / \ / \ / ( s )---------( z_uv )----------( z_xy ) / \ / \ / \ (v) (y) ``` and ``` (x) (u) / \ / \ / \ ( s )---------( z_xy )----------( z_uv ) \ / \ / \ / (y) (v) ``` these are all possible configurations. in particular, $x \not\in path(s,u), x \not\in path(s,v)$ due to the result of stage 1 of the algorithm and $y \not\in path(x,u), y \not\in path(x,v)$ due to stage 2.
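For completeness, a small Python sketch of the two-sweep procedure whose correctness is argued above (the adjacency-list tree at the bottom is a made-up example):

```python
from collections import deque

def bfs_farthest(tree, start):
    """Return (a node at maximum BFS distance from `start`, that distance)."""
    dist = {start: 0}
    queue = deque([start])
    last = start
    while queue:
        u = queue.popleft()
        last = u                          # the node dequeued last is a farthest node
        for v in tree[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return last, dist[last]

def tree_diameter(tree, s):
    u, _ = bfs_farthest(tree, s)          # first sweep from an arbitrary node s
    v, d = bfs_farthest(tree, u)          # second sweep from u
    return d                              # d(u, v) is the diameter

tree = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
print(tree_diameter(tree, 1))             # 3, realised by the path 3-2-4-5
```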
I have the following *randomly* generated distribution:

```
mean=100; sd=15
x <- seq(-4,4,length=100)*sd + mean
hx <- dnorm(x,mean,sd)
plot(x, hx, type="l", lty=2, xlab="x value",
 ylab="Density", main="Some random distribution")
```

[![enter image description here](https://i.stack.imgur.com/UA3cJ.jpg)](https://i.stack.imgur.com/UA3cJ.jpg)

And a "non-random" value

```
x <- seq(-4,4,length=100)*10 + mean
ux <- dunif(x = x, min=10, max=100)
non_random_value <- ux[1]
non_random_value
# [1] 0.01111111
```

I'd like to have a statistic that shows `non_random_value` is significant and doesn't come up by chance with respect to `hx`. What is a reasonable statistic to check this?
How do you define *"by chance"*? I ask because the answer to the question asked like this is very simple and very unlikely to give any meaningful results.

If you have a random variable $X$ that follows a distribution described by a cumulative distribution function $F$, then to answer your question you simply need to decide on some *arbitrary* probability cut-off $\alpha$ and then check if

$$ \Pr(X > x) = 1-F(x) < \alpha $$

or

$$ \Pr(X < x) = F(x) < \alpha $$

depending on your hypothesis, where $x$ is your value of interest.

However, doing so will lead you to meaningless results, e.g. that no human can be hit by a thunderbolt "by chance" if it happens with probability less than $\alpha$...

[![xkcd comic](https://i.stack.imgur.com/xltod.jpg)](https://i.stack.imgur.com/xltod.jpg)
(source: [xkcd.com](https://xkcd.com/795/))
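As a sketch of that check (the normal distribution is the one from the question's first code block; the value x = 140 and the cut-off alpha = 0.05 are arbitrary choices, which is exactly the problem described above):

```python
from scipy.stats import norm

mean, sd = 100, 15        # the N(100, 15) distribution from the question
x = 140                   # a made-up value of interest
alpha = 0.05              # arbitrary probability cut-off

print("Pr(X > x) =", norm.sf(x, loc=mean, scale=sd))    # 1 - F(x)
print("Pr(X < x) =", norm.cdf(x, loc=mean, scale=sd))   # F(x)
print("upper tail below cut-off?", norm.sf(x, loc=mean, scale=sd) < alpha)
```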
In an undirected graph, can two nodes at an identical distance n from the root of a [DFS](http://en.wikipedia.org/wiki/Depth-first_search) tree be neighbors in the original graph? I'm thinking no, but I'm not sure (because of back edges)
I think you are correct. In a DFS of an undirected graph, every edge is either a tree edge or a back edge to an ancestor, because we only back out of a node after all of its undiscovered neighbors have been explored beneath it. So every neighbor of a node is either at strictly smaller depth (its parent, or an ancestor reached via a back edge) or at strictly greater depth (a descendant). Hence two nodes at the same depth in the DFS tree can never be neighbors in the original graph. The sketch below checks this on a random graph.
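The check below assumes a recursive DFS (an iterative mark-on-push variant does not build a proper DFS tree and can violate the property): it assigns DFS-tree depths and then verifies that no edge of the graph joins two nodes of equal depth.

```python
import random

def dfs_depths(graph, root):
    """Depth of every node in the DFS tree rooted at `root` (recursive DFS)."""
    depth = {}
    def visit(u, d):
        depth[u] = d
        for v in graph[u]:
            if v not in depth:
                visit(v, d + 1)
    visit(root, 0)
    return depth

# build a random connected undirected graph with extra (back-edge-creating) edges
random.seed(0)
n = 50
graph = {u: set() for u in range(n)}
for u in range(1, n):
    v = random.randrange(u)                  # attach to an earlier node: stays connected
    graph[u].add(v); graph[v].add(u)
for _ in range(60):
    u, v = random.sample(range(n), 2)
    graph[u].add(v); graph[v].add(u)

depth = dfs_depths(graph, 0)
bad = [(u, v) for u in graph for v in graph[u] if depth[u] == depth[v]]
print(bad)                                   # [] -- no neighbours share a depth
```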
Simpson's paradox is a classic puzzle discussed in introductory statistics courses worldwide. However, my course was content to simply note that a problem existed and did not provide a solution. I would like to know how to resolve the paradox. That is, when confronted with a Simpson's paradox, where two different choices seem to compete for the being the best choice depending on how the data is partitioned, which choice should one choose? To make the problem concrete, let's consider the first example given in [the relevant Wikipedia article](http://en.wikipedia.org/wiki/Simpson%27s_paradox). It is based on a real study about a treatment for kidney stones. ![enter image description here](https://i.stack.imgur.com/c4m7X.png) Suppose I am a doctor and a test reveals that a patient has kidney stones. Using only the information provided in the table, I would like to determine whether I should adopt treatment A or treatment B. It seems that if I know the size of the stone, then we should prefer treatment A. But if we do not, then we should prefer treatment B. But consider another plausible way to arrive at an answer. If the stone is large, we should choose A, and if it is small, we should again choose A. So even if we do not know the size of the stone, by the method of cases, we see that we should prefer A. This contradicts our earlier reasoning. So: A patient walks into my office. A test reveals they have kidney stones but gives me no information about their size. Which treatment do I recommend? Is there any accepted resolution to this problem? Wikipedia hints at a resolution using "causal Bayesian networks" and a "back-door" test, but I have no clue what these are.
This nice article by [Judea Pearl](http://en.wikipedia.org/wiki/Judea_Pearl) published in 2013 deals exactly with the problem of which option to choose when confronted with Simpson's paradox: [Understanding Simpson's paradox (PDF)](http://ftp.cs.ucla.edu/pub/stat_ser/r414.pdf)
I have one dataset of two variables (x,y). When the data is plotted in a 2D diagram, I see some data points create a good cluster, while the other data points are scattered randomly. Here is an example: [![enter image description here](https://i.stack.imgur.com/JgrRq.jpg)](https://i.stack.imgur.com/JgrRq.jpg) *(These data points were collected through an experiment. The weight is equal among all points. After plotting the data points, I found that some points can create a cluster. Then, I changed the color of these points to green color to show those better.)* Based on the plot, I have two clusters: (1) Green points (2) Red Points. Each data point in both clusters has two variables: X, Y. **My question is that:** *How can I conduct a statistical test to (statistically) show that the Green cluster has lower entropy than the red cluster?* * H0: No differences * Ha: Green cluster is statistically better (Less Entropy) than the red cluster. I actually want to show that among all data points (Red + Green), the green data points create a good cluster.
(One step back first) Typically, the assumptions underlying a linear regression model $$y\_i = x\_i^T\beta + e\_i,\,\,\, i=1,\dots,n$$ are: 1. The errors $e\_i$ are i.i.d. with Normal distribution with mean zero and variance $\sigma^2$. 2. The covariates are either a sequence of deterministic vectors or they come from a joint distribution such that for large enough $n$ the matrix $X^TX$ is positive definite, where $X$ is the design matrix. 3. $x\_i \bot e\_i$, the covariates and the errors are independent. Of course, there are all sorts of generalizations of these assumptions (e.g. heteroscedasticity). Suppose that you remove some covariates and keep $z\_i$ covariates, then $y\_i-z\_i^T\beta\_z$ are not necessarily normal since $e\_i = y\_i-x\_i^T\beta \neq y\_i-z\_i^T\beta\_z$, and consequently nothing guarantees the normality of the residuals under the smaller model. In practice, if you fit a model, and the residuals look normal, this does not imply that under a smaller model the residuals will also look normal. Have a look at the following example in R for instance: ``` # Simulated data ns = 1000 # sample size X = cbind(1,rgamma(ns,5,5),rgamma(ns,5,5,)) # design matrix e = rnorm(ns,0,0.5) # errors beta = c(1,2,3) # true regression parameters y = X%*%beta + e # simulating the responses # fitting the model lmr = lm(y~-1+X) # residuals res = lmr$residuals # histogram and normality test: nicely normal looking hist(res) shapiro.test(res) # Using only two covariates (one of them is the intercept) Z = X[,1:2] # Fitting the smaller model lmrz = lm(y~-1+Z) # residuals resz = lmrz$residuals # histogram and normality test: not normal looking and failing the test hist(resz) shapiro.test(resz) ```
I need an algorithm which can do the following: > > Given some finite number of particles (circles) each with the same radius, and a list of prescribed points for each particle, find paths in $\mathbb{R}^2$ for the particles to take so they won't collide. > > > The list of points for a particle specifies the order in which the particle must visit those points. All particles start moving at the same time. The time elapsed for a particle to travel from any prescribed point to the next is constant for every particle regardless of distance. I'd like to find a continuous trajectory for each particle, so they won't collide (the circles won't overlap). Ideally, I would like to find the solution which minimizes total path length under the constraint that there are no collisions. Assume that the distance between the n'th prescribed point of different particles are at least the particle's diameter apart so that there are no unfixable collisions. My best solution is just making splines for every path, and keeping a record of all the collisions. I have been reading papers about potentially adding more prescribed points to bend the paths, and 'elastic bands', but I'm not sure exactly how to do it.
This algorithm runs in linear time in the size of the input, $O(n)$. To see why, try to count how many times you read or write a position in the string. Note that asymptotically this count is equal to the total running time.
I have a clustering algorithm (not k-means) with input parameter $k$ (number of clusters). After performing clustering I'd like to get some quantitative measure of quality of this clustering. The clustering algorithm has one important property. For $k=2$ if I feed $N$ data points without any significant distinction among them to this algorithm as a result I will get one cluster containing $N-1$ data points and one cluster with $1$ data point. Obviously this is not what I want. So I want to calculate this quality measure to estimate reasonability of this clustering. Ideally I will be able to compare this measures for different $k$. So I will run clustering in the range of $k$ and choose the one with the best quality. How do I calculate such quality measure? UPDATE: Here's an example when $(N-1, 1)$ is a bad clustering. Let's say there are 3 points on a plane forming equilateral triangle. Splitting these points into 2 clusters is obviously worse than splitting them into 1 or 3 clusters.
The choice of metric rather depends on what you consider the purpose of clustering to be. Personally I think clustering ought to be about identifying different groups of observations that were each generated by a different data generating process. So I would test the quality of a clustering by generating data from known data generating processes and then calculating how often patterns are misclassified by the clustering. Of course this involves making assumptions about the distribution of patterns from each generating process, but you can use datasets designed for supervised classification. A sketch of this idea is given below.

Others view clustering as attempting to group together points with similar attribute values, in which case measures such as SSE etc. are applicable. However, I find this definition of clustering rather unsatisfactory, as it only tells you something about the particular sample of data, rather than something generalisable about the underlying distributions. How methods deal with overlapping clusters is a particular problem with this view (for the "data generating process" view it causes no real problem, you just get probabilities of cluster membership).
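A sketch of the first approach (every concrete choice here - two Gaussian generating processes, k-means as the clustering method, and the adjusted Rand index as the misclassification summary - is only an illustration, not something implied by the question):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# data from two known generating processes
X = np.vstack([rng.normal(loc=[0, 0], scale=1.0, size=(100, 2)),
               rng.normal(loc=[4, 4], scale=1.0, size=(100, 2))])
true_labels = np.repeat([0, 1], 100)

scores = {}
for k in (2, 3, 4):
    found = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = adjusted_rand_score(true_labels, found)   # 1 = perfect recovery

print(scores)   # k = 2 should score highest for this data
```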
I have a lot of records like this:

![enter image description here](https://i.stack.imgur.com/EmbqW.png)

`M` is about 10 million and `N` is about 100K. Now I want to apply collaborative filtering to these data. For example, a user comes in with their features (sparse data); how do I find out which existing user is most similar to them? I don't think I can compute over all of the records every time a request comes in. Or is there any other algorithm that could do this? Thanks!
What you're talking about is a "Vector Space Model" of information retrieval. [Wikipedia lists](http://en.wikipedia.org/wiki/Vector_space_model) some programs which help with this - the one I'm most familiar with is Lucene. [This page](http://lucene.sourceforge.net/talks/pisa/) describes their algorithm. The major points are that 1) you can invert your index, 2) you can look through indices in parallel and 3) you can limit to just the top $k$. All of these things give you a pretty nice speedup.
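For the "top $k$" part, a rough sketch with scikit-learn (the random sparse matrix is only a stand-in for the real M x N user-feature data; at 10 million users you would move to an inverted or approximate index as described above, but the idea is the same):

```python
from scipy.sparse import random as sparse_random
from sklearn.neighbors import NearestNeighbors

users = sparse_random(10_000, 1_000, density=0.01, random_state=0, format="csr")  # toy M x N

index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(users)

new_user = sparse_random(1, 1_000, density=0.01, random_state=1, format="csr")
distances, neighbour_ids = index.kneighbors(new_user)
print(neighbour_ids[0])       # indices of the 5 most similar existing users
```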
I'm familiar with the basic probability rules which problems come in the form of: * What is the probability of getting a head next by tossing a coin? 0.5 * What is the probability of getting a 3 if a number is chosen uniformly at random from the set {1, 2, 3, 4}? 0.25 How more complicated forms of questions are calculated? for example: what is the probability a person sees a tree by looking out of the window given there are 2 trees that could be seen from that window? There are many things to consider: * The person's eye vision. * Time of the day (and how many light bulbs are around in case it's after sunset) * The direction at which he looks. * How tall are the trees? * Can the person pass a drug test at the given moment? * Does he have a history of delusions? And the list goes on which may contain infinite possibilities leading to seeing / not seeing a tree. Given we don't have a frequency of people who saw a tree by looking from that very same window, how to calculate the probability with any degrees of accuracy? How confident are we given that we are 100% confident the probability of getting 3 out of {1, 2, 3, 4} is 0.25? What if we don't have anything given in which case the question is the same as the title?
> > How to calculate the probability with any degrees of accuracy?? > > > There is no way to compute this because the estimates that we make to perform the computation have an undefined accuracy due to lack of knowledge. The way that it is generally tackled is that we use some *simplified* model and apply it to the problem. But the model is wrong and we have no way to express how wrong exactly. Still, as long as the range of error is small, or smaller than the statistical variations, then the model is good enough to apply. See also: <https://en.m.wikipedia.org/wiki/All_models_are_wrong>
I would like to draw a sample $\mathbf{x} \sim N\left(\mathbf{0}, \mathbf{\Sigma} \right)$. [Wikipedia](http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Drawing_values_from_the_distribution) suggests either using a [Cholesky](https://en.wikipedia.org/wiki/Cholesky_decomposition) or [Eigendecomposition](https://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix), i.e. $ \mathbf{\Sigma} = \mathbf{D}\_1\mathbf{D}\_1^T $ or $ \mathbf{\Sigma} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^T $ And hence the sample can be drawn via: $ \mathbf{x} = \mathbf{D}\_1 \mathbf{v} $ or $ \mathbf{x} = \mathbf{Q}\sqrt{\mathbf{\Lambda}} \mathbf{v} $ where $ \mathbf{v} \sim N\left(\mathbf{0}, \mathbf{I} \right) $ Wikipedia suggests that they are both equally good for generating samples, but the Cholesky method has the faster computation time. Is this true? Especially numerically when using a monte-carlo method, where the variances along the diagonals may differ by several orders of magnitude? Is there any formal analysis on this problem?
Here is a simple illustration using R to compare the computation time of the two method. ``` library(mvtnorm) library(clusterGeneration) set.seed(1234) mean <- rnorm(1000, 0, 1) sigma <- genPositiveDefMat(1000) sigma <- sigma$Sigma eigen.time <- system.time( rmvnorm(n=1000, mean=mean, sigma = sigma, method = "eigen") ) chol.time <- system.time( rmvnorm(n=1000, mean=mean, sigma = sigma, method = "chol") ) ``` The running times are ``` > eigen.time user system elapsed 5.16 0.06 5.33 > chol.time user system elapsed 1.74 0.15 1.90 ``` When increasing the sample size to 10000, the running times are ``` > eigen.time <- system.time( + rmvnorm(n=10000, mean=mean, sigma = sigma, method = "eigen") + ) > > chol.time <- system.time( + rmvnorm(n=10000, mean=mean, sigma = sigma, method = "chol") + ) > eigen.time user system elapsed 15.74 0.28 16.19 > chol.time user system elapsed 11.61 0.19 11.89 ``` Hope this helps.
I'm trying to estimate the value of a property depending on the property's characteristics. I did some research and found out that it would be better to use a **Hedonic Model/Regression** instead of plain **Least Squares Regression**. After reading a couple of papers about it, I still have some questions.

I work with **R**, so I have the data (information about other properties) saved as a data.frame, with the following columns (c stands for characteristic).

```
----------------------------------
| price | c1 | c2 | c3 | c4 | c5 |
----------------------------------
```

My questions:

1. I know how to estimate the coefficients with **Least Squares Regression**, but how do I do it with the **Hedonic Regression**? I know that there is no dedicated function for it in **R**.
2. The environment characteristics (air pollution, crime rate, etc.) are almost the same across observations, because the properties are in the same district. **Least Squares Regression** gives them a very small coefficient, but they have a big importance in real life. How can I tell the regression that they have a big importance?
3. As I understand it so far, if an attribute of an observation is missing, I should not use that observation; is that right?
4. In the calculation of the coefficients, should I use only the data from nearby properties (for example, the same district), or would it be better to use all properties in the town?

Could somebody please give me a hint? Thank you very much!
1) There is no such thing as a "hedonic regression" estimation method; "hedonic" refers to the model, and you will estimate it with a least squares / maximum likelihood estimator. An example is sketched below.

2) I understood that your goal is to estimate the effects of different characteristics; it might be that your understanding of the different factors affecting pricing is not complete. Of course, if you have insufficient variation in the factors then it might not be possible to estimate these effects. If you only have price information from similar nearby districts then this might happen.

3) Estimation methods typically eliminate incomplete records; you might try to impute missing records, for example by using characteristics from nearby observations.

4) Of course you should have enough variation in the characteristics and prices to isolate the effects of the different factors.
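To make point 1 concrete, here is a sketch of a hedonic price equation fitted by ordinary least squares. It is written in Python with statsmodels purely for illustration (in R, `lm()` plays the same role); the column names follow the data frame in the question, the file name is hypothetical, and the log-price form is a common but optional choice.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("properties.csv")            # hypothetical file with price, c1, ..., c5

# hedonic price equation: log(price) regressed on the characteristics
model = smf.ols("np.log(price) ~ c1 + c2 + c3 + c4 + c5", data=df).fit()
print(model.summary())                        # coefficients: effect of each characteristic on log price

# predicted value for one property (back-transformed from the log scale)
print(np.exp(model.predict(df.iloc[[0]])))
```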
Suppose I have a dataframe consisting of six time series. In this dataframe, some observations are missing, meaning at some timepoints all time series contain a NA-value. In R, one possible imputation package that can be used to impute time series data is Amelia. However, this package does not work for observations that are completely missing. Are there other ways to impute my data? For what it's worth, the amount of missing observations is less than 20% of all observations.
Otherwise, there is the package [mtsdi](https://cran.r-project.org/web/packages/mtsdi/index.html) for multivariate time series; it offers an EM algorithm that takes into account both temporal auto-correlation and the correlation between variables. This package seems promising, although there appear to be a few implementation problems: I ran into one, and another is reported here: <https://stackoverflow.com/questions/29472532/arima-method-in-mtsdi>
Let's say we have a neural network with one input neuron and one output neuron. The training data $(x, f(x))$ is generated by a process $$f(x) = ax + \mathcal{N}(b, c)$$ with $a, b, c \in \mathbb{R}^+$, e.g. something like ``` feature | target ----------------- 0 0.0 0 1.0 0 1.5 0 -1.2 0 -0.9 ... ``` I know that neural networks can deal pretty well with labeling errors in classification problems. Meaning if you have a large dataset and a couple of examples have the wrong label, they get basically ignored. But for this kind of problem I'm not too sure. A first experiment indicates that they do smooth values. **Are there choices in architecture / training which help the smoothing / averaging / removal of noise?** What I tried ------------ I created a network which can solve this kind of regression problem without noise. It gets a MSE of about `0.0005`. When I add a bit of noise to the training set only, I get an MSE of `0.001`: ``` #!/usr/bin/env python # core modules import random # 3rd party modules from keras.models import Sequential from keras.layers import Dense from sklearn.model_selection import train_test_split import numpy as np def main(add_noise=True): # Get data xs, ys = create_data_points(10000) x_train, x_test, y_train, y_test = train_test_split(xs, ys, test_size=0.20) # Add noise to training data if add_noise: noise = np.random.normal(0, 0.1, len(x_train)) x_train = x_train + noise # Create model model = create_model() model.compile(optimizer='rmsprop', loss='mse', metrics=['mse']) # Fit model to data. model.fit(x_train, y_train, epochs=10, batch_size=32, verbose=1) # Evaluate y_pred = model.predict(x_test, batch_size=100).flatten() print("MSE on test set:") print(((y_pred - y_test)**2).sum() / len(y_test)) def create_data_points(nb_points): xs = [] ys = [] for i in range(nb_points): x = random.random() xs.append(x) ys.append(2 * x) return np.array(xs), np.array(ys) def create_model(input_dim=1, output_dim=1): model = Sequential() model.add(Dense(200, input_dim=input_dim, activation='relu')) model.add(Dense(200, input_dim=input_dim, activation='relu')) model.add(Dense(output_dim, activation='linear')) return model if __name__ == '__main__': main() ``` Outliers -------- In an earlier version of this question I wrote "outlier" when I meant "label noise". For outliers, there is: * [The Effects of Outliers Data on Neural Network Performance](http://www.scialert.net/fulltext/?doi=jas.2005.1394.1398&org=11)
In general, simpler models are more robust to noise in the input. The strength of neural networks is also their biggest 'gotcha' - they are extremely expressive. This means they easily overfit and can be sensitive to noise in the inputs. The strategy of simplifying a model to make it more robust to noise is called regularization. There are many types:

* use smaller hidden layers (i.e. 20 instead of 200 nodes)
* use dropout, where inputs to a layer are randomly set to 0 during training (makes the network more robust to noise overall)
* stop training earlier - 'early stopping'
* use L1 or L2 regularization, which imposes a cost on the weights

All of these can be done in Keras (see the sketch below). You want to make your network more robust to noise without decreasing your validation quality. To do this, I would try the above suggestions in order.

You can measure overfitting by looking at the difference between predictive accuracy on your training data and validation data. If they are very different, your model has learned structure in your training data that is not in your validation data - and that is NOT what you want. Try to fiddle with the regularization knobs until (1) your train and validation AUC or accuracy are very similar, and (2) your validation AUC/accuracy is still sufficiently high.
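A sketch of those options in Keras (the layer sizes, the dropout rate, and the L2 coefficient are arbitrary illustrations; the model shape mirrors the small regression network in the question):

```python
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import regularizers
from keras.callbacks import EarlyStopping

model = Sequential()
model.add(Dense(20, input_dim=1, activation='relu',
                kernel_regularizer=regularizers.l2(1e-4)))   # smaller layer + L2 penalty
model.add(Dropout(0.2))                                      # dropout
model.add(Dense(20, activation='relu',
                kernel_regularizer=regularizers.l2(1e-4)))
model.add(Dense(1, activation='linear'))
model.compile(optimizer='rmsprop', loss='mse')

early_stopping = EarlyStopping(monitor='val_loss', patience=5,
                               restore_best_weights=True)
# then fit as in the question, adding a validation split and the callback:
# model.fit(x_train, y_train, validation_split=0.2, epochs=100,
#           batch_size=32, callbacks=[early_stopping])
```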
Does the unbounded fan-in circuit model apply in "practical" settings? In other words, are there real-world realisable computers with unbounded fan-in gates? As I understand, standard silicon ASICs are made of so-called cells where the number of input signals in the largest cells is small (e.g. never exceeds a fan-in of 100).
Don't forget that even though the fan-in is unbounded, the number of gates is polynomially bounded in the number of variables $n$ (in the definition of $\mathsf{AC}$, for instance).