I am looking for papers and articles on modal substructural logics-- not on the semantics of linear logic modalities, but on substructural logics augmented with standard modal operators, e.g. substructural K (something like MALL with box operator, necessitation and K rules).
I know of work adding temporal modalities to linear logic to produce what has been called *temporal linear logic* (in contrast to LTL = linear-time temporal logic). This is quite interesting: a formula (without a modality) is interpreted as resources being available *now*. The next time modality $\bigcirc-$ is interpreted as resources being available in the next time step. The box modality $\Box-$ means that the resources can be consumed at any point in the future, *determined by the holder of the resources*, whereas $\lozenge-$ means that the resources can be consumed at any point in time *determined by the system*. Notice the duality between the holder of the resource and the system.

* Banbara, M., Kang, K.-S., Hirai, T., Tamura, N.: [Logic programming in a fragment of intuitionistic temporal linear logic](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.77.6666&rep=rep1&type=pdf). In: Codognet, P. (ed.) ICLP 2001. LNCS, vol. 2237, pp. 315–330. Springer, Heidelberg (2001)
* Hirai, T.: [Propositional temporal linear logic and its application to concurrent systems](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.2107). IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences (Special Section on Concurrent Systems Technology) E83-A(11), 2219–2227 (2000)
* Hirai, T.: [Temporal Linear Logic and Its Application](http://kaminari.scitec.kobe-u.ac.jp/pub/hiraiPhD00.pdf). PhD thesis, The Graduate School of Science and Technology, Kobe University, Japan (September 2000)
* Kamide, N.: [Temporalizing Linear Logic](http://www.filozof.uni.lodz.pl/bulletin/pdf/v3634_08.pdf). Bulletin of the Section of Logic 36:3/4 (2007), pp. 173–182

There are a few papers adding all sorts of modalities to linear and affine logic:

* Kamide, N.: [Linear and affine logics with temporal, spatial and epistemic logics](http://portal.acm.org/citation.cfm?id=1143365). Theoretical Computer Science 252, 165–207 (2006)
* Kamide, N.: [Combining Soft Linear Logic and Spatio-Temporal Operators](http://logcom.oxfordjournals.org/content/14/5/625.full.pdf). Journal of Logic and Computation 14(5), 625–650 (2004)

The work on temporal linear logic has been applied in agent-oriented programming and coordination, making essential use of the interpretation of the modalities described above:

* Kungas, P.: [Temporal linear logic for symbolic agent negotiation](http://www.springerlink.com/index/r87l9g0mw7mvknr6.pdf). In: Zhang, C., W. Guesgen, H., Yeap, W.-K. (eds.) PRICAI 2004. LNCS, vol. 3157, pp. 23–32. Springer, Heidelberg (2004)
* Pham, D.Q., Harland, J., Winikoff, M.: [Modelling agent's choices in temporal linear logic](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.64.9407&rep=rep1&type=pdf). In: Baldoni, M., Son, T.C., van Riemsdijk, M.B., Winikoff, M. (eds.) DALT 2007. LNCS, vol. 4897, pp. 140–157. Springer, Heidelberg (2008)
* Clarke, D.: [Coordination: Reo, Nets and Logic](https://lirias.kuleuven.be/handle/123456789/217971). In: FMCO 2008. LNCS, vol. 5382 (2008)
I know that "conjugate prior" is to help us calculate the the denominator of the Bayes formula(to make the calculations easier). And I just learnt to approximate the inference by mean field approximation to help us calculate the denominator of the Bayes formula(make the calculations easier). What is the relation between the two? Why do we need "mean field approximation" If we have a conjugate prior?
You're talking about a pooled 2-sample t test, of $H\_0: \mu\_1 = \mu\_2$ vs $H\_a: \mu\_1 \ne \mu\_2.$ This test assumes that $\sigma\_1 = \sigma\_2.$ Let's consider a sample of size $n\_1 = 10$ from $\mathsf{Norm}(\mu = 50, \sigma\_1 = 1)$ and a sample of size $n\_2 = 40$ from $\mathsf{Norm}(\mu = 50, \sigma\_2 = 1).$ That is, the two population means are equal. We reject $H\_0$ at the 5% level, if the P-value $< 0.05.$ Comparing two specific such samples, what output do we get from the pooled 2-sample t test?

```
set.seed(1234)
x1 = rnorm(10, 50, 1);  x2 = rnorm(40, 50, 1)
t.test(x1, x2, var.eq=T)

        Two Sample t-test

data:  x1 and x2
t = 0.27657, df = 48, p-value = 0.7833
alternative hypothesis: true difference in means is not equal to 0
...
sample estimates:
mean of x mean of y 
 49.61684  49.52947 
```

All is well. From the simulation, we know that $\mu\_1 = \mu\_2 = 50,$ so $\mu\_1 - \mu\_2 = 0.$ (Also that $\sigma\_1^2 = \sigma\_2^2 = 1.$) And the test has (correctly) failed to reject $H\_0.$ However, 5% of the time, a pooled test at the 5% level will make a mistake, rejecting $H\_0$ with a P-value $< 0.05.$ We could discuss the theory to show that this rejection rate is correct. Instead, let's look at actual results of a million such pooled 2-sample t tests.

```
set.seed(817)
pv = replicate(10^6, t.test(rnorm(10,50,1), rnorm(40,50,1), var.eq = T)$p.val)
mean(pv <= 0.05)
[1] 0.049801
```

Just 'as advertised': The pooled 2-sample t test has incorrectly rejected $H\_0$ in almost exactly 5% of the tests on one million sets of two samples from the designated distributions. Now let's see what happens if we keep everything exactly the same--except that we change the population variances to be unequal, with $\sigma\_1^2 = 16$ and $\sigma\_2^2 = 1.$

```
set.seed(818)
pv = replicate(10^6, t.test(rnorm(10,50,4), rnorm(40,50,1), var.eq = T)$p.val)
mean(pv <= 0.05)
[1] 0.293618
```

Now the test is falsely rejecting about 30% of the time---much more than 5% of the time. The 'null distribution' (distribution when $H\_0$ is true) has changed substantially. Obviously, the change from equal variances to unequal variances has made a difference in how the pooled t test works. The t test cannot have "detected" that means are unequal, because they aren't. Maybe it is unfair to say that the test has "detected" unequal variances, but it is clear that unequal variances do change how the test performs. One can quibble whether equal variances are *part of* the null hypothesis. But, using the pooled t test, equal variances are essential to a *fair test of the null hypothesis.*

*Notes* about R code:

(a) The default 2-sample t test in R is the Welch test, which does not assume equal variances. The parameter `var.eq=T` leads to use of the pooled test. If one uses the Welch test for samples from populations with unequal variances, the significance level is very nearly 5%.

```
set.seed(819)
pv = replicate(10^6, t.test(rnorm(10,50,4), rnorm(40,50,1))$p.val)
mean(pv <= 0.05)
[1] 0.050252
```

(b) The vector `pv` contains P-values of a million pooled tests. The logical vector `pv <= 0.05` contains a million `TRUE`s and `FALSE`s. The `mean` of a logical vector is the proportion of its `TRUE`s.

(c) The comprehensive text *An intro. to statistical methods and data analysis, 7e,* by Ott and Longnecker (2016), Cengage, has a useful table of the critical values of the pooled t test for various sample sizes and ratios of $\sigma\_1/\sigma\_2,$ Table 6.4, p311.
Tabled values are based on fewer iterations than used in this Answer, so they do not agree exactly with answers here. (In particular, all tabled values in the column for $\sigma\_1/\sigma\_2 = 1$ should be exactly 0.050.)
I am creating a graph to show trends in death rates (per 1000 ppl.) in different countries, and the story that should come from the plot is that Germany (light blue line) is the only one whose trend is increasing after 1932. This is my first (basic) try:

[![enter image description here](https://i.stack.imgur.com/SixXW.png)](https://i.stack.imgur.com/SixXW.png)

In my opinion, this graph is already showing what we want it to tell, but it is not super intuitive. Do you have any suggestions for making that distinction among trends clearer? I was thinking of plotting growth rates, but I tried it and it is not much better. The data are the following:

```
year   de    fr    be    nl   den    ch   aut    cz    pl
1927  10.9  16.5  13    10.2  11.6  12.4  15    16    17.3
1928  11.2  16.4  12.8   9.6  11    12    14.5  15.1  16.4
1929  11.4  17.9  14.4  10.7  11.2  12.5  14.6  15.5  16.7
1930  10.4  15.6  12.8   9.1  10.8  11.6  13.5  14.2  15.6
1931  10.4  16.2  12.7   9.6  11.4  12.1  14    14.4  15.5
1932  10.2  15.8  12.7   9    11    12.2  13.9  14.1  15
1933  10.8  15.8  12.7   8.8  10.6  11.4  13.2  13.7  14.2
1934  10.6  15.1  11.7   8.4  10.4  11.3  12.7  13.2  14.4
1935  11.4  15.7  12.3   8.7  11.1  12.1  13.7  13.5  14
1936  11.7  15.3  12.2   8.7  11    11.4  13.2  13.3  14.2
1937  11.5  15    12.5   8.8  10.8  11.3  13.3  13.3  14
```
Sometimes less is more. With **less detail** about the year-to-year variations and the country distinctions you can provide **more information** about the trends. Since the other countries are moving mostly together you can get by without separate colors. In using a smoother you're requiring the reader to trust that you haven't smoothed over any interesting variation.

[![enter image description here](https://i.stack.imgur.com/q5ofr.png)](https://i.stack.imgur.com/q5ofr.png)

*Update after getting a couple requests for code*: I made this in [JMP](http://jmp.com)'s interactive Graph Builder. The JMP script is:

```
Graph Builder(
    Size( 528, 456 ),
    Show Control Panel( 0 ),
    Show Legend( 0 ),
    // variable role assignments:
    Variables( X( :year ), Y( :Deaths ), Overlay( :Country ) ),
    // spline smoother:
    Elements( Smoother( X, Y, Legend( 3 ) ) ),
    // customizations:
    SendToReport(
        // x scale, leaving room for annotations
        Dispatch( {}, "year", ScaleBox,
            {Min( 1926.5 ), Max( 1937.9 ), Inc( 2 ), Minor Ticks( 1 )} ),
        // customize colors and DE line width
        Dispatch( {}, "400", ScaleBox,
            {Legend Model( 3,
                Properties( 0, {Line Color( "gray" )}, Item ID( "aut", 1 ) ),
                Properties( 1, {Line Color( "gray" )}, Item ID( "be", 1 ) ),
                Properties( 2, {Line Color( "gray" )}, Item ID( "ch", 1 ) ),
                Properties( 3, {Line Color( "gray" )}, Item ID( "cz", 1 ) ),
                Properties( 4, {Line Color( "gray" )}, Item ID( "den", 1 ) ),
                Properties( 5, {Line Color( "gray" )}, Item ID( "fr", 1 ) ),
                Properties( 6, {Line Color( "gray" )}, Item ID( "nl", 1 ) ),
                Properties( 7, {Line Color( "gray" )}, Item ID( "pl", 1 ) ),
                Properties( 8, {Line Color( "dark red" ), Line Width( 3 )}, Item ID( "de", 1 ) )
            )} )
        // add line annotations (omitted)
    )
);
```
There are many NP-complete decision problems that ask the question whether it holds for the optimal value that OPT=m (say bin packing asking whether all items of given sizes can fit into m bins of a given size). Now, I am interested in the problem whether OPT>m. Is this a decision problem or an optimization problem? It seems to be that it lies in NP (a NTM can guess a solution and it can be verified in polynomial time that the bound is met). Is it also NP-complete? I would have said yes, because having a polynomial algorithm, we could find a solution in polynomial time for the original problem (asking whether OPT=m) by using binary search and repeatedly using the polynomial algorithm to test if OPT larger than some bound. However when I try to construct a proper solution, I always see the complication that the oracle (that asks whether OPT>m') would need to be queried more than once, and this is forbidden in the polynomial time Karp reduction. Any solutions or remarks? Would it make a difference if I ask whether OPT>=m? Thanks in advance
Regular languages are those that can be described by *weak monadic second order logic* (WMSO) [1]. Star-free languages are those that can be described by *first order logic with $<$* (FO[<]) [2]. The two logics are not equally powerful. One example for a language that is WMSO-definable but not FO[<]-definable is $(aa)^\*$ (which is clearly regular³); this can be shown using *[Ehrenfeucht-Fraissé games](https://en.wikipedia.org/wiki/Ehrenfeucht%E2%80%93Fra%C3%AFss%C3%A9_game)*⁴. --- 1. [Weak Second-Order Arithmetic and Finite Automata](http://dx.doi.org/10.1002/malq.19600060105) by Büchi (1960) 2. *Counter-free automata* by McNaughton and Papert (1971) 3. A WMSO-formula for $(aa)^\*$ is $\ \begin{align} \bigl[ \forall x. P\_a(x)\bigr] \land \Bigl[ \exists x. P\_a(x) \to \bigl[ \exists X. X(0) &\land [\forall x,y. X(x) \land \operatorname{suc}(x,y) \to \lnot X(y)] \\ &\land [\forall x,y. \lnot X(x) \land \operatorname{suc}(x,y) \to X(y)] \\ &\land [\forall x. \operatorname{last}(x) \to \lnot X(x)] \bigr] \Bigr] \;. \end{align}$ (If the word is not empty, $X$ is the set of all even indices.) 4. See also [here](http://www.math.cornell.edu/~mec/Summer2009/Raluca/index.html).
P.S. I have added the tag 'history', if there is any historical connotation. Also, I found this question [What is running time of an algorithm?](https://cs.stackexchange.com/questions/56133/what-is-running-time-of-an-algorithm) but I am not satisfied with answers.
*Time complexity* is a formal model (an abstraction) of program running time. Although on the face of it you are right that it really measures the number of steps, it is *asymptotically* no different from the actual running time of the machine (Turing machine or any other model of computation). Therefore I disagree that there is any problem with the terminology.

Think about it from the programmer's perspective. When you write a piece of code, say

```
for i = 1 ... n :
    for j = 1 ... i :
        print j
    print newline
```

you can't (as a programmer) actually predict how long the program will take to run in seconds, with accuracy. Moreover the number of seconds depends on the exact platform on which you run the code, level of parallelization, what file or output you are printing to, etc. But what you *can* measure is the number of steps your code runs -- that is, its *time complexity*, as a function of $n$. You simply count the number of times a print statement is executed. This is -- *up to a constant* -- a good and correct estimate of the actual time the program will take to run, in seconds.

In summary, the concept of *time complexity* is exactly the same concept that programmers use to think about their code's performance, and the abstraction is the same as the actual running time up to a constant.
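To make the counting concrete, here is a small Python sketch (my own addition, not part of the original answer) that literally counts the executed print statements in the loop above and compares the count with the closed-form $n(n+1)/2 + n$, i.e. $\Theta(n^2)$:

```python
def count_steps(n):
    """Count how many print statements the nested loop would execute."""
    steps = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            steps += 1          # corresponds to "print j"
        steps += 1              # corresponds to "print newline"
    return steps

for n in (10, 100, 1000):
    # counted steps vs. the formula n*(n+1)/2 + n
    print(n, count_steps(n), n * (n + 1) // 2 + n)
```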
I've been looking into the math behind converting from any base to any base. This is more about confirming my results than anything. I found what seems to be my answer on mathforum.org but I'm still not sure if I have it right. I have the converting from a larger base to a smaller base down okay because it is simply take first digit multiply by base you want add next digit repeat. My problem comes when converting from a smaller base to a larger base. When doing this they talk about how you need to convert the larger base you want into the smaller base you have. An example would be going from base 4 to base 6 you need to convert the number 6 into base 4 getting 12. You then just do the same thing as you did when you were converting from large to small. The difficulty I have with this is it seems you need to know what one number is in the other base. So I would of needed to know what 6 is in base 4. This creates a big problem in my mind because then I would need a table. Does anyone know a way of doing this in a better fashion. I thought a base conversion would help but I can't find any that work. And from the site I found it seems to allow you to convert from base to base without going through base 10 but you first need to know how to convert the first number from base to base. That makes it kinda pointless. Commenters are saying I need to be able to convert a letter into a number. If so I already know that. That isn't my problem however. My problem is in order to convert a big base to a small base I need to first convert the base number I have into the base number I want. In doing this I defeat the purpose because if I have the ability to convert these bases to other bases I've already solved my problem. Edit: I have figured out how to convert from bases less than or equal to 10 into other bases less than or equal to 10. I can also go from a base greater than 10 to any base that is 10 or less. The problem starts when converting from a base greater than 10 to another base greater than 10. Or going from a base smaller than 10 to a base greater than 10. I don't need code I just need the basic math behind it that can be applied to code.
This is a refactoring (Python 3) of [Andrej's](https://cs.stackexchange.com/a/10321/61097) code. While in Andrej's code numbers are represented through a list of digits (scalars), in the following code numbers are represented through a list of **arbitrary symbols** taken from a custom string:

```
def v2r(n, base):  # value to representation
    """Convert a positive number to its digit representation in a custom base."""
    if n == 0:
        return base[0]
    b = len(base)
    digits = ''
    while n > 0:
        digits = base[n % b] + digits
        n = n // b
    return digits

def r2v(digits, base):  # representation to value
    """Compute the number represented by string 'digits' in a custom base."""
    b = len(base)
    n = 0
    for d in digits:
        n = b * n + base[:b].index(d)
    return n

def b2b(digits, base1, base2):
    """Convert the digits representation of a number from base1 to base2."""
    return v2r(r2v(digits, base1), base2)
```

To perform a conversion from value to representation in a custom base:

```
>>> v2r(64, '01')
'1000000'
>>> v2r(64, 'XY')
'YXXXXXX'
>>> v2r(12340, 'ZABCDEFGHI')  # decimal base with custom symbols
'ABCDZ'
```

To perform a conversion from representation (in a custom base) to value:

```
>>> r2v('100', '01')
4
>>> r2v('100', '0123456789')        # standard decimal base
100
>>> r2v('100', '01_whatevr')        # decimal base with custom symbols
100
>>> r2v('100', '0123456789ABCDEF')  # standard hexadecimal base
256
>>> r2v('100', '01_whatevr-jklmn')  # hexadecimal base with custom symbols
256
```

To perform a base conversion from one custom base to another:

```
>>> b2b('1120', '012', '01')
'101010'
>>> b2b('100', '01', '0123456789')
'4'
>>> b2b('100', '0123456789ABCDEF', '01')
'100000000'
```
My question may actually be more broadly described as: can I use the fact that an algorithm is expected to return $O(f(n))$ answers to show that it can never run faster than $O(f(n))$? I would intuitively assume yes, but I am not sure. An example of what I'm talking about is an algorithm to calculate all the paths that pass through a set of points. I can easily calculate an upper bound $O(f(n))$ on how many such paths there are. Will that tell me the algorithm must be no better than $\Omega(f(n))$? Thanks
If the algorithm (modelled, say by a Turing Machine writing the output on a special output tape) generates at least $f(n)$ output, its running time can't be less than the time required to write it out, i.e., it's $\Omega(f(n))$. Can't say anything about an upper bound. Think for example about the problem of determining if an array is sorted, the result is clearly just "yes" or "no", while the running time is at least the size of the array, and that isn't $O(1)$.
How would you interpret this interaction? The structure of the data is all integer variables. Inc.fix= income, age.fix=age, profit99= profit ``` Call: lm(formula = Profit99 ~ Age.fix + Inc.fix + Age.fix:Inc.fix, data = pilg) Residuals: Min 1Q Median 3Q Max -421.86 -148.76 -84.45 55.67 1938.70 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -139.9706 11.1273 -12.579 < 2e-16 *** Age.fix 37.8453 2.4430 15.491 < 2e-16 *** Inc.fix 26.5252 1.9790 13.403 < 2e-16 *** Age.fix:Inc.fix -2.2217 0.4475 -4.965 6.92e-07 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 268.1 on 31630 degrees of freedom Multiple R-squared: 0.03443, Adjusted R-squared: 0.03434 F-statistic: 375.9 on 3 and 31630 DF, p-value: < 2.2e-16 ```
If you have a similarity matrix, try to use Spectral methods for clustering. Take a look at Laplacian Eigenmaps for example. The idea is to compute eigenvectors from the Laplacian matrix (computed from the similarity matrix) and then come up with the feature vectors (one for each element) that respect the similarities. You can then cluster these feature vectors using, for example, the k-means clustering algorithm. From a practical perspective, if your matrix is big and dense, Spectral methods can quickly become very computationally intensive and memory hogs. I used Spectral methods for image clustering and eventually classification. The results were pretty good. The difficult part was to get a good similarity matrix.
I have come across this question:

> Let 0<α<.5 be some constant (independent of the input array length n). Recall the Partition subroutine employed by the QuickSort algorithm, as explained in lecture. What is the probability that, with a randomly chosen pivot element, the Partition subroutine produces a split in which the size of the smaller of the two subarrays is ≥α times the size of the original array?

The answer is 1-2\*α. Can anyone explain how this answer is arrived at?
The other answers didn't quite click with me so here's another take:

If at least one of the 2 subarrays must be $\geq \alpha N$ you can deduce that the pivot must also be in position $\geq \alpha N$. This is obvious by contradiction. If the pivot is $< \alpha N$ then there is a subarray smaller than $\alpha N$. By the same reasoning the pivot must also be $\leq N - \alpha N$. Any larger value for the pivot will yield a smaller subarray than $\alpha N$ on the "right hand side".

This means that $\alpha N \leq \text{pivot} \leq N - \alpha N$, as shown by the diagram below:

[![enter image description here](https://i.stack.imgur.com/IhK1j.png)](https://i.stack.imgur.com/IhK1j.png)

What we want to calculate then is the probability of that event (call it A), i.e. $P(\alpha N \leq \text{pivot} \leq N - \alpha N)$. The way we calculate the probability of an event is to sum the probability of the constituent outcomes, i.e. that the pivot lands at $\alpha N, \alpha N+1, \alpha N+2, \ldots, N-\alpha N$. That sum is expressed as:

$$P(A) = \sum\_{i=\alpha N}^{N - \alpha N} P(\text{pivot} = i) = \sum\_{i=\alpha N}^{N - \alpha N} \frac{1}{N}$$

Which easily simplifies to:

$$\frac{N - \alpha N - \alpha N}{N}$$

With some cancellation we get:

$$1 - 2\alpha$$
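To double-check the $1 - 2\alpha$ result, here is a small Python simulation sketch (my own addition; the array size and $\alpha$ values are arbitrary):

```python
import random

def split_at_least_alpha(n, alpha, trials=100_000):
    """Estimate P(smaller side of a random split has size >= alpha*n)."""
    hits = 0
    for _ in range(trials):
        pivot = random.randrange(n)           # uniformly chosen pivot position
        smaller = min(pivot, n - 1 - pivot)   # size of the smaller subarray
        if smaller >= alpha * n:
            hits += 1
    return hits / trials

for alpha in (0.1, 0.25, 0.4):
    print(alpha, round(split_at_least_alpha(1000, alpha), 3), 1 - 2 * alpha)
```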
So let's say I have an array of elements where each of the values can range from 0 to $n^2-1$. I'm trying to make an algorithm to sort this array in O(n) running time and I was thinking of using radix sort. The run time of radix sort is O(d(n+N)) or O(dn) if n is really large. So how can I modify radix sort so that it runs in O(n)? EDIT: I don't think you guys understand this. The amount of elements in the array is n but the ACTUAL value for each element can range from 0 to n^2 - 1. So if we have an array with 10 elements in it then the largest the element can be is 99 and the smallest it can be is 0 but there will still be 10 elements in the array.
If you want to implement this on a RAM with integer math (ie. a real computer if your n is smaller than $2^{32}$), you can have a look at [Upper bounds for sorting integers on random access machines](http://dx.doi.org/10.1016/0304-3975(83)90023-3). The authors show that integers in the range [0,$n^c$] can be sorted in $O(n(1 + \log c))$. Word RAM models add some loglog factors to that runtime.
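For illustration, here is a minimal sketch (my own addition, not from the linked paper) of the usual textbook trick behind the question's $O(d(n+N))$ bookkeeping: treat each key in $[0, n^2-1]$ as two base-$n$ digits and run two stable counting-sort passes, giving $d = 2$ and $O(n)$ overall:

```python
def radix_sort_base_n(a):
    """Sort integers in [0, n^2 - 1] with two counting-sort passes in base n."""
    n = len(a)
    if n == 0:
        return a

    def counting_sort_by(keys, digit):
        # digit(x) must lie in [0, n-1]; stable counting sort, O(n) per pass
        count = [0] * n
        for x in keys:
            count[digit(x)] += 1
        for i in range(1, n):
            count[i] += count[i - 1]
        out = [0] * len(keys)
        for x in reversed(keys):        # iterate in reverse to keep stability
            count[digit(x)] -= 1
            out[count[digit(x)]] = x
        return out

    a = counting_sort_by(a, lambda x: x % n)    # low-order base-n digit
    a = counting_sort_by(a, lambda x: x // n)   # high-order base-n digit
    return a

print(radix_sort_base_n([15, 3, 99, 0, 42, 7, 8, 64, 23, 5]))  # n=10, values < 100
```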
If we consider literature, sorting algorithms are based only on number of comparisons needed to sort a list of size n, considering that n is the size of the input. But if we want to encode input, we can't encode each object of the list into a fixed-size binary representation because hence, we would consider that the domain of the objects is fixed and thus, I think we could find better sorting algorithms by precomputing some stuff in the Turing Machine. If we consider that the domain isn't fixed, we have to encode each of our items into a $\log(n)$-size representation. Thus the input is of size $N = n\log(n)$. But as our numbers are of variable length, then we can consider that comparison has a cost of $\log(n)$, but even with this, if we apply a reasonable sorting algorithm (ie an $n \log n$ algorithm), the algorithm will take $n \log^2(n)$ time in a Turing machine, where $n$ is the number of objects, but where $n \log n$ is the size of our input. In this case, we have an algorithm of complexity lower than $O(N \log(N))$ where $N$ is the size of the input. Is there a mistake?
Without looking at any details, why do your say your algorithm beats $O(N\log N)$? In your notation, $N\log N = n\log n\log(n\log n) = n\log n\log n + n\log n\log\log n = O(n\log^2n)$. No contradiction.
When performing linear regression, it is often useful to apply a transformation such as a log-transformation to the dependent variable to achieve better conformance to a normal distribution. Often it is also useful to inspect the betas from the regression to better assess the effect size/real relevance of the results. This raises the problem that when using e.g. a log transformation, the effect sizes will be on the log scale, and I've been told that because of the non-linearity of the scale used, back-transforming these betas will result in non-meaningful values that do not have any real-world usage. So far we have usually performed linear regression with transformed variables to inspect the significance and then linear regression with the original non-transformed variables to determine the effect size. Is there a right/better way of doing this? For the most part we work with clinical data, so a real-life example would be to determine how a certain exposure affects continuous variables such as height, weight or some laboratory measurement, and we would like to conclude something like "exposure A had the effect of increasing weight by 2 kg".
I would suggest that transformations aren't important to get a normal distribution for your errors. Normality isn't a necessary assumption. If you have "enough" data, the central limit theorem kicks in and your standard estimates become asymptotically normal. Alternatively, you can use bootstrapping as a non-parametric means to estimate the standard errors. (Homoskedasticity, a common variance for the observations across units, is required for your standard errors to be right; robust options permit heteroskedasticity).

Instead, transformations help to ensure that a linear model is appropriate. To give a sense of this, let's consider how we can interpret the coefficients in transformed models:

* outcome in units, predictor in units: A one unit change in the predictor leads to a beta unit change in the outcome.
* outcome in units, predictor in log units: A one percent change in the predictor leads to a beta/100 unit change in the outcome.
* outcome in log units, predictor in units: A one unit change in the predictor leads to a beta x 100% change in the outcome.
* outcome in log units, predictor in log units: A one percent change in the predictor leads to a beta percent change in the outcome.

If transformations are necessary to have your model make sense (i.e., for linearity to hold), then the estimate from this model should be used for inference. An estimate from a model that you don't believe isn't very helpful. The interpretations above can be quite useful in understanding the estimates from a transformed model and can often be more relevant to the question at hand. For example, economists like the log-log formulation because the interpretation of beta is an elasticity, an important measure in economics.

I'd add that the back transformation doesn't work because the expectation of a function is not the function of the expectation; the log of the expected value of beta is not the expected value of the log of beta. Hence, your estimator is not unbiased. This throws off standard errors, too.
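To make the log-log ("elasticity") interpretation in the last bullet concrete, here is a small simulation sketch I'm adding (plain NumPy, ordinary least squares via `np.polyfit`; the data-generating process and numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data where a 1% increase in x goes with roughly a 0.5% increase in y
x = rng.uniform(1, 100, size=5000)
y = 3.0 * x**0.5 * np.exp(rng.normal(0, 0.1, size=5000))

# Ordinary least squares on the log-log scale
beta, intercept = np.polyfit(np.log(x), np.log(y), deg=1)
print(round(beta, 3))  # ~0.5: a 1% change in x is associated with a ~0.5% change in y
```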
I have a process with binary output. Is there a standard way to test if it is a Bernoulli process? The problem translates to checking if every trial is independent of the previous trials. I have observed some processes where a result "sticks" for a number of trials.
I don't think the Frequentists and Bayesians give different answers to the same questions. I think they are prepared to answer *different questions*. Therefore, I don't think it makes sense to talk much about one side winning, or even to talk about compromise.

Consider all the questions we might want to ask. Many are just impossible questions ("What is the true value of $\theta$?"). It's more useful to consider the subset of these questions that can be answered given various assumptions. The larger subset is the questions that can be answered where you do allow yourself to use priors. Call this set BF. There is a subset of BF, which is the set of questions that do not depend on any prior. Call this second subset F. F is a subset of BF. Define B = BF \ F.

However, we cannot choose which questions to answer. In order to make useful inferences about the world, we sometimes have to answer questions that are in B and that means using a prior. Ideally, given an estimator you would do a thorough analysis. You might use a prior, but it also would be cool if you could prove nice things about your estimator which do not depend on any prior. That doesn't mean you can ditch the prior, maybe the really interesting questions require a prior. Everybody agrees on how to answer the questions in F. The worry is whether the really 'interesting' questions are in F or in B?

An example: a patient walks into the doctor and is either healthy(H) or sick(S). There is a test that we run, which will return positive(+) or negative(-). The test never gives false negatives - i.e. $\mathcal{P}(-|S) = 0$. But it will sometimes give false positives - $\mathcal{P}(+|H) = 0.05$.

We have a piece of card and the testing machine will write + or - on one side of the card. Imagine, if you will, that we have an oracle who somehow knows the truth, and this oracle writes the true state, H or S, on the other side of the card before putting the card into an envelope. As the statistically-trained doctor, what can we say about the card in the envelope before we open the card? The following statements can be made (these are in F above):

* If S on one side of the card, then the other side will be +. $\mathcal{P}(+|S) = 1$
* If H, then the other side will be + with 5% probability, - with 95% probability. $\mathcal{P}(-|H) = 0.95$
* (summarizing the last two points) The probability that the two sides *match* is *at least* 95%. $\mathcal{P}( (+,S) \cup (-,H) ) \geq 0.95$

We don't know what $\mathcal{P}( (+,S) )$ or $\mathcal{P}( (-,H) )$ is. We can't really answer that without some sort of prior for $\mathcal{P}(S)$. But we can make statements about the sum of those two probabilities. This is as far as we can go so far. *Before opening the envelope*, we can make very positive statements about the accuracy of the test. There is (at least) 95% probability that the test result matches the truth.

But what happens when we actually open the card? Given that the test result is positive (or negative), what can we say about whether they are healthy or sick?

If the test is positive (+), there is nothing we can say. Maybe they are healthy, and maybe not. Depending on the current prevalence of the disease ($\mathcal{P}(S)$) it might be the case that most patients who test positive are healthy, or it might be the case that most are sick. We can't put any bounds on this, without first allowing ourselves to put some bounds on $\mathcal{P}(S)$. In this simple example, it's clear that everybody with a negative test result is healthy.
There are no false negatives, and hence every statistician will happily send that patient home. Therefore, *it makes no sense to pay for the advice of a statistician unless the test result has been positive*. The three bullet points above are correct, and quite simple. But they're also useless! The really interesting question, in this admittedly contrived model, is: $$ \mathcal{P}(S|+) $$ and this cannot be answered without $\mathcal{P}(S)$ (i.e. a prior, or at least some bounds on the prior). I don't deny this is perhaps an oversimplified model, but it does demonstrate that if we want to make useful statements about the health of those patients, we must start off with some prior belief about their health.
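To see how a prior on prevalence pins this down, here is a tiny Bayes'-rule sketch I'm adding (the prevalence values are made-up assumptions; the error rates match the example above):

```python
def p_sick_given_positive(prevalence, p_pos_given_healthy=0.05, p_pos_given_sick=1.0):
    """Bayes' rule for P(S | +) given a prior prevalence P(S)."""
    p_pos = p_pos_given_sick * prevalence + p_pos_given_healthy * (1 - prevalence)
    return p_pos_given_sick * prevalence / p_pos

for prev in (0.001, 0.01, 0.1, 0.5):   # hypothetical priors P(S)
    print(prev, round(p_sick_given_positive(prev), 3))
# With P(S)=0.001 most positives are healthy; with P(S)=0.5 almost all are sick.
```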
The Gaussian, or squared exponential covariance is $k\_{SE}(s,t) = \exp \left\{ -\frac{1}{2l} (s - t)^2 \right\}$. It is a common covariance function used in Gaussian processes. The Karhunen-Loeve expansion is an orthonormal decomposition of sample paths of a Gaussian process. If $g(t)$ is a sample path from a Gaussian process with mean 0 and covariance $k(s,t)$, then $g(t) = \sum\_{i=1}^\infty \xi\_i f\_i (t)$ where the eigenfunctions $f\_i(t)$ are deterministic functions determined by $k$ and eigenvalues $\xi\_i$ are the standard normals. My question is, **does there exist a closed form expression for the $f\_i$ corresponding to $k\_{SE}$?** According to [1], closed form expressions for $f\_i$ are known for exponential covariances ($k(s,t) = \exp \left\{ -|s - t| \right\}$), band-limited stationary processes (finite sums of trigonometric functions), and Brownian motion. [1] Huang, S. P. and Quek, S. T. and Phoon, K. K., Convergence study of the truncated Karhunen–Loeve expansion for simulation of stochastic processes, International Journal for Numerical Methods in Engineering (2001), <http://dx.doi.org/10.1002/nme.255>
The eigenfunctions of SE kernel under Gaussian measure can be written using Hermite polynomials (see references below). If instead Lebesgue measure is used, it's more complicated. * C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, the MIT Press, 2006, ISBN 026218253X. <http://www.gaussianprocess.org/gpml> (p. 115) * Zhu, H., Williams, C. K. I., Rohwer, R. J., and Morciniec, M. (1998). Gaussian Regression and Optimal Finite Dimensional Linear Models. In Bishop, C. M., editor, Neural Networks and Machine Learning. Springer-Verlag, Berlin.
I am studying about set cover problem and wondering that which problems in real world can be solved by set cover. I found that IBM used this problem for their anti-virus problem, so there should be many more others that can be solved by set cover.
Set-cover heuristics are used in random testing ("fuzz testing") of programs. Suppose we have a million test cases, and we're going to test a program by picking a test case, randomly modifying ("mutating") it by flipping a few bits, and running the program on the modified test case to see if it crashes. We'd like to do this over and over again. If we do this naively, it will be less effective than it could be: typically many test cases cover basically the same code paths, but a small minority of test cases cover unusual code paths that would be interesting to test more intensively. So, here's one solution that is used in industry. We use a coverage measurement tool to instrument the program and record which lines of code are covered by each of the million test cases. Then, we choose a small subset $S$ of those million test cases that has maximal coverage: every line of code covered by one of the million test cases will be covered by some test case in $S$. $S$ is called a reduced test suite. We then apply random mutation & testing to the reduced test suite $S$. Empirically, this seems to make random testing more effective. The smaller $S$ is, the more effective and efficient testing becomes. So, how do we choose a reduced test suite $S$ that is as small as possible, while still achieving maximal coverage? Answer: that's a set cover problem, so we use standard heuristics/approximation algorithms for the set cover problem. The standard greedy approximation algorithm is typically used for this purpose.
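To make that concrete, here is a minimal sketch of the standard greedy set-cover heuristic applied to test-suite reduction (my own illustration; the test names and covered line numbers are made up):

```python
def greedy_test_suite_reduction(coverage):
    """coverage: dict mapping test-case id -> set of covered line numbers.
    Returns a small subset of test cases covering every coverable line."""
    uncovered = set().union(*coverage.values())
    suite = []
    while uncovered:
        # Greedy rule: pick the test covering the most still-uncovered lines
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break
        suite.append(best)
        uncovered -= coverage[best]
    return suite

# Hypothetical coverage data for five test cases
coverage = {
    "t1": {1, 2, 3, 4},
    "t2": {3, 4, 5},
    "t3": {5, 6},
    "t4": {1, 2},
    "t5": {6, 7, 8},
}
print(greedy_test_suite_reduction(coverage))  # e.g. ['t1', 't5', 't2']
```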
I don't understand where this formula for Mean Squared Error is coming from. How do we arrive at: $$MSE = \frac{1}{m}||y' - y||\_2^2$$ from: $$MSE = \frac{1}{m}\cdot\sum\_i(y'\_{i} - y\_{i})^2$$ (The source is deeplearningbook)
We have $$\|x\|\_2=\sqrt{\sum\_{i=1}^n x\_i^2}$$ Hence $$\|x\|\_2^2=\sum\_{i=1}^n x\_i^2$$ Now let $x=y'-y$ and you obtain your formula.
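Just to sanity-check the identity numerically, here is a small NumPy sketch I'm adding (random vectors, arbitrary size):

```python
import numpy as np

rng = np.random.default_rng(1)
y_hat, y = rng.normal(size=100), rng.normal(size=100)

mse_norm = np.linalg.norm(y_hat - y) ** 2 / len(y)   # (1/m) * ||y' - y||_2^2
mse_sum  = np.mean((y_hat - y) ** 2)                 # (1/m) * sum_i (y'_i - y_i)^2
print(np.isclose(mse_norm, mse_sum))                 # True
```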
For a given input into the input nodes, there are multiple correct values for the output nodes. In the training set, there are times when the inputs result in a certain output, and other times when the inputs result in a completely different (but equally valid) output. Will a neural network still be able to "figure out" a pattern? From what I know about backpropagation, it seems like the different right answers would prevent it from functioning properly. If that's so, are there any solutions? I don't need the neural network to predict all possible correct solutions, but I do need it to output *a* correct solution.
A neural network can in principle deal with this. Actually, I believe they are among the best models for this task. The question is whether it is modeled correctly.

Say you are looking at a regression problem and minimize the sum of squares, i.e. $$L(\theta) = \sum\_i (\hat{y}\_i - y\_i)^2.$$ Here, $L$ is the loss function we minimize with respect to the parameters $\theta$ of our neural net $f$, which we use to find an approximation $\hat{y}\_i = f(x\_i; \theta)$ of $y\_i$. What will this loss function result in for ambiguous data like $(x\_1, y\_1), (x\_1, y\_2)$ with $y\_1 \neq y\_2$? It will make the function $f$ predict the mean of both. This is a property which not only holds for neural nets, but also for linear regression, random forests, gradient boosting machines etc.--basically every model that is trained with a squared error.

It now makes sense to investigate where the squared error comes from, so that we can adapt it. I have explained [elsewhere](https://stats.stackexchange.com/questions/9547/measuring-quantization-error-for-clustering-squared-or-not/9560#9560) that the squared error stems from the log-likelihood of a Gaussian assumption: $p(y|x) = \mathcal{N}(f(x; \theta), \sqrt{1 \over 2})$. Gaussians are unimodal, which means that this assumption is the core error in the model. If you have ambiguous outputs, you need an output model with many modes. The most commonly used approach is [mixture density networks](http://eprints.aston.ac.uk/373/1/NCRG_94_004.pdf), which assume that the output $p(y|x)$ is actually a mixture of Gaussians, e.g. $$p(y|x) = \sum\_j \pi\_j(x) \mathcal{N}(y|\mu\_j(x), \Sigma\_j(x)).$$ Here, $\mu\_j(x), \Sigma\_j(x)$ and $\pi\_j(x)$ are all distinct output units of the neural nets. Training is done via differentiating the log-likelihood and back-propagation.

There are many other ways, though:

* This idea is applicable also to GBMs and RFs.
* A completely different strategy would be to estimate a complicated joint likelihood $p(x, y)$ which allows conditioning on $x$, yielding a complex $p(y|x)$. Efficient inference/estimation will be an issue here.
* A quite different example is certain Bayesian approaches which give rise to multimodal output distributions as well. Efficient inference/estimation is a problem here as well.

---
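A tiny sketch I'm adding to illustrate the point above about the squared error pushing the prediction to the mean of ambiguous targets (the numbers are invented):

```python
import numpy as np

# The same input appears with two equally valid targets, 0 and 10
y_ambiguous = np.array([0.0, 10.0])

# Minimise sum_i (c - y_i)^2 over a single constant prediction c
candidates = np.linspace(-5, 15, 2001)
losses = [(np.sum((c - y_ambiguous) ** 2), c) for c in candidates]
print(min(losses)[1])  # ~5.0: the minimiser is the mean, not either valid answer
```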
The problem that I am dealing with is predicting time series values. I am looking at one time series at a time and based on for example 15% of the input data, I would like to predict its future values. So far I have come across two models: * [LSTM](http://deeplearning.net/tutorial/lstm.html) (long short term memory; a class of recurrent neural networks) * ARIMA I have tried both and read some articles on them. Now I am trying to get a better sense on how to compare the two. What I have found so far: 1. LSTM works better if we are dealing with huge amount of data and enough training data is available, while ARIMA is better for smaller datasets (is this correct?) 2. ARIMA requires a series of parameters `(p,q,d)` which must be calculated based on data, while LSTM does not require setting such parameters. However, there are some hyperparameters we need to tune for LSTM. 3. **EDIT:** One major difference between the two that I noticed while reading a great article [here](http://www.analyticsvidhya.com/blog/2016/02/time-series-forecasting-codes-python/), is that ARIMA could only perform well on stationary time series (where there is no seasonality, trend and etc.) and you need to take care of that if want to use ARIMA Other than the above-mentioned properties, I could not find any other points or facts which could help me toward selecting the best model. I would be really grateful if someone could help me finding articles, papers or other stuff (had no luck so far, only some general opinions here and there and nothing based on experiments.) I have to mention that originally I am dealing with streaming data, however for now I am using [NAB datasets](https://github.com/numenta/NAB) which includes 50 datasets with the maximum size of 20k data points.
Adding to @AN6U5's response. From a purely theoretical perspective, this [paper](https://link.springer.com/chapter/10.1007/11840817_66) has shown that RNNs are universal approximators. I haven't read the paper in detail, so I don't know if the proof can be applied to LSTMs as well, but I suspect so. The biggest problem with RNNs in general (including LSTMs) is that they are hard to train due to the exploding and vanishing gradient problems. The practical limit for LSTMs seems to be around ~200 steps with standard gradient descent and random initialization. And as mentioned, in general, for any deep learning model to work well you need a lot of data and heaps of tuning. An ARIMA model is more restricted. If your underlying system is too complex, then it is simply impossible to get a good fit. But on the other hand, if your underlying model is simple enough, it is much more efficient than a deep learning approach.
We all know that **Principal Component Analysis is executed on a Covariance/Correlation matrix**, but what if we have a very high dimensional data, assuming 75 features and 157849 rows? How does PCA tackle this? * Does it tackle this problem in the same way as it does for correlated datasets? * Will my explained variance be equally distributed among the 75 features? * I came across **BARTLETT'S Test and KMO Test** which helps us: + in identifying the wether there is any correlation present or not, and + the proportion of variance that might be a common variance among the variables respectively. I can certainly leverage these two tests in making a controlled decision, but I am still looking for an answer towards: * ***How does PCA behave when there is no correlation in the dataset?*** I want to get an interpretation of this in a way that I could explain it to my non-technical brother. Practical example using Python: ``` s = pd.Series(data=[1,1,1],index=['a','b','c']) diag_data = np.diag(s) df = pd.DataFrame(diag_data, index=s.index, columns=s.index) # Normalizing df = (df.subtract(df.mean())).divide(df.std()) ``` Which looks like: ``` a b c a 1.154701 -0.577350 -0.577350 b -0.577350 1.154701 -0.577350 c -0.577350 -0.577350 1.154701 ``` Covariance Matrix looks like this: ``` Cor = np.corrcoef(df.T) Cor array([[ 1. , -0.5, -0.5], [-0.5, 1. , -0.5], [-0.5, -0.5, 1. ]]) ``` Now, calculating PCA Projections: ``` eigen_vals,eigen_vects = np.linalg.eig(Cor) projections = pd.DataFrame(np.dot(df,eigen_vects)) ``` And projections are: ``` 0 1 2 0 1.414214 -2.012134e-17 -0.102484 1 -0.707107 -2.421659e-16 -1.170283 2 -0.707107 -1.989771e-16 1.272767 ``` The explained Ratio seems to be equally distributed among two features: ``` [0.5000000000000001, -9.680089716721685e-17, 0.5000000000000001] ``` Now, when I tried calculating the Q-Residual error in order to find the reconstruction error, I got zero for all the features: ``` a 0.0 b 0.0 c 0.0 dtype: float64 ``` This would indicate that PCA on a non-correlated dataset like identity matrix gives us the projections which are very close to the original data-points. And the same results are obtained with the **DIAGONAL MATRIX**. If the reconstruction error is very low, this would suggest that, in a single pipeline, we can fix the PCA method to execute and even if the dataset is not carrying much correlation we will get the same results after PCA transformation, but for the dataset which has high correlated features, we can prevent our curse of dimensionality. **Public views on this?**
The components are the eigenvectors of the covariance matrix. If the covariance matrix is diagonal, then the features are already eigenvectors. So PCA generally will return the original features (up to scaling), ordered in decreasing variance. If you have a degenerate covariance matrix where two or more features have the same variance, however, a poorly designed algorithm that returns linear combinations of those features would technically satisfy the definition of PCA as generally given.
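A small NumPy sketch I'm adding to illustrate this (the variances are arbitrary):

```python
import numpy as np

# Diagonal covariance: three uncorrelated features with variances 4, 2, 1
cov = np.diag([4.0, 2.0, 1.0])
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns eigenvalues in ascending order

print(eigvals[::-1])     # [4. 2. 1.]  -> explained variance just mirrors the input variances
print(eigvecs[:, ::-1])  # columns are (signed) standard basis vectors,
                         # so each "component" is simply one of the original features
```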
One of the things which [makes econometrics unique](https://economics.stackexchange.com/q/159/21) is the use of the Generalized Method of Moments technique. What types of problems make GMM more appropriate than other estimation techniques? What does using GMM buy you in terms of efficiency or reduced bias or more specific parameter estimation? Conversely, what do you lose by using GMM over MLE, etc.?
GMM is practically the only estimation method which you can use, when you run into endogeneity problems. Since these are more or less unique to econometrics, this explains GMM atraction. Note that this applies if you subsume IV methods into GMM, which is perfectly sensible thing to do.
I have passenger count data as Y. The data look like this ![enter image description here](https://i.stack.imgur.com/zRmgy.png); many of the values are 1 (about 18%). Does it make sense to take the log of it and use that as the dependent variable in a generalized linear model with a Poisson distribution ![logY](https://i.stack.imgur.com/3h5QU.png)? I know the link function is log for the Poisson distribution. **Is there a problem with taking a double log of Y?** The issue for me is that my Log(Y) model has a much better goodness-of-fit statistic than my Y model. I tried some Poisson and negative binomial models and they are not fitting very well. What other strategies may I try to model these data?
Your data appear to be zero-inflated (maybe more than 70% of responses were zeros?). If both Poisson regression and negative binomial regression fit badly, you should try zero-inflated Poisson or even zero-inflated negative binomial models. These mixture models have been shown to perform better than using a transformation.
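To illustrate what "zero-inflated" means in practice, here is a small simulation sketch (my own addition, with made-up parameters) comparing the zero fraction of a plain Poisson with a zero-inflated one:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, p_zero = 100_000, 2.0, 0.6   # hypothetical: 60% structural zeros

poisson = rng.poisson(lam, n)
zip_counts = np.where(rng.random(n) < p_zero, 0, rng.poisson(lam, n))

print(np.mean(poisson == 0))     # ~ exp(-2) ~ 0.14: far too few zeros for zero-heavy data
print(np.mean(zip_counts == 0))  # ~ 0.6 + 0.4*exp(-2) ~ 0.65: matches a zero-inflated sample
```

In practice one would then fit such a model with a zero-inflated count regression routine (e.g. statsmodels' `ZeroInflatedPoisson`, if I recall the class name correctly) rather than transforming the response.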
When talking about turing machines, it can be easily shown that starting from two machines accepting $L$ and its complement $L^c$, one can build a machine which can fully decide if a word is inside $L$ or not. But what about PDAs? starting from two different PDAs, one accepting $L$ and one accepting $L^c$ can we build another PDA, which accepts $L$, and only crashes or halts in non-final states (rejects) when $w\notin L$?
Yes, it is possible to do so. Although a given PDA may have $\varepsilon$ loops that can induce infinite computation, we can sidestep this by converting the PDA to a CFG, then back to a PDA (using the standard methods). The second PDA is guaranteed to halt on all inputs (this is not too hard to see if you know the conversion method - essentially you guarantee that either a non-terminal from the CFG is added to the stack, or a terminal is read from the input at each broad step and the nondeterminism takes care of the rest, or equivalently, CFGs can always be parsed). So then we can take the PDA for $L$, apply this transition, and we get a machine that always halts, and only halts in an accept state if the input is in $L$.
I am interested in tools/techniques that can be used for analysis of [streaming data in "real-time"](http://en.wikipedia.org/wiki/Real-time_data)\*, where latency is an issue. The most common example of this is probably price data from a financial market, although it also occurs in other fields (e.g. finding trends on Twitter or in Google searches). In my experience, the most common software category for this is ["**complex event processing**"](http://en.wikipedia.org/wiki/Complex_event_processing). This includes commercial software such as [Streambase](http://www.streambase.com/index.htm) and [Aleri](http://www.sybase.com/products/financialservicessolutions/aleristreamingplatform) or open-source ones such as [Esper](http://esper.codehaus.org/) or [Telegraph](http://telegraph.cs.berkeley.edu/) (which was the basis for [Truviso](http://www.truviso.com/)). Many existing models are not suited to this kind of analysis because they're too computationally expensive. Are any models\*\* specifically designed to deal with real-time data? What tools can be used for this? *\* By "real-time", I mean "analysis on data *as it is created*". So I do not mean "data that has a time-based relevance" (as in [this talk by Hilary Mason](http://www.hilarymason.com/blog/conference-web2-expo-sf/)).* \*\* By "model", I mean a mathematical abstraction that describe the behavior of an object of study (e.g. in terms of random variables and their associated probability distributions), either for description or forecasting. This could be a machine learning or statistical model.
This area roughly falls into two categories. The first concerns stream processing and querying issues and associated models and algorithms. The second is efficient algorithms and models for learning from data streams (or data stream mining). It's my impression that the CEP industry is connected to the first area. For example, StreamBase originated from the [Aurora](http://www.cs.brown.edu/research/aurora/) project at Brown/Brandeis/MIT. A similar project was Widom's [STREAM](http://infolab.stanford.edu/stream/) at Stanford. Reviewing the publications at either of those projects' sites should help exploring the area. A nice paper summarizing the research issues (in 2002) from the first area is *[Models and issues in data stream systems](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.106.9846)* by Babcock et al. In stream mining, I'd recommend starting with *[Mining Data Streams: A Review](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.80.798)* by Gaber et al. BTW, I'm not sure exactly what you're interested in as far as specific models. If it's stream mining and classification in particular, the [VFDT](http://en.wikipedia.org/wiki/Incremental_decision_tree#VFDT) is a popular choice. The two review papers (linked above) point to many other models and it's very contextual.
Let A, B, C, and D be four random variables such that A and B are independent, and C and D are dependent. It is unknown whether A and C are independent nor whether B and D are independent. Let E and F represent the products E = AC and F = BD. Are E and F necessarily independent? If not, say we add the knowledge that A and C are independent and B and D are independent; now are E and F necessarily independent?
First, define *dependent* to mean *not independent*, that is, the joint distribution is *not* the product of the marginal distributions. Note also that all constant variables are independent of everything. Though this may look like cheating, if $A = B = 1$ and $C = D \in \{0,1\}$, with the constraint that their common distribution is not degenerate, then $A$ and $B$ are independent, $C$ and $D$ are not, and since $E = C$ and $F = D$, then $E$ and $F$ are not independent either. Furthermore, $A$ and $C$ are independent and $B$ and $D$ are independent by degeneracy of the distributions of $A$ and $B$.
I'm seeing this image passed around a lot. I have a gut-feeling that the information provided this way is somehow incomplete or even erroneous, but I'm not well versed enough in statistics to respond. It makes me think of this [xkcd comic](http://imgs.xkcd.com/comics/conditional_risk.png), that even with solid historical data, certain situations can change how things can be predicted. [![What it do baby](https://i.stack.imgur.com/broEo.png)](https://i.stack.imgur.com/broEo.png) Is this chart as presented useful for accurately showing what the threat level from refugees is? Is there necessary statistical context that makes this chart more or less useful? --- Note: Try to keep it in layman's terms :)
Imagine your job is to forecast the number of Americans that will die from various causes next year. A reasonable place to start your analysis might be the [National Vital Statistics Data](https://www.cdc.gov/nchs/data/nvsr/nvsr65/nvsr65_04.pdf) final death data for 2014. The assumption is that 2017 might look roughly like 2014. You'll find that approximately 2,626,000 Americans died in 2014: * 614,000 died of heart disease. * 592,000 died of cancer. * 147,000 from respiratory disease. * 136,000 from accidents. * ... * 42,773 from suicide. * 42,032 from accidental poisoning (subset of accidents category). * 15,809 from homicide. * 0 from terrorism under the [CDC, NCHS classification](https://www.cdc.gov/nchs/icd/terrorism_code.htm). * [18 from terrorism using a broader definition (University of Maryland Global Terrorism Datbase)](https://www.start.umd.edu/pubs/START_AmericanTerrorismDeaths_FactSheet_Oct2015.pdf) See link for definitions. + By my quick count, 0 of the perpetrators of these 2014 attacks were born outside the United States. + Note that anecdote is *not* the same as data, but I've assembled links to the underlying news stories here: [1](http://www.cnn.com/2015/08/31/us/kansas-jewish-center-gunman-guilty/), [2](https://en.wikipedia.org/wiki/Ali_Muhammad_Brown), [3](http://www.cnn.com/2014/06/06/justice/georgia-courthouse-shooting/), [4](https://en.wikipedia.org/wiki/2014_Las_Vegas_shootings), [5](http://www.latimes.com/nation/nationnow/la-na-nn-eric-frein-charged-terrorism-20141113-story.html), [6](https://en.wikipedia.org/wiki/2014_Queens_hatchet_attack), [7](https://www.washingtonpost.com/news/post-nation/wp/2014/12/01/police-austin-shooter-belonged-to-an-ultra-conservative-christian-hate-group/?utm_term=.3c6f3a63e5f2), [8](http://www.dailymail.co.uk/news/article-3981660/Man-charged-plotting-terror-attack-appear-court.html), and [9](https://www.nytimes.com/2015/01/03/nyregion/ismaaiyl-brinsleys-many-identities-fueled-life-of-wrong-turns.html). Terrorist incidents in the U.S. are quite rare, so estimating off a single year is going to be problematic. Looking at the time-series, what you see is that the vast majority of U.S. terrorism fatalities came during the 9/11 attacks (See [this report](https://www.start.umd.edu/pubs/START_AmericanTerrorismDeaths_FactSheet_Oct2015.pdf) from the National Consortium for the Study of Terrorism and Responses to Terrorism.) I've copied their Figure 1 below: [![ National Consortium for the Study of Terrorism and Responses to Terrorism, "American Deaths in Terrorist Attacks Fact Sheet"](https://i.stack.imgur.com/VTfkd.png)](https://i.stack.imgur.com/VTfkd.png) Immediately you see that you have an outlier, rare events problem. A single outlier is driving the overall number. If you're trying to forecast deaths from terrorism, there are numerous issues: * What counts as terrorism? + Terrorism can be defined broadly or narrowly. * Is the process [stationary](https://en.wikipedia.org/wiki/Stationary_process)? If we take a time-series average, what are we estimating? * Are conditions changing? What does a forecast conditional on current conditions look like? * If the vast majority of deaths come from a single outlier, how do you reasonably model that? + We can get more data in a sense by looking more broadly at other countries and going back further in time but then there are questions as to whether any of those patterns apply in today's world. 
IMHO, the FT graphic picked an overly narrow definition (the 9/11 attacks don't show up in the graphic because the attackers weren't refugees). There are legitimate issues with the chart, but the FT's broader point is correct that terrorism in the U.S. is quite rare. Your chance of being killed by a foreign born terrorist in the United States is close to zero. Life expectancy in the U.S. is about 78.7 years. What has moved life expectancy numbers down in the past has been events like the [1918 Spanish flu pandemic](https://en.wikipedia.org/wiki/1918_flu_pandemic) or WWII. Additional risks to life expectancy now might include obesity and opioid abuse. If you're trying to create a detailed estimate of terrorism risk, there are huge statistical issues, but to understand the big picture requires not so much statistics as understanding orders of magnitude and basic quantitative literacy. ### A more reasonable concern... (perhaps veering off topic) Looking back at history, the way *huge* numbers of people get killed is through disease, genocide, and war. A more reasonable concern might be that some rare, terrorist event triggers something catastrophic (eg. how the [assassination of Archduke Ferdinand](https://en.wikipedia.org/wiki/Assassination_of_Archduke_Franz_Ferdinand_of_Austria) help set off WWI.) Or one could worry about nuclear weapons in the hands of someone crazy. Thinking about extremely rare but catastrophic events is incredibly difficult. It's a multidisciplinary pursuit and goes far outside of statistics. Perhaps the only statistical point here is that it's hard to estimate the probability and effects of some event which hasn't happened? (Except to say that it can't be that common or it would have happened already.)
For example, a valid number would be 6165156 and an invalid number would be 1566515. I have tried many times to construct a finite state machine for this with no success, which leads me to believe the language is not regular. However, I am unsure how to formally prove this if that is indeed the case. I tried applying the pumping lemma but I am not completely sure how to apply it to this particular language. Any help is appreciated!
For most interesting optimisations, I think this is implied by [Rice's theorem](https://en.wikipedia.org/wiki/Rice%27s_theorem). For real numbers, [Richardson's theorem](https://en.wikipedia.org/wiki/Richardson%27s_theorem) is also relevant here.
I recently discovered a [new R package](http://www.r-bloggers.com/analyze-linkedin-with-r/) for connecting to the LinkedIn API. Unfortunately the LinkedIn API seems pretty limited to begin with; for example, you can only get basic data on companies, and this is detached from data on individuals. I'd like to get data on all employees of a given company, which you can do [manually on the site](https://www.linkedin.com/vsearch/p?keywords=stack%20exchange&f_CC=974353&sb=People%20who%20work%20at%20Stack%20Exchange&trk=tyah&trkInfo=clickedVertical%3Asuggestion%2Cidx%3A1-1-1%2CtarId%3A1431584515143%2Ctas%3Astack%20exchange) but is not possible through the API. [import.io](https://import.io/) would be perfect if it [recognised the LinkedIn pagination](http://blog.import.io/post/tips-tricks) (see end of page). Does anyone know any web scraping tools or techniques applicable to the current format of the LinkedIn site, or ways of bending the API to carry out more flexible analysis? Preferably in R or web based, but certainly open to other approaches.
[Beautiful Soup](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) is specifically designed for web crawling and scraping, but it is written for Python, not R.
I am currently using an SVM with a linear kernel to classify my data. There is no error on the training set. I tried several values for the parameter $C$ ($10^{-5}, \dots, 10^2$). This did not change the error on the test set. Now I wonder: is this an error *caused by the ruby bindings* for `libsvm` I am using ([rb-libsvm](https://github.com/febeling/rb-libsvm)) or is this *theoretically explainable*? Should the parameter $C$ always change the performance of the classifier?
C is essentially a regularisation parameter, which controls the trade-off between achieving a low error on the training data and minimising the norm of the weights. It is analogous to the ridge parameter in ridge regression (in fact in practice there is little difference in performance or theory between linear SVMs and ridge regression, so I generally use the latter - or kernel ridge regression if there are more attributes than observations). Tuning C correctly is a vital step in best practice in the use of SVMs, as structural risk minimisation (the key principle behind the basic approach) is partly implemented via the tuning of C. The parameter C enforces an upper bound on the norm of the weights, which means that there is a nested set of hypothesis classes indexed by C. As we increase C, we increase the complexity of the hypothesis class (if we increase C slightly, we can still form all of the linear models that we could before and also some that we couldn't before we increased the upper bound on the allowable norm of the weights). So as well as implementing SRM via maximum margin classification, it is also implemented by limiting the complexity of the hypothesis class via controlling C. Sadly the theory for determining how to set C is not very well developed at the moment, so most people tend to use cross-validation (if they do anything).
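To make the cross-validation suggestion concrete, here is a minimal sketch (my own, not part of the original answer) of tuning C on a log-spaced grid with scikit-learn; the toy dataset and grid bounds are placeholder assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# Toy data standing in for the real problem (placeholder).
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Search C on a log scale, as in the question (1e-5 ... 1e2).
grid = GridSearchCV(
    LinearSVC(max_iter=10000),
    param_grid={"C": np.logspace(-5, 2, 8)},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

In practice you would replace the synthetic data with your own features and labels and refine the grid around the best value found.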
What is the name of the operator that takes a categorical vector and transforms it to the binary representation using one-hot encoding? I am wondering since I am writing a scientific paper and need a proper name for that.
Statisticians refer to one-hot encoding as [dummy coding](https://en.wikipedia.org/wiki/Dummy_variable_(statistics)). As others suggested (including *Scortchi* in the comments), this is not an exact synonym, but it is the term that would usually be used for 0-1 encoded categorical variables. See also: ["Dummy variable" versus "indicator variable" for nominal/categorical data](https://stats.stackexchange.com/questions/125608/dummy-variable-versus-indicator-variable-for-nominal-categorical-data)
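As a small illustration of the terminology (my own example, with a made-up column), pandas exposes both conventions through the `drop_first` flag of `get_dummies`: full one-hot keeps one 0/1 column per level, while statistician-style dummy coding drops a reference level.

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# One-hot encoding: one 0/1 column per level.
print(pd.get_dummies(df["color"]))

# Dummy coding in the statistician's sense: drop one reference level.
print(pd.get_dummies(df["color"], drop_first=True))
```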
I have two large sets of integers $A$ and $B$. Each set has about a million entries, and each entry is a positive integer that is at most 10 digits long. What is the best algorithm to compute $A\setminus B$ and $B\setminus A$? In other words, how can I efficiently compute the list of entries of $A$ that are not in $B$ and vice versa? What would be the best data structure to represent these two sets, to make these operations efficient? The best approach I can come up with is storing these two sets as sorted lists, and compare every element of $A$ against every element of $B$, in a linear fashion. Can we do better?
A linear scan is the best that I know how to do, if the sets are represented as sorted linked lists. The running time is $O(|A| + |B|)$. Note that you don't need to compare every element of $A$ against every element of $B$, pairwise. That would lead to a runtime of $O(|A| \times |B|)$, which is much worse. Instead, to compute the symmetric difference of these two sets, you can use a technique similar to the "merge" operation in mergesort, suitably modified to omit values that are common to both sets. In more detail, you can build a recursive algorithm like the following to compute the symmetric difference $(A \setminus B) \cup (B \setminus A)$, assuming $A$ and $B$ are represented as linked lists with their values in sorted order:

```
difference(A, B):
    if len(B)=0: return A                  # return the leftover list
    if len(A)=0: return B                  # return the leftover list
    if A[0] < B[0]:
        return [A[0]] + difference(A[1:], B)
    elsif A[0] = B[0]:
        return difference(A[1:], B[1:])    # omit the common element
    else:
        return [B[0]] + difference(A, B[1:])
```

I've represented this in pseudo-Python. If you don't read Python, `A[0]` is the head of the linked list `A`, `A[1:]` is the rest of the list, and `+` represents concatenation of lists. To get $A \setminus B$ and $B \setminus A$ separately, keep the elements coming from the `A` branch and the `B` branch in two separate output lists instead of one. For efficiency reasons, if you're working in Python, you probably wouldn't want to implement it exactly as above -- for instance, it might be better to use generators, to avoid building up many temporary lists -- but I wanted to show you the ideas in the simplest possible form. The purpose of this pseudo-code is just to illustrate the algorithm, not propose a concrete implementation. I don't think it's possible to do any better, if your sets are represented as sorted lists and you want the output to be provided as a sorted list. You fundamentally have to look at every element of $A$ and $B$. Informal sketch of justification: If there is any element that you haven't looked at, you can't output it, so the only case where you can omit looking at an element is if you know it is present in both $A$ and $B$, but how could you know that it is present if you haven't looked at its value?
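For completeness, here is a runnable sketch of the same merge idea (my adaptation, not the answer's pseudo-code), written iteratively over sorted Python lists and returning $A \setminus B$ and $B \setminus A$ separately in a single $O(|A|+|B|)$ pass.

```python
def differences(A, B):
    """A and B are sorted lists; returns (A \\ B, B \\ A), both sorted."""
    a_only, b_only = [], []
    i = j = 0
    while i < len(A) and j < len(B):
        if A[i] < B[j]:
            a_only.append(A[i]); i += 1
        elif A[i] > B[j]:
            b_only.append(B[j]); j += 1
        else:                      # common element: skip it in both lists
            i += 1; j += 1
    a_only.extend(A[i:])           # leftovers are unique to their own set
    b_only.extend(B[j:])
    return a_only, b_only

print(differences([1, 3, 5, 7], [3, 4, 7, 9]))   # ([1, 5], [4, 9])
```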
Using this grammar, over the alphabet $\Sigma=\{a\}$ $$ S \rightarrow a \\ S\rightarrow CD \\ C\rightarrow ACB \\ C\rightarrow AB \\ AB\rightarrow aBA \\ Aa\rightarrow aA \\ Ba\rightarrow aB \\ AD\rightarrow Da \\ BD\rightarrow Ea \\ BE\rightarrow Ea \\ E\rightarrow a \\ $$ Im trying to show that the working string $aaaaaaaaaBBBAAAD$ or $a^{n^2} B^nA^nD$ generates the word $a^{(n+1)^2}$
See the ACM Digital Library and IEEE Xplore. These are the top sources in my opinion. Look as well in ScienceDirect (Elsevier) and Springer (for theoretical computer science, I believe these two libraries are better). Usually, googling your research problem will lead you to papers. The journals in which these papers are published are what you are looking for. Of course, use the references and citations of the papers you read. In the long term, you will narrow your focus to the journals of your field.
What is the difference between a Convolutional Neural Network (CNN) and an ordinary Neural Network (NN)? What does convolution mean in this context?
### Starting from the Neural Network perspective: I would say that the base Neural Network has all neurons interconnected between layers. The convolutional version simplifies this model using two hypotheses: * meaningful features have a given size in the image. * features are shift equivariant (shifted input leads to similarly shifted output), and may occur anywhere in the image. The first assumption is expressed by setting to zero the weights leading to a hidden neuron, except for a region of interest/patch from the input. Shift equivariance is obtained by sharing the same weights across all the patches. In order to capture features anywhere in the image, it is simplest to pave the input with patches slid by only one pixel. Those simplifications drastically reduce the number of parameters and lead to much simpler computations which 'happen' to take the form of a convolution, hence the C in CNN. Note 1: the fixed feature size hypothesis is alleviated by the use of multiresolution and/or by using separate networks with different patch sizes. Note 2: equivariance is usually not as useful as invariance, so the latter is often emulated with additional pooling layers. ### Alternative approach Before deep learning, a popular problem-solving method was to extract features and feed them to a classifier. For images, the features were often extracted using expertly chosen filters such as Gabor filters/wavelets. One can view a CNN as a parameterized filtering function, where the parameters are trained using the methods for Neural Networks.
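A toy numerical illustration of the weight sharing described above (my own sketch, with made-up sizes): a fully connected layer from 10 inputs to 8 hidden units needs 80 weights, while sliding one 3-tap filter over the same input produces 8 outputs from only 3 parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10)        # a tiny 1-D "image"
w = rng.standard_normal(3)         # 3 shared weights (the filter)

# Slide the same filter over every patch of the input: this is the convolution.
feature_map = np.array([x[i:i + 3] @ w for i in range(len(x) - 2)])
print(feature_map.shape)           # (8,) outputs produced by only 3 parameters
```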
This statement of the pumping lemma from Wikipedia. > > Let $L$ be a regular language. Then there exists an integer $p \ge 1$ (depending only on $L$) such that every string $w$ in $L$ of length at least $p$ ($p$ is called the "pumping length") can be written as $w = x y z$ (i.e., $w$ can be divided into three substrings), satisfying the following conditions: > > > 1. $\lvert y \rvert \ge 1$ > 2. $\lvert x y \rvert \le p$ and > 3. for all $i \ge 0$, $x y^i z \in L$. > > $y$ is the substring that can be pumped (removed or repeated any number of times, and the resulting string is always in $L$). > > > What confuses me about the definition of pumping lemma are two requirements: $\lvert y \rvert \ge 1$ and $i \ge 0$, $x y^i z$. The way I read it, that we are required to have $y$ length be equal to one or greater, and at the same time, we can completely skip it, since $i \ge 0$, i.e. effectively $\lvert y \rvert = 0 $. Intuitively, it makes sense that we should be able to skip $y$ and still have string be in $L$.
Any finite state automaton that accepts an infinite number of words will necessarily have a loop in it. One such loop may go from state $q$ to state $q$ consuming word $y$ – that is, $y$ is the word based on the symbols on the transitions going from $q$ back to $q$. The pumping lemma says, more or less, that you can get to such a state $q$ from the initial state by consuming the word $x$ and that you can get to the final state from state $q$ by consuming the word $z$. In the middle you can go through the loop as many times as you like. Thus making the complete word $xy^iz$, where $i$ is the number of times you chose to go through the loop.
Forgive the naïveté that will be obvious in the way I ask this question as well as the fact that I'm asking it. Mathematicians typically use $\exp$ as it's the simplest/nicest base in theory (due to calculus). But computers seem to do everything in binary, so is it faster on a machine to compute `2**x` than `Math::exp(x)`?
If by `2**x` you mean $2^x$, then yes, at least when $x$ is a non-negative integer. We can use the left-shift operator `<<`, i.e. we compute `1 << x`. This is lightning-fast as it is a primitive machine instruction in every processor I know of. This cannot be done with any base other than 2. Moreover, integer exponentiation will always be faster than real exponentiation, as floating point numbers take longer to multiply.
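A quick illustration of the point (my own snippet; it assumes a non-negative integer exponent for the shift):

```python
import math

x = 10
print(1 << x)              # 1024: a single shift for an integer power of two
print(2 ** x)              # the same value via general integer exponentiation
print(math.ldexp(1.0, x))  # 1024.0: scales a float by 2**x in one step
print(math.exp(x))         # e**x, a genuine floating-point computation
```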
I'm learning about GPs, and one thing I don't quite understand is how the posterior works. Consider this figure: [![enter image description here](https://i.stack.imgur.com/bFci1.png)](https://i.stack.imgur.com/bFci1.png) Rasmussen and Williams say: > > Graphically in Figure 2.2 you may think of generating functions from the prior, and rejecting the ones that disagree with the observations... Fortunately, in probabilistic terms this operation is extremely simple, corresponding to conditioning the joint Gaussian prior distribution on the observations. > > > To formalize a bit, given this joint distribution, $$ \begin{bmatrix} \mathbf{f}\_\* \\ \mathbf{f} \end{bmatrix} \sim \mathcal{N} \Bigg( \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \end{bmatrix}, \begin{bmatrix} K(X\_\*, X\_\*) & K(X\_\*, X) \\ K(X, X\_\*) & K(X, X) \end{bmatrix} \Bigg) $$ the conditional distribution is $$ \begin{align} \mathbf{f}\_{\*} \mid \mathbf{f} \sim \mathcal{N}(&K(X\_\*, X) K(X, X)^{-1} \mathbf{f},\\ &K(X\_\*, X\_\*) - K(X\_\*, X) K(X, X)^{-1} K(X, X\_\*)) \end{align} $$ What I don't understand is how samples from this conditional distribution always "agree" with the observations? Aren't the samples $\mathbf{f}\_\*$ still instances of Gaussian random variables?
To be slightly more explicit about what I think your question is: Yes, samples from the posterior are Gaussian everywhere, including exactly at the previously-observed points. But, in this "noise-free" setting, the variance at those points is 0 – so a Gaussian with variance 0 is always going to be exactly its mean. It's easiest to see this in the case where we condition on only one point, $X = X\_\*$, in which case the conditional variance becomes $$K(X, X) - K(X, X) K(X, X)^{-1} K(X, X) = 0,$$ and the conditional mean is $$K(X, X) K(X, X)^{-1} \mathbf{f} = \mathbf{f}.$$
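A small numerical check of this (my own sketch; the RBF kernel, the jitter term and the chosen points are arbitrary assumptions): computing the conditional mean and covariance above and reading off the variance at the training inputs gives essentially zero, which is why every posterior sample passes exactly through the observations.

```python
import numpy as np

def k(a, b, ell=1.0):
    # Squared-exponential (RBF) kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

X = np.array([-1.0, 0.5, 2.0])           # observed inputs
f = np.array([0.3, -0.8, 1.1])           # observed (noise-free) values
Xs = np.linspace(-3, 3, 121)             # test inputs

Kxx_inv = np.linalg.inv(k(X, X) + 1e-10 * np.eye(3))   # tiny jitter for stability
mean = k(Xs, X) @ Kxx_inv @ f
cov = k(Xs, Xs) - k(Xs, X) @ Kxx_inv @ k(X, Xs)

idx = [int(np.argmin(np.abs(Xs - xi))) for xi in X]
print(np.diag(cov)[idx])                 # ~0: samples are pinned to the data
print(mean[idx], f)                      # posterior mean reproduces the data
```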
I have one dummy variable, $D$, which equals 1 if the subject received treatment and $0$ otherwise. My outcome of interest is $Y$. For example, $D$ tells me whether the subject took the drug or a placebo and $Y$ is a continuous variable measuring pain. I want to discover whether taking the drug reduces pain. I have other variables that measure some features of the subjects, let's call them $X\_1$ and $X\_2$. For example, $X\_1$ is the age of the subject and $X\_2$ is the amount of physical activity the subject does each day. By a t-test I discover that the mean of $X\_1$ is different between the treated and not treated group, and the mean of $X\_2$ is different between the two groups as well. So I cannot use a naive estimator, and I understand that. It may be that group 1 experience less pain because the subjects in that group are younger, not because of my drug. But if I write: $$Y = \beta\_0 + \beta\_1 D + \beta\_2 X\_1 + \beta\_3 X\_2$$ and run an OLS on it, will $\beta\_1$ be the effect I am looking for? Is this model correctly specified? Yes, $X\_1$ and $X\_2$ are different between the two groups: the two groups are not the same (so there is no randomization). But, *I'm controlling for the difference*. I put the variables in the model, so I am accounting for the difference between the two groups. Would this model work?
No, they do not need to be similar if you control for those variables, as you did. That is the whole point of including control variables in addition to the dummy you are interested in.
I was wondering if it is at all possible to use Kneser-Ney to smooth word unigram probabilites? The basic idea behind back-off is to use (n-1)-gram frequencies when an n-gram has 0 count. This is obviously hard to do with an unigram. Is there something that I am missing that would allow to use Kneser-Ney for unigrams to smooth probabilities of single words? If this is possible how could that be done? If not, why is that impossible?
**Short answer**: although it's possible to use it in this strange way, Kneser-Ney is not designed for smoothing unigrams, because in this case it's nothing but additive smoothing: $p\_{abs}\left ( w\_{i} \right )=\frac{max\left ( c\left ( w\_{i} \right )-\delta ,0 \right )}{\sum\_{w'}^{ }c(w')}$. This looks similar to Laplace smoothing, and it is a well-known fact that additive smoothing performs poorly (and why wouldn't it?). *Good and Turing* devised a better scheme. The idea is to reallocate the probability mass of n-grams that occur $r + 1$ times in the training data to the n-grams that occur $r$ times. In particular, reallocate the probability mass of n-grams that were seen once to the n-grams that were never seen. For each count $r$, we compute an adjusted count $r^{\*}=(r+1)\frac{n\_{r+1}}{n\_{r}}$, where $n\_{r}$ is the number of n-grams seen exactly $r$ times. Then we have: $p(x:c(x)=r)=\frac{r^{\*}}{N}, N=\sum\_{1}^{\infty }r\*n\_{r}$. But many more sophisticated models have been invented since then, so you have to do your research. **Long answer**: First, let's start with the problem (if your motivation is to gain a deeper understanding of what's going on behind the statistical model). You have some kind of probabilistic model, which is a distribution $p(e)$ over an event space $E$. You want to estimate the parameters of your model distribution $p$ from data. In principle, you might like to use maximum likelihood (ML) estimates, so that your model is $p\_{ML}\left ( x \right )=\frac{c(x)}{\sum\_{e}^{ }c(e)}$. But you have insufficient data: there are many events $x$ such that $c(x)=0$, so that the ML estimate is $p\_{ML}(x)=0$. In the case of language models, those events are words that have never been seen so far, and we don't want to predict their probability to be zero. Kneser-Ney is a very creative method to overcome this bug by smoothing. It's an extension of absolute discounting with a clever way of constructing the lower-order (backoff) model. The idea behind it is simple: the lower-order model is significant only when the count is small or zero in the higher-order model, and so should be optimized for that purpose. Example: suppose “San Francisco” is common, but “Francisco” occurs only after “San”. “Francisco” will get a high unigram probability, and so absolute discounting will give a high probability to “Francisco” appearing after novel bigram histories. Better to give “Francisco” a low unigram probability, because the only time it occurs is after “San”, in which case the bigram model fits well. For the bigram case we have: $p\_{abs}(w\_{i}|w\_{i-1})=\frac{max(c(w\_{i}w\_{i-1})-\delta,0)}{\sum\_{w'}^{ } c(w\_{i-1}w')}+\alpha\*p\_{abs}(w\_{i})$, from which it is easy to conclude what will happen if we have no context (i.e. only unigrams). Also take a look at the classic Chen & Goodman paper for a thorough and systematic comparison of many traditional language models: <http://u.cs.biu.ac.il/~yogo/courses/mt2013/papers/chen-goodman-99.pdf>
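To see how degenerate the unigram case is, here is a tiny numerical illustration of the absolute-discounting formula quoted in the short answer (my own example; the corpus and $\delta$ are made up): the discount frees up probability mass, but with unigrams there is no lower-order distribution to hand it to.

```python
from collections import Counter

corpus = "the cat sat on the mat the end".split()
counts = Counter(corpus)
N = sum(counts.values())
delta = 0.75

# max(c(w) - delta, 0) / sum_w' c(w'): exactly the unigram formula above.
p_abs = {w: max(c - delta, 0) / N for w, c in counts.items()}
print(p_abs)
print(1 - sum(p_abs.values()))   # freed-up mass, with nothing lower-order to take it
```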
I am relatively new to ML and in the process of learning pipelines. I am creating a pipeline of custom transformers and get this error: AttributeError: 'numpy.ndarray' object has no attribute 'fit'. Below is the code. I am not entirely sure where the error is occurring. Any help is appreciated Note: I am using the King county housing data <https://www.kaggle.com/harlfoxem/housesalesprediction> ```ipyhton housing_df =pd.read_csv("kc_house_data.csv") housing_df class FeatureSelector(BaseEstimator,TransformerMixin): def __init__(self,feature_names): self._feature_names = feature_names def fit(self,X, y= None): #We will not do anything here and just return the object print(f'\n Fit method - Feature Selector\n') return self def transform(self,X, y= None): print(f'\n Transform method - Feature Selector\n') #we will return only the columns mentioned in the feature names return X[self._feature_names] class CategoricalTansformer(BaseEstimator,TransformerMixin): def __init__(self, use_dates = ['year','month','day']): self._use_dates = use_dates def fit(self,X,y=None): #nothing to do here. Return the object print(f'\n Fit method - Categorical Transformer\n') return self # Helper functions to extract year from column 'dates' def get_year(self, data): return str(data)[:4] def get_month(self,data): return str(data)[4:6] def get_day(self,data): return str(data)[6:8] #Helper function thta converts values to Binary def create_binary(self,data): if data ==0: return 'No' else: return 'Yes' def transform(self,X,y=None): print(f'\n Transform method - Categorical Transformer\n') #Depending on the costructor argument break dates column to specified units for spec in self._use_dates: exec("X.loc[:,'{}']= X['date'].apply(self.get_{})".format(spec,spec)) #now drop the date column X = X.drop('date',axis =1) #Convert the columns to binary for one hot encoding later X.loc[:,'waterfront']=X['waterfront'].apply(self.create_binary) X.loc[:,'view']= X['view'].apply(self.create_binary) X.loc[:,'yr_renovated']= X['yr_renovated'].apply(self.create_binary) # returns numpy array return X.values class NumericalTransformer(BaseEstimator,TransformerMixin): def __init__(self,bath_per_bed =True, years_old = True): self._bath_per_bed = bath_per_bed self._years_old = years_old def fit(self, X,y=None): # No computations here, return object print(f'\n Fit method - Numerical Transformer\n') return self def transform(self,X,y=None): print(f'\n Transform method - Numerical Transformer\n') if self._bath_per_bed: #create a new column X.loc[:,'bath_per_bed'] = X['bathrooms']/X['bedrooms'] #drop redundant column X.drop('bathrooms',axis =1) if self._years_old: #create a new column X.loc[:,'years_old']= 2019 - X['yr_built'] # drop redundant column X.drop('yr_built',axis =1) #Converting any infinity value in the data set to NaN X =X.replace([np.inf,-np.inf],np.nan) #print(X.values) #returns a numpy array return X.values #Categorical features to pass down the Categorical pipeline categorical_features =['date','waterfront','view','yr_renovated'] #Numerical features to pass down the Numerical pipeline numerical_features = ['bedrooms','bathrooms','sqft_living','sqft_lot','floors','condition', 'grade','sqft_basement','yr_built'] #Defining the Categorical Pipeline categorical_pipeline = Pipeline(steps=[ ('cat_selector', FeatureSelector(categorical_features)), ('cat_transformer',CategoricalTansformer()), ('one_hot_encoder', OneHotEncoder(sparse=False)) ]) #Defining the Numerical Pipeline numerical_pipeline = Pipeline(steps =[ 
('num_selector',FeatureSelector(numerical_features)), ('num_transformer', NumericalTransformer()), ('imputer', SimpleImputer(strategy ='median')), ('std_scaler',StandardScaler) ]) #Combining numerical and categorical pipelines using FeatureUnion full_pipeline = FeatureUnion(transformer_list =[ ('categorical_pipeline',categorical_pipeline), ('numerical_pipeline',numerical_pipeline) ]) #Let us add an estimator to the pipeline that was built from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split data_X= housing_df.drop('price',axis =1) # Note the values of y are converted to numpy values data_y= housing_df['price'].values X_train,X_test,y_train,y_test = train_test_split(data_X,data_y,test_size =0.2, random_state = 42) #Now let us build the final pipeline full_model_pipeline = Pipeline(steps =[ ('full_pipeline',full_pipeline), ('model',LinearRegression()) ]) full_model_pipeline.fit(X_train,y_train) y_pred =full_model_pipeline.predict(X_test) ``` Here is the full stack trace: ```ipyhton AttributeError Traceback (most recent call last) <ipython-input-192-64f513a54376> in <module> 16 ]) 17 ---> 18 full_model_pipeline.fit(X_train,y_train) 19 y_pred =full_model_pipeline.predict(X_test) ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/sklearn/pipeline.py in fit(self, X, y, **fit_params) 339 """ 340 fit_params_steps = self._check_fit_params(**fit_params) --> 341 Xt = self._fit(X, y, **fit_params_steps) 342 with _print_elapsed_time('Pipeline', 343 self._log_message(len(self.steps) - 1)): ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/sklearn/pipeline.py in _fit(self, X, y, **fit_params_steps) 301 cloned_transformer = clone(transformer) 302 # Fit or load from cache the current transformer --> 303 X, fitted_transformer = fit_transform_one_cached( 304 cloned_transformer, X, y, None, 305 message_clsname='Pipeline', ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/joblib/memory.py in __call__(self, *args, **kwargs) 350 351 def __call__(self, *args, **kwargs): --> 352 return self.func(*args, **kwargs) 353 354 def call_and_shelve(self, *args, **kwargs): ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/sklearn/pipeline.py in _fit_transform_one(transformer, X, y, weight, message_clsname, message, **fit_params) 752 with _print_elapsed_time(message_clsname, message): 753 if hasattr(transformer, 'fit_transform'): --> 754 res = transformer.fit_transform(X, y, **fit_params) 755 else: 756 res = transformer.fit(X, y, **fit_params).transform(X) ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/sklearn/pipeline.py in fit_transform(self, X, y, **fit_params) 978 sum of n_components (output dimension) over transformers. 
979 """ --> 980 results = self._parallel_func(X, y, fit_params, _fit_transform_one) 981 if not results: 982 # All transformers are None ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/sklearn/pipeline.py in _parallel_func(self, X, y, fit_params, func) 1000 transformers = list(self._iter()) 1001 -> 1002 return Parallel(n_jobs=self.n_jobs)(delayed(func)( 1003 transformer, X, y, weight, 1004 message_clsname='FeatureUnion', ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/joblib/parallel.py in __call__(self, iterable) 1042 self._iterating = self._original_iterator is not None 1043 -> 1044 while self.dispatch_one_batch(iterator): 1045 pass 1046 ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/joblib/parallel.py in dispatch_one_batch(self, iterator) 857 return False 858 else: --> 859 self._dispatch(tasks) 860 return True 861 ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/joblib/parallel.py in _dispatch(self, batch) 775 with self._lock: 776 job_idx = len(self._jobs) --> 777 job = self._backend.apply_async(batch, callback=cb) 778 # A job can complete so quickly than its callback is 779 # called before we get here, causing self._jobs to ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/joblib/_parallel_backends.py in apply_async(self, func, callback) 206 def apply_async(self, func, callback=None): 207 """Schedule a func to be run""" --> 208 result = ImmediateResult(func) 209 if callback: 210 callback(result) ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/joblib/_parallel_backends.py in __init__(self, batch) 570 # Don't delay the application, to avoid keeping the input 571 # arguments in memory --> 572 self.results = batch() 573 574 def get(self): ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/joblib/parallel.py in __call__(self) 260 # change the default number of processes to -1 261 with parallel_backend(self._backend, n_jobs=self._n_jobs): --> 262 return [func(*args, **kwargs) 263 for func, args, kwargs in self.items] 264 ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/joblib/parallel.py in <listcomp>(.0) 260 # change the default number of processes to -1 261 with parallel_backend(self._backend, n_jobs=self._n_jobs): --> 262 return [func(*args, **kwargs) 263 for func, args, kwargs in self.items] 264 ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/sklearn/utils/fixes.py in __call__(self, *args, **kwargs) 220 def __call__(self, *args, **kwargs): 221 with config_context(**self.config): --> 222 return self.function(*args, **kwargs) ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/sklearn/pipeline.py in _fit_transform_one(transformer, X, y, weight, message_clsname, message, **fit_params) 752 with _print_elapsed_time(message_clsname, message): 753 if hasattr(transformer, 'fit_transform'): --> 754 res = transformer.fit_transform(X, y, **fit_params) 755 else: 756 res = transformer.fit(X, y, **fit_params).transform(X) ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/sklearn/pipeline.py in fit_transform(self, X, y, **fit_params) 385 fit_params_last_step = fit_params_steps[self.steps[-1][0]] 386 if hasattr(last_step, 'fit_transform'): --> 387 return last_step.fit_transform(Xt, y, **fit_params_last_step) 388 else: 389 return last_step.fit(Xt, y, ~/ML_Projects/Base_ML_env/env/lib/python3.9/site-packages/sklearn/base.py in fit_transform(self, X, y, **fit_params) 697 if y is None: 698 # fit method of arity 1 (unsupervised transformation) --> 699 return self.fit(X, **fit_params).transform(X) 700 else: 701 # 
fit method of arity 2 (supervised transformation) AttributeError: 'numpy.ndarray' object has no attribute 'fit' ```
It would be helpful if you could post the full stack trace, so that we can see which line your error occurs at. In general, the more information you can provide in a question, the better. In this case, it looks like your full\_model\_pipeline may somehow become a numpy array. Since you have a one-element pipeline, you could try changing ``` full_model_pipeline = Pipeline(steps =[ ('full_pipeline',full_pipeline), ('model',LinearRegression()) ]) full_model_pipeline.fit(X_train,y_train) ``` to ``` model = LinearRegression() model.fit(X_train, y_train) ```
I was just recently having a discussion about Turing Machines when I was asked, "Is the Turing Machine derived from automata, or is it the other way around"? I didn't know the answer of course, but I'm curious to find out. The Turing Machine is basically a slightly more sophisticated version of a Push-Down Automata. From that I would assume that the Turing Machine was derived from automata, however I have no definitive proof or explanation. I might just be plain wrong... perhaps they were developed in isolation. Please! Free this mind from everlasting tangents of entanglement.
**Neither!** The best way to see this independence is to read [the original papers](http://classes.soe.ucsc.edu/cmps130/Spring09/Papers.html). * [Turing's 1936 paper](http://classes.soe.ucsc.edu/cmps130/Spring09/Papers/turing-1936.pdf) introducing Turing machines does not refer to any simpler type of (abstract) finite automaton. * [McCulloch and Pitts' 1943 paper](http://classes.soe.ucsc.edu/cmps130/Spring09/Papers/mculloch-pitts.pdf) introducing "nerve-nets", the precursors of modern-day finite-state machines, proposed them as simplified models of neural activity, not computation per se. For an interesting early perspective, see the [1953 survey by Claude Shannon](http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4051186), which has an entire section on Turing machines, but says nothing about finite automata as we would recognize them today (even though he cites Kleene's 1951 report). Modern finite automata arguably start with [a 1956 paper of Kleene](http://classes.soe.ucsc.edu/cmps130/Spring09/Papers/kleene_1956.pdf), originally published as a RAND technical report in 1951, which defined regular expressions. Kleene was certainly aware of Turing's results, having published [similar results himself](http://www.springerlink.com/content/p11t718324483v87/) (in the language of primitive recursive functions) at almost the same time. Nevertheless, Kleene's only reference to Turing is an explanation that Turing machines are not finite automata, because of their unbounded tapes. It's of course possible that Kleene's thinking was influenced by Turing's abstraction, but Kleene's definitions appear (to me) to be independent. In the [1956 survey volume edited by Shannon and McCarthy](http://books.google.com/books?id=oL57iECEeEwC), in which both [Kleene's paper on regular experssions](http://classes.soe.ucsc.edu/cmps130/Spring09/Papers/kleene_1956.pdf) and [Moore's paper on finite-state transducers](http://people.mokk.bme.hu/~kornai/MatNyelv/moore_1956.pdf) were finally published, finite automata and Turing machines were discussed side by side, but almost completely independently. Moore also cites Turing, but only in a footnote stating that Turing machines aren't finite automata. ([A recent paper of Kline](http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5477410) recounts the rather stormy history of this volume and the associated Dartmouth conference, sometimes called the "birthplace of AI".) (An even earlier version of neural nets is found in Turing's work on "type B machines", as reprinted in the book "The essential Turing", from about 1937 I think. It seems likely that many people were playing with the idea at the time, as even today many CS undergrads think they have "invented" it at some point in their studies before discovering its history.)
The intersection non-emptiness problem is defined as follows: > > Given a list of deterministic finite automata as input, the goal is to determine whether or not their associated regular languages have a non-empty intersection. In other words, the goal is to determine if there exists a string that is accepted by all of the automata in the list. > > > This problem is PSPACE-complete. Is the complexity known when we replace the DFAs with regular expressions or NFAs?
As @MRC pointed stated, for NFAs the problem is also PSPACE-complete. However, I think the proof suggested by MRC does not cover the case you have in mind, when $k$ is non-constant, so I will give a complete proof here. **Lemma 1.** *With NFAs or regular expressions, the problem is PSPACE-complete.* *Proof.* First we give a proof for NFA variant. With DFAs the problem is known to be PSPACE-complete (shown originally by Kozen, I believe). The NFA variant cannot be easier (as the DFA variant is a restriction of the NFA variant), so the NFA variant is at least PSPACE-hard. It remains to verify that the NFA variant is in PSPACE. It seems that the proof that the DFA variant is in PSPACE, as summarized [here](https://cs.stackexchange.com/questions/119317/pspace-completeness-of-dfa-intersection-problem), adapts easily to the NFA variant. That proof uses Savitch's theorem, by which PSPACE = NPSPACE. So it suffices to describe a non-deterministic, polynomial-space verifier for the problem. The input is a list of $n$ NFAs, each with at most $m$ states. The (non-deterministic, polynomial space) algorithm just non-deterministically guesses the string in the intersection, character by character, meanwhile simulating concurrently, in lockstep, each given NFA on the guessed string, character by character, keeping track of just one state per given NFA. If it ever reaches a step where all machines are in an accept state, it accepts. It requires only polynomial space, because at any point it only has to store $n$ states, one for each of the given NFAs. This shows that the NFA variant is in PSPACE. The regular-expression variant reduces in polynomial time (and space) to the NFA variant, simply by converting each regular expression into an NFA by the standard polynomial-time algorithm. So the regular-expression variant is also in PSPACE. $~~~\Box$ By the way, the approach of explicitly constructing the NFA for the intersection of the given NFAs (using the standard Cartesian product construction) fails because that NFA can have as many as $m^n$ states, so in general has size exponentially large in the input size.
Consider a graph with $n$ vertices and $m$ edges. The vertices are labelled with real variables $x\_i$, where $x\_1=0$ is fixed. Each edge represents a "measurement": for edge $(u,v)$, I obtain a measurement $z \approx x\_u - x\_v$. More precisely, $z$ is a truly random quantity in $(x\_u - x\_v) \pm 1$, uniformly distributed and independent of all other measurements (edges). I am given the graph and the measurements, with the distribution promise for above. I want to "solve" the system and obtain the vector of $x\_i$'s. Is there some body of work on problems of this type? Actually, I want to solve an even simpler problem: somebody points me to vertices $s$ and $t$, and I have to compute $x\_s - x\_t$. There are many things to try, like finding a shortest path, or finding as many disjoint paths as possible and averaging them (weighted by the inverse of the square root of the length). Is there an "optimal" answer? The problem of computing $x\_s - x\_t$ is itself not completely defined (e.g. should I assume a prior on the variables?)
In the case that the field $F$ is of size at least $2n$, I think this problem is hard. More specifically, I think that if the above can be efficiently solved for $F$ this large, then CNF-SAT has efficient randomized algorithms. Say we are given a CNF formula $\varphi$. One can easily come up with an arithmetic circuit $C$ that computes an ``arithmetization'' $p$ of $\varphi$, where the polynomial $p$ agrees with the formula $\varphi$ on $0$-$1$ inputs. Consider the multilinearization $q$ of $p$. Note that $q$ agrees with $p$ and hence $\varphi$ on $\{0,1\}^n$. I claim that $q$ is non-zero iff $\varphi$ is satisfiable. Clearly, if $q=0$, then $\varphi$ cannot be satisfied. For the converse, one can show that any non-zero multilinear polynomial cannot vanish on all of $\{0,1\}^n$. This implies that a non-zero $q$ (and hence the corresponding $\varphi$) does not vanish at some input in $\{0,1\}^n$. Therefore, checking for satisfiability of $\varphi$ is equivalent to checking if $q$ is non-zero. Say, now, that we could evaluate $q$ over a large field $F$. Then, using the Schwartz-Zippel Lemma, we could identity-test $q$ using an efficient randomized algorithm and check if it is the zero polynomial (the size of $F$ is used to upper bound the error in the Schwartz-Zippel Lemma).
The prefixes of the string `aabab` are `a, aa, aab, aaba, aabab`. If the number of a's should be greater or equal to the number of b's, the grammar would be `S -> aS | aSbS | e`, but I need it with strictly more a's than b's in any prefix. I thought of this grammar, but I'm not sure it is correct. `S -> aS | aSAbA | a A -> aA | aAbA | e`
The grammar you have written can't produce the string "a". For a's strictly more than b's in every prefix, the grammar would be `S -> aA | aaAbA`, `A -> aAbA | aA | e`, because if a production places no more a's than b's before a b, the derived string will violate the prefix condition.
I fitted a weighted regression model to predict age as a function of several DNA methylation markers (expressed in percentages). I used weighted regression because the variance of my original OLS model increases with age. When using the predict function to generate prediction intervals for a set of new samples, ``` predict(fGLS, newdata = Testset, interval = "prediction", level = 0.95) ``` I get the following warning: ``` Warning message: In predict.lm(fGLS, newdata = Testset, interval = "prediction", : Assuming constant prediction variance even though model fit is weighted ``` I tried adding the same weights I used to fit the model and this no longer yielded a warning; ``` predict(fGLS, newdata = Testset, interval = "prediction", level = 0.95, weights = 1/hhat) ``` I have two questions: 1. Am I correct in simply adding the same weights I used to fit the weighted regression model, to the predict function? What does this effectively do? 2. In the first situation, my prediction intervals are roughly the same size throughout the data in my test set. In the second situation, the prediction intervals become larger with increasing age. Does this mean my prediction intervals in the first situation are wrong? Or is it okay to have equal interval sizes since I "corrected" for heteroskedasticity by using weighted regression? In other words, can I afford to simply ignore the warning?
There seems to be some confusion about the purpose of a prediction interval. If I have frequency weights, then if my weights vector has some element `Weights[i] = 10`, this indicates for the i-th factor level, there were 10 such people/observations having a similar distribution of characteristic factors. That weight is endemic to the model and the model alone. When you calculate prediction intervals, it is for an independent 11th person or observation: the uncertainty of the prediction interval is a sum of the uncertainty in your estimates (confidence interval) as well as their individual uncertainty (sampling error). If in a contrived way, you assume you conduct an independent study and resample another 10 or even 20 people for that i-th factor level and you are interested in prediction intervals for their aggregate mean, you can simply calculate this yourself using a (1/sqrt(10) + 1/n)\*se scale for the prediction interval. Your problem is easily understood by trying to replicate results obtain from predict commands with `interval='confidence'` and `interval='prediction'` arguments. However, it seems in your case that the purpose of weighting here was precision weighting. In that case, you are correct to re-apply the weights, this should yield wider prediction intervals for more highly varied factor levels (higher age and more varied methylation). You can easily check this result for yourself.
What is the difference between Cross Correlation and Mutual Information. What kind of problems can be solved using these measures and when is it appropriate to use one over the other. Thanks for the comments. To clarify, the question is prompted by an interest in image analysis rather than time series analysis although any enlightenment in that area would also be appreciated
Cross correlation assumes a linear relationship between two sets of data, whereas mutual information only assumes that the value of one dataset says something about the value of the other dataset. So mutual information makes much weaker assumptions. A traditional problem solved with mutual information is the alignment (registration) of two types of medical images, for example an ultrasound and an X-ray image. (Typically, the types of images are called modalities, so the problem is named multi-modal image registration.) For both X-ray and ultrasound, a specific material, say bone, leads to a certain 'brightness' in the image. While some materials lead to a bright spot in both the X-ray and the ultrasound image, for other materials (e.g. fat) it might be the opposite: one is bright, the other is dark. Therefore, it is not the case that bright parts of the X-ray image are also bright parts of the ultrasound. Mutual information is still a useful criterion for aligning such images, but cross correlation is not.
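A quick numerical contrast (my own example, not from the answer): a noiseless quadratic relationship has essentially zero linear correlation but clearly positive mutual information, here estimated crudely by discretising both variables into bins.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10000)
y = x ** 2                          # perfectly dependent, but not linearly

print(np.corrcoef(x, y)[0, 1])      # close to 0: no *linear* relationship

# Histogram-based estimate of I(X;Y) = sum p(x,y) * log(p(x,y) / (p(x) p(y)))
pxy, _, _ = np.histogram2d(x, y, bins=20)
pxy /= pxy.sum()
px = pxy.sum(axis=1, keepdims=True)
py = pxy.sum(axis=0, keepdims=True)
nz = pxy > 0
print(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))   # clearly positive
```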
I have a dataset with a few million rows and ~100 columns. I would like to detect about 1% of the examples in the dataset, which belong to a common class. I have a minimum precision constraint, but due to very asymmetric cost I am not too keen on any particular recall (as long as I am not left with 10 positive matches!) What are some approaches that you would recommend in this setting? (links to papers welcome, links to implementations appreciated)
I've found [He and Garcia (2009)](http://www.ele.uri.edu/faculty/he/PDFfiles/ImbalancedLearning.pdf) to be a helpful review of learning in imbalanced class problems. Here are a few definitely-not-comprehensive things to consider: **Data-based approaches:** One can undersample the majority class or oversample the minority class. (Breiman pointed out that this is formally the equivalent to assigning non-uniform misclassification costs.) This can cause problems: Undersampling can cause the learner to miss aspects of the majority class; oversampling increases risk of overfitting. There are "informed undersampling" methods that reduce these issues. One of them is [EasyEnsemble](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/tsmcb09.pdf), which independently samples several subsets from the majority class and makes multiple classifiers by combining each subset with all the minority class data. [SMOTE (Synthetic Minority Oversampling Technique)](http://arxiv.org/pdf/1106.1813.pdf) or [SMOTEBoost, (combining SMOTE with boosting)](http://www3.nd.edu/~nchawla/papers/ECML03.pdf) create synthetic instances of the minority class by making nearest neighbors in the feature space. SMOTE is implemented in R in the DMwR package (which accompanies [Luis Torgo's book “Data Mining with R, learning with case studies” CRC Press 2016](https://ltorgo.github.io/DMwR2/)). **Model fitting approaches** Apply class-specific weights in your loss function (larger weights for minority cases). For tree-based approaches, you can use [Hellinger distance](http://en.wikipedia.org/wiki/Hellinger_distance) as a node impurity function, as advocated in [Cieslak et al. "Hellinger distance decision trees are robust and skew-insensitive"](http://www.cse.nd.edu/~nchawla/papers/DMKD11.pdf) ([Weka code here](http://www3.nd.edu/~dial/hddt/).) Use a [one class classifier](http://wiki.pentaho.com/display/DATAMINING/OneClassClassifier), learning either (depending on the model) a probability density or boundary for one class and treating the other class as outliers. Of course, don't use accuracy as a metric for model building. Cohen's kappa is a reasonable alternative. **Model evaluation approaches** If your model returns predicted probabilities or other scores, chose a decision cutoff that makes an appropriate tradeoff in errors (using a dataset independent from training and testing). In R, the package OptimalCutpoints implements a number of algorithms, including cost-sensitive ones, for deciding a cutoff.
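As a concrete (and deliberately minimal) sketch of two of these ideas together, here is some scikit-learn code using class-specific weights plus a decision cutoff chosen on held-out data to satisfy a precision constraint; the synthetic data, the roughly 1% positive rate and the 0.80 precision target are all placeholder assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Toy data with roughly 1% positives (placeholder for the real problem).
X, y = make_classification(n_samples=20000, weights=[0.99], flip_y=0.01,
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

# Class-specific weights in the loss (larger weight on the rare class).
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
probs = clf.predict_proba(X_val)[:, 1]

# Pick the lowest cutoff on the validation set that meets a precision target.
prec, _, thr = precision_recall_curve(y_val, probs)
ok = prec[:-1] >= 0.80                      # example precision constraint
cutoff = thr[ok][0] if ok.any() else 0.5
print("chosen cutoff:", cutoff)
```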
I have a small dataset that contains names of books, their average [Goodreads](https://www.goodreads.com/) ratings and the ratings I have given them. I want to know whether the average rating of a book is a good predictor of the rating I will give the same book. Here's a small sample of the data: ``` Title Author 340 Quiet: The Power of Introverts in a World That Can't Stop Talking Susan Cain 276 The Witches Roald Dahl 63 The 48 Laws of Power Robert Greene 293 Blink: The Power of Thinking Without Thinking Malcolm Gladwell 128 The Martian Andy Weir 119 The Design of Everyday Things Donald A. Norman 71 The Hostile Hospital (A Series of Unfortunate Events, #8) Lemony Snicket 33 The Stonekeeper (Amulet, #1) Kazu Kibuishi 369 Y: The Last Man, Vol. 1: Unmanned (Y: The Last Man #1) Brian K. Vaughan 222 The Book Thief Markus Zusak Average.Rating My.Rating 340 4.02 4 276 4.16 4 63 4.16 5 293 3.87 5 128 4.39 4 119 4.16 5 71 3.93 4 33 4.13 2 369 4.12 3 222 4.35 3 ``` How do I know whether `Average.Rating` is a good predictor of `My.Rating`? I tried the `cor` function in R and the correlation I got was 0.1970633. What I understand from this result is that the predictive power of a Goodreads rating is negligible. It follows that I might have a similar outcome if I had randomly picked books from a bookstore. However, I do not feel that Goodreads ratings are as unreliable as the correlation seems to suggest. In fact, I feel that most books that have a high rating are those I have personally liked as well. Also, I intuit that selecting books based on genre and the average rating is a much better approach than randomly picking up books from a bookstore or library. What am I missing? **Update:** Here's the scatter plot of my data: [![Sctterplot of My Rating vs Avg Rating](https://i.stack.imgur.com/9RgOQ.png)](https://i.stack.imgur.com/9RgOQ.png) This is result that `cor.test` gives me: ``` Pearson's product-moment correlation data: rated$Average.Rating and rated$My.Rating t = 2.9746, df = 219, p-value = 0.003263 alternative hypothesis: true correlation is not equal to 0 95 percent confidence interval: 0.06683011 0.32069438 sample estimates: cor 0.1970633 ```
Kodiologist is right - you're right, he's wrong. However sadly this is an even more common place problem than what you're encountering. You're actually in an industry that's doing *relatively* well. For example, I currently work in a field where specifications on products need to be set. This is nearly always done by monitoring the products/processes in some ways and recording means and std deviations - then using good old $mean + 3\*\sigma$. Now, apart from the fact that this confidence interval is not telling them what they actually need (they need a tolerance interval for that), this is done blindly on parameters that are hovering near some maximum or minimum value (but where the interval won't actually exceed those values). Because Excel will calculate what they need (yes, I said Excel), they set their specs according to that, despite the fact that the parameter is not going to be anywhere near normally distributed. These people have been taught basic statistics, but not q-q plots or such like. One of the biggest problems is that stats will give you a number, even when used inappropriately- so most people don't know when they have done so. In other words, the specifications on the vast majority of products, in the vast majority of industries, are nonsense. One of the worst examples I have of people blindly following statistics, without understanding, is Cpk use in the automotive industry. One company spent about a year arguing over a product with their supplier, because they thought the supplier could control their product to a level that was simply not possible. They were setting only a maximum spec (no minimum) on a parameter and used Cpk to justify their claim - until it was pointed out that their calculations (when used to set a theoretical minimum level - they didn't want that so had not checked) implied a massive negative value. This, on a parameter that could never go less than 0. Cpk assumes normal, the process didn't give anywhere near normal data. It took a long time to get that to sink in. All that wasted time and money because people didn't understand what they were calculating - and it could have been a lot worse had it not been noticed. This might be a contributing factor to why there are regular recalls in the automotive industry! I, myself, come from a science background, and, frankly, the statistics teaching in science and engineering is shockingly insufficient. I'd never heard of most of what I need to use now - it's all been self taught and there are (compared to a proper statistician) massive gaps in my knowledge even now. For that reason, I don't begrudge people misusing statistics (I probably still do it regularly), it's poor education. So, going back to your original question, it's really not easy. I would agree with Kodiologist's recommendation to try to gently explain these things so the right statistics are used. **But**, I would add an extra caveat to that and also advise you to pick your battles wisely, for the sake of your career. It's unfortunate, but it's a fact that you won't be able to get everyone to do the best statistics every time. Choose to correct them when it really matters to the final overall conclusion (which sometimes means doing things two different ways to check). There are times (e.g. your model 1,2 example) where using the "wrong" way might lead to the same conclusions. Avoid correcting too many people too frequently. I know that's intellectually frustrating and the world should work differently - sadly it doesn't. 
To a degree you'll have to learn to judge your battles based on your colleagues' individual personalities. Your (career) goal is to be the expert they go to when they really need help, not the picky person always trying to correct them. And, in fact, if you become that person, that's probably where you'll have the most success getting people to listen and do things the right way. Good luck.
I have a source video in which a person is standing or walking, and I want the person's clothes to be swapped with a destination image (a picture of some clothes). I would like to use a GAN, such as StyleGAN, for this. I have been searching on Google for a repository where somebody has implemented this, but I cannot find one. If you have an idea, kindly share the link with me.
Try this: * VITON: An Image-based Virtual Try-on Network - <https://arxiv.org/abs/1711.08447v4> * <https://github.com/xthan/VITON>
I know empirically that is the case. I have just developed models that run into this conundrum. I also suspect it is not necessarily a yes/no answer. I mean by that if both A and B are correlated with C, this may have some implication regarding the correlation between A and B. But, this implication may be weak. It may be just a sign direction and nothing else. Here is what I mean... Let's say A and B both have a 0.5 correlation with C. Given that, the correlation between A and B could well be 1.0. I think it also could be 0.5 or even lower. But, I think it is unlikely that it would be negative. Do you agree with that? Also, is there an implication if you are considering the standard Pearson Correlation Coefficient or instead the Spearman (rank) Correlation Coefficient? My recent empirical observations were associated with the Spearman Correlation Coefficient.
Correlation is the cosine of the angle between two vectors. In the situation described, (A,B,C) is a triple of observations, made n times, each observation being a real number. The correlation between A and B is the cosine of the angle between $V\_A=A-E(A)$ and $V\_B=B-E(B)$ as measured in n-dimensional euclidean space. So our situation reduces to considering 3 vectors $V\_A$, $V\_B$ and $V\_C$ in n dimensional space. We have 3 pairs of vectors and therefore 3 angles. If two of the angles are small (high correlation) then the third one will also be small. But to say "correlated" is not much of a restriction: it means that the angle is between 0 and $\pi/2$. In general this gives no restriction at all on the third angle. Putting it another way, start with any angle less than $\pi$ between $V\_A$ and $V\_B$ (any correlation except -1). Let $V\_C$ bisect the angle between $V\_A$ and $V\_B$. Then C will be correlated with both A and B.
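To make this concrete with numbers: if A and B each have correlation $0.5$ with C, the angles to $V\_C$ are both $60°$, so the angle between $V\_A$ and $V\_B$ can be anywhere from $0°$ to $120°$, i.e. the correlation between A and B can be anywhere from $1$ down to $-0.5$. A quick simulation (my own sketch; the $-0.4$ entry is just a valid choice inside that range) shows a case where both variables correlate positively with C yet negatively with each other.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([[ 1.0, -0.4, 0.5],
                   [-0.4,  1.0, 0.5],
                   [ 0.5,  0.5, 1.0]])   # valid (positive-definite) correlation matrix
samples = rng.multivariate_normal(np.zeros(3), target, size=100_000)
print(np.corrcoef(samples, rowvar=False).round(2))   # recovers the matrix above
```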
The dataframe contains ``` >> df A B C A 196512 196512 1325 12.9010511000000 196512 196512 114569 12.9267705000000 196512 196512 118910 12.8983353775637 196512 196512 100688 12.9505091000000 196795 196795 28978 12.7805170314276 196795 196795 34591 12.8994111000000 196795 196795 13078 12.9135746000000 196795 196795 24173 12.8769653100000 196341 196341 118910 12.8983353775637 196341 196341 100688 12.9505091000000 196641 196641 28972 12.7805170314276 196641 196641 34591 12.8994111000000 196346 196341 118910 12.8983353775637 196346 196341 100688 12.9505091000000 196646 196641 28980 12.7805170314276 196646 196641 34591 12.8994111000000 ``` I tried to get minimum value for each group and display using the following code, ``` df.columns = ['a','b','c'] df.index = df.a.astype(str) dd=df.groupby('a').min()['c'] ``` it gives the result ``` 196512 12.7805170314276 196795 12.7805170314276 196341 12.7805170314276 196346 12.7805170314276 ``` but after grouping, I want to get the row with the minimum 'c' value, grouped by column 'a' and display that full matching row in result like, ``` 196512 118910 12.8983353775637 196795 28978 12.7805170314276 196341 28972 12.7805170314276 196346 28980 12.7805170314276 ```
In case this can help anyone else. Here is a solution that is more computationally efficient. TL;DR version ------------- If each row already has a unique index, then do this: ``` >>> df.loc[df.groupby('A')['C'].idxmin()] ``` If you've already indexed by 'A', then convert 'A' back into a column first. ``` >>> df2 = df.reset_index() >>> df2.loc[df2.groupby('A')['C'].idxmin()] ``` --- Step by Step explanation: Step 1. ------- First, make sure each row in your dataframe is uniquely indexed. This is the default when importing csv data. e.g. ``` >>> df = pd.read_csv('questionData.csv'); df A B C 0 196512 1325 12.901051 1 196512 114569 12.926770 2 196512 118910 12.898335 3 196512 100688 12.950509 4 196795 28978 12.780517 5 196795 34591 12.899411 6 196795 13078 12.913575 7 196795 24173 12.876965 8 196341 118910 12.898335 9 196341 100688 12.950509 10 196641 28972 12.780517 11 196641 34591 12.899411 12 196346 118910 12.898335 13 196346 100688 12.950509 14 196646 28980 12.780517 15 196646 34591 12.899411 ``` Aside: If you already converted column 'A' into an index, then you can turn the index back into a column (<https://stackoverflow.com/questions/20461165/how-to-convert-pandas-index-in-a-dataframe-to-a-column>) by doing: `df.reset_index()` Step 2. ------- Use the [pandas.DataFrame.idxmin](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmin.html) function to retrieve the indices of the minimum of each group. The semantics of the example below is this: "group by 'A', then just look at the 'C' column of each group, and finally return the index corresponding to the minimum 'C' in each group. ``` >>> indices = df.groupby('A')['C'].idxmin; indices A 196341 8 196346 12 196512 2 196641 10 196646 14 196795 4 Name: C, dtype: int64 ``` Step 3. ------- Finally, use the retrieved indices in the original dataframe using [pandas.DataFrame.loc](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html) to get the rows of the original dataframe correponding to the minimum values of 'C' in each group that was grouped by 'A'. ``` >>> df.loc[indices] A B C 8 196341 118910 12.898335 12 196346 118910 12.898335 2 196512 118910 12.898335 10 196641 28972 12.780517 14 196646 28980 12.780517 4 196795 28978 12.780517 ``` Note: The groupby('A') operation returns groups sorted by A. Thus 'indices' is sorted by A. If we want the original order, we just have to do ``` >>> df.loc[indices].sort_index() A B C 2 196512 118910 12.898335 4 196795 28978 12.780517 8 196341 118910 12.898335 10 196641 28972 12.780517 12 196346 118910 12.898335 14 196646 28980 12.780517 ```
Let $X\_1 \sim U[0,1]$ and $X\_i \sim U[X\_{i - 1}, 1]$, $i = 2, 3,...$. What is the expectation of $X\_1 X\_2 \cdots X\_n$ as $n \rightarrow \infty$?
**Update** I think it's a safe bet that the answer is $1/e$. I ran the integrals for the expected value from $n=2$ to $n=100$ using *Mathematica* and with $n=100$ I got ``` 0.367879441171442321595523770161567628159853507344458757185018968311538556667710938369307469618599737077005261635286940285462842065735614 ``` (to 100 decimal places). The reciprocal of that value is ``` 2.718281828459045235360287471351873636852026081893477137766637293458245150821149822195768231483133554 ``` The difference with that reciprocal and $e$ is ``` -7.88860905221011806482437200330334265831479532397772375613947042032873*10^-31 ``` I think that's too close, dare I say, to be a rational coincidence. The *Mathematica* code follows: ``` Do[ x = Table[ToExpression["x" <> ToString[i]], {i, n}]; integrand = Expand[Simplify[(x[[n - 1]]/(1 - x[[n - 1]])) Integrate[x[[n]], {x[[n]], x[[n - 1]], 1}]]]; Do[ integrand = Expand[Simplify[x[[i - 1]] Integrate[integrand, {x[[i]], x[[i - 1]], 1}]/(1 - x[[i - 1]])]], {i, n - 1, 2, -1}] Print[{n, N[Integrate[integrand, {x1, 0, 1}], 100]}], {n, 2, 100}] ``` **End of update** This is more of an extended comment than an answer. If we go a brute force route by determining the expected value for several values of $n$, maybe someone will recognize a pattern and then be able to take a limit. For $n=5$, we have the expected value of the product being $$\mu\_n=\int \_0^1\int \_{x\_1}^1\int \_{x\_2}^1\int \_{x\_3}^1\int \_{x\_4}^1\frac{x\_1 x\_2 x\_3 x\_4 x\_5}{(1-x\_1) (1-x\_2) (1-x\_3) (1-x\_4)}dx\_5 dx\_4 dx\_3 dx\_2 dx\_1$$ which is 96547/259200 or approximately 0.3724807098765432. If we drop the integral from 0 to 1, we have a polynomial in $x\_1$ with the following results for $n=1$ to $n=6$ (and I've dropped the subscript to make things a bit easier to read): $x$ $(x + x^2)/2$ $(5x + 5x^2 + 2x^3)/12$ $(28x + 28x^2 + 13x^3 + 3x^4)/72$ $(1631x + 1631x^2 + 791x^3 + 231x^4 + 36x^5)/4320$ $(96547x + 96547x^2 + 47617x^3 + 14997x^4 + 3132x^5 + 360x^6)/259200$ If someone recognizes the form of the integer coefficients, then maybe a limit as $n\rightarrow\infty$ can be determined (after performing the integration from 0 to 1 that was removed to show the underlying polynomial).
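If you don't have *Mathematica* handy, a quick Monte Carlo simulation also lands near $1/e$. This is a rough sketch of my own (the function name is just illustrative), not part of the numerical integration above:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_product(n, trials=200_000):
    """Monte Carlo estimate of E[X_1 X_2 ... X_n] with X_1 ~ U[0,1], X_i ~ U[X_{i-1}, 1]."""
    x = rng.uniform(0.0, 1.0, size=trials)
    prod = x.copy()
    for _ in range(n - 1):
        x = rng.uniform(x, 1.0)   # draw X_i ~ U[X_{i-1}, 1], vectorised over all trials
        prod *= x
    return prod.mean()

for n in (2, 5, 20, 50):
    print(n, mean_product(n))
print("1/e =", np.exp(-1.0))
```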
I'm currently training a CNN to do binary classification. I'm getting fairly good results, but unfortunately the training is very unstable. Just by changing the seed, the relative error changes by 20-30%. What can be the cause of this and how can I prevent it? Other info:

* The amount of data is rather small.
* I'm starting from an ImageNet snapshot.
What about this: do some manual preprocessing first. If you have many categorical variables (there can't be that many for a survey), then for each one:

* order the answers by decreasing frequency,
* then lump them together into, say, fewer than 10 major responses, and assign everything else to an "other" category.

Do so for each categorical variable. Sometimes it will be better to keep, say, only 3 major responses plus "other". Then do one-hot encoding (categorical to numerical using dummy variables), then do simple KMeans clustering and interpret the resulting clusters yourself for plausibility. If you have only free-text responses in your survey, or lots of NAs, you have to do even more preprocessing first.
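As a rough illustration of that pipeline (my own sketch; column names, data, the top-3 cutoff and the number of clusters are all made up):

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical survey answers, one categorical column per question
df = pd.DataFrame({
    "q1": ["yes", "no", "yes", "maybe", "no", "yes", "no", "yes"],
    "q2": ["red", "blue", "red", "red", "green", "blue", "red", "red"],
})

TOP = 3  # keep the 3 most frequent answers per question, lump everything else as "other"
for col in df.columns:
    top_levels = df[col].value_counts().nlargest(TOP).index
    df[col] = df[col].where(df[col].isin(top_levels), other="other")

X = pd.get_dummies(df)                                   # one-hot encoding
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(pd.crosstab(labels, df["q1"]))                     # eyeball the clusters for plausibility
```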
I have a data set that I need to analyze in R. A simplified version of it would be like this:

```
SessionNo. Objects OtherColumns
A          2       .
A          3
B          4
C          1
D          2
D          1
D          2
D          3
E          5
```

Here each SessionNo. represents one session of a browser, but due to the relation with other columns in the data it is aggregated as shown. So, session A is now fragmented into two rows, etc. What I need to find is the avg. number of objects downloaded per session (or any other statistics for the no. of rows in each session). So, how do I count the no. of objects for each session in R? Here, 5 objects in session A, 4 in session B, 8 in session D, etc. I guess one way would be to sum the whole Objects column and count the no. of unique session numbers in SessionNo. But I guess it would be a more general solution if I could group each unique session number with the total number of objects aggregated in it? Any suggestions on how to accomplish that in R?
The function that is specifically designed for this task is `ave()`. By default it returns the mean within a group, as a vector of the same length as its input arguments. It is designed to fill in columns of either means or deviations from means. If this is in a dataframe with the name "tst":

```
> tst$tmn <- with(tst, ave(Objects, SessionNo.))
> tst$devmn <- tst$Objects - with(tst, ave(Objects, SessionNo.))
> tst
  SessionNo. Objects tmn devmn
1          A       2 2.5  -0.5
2          A       3 2.5   0.5
3          B       4 4.0   0.0
4          C       1 1.0   0.0
5          D       2 2.0   0.0
6          D       1 2.0  -1.0
7          D       2 2.0   0.0
8          D       3 2.0   1.0
9          E       5 5.0   0.0
```
One would assume that a finite state transducer can perform any translation such that the resulting string is from a regular grammar. Similarly, that a pushdown transducer can generate strings from CFG grammars. I am, however, unsure about this because we can construct a finite state transducer which translates a string of the form $ab$ to $\{ a^n b^n : n \ge 1 \}$, which is irregular. Is this the case? If so, can anyone provide the relevant literature?
> 
> we can construct a finite state transducer [that maps $\{ab\}$ to $\{ a^n b^n : n \geq 1 \}$].
> 
> 

No, we can't. REG is closed against finite-state transduction. [Wikipedia](https://en.wikipedia.org/wiki/Regular_language#Closure_properties) hints as much without citation; [planetmath.org](http://planetmath.org/closurepropertiesonlanguages) lists some standard references, in particular Hopcroft/Ullman.
Can one maximize $\sum\_i c\_i x\_i^2$, where the $c\_i$ are constants (possibly negative), subject to linear constraints over the $x\_i$? [This paper](http://link.springer.com/article/10.1007/BF00120662) seems to come close to answering "no": they show it is NP-hard for the target function $x\_1 - x\_2^2$. However, their $x\_1$ is not squared, and from the two pages I can access online I cannot tell whether that is critical. Bonus question: Is there free software that can be used to solve these problems (possibly heuristically)?
It's NP-hard. Here's a reduction from the feasibility version of Binary Integer Programming (BIP), which is NP-hard. The problem is to decide if there's a feasible solution to the constraints $Ax \leq b$ and $x\_i \in \{0,1\}$. It's easy to convert this to a problem with the constraints $Ax \leq b$ and $x\_i \in \{-1,1\}$. Now consider the following optimization problem: $\max \sum\_i x\_i^2$ subject to the constraints $Ax \leq b$ and $-1 \leq x\_i \leq 1$ for all $i$. This problem has objective value $n$ (the total number of variables $x\_i$) if and only if the original BIP problem was feasible.
Adleman's proof that $BPP$ is contained in $P/poly$ shows that if there is a randomized algorithm for a problem that runs in time $t(n)$ on inputs of size $n$, then there also is a deterministic algorithm for the problem that runs in time $\Theta(t(n)\cdot n)$ on inputs of size $n$ [the algorithm runs the randomized algorithm on $\Theta(n)$ independent randomness strings. There must be randomness for the repeated algorithm that is good for all $2^n$ possible inputs]. The deterministic algorithm is non-uniform - it may behave differently for different input sizes. So Adleman's argument shows that - if one doesn't care about uniformity - randomization can only speed-up algorithms by a factor that is linear in the input size. What are some concrete examples where randomization speeds up computation (to the best of our knowledge)? One example is polynomial identity testing. Here the input is an n-sized arithmetic circuit computing an m-variate polynomial over a field, and the task is to find out whether the polynomial is identically zero. A randomized algorithm can evaluate the polynomial on a random point, while the best deterministic algorithm we know (and possibly the best that exists) evaluates the polynomial on many points. Another example is minimum spanning tree, where the best randomized algorithm by Karger-Klein-Tarjan is linear time (and the error probability is exponentially small!), while the best deterministic algorithm by Chazelle runs in time $O(m\alpha(m,n))$ ($\alpha$ is the inverse Ackermann function, so the randomization speed-up is really small). Interestingly, it was proved by Pettie and Ramachandran that if there's a non-uniform deterministic linear time algorithm for minimum spanning tree, then there also exists a uniform deterministic linear time algorithm. What are some other examples? Which examples do you know where the randomization speed-up is large, but this is possibly just because we haven't found sufficiently efficient deterministic algorithms yet?
I don't know if this answers your question (or at least part of it), but one class of real-world examples where randomisation can provide a speed-up is optimisation problems, via their relation to the [No Free Lunch (**NFL**) theorem](http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization). There is a paper ["Perhaps not a free lunch but at least a free appetizer"](http://www.martinsewell.com/evolutionary-computation/nfl/DrJW98.pdf) where it is shown that, by employing randomisation, (optimisation) algorithms can have better performance.

> 
> **Abstract:**
> 
> 
> It is often claimed that Evolutionary Algorithms are superior to other optimization techniques, in particular, in situations where not much is known about the objective function to be optimized. In contrast to that Wolpert and Macready (1997) proved that all optimization techniques have the same behavior --- on average over all $f : X \rightarrow Y$ where $X$ and $Y$ are finite sets. This result is called [the] No Free Lunch Theorem. Here different scenarios of optimization are presented. It is argued why the scenario on which the No Free Lunch Theorem is based does not model real life optimization. For more realistic scenarios it is argued why optimization techniques differ in their efficiency. For a small example this claim is proved.
> 
> 

References:

1. [No Free Lunch Theorems for Optimization](http://ti.arc.nasa.gov/m/profile/dhw/papers/78.pdf) (original **NFL** theorem for optimisation)
2. [Perhaps not a free lunch but at least a free appetizer](http://www.martinsewell.com/evolutionary-computation/nfl/DrJW98.pdf)
3. [The No Free Lunch and description length](http://www.cs.colostate.edu/%7Egenitor/2001/gecco2001.pdf) (shows that **NFL** results hold for any subset $F$ of the set of all possible functions iff $F$ is closed under permutation, **c.u.p.**)
4. [On classes of functions for which No Free Lunch results hold](http://image.diku.dk/igel/paper/OCoFfwNFLRH.pdf) (It is proven that the fraction of subsets that are **c.u.p.** is negligibly small)
5. [Two Broad Classes of Functions for which a No Free Lunch Result Does Not Hold](http://www.cs.cmu.edu/%7Ematts/Research/mstreeter_gecco_2003.pdf) (shows that a **NFL** result does not apply to a set of functions when the description length of the functions is sufficiently bounded)
6. [Continuous lunches are free plus the design of optimal optimization algorithms](https://hal.inria.fr/inria-00369788/document) (shows that for continuous domains, [official version of] **NFL** does not hold. This free-lunch theorem is based on the formalization of the concept of random fitness functions by means of random fields)
7. [Beyond No Free Lunch: Realistic Algorithms for Arbitrary Problem Classes](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.172.6290&rep=rep1&type=pdf) (shows that "..[a]ll violations of the No Free Lunch theorems can be expressed as non-block-uniform distributions over problem subsets that are **c.u.p.**")
8. [Swarm-Based Metaheuristic Algorithms and No-Free-Lunch Theorems](http://cdn.intechopen.com/pdfs/32858.pdf) ("[..t]herefore, results for non-revisiting time-ordered iterations may not be true for the cases of revisiting cases, because the revisiting iterations break an important assumption of **c.u.p.** required for proving the NFL theorems (Marshall and Hinton, 2010)")
9. [No Free Lunch and Algorithmic Randomness](http://www.cs.bham.ac.uk/%7Ewbl/biblio/gecco2006etc/papers/lbp124.pdf)
10. [No Free Lunch and Benchmarks](http://duenez.evolicious.org/PDFs/Duenez2012a.pdf) (a set-theoretic approach; it is generalised to criteria not specific to **c.u.p.**, but still notes that (non-trivial) randomised algorithms can outperform deterministic algorithms on average, "[..]it has been demonstrated that probability is inadequate to affirm unconstrained **NFL** results in the general case. [..]this paper abandons probability, preferring a set-theoretic framework which obviates measure-theoretic limitations by dispensing with probability altogether")

A summary of no-free-lunches (and free lunches) by David H. Wolpert, [What does dinner cost?](http://www.no-free-lunch.org/coev.pdf) (**note** that *NFL-type* theorems never specify an actual "*price*" due to their type of proof), specifically for generalised optimisation (GO):

> 
> 1. Two spaces $X$ and $Z$. E.g., $X$ is inputs, $Z$ is distributions over outputs.
> 2. Fitness Function $f: X \to Z$
> 3. $m$ (perhaps repeated) sampled points of $f$: $$d\_m = \{d\_m(1), d\_m(2), ..., d\_m(m)\}$$ where $\forall t$,
> $$d\_m(t) =\{d^X\_m(t),d^Z\_m(t)\}$$
> each $d^Z\_m(t)$ a (perhaps stochastic) function of $f[d^X\_m(t)]$
> 4. Search algorithm $a = \{d\_t \to d^X\_m(t) : t=0..m\}$
> 5. Euclidean vector-valued Cost function $C(f, d\_m)$
> 6. To capture a particular type of optimization problem, much of the problem structure is expressed in $C(., .)$
> 
> 
> **NFL theorems depend crucially on having $C$ be independent of $f$. If $C$ depends on $f$, free lunches may be possible. E.g., have $C$ independent of $(f, d\_m)$, unless $f = f^\*$.**
> 
> 

Finally, a simple (and a not-so-simple) remark on why **randomisation** (in one form or another) may provide superior performance over strictly deterministic algorithms.

1. In the context of *optimisation* (although not restricted to this), a randomised search procedure can **on the average** *escape* local extrema better than deterministic search, and reach global extrema.
2. There is an interesting (but also not simple at a first glance) relation between ordering, cardinality and randomisation of a set (in the general sense). The powerset $2^A$ of a set $A$ (and its cardinality) intrinsically depends on a certain (statically) fixed ordering of the (elements of the) set $A$. Assuming the ordering on (elements of) $A$ is not (statically) fixed (randomisation can enter here, in the form of random ordering), the set $A$ may be able to represent its own powerset (if it helps, think of it as a kind of *quantum analog* of a classic set, where dynamic ordering plays such a role as to account for a kind of *superposition principle*).
For some time now I have been studying both support vector machines and neural networks and I understand the logic behind each of these techniques. Very briefly described: * In a support vector machine, using the kernel-trick, you "send" the data into a higher dimensional space where it can be linearly separable. * In a neural network you perform a series of linear combinations mixed with (usually) non linear activation functions across several layers. So far I have seen that neural networks tend to provide the best predictive results among machine learning alternatives. Of course, compared with other more classical tools like multivariate regression, they have some drawbacks, like providing little (if any) interpretability of the variables, while in regression the interpretability of the variables is immediate. My question is: Neural networks seem to provide better predictive results than support vector machines, and both provide the same amount of interpretability (which is *none*). Is there any situation in which using a support vector machine would be better than using a neural network?
SVM is interesting if you have a kernel in mind that you *know* is appropriate, or a domain-specific kernel that would be difficult to express in a differentiable way (a common example might be a string-similarity space for DNA sequences). But what if you have no idea what kind of kernel you should use? What if your data is a wide collection of values and you're not even sure in advance which ones have relevance? You could spend human researcher time doing feature engineering, or you could try automatic kernel search methods, which are pretty expensive, but might even come up with something that could be considered interpretable, on a good day. Or you could dump the whole thing into a DNN and train. What a neural net does through backprop and gradient descent could very well be considered to be *learning a kernel*, only instead of having a nice functional form, it's composed (literally) of a large number of applications of a basic nonlinearity, with some additions and multiplications thrown in. The next-to-last layer of a typical classification network is the result of this — it's a projection into a space with one dimension per neuron in that layer, where the categories are well-separated, and then the final result (ignoring the softmax, which is really just a kind of normalization) is an affine map of that space into one where the categories are axis-aligned, so the surfaces of separation come for free with the geometry (but we could send them backwards onto that second-to-last layer if we wanted). The DNN classifier accomplishes something very similar to an SVM classifier, only it does it in a "dumb" way using gradient descent and repetition of simple differentiable units. But sometimes in computation, "dumb" has its advantages. Ease of application to GPUs (which love applying the same simple operation to a large number of data points in parallel) is one. The ability of SGD and minibatch gradient descent to scale up to *very large* numbers of examples with minimal loss of efficiency is another. Of course, it comes with its own downsides. If you make the wrong choices of NN architecture, initial weights, optimization method, learning rate, batch size, etc. then the stochastic training process might completely fail to converge, or take a million years to do so — whereas SVM training is basically deterministic. (Forgive an amateur blundering around, oversimplifying, and abusing terminology; these are my personal experiences after 15 years or so of playing with this stuff on an occasional hobby level).
I want to prove the nondeterministic space hierarchy theorem. Let $f(n),g(n)\geq\log n$ be space constructible functions such that $f(n)=o(g(n))$, **Prove**: $$NSPACE(f(n))\subsetneq NSPACE(g(n))$$ I feel that the standard approach of constructing a TM that takes a TM as input, simulates that machine on its own encoding, and then flips the output won't work here, perhaps because the input TM may be nondeterministic. Can someone suggest a hint?
Here is an abstract definition of the concepts NP-hard and NP-complete. If $A,B$ are two decision problems, say that $A \leq B$ if there is a polytime reduction from $A$ to $B$, that is, if there exists a polytime function $f$ such that $x \in A$ iff $f(x) \in B$. Then: $$ \begin{align\*} &\mathsf{NP\text{-}hard} = \{ B : A \leq B \text{ for all } A \in \mathsf{NP} \}, \\ &\mathsf{NP\text{-}complete} = \mathsf{NP\text{-}hard} \cap \mathsf{NP}. \end{align\*} $$ Here is a different example. Consider the universe of all subsets of $\mathbb{N}$, ordered under $A \leq B$ if $A \subseteq B$. Let $\mathsf{X}$ consist of all subsets of some fixed set $X$. Define $\mathsf{X\text{-}hard}$ and $\mathsf{X\text{-}complete}$ just as above. You can check that $$ \begin{align\*} &\mathsf{X\text{-}hard} = \{ S : S \supseteq X \}, \\ &\mathsf{X\text{-}complete} = \{ X \}. \end{align\*} $$ In a similar way, NP-hardness is a lower bound on the difficulty, whereas NP-completeness is both a lower bound and an upper bound. In contrast to the second example above, there are many NP-complete problems, all of them having the same level of difficulty (two problems $A,B$ have the same level of difficulty if $A \leq B \leq A$). Finally, here is a concrete problem which is NP-hard but not NP-complete: the halting problem. **The halting problem is NP-hard.** Let $A$ be any computable decision problem, say computed by a Turing machine $M$ which always halts. Let $M'$ be the Turing machine which simulates $M$, halts if $M$ accepts, and runs into an infinite loop if $M$ rejects. Define a polytime function $f$ by $f(x) = (\langle M' \rangle,x)$. Then $x \in A$ iff $M$ accepts $x$ iff $M'$ halts on $x$ iff $f(x) \in \mathsf{HALT}$. **The halting problem is not in NP.** The halting problem is not computable, and in particular not in NP.
I've recently been working on the following problem: Let $F = \{F\_1, F\_2, F\_3\}$ denote a set of *feature sets*. For example, $F\_1$ comprises 100 actual features. Before training a logistic regression classifier, I re-weight the *individual* feature sets (re-scaled between 0 and 1) as follows: $$F\_x = w\_x \* F\_x,$$ i.e., by multiplying each feature set with a weight between 0 and 1. My question is the following: Assuming I get (for the aforementioned example) weights: $\mathcal{W} = \{w\_1 = 0.2, w\_2 = 0.9, w\_3 = 0.000004\}$ (the index corresponds to an individual feature set from $F$). Can I interpret this along the lines of: "The second feature set (2) contributes the most to the learning" etc.? In other words: how sensitive is logistic regression to such changes in feature values? Thanks!
This depends on whether you are talking about logistic regression or penalized (e.g. LASSO, ridge or elastic net) logistic regression. * In standard logistic regression re-scaling features in any way doesn't really make any difference to the maximum likelihood estimate (obviously, coefficients will correspondingly end up being re-scaled), unless you multiply the feature by 0 (which just destroys all information). I.e. if before re-scaling the log-odds for an event increase by 0.5 for every year of age and you multiply age by 0.1, then the new coefficient will be 5 (i.e. for every 10 years of age the log-odds go up by 5). However, re-scaling coefficients can cause or fix numerical issues in fitting a logistic regression model. * In penalized logistic regression, re-scaling features matters, because (as explained above) the absolute size of the maximum likelihood estimates changes and so features that were re-scaled with small factors get penalized more and those re-scaled with large factors get penalized less. Here, "penalized more" means that the regression coefficients will be shrunk from the maximum likelihood estimate relatively more towards zero than for the coefficients for other features. * The second bullet is why, if one has no particular reason to favor some features over others, it is common to standardize (i.e. center on zero and fix SD to 1) features before applying penalized logistic regression. The kind of approach you describe can be applied thereafter to do something like expressing a prior belief about which coefficients are more likely to be truly further away from zero. If done without standardizing first, I'm not really sure why I would do it.
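To illustrate the second and third bullets, here is a small sketch of my own (using scikit-learn's default L2-penalized logistic regression; the data and the scaling factor are made up). Re-scaling one feature by an arbitrary constant changes how strongly it is effectively penalized, unless the features are standardized first:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy data; then shrink feature 0 by an arbitrary constant factor (a "re-weighting")
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_rescaled = X.copy()
X_rescaled[:, 0] *= 1e-3

# Penalized (here L2) logistic regression without standardization:
# the shrunken feature is effectively penalized more than the others.
raw = LogisticRegression(C=1.0).fit(X_rescaled, y)

# Standardizing first removes the effect of the arbitrary re-scaling.
std = make_pipeline(StandardScaler(), LogisticRegression(C=1.0)).fit(X_rescaled, y)

print("coefficients, no standardization:  ", raw.coef_.round(3))
print("coefficients, with standardization:", std[-1].coef_.round(3))
```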
I am trying to understand the connection between *the W-hierarchy* as presented in chapter 13 of [this book](http://parameterized-algorithms.mimuw.edu.pl/parameterized-algorithms.pdf) by Cygan et al. and the notion of *NP* problems. Does the existence of an FPT algorithm for a problem in *W[1]* suggest that *P=NP*? Why? For example, assume I have an FPT algorithm for the [*k-Clique problem*](https://en.wikipedia.org/wiki/Clique_problem). Can I prove that *P=NP*? The algorithm's running time will still be exponential; only now it will depend on $k$.
The correct answer is no. 3. Suppose $A \in \text{NP}$ and $A \notin \text{co-NP}$. Clearly this shows $\text{NP} \neq \text{coNP}$, but that's not a possible choice for this question. Observe that complement of the machine output can be trivially implemented in a deterministic polynomial machine. That is to say, $\text{P} = \text{co-P}$. Thus, if $\text{P} = \text{NP}$, then $P = \text{co-NP}$, which together imply $\text{NP} = \text{co-NP}$. This cannot be the case by assumption, so $\text{P} \neq \text{NP}$.
If you could include your thought process in determining why it's regular it would help me a lot. * $L\_1 = (0^\*(10)^\*11)$ * $L\_2 = \{ \langle M \rangle \mid M \text{ is a Turing machine that halts on all inputs from }L\_1 \}$ * $L\_3 = \{ x \in \{0,1\}^\* \mid \exists y \in L\_2. xy \in L\_1 \}$ Why is $L\_3$ regular? It's a set of strings, I need to determine if there's a DFA that can accept it. Do I even care about $L\_2$ and $L\_1$ in this case?
**Short answer**. The language $L\_1$ is clearly regular. Further, $L\_3 = L\_1L\_2^{-1}$ is a quotient of $L\_1$ and hence it is regular since the quotient of a regular language by any language (regular or not) is regular. So you don't care about $L\_2$, but you do care about $L\_1$. **More details**. Let $L$ and $R$ be languages. Then $LR^{-1} = \{ u \in A^\* \mid ur \in L \text{ for some }r \in R\}$. Therefore $LR^{-1} = \bigcup\_{r \in R} Lr^{-1}$. If $L$ is regular, there are only finitely many languages of the form $Lr^{-1}$, each of which is regular (this is the dual form of Nerode's lemma) and hence $LR^{-1}$ is a finite union of regular languages and thus it is regular.
Is this equation: $$\log{(y)} = a + bx$$ a semi-log or a log-linear model (or is it the same thing)? I have two models: linear (1) and semi-log (2). The values of $R^{2}$, adjusted $R^{2}$, and Standard Error are:

* Linear: $R^{2} = 0.6780,~\mathrm{adj.}~R^{2} = 0.6513,~~\mathrm{SE}=94.101$
* Semi-log: $R^{2} = 0.5803,~\mathrm{adj.}~R^{2} = 0.5455,~~\mathrm{SE}=0.5493$

How should I interpret these values, especially for the second model?
This is an answer to the first part of the question regarding the description of the model: $$\log{(y)} = a + bx.......(1)$$ It is important to distinguish: i) whether a model is linear in the sense of the [Classical Linear Regression Model](http://en.wikibooks.org/wiki/Econometric_Theory/Assumptions_of_Classical_Linear_Regression_Model#Linearity) (CLRM), and ii) whether a model has linear functional form. Model (1) is linear in the first sense because it is linear in the parameters $a$ and $b$, and this is not affected by the log of $y$. Similarly, models (2), (3) and (4) below are all linear in the CLRM sense: $$y = a + bx.......(2)$$ $$y = a + b\*log(x).......(3)$$ $$log(y) = a + b\*log(x).......(4)$$ However, of the above models only Model (2) has linear functional form. Models (1) and (3) could both be said to have semi-log functional form, although it is better I suggest to be more precise and indicate which variable is logged by describing (1) as semi-log (dependent) and (3) as semi-log (independent). The functional form of Model (4) is sometimes described as log-linear and sometimes as double log.
A recent question on the difference between confidence and credible intervals led me to start re-reading Edwin Jaynes' article on that topic: Jaynes, E. T., 1976. `Confidence Intervals vs Bayesian Intervals,' in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, W. L. Harper and C. A. Hooker (eds.), D. Reidel, Dordrecht, p. 175; ([pdf](http://bayes.wustl.edu/etj/articles/confidence.pdf)) In the abstract, Jaynes writes: > > ...we exhibit the Bayesian and orthodox solutions to six common statistical problems involving confidence intervals (including significance tests based on the same reasoning). In every case, we find the situation is exactly the opposite, i.e. the Bayesian method is easier to apply and yields the same or better results. Indeed, the orthodox results are satisfactory only when they agree closely (or exactly) with the Bayesian results. **No contrary example has yet been produced.** > > > (emphasis mine) The paper was published in 1976, so perhaps things have moved on. My question is, are there examples where the frequentist confidence interval is clearly superior to the Bayesian credible interval (as per the challenge implicitly made by Jaynes)? Examples based on incorrect prior assumptions are not acceptable as they say nothing about the internal consistency of the different approaches.
I said earlier that I would have a go at answering the question, so here goes... Jaynes was being a little naughty in his paper in that a frequentist confidence interval isn't defined as an interval where we might expect the true value of the statistic to lie with high (specified) probability, so it isn't unduly surprising that contradictions arise if they are interpreted as if they were. The problem is that this is often the way confidence intervals are used in practice, as an interval highly likely to contain the true value (given what we can infer from our sample of data) is what we often want. The key issue for me is that when a question is posed, it is best to have a direct answer to that question. Whether Bayesian credible intervals are worse than frequentist confidence intervals depends on what question was actually asked. If the question asked was: (a) "Give me an interval where the true value of the statistic lies with probability p", then it appears a frequentist cannot actually answer that question directly (and this introduces the kind of problems that Jaynes discusses in his paper), but a Bayesian can, which is why a Bayesian credible interval is superior to the frequentist confidence interval in the examples given by Jaynes. But this is only because it is the "wrong question" for the frequentist. (b) "Give me an interval where, were the experiment repeated a large number of times, the true value of the statistic would lie within p\*100% of such intervals" then the frequentist answer is just what you want. The Bayesian may also be able to give a direct answer to this question (although it may not simply be the obvious credible interval). Whuber's comment on the question suggests this is the case. So essentially, it is a matter of correctly specifying the question and properly interpreting the answer. If you want to ask question (a) then use a Bayesian credible interval, if you want to ask question (b) then use a frequentist confidence interval.
We defined the diagonal language as follows in the lecture: \begin{align\*} L\_{\text{diag}}=\left\{w \in \left\{0, 1\right\} ^{\*}\mid w=w\_{i} \text{ for some }i \in \mathbb{N} \text{ and }M\_{i} \text{ does not accept }w\_{i}\right\} .\end{align\*} Here, $w\_i$ is the $i$-th word in canonical order and $M\_i$ is the $i$-th Turing machine, canonically ordered by their encodings in binary. Now I asked myself whether this language could be empty, i.e., if every $M\_i$ accepted the word $w\_i$. Could this happen, depending on how we define the encoding of a Turing machine?
Here is a simple direct proof that $L\_{\text{diag}}$ is not empty. Let $N$ be a Turing machine that does not accept any word. For example, a Turing machine that just loops forever. Suppose $N$ is encoded as the $j$-th machine. Then $L\_{\text{diag}}$ contains the word $w\_j$. Hence $L$ is not empty.
The Likert scale I am analyzing has both positive and negative questions, but all the items are answered with 4 - strongly agree, 3 - agree, 2 - disagree and 1 - strongly disagree. For analysis, should I recode the answers so that they all point in the same direction? For example, if the statement is positive, then 4 must be given to strongly agree, and then for negative statements, a 4 should also be given to strongly disagree. What else can I do aside from Mean and Standard Deviation?
I would recommend reading Agresti's work (e.g., ["Analysis of Ordinal Categorical Data"](http://books.google.com/books?id=VVIe4BPDR7kC)) as a starting place for Likert scale data.
If I understand correctly a confidence interval of a parameter is an interval constructed by a *method* which yields intervals containing the true value for a specified proportion of samples. So the 'confidence' is about the method rather than the interval I compute from a particular sample. As a user of statistics I have always felt cheated by this since the space of all samples is hypothetical. All I have is one sample and I want to know what that sample tells me about a parameter. Is this judgement wrong? Are there ways of looking at confidence intervals, at least in some circumstances, which would be meaningful to users of statistics? [This question arises from second thoughts after dissing confidence intervals in a math.se answer <https://math.stackexchange.com/questions/7564/calculating-a-sample-size-based-on-a-confidence-level/7572#7572> ]
I like to think of CIs as some way to escape the Hypothesis Testing (HT) framework, at least the binary decision framework following [Neyman](http://j.mp/awJEkH)'s approach, and keep in line with theory of measurement in some way. More precisely, I view them as closer to the reliability of an estimate (a difference of means, for instance), whereas HT is closer to hypothetico-deductive reasoning, with its pitfalls (we cannot accept the null, the alternative is often stochastic, etc.). Still, with both interval estimation and HT we have to rely on distribution assumptions most of the time (e.g. a sampling distribution under $H\_0$), which allows us to make inferences from our sample to the general population or a representative one (at least in the frequentist approach). In many contexts, CIs are complementary to usual HT, and I view them as in the following picture (it is under $H\_0$):

![alt text](https://i.stack.imgur.com/9VSib.png)

that is, under the HT framework (left), you look at how far your statistic is from the null, while with CIs (right) you are looking at the null effect "from your statistic", in a certain sense. Also, note that for certain kinds of statistics, like the odds ratio, HT is often meaningless and it is better to look at the associated CI, which is asymmetrical and provides more relevant information as to the direction and precision of the association, if any.
I'm fairly new to statistics (a handful of beginner-level Uni courses) and was wondering about sampling from unknown distributions. Specifically, if you have no idea about the underlying distribution, is there any way to "guarantee" that you get a representative sample? Example to illustrate: say you're trying to figure out the global distribution of wealth. For any given individual, you can somehow find out their exact wealth; but you can't "sample" every single person on Earth. So, let's say you sample n = 1000 people at random. 1. If your sample didn't include Bill Gates, you might think there are no billionaires in existence. 2. If you sample did include Bill Gates, you might think billionaires are more common than they actually are. In either case, you can't really tell how common or rare billionaires are; you may not be even able to tell whether any exist at all. Does a better sampling mechanism exist for a case like this? How would you tell a priori what sampling procedure to use (and how many samples are needed)? It seems to me that you might have to "sample" a huge percentage of the population to know, with anything approaching reasonable certainty, how common or rare billionaires are on the planet, and that this is due to the underlying distribution being a bit difficult to work with.
I dispute your claim that "In either case, you can't really tell how common or rare billionaires are". Let $f$ be the unknown fraction of billionaires in the population. With a uniform prior on $f$, the posterior distribution of $f$ after $1000$ draws that turned out to have 0 billionaires is a Beta(1,1001) distribution, which looks like this: ![p(f|b=0)](https://i.stack.imgur.com/p8xeE.png) While the posterior distribution of $f$ after $1000$ draws that turned out to have 1 billionaire is a Beta(2,1000) distribution, which looks like this: ![p(f|b=1)](https://i.stack.imgur.com/T0GHz.png) In both cases, you can be quite certain that $f < 0.01$. You might think that isn't precise enough. But actually 0.01 is quite precise for a sample of size 1000. Most other quantities that you might estimate would be less precise than this. For example, the fraction of males could only be estimated within a range of size 0.1.
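For anyone who wants to reproduce these posterior statements numerically, here is a small sketch (my own, using SciPy) that computes the posterior probability that $f < 0.01$ and a 95% upper credible bound for both cases:

```python
from scipy.stats import beta

# Posterior over the billionaire fraction f, uniform prior, n = 1000 draws
post0 = beta(1, 1001)   # 0 billionaires observed: Beta(1, 1001)
post1 = beta(2, 1000)   # 1 billionaire observed:  Beta(2, 1000)

for label, post in [("0 observed", post0), ("1 observed", post1)]:
    print(label,
          " P(f < 0.01) =", round(post.cdf(0.01), 4),
          " 95% upper bound =", round(post.ppf(0.95), 5))
```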
There are numerous graph theoretic tools/packages. Each with its pros and cons. What should be the semantics/syntax of a programming language meant to solve graph theoretic problems?
Sazzad - I'm a bit frustrated with your responses. You have been given useful answers and suggestions, yet you remain unsatisfied. Either use the best-of-breed tools that are available on your system for several aspects of graph research, or write your own.
I am running a generalized linear model with Gamma distribution in R (glm, family=gamma) for my data (gene expression as response variable and few predictors). I want to calculate r-squared for this model. I have been reading about it online and found there are multiple formulas for calculating $R^2$ (psuedo) for glm (in R) with Gaussian (r2 from linear model), logistic regression (1-deviance/null deviance), Poisson distribution (using pR2 in the `pscl` package, D-squared value from the `modEvA` R package). But I could not find anything specific to Gamma distributions. Can `pscl` and `modEVA` packages be used for the Gamma distribution as well, or is there any other formula for doing the same?
I know I'm late to the party, but: the theory behind the data imbalance problem has been beautifully worked out by [Sugiyama (2000)](https://www.sciencedirect.com/science/article/abs/pii/S0378375800001154) and a huge number of highly cited papers following that, under the keyword "covariate shift adaptation". There is also a whole book devoted to this subject by Sugiyama / Kawanabe from 2012, called "Machine Learning in Non-Stationary Environments". For some reason, this branch of research is only rarely mentioned in discussions about learning from imbalanced datasets, possibly because people are unaware of it? The gist of it is this: **data imbalance is a problem if a) your model is misspecified, and b) you're *either* interested in good performance on a minority class *or* you're interested in *the model itself*.** The reason can be illustrated very simply: if the model does not describe reality correctly, it will minimize the deviation from the most frequently observed type of samples (figure taken from [Berk et al. (2018)](https://doi.org/10.1007/s10940-017-9348-7)): [![enter image description here](https://i.stack.imgur.com/6MSTv.png)](https://i.stack.imgur.com/6MSTv.png) I will try to give a very brief summary of the technical main idea of Sugiyama. Suppose your training data are drawn from a distribution $p\_{\mathrm{train}}(x)$, but you would like the model to perform well on data drawn from another distribution $p\_{\mathrm{target}}(x)$. This is what's called "covariate shift", and it can also simply mean that you would like the model to work equally well on all regions of the data space, i.e. $p\_{\mathrm{target}}(x)$ may be a uniform distribution. Then, instead of minimizing the expected loss over the training distribution $$ \theta^\* = \arg \min\_\theta E[\ell(x, \theta)]\_{p\_{\text{train}}} \approx \arg \min\_\theta \frac{1}{N}\sum\_{i=1}^N \ell(x\_i, \theta)$$ as one would usually do, one minimizes the expected loss over the target distribution: $$ \theta^\* = \arg \min\_\theta E[\ell(x, \theta)]\_{p\_{\text{target}}} \\ = \arg \min\_\theta E\left[\frac{p\_{\text{target}}(x)}{p\_{\text{train}}(x)}\ell(x, \theta)\right]\_{p\_{\text{train}}} \\ \approx \arg \min\_\theta \frac{1}{N}\sum\_{i=1}^N \underbrace{\frac{p\_{\text{target}}(x\_i)}{p\_{\text{train}}(x\_i)}}\_{=w\_i} \ell(x\_i, \theta)$$ In practice, this amounts to simply weighting individual samples by their importance $w\_i$. The key to practically implementing this is an efficient method for estimating the importance, which is generally nontrivial. This is one of the main topics of papers on this subject, and many methods can be found in the literature (keyword "Direct importance estimation"). All the oversampling / undersampling / SMOTE techniques people use are essentially just different hacks for implementing importance weighting, I believe.
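As a minimal illustration of the reweighting idea (my own sketch, for the simplest case where the density ratio reduces to a ratio of class frequencies and the target distribution is taken to be uniform over the classes; the data are made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Imbalanced training sample: 900 "majority" vs 100 "minority" points (toy data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (900, 2)), rng.normal(1.5, 1.0, (100, 2))])
y = np.array([0] * 900 + [1] * 100)

# w_i = p_target(x_i) / p_train(x_i); here the ratio is assumed to depend only on the
# class label, with a uniform target distribution over the two classes.
p_train = np.array([0.9, 0.1])
p_target = np.array([0.5, 0.5])
w = (p_target / p_train)[y]

# Minimize the importance-weighted loss instead of the plain average loss
clf = LogisticRegression().fit(X, y, sample_weight=w)
```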
I just started reading about theory of computation. If we compare which is more powerful (in accepting strings), both are the same. But what about efficiency? A DFA will be fast compared to an NFA, since each state has only one outgoing edge per symbol and there is no ambiguity. But in the case of an NFA we have to check all possible cases, and that surely takes time. So can we say a DFA is more efficient than an NFA? But another part of my brain is also thinking that NFAs exist only in theory, so we cannot compare their efficiency with DFAs.
In terms of power, they are equivalent as you said and there is an algorithm (subset construction) for converting an NFA to an equivalent DFA. As you might tell from the algorithm's name, it constructs subsets of states of the NFA. If your NFA has $n$ states, the algorithm may output a DFA with $2^n$ states but that's an upper bound. Sometimes, the number of states does not change at all or even reduces. So in practice, it matters less as to which one to use. DFA matching is linear in the size of the input string. NFA matching involves backtracking so NFAs do more work. Thus, DFAs are more efficient.
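For concreteness, here is a small sketch of the subset construction (my own code, for an NFA without $\varepsilon$-transitions). The example NFA has 3 states, so there are $2^3$ possible subsets, but only 4 of them are actually reached:

```python
from collections import deque

def nfa_to_dfa(nfa, start, accepting, alphabet):
    """Subset construction: `nfa` maps (state, symbol) -> set of successor states
    (no epsilon transitions in this sketch)."""
    start_set = frozenset([start])
    dfa_trans, dfa_accept = {}, set()
    seen, queue = {start_set}, deque([start_set])
    while queue:
        current = queue.popleft()
        if current & accepting:
            dfa_accept.add(current)
        for a in alphabet:
            nxt = frozenset(t for q in current for t in nfa.get((q, a), ()))
            dfa_trans[(current, a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return dfa_trans, start_set, dfa_accept

# NFA (3 states) for strings over {a, b} whose second-to-last symbol is 'a'
nfa = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'a'): {2}, (1, 'b'): {2}}
trans, start, accept = nfa_to_dfa(nfa, 0, accepting={2}, alphabet='ab')
print(len({s for s, _ in trans}), "DFA states reached (out of 2^3 = 8 possible subsets)")
```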
Suppose we are building/testing a fraud detection model for a specific credit card or quick-cash-loan business. We have a lot of data to play with (say, the past 5 years), and after careful preprocessing, model selection, and parameter tuning, we build a good model to detect/prevent fraud. We thought we did a superb job. However, as we build our model, con artists are developing their own anti-fraud-detection systems/methodologies, and soon enough the behavior pattern of fraud becomes completely different. The model we built before becomes useless, and we need to build new models again... I have very limited working experience in building fraud detection models. My question is whether there are any machine-learning models (or combinations of models) that can self-evolve, detect this change in behavior, and quickly capture the new pattern and adapt. Or are there any academic/practical resources regarding this kind of self-evolving AI/machine learning? Thank you.
The behavior that you describe is called [concept drift](https://en.wikipedia.org/wiki/Concept_drift). Mostly, concept drift is due to a natural change in the underlying data. In your case, you are in a game against the fraudsters, so you might want to be even more strict. In order to know whether your model still performs well, whether the change is due to a shift in the fraud patterns or a natural change, most methods are based on checking it from time to time. If there is a change, you can relearn if you have enough information. If you don't have enough data (very likely) or if you want to benefit from the previous methods, techniques designed for [domain adaptation](https://en.wikipedia.org/wiki/Domain_adaptation) are very useful. In case you want to be very strict in the game, you can assume that your rival has unlimited computational power. Learning in this setting is described at

> 
> [Learning in the Presence of Malicious Errors](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.3548&rank=1) by Michael Kearns, Ming Li
> 
> 
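As a toy illustration of the "check it from time to time" idea (my own sketch, not a specific published method): evaluate the deployed model on each new labelled batch and raise an alarm when performance degrades past a tolerance, which is the point at which you would investigate and possibly relearn.

```python
from sklearn.metrics import roc_auc_score

def drift_alarm(model, batches, tolerance=0.05):
    """Yield (batch index, AUC) whenever the deployed model's AUC drops more than
    `tolerance` below its AUC on the first labelled batch (a crude drift signal)."""
    baseline = None
    for t, (X, y) in enumerate(batches):       # labelled batches arriving over time
        auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
        if baseline is None:
            baseline = auc
        elif auc < baseline - tolerance:
            yield t, auc                       # time to investigate and possibly relearn
```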
This is a restatement of an [earlier question](https://cstheory.stackexchange.com/questions/9507/permutation-game). Consider the following [impartial](http://en.wikipedia.org/wiki/Impartial_game) [perfect information](http://en.wikipedia.org/wiki/Perfect_information) game between two players, Alice and Bob. The players are given a permutation of the integers 1 through n. At each turn, if the current permutation is increasing, the current player loses and the other player wins; otherwise, the current player removes one of the numbers, and play passes to the other player. Alice plays first. For example: * (1,2,3,4) — Bob wins immediately, by definition. * (4,3,2,1) — Alice wins after three turns, no matter how anyone plays. * (2,4,1,3) — Bob can win on his first turn, no matter how Alice plays. * (1,3,2,4) — Alice wins immediately by removing the 2 or the 3; otherwise, Bob can win on his first turn by removing the 2 or the 3. * (1,4,3,2) — Alice *eventually* wins if she takes the 1 on her first turn; otherwise, Bob can win on his first turn by *not* removing the 1. Is there a polynomial-time algorithm to determine which player wins this game from a given starting permutation, assuming [perfect play](http://en.wikipedia.org/wiki/Solved_game#Perfect_play)? More generally, because this is a standard impartial game, every permutation has a [Sprague–Grundy value](http://en.wikipedia.org/wiki/Sprague%E2%80%93Grundy_theorem); for example, (1,2,4,3) has value [\*1](http://en.wikipedia.org/wiki/Star_%28game_theory%29) and (1,3,2) has value \*2. How hard is it to compute this value? The obvious backtracking algorithm runs in O(n!) time, although this can be reduced to $O(2^n poly(n))$ time via dynamic programming.
The "permutation game" is isomorphic to the following game: > > **Disconnect.** Players alternately remove vertices from a graph $G$. The player that produces a fully disconnected graph (i.e., a graph with no edges) is the winner. > > > The graph $G\_{\pi}$ corresponding to a particular initial permutation $\pi\in S\_n$ contains just those edges $(i,j)$ for which $i-j$ and $\pi(i)-\pi(j)$ have opposite signs. That is, each pair of numbers in the *wrong* order in the permutation is associated with an edge. Clearly the allowed moves are isomorphic to those in the permutation game (remove a number = remove a node), and the winning conditions are isomorphic as well (no pairs in descending order = no edges remaining). A complementary view is obtained by considering playing a "dual" game on the graph complement $G^{c}\_\pi = G\_{R(\pi)}$, which contains those edges $(i,j)$ for which $i$ and $j$ are in the *correct* order in the permutation. The dual game to Disconnect is: > > **Reconnect.** Players alternately remove vertices from a graph $G$. The player that produces a complete graph is the winner. > > > Depending on the particular permutation, one of these games may seem simpler than the other to analyze. The advantage of graph representation is that it is clear that disconnected components of the graph are separate games, and so one hopes for some reduction in complexity. It also makes the symmetries of the position more apparent. Unfortunately, the winning conditions are non-standard... the permutation game will always end before all moves are used up, giving it something of a *misère* character. In particular, the nim-value cannot be calculated as the nim-sum (binary XOR) of the nim-values of the disconnected components. --- For Disconnect, it is not hard to see that for any graph $G$ and any even $n$, the game $G \cup \bar{K}\_n$ is equivalent to $G$ (where $\bar{K}\_n$ is the edgeless graph on $n$ vertices). To prove it, we need to show that the disjunctive sum $G + G\cup\bar{K}\_n$ is a second-player win. The proof is by induction on $|G|+n$. If $G$ is edgeless, then the first player loses immediately (both games are over). Otherwise, the first player can move in either $G$, and the second player can copy his move in the other one (reducing to $G' + G'\cup \bar{K\_n}$ with $|G'|=|G|-1$); or, if $n\ge 2$, the first player can move in the disconnected piece, and the second player can do the same (reducing to $G + G\cup\bar{K}\_{n-2}$). This shows that any graph $G$ is equivalent to $H \cup K\_p$, where $H$ is the part of $G$ with no disconnected vertices, and $p=0$ or $1$ is the *parity* of the number of disconnected vertices in $G$. All games in an equivalence class have the same nim-value, and moreover, the equivalence relation respects the union operation: if $G \sim H \cup K\_p$ and $G' \sim H' \cup K\_{p'}$ then $G \cup G' \sim (H \cup H')\cup K\_{p\oplus p'}$. Moreover, one can see that the games in $[H \cup K\_0]$ and $[H \cup K\_1]$ have different nim-values unless $H$ is the null graph: when playing $H + H \cup K\_1$, the first player can take the isolated vertex, leaving $H+H$, and then copy the second player's moves thereafter. I do not know any related decomposition results for Reconnect. --- Two special types of permutations correspond to particularly simple heap games. 1. The first is an *ascending run of descents*, e.g., $32165487$. 
When $\pi$ takes this form, the graph $G\_{\pi}$ is a union of disjoint cliques, and the game of Disconnect reduces to a game on heaps: players alternately remove a single bean from a heap *until all heaps have size $1$*. 2. The second is a *descending run of ascents*, e.g., $78456123$. When $\pi$ takes this form, the graph $G^{c}\_{\pi}$ is a union of disjoint cliques, and the game of Reconnect reduces to a game on heaps: players alternately remove a single bean from a heap *until there is only one heap left*. A little thought shows that these two different games on heaps (we can call them **1-Heaps** and **One-Heap**, at some risk of confusion) are, in fact, themselves isomorphic. Both can be represented by a game on a Young diagram (as initially proposed by @domotorp) in which players alternate removing a lower-right square until only a single row is left. This is obviously the same game as 1-Heaps when columns correspond to heaps, and the same game as One-Heap when rows correspond to heaps. A key element of this game, which extends to Disconnect and Reconnect, is that the duration is related to the final game state in a simple way. When it is your turn, you will win if the game has an odd number of moves remaining, including the one you're about to make. Since a single square is removed each move, this means you want the number of squares remaining at the end of the game to have the opposite parity that it has now. Moreover, the number of squares will have the same parity on all of your turns; so you know from the outset what parity you want the final count to have. We can call the two players Eve and Otto, according to whether the final count must be even or odd for them to win. Eve always moves in states with odd parity and produces states with even parity, and Otto is the opposite. In his answer, @PeterShor gives a complete analysis of One-Heap. Without repeating the proof, the upshot is the following: * Otto likes $1$-heaps and $2$-heaps, and can tolerate a single larger heap. He wins if he can make all heap sizes except one $\le 2$, at least without giving Eve an immediate win of the form $(1,n)$. An optimal strategy for Otto is to always take from the second-largest heap except when the state is $(1,1,n>1)$, when he should take from the $n$. Otto will lose if there are too many beans in big heaps to start with. * Eve dislikes $1$-heaps. She wins if she can make all heap sizes $\ge 2$. An optimal strategy for Eve is to always take from a $1$-heap, if there are any, and never take from a $2$-heap. Eve will lose if there are too many $1$-heaps to start with. As noted, this gives optimal strategies for 1-Heaps as well, although they are somewhat more awkward to phrase (and I may well be making an error in primary-to-dual "translation"). In the game of 1-Heaps: * Otto likes one or two large heaps, and can tolerate any number of $1$-heaps. He wins if he can make all but the two largest heaps be $1$-heaps, at least without giving Eve an immediate win of the form $(1,1,\dots,1,2)$. An optimal strategy for Otto is to always take from the third-largest heap, or from the smaller heap when there are only two heaps. * Eve dislikes a gap between the largest and second-largest heaps. She wins if she can make the two largest heaps the same size. An optimal strategy for Eve is to always take from the largest heap, if it is unique, and never if there are exactly two of the largest size. 
As @PeterShor notes, it isn't clear how (or if) these analyses could be extended to the more general games of Disconnect and Reconnect.
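If you want to experiment with small cases (for instance, to check the heap analyses above against brute force), here is a sketch of the memoized $O(2^n \cdot \mathrm{poly}(n))$ Sprague–Grundy search mentioned in the question; the printed values match the examples given there.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(perm):
    """Sprague-Grundy value of the permutation game on `perm` (a tuple of distinct numbers)."""
    if all(a < b for a, b in zip(perm, perm[1:])):
        return 0                                   # increasing: the player to move loses
    options = {grundy(perm[:i] + perm[i + 1:]) for i in range(len(perm))}
    return next(v for v in range(len(options) + 1) if v not in options)   # mex

print(grundy((1, 2, 4, 3)))           # 1, i.e. *1
print(grundy((1, 3, 2)))              # 2, i.e. *2
print(grundy((2, 4, 1, 3)) == 0)      # True: Bob wins from (2,4,1,3)
print(grundy((4, 3, 2, 1)) != 0)      # True: Alice wins from (4,3,2,1)
```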
I know that Symbolic Model Checking is state space traversal based on representations of states sets and transition relations as formulas like in CTL using models like Kripke Model. I know the theory. But I'm finding it hard to understand the actual application. Where exactly is it used? What exactly does it do and how does it work? Can someone explain with a real example and relate theory to practice?
Symbolic Model Checking is Model Checking that works on symbolic states. That is, they encode the states into symbolic representations, typically Ordered Binary Decision Diagrams (OBDDs). The question is what do they do and how do they work. You first have your source code for some application. You then transform your source code into some state-transition graph like a Kripke Structure. The states are filled with atomic propositions which describe what is true in that particular state. In Symbolic Model Checking the atomic propositions are encoded as OBDDs to save on space and improve performance. The Model Checker then starts at some initial state, and explores the states, looking for errors in the state-transition graph. If it finds an error it will often generate a test case demonstrating the error. It uses the symbolic OBDDs to somewhat optimally navigate the state space. Wish I could explain more there but still learning. But that's basically it. You have a program converted into a formal model (state-transition graph), and then you use symbolic optimizations to navigate the state space to look for errors (by comparing it against an LTL/CTL specification). And if an error is found, the Model Checker gives you some stuff to help document and solve it.
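To make the "explore the states looking for errors" part concrete, here is a toy sketch of my own (not taken from any particular tool) of the reachability fixed-point at the core of model checking. A real symbolic model checker represents the state sets below as OBDDs and computes the image with relational operations on them, but the shape of the loop is the same.

```python
# Toy stand-in: plain Python sets play the role that OBDDs play in a real symbolic
# model checker, so only the shape of the reachability fixed-point is illustrated.
def bad_state_reachable(initial, transitions, bad):
    """True iff some state in `bad` is reachable from `initial`.
    `transitions` maps a state to the set of its successors."""
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        if reached & bad:
            return True            # property violated; a real tool would emit a trace
        # image of the frontier under the transition relation, minus what we have seen
        frontier = {t for s in frontier for t in transitions.get(s, ())} - reached
        reached |= frontier
    return bool(reached & bad)

# Tiny Kripke-style example: 4 states, error state 3 is unreachable from state 0
trans = {0: {1}, 1: {0, 2}, 2: {1}, 3: {3}}
print(bad_state_reachable({0}, trans, bad={3}))   # False
```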
Let $p(n)$ denote the number of [partitions](https://en.wikipedia.org/wiki/Partition_(number_theory)#Algorithm) of $n\in\mathbb{N}$ (briefly, number of ways to split a pile of $n$ stones into $\geq1$ unordered nonempty parts). The classical dynamic programming algorithm to find $p(n)$ is to construct a square table $A$ where $$A\_{i,j} = \text{partitions of $i$ where each part is $\leq j$}$$ and recursively fill it using the rules $$A\_{i,j} = A\_{i,j-1} \text{ if }j>i$$ and $$A\_{i,j} = A\_{i,j-1} + A\_{i-j,j}\text{ otherwise}$$ with the appropriate conventions for the corner cases. Then $p(n)=A\_{n,n}$. This takes $O(n^2)$ operations on integers (let's say we're looking for $p(n)$ modulo some big number, so the size of the numbers is $O(1)$). We can optimize the memory down to $O(n)$ by noticing that we only need the previous column to find the next one. *However*, the [pentagonal number theorem](https://en.wikipedia.org/wiki/Pentagonal_number_theorem) due to Euler says that $$p(n) = \sum\_{k\in\mathbb{Z}:g\_k\leq n}(-1)^{k+1}p(n-g\_k)$$ where $g\_k = k(3k-1)/2$ are the **pentagonal numbers**. This allows us to recursively build a one-dimensional list of the $p(i)$, using only $O(n\sqrt{n})$ operations, since the above sum contains $O(\sqrt{n})$ terms. **My question:** I was wondering if there are other examples of natural problems for which there is such a surprising speed-up over the standard dynamic programming algorithm, or if this is more of an isolated case complexity-wise.
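For concreteness, here is a sketch of the $O(n\sqrt{n})$ pentagonal-number recurrence described above (my own code; `partition_counts` is just an illustrative name):

```python
def partition_counts(n, mod=10**9 + 7):
    """p(0), ..., p(n) modulo `mod` via Euler's pentagonal number theorem; O(n*sqrt(n)) additions."""
    # generalised pentagonal numbers g_k for k = 1, -1, 2, -2, ... in increasing order,
    # together with the sign (-1)^(k+1) of the corresponding term
    pent, sign = [], []
    k = 1
    while k * (3 * k - 1) // 2 <= n:
        pent += [k * (3 * k - 1) // 2, k * (3 * k + 1) // 2]
        sign += [1, 1] if k % 2 == 1 else [-1, -1]
        k += 1
    p = [0] * (n + 1)
    p[0] = 1
    for m in range(1, n + 1):
        total = 0
        for g, s in zip(pent, sign):
            if g > m:
                break
            total += s * p[m - g]
        p[m] = total % mod
    return p

print(partition_counts(100)[100])   # 190569292, the well-known value of p(100)
```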
I am not sure whether the question below, which I asked several years ago, is a suitable example: <https://mathoverflow.net/questions/153475> If we use naive dynamic programming, the complexity will be $\Omega(k n^2)$ (I am not sure whether it is $\Theta(k n^2)$ or $\Theta(k n^3)$). Whereas, using the fact mentioned in the post (i.e. the linear recurrence relation), we can solve it in $\mathcal{O}(k^2 \log k \log n)$ time, since the order of the recurrence is $\mathcal{O}(k^2)$ and computing the $n$-th term of a linear recurrence sequence of order $m$ can be achieved in $\mathcal{O}(m \log m \log n)$ time.
When we study correlation or association in real data, do we always (implicitly) assume a finite second order moment for any hypothetical population distribution? If we do not assume this, what statistical measure may we use? I am not sure if it makes sense to study correlation (or association in general) without assuming a finite second order moment for a generating probabilistic model. In the generalized version of Pearson correlation ([wikipedia general correlation](https://en.wikipedia.org/wiki/Rank_correlation#General_correlation_coefficient)), the norms in the denominator of the expression should implicitly assume a finite second order moment. I ask this question simply to keep track of what we do and do not assume when we look into association in datasets. I think that association does not make sense without assuming a finite second order moment for any population distribution. However, maybe there is a nonparametric measure that I am not familiar with. Since both Spearman's and Kendall's correlation coefficients can be expressed in the form in [1](https://en.wikipedia.org/wiki/Rank_correlation#General_correlation_coefficient), I think this implicit assumption of a finite second order moment applies to these two association measures also. Any insights would be greatly appreciated! This Stack Exchange question seems relevant: [PCA without finite second moment](https://stats.stackexchange.com/questions/304704/pca-without-finite-second-moment). However, as far as I can tell, it does not definitively answer the question as I understand it.
You could use an ordinal logistic regression; one example is here: [Alternatives to one-way ANOVA for heteroskedastic data](https://stats.stackexchange.com/questions/91872/alternatives-to-one-way-anova-for-heteroskedastic-data/91881#91881) I will make an example with some simulated data, using the package `MASS` in R. I will simulate data from the null.

```
N <- 2000
p <- c(0.1, 0.2, 0.3, 0.4)

# We simulate from the NULL
set.seed(7*11*13)
country1 <- sample(1:4, N, TRUE, p)
country2 <- sample(1:4, N, TRUE, p)
country3 <- sample(1:4, N, TRUE, p)
country4 <- sample(1:4, N, TRUE, p)

# Amass the variables in format for regression:
Y <- c(country1, country2, country3, country4)
Country <- factor(rep(paste("Country", 1:4, sep=""), rep(N, 4)))
simdata <- data.frame(Y=as.ordered(Y), Country)

mod.polr <- MASS::polr(Y ~ Country, data=simdata, Hess=TRUE)
mod.0    <- MASS::polr(Y ~ 1,       data=simdata, Hess=TRUE)

summary(mod.polr)

Call:
MASS::polr(formula = Y ~ Country, data = simdata, Hess = TRUE)

Coefficients:
                   Value Std. Error t value
CountryCountry2 -0.03197    0.05767 -0.5543
CountryCountry3  0.03639    0.05790  0.6285
CountryCountry4 -0.03094    0.05765 -0.5367

Intercepts:
    Value    Std. Error t value 
1|2  -2.1959   0.0513   -42.8244
2|3  -0.8974   0.0431   -20.8418
3|4   0.4181   0.0421     9.9349

Residual Deviance: 20437.80
AIC: 20449.80

anova(mod.0, mod.polr)
Likelihood ratio tests of ordinal regression models

Response: Y
    Model Resid. df Resid. Dev   Test    Df LR stat.   Pr(Chi)
1       1      7997   20439.67
2 Country      7994   20437.80 1 vs 2     3 1.867304 0.6003995
```
I have the following Equivalent DNF problem: Input: Two DNF formulas, $F\_1$ and $F\_2$, with variables $a\_1,a\_2,...a\_n.$ Output: $1$ if $F\_1$ and $F\_2$ are equivalent, $0$ otherwise. $F\_1$ and $F\_2$ are equivalent if for all $(a\_1,a\_2,...a\_n)∈\{0,1\}^n,F\_1(a\_1,a\_2,...a\_n)= F\_2(a\_1,a\_2,...a\_n).$ Is the DNF-Equivalence problem in $\mathsf{P}$ or $\mathsf{NP\mbox{-Hard}}$? If it is in $\mathsf{P}$, how do we find an efficient algorithm and determine its complexity? How do we prove it if it's $\mathsf{NP\mbox{-Hard}}$?
A special case of DNF equivalence is DNF tautology: Given a DNF formula $F$, is it satisfied for all assignments? This can be seen by setting $F\_1 = F$ and $F\_2$ to be a trivial tautology. CNF non-satisfiability is co-NP-complete. Negating the input formula turns a CNF formula into a DNF formula and vice versa and non-satisfiability into tautology. Thus, DNF tautology is co-NP-complete.
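For very small instances one can of course still check equivalence by brute force over all $2^n$ assignments — exponential, and no contradiction with the co-NP-completeness argued above. A sketch, using an ad-hoc encoding of DNF clauses as lists of signed variable indices (an encoding chosen here only for illustration):

```py
from itertools import product

def eval_dnf(clauses, assignment):
    # clauses: list of clauses; each clause is a list of literals,
    # where literal +i means variable i and -i means its negation (i >= 1).
    return any(
        all((assignment[abs(l) - 1] if l > 0 else not assignment[abs(l) - 1]) for l in clause)
        for clause in clauses
    )

def dnf_equivalent(f1, f2, n):
    # Exhaustive check over all 2^n assignments -- only feasible for tiny n.
    return all(eval_dnf(f1, a) == eval_dnf(f2, a)
               for a in product([False, True], repeat=n))

# (a1 & a2) | (~a1 & ~a2)  versus the same formula with literals reordered
f1 = [[1, 2], [-1, -2]]
f2 = [[-2, -1], [2, 1]]
print(dnf_equivalent(f1, f2, 2))  # True
```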
It should be said, I have no background in calculus, but have read a few books on infinity and still don't understand its relation to statistics. From what I've read it sounds like the textbook answer is something along the lines of: "The probability of picking 7 is indistinguishable from 0," which seems to read as though it's impossible to pick 7. And of course 7 can be swapped out with any number in the infinite set, which leads me to the conclusion that the probability of picking any number from an infinite set is indistinguishable from 0. This, though, seems to be able to be flipped on its head by simply asking what the probability is of picking a number contained in the set when randomly picking some positive integer. Can someone help me make sense of this?
When you start playing with infinities, phrases like "indistinguishable from 0" are not the same as "equal to 0." In this case the probability is being handled as a limit. One could consider the probability of 7 being picked to be $1/\aleph\_0$; however, $\aleph\_0$ is what is called a "limit cardinal," and infinitesimals generated from limit cardinals are not included in the set of real numbers, so we have to be creative. Consider a series defined by $1/x$ as we vary x:

```
   x    1/x
   1    1
   2    0.5
   3    0.3333...
   4    0.25
 ...
  10    0.1
 100    0.01
1000    0.001
 ...
 ℵ_0    ?
```

So what's $1/\aleph\_0$? It can't be zero, because the rules of arithmetic give $0 \* \aleph\_0 = 0$, while basic rearrangement of the algebra shows that $1/x \* x = 1$ for all x ($x \neq 0$). The concept of a limit is used to rigorously explore this intuitive understanding that there is some limiting thing out there for $1/x$ that equals 0. In very intuitive and imprecise wording, this limit is the number "approached" by a series, though the series typically never actually achieves it. Thus, when given an infinite number line and asked to pick a random value from it, the probability of any given number being picked is very very small. In fact, so small that the real numbers can't even describe how small it is. To capture this concept of smallness, we often look at the limit of the probability of a number being picked as we grow the set towards a cardinality of $\aleph\_0$. This limit is zero. This is not to say the probability of picking any given number is zero, just that any process used to describe the meaning of picking randomly from an infinite set that starts from smaller sets and builds larger ones from there (typically using induction) will describe a series of probabilities that get closer and closer to 0 as the sets get larger. The more formal definition of this limit could be phrased with some of the calculus you mention, using the epsilon-delta definition of a limit. By this definition, if you pick any arbitrarily small "epsilon," and try to determine which sized sets result in a probability of picking 7 below this epsilon, I can identify some set size for which all larger sets *must* yield probabilities smaller than this epsilon bound. Of course, you probably don't need the formal definition much. Sometimes intuition is enough, but it's good to know there's a formal definition coming later down the line for you!
I am looking for a way to test whether a boundary threshold exists in a physiological response – a sample of the data is plotted below. My hypothesis is that the X-variable imposes a physiological constraint on Y-values, thus producing a boundary 'ceiling' for maximum Y-values that decreases at higher X-values (indicated by the red line on figure). I assume any Y-values below the boundary are limited by some other factor not included in this model. Essentially, my goal is to determine if the boundary exists and if so to derive a confidence interval for the boundary line model – similar to a linear regression model, but describing the upper bound of the Y-values, rather than the center of mass. I'm sure something like this exists, but I haven't come across it before. Also, I would appreciate any suggestions on a better title or tags for this post – I assume there are more accurate terms for what I'm describing that would help folks find this post. ![threshold](https://i.stack.imgur.com/1BQBk.jpg)
You can use a permutation-based test for such a threshold. [Permutation-based test](http://www.hydrol-earth-syst-sci.net/16/1255/2012/hess-16-1255-2012.pdf) It tests whether a "data-sparse" region above the threshold line is due to random chance or not. In brief: the basic idea is to calculate the area of the "data-sparse" region and use it as a statistic. The next step is to randomly permute the X-coordinates of the scatter-plot and repeat the calculation of the area of the "data-sparse" region. The probability p is the proportion of times the permuted area exceeded the original area. If p is sufficiently small, the "data-sparse" region is deemed to be significant.
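A rough sketch of this procedure in Python, with a deliberately simplified stand-in for the paper's "data-sparse area" statistic (here: how far the per-bin maxima of Y fall below the overall maximum); for real use, the area should be computed exactly as defined in the linked paper:

```py
import numpy as np

def sparse_area(x, y, n_bins=10):
    # Simplified stand-in statistic: in each X-bin, measure how far the largest Y
    # falls below the overall maximum, and sum these gaps across bins.
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    gaps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (x >= lo) & (x <= hi)
        if in_bin.any():
            gaps.append(y.max() - y[in_bin].max())
    return float(np.sum(gaps))

def permutation_test(x, y, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    observed = sparse_area(x, y)
    # Permuting the X-coordinates destroys any X-Y association while keeping both margins.
    perm_stats = [sparse_area(rng.permutation(x), y) for _ in range(n_perm)]
    # p-value: proportion of permuted areas at least as large as the observed one.
    p = (1 + sum(s >= observed for s in perm_stats)) / (n_perm + 1)
    return observed, p
```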
I always wondered how various languages and platforms like Java, HTML5 and browser extensions are made with cross-platform compatibility in mind (operating and functioning the same regardless of OS, be it Linux/Windows/BSD, not specifically CPU architecture)... Is there a certain developmental process, semantic or otherwise, that one would have to follow in order to create such? Can someone give me the "nitty gritty" of how it's done, how it works... i.e. system calls (if any)?
Standardization certainly plays an important role. Specifications can be implemented differently but you can expect the final product to correctly implement the MUST's, SHOULD's, ... of the spec.
I tried to load fastText pretrained model from here [Fasttext model](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md). I am using [wiki.simple.en](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.simple.zip) ``` from gensim.models.keyedvectors import KeyedVectors word_vectors = KeyedVectors.load_word2vec_format('wiki.simple.bin', binary=True) ``` But, it shows the following errors ``` Traceback (most recent call last): File "nltk_check.py", line 28, in <module> word_vectors = KeyedVectors.load_word2vec_format('wiki.simple.bin', binary=True) File "P:\major_project\venv\lib\sitepackages\gensim\models\keyedvectors.py",line 206, in load_word2vec_format header = utils.to_unicode(fin.readline(), encoding=encoding) File "P:\major_project\venv\lib\site-packages\gensim\utils.py", line 235, in any2unicode return unicode(text, encoding, errors=errors) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xba in position 0: invalid start byte ``` **Question 1** How do I load fasttext model with Gensim? **Question 2** Also, after loading the model, I want to find the similarity between two words ```py model.find_similarity('teacher', 'teaches') # Something like this Output : 0.99 ``` How do I do this?
For **.bin** use: [`load_fasttext_format()`](https://radimrehurek.com/gensim/models/fasttext.html#gensim.models.fasttext.FastText.load_fasttext_format) (this typically contains the full model with parameters, ngrams, etc.). For **.vec** use: [`load_word2vec_format`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.Word2VecKeyedVectors.load_word2vec_format) (this contains ONLY word-vectors -> no ngrams + you can't update the model). **Note**: If you are facing memory issues or you are not able to load .bin models, then check the [pyfasttext](https://pypi.org/project/pyfasttext/) model for the same. Credits: Ivan Menshikh (Gensim Maintainer)
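A minimal usage sketch; the exact loader name and path handling depend on your gensim version (newer releases replace `load_fasttext_format` with `load_facebook_model`), so treat this as a template rather than version-specific instructions:

```py
from gensim.models.fasttext import FastText
from gensim.models.keyedvectors import KeyedVectors

# Full model (.bin): keeps subword n-grams, so out-of-vocabulary words still get vectors.
model = FastText.load_fasttext_format('wiki.simple.bin')  # some versions expect the prefix 'wiki.simple'

# Vectors only (.vec): plain word2vec text format, no n-grams, cannot be updated.
word_vectors = KeyedVectors.load_word2vec_format('wiki.simple.vec')

# Question 2: cosine similarity between two words.
print(model.wv.similarity('teacher', 'teaches'))
print(word_vectors.similarity('teacher', 'teaches'))
```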
I am learning a statistical model, which includes a very large amount of parameters, which results in the risk of over-fitting. If I first learn the model parameters from the data, and then simply remove some of the parameters according to whichever criterion, would I be reducing the chance for overfitting? On the one hand, less parameters - less overfitting is supposed to be true. On the other hand one could claim that once the multi-parameter model was already fit, the parameters themselves were already learned incorrectly - and so reducing the number of parameters now does not help. I should note that I have been led to believe that the former is true, though I'm not sure why.
While removing parameters of the model and then **relearning** the weights will reduce overfitting (albeit at the potential cost of underfitting the data), simply removing the parameters after learning without any retraining will have highly unpredictable, and most likely detrimental, effects. As for the question of "why not?", I think the question of "why?" is more appropriate. That is, your working hypothesis is that removing parameters will improve test error in an overfit model. I don't see any reason why you would expect this to be true.
I am looking for examples in games such as Go, Chess, and Backgammon, where the believed-optimal move turned out to be suboptimal as a computer found better strategies.
See Wolfe and Berlekamp -- [Mathematical Go](http://books.google.com/books?id=Gb0bELsFQn8C&dq=isbn:1568810326&hl=en&sa=X&ei=fEa5UZWUK6OjiAKQ6oDoDw&ved=0CC8Q6AEwAA). Using Conway's theory of games, they show how to analyze certain kinds of Go endgames. Their solutions turn out to be measurably better than the solutions given by top Go players. (Not quite an answer to your problem, as those latter solutions were probably never claimed to be optimal.)
Since I'm interested in parsers (mainly in parser expression grammars), I'm wondering if there's some work that gives a categorical treatment of parsing. Any reference on applications of category theory to parsing is highly appreciated. Best,
One of the very first applications of category theory to a subject outside of algebraic geometry was to parsing! The keywords you want to guide your search are "Lambek calculus" and "categorial grammar". In modern terms, Joachim Lambek invented [noncommutative linear logic](http://en.wikipedia.org/wiki/Noncommutative_logic) in order to model sentence structure. The basic idea is that you can give basic parts of speech as having types, and then (say) ascribe English adjectives a function type taking noun phrases to noun phrases. (eg, "green" is viewed as function taking nouns to nouns, which means that "green eggs" is well-typed, since "eggs" is a noun). Linearity arises from the fact that an adjective takes exactly one noun phrase as an argument, and the noncommutativity arises from the fact that the order of words in sentences matters. For example, an adjective's noun argument comes after the adjective ("green eggs"), whereas a prepositional phrase's noun phrase comes *before* the prepositional phrases ("green eggs with ketchup"). In categorical terms, you want a (non-symmetric) monoidal category which is closed on the left and the right. So the type $A\setminus{}B$ is the type of a phrase which has type $B$, when it is preceded by an $A$ on the left, and $B/A$ is the type of a phrase which has type $B$ when succeeded by $A$ on the right, and the type $A \ast B$ is the type of a phrase made by concatenating something of type $A$ with something of type $B$. It turns out that Lambek grammars are equivalent to context free languages, though apparently this quite a difficult result -- showing CFGs are a subset of Lambek grammars is easy, but the other direction was only established in 1991 by Pentus. A good exercise^H^H^Hpublication for the reader (ie, I haven't tried it, but think it would be cool to try) is to use Lambek calculus to reformulate [Valiant's presentation of CYK parsing via boolean matrix multiplication](http://en.wikipedia.org/wiki/CYK_algorithm#Valiant.27s_algorithm), in categorical terms. As motivation, I quote from Lambek's 1958 paper [*The Mathematics of Sentence Structure*](http://dx.doi.org/10.2307%2F2310058): > > The calculus presented here is formally identical with a calculus constructed by G.D. Findlay and the present author for a discussion of canonical mappings in linear and multilinear algebra. > > >
I am interested in model selection in a time series setting. For concreteness, suppose I want to select an ARMA model from a pool of ARMA models with different lag orders. The ultimate **intent is forecasting**. Model selection can be done by 1. cross validation, 2. use of information criteria (AIC, BIC), among other methods. Rob J. Hyndman provides a way to do [cross validation for time series](http://robjhyndman.com/hyndsight/crossvalidation/). For relatively small samples, the sample size used in cross validation may be **qualitatively different** than the original sample size. For example, if the original sample size is 200 observations, then one could think of starting cross validation by taking the first 101 observations and expanding the window to 102, 103, ..., 200 observations to obtain 100 cross-validation results. Clearly, a model that is reasonably parsimonious for 200 observation may be too large for 100 observations and thus its validation error will be large. Thus cross validation is likely to systematically favour too-parsimonious models. This is an **undesirable effect due to the mismatch in sample sizes**. An alternative to cross validation is using information criteria for model selection. Since I care about forecasting, I would use AIC. Even though AIC is asymptotically equiv­a­lent to min­i­miz­ing the out-​​of-​​sample one-​​step fore­cast MSE for time series mod­els (according to [this post](http://robjhyndman.com/hyndsight/aic/) by Rob J. Hyndman), I doubt this is relevant here since the sample sizes I care about are not that large... **Question:** should I choose AIC over time series cross validation for small/medium samples? A few related questions can be found [here](https://stats.stackexchange.com/questions/8807/cross-validating-time-series-analysis), [here](https://stats.stackexchange.com/questions/14099/using-k-fold-cross-validation-for-time-series-model-selection) and [here](https://stats.stackexchange.com/questions/17932/calculating-forecast-error-with-time-series-cross-validation).
Setting theoretical considerations aside, the Akaike Information Criterion is just likelihood penalized by the degrees of freedom. It follows that AIC accounts for uncertainty in the data (*-2LL*) and makes the assumption that more parameters lead to a higher risk of overfitting (*2k*). Cross-validation just looks at the test set performance of the model, with no further assumptions. If you care mostly about making the predictions *and* you can assume that the test set(s) would be reasonably similar to the real-world data, you should go for cross-validation. The possible problem is that when your data is small, then by splitting it, you end up with small training and test sets. Less data for training is bad, and less data for the test set makes the cross-validation results more uncertain (see [Varoquaux, 2018](https://arxiv.org/abs/1706.07581)). If your test sample is insufficient, you may be forced to use AIC, but keeping in mind what it measures and what assumptions it makes. On the other hand, as already mentioned in comments, AIC gives you asymptotic guarantees, which is not the case with small samples. Small samples may be misleading about the uncertainty in the data as well.
(Originally [posted](https://math.stackexchange.com/questions/1724959/a-dynamical-systems-view-of-the-central-limit-theorem) on MSE.) I have seen many heuristic discussions of the classical central limit theorem speak of the normal distribution (or any of the stable distributions) as an "attractor" in the space of probability densities. For example, consider these sentences at the top of Wikipedia's [treatment](https://en.wikipedia.org/wiki/Central_limit_theorem): > > In more general usage, a central limit theorem is any of a set of weak-convergence theorems in probability theory. They all express the fact that a sum of many independent and identically distributed (i.i.d.) random variables, or alternatively, random variables with specific types of dependence, will tend to be distributed according to one of a small set of *attractor distributions*. When the variance of the i.i.d. variables is finite, the attractor distribution is the normal distribution. > > > This dynamical systems language is very suggestive. Feller also speaks of "attraction" in his treatment of the CLT in his second volume (I wonder if that is the source of the language), and Yuval Flimus in [this note](http://www.cs.toronto.edu/~yuvalf/CLT.pdf) even speaks of the "basin of attraction." (I don't think he really means "the exact form of the *basin of attraction* is deducible beforehand" but rather "the exact form of the *attractor* is deducible beforehand"; still, the language is there.) My question is: **can these dynamical analogies be made precise?** I don't know of a book in which they are -- though many books do make a point of emphasizing that the normal distribution is special for its stability under convolution (as well as its stability under the Fourier transform). This is basically telling us that the normal is important because it is a fixed point. The CLT goes further, telling us that it is not just a fixed point but an attractor. To make this geometric picture precise, I imagine taking the phase space to be a suitable infinite-dimensional function space (the space of probability densities) and the evolution operator to be repeated convolution with an initial condition. But I have no sense of the technicalities involved in making this picture work or whether it is worth pursuing. I would guess that since I can't find a treatment that does pursue this approach explicitly, there must be something wrong with my sense that it can be done or that it would be interesting. If that is the case, I would like to hear why. **EDIT**: There are three similar questions throughout Math Stack Exchange and MathOverflow that readers may be interested in: * [Gaussian distributions as fixed points in Some distribution space](https://mathoverflow.net/questions/191791/gaussian-distributions-as-fixed-points-in-some-distribution-space) (MO) * [Central limit theorem via maximal entropy](https://mathoverflow.net/questions/182752/central-limit-theorem-via-maximal-entropy) (MO) * [Is there a proof for the central limit theorem via some fixed point theorem?](https://math.stackexchange.com/questions/813748/is-there-a-proof-for-the-central-limit-theorem-via-some-fixed-point-theorem) (MSE)
After doing some digging in the literature, encouraged by Kjetil's answer, I've found a few references that do take the geometric/dynamical systems approach to the CLT seriously, besides the book by Y. Sinai. I'm posting what I've found for others who may be interested, but I hope still to hear from an expert about the value of this point of view. The most significant influence seems to have come from the work of Charles Stein. But the most direct answer to my question seems to be from Hamedani and Walter, who put a metric on the space of distribution functions and show that convolution generates a contraction, which yields the normal distribution as the unique fixed point. * M. Anshelevich, [The linearization of the central limit operator in free probability theory](http://arxiv.org/pdf/math/9810047.pdf), arXiv:math/9810047v2. * L.H.Y. Chen, L. Goldstein, and Q. Shao, [Normal Approximation by Stein's Method](https://books.google.com/books?id=5jMVpAbs9UkC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false), Springer, 2011. * J.A. Goldstein, [Semigroup-theoretic proofs of the central limit theorem and other theorems of analysis](http://link.springer.com/article/10.1007%2FBF02195925), Semigroup Forum 12 (1976), no. 3, 189–206. * G.G. Hamedani and G.G. Walter, [A fixed point theorem and its application to the central limit theorem](http://link.springer.com/article/10.1007%2FBF01247572#page-1), Arch. Math. (Basel) 43 (1984), no. 3, 258–264. * S. Swaminathan, Fixed-point-theoretic proofs of the central limit theorem, in Fixed Point Theory and Applications (Marseille, 1989), Pitman Res. Notes Math. Ser., vol. 252, Longman Sci. Tech., Harlow, 1991, pp. 391–396. Cited in Karl Stromberg, Probability for Analysts, [page 114](https://books.google.com/books?id=gQaz79fv6QUC&pg=PA114&lpg=PA114&dq=%22central%20limit%20theorem%22%20%22contraction%20mapping%22&source=bl&ots=QFbhYCWe7-&sig=6Vg-YMXI7-QwwSc5QQpUqyK_Ab8&hl=en&sa=X&ved=0ahUKEwitwuzJwr_MAhVGax4KHREZDGsQ6AEINDAF#v=onepage&q=%22central%20limit%20theorem%22%20%22contraction%20mapping%22&f=false). --- **ADDED** October 19, 2018. Another source for this point of view is Oliver Knill's *Probability and Stochastic Processes with Applications*, p. 11 (emphasis added): > > Markov processes often are attracted by fixed points of the Markov operator. Such fixed points are called stationary states. They describe equilibria and often they are measures with maximal entropy. An example is the Markov operator $P$, which assigns to a probability density $f\_y$ the probability density of $f\_{\overline{Y+X}}$ where $\overline{Y+X}$ is the random variable $Y + X$ normalized so that it has mean $0$ and variance $1$. For the initial function $f= 1$, the function $P^n(f\_X)$ is the distribution of $S^{\*}\_n$ the normalized sum of $n$ IID random variables $X\_i$. This Markov operator has a unique equilibrium point, the standard normal distribution. It has maximal entropy among all distributions on the real line with variance $1$ and mean $0$. The central limit theorem tells that the Markov operator $P$ **has the normal distribution as a unique attracting fixed point** if one takes the weaker topology of convergence in distribution on $\mathcal{L}^1$. This works in other situations too. For circle-valued random variables for example, the uniform distribution maximizes entropy. It is not surprising therefore, that there is a central limit theorem for circle-valued random variables with the uniform distribution as the limiting distribution. > > >
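As a purely illustrative aside (my own toy simulation, not taken from any of the references above), one can watch the "attracting fixed point" numerically: start with a markedly non-normal sample and repeatedly apply a normalize-and-convolve operator; the empirical distribution drifts toward the standard normal. The permuted copy below is only an empirical proxy for an independent draw from the current distribution.

```py
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def operator(sample, rng):
    # One application of the normalized-convolution ("Markov") operator:
    # add an (approximately) independent copy and re-standardize to mean 0, variance 1.
    other = rng.permutation(sample)
    out = (sample + other) / np.sqrt(2)
    return (out - out.mean()) / out.std()

# Start far from normal: a standardized exponential sample.
x = rng.exponential(size=100_000)
x = (x - x.mean()) / x.std()

for k in range(6):
    ks = stats.kstest(x, 'norm').statistic   # distance to the N(0,1) "fixed point"
    print(f"iteration {k}: skewness={stats.skew(x):+.3f}, KS distance={ks:.4f}")
    x = operator(x, rng)
```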
When building a predictive model using machine learning techniques, what is the point of doing an exploratory data analysis (EDA)? Is it okay to jump straight to feature generation and building your model(s)? How are descriptive statistics used in EDA important?
Not long ago, I had an interview task for a data science position. I was given a data set and asked to build a predictive model to predict a certain binary variable given the others, with a time limit of a few hours. I went through each of the variables in turn, graphing them, calculating summary statistics etc. I also calculated correlations between the numerical variables. Among the things I found were: * One categorical variable almost perfectly matched the target. * Two or three variables had over half of their values missing. * A couple of variables had extreme outliers. * Two of the numerical variables were perfectly correlated. * etc. My point is that *these were things which had been put in deliberately* to see whether people would notice them before trying to build a model. The company put them in because they are the sort of thing which can happen in real life, and drastically affect model performance. So yes, EDA is important when doing machine learning!
What is the relationship between $Y$ and $X$ in the following plot? In my view there is a negative linear relationship, but because we have a lot of outliers, the relationship is very weak. Am I right? I want to learn how we can interpret scatterplots. ![enter image description here](https://i.stack.imgur.com/hVoLg.jpg)
**Let's have some fun!** First of all, I [scraped](http://arohatgi.info/WebPlotDigitizer/app/) the [data](https://alexisdinno.com/personal/files/data.csv) off your graph. Then I used a running line smoother to produce the black regression line below with the dashed 95% CI bands in gray. The graph below shows a span in the smooth of one half the data, although tighter spans revealed more or less precisely the same relationship. The slight change in slope around $X=0.4$ suggested a relationship that could be approximated using a linear model and adding a linear hinge function of the slope of $X$ in a nonlinear least squares regression (red line): $$Y = \beta\_{0} + \beta\_{X}X + \beta\_{\text{c}}\max\left(X-\theta,0\right) + \varepsilon$$ The coefficient estimates were: $$Y = 50.9 -37.7X -26.74436\max\left(X-0.46,0\right)$$ I would note that while the redoubtable whuber asserts that there are no strong linear relationships, the deviation from the line $Y = 50.9 - 37.7X$ implied by the hinge term is on the same order as the slope of $X$ (i.e. 37.7), so I would respectfully disagree that we see no strong nonlinear relationship (i.e. Yes there are no strong relationships, but the nonlinear term is about as strong as the linear one). ![Play time with data](https://i.stack.imgur.com/7NfNj.png) **Interpretation** (I have proceeded assuming that you are only interested in $Y$ as the dependent variable.) Values of $Y$ are very weakly predicted by $X$ (with an Adjusted-$R^{2}$=0.03). The association is approximately linear, with a slight decrease in slope at about 0.46. The residuals are somewhat skewed to the right, probably because there is a sharp lower bound on values of $Y$. Given the sample size $N=170$, I am inclined to tolerate [violations of normality](http://www.annualreviews.org/doi/abs/10.1146/annurev.publhealth.23.100901.140546). More observations for values of $X>0.5$ would help nail down whether the change in slope is real, or is an artifact of decreased variance of $Y$ in that range. **Updating with the $\ln(Y)$ graph:** (The red line is simply a linear regression of ln(Y) on X.) ![Updated with graph per Russ Lenth's suggestion.](https://i.stack.imgur.com/XwfRg.png) In comments Russ Lenth wrote: "I just wonder if this holds up if you smooth $\log Y$ vs. $X$. The distribution of $Y$ is skewed right." This is quite a good suggestion, as the $\log Y$ transform versus $X$ also gives a slightly better fit than a line between $Y$ and $X$ with residuals that are more symmetrically distributed. However, both his suggested $\log(Y)$ and my linear hinge of $X$ share a preference for a relationship between (untransformed) $Y$ and $X$ that is not described by a straight line.
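For anyone who wants to reproduce this kind of hinge fit, here is a sketch using scipy's `curve_fit`, run on synthetic data generated to mimic the coefficients reported above (the scraped data themselves are not included here):

```py
import numpy as np
from scipy.optimize import curve_fit

def hinge_model(x, b0, bx, bc, theta):
    # Y = b0 + bx*X + bc*max(X - theta, 0)
    return b0 + bx * x + bc * np.maximum(x - theta, 0.0)

# Synthetic stand-in for the scraped data, built around the fitted coefficients above.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 0.8, 170)
y = hinge_model(x, 50.9, -37.7, -26.7, 0.46) + rng.normal(0.0, 8.0, x.size)

# Nonlinear least squares with rough starting values.
popt, _ = curve_fit(hinge_model, x, y, p0=[50.0, -30.0, -20.0, 0.4])
b0, bx, bc, theta = popt
print(f"Y = {b0:.1f} + {bx:.1f}*X + {bc:.1f}*max(X - {theta:.2f}, 0)")
```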
I find some books about computers, but all of them are about technology. I want something more linked to theory.
To complement some of the above answers, Avi Wigderson's recent book [Mathematics and Computation](https://www.math.ias.edu/files/mathandcomp.pdf#page=1) briefly discusses the interplay between computer science and philosophy in section 20.5. More broadly, the entire book contains a lot of material of philosophical interest, as it focuses mainly on the interplay between seemingly different fields, and does so by explaining the underlying structure and meaning of various concepts of ToC (randomness, knowledge, interaction, evolution, induction, learning... among many others). While not a book about philosophy in itself, it makes the reader wonder about the new light shed by ToC on all these concepts. And it significantly departs from standard schoolbooks (indeed, that's not what this book is), as it contains no proofs. I highly recommend it.
Assume you have two coins $A,B$ with biases $P\_A,P\_B$ respectively. We would like to make $N$ coin tosses and get the maximal number of heads possible. Unfortunately, we know $P\_B$, but $P\_A$ is unknown at the beginning of the process. **What is the best possible strategy for selecting which coin to toss at each step?** Note: I'm looking for actual optimization of the selection, not heuristic methods. You may assume a reasonable prior distribution on $P\_A$ if needed. --- EDIT: Here is a formal definition of the objective. Let $P\_A,P\_B\in (0,1)$ and $P=max\{P\_A,P\_B\}$. The optimal strategy in hindsight will achieve on average $P\cdot N$ head tosses. Let $a(P\_A,P\_B)$ be the expected number of times our strategy picked coin $A$ given that the probabilities were $P\_A,P\_B$. **The Optimization Objective:** minimize the expected regret, i.e. $$Regret=P\cdot N - P\_A\cdot a(P\_A,P\_B) - P\_B\cdot (N-a(P\_A,P\_B))$$ **The Goal:** Devise a selection strategy that guarantees (expected) $Regret\leq r$ for any values of $P\_A,P\_B$ (for the smallest value of $r$ possible). Note that if $P\_A\approx P\_B$ this is not a problem in this definition as it means our regret will be small regardless of which coin we flip. --- If $N$ is unknown (i.e. the process may end arbitrarily after an adversarialy chosen number of tosses), the best strategy would be to occasionally explore coin $A$ to get a better $\widehat {P\_A}$ estimation, and in the rest of the time toss coin $A$ iff $\widehat {P\_A} > P\_B$. In the scenario $N$ is known, it's not hard to argue that the optimal strategy makes a series of tosses of coin $A$ and then either keeps on tossing $A$ for the remaining tosses or starts tossing only coin $B$, but how can we optimize the cutoff (i.e. the switch to tossing coin $B$) to maximize the expectancy of head results?
I agree with D.W. that this should just be a dynamic programming question. Assume that $P\_B$ is known and that we have a prior on $P\_A$ and that $N$ is known. (Without a prior on $P\_A$ or a known $N$, I do not see how your objective or "optimal" are well-defined.) Let optimal mean "maximizes expected number of heads" with the expectation over both the coin flips and the draw of $P\_A$ from the prior. (This seems to me the only natural definition of (edit: *achievable*) "optimal" here.) Let $Val(t,prior)$ be the expected number of heads of the optimal strategy starting at time $t$ with prior $prior$ on $P\_A$. Then immediately $$ Val(N,prior) = \max\{ P\_B, ~ \mathbb{E}P\_A \} $$ since at the last step we should just choose the higher chance of flipping heads, and $\mathbb{E}P\_A$ with respect to $prior$ is the probability of getting heads from flipping coin $A$. So we know how to choose the coin at the last step for any distribution on $P\_A$. At step $t < N$, $$ Val(t,prior) = \max\{ P\_B + Val(t+1,prior), ~~ \text{val if flip $A$}\} $$ where $$ \text{val if flip $A$} ~~ = \mathbb{E}P\_A ~~ + (\mathbb{E}P\_A) Val(t+1,posterior(prior,heads)) ~~ + (1-\mathbb{E}P\_A) Val(t+1,posterior(prior,tails)) . $$ Explanation: The value for flipping $B$ is $P\_B$ for this coin flip, plus the value of the optimal strategy at time $t+1$ with the same prior on $P\_A$ as we currently have. The value for flipping $A$ is the expected number of heads for this round, $\mathbb{E}P\_A$, plus our expected value over the two possible cases. First, if the coin comes up heads (which happens with probability $\mathbb{E}P\_A$), then we will update our prior to a posterior on $P\_A$ conditioned on that event, and we will play the optimal strategy at time $t+1$ with that new distribution. Similarly if it comes up tails. There are at the very most $O(n^3)$ cases to calculate because there are $N$ time steps, and at each time step $t$, we have a distribution on $P\_A$ that consists of the original prior updated after observing $j=0,\dots,t$ heads and $k=0,\dots,t-j$ tails. But this should be reduced with your observation that we need only consider strategies that try $A$ for some amount of time, then (optionally) flip $B$ for the rest of the time. Why this is optimal: Assume that the strategy is optimal at time $t+1$ for the specified prior distribution. But the "correct" prior distribution is always exactly the one we plug in to the formula: After observing a heads, the correct distribution over $P\_A$ is precisely $posterior(prior,heads)$, and so on. When I say correct, what I mean is that the posterior gives the exact distribution over $P\_A$ given that $P\_A$ is drawn from the prior and these outcomes are observed. So we really are maximizing over the "average case" in each given scenario.
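Here is a direct transcription of this recursion into code, under the assumption of a Beta prior on $P\_A$ (uniform by default); after $j$ observed heads and $k$ tails the posterior is Beta$(\alpha+j, \beta+k)$, and its mean plays the role of $\mathbb{E}P\_A$ in the formulas above.

```py
from functools import lru_cache

def best_expected_heads(N, p_B, alpha=1.0, beta=1.0):
    """Expected number of heads of the optimal policy with a Beta(alpha, beta) prior on P_A."""

    @lru_cache(maxsize=None)
    def val(t, heads, tails):
        # Value of the optimal strategy with N - t flips remaining,
        # having observed `heads` and `tails` from coin A so far.
        if t == N:
            return 0.0
        e_pa = (alpha + heads) / (alpha + beta + heads + tails)  # posterior mean of P_A
        flip_B = p_B + val(t + 1, heads, tails)
        flip_A = e_pa * (1 + val(t + 1, heads + 1, tails)) \
                 + (1 - e_pa) * val(t + 1, heads, tails + 1)
        return max(flip_A, flip_B)

    return val(0, 0, 0)

# Compare with N * p_B = 12, the value of always flipping coin B.
print(best_expected_heads(N=20, p_B=0.6))
```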
I was reading a [research paper](https://people.cs.uchicago.edu/~laci/papers/hypergraphiso.pdf) where, for hypergraphs of bounded rank $k$, they give a moderately exponential algorithm. The runtime of the algorithm is $e^{\mathcal{O}(k^2\sqrt n)} \cdot poly(n)$. Here $k$ is the rank of the hypergraph, and $poly(n)$ means polynomial in the variable $n$.

> What is the meaning of *moderately exponential* running time?

I have seen this [link](https://xlinux.nist.gov/dads/HTML/moderatexpon.html), but did not understand much.
If we're talking about a generator who can handle any length $n$ seed (perhaps this is more cryptographic PRG oriented), and stretch it to some length $n'>n$ pseudorandom string, then the answer is no. The reason actually has nothing to do with the properties of PRGs, but simply relies on the fact that the output of the generator is computable, and that its range is infinite. Kolmogorov's complexity isn't computable on any infinite recursively enumerable set of strings. To show this you can follow the standard proof of uncomputability of Kolmogorov's complexity. Since the set is infinite, it contains strings of arbitrarily high Kolmogorov's complexity, so you can write a program which enumerates them until it finds some string of high enough complexity, and then stop and output it. This was also answered in [this](https://math.stackexchange.com/questions/12848/is-there-an-infinite-set-of-strings-whose-kolmogorov-complexities-are-computable) math.se question.
So this is an odd fit, though really I think it's an odd fit for any site, so I thought I'd try it here, among my data-crunching brethren. I came to epidemiology and biostatistics from biology, and still definitely have some habits from that field. One of them is keeping a lab notebook. It's useful for documenting thoughts, decisions, musings about the analysis, etc. All in one place, everything committed so I can look back on analysis later and have some clue what I did. But it would be nice to move that into the 21st century. Particularly because even though the lab notebook system is decent enough for one person and documenting decisions, it would be nice to be able to attach plots from EDA, emails from data managers discussing a particular data set, etc. I'm guessing this will involve rigging up my own system from an unholy union of many different bits, but is anyone currently using a system and have any recommendations?
Personally I have found the [Livescribe](http://www.livescribe.com/en-us/) 'smartpen' a godsend. It merges the trusty 'old-world charm' of a traditional pen and paper notebook but includes the ability to record sound (which it synchronises with your pen strokes) ready for later revision. NB: there is a downside, and that is you have to buy special paper that works with the pen... swings and roundabouts really. The audio/pen strokes can be uploaded onto the web and then attached to many of the other programs already highlighted above. Students I teach (biomechanics) absolutely love this and find later studying of difficult concepts much easier than before (pre-Livescribe).
I am trying to wrap my head around the benefits of salt in cryptography. [http://en.wikipedia.org/wiki/Salt\_(cryptography)](http://en.wikipedia.org/wiki/Salt_%28cryptography%29) I understand that adding salt makes it harder to precompute a table. But exactly how much harder do things get with salt? It seems to me that when you add salt, the number of entries in your precomputed table would equal the number of common passwords to precompute × the number of entries in the password table (i.e. the number of different possible salts). So if you have a list of 100 common passwords, then without salt, you would have 100 hashed passwords. But if you have 10 users on a system, with 10 different salts, then you would now have 1000 different combinations to check. So as the number of users or the size of the common-password list increases, the precomputed table gets so big that you can't pre-compute it easily (if at all). Am I getting this? Do I have it right?
There are a couple of different things going on here, and you need to define your problem more clearly. For starters, let's just look at a simple case where what is being stored in a database (for each user) is either $H(pw)$ or $s, H(s, pw)$ where $H$ is a hash function, $pw \in \{1, \ldots, N\}$ is a password, and $s \in \{0,1\}^\ell$ is a salt. To attack a single user without a salt, the attacker can pre-compute $H(pw)$ for all possible values of $pw$ yielding a table of size $N$. Without knowing the salt, however, the attacker has to compute $H(s, pw)$ for all possible salts as well, thus requiring a table of size $N \cdot 2^\ell$. On the other hand, if the attacker does no pre-computation but instead just waits until it compromises the database and then obtains $s^\*, H(s^\*, pw)$, then we are back to the previous case where the attacker just has to compute $N$ values of $H(s^\*, pw)$ to learn the password. Thus, in the single-user case, the salt increases the attacker's off-line computation but does not increase the on-line computation needed. Before continuing, let me note also that (in the case without the salt) the attacker can use precomputation to obtain various time-space tradeoffs. Use of salts makes rainbow tables less effective as well. Salts also help, in a somewhat orthogonal way, in the multi-user setting. To see this, note that if the attacker got the database of hashed passwords in the unsalted case, then using $N$ work he gets the passwords of *all* users. But in the salted case, assuming each of $M$ users is assigned a different salt, the attacker must do $M \cdot N$ work to recover all passwords.
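To make the two storage schemes concrete, here is a small sketch (SHA-256 is just a stand-in for $H$; real systems would use a deliberately slow password hash such as bcrypt or Argon2):

```py
import hashlib, os

def store_unsalted(pw: str) -> bytes:
    # Database stores H(pw): one global precomputed table of size N covers every user.
    return hashlib.sha256(pw.encode()).digest()

def store_salted(pw: str, salt_len: int = 16):
    # Database stores (s, H(s, pw)): a precomputed table would need N * 2^(8*salt_len) entries.
    s = os.urandom(salt_len)
    return s, hashlib.sha256(s + pw.encode()).digest()

def verify_salted(pw: str, s: bytes, stored: bytes) -> bool:
    # On-line verification still costs a single hash, salt or not.
    return hashlib.sha256(s + pw.encode()).digest() == stored

s, h = store_salted("hunter2")
print(verify_salted("hunter2", s, h))   # True
print(verify_salted("letmein", s, h))   # False
```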
In <https://www.seas.harvard.edu/courses/cs152/2019sp/lectures/lec18-monads.pdf> it is written that > > A type $\tau$ list is the type of lists with elements of type $\tau$ > > > Why must a list contain elements of the same type? Why can't it contain elements of different types? Is there a way of defining a list polymorphically in the typed lambda calculus, so that it takes elements of any type? Can we then use the List monad on lists, defined polymorphically?
The short answer is that $\tau\ \text{list}$ is *defined* as a type constructor, along with rules for formation and elimination, and so we could similarly *define* a type constructor that allowed terms of different types to form a single "variably-typed list". However, lists cannot take different types in the definition given, simply because they are defined with respect to a single type. In either case, adding lists, or variably-typed lists, involves extending the simply-typed $\lambda$-calculus, as lists of *any* kind do not exist in the usual presentation. If we have a slightly richer type system than the simply-typed $\lambda$-calculus, we can encode variably-typed lists using standard $\tau\ \text{list}$s. * If we have a form of [subtyping](https://en.wikipedia.org/wiki/Subtyping), we can store terms of different types, as long as they share a supertype. However, when we project elements out of the list, we can no longer tell specifically what type they were to begin with (this may be familiar from object-oriented programming), so this is a little limited. * If we have [dependent sum types](https://en.wikipedia.org/wiki/Dependent_type#Formal_definition) (also called $\Sigma$-types) and a [universe type](https://en.wikipedia.org/wiki/Universe_(mathematics)#In_type_theory) $\mathcal U$ (i.e. a "type of types"), we can form the type $(\Sigma\_{A : \mathcal U} A)\ \text{list}$, whose elements are pairs consisting of a type $A$ and a term of that type. Finally, I'll just note that polymorphism doesn't help us if we want heterogeneous lists: it just allows us to manipulate homogeneous lists for different $\tau$ more effectively. Polymorphic types have to be [uniform](https://en.wikipedia.org/wiki/Parametric_polymorphism) in some sense, which is why we need dependency here instead. --- To answer a follow-up question: if we have two variably-sorted lists using the dependent type approach, we can concatenate and flatten lists just as with ordinary lists. * The $\mathrm{List}$ monad has an operation $\mathrm{join}$ (in the language of Haskell), so given a list of variably-typed lists, $$l = [[(A, a), (B, b)], [(C, c), (D, d)]] : (\Sigma\_{X : \mathcal U} X)\ \text{list list}$$ we can perform $\mathrm{join}$ to get a new list: $$\mathrm{join}(l) = [(A, a), (B, b), (C, c), (D, d)] : (\Sigma\_{X : \mathcal U} X)\ \text{list}$$ * Similarly, $\tau\ \text{list}$ can be equipped with a concatenation operation $+\!+$, so given the two lists in the previous example, we can concatenate them for a similar result: $$[(A, a), (B, b)]\ {+\!+}\ [(C, c), (D, d)] = [(A, a), (B, b), (C, c), (D, d)] : (\Sigma\_{X : \mathcal U} X)\ \text{list}$$
Show that if $d(n)$ is $O(f(n))$, then $ad(n)$ is $O(f(n))$, for any constant $a > 0$? Does this need to be shown through induction or is it sufficient to say: Let $d(n) = n$ which is $O(f(n))$. Therefore $ad(n) = an$ which is trivially $O(f(n))$
No, it is not sufficient to say "let $d(n) = n$ which is $O(f(n))$. Therefore $ad(n) = an$ which is trivially $O(f(n))$". Although that is a reasonable way to to understand the proposition quickly, it is neither sufficient nor necessary. It cannot be considered as a proof. It can easily lead to misunderstanding if communicated. To *show* that "if $d(n)$ is $O(f(n))$, then $ad(n)$ is $O(f(n))$, for any constant $a > 0$", let us apply the relevant definitions. $$\begin{align\*} d(n)\text{ is }O(f(n)) &\Longrightarrow \limsup\_{n\to\infty}\dfrac{|d(n)|}{f(n)} <\infty\\ \left(\text{since } \limsup\_{n\to\infty}\dfrac{a|d(n)|}{f(n)}=a\limsup\_{n\to\infty}\dfrac{|d(n)|}{f(n)}\right) &\Longrightarrow\limsup\_{n\to\infty}\dfrac{a|d(n)|}{f(n)} <\infty\\ &\Longrightarrow\limsup\_{n\to\infty}\dfrac{|ad(n)|}{f(n)} <\infty\\ &\Longrightarrow ad(n)\text{ is }O(f(n)).\\ \end{align\*}$$ The *proof* above is rigorous, although it is hardly the way we as humans understand the proposition. Here is another approach. $$\begin{align\*} d(n)\text{ is }O(f(n)) &\Longrightarrow |d(n)|\text{ is bounded above by } cf(n)\text{ when $n$ is large enough for some constant } c\\ &\Longrightarrow |ad(n)|\text{ is bounded above by } acf(n)\text{ when $n$ is large enough for some constant } c\\ (\text{let } c'=ac)\ \ &\Longrightarrow |ad(n)|\text{ is bounded above by } c'f(n)\text{ when $n$ is large enough for some constant } c'\\ &\Longrightarrow ad(n)\text{ is }O(f(n)).\\ \end{align\*}$$ The approach above can be considered as a proof among people who are familiar with the stuff. It is probably the way to understand the proposition as well. You can imagine that the graph of $cf(n)$ lies above the graph of $d(n)$, and, hence, the graph of $acf(n)$ lies above the graph of $ad(n)$.
I understand that it is not the normality of a random variable that matters in a t-test, but rather the fact that the distribution of the mean follows a normal distribution for large samples. However, is it sometimes useful to test how large a sample must be before the assumption of a normal distribution is warranted? For example, if the number of samples required were very large (much larger than n=30), would assuming a normal distribution be inappropriate? If so, how would you go about checking this? The inspiration for this question comes from reading about Nassim Nicholas Taleb's Kappa metric.
Of course, the main issue is for the t statistic to have a t distribution with the appropriate degrees of freedom. For normal data $\bar X$ and $S$ are stochastically independent. (One might argue that they are not *functionally* independent because $S^2$ can be defined in terms of $\bar X.)$ By contrast, for exponential data $\bar X$ and $S$ are not independent. So, technically, it not enough for $n$ to be large enough that $\bar X$ is nearly normal. In order for the distribution of a t statistic, e.g, $T = \frac{\bar X-\mu\_0}{S/\sqrt{n}},$ to have the appropriate t distribution, its numerator and denominator should be independent. Using goodness-of-fit tests to check data for normality before doing a t test has been deprecated. But it is a good idea to look at plots of the data to see if they are nearly symmetrical and free of far outliers in either direction before using the data for a t test. For example, consider a sample of size $n = 100$ from a [Pareto](https://en.wikipedia.org/wiki/Pareto_distribution#Relation_to_the_exponential_distribution) distribution with minimum value $1$ and shape parameter $\alpha = 10,$ and mean $\mu = 10/9.$ [Observations can be sampled as $e^Y,$ where $Y\sim\mathsf{Exp}(\mathrm{rate}=10).]$ Each row of the matrix `MAT` below contains such a sample. ``` set.seed(2022) m = 10^4; n = 100 MAT = matrix(exp(rexp(m*n, 10)), nrow=m) ``` A boxplot of the first sample shows marked right-skewness and outliers. ``` boxplot(MAT[1,], horizontal=T, col="skyblue2") ``` [![enter image description here](https://i.stack.imgur.com/QWZOb.png)](https://i.stack.imgur.com/QWZOb.png) ``` a = rowMeans(MAT) s = apply(MAT,1,sd) cor(a, s) [1] 0.7201542 ``` Sample means and SDs are clearly correlated. But one might consider that the means $\bar X$ (`a` in the code) are "close enough" to normal. ``` hist(a, prob=T, br=30, col="skyblue2") curve(dnorm(x,mean(a), sd(a)), add=T, col="red", lwd=2) ``` [![enter image description here](https://i.stack.imgur.com/8VZgV.png)](https://i.stack.imgur.com/8VZgV.png) However, $T$ statistics are not distributed as $\mathsf{T}(\nu=99)$---especially not in the tails, where values are used to decide whether to reject the null hypothesis. So one should not trust results of a t test on such Pareto data. ``` t = (a - 10/9)/(s/sqrt(n)) hist(t, prob=T, br=30, col="skyblue2") curve(dt(x,n-1), add=T, lwd=2, col="red") ``` [![enter image description here](https://i.stack.imgur.com/kbF9J.png)](https://i.stack.imgur.com/kbF9J.png) By contrast, samples of size $n = 50$ from a standard uniform distribution behave well. ``` set.seed(129) m = 10^4; n = 50 MAT = matrix(runif(m*n), nrow=m) a = rowMeans(MAT); s = apply(MAT,1,sd) cor(a, s) [1] -0.01452768 # nearly independent ``` $T$ statistics are nearly distributed as $\mathsf{T}(\nu=49).$ So one could trust results of a t test on such uniform data. ``` t = (a - 1/2)/(s/sqrt(50)) hist(t, prob=T, br=20, col="skyblue2") curve(dt(x,49), add=T, col="red") ``` [![enter image description here](https://i.stack.imgur.com/KYQh3.png)](https://i.stack.imgur.com/KYQh3.png)
Background: An instance of 3-SAT is called **monotone** if each clause consists only of positive literals or only of negative literals. Given an instance $\phi$ of 3-SAT, we consider the bipartite graph $G\_\phi$ that contains a vertex per each variable and per each clause, and has an edge between a variable-vertex and a clause-vertex if and only if the variable appears in the clause. When $G\_\phi$ is planar then $\phi$ is also said to be **planar**. **Planar Monotone 3-SAT** (de Berg and Khosravi [[1](https://www.win.tue.nl/%7Emdberg/Papers/2010/bk-obspp-10.pdf)]) is an NP-hard SAT variant where instances are both monotone and planar. Furthermore, it remains hard when the graph $G\_\phi$ has the following rectilinear representation: All variable-vertices lie on a horizontal line and all positive clause vertices are above it and all the negative clause vertices are below it. For example: [![enter image description here](https://i.stack.imgur.com/YHP4D.png)](https://i.stack.imgur.com/YHP4D.png) *(Figure adapted from [[1](https://www.win.tue.nl/%7Emdberg/Papers/2010/bk-obspp-10.pdf)])* In this representation, each clause has at most 3 vertical lines, called legs, that go up/down to each of the variables it contains. We say that two legs enclose a clause $C$ if $C$ is horizontally between the two legs (and is on the same side as the legs with respect to the variables). A new definition: A clause (of size 3) is **bi-enclosing** if both its leftmost and rightmost pair of legs enclose some other clauses. For example, in the figure above $C\_i$ is the only bi-enclosing clause (as all others clauses have a pair of legs with no clauses between them). **The SAT variant I'm asking about:** All Planar Monotone 3-SAT instances in rectilinear representation where there are no bi-enclosing clauses. *Is deciding satisfiability in this case NP-hard?* I haven't seen this version addressed explicitly before. Interestingly, the first example of the rectilinear representation for (the more general) planar 3-SAT problem, given in Knuth and Raghunatan [[3](https://epubs.siam.org/doi/pdf/10.1137/0405033)], has no bi-enclosing clauses: [![enter image description here](https://i.stack.imgur.com/5RNm8.png)](https://i.stack.imgur.com/5RNm8.png) **Edit following domotorp's answer:** Here is an instance where we cannot change the order of the variables in order to prevent $(x\_1 ∨ x\_3 ∨ x\_5)$ from being bi-enclosing: [![enter image description here](https://i.stack.imgur.com/AAdJ1.png)](https://i.stack.imgur.com/AAdJ1.png) (Note that in the variant in question we cannot move clauses to the other side of the variables. In Planar 3-SAT, however, we could easily do that to fix the bad clause above and so a more complex example would be required for a "non-fixable" instance in that case).
The following paper answers the question in the affirmative – the variant remains NP-hard using a reduction from Monotone Planar 3-SAT: <http://epubs.siam.org/doi/abs/10.1137/1.9781611976465.105> (arXiv: <http://arxiv.org/abs/2009.12369>) The paper presents a slightly more restricted variant, **Monotone Planar 3-SAT with Neighboring Variable Pairs**, which requires each clause with three variables to contain two variables that are adjacent on the line on which all variables are embedded. If there is a bi-enclosing clause in this variant, it must enclose a redundant clause due to monotonicity, so we can assume no bi-enclosing clauses.
A cordial greeting to all. I am working with a time series and I am trying to predict it with SARIMAX. There are many possible combinations when adjusting the values of the variables ***order (p, d, q)*** and ***seasonal\_order (P, D, Q, s)***. I found the pmdarima library; with its auto\_arima function it automatically discovers the optimal order for an ARIMA model. I am looking for something similar for SARIMAX. I was wondering: is there a library that calculates the values of the order and seasonal\_order variables automatically, to get the best result? I appreciate the help you can give me in this regard. Thanks.
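For what it's worth, pmdarima's `auto_arima` already searches over the seasonal order as well when given `seasonal=True` and a period `m`. A sketch on one of the datasets bundled with the library, used here only as a stand-in for your own series:

```py
import pmdarima as pm
from pmdarima.datasets import load_wineind

# Monthly series shipped with pmdarima (placeholder for your own data).
y = load_wineind()

model = pm.auto_arima(
    y,
    seasonal=True, m=12,      # also search the seasonal order (P, D, Q) with period 12
    stepwise=True,            # stepwise search instead of a full grid
    suppress_warnings=True,
    trace=True,               # print each candidate model and its AIC
)
print(model.order, model.seasonal_order)
```

Exogenous regressors (the "X" in SARIMAX) can be supplied as an extra argument to `auto_arima`; note that the parameter name differs between pmdarima versions (`exogenous` in older releases, `X` in newer ones).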
In addition to Demetri's answer (+1):

1. The use of GAM is well-established in the field of Ecology so I would add certain books/influential articles. Show you are not reinventing the wheel; rather, that you are abreast of modern modelling approaches.
2. You do not describe your sample size but you might want to try a validation schema to show that through the use of GAMs you get better goodness-of-fit. While hand-wavy, if something like an AIC/BIC shows a clear preference for a particular model this can pacify some (not too sophisticated) criticism...
3. I would emphasise how the GAM fitting procedure looks into shrinkage. It is plausible that someone oversimplified GAMs in his/her head as "a polynomial basis of sorts" and therefore prone to overfit.
4. Take their view-point for a moment: are there any established studies suggesting logarithmic, or exponential decay curves already? The reviewer might be satisfied that you acknowledge them as a possibility. Maybe you can make a critical assessment of that prior work and show how your work is a step forward.
5. As Demetri mentioned, specifying a functional form without prior knowledge can induce strong bias. You can politely double-down on the fact you are using a non-parametric approach. Maybe even try different basis functions (e.g. cubic regression splines and thin-plate splines) and show how the results are (hopefully) very similar and thus not dependent on the choice of basis functions.

Just to be clear: In my opinion, *using GAMs is the correct approach* here; the criticism of "why not X-functional form" is weak. Such criticism might be warranted if prior research suggested robust evidence for a particular modelling assumption, but even then it would not be a particularly strong position to take. That said, try to see where they are coming from too; criticism can help strengthen your manuscript and/or alleviate the worries of future readers.
There is an at-times dizzying array of symbols used in math and CS papers. Yet many assume basic familiarity that seems rarely taught in one place. I am looking for a dictionary something like the following, especially from a CS perspective.

* It would list all the basic mathematical symbols and give their meanings and examples. It would talk about symbols that are sometimes used in equivalent ways. It would note common beginner mistakes.
* It would talk about the subtleties surrounding different meanings of a single symbol (much like multiple definitions of the same word in a dictionary).
* It would not merely be a very terse description of each symbol, such as one-word descriptions like "subset".
* It would show how symbols are sometimes "overloaded". For example, $\binom{x}{y}$ could have $x$ as an integer, but sometimes $x$ can be a set with this notation and it means to choose elements from this set. $[n]$ sometimes means the set of integers $1 \ldots n$, or other times it's a one-element array.
* It might talk about how to describe all kinds of different "objects" in terms of different symbols or equivalent ways of referring to them (but which are more clear) and the operations possible on those objects. In other words, kind of like an API for math objects. I.e. it would also at times be a "style manual" for different nuances in how to present mathematical writing.

This would be a very helpful resource for anyone writing questions on the mathematical Stack Exchanges, where many questions fail to make sense based on not fitting into tricky mathematical conventions. Some book introductions have many of these features; however, ideally it would be a separate treatment. Also, ideally of course it would be online. There are tables of LaTeX symbols, but they don't really fulfill many of the above criteria.

> Has anyone seen a "dictionary of symbols" that matches these features?

(Alternatively, it seems like an excellent wiki or FAQ project if good references like this don't exist.)
> > To know something's secret name is to steal its power. - Dr. Daniel Jackson > > > For the times when we can only draw the symbol and want a name. See: [shapecatcher](http://shapecatcher.com/) You draw the symbol and if there is a unicode char for it, it will find it and give you a name. If there are many such as =, it will list many results. Then it is just a matter of Googling. Enjoy!
I'm programming a genetic algorithm using grammatical evolution. My problem is that I reach locally optimal values (premature convergence) and, when that happens, I don't know what to do. I'm thinking about increasing the mutation rate (5% is its default value), but I don't know how to decide when that is necessary.

The data I have on every generation is a two-dimensional array whose first column is the fitness:

```
adn[i][0] ← fitness
row → the values of the Grammar
column ↓ each individual's result
```

If you need clarification, please ask and I'll be happy to edit. Note that English is not my mother tongue; sorry for the mistakes and for the inconvenience.

In response to a request, my operations are the following, in exactly this order:

* I generate a random population (a matrix with random numbers).
* I generate a matrix that contains the desired result. To do this I have implemented a couple of functions that additionally have a ±5% variation, for example: fun(x) = (2\*cos(x) + sin(x) - 2x) \* **(0.95 + (a number oscillating between 0 and 0.1))**. The x values run sequentially from 0 to N (N being the row size), and the y matrix contains exactly the same kind of values (more results). A small sketch of this step is given at the end of the question.
* The algorithm starts (the generations begin to change).

The actions performed in every generation are:

* Mutation: each chromosome can mutate at a random gene → adn[i][random] = random number (with a 5% probability of this happening).
* Crossover: I cross every adn with another adn (each pair has an 80% chance of being crossed); for the pairing I pick a random number j and face adn[i] against adn[(i+j) mod NumADNs].
* Translation: I get a matrix that contains the values f(0 to N), doing transcription and translation in one step by applying the grammar in the image.

[![the grammar](https://i.stack.imgur.com/wu2EQ.png)](https://i.stack.imgur.com/wu2EQ.png)

* Fitness: I compare the values obtained with the expected ones and update the fitness.
* Elitism: after that, I choose the best 4 adn's and move them to the top; they will always be selected.
* Selection: every non-elitist adn faces a totally random adn, and the one with the lower fitness prevails (lower is better), with some possibility of the worse one surviving.
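Here is a minimal Python sketch (with invented names) of the target-generation step described above, i.e. the base curve times a random ±5%-style factor; the real code may of course differ:

```
# Hypothetical sketch of the target-generation step (names are invented).
import math
import random

N = 100  # number of sample points (the row size)

def target(x: float) -> float:
    # Base curve times a random factor in [0.95, 1.05), as described:
    # (2*cos(x) + sin(x) - 2*x) * (0.95 + U(0, 0.1))
    return (2 * math.cos(x) + math.sin(x) - 2 * x) * (0.95 + random.uniform(0.0, 0.1))

# Expected values that the individuals' outputs are compared against when computing fitness.
expected = [target(x) for x in range(N)]
```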
It looks like you're dealing with **premature convergence**. In other words, your population fills itself with individuals that represent the suboptimal solution and/or individuals that are (too) close to that solution.

The basic framework of a genetic algorithm is as follows:

```
P <- Population of size N with N random individuals.
evaluate fitness of all individuals in P
while (stopping criteria not met)
{
    C <- empty Child set of size M
    while (size of C is not M)
    {
        parent1 <- select an individual from P
        parent2 <- select an individual from P
        child1, child2 <- combine parent1 and parent2 somehow.
        child1, child2 <- mutate children by chance
        evaluate fitness of child1, child2
        C <- C + child1 + child2
    }
    P <- combine P and C. (P remains size N)
}
return individual with best fitness
```

Note that, e.g., the size of the population/children doesn't have to be constant *per se*. You might also combine a variable number of parents into a variable number of children (e.g. a crossover between 5 parents resulting in 7 children). But I would keep it simple at first.

As you can see, the main operators in a genetic algorithm are, in order:

* **Selection**: select individuals from the population that will be combined. Examples: *tournament selection*, *proportionate selection*, *truncation selection*, ...
* **Crossover**: combine the selected individuals (parents) into new individuals (children). Examples: *one-point crossover*, *n-point crossover*, *uniform crossover*, *cut and splice*, ...
* **Mutation**: by chance, mutate (or don't mutate) an individual by changing it slightly.
* **Recombination**: somehow insert the children into the set of parents. Examples: add all children to the population, sort it by fitness and remove the worst individuals so your population is size N again; or sort your population, drop the worst M individuals and add all children. Often the same techniques presented in the selection phase are used here.

In your description, you're merging multiple steps as if they were one (e.g. you skip the *selection* step but incorporate it into the *crossover* step). You also describe techniques as if they were steps of the algorithm (e.g. elitism is a technique used in the *recombination* step to ensure that at least the best individuals don't die).

An example where premature convergence might/will occur is when you only *select* the best individuals as parents and only allow the best individuals to survive (in the *recombination* step). Some possible methods to resolve this:

* **Increase the mutation rate**. However, mutation is usually a very random process; you would need 'pure' luck to escape the suboptimal solution.
* **Redesign your genetic operations**, e.g. allow low-fitness individuals/offspring to survive the generation more frequently. It could be that you're currently selecting too many good individuals to survive. Don't let too many bad individuals survive though, or your algorithm will never converge to something good.
* (...)

The goal is to tweak your genetic operations in such a way that in each subsequent generation the average fitness of your population has (preferably) increased, while a large enough fitness variation is maintained. This is not easy.

There are several other methods to avoid premature convergence if the above doesn't help you out. I strongly recommend experimenting with your genetic operations first, however. Search terms: *preselection*, *crowding*, *fitness sharing*, *incest prevention*, ...
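To make the framework above concrete, here is a minimal runnable sketch in Python. The toy fitness function, the parameter values, and the particular operator choices (tournament selection, one-point crossover, per-gene mutation, elitist recombination) are illustrative assumptions, not a prescription:

```
# Minimal runnable GA sketch following the framework above.
import random

GENES, POP, CHILDREN, GENERATIONS = 20, 50, 50, 200
MUTATION_RATE, TOURNAMENT_K, ELITES = 0.05, 3, 4

def random_individual():
    return [random.randint(0, 255) for _ in range(GENES)]

def fitness(ind):
    # Toy objective (lower is better): distance to the all-100 vector.
    return sum(abs(g - 100) for g in ind)

def tournament(pop):
    # Selection: best of K randomly chosen individuals.
    return min(random.sample(pop, TOURNAMENT_K), key=fitness)

def crossover(p1, p2):
    # One-point crossover producing two children.
    cut = random.randint(1, GENES - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(ind):
    # Per-gene mutation with a small probability.
    return [random.randint(0, 255) if random.random() < MUTATION_RATE else g
            for g in ind]

population = [random_individual() for _ in range(POP)]
for _ in range(GENERATIONS):
    children = []
    while len(children) < CHILDREN:
        c1, c2 = crossover(tournament(population), tournament(population))
        children += [mutate(c1), mutate(c2)]
    # Recombination with elitism: keep the best ELITES parents and fill the
    # rest with the best children, so the population stays at size POP.
    survivors = sorted(population, key=fitness)[:ELITES]
    population = survivors + sorted(children, key=fitness)[:POP - ELITES]

print("best fitness:", fitness(min(population, key=fitness)))
```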
I'm really struggling with this property:

> Let $X,Y$ be [coherence spaces](http://en.wikipedia.org/wiki/Coherent_space) and $f: Cl(X) \rightarrow Cl(Y)$ be a monotone function. $f$ is continuous if and only if $f(\bigcup\_{x\in D} x)=\bigcup\_{x \in D}f(x)$, for all $D \subseteq Cl(X)$ such that $D$ is a directed set.
>
> **Directed set** is defined thus: a subset $D$ of a poset is a directed set iff $\forall x, x' \in D \ \exists z \in D$ such that $x \subseteq z$ and $x' \subseteq z$.
>
> $Cl(X)$ stands for the cliques of $X$: $\{x \subseteq |X| \mid a,b \in x \Rightarrow a$ coherent $b \}$.

Many books give that as the definition of **[Scott-continuous](http://en.wikipedia.org/wiki/Scott_continuity) functions**, but unluckily not my teacher. He gave us this definition of continuity:

> $f : Cl(X) \rightarrow Cl(Y)$ is continuous iff it is monotone and $\forall x \in Cl(X), \forall b \in f(x), \exists x\_0 \subseteq\_{fin} x$ such that $b \in f(x\_0)$,
>
> where **monotone** is defined as: $f$ is monotone iff $a \subseteq b \Rightarrow f(a) \subseteq f(b)$.

This is the proposed proof I have, but I can't understand the last step.

**Proof that $f$ continuous implies $f(\bigcup D)=\bigcup f(D)$**: Let $b \in f(\bigcup D)$. By the definition of continuity, $\exists x\_0 \subseteq\_{fin} \bigcup D$ such that $b \in f(x\_0)$. Note that $x\_0$ is a finite subset of the union of $\{ x\_i \mid x\_i \in D\}$. Since $D$ is directed, $\exists z \in D$ with $x\_i \subseteq z$ for these $x\_i$, hence $x\_0 \subseteq z$. By the definition of monotonicity, $f(x\_0)\subseteq f(z)$, so $b \in f(z)$ ***(???)*** $\subseteq \bigcup f(D)$.

And even if that is true, we should show that $\bigcup f(D) = f(\bigcup D)$, not just $\subseteq$.

The proof of the other implication is even worse, so I can't write it here... Can you explain to me how the proof can work?
The definition of continuity used by your teacher is the nicer one. It tells you pretty concretely what continuity means. Suppose $b \in f(x)$. That means that given all the information of $x$, possibly an infinite set of tokens (atoms), the function produces some element that has the atomic piece of information $b$. (It could have other information too, but we are not concerned with that at the moment.) Your teacher's definition says that it is not necessary to look at all the infinite information of $x$ in order to produce the output information $b$. Some finite subset of $x$ is enough to produce it. (Melvin Fitting's book "Computability theory, semantics and logic programming", Oxford, 1987, calls this property *compactness* and defines a continuous function as being monotone and compact.) This is the *essence* of continuity. To get some finite amount of information about the output of a function, you only need a finite amount of information about the input. The output produced by the function for an infinite input is obtained by piecing together the information it produces for all *finite* approximations of the infinite input. In other words, you don't get any magical jump in going from the finite approximations to their infinite union. Whatever you get at infinity, you should already get at some finite stage. The standard equation $f(\bigcup\_{x \in D} x) = \bigcup\_{x \in D} f(x)$ is pretty to look at, but it doesn't tell you all the intuition I have explained above. However, mathematically, it is equivalent to your teacher's definition. To show that $\bigcup\_{x \in D} f(x) \subseteq f(\bigcup\_{x \in D} x)$, it is enough to show that $f(x)$ is included in $f(\bigcup\_{x \in D} x)$, for each $x \in D$. But that follows directly from monotonicity of $f$ because $x \subseteq \bigcup\_{x \in D} x$. So, this is the "easy" direction. The other direction, proved by your teacher, is the interesting one: $f(\bigcup\_{x \in D} x) \subseteq \bigcup\_{x \in D} f(x)$. To see this, use the intuition I have mentioned above. Any atomic piece of information $b$ in the left hand side comes from some finite approximation of the input: $x\_0 \subseteq\_{fin} \bigcup\_{x \in D} x$. That is, $b \in f(x\_0)$. Since $x\_0$ is finite and it is included in the union of the directed set, there must be something in the directed set that is larger than $x\_0$, perhaps $x\_0$ itself. Call that element $z$. By monotonicity, $f(x\_0) \subseteq f(z)$. So, $b \in f(z)$. Since $z \in D$, $f(z) \subseteq \bigcup\_{x \in D} f(x)$. So, now $b$ is seen to be in the right hand side too. QED. As you have noted, showing that your teacher's continuity implies the pretty equation is the easy bit. The harder bit is to show that the pretty equation, despite looking like it is not saying very much, really does say everything in your teacher's definition.
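For reference, the interesting direction can be written as one compact chain (this is just a restatement of the argument above in display form):

$$b \in f\Big(\bigcup\_{x \in D} x\Big) \;\Rightarrow\; \exists x\_0 \subseteq\_{fin} \bigcup\_{x \in D} x \text{ with } b \in f(x\_0) \;\Rightarrow\; \exists z \in D \text{ with } x\_0 \subseteq z \;\Rightarrow\; b \in f(x\_0) \subseteq f(z) \subseteq \bigcup\_{x \in D} f(x),$$

using, in turn, the teacher's continuity, the finiteness of $x\_0$ together with the directedness of $D$, and the monotonicity of $f$ together with $z \in D$.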
This is an interview question. I need to implement a data structure that supports the following operations:

1. Insertion of an integer in $O(1)$.
2. Deletion of an integer in $O(1)$ (for example, if we call delete(7), then 7 is deleted from the data structure, if the integer 7 exists in it).
3. Returning the maximum integer in the data structure (without deleting it) in $O(1)$.

You can also use up to $O(n)$ space. I thought of something similar to this [question](https://cs.stackexchange.com/questions/19518/which-data-structure-to-use-for-accessing-min-max-in-constant-time), but there the bound is $O(\log n)$. Do you know how we can implement these operations in $O(1)$?

**Edit**: I forgot an important thing: you can assume that the numbers to be inserted are integers in the range $[0,n]$.
Suppose we had such a data structure for arbitrary numbers. We could find the max in $O(1)$, delete it in $O(1)$, and repeat this $n$ times; hence we could extract all $n$ numbers in sorted order in $O(n)$ time. Since comparison-based sorting cannot, in general, be done in better than $\Theta(n\log n)$ time, either constructing (filling) such a data structure must itself take $\Omega(n\log n)$, or there must be some constraint on the data. Also, since deletion runs in $O(1)$, the element to be deleted must be *found* in $O(1)$, so searching/finding is in $O(1)$; therefore you must be able to go directly to the position of each value (something like counting sort, where the constraint $\max \le n$ keeps the space at $O(n)$). Hence, using the stated restriction to integers in $[0,n]$, you can act like counting sort: keep an array of counts indexed by value and store the current maximum in a variable beside the array. This works because of the constraint on the data, not in general.
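As a concrete, hedged illustration of the counting-sort-style idea above, here is a minimal Python sketch for keys in $[0,n]$. The class name and interface are invented, and note the caveat in the comments: restoring the maximum after deleting its last occurrence requires a downward scan, so that particular case is not $O(1)$ in the worst case.

```
# Sketch of a direct-address (counting) structure for integer keys in [0, n].
# insert and get_max are O(1); delete is O(1) except when the last copy of the
# current maximum is removed, in which case it scans downward for the new max.
class BoundedMultiset:
    def __init__(self, n: int):
        self.count = [0] * (n + 1)  # count[v] = multiplicity of value v
        self.max = -1               # -1 means "empty"

    def insert(self, v: int) -> None:
        self.count[v] += 1
        if v > self.max:
            self.max = v

    def delete(self, v: int) -> None:
        if self.count[v] == 0:
            return                  # value not present: nothing to do
        self.count[v] -= 1
        if v == self.max and self.count[v] == 0:
            # Restore the maximum by scanning downward.
            while self.max >= 0 and self.count[self.max] == 0:
                self.max -= 1

    def get_max(self) -> int:
        return self.max             # -1 if the structure is empty

# Example usage
s = BoundedMultiset(10)
for v in (3, 7, 7, 9):
    s.insert(v)
s.delete(9)
print(s.get_max())  # 7
```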