Let's assume a multilayer perceptron with $l$ layers and $n\_i$ neurons at each layer $i=1, \cdots, l$. The number of input neurons $n\_1$ and the number of output neurons $n\_l$ are fixed. Now I would like to compare different network architectures with each other under one of the following constraints:

* the number of connections between all neurons is constant, *or*
* the number of neurons in the network is constant.

I was told to keep the number of neurons constant. But under this constraint I can maximize the number of connections between neurons by keeping only one hidden layer, which results in a network of higher capacity (many more weights $w$). In my opinion the number of connections, and thus the number of weights, should be kept constant while playing around with the network's architecture. Which constraint makes more sense?
I think the more general way to look at this is not in terms of "connections," which can be challenging to apply in the case of networks that are not multi-layer perceptrons, but instead in terms of *parameters* (weights and biases). For example, there is a dramatic difference in the number of parameters in a GRU and LSTM cell. Keeping the number of cells the same implies that the LSTM network has many more parameters than the GRU network, and hence a larger capacity to learn.
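To make the cell comparison concrete, here is a minimal sketch (not tied to any particular framework, and assuming the standard formulation with one weight block and one bias vector per gate) of how the parameter counts of a single LSTM and GRU cell compare:

```
# Rough parameter counts for single recurrent cells (weights + biases),
# assuming the standard formulations: the LSTM has 4 gate blocks, the GRU has 3.
def lstm_params(input_size, hidden_size):
    # each block: W (hidden x input) + U (hidden x hidden) + bias (hidden)
    return 4 * (hidden_size * input_size + hidden_size * hidden_size + hidden_size)

def gru_params(input_size, hidden_size):
    # same structure, but only 3 blocks (update gate, reset gate, candidate state)
    return 3 * (hidden_size * input_size + hidden_size * hidden_size + hidden_size)

m, n = 128, 256  # hypothetical input and hidden sizes
print(lstm_params(m, n))  # 394240
print(gru_params(m, n))   # 295680
```

So for the same number of cells the LSTM network carries roughly a third more parameters, which is exactly the capacity difference mentioned above.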
Here is my problem: I have 20 or so variables, each observed 1000 times over an increasing time axis. I want to calculate the weights of these input variables, and I am going to try linear regression to estimate them. Is this the correct way to start thinking about it? If I had an output variable which depends on these input variables, I could run a linear regression. But I just have 20 variables with different values at different points in time, and I want to estimate weights so that I can predict what value a variable will have at a later date (no output variable). Any help will be appreciated. Note: my dataset is a 1000×20 matrix.
I know my answer is late, but it might help others. To answer your first question: yes, each time series could be studied independently as a univariate time series, by obtaining the mean and the autocovariance function for each series. However, this approach doesn't take into account the possible dependence between the series. By fitting a linear regression to each series you are estimating the trend of the series, which does come into play when trying to fit a model for forecasting. I will outline a very general approach for univariate time series that can be extended to multivariate time series:

* Plot the series: check for trend and seasonality, changes in behaviour, outliers, etc.
* Estimate the trend: a) with a smoothing procedure such as moving averages (no estimates) or b) model the trend with a regression equation.
* "De-trend" the series. For additive models subtract the trend. For multiplicative models divide the series by the trend values.
* Determine seasonal factors. The usual method is to average the "de-trended" values for a specific season.
* Determine the random (residual) component. For an additive model: random = series - trend - seasonal. For a multiplicative model: random = series/(trend\*seasonal).
* Choose a model to fit the residuals, using the sample autocorrelation function.
* Use the residuals to forecast and then invert the transformations described above to arrive at forecasts of the original series.

You can check out [Rob J Hyndman's Forecasting: Principles and Practice](https://www.otexts.org/fpp) and [A Little Book of R for Time Series](https://a-little-book-of-r-for-time-series.readthedocs.org/en/latest/) for a better exposition. With respect to your second question, you might want to use multivariate time series, in which a vector/matrix approach is used, mostly to fit vector autoregressive (VAR) models. You can find a much better explanation in [Vector Autoregressive Models for Multivariate Time Series](http://faculty.washington.edu/ezivot/econ584/notes/varModels.pdf), and see how to use the R package vars in [VAR, SVAR, SVEC Models: Implementation within R Package vars](http://www.jstatsoft.org/article/view/v027i04). A minimal sketch of the univariate decomposition steps is also given below.
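Here is that sketch of the additive decomposition steps in Python (statsmodels; the monthly series below is synthetic and just stands in for real data):

```
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly series with trend + seasonality + noise (stand-in for real data)
rng = np.random.default_rng(0)
idx = pd.date_range("2000-01-01", periods=120, freq="MS")
y = pd.Series(0.5 * np.arange(120)                            # trend
              + 10 * np.sin(2 * np.pi * np.arange(120) / 12)  # seasonality
              + rng.normal(0, 2, 120),                        # random component
              index=idx)

# Classical additive decomposition: trend by moving average, seasonal factors by
# averaging the de-trended values per season, residual = series - trend - seasonal
result = seasonal_decompose(y, model="additive", period=12)
trend, seasonal, resid = result.trend, result.seasonal, result.resid

# The residuals are what you would then model (e.g. with an ARMA model chosen from
# their sample autocorrelation function) before inverting the steps to forecast.
print(resid.dropna().autocorr(lag=1))
```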
I've run into a simple problem - but I have no idea how to approach it correctly. I asked 85 people about their social network; first: how many friends do you have? Second: what gender is each of these friends? Then I made a table which displays the average number of female friends and the average number of male friends of each interviewed person: separately for the male and female interviewees and for all together. I got the following table: $$ \begin{array} {rrrrr} &&\text{avg no} &&\text{answers}& & \text{Sdev in}& \\ &&\text{of friends}& & & &\text{no of friends}& \\ & & m &f& m& f& m& f \\ \text{Sex of Interviewed}& M &5.57 &4.61& 54& 54& 3.543& 2.609 \\ & F &4.84 &6.42& 31& 31& 2.734& 3.264 \\ & All& 5.31& 5.27& 85& 85 &3.273& 2.978 \end{array} $$ and I see that for the male interviewees ("M") the avg number of male friends is higher than the avg number of female friends, and for the female interviewees ("F") it is the opposite. Now what test for the significance of this difference would be correct here? I could t-test the difference of means in the M and F groups separately, but this seems like a loss of information to me. Or the table suggests something like a chi-square; but how should I apply it here? [update] Another approach is to determine the percentage of male friends for each respondent, and then determine the average of these percentages for male and female respondents separately: $$ \begin{array} {cccc} &&\text{avg "% of }&N&\text{sdev of} & \\ &&\text{friends are male"}& &\text{"%..."} & \\ \text{sex of respondent}&M&52.746&54&21.087& \\ &W&40.868&31&17.235& \\ &all&48.414&85&20.487& \\ &&&&& \\ &f=7.101&&&& \\ &sig=0.009&&&& \\ \end{array} $$ The comparison of these means is significant according to the F-test - but is this a more sensible approach? [update 2] Here is another idea using the chi-square rationale. I (re-)expand the averages to sums ("sum of male/female contacts per respondent") and compute the chi-square based on the indifference table. $ \qquad \small \begin{array} {c | cc |c} \text{Sum} &\text{m} &\text{f} &\text{all} & \\ \hline \text{M} &301&249&550& \\ \text{F} &150&199&349& \\ \hline \text{all} &451&448&899& \\ \\ \\ \text{Indifference} &\text{m} &\text{f} & \\ \text{M} &275.92&274.08& \\ \text{F} &175.08&173.92& \\ \\ \\ \text{Residual} &\text{m} &\text{f} & \\ \text{M} &25.08&-25.08& \\ \text{F} &-25.08&25.08& \\ \\ \\ \text{Chisq} &\text{m} &\text{f} & \\ \text{M} &2.28&2.3& \\ \text{F} &3.59&3.62& \\ \end{array} $ $ \qquad \chi^2 =11.79$ On the other hand, here - I feel - the chi-square is "inflated" because we have such a big N (which is actually an overall sum), so the significance should be viewed critically. Then again - which is the most sensible approach? [update 3] Here I show a table using a "homophily" index: 0 means completely heterophilous, 1 means completely homophilous (in terms of same sex between the respondent and his reported friends - this requires at least one response/one friend per respondent) $ \qquad \begin{array} {c|cc|c} &&\text{avg of hom} &\text{sdev} &\text{semean} &\text{N} & \\ \hline \text{sex of respondent} &\text{M} &0.53&0.21&0.03&54& \\ &\text{F} &0.58&0.16&0.03&30& \\ \hline &\text{All} &0.55&0.19&0.02&84& \\ &&&&&& \\ &\text{f=1.297} &&&&& \\ &\text{sig=0.258} &&&&& \\ \end{array} $ Here I get another test value F and another significance level; with this I ask whether male and female respondents are differently homo/heterophilous, which is a different question than before. However, it is more precisely focused on an interesting indicator.
The semean shows that (only) female respondents seem to deviate significantly (at the 5% level) from indifference (which means hom = 0.5). One drawback, though, is that the index for each respondent is based on a different number of responses and thus is more or less reliable for each respondent. But this seems to be too sophisticated a problem here, so I think I'll stay with that type of measure. Thanks so far to all respondents here!
### First define the variables

* Participant gender: between-subjects predictor variable (two levels: male, female)
* Gender of friend count: repeated measures predictor variable (two levels: male friend, female friend)
* Friend count for a given gender: count-based outcome variable

### Choose an analytic approach

* If the dependent variable had been a normally distributed variable, I'd suggest running a **2 x 2 mixed ANOVA**.
* If you **log transformed** the counts (e.g., `log(count + 1)`) the assumptions of ANOVA might be a reasonable approximation for your purposes, although this arguably is not best practice.
* Alternatively, you could use something like **generalised estimating equations** ([I have some links to tutorials](http://jeromyanglim.blogspot.com/2009/11/generalized-estimating-equations.html); a minimal sketch is given after this answer) with a link function more suited to counts.

### Test the [homophily effect](http://en.wikipedia.org/wiki/Homophily)

With all the above approaches you will be left with two binary main effects and an interaction effect. The significance of the interaction effect will indicate whether the average of the m-m and f-f friend counts is significantly different from that of the m-f and f-m counts. Examination of the raw data or the sign of any parameter will indicate the direction of your effect (which, as you have already seen in the sample data, is a homophily effect rather than a [heterophily effect](http://en.wikipedia.org/wiki/Heterophily)).
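For the GEE option, here is that minimal sketch with statsmodels; the data frame, column names and counts are all made up for illustration (long format: one row per respondent × friend-gender combination):

```
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: respondent id, respondent sex,
# friend sex, and the corresponding friend count.
df = pd.DataFrame({
    "id":         [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "resp_sex":   ["M", "M", "M", "M", "M", "M", "F", "F", "F", "F", "F", "F"],
    "friend_sex": ["m", "f", "m", "f", "m", "f", "m", "f", "m", "f", "m", "f"],
    "count":      [6, 4, 5, 5, 7, 3, 4, 7, 5, 6, 3, 8],
})

# Poisson GEE with an exchangeable working correlation within respondents;
# the resp_sex:friend_sex interaction term is the homophily test.
model = smf.gee("count ~ resp_sex * friend_sex", groups="id", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```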
I realize this is pedantic and trite, but as a researcher in a field outside of statistics, with limited formal education in statistics, I always wonder if I'm writing "p-value" correctly. Specifically: 1. Is the "p" supposed to be capitalized? 2. Is the "p" supposed to be italicized? (Or in mathematical font, in TeX?) 3. Is there supposed to be a hyphen between "p" and "value"? 4. Alternatively, is there no "proper" way of writing "p-value" at all, and any dolt will understand what I mean if I just place "p" next to "value" in some permutation of these options?
There do not appear to be "standards". For example: * The [Nature style guide](http://www.nature.com/nature/authors/gta/#a5.6) refers to "P value" * This [APA style guide](http://my.ilstu.edu/~jhkahn/apastats.html) refers to "*p* value" * The [Blood style guide](http://bloodjournal.hematologylibrary.org/authors/stylecheckforfigs.dtl) says: + Capitalize and italicize the *P* that introduces a *P* value + Italicize the *p* that represents the Spearman rank correlation test * [Wikipedia](http://en.wikipedia.org/wiki/P-value) uses "*p*-value" (with hyphen and italicized "p") My brief, unscientific survey suggests that the most common combination is lower-case, italicized *p* without a hyphen.
How can one generate all unlabeled trees with $\le n$ nodes? That is, generate and store the [adjacency matrices](https://en.wikipedia.org/wiki/Adjacency_matrix) of those graphs? (not just [count them](http://oeis.org/A000055)) Visualization of all unlabeled trees with $\le6$ nodes: [![enter image description here](https://i.stack.imgur.com/100P3.gif)](https://i.stack.imgur.com/100P3.gif)
The set of algorithms is countably infinite. This is because each algorithm has a finite description, say as a Turing machine. The fact that an algorithm has finite description allows us to input one algorithm into another, and this is the basis of computability theory. It allows us to formulate the halting problem, for example.
[Andrew More](http://www.cs.cmu.edu/~awm/) [defines](http://www.autonlab.org/tutorials/infogain11.pdf) information gain as: $IG(Y|X) = H(Y) - H(Y|X)$ where $H(Y|X)$ is the [conditional entropy](http://en.wikipedia.org/wiki/Conditional_entropy). However, Wikipedia calls the above quantity [mutual information](http://en.wikipedia.org/wiki/Mutual_information). Wikipedia on the other hand defines [information gain](http://en.wikipedia.org/wiki/Information_gain) as the Kullback–Leibler divergence (aka information divergence or relative entropy) between two random variables: $D\_{KL}(P||Q) = H(P,Q) - H(P)$ where $H(P,Q)$ is defined as the [cross-entropy](http://en.wikipedia.org/wiki/Cross_entropy). These two definitions seem to be inconsistent with each other. I have also seen other authors talking about two additional related concepts, namely differential entropy and relative information gain. What is the precise definition or relationship between these quantities? Is there a good text book that covers them all? * Information gain * Mutual information * Cross entropy * Conditional entropy * Differential entropy * Relative information gain
Both definitions are correct and consistent. I'm not sure what you find unclear, as you point out multiple points that might need clarification.

**Firstly**: $MI$ (mutual information), $IG$ (information gain) and $I$ (information) are all different names for the same thing. In different contexts one of these names may be preferable; I will call it hereon [**Information**](https://en.wikipedia.org/wiki/Mutual_information).

The **second** point is the relation between the [Kullback–Leibler divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) $D\_{KL}$ and **Information**. The Kullback–Leibler divergence is simply a measure of dissimilarity between two distributions. **Information** can be defined in terms of distributions' dissimilarity (see Yters' response). So Information is a special case of $D\_{KL}$, where $D\_{KL}$ is applied to measure the difference between the actual joint distribution of two variables (which captures their **dependence**) and the hypothetical joint distribution of the same variables, were they to be **independent**. We call that quantity **Information**.

The **third** point to clarify is the inconsistent, though standard, **notation** being used, namely that $\operatorname{H}(X,Y)$ is both the notation for [**joint entropy**](https://en.wikipedia.org/wiki/Joint_entropy) and for [**cross-entropy**](https://en.wikipedia.org/wiki/Cross_entropy). So, for example, in the definition of **Information** (from the [Wikipedia article](https://en.wikipedia.org/wiki/Mutual_information)):

$$\begin{aligned}\operatorname{I}(X;Y)&\equiv \operatorname{H}(X)-\operatorname{H}(X|Y)\\&\equiv \operatorname{H}(Y)-\operatorname{H}(Y|X)\\&\equiv \operatorname{H}(X)+\operatorname{H}(Y)-\operatorname{H}(X,Y)\\&\equiv \operatorname{H}(X,Y)-\operatorname{H}(X|Y)-\operatorname{H}(Y|X)\end{aligned}$$

in the last two lines, $\operatorname{H}(X,Y)$ is the **joint** entropy. This may seem inconsistent with the definition on the [information gain](http://en.wikipedia.org/wiki/Information_gain) page, $D\_{KL}(P||Q)=H(P,Q)−H(P)$, but you did not fail to quote the important clarification: $\operatorname{H}(P,Q)$ is used there as the **cross**-entropy (as is also the case on the [cross entropy](https://en.wikipedia.org/wiki/Cross_entropy) page).

**Joint** entropy and **cross**-entropy are **NOT** the same. Check out [this](https://math.stackexchange.com/questions/2505015/relation-between-cross-entropy-and-joint-entropy) and [this](https://stats.stackexchange.com/questions/373098/difference-of-notation-between-cross-entropy-and-joint-entropy) where this ambiguous notation is addressed and a unique notation for cross-entropy, $H\_q(p)$, is offered. I would hope to see this notation accepted and the wiki pages updated.
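To make the second point concrete, here is a small numerical sketch (plain numpy, discrete variables with a made-up joint distribution) showing that the Information $I(X;Y)$ equals $D\_{KL}$ between the joint distribution and the product of the marginals:

```
import numpy as np

# A small joint distribution p(x, y) for two discrete variables
p_xy = np.array([[0.30, 0.10],
                 [0.15, 0.45]])
p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of X
p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of Y

# Information as a KL divergence: I(X;Y) = D_KL( p(x,y) || p(x) p(y) )
mi_as_kl = np.sum(p_xy * np.log(p_xy / (p_x * p_y)))

# Same quantity via entropies: I(X;Y) = H(X) + H(Y) - H(X,Y)  (joint entropy)
H = lambda p: -np.sum(p * np.log(p))
mi_as_entropies = H(p_x) + H(p_y) - H(p_xy)

print(mi_as_kl, mi_as_entropies)  # identical up to floating point error
```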
I have an interval of integers and I need to find all unique cuboids whose volume falls within that interval. I came up with a loop that goes over all unique combinations of 3 numbers (the dimensions of the cuboid) (1x1x1, 1x1x2, ...; 2x1x1 is considered the same as 1x1x2) from 1 to the upper end of the interval, and then checks whether the calculated volume falls within the interval. This solution works perfectly if the upper end isn't too large, but if the interval ends in the thousands the solution becomes very slow. I am not as interested in code as in an algorithm for solving this differently. How would you go about solving this?
**Hint.** Once you've chosen the first two dimensions, it's trivial to calculate what (if any) range of values for the third dimension give a volume within your interval.
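A sketch of how that hint plays out (Python, counting unordered triples $a \le b \le c$; enumerating them works the same way):

```
# For each (a, b) with a <= b, the admissible third dimensions c >= b form a
# contiguous range, so we just count that range instead of testing every c.
def count_cuboids(lo, hi):
    """Number of unordered triples a <= b <= c with lo <= a*b*c <= hi."""
    total = 0
    a = 1
    while a * a * a <= hi:
        b = a
        while a * b * b <= hi:
            c_min = max(b, -(-lo // (a * b)))   # ceil(lo / (a*b)), but at least b
            c_max = hi // (a * b)               # floor(hi / (a*b))
            if c_max >= c_min:
                total += c_max - c_min + 1
            b += 1
        a += 1
    return total

print(count_cuboids(1, 1000))
```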
I have a genetic algorithm for an optimization problem. I plotted the running time of the algorithm over several runs on the same input and with the same parameters (population size, generation size, crossover, mutation). The execution time changes between executions. Is this normal? I also noticed that, against my expectation, the running time sometimes decreases instead of increasing when I run it on a larger input. Is this expected? How can I analyze the performance of my genetic algorithm experimentally?
Answer: *you analyse performance statistically.* For example, see **figure 3** of this paper: [A Building-Block Royal Road Where Crossover is Provably Essential](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.152.7945), where the performance of various GAs is compared. The plot shows changes in fitness (Y-axis) vs iteration number (X-axis). Each algorithm is run multiple times and the **average, min and max fitness are shown in the plot**, clearly showing that some GA variations perform better than others. The **asymptotic convergence of fitness** over iterations, as suggested by vzn's answer, is also very useful in most cases (except when fitness doesn't converge because you have an evolving fitness function). A sketch of this kind of multi-run analysis is given below.
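Here is that sketch in Python; `run_ga` is a stand-in for your own algorithm and should return the best fitness per generation:

```
import numpy as np
import matplotlib.pyplot as plt

def run_ga(seed, n_generations=100):
    """Placeholder for your GA: must return the best fitness at each generation."""
    rng = np.random.default_rng(seed)
    # stand-in: a noisy, non-decreasing fitness curve
    return np.maximum.accumulate(rng.normal(0, 1, n_generations).cumsum())

runs = np.array([run_ga(seed) for seed in range(30)])   # 30 independent runs

gens = np.arange(runs.shape[1])
plt.plot(gens, runs.mean(axis=0), label="mean best fitness")
plt.fill_between(gens, runs.min(axis=0), runs.max(axis=0), alpha=0.3,
                 label="min-max over runs")
plt.xlabel("generation")
plt.ylabel("best fitness")
plt.legend()
plt.show()
```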
(I'm asking this question for a friend, honest...)

> Is there an easy way to convert from an SPSS file to a SAS file which preserves the formats AND labels? Saving as a POR file gets me the labels (I think) but not the formats. I tried to save to a SAS7dat file but it didn't work. Thanks,
I would just suggest they write syntax to relabel and reformat the variables. You can use the command `display dictionary.` in PASW (aka SPSS) to output the dictionary in a table from which you can copy and paste the variable names and labels. Looking at this [example](http://www.ats.ucla.edu/stat/sas/modules/labels.htm) of making SAS labels, it should be as simple as pasting the text in the appropriate place. Formats may be slightly harder, but I could likely give a suggestion if pointed to a code sample of formats in SAS (if copy and paste from the display dictionary command won't suffice for value labels or data formats).
I have a dataset and have the option to apply either a GLM (simple) or a Random Forest (ensemble). So far the Random Forest is giving far better results than the GLM. Since it is generally believed that ensemble models should not be used unless necessary, I am looking for any analysis I could perform on the dataset to show that the only way (or a clearly better way) to model the relationship between the variables in this dataset is with an ensemble model such as a Random Forest.
Shift-invariance: this means that if we shift the input in time (or shift the entries in a vector) then the output is shifted by the same amount <http://pillowlab.princeton.edu/teaching/mathtools16/slides/lec22_LSIsystems.pdf>
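A quick numerical illustration (a sketch with numpy and a made-up impulse response): convolving a shifted input with a fixed impulse response yields the original output shifted by the same amount.

```
import numpy as np

h = np.array([0.5, 0.25, 0.25])                          # impulse response of an LSI system
x = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # input with trailing zeros

y = np.convolve(x, h)                 # response to x
x_shifted = np.roll(x, 2)             # x delayed by 2 samples (no wrap-around thanks to the zeros)
y_shifted = np.convolve(x_shifted, h) # response to the shifted input

print(np.allclose(y_shifted[2:], y[:-2]))   # True: the output is shifted by the same 2 samples
```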
My first intuition was to use a t-test, but after searching my memory and then Wikipedia I could not find a conclusive answer... In my course I want to talk about the rationale of the **t-test** in comparison to the **chi-square** (with which I compare the distribution of men/women in my sample to an expected/theoretical distribution). So I want to ask: *are the empirical means of body heights for men and women the same as the expected/theoretical means in the population?* and thought I could simply carry over the rationale of the **chi-square** to the **t-test**. But after reading some simple sources I can't yet see how I could apply that test. Or do I need something else? *(disclaimer: I have no question about the t-test as a test for the significance of the difference of means of two groups in my sample, say: whether the means of body heights of men and women are significantly different in an assumed population - that's not my question)* [Update]: According to Peter I could t-test the equality of the means between sample groups and expectations for the men and for the women separately. However, then I have **two** probabilities, whereas I want **one** (for the **combined** result). Moreover, the focus of the motivating question is more conceptual: in an introductory course I want to step from the explanation of the chi-square as a test where I compare the empirical frequency table with an expected/theoretical one, towards the same concept concerning the means instead of the frequencies. So to say: to introduce a measure for the likelihood that a set of parameters (here: **means**) in the sample is the same as in the expectation/population. I thought the t-test would be the "natural" candidate for this - that is why I asked in the title "what is the required test (...)?" - I assumed the t-test was the "natural" candidate here...
You can do this with the t-test, for men and women separately. Say the assumed population height for men is 70 inches and your vector of heights for men is MHt. Then, in R ``` Meq70 <- t.test(MHt, mu = 70) ``` is all you need, and similar for women. If you don't have R, you can just subtract 70 from each male height and do the usual one sample t-test to see if the result of that is 0.
On [this psychometrics website](https://assessingpsyche.wordpress.com/2014/07/10/two-visualizations-for-explaining-variance-explained/) I read that > > [A]t a deep level variance is a more fundamental concept than the > standard deviation. > > > The site doesn't really explain further why variance is meant to be more fundamental than standard deviation, but it reminded me that I've read some similar things on this site. For instance, in [this comment](https://stats.stackexchange.com/questions/35123/whats-the-difference-between-variance-and-standard-deviation#comment366251_35124) @kjetil-b-halvorsen writes that "standard deviation is good for interpretation, reporting. For developing the theory the variance is better". I sense that these claims are linked, but I don't really understand them. I understand that the square root of the sample variance isn't an unbiased estimator of the population standard deviation, but surely there must be more to it than that. Maybe the term "fundamental" is too vague for this site. In that case, perhaps we can operationalize my question as asking whether variance is more important than standard deviation from the viewpoint of developing statistical theory. Why/why not?
Variance is defined by the first and second *moments* of a distribution. In contrast, the standard deviation is more like a "norm" than a moment. Moments are fundamental properties of a distribution, whereas norms are just ways to make a distinction.
Monadic First Order Logic, also known as the Monadic Class of the Decision Problem, is where all predicates take one argument. It was shown to be decidable by Ackermann, and is [NEXPTIME-complete](http://www.sciencedirect.com/science/article/pii/0022000080900276). However, problems like SAT and SMT have fast algorithms for solving them, despite the theoretical bounds. I'm wondering, is there research analogous to SAT/SMT for monadic first order logic? What is the "state of the art" in this case, and are there algorithms which are efficient in practice, despite hitting the theoretical limits in the worst case?
I found signs that such a decision procedure was implemented in the (general purpose) theorem prover [SPASS](https://www.mpi-inf.mpg.de/departments/automation-of-logic/software/spass-workbench/classic-spass-theorem-prover/). In particular see the thesis of Ann-Christin Knoll, [On Resolution Decision Procedures for the Monadic Fragment and Guarded Negation Fragment.](https://studentnet.cs.manchester.ac.uk/resources/library/thesis_abstracts/MSc15/FullText/Knoll-AnnChristin-diss.pdf) This implements what you want, though I couldn't find the implementation online.
I'm currently building a model to predict early mortgage delinquency (60+ days delinquent within 2 years of origination) for loans originating in 2018Q1. I will eventually train out-of-time (on loans originating in 2015Q4), but for now I'm just doing in-time training (training & testing on 2018Q1) -- and even this I've found challenging. **The dataset contains ~400k observations, of which ~99% are non-delinquent and ~1% are delinquent.** My idea so far has been to use precision, recall, and $F\_1$ as performance metrics. I am working in Python. Things I've tried:

* Models: logistic regression & random forest.
* Model selection: GridSearchCV to tune hyperparameters with $F\_1$ scoring (results were not significantly different when optimizing for log-loss, ROC-AUC, or Cohen's kappa).
* Handling imbalanced data: I tried random undersampling with various ratios and settled on a ratio of ~0.2. I also tried messing with the class weights parameter.

**Unfortunately, my validation & testing $F\_1$ scores are only around 0.1 (precision & recall are usually both close to 0.1). This seems very poor, since with many problems you can achieve $F\_1$ scores of 0.9+.** At the same time I've heard there's no such thing as a "good $F\_1$" range, i.e. it is task-dependent. Indeed, a dummy classifier which predicts proportionally to the class frequencies only achieves precision, recall, and $F\_1$ of 0.01. I've tried to find references on what a "good" score for this type of task is, but I can't seem to find much. Others often report ROC-AUC or the Brier score, but I think these are hard to interpret in terms of business value added. Some report $F\_1$ but see overly optimistic results due to data leakage or reporting test performance on undersampled data. Finally, I've seen some people weight confusion matrix results by expected business costs as opposed to reporting $F\_1$, which seems like it may be a better route. **My questions are: (1) is an $F\_1$ score of 0.1 always bad? (2) does it even make sense to optimize for $F\_1$, or should I use another metric? (3) if $F\_1$ is appropriate and a score of 0.1 is bad, how might I improve my performance?**
**From a credit scoring point of view: an $F\_1$ score of $0.1$ seems pretty bad but not impossible with an unbalanced data-set.** It might be enough for your needs (once you weight your errors by their cost), and it might not be possible to go higher (not enough data to predict an event that appears random). In credit scoring there is always a 'random' part in the target (sudden death, divorce...) depending on the population and the purpose of the loans.

1. **You might want to investigate your features and your target.** Basically: statistically, with a univariate approach, do you have features that appear predictive of the target? (Age of the person? Income? Purpose of the loan?) You might also need to investigate the target: do you have some questionnaire that would give insight into why the person defaulted? (If the majority of defaults come from random events, you might not be able to model it.)
2. **The main problem with the $F\_1$ score in credit scoring is not data imbalance, but cost imbalance.** Type I and Type II errors have very different consequences. Given that you have already granted the loans, I am not even sure there is a cost associated with a false positive (saying someone will default when they won't). It might be interesting to weight precision and recall differently (i.e. use $F\_\beta$ as defined [here](https://en.wikipedia.org/wiki/F-score)); a small sketch follows below. **Another problem is that $F\_1$ is really suited to a binary decision.** Depending on what you want to use the model for (measuring risk of already granted loans? granting new loans? pricing new loans?) there might be alternatives that better capture model discrimination (AUC - see its statistical interpretation) or individual probability of default (Brier score).
3. Assuming there is no specific problem with your current modelling (feature engineering, imbalance treatment, 'power' of your model), there are some credit-scoring-specific things you can do. **Work on your target definition** (what if you use 90+ days delinquent within 5 years of origination?). **Try to collect more data** about your clients and their behavior (purpose of the loan, other products they use at your bank, etc.).
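For point 2, here is that minimal sketch of weighting recall more heavily than precision, and of costing the confusion matrix directly (labels and costs below are made up for illustration):

```
import numpy as np
from sklearn.metrics import fbeta_score, confusion_matrix

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 1, 0, 0, 1, 0])

# beta > 1 weights recall (catching defaulters) more heavily than precision
print(fbeta_score(y_true, y_pred, beta=2))

# Cost-weighted view of the confusion matrix, with hypothetical per-error costs
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
cost_fp, cost_fn = 1.0, 20.0
print(fp * cost_fp + fn * cost_fn)
```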
I know that the CPU has a program counter which fetches, one by one, the instructions required to execute a program from memory. I also know that once the first instruction is executed, the program counter automatically increments by 1 and accesses the data in the cell with the corresponding address. Now my question is, what if the next instruction in the memory cell, i.e. the ***instruction in the memory cell after incrementing the counter by 1, is not the instruction required to execute the given program?*** And what if the next instruction that is required is in a memory cell whose address ***can only be reached after 'n' increments?*** If it helps, [I used this video as a reference](https://youtu.be/ccf9ngGIb8c) EDIT: If it isn't clear, what I'm trying to ask is this: Suppose that to execute a given program I need 5 instructions - A, B, C, D, E. Now suppose instruction A is loaded in a memory cell with address 0000H. So when the program counter reads 0000, it takes the instruction from 0000H, and when the counter reads 0001, it takes the instruction from 0001H, and so on. Now what if instruction C is in 0007? After 0001H the program counter would increment to 0002, but I don't need the instruction from 0002H. I need it from 0007. So what does the counter do in such a situation?
Code isn't arranged randomly in memory. The next instruction to be executed will, by default, be the one at the next memory location, unless something specific (a "jump" instruction) is done to execute code from somewhere else. So, if your program is literally "Do A, then B, then C, then D, then E", the compiler will place those instructions in consecutive memory cells. If, for some reason, they were in non-consecutive memory cells, the program would have to become something like "Do A, then jump to B's location, then do B, then jump to C's location, etc." The computer has no way of knowing that it's not supposed to execute the next instruction in memory, except being told to do that. And being told to do that requires the execution of some kind of instruction.
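If it helps, here is a toy sketch (Python, with a made-up instruction set) of the idea: the program counter just increments by default, and only an explicit jump instruction loads it with a different address.

```
# Toy fetch-decode-execute loop: the program counter normally increments,
# and only a JMP instruction makes it point somewhere else.
program = {
    0x0000: ("A", None),
    0x0001: ("B", None),
    0x0002: ("JMP", 0x0007),   # the next needed instruction lives at 0x0007
    0x0007: ("C", None),
    0x0008: ("D", None),
    0x0009: ("E", None),
    0x000A: ("HALT", None),
}

pc = 0x0000
while True:
    op, arg = program[pc]
    if op == "HALT":
        break
    if op == "JMP":
        pc = arg               # jump: the program counter is loaded with the target
        continue
    print(f"executing {op} at {pc:#06x}")
    pc += 1                    # default: fall through to the next memory cell
```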
I expect there may be no definitive answer to this question. But I have used a number of machine learning algorithms in the past and am trying to learn about Bayesian networks. I would like to understand under what circumstances, or for what types of problems, you would choose to use a Bayesian network over other approaches.
Bayesian Networks (BN's) are generative models. Assume you have a set of inputs, $X$, and output $Y$. BN's allow you to learn the joint distribution $P(X,Y)$, as opposed to let's say logistic regression or Support Vector Machine, which model the conditional distribution $P(Y|X)$. Learning the joint probability distribution (generative model) of data is more difficult than learning the conditional probability (discriminative models). However, the former provides a more versatile model where you can run queries such as $P(X\_1|Y)$ or $P(X\_1|X\_2=A, X\_3=B)$, etc. With the discriminative model, your sole aim is to learn $P(Y|X)$. BN's utilize DAG's to prescribe the joint distribution. Hence they are graphical models. **Advantages:** 1. When you have a lot of missing data, e.g. in medicine, BN's can be very effective since modeling the joint distribution (i.e. your assertion on how the data was generated) reduces your dependency in having a fully observed dataset. 2. When you want to model a domain in a way that is visually transparent, and also aims to capture $\text{cause} \to \text{effect}$ relationships, BN's can be very powerful. Note that the causality assumption in BN's is open to debate though. 3. Learning the joint distribution is a difficult task, modeling it for discrete variables (through the calculation of conditional probability tables, i.e. CPT's) is substantially easier than trying to do the same for continuous variables though. So BN's are practically more common with discrete variables. 4. BN's not only allow observational inference (as all machine learning models allow) but also [causal intervention](http://www.cogsci.ucsd.edu/~ajyu/Readings/pearl_causal.pdf)s. This is a commonly neglected and underappreciated advantage of BN's and is related to counterfactual reasoning.
Say I'd like to transmit a 100-bit packet which has a field containing a continuous value. I'd like to protect this value with an error correction code, but an error in the MSB of this value is much more catastrophic than an error in the LSB. How should I design the code so that it is optimal in terms of protection (more redundant bits for the important parts)? For example, let's assume my continuous field is temperature. My problem is that the temperature is represented by X bits, but they are not equally important. If the temperature is 23.452298 degrees then I can't risk an error in the integer part (23), but if I get an error in the fractional part I'll be able to live with it (though I'd prefer to protect it too, with fewer "protection" bits).
This is only possible if there are many admissible outputs for a given input. I.e., when the relation $R$ is not a function because it violates uniqueness. For instance, consider this problem: > > Given $n \in \mathbb{N}$ (represented in unary) and a TM $M$, produce another TM $N$ such that $L(M)=L(N)$ and $\# N > n$ (where $\# N$ stands for the encoding (Gödel number) of $N$ into a natural number) > > > Solving this is trivial: keep adding a few redundant states to the TM $M$, possibly with some dummy transitions between them, until its encoding exceeds $n$. This is a basic repeated application of the Padding Lemma on TMs. This will require $n$ paddings, each of which can add one state, hence it can be done in polynomial time. On the other hand, given $n,M,N$ it is undecidable to check if $N$ is a correct output for the inputs $n,M$. Indeed, checking $L(M)=L(N)$ is undecidable (apply the Rice theorem), and the constraint $\#N > n$ only discards finitely many $N$s from those. Since we remove a finite amount of elements from an undecidable problem, we still get an undecidable problem. You can also replace the undecidable property $L(M)=L(N)$ to obtain variations which are still computable but NP hard/complete. E.g. given $n$ (in unary) it is trivial to compute a graph $G$ having a $n$-clique inside. But given $n,G$ it is hard to check whether a $n$-clique exists.
Is it possible to use [Dependent Types](http://en.wikipedia.org/wiki/Dependent_type) in the existing [Typed Racket](http://docs.racket-lang.org/ts-guide/) implementation? (ie do they exist in it?) Is it reasonably possible to implement a Dependent Types System using Typed Racket?
Dependent Types in Racket are being worked on by Andrew Kent at Indiana University. There is a set of [slides](https://github.com/pnwamk/talks/raw/762c22898cba7dfd9d21d37ca8e16e3f6b98bed5/iu-pl-wonks/wonks-apr2015.pdf). There is a [talk](https://www.youtube.com/watch?v=ZK0WtcppZuA). Of interest, this [potentially also impacts Typed Clojure](https://twitter.com/ambrosebs/status/589259830881300480), which is strongly modeled on Typed Racket.
One assumption for regression analysis is that $X$ and $Y$ are not intertwined. However, when I think about it, it seems to me that such an analysis can still make sense. Here is an example. If we have a test with 3 sections (A, B and C), the overall test score is equal to the sum of the individual scores for the 3 sections. Now it makes sense to say that $X$ can be the score in section A and $Y$ the overall test score. Then the linear regression can answer this question: what is the variability in the overall test score that is attributable to section A? Here, several scenarios are possible:

1. Section A is the hardest of the 3 sections and students always score lowest on it. In such a case, intuitively $R^2$ would be low, because most of the overall test score would be determined by B and C.
2. Section A was very easy for students. In this case also the correlation would not be high, because students always score 100% on this section and therefore it tells us nothing about the overall test score.
3. Section A has intermediate difficulty. In this case the correlation would be stronger (but this also depends on the other scores, B and C).

Another example is this: we analyze the total content of a trace element in urine, and we independently analyze the individual species (chemical forms) of that trace element in urine. There can be many chemical forms, and if our analyses are correct, the sum of the chemical forms should give us the same as the total content of the element (analyzed by a different technique). However, it makes sense to ask whether one chemical form is correlated with the total element content in urine, as this total content is an indicator of the total intake of that element from food. Then, if we say that $X$ is the total element in urine and $Y$ is chemical form A in urine, then by studying the correlation we can explore whether this chemical form is the major contributor to the overall variability or not. It seems to me that this makes sense sometimes even when $X$ and $Y$ are not independent, and that it can in some cases help answer scientific questions. Would you think $R^2$ can be useful or meaningful in the examples above? If we consider the test score example above, I would expect about a 33% contribution from each section had the difficulty been exactly the same for the students. But in practice this is not necessarily true. So I was thinking that maybe regression analysis can help us know the true variability attributed to each section of an exam. So it seems to me that $R^2$ would be meaningful even though we already know the null hypothesis is not true. Are there alternative modified regression methods to account for such situations and provide us with meaningful parameters?
If X is one of several variables that sum to define Y, then clearly the assumptions of linear regression are broken. The P values won't be useful. The slopes and their confidence intervals can't be interpreted in the usual way. But is $R^2$ still useful? I suppose it is as a descriptive statistic. If you have three $R^2$ values quantifying the correlation between Y and each of its three components, I suppose you'd learn something interesting by seeing the relative values of $R^2$.
I have a classification problem with 60 data points in a 2-dimensional feature space. The data is divided into 2 classes. Earlier I was using the Statistics Toolbox of Matlab, and it was giving me fairly good results: 1 false negative and no false positives. I used the following code:

```
SVMstruct = svmtrain(point(1:60,:),T(1:60),'Kernel_Function','polynomial','polyorder',11,'Showplot',true);
```

I am using a polynomial kernel with polynomial order 11. [![Boundaries generated after training using svmtrain from Matlab](https://i.stack.imgur.com/XSWKG.jpg)](https://i.stack.imgur.com/XSWKG.jpg) But when I use the same kernel configuration in scikit-learn's SVC it does not give the same result; rather it gives a very undesirable result, classifying all points into a single class. [![Boundaries generated after training using SVC from scikit-learn in Python](https://i.stack.imgur.com/ftFJj.png)](https://i.stack.imgur.com/ftFJj.png) I am using it as

```
svc = svm.SVC(kernel='poly', degree=11, C=10)
```

I have tried many values of C too, with no major difference. Why is there so much difference in the results? How can I get the same result as I got with Matlab? For me it is compulsory to do this with Python and scikit-learn.
You have to be sure that the algorithm is the same and that the kernel functions are really the same. If you look at this documentation page for [kernels in scikit-learn](http://scikit-learn.org/stable/modules/svm.html#svm-kernels) you will see a description of the poly kernel. Notice that you have a gamma and a degree. Gamma is by default 'auto', which is evaluated as 1/n\_features. For the same kernel you have 'coef0' (a great name for a parameter), which is used in poly as the free term. I do not know what defaults Matlab uses for these values, but the usual formula for the poly kernel in the literature I found to be $poly(x\_1, x\_2) = (\langle x\_1,x\_2\rangle + 1)^{d}$, so no gamma and a free term of $1$. I think Matlab uses that. (Anyway, I found the 'improvements' in scikit-learn to have a not so good smell.) Also, on this [SVC documentation page](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) they state that there is a parameter called shrinking. I really do not know its effect, but it is enabled by default; that might be an issue. **Later edit**: I found this documentation page for [svm in Matlab](http://uk.mathworks.com/discovery/support-vector-machine.html) which describes the kernel in the way I stated (no gamma, free term of $1$). It also states that 'SMO' is used by default, so make sure you use 'SMO' in Python as well. On the other hand, you have to understand that these kinds of algorithms are solved by optimization methods which are usually iterative, and to save some memory or cycles their implementations can differ in small details, which will almost always produce slightly different results. I agree, however, that the results should be similar. A minimal example of matching these kernel settings is given below.
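Concretely, a minimal sketch of pinning the scikit-learn poly kernel to the $(\langle x\_1,x\_2\rangle + 1)^d$ form (the training call is commented out since it needs the question's own data):

```
from sklearn import svm

# scikit-learn's poly kernel is (gamma * <x1, x2> + coef0) ** degree.
# To reproduce the textbook / Matlab-style kernel (<x1, x2> + 1) ** d,
# fix gamma and coef0 to 1 instead of relying on the defaults.
svc = svm.SVC(kernel='poly', degree=11, gamma=1.0, coef0=1.0, C=10)
# svc.fit(point[:60], T[:60])   # with the same training data as in the question
```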
I am running a `summary(aov(...))` in R and I got this message: ``` Estimated effects may be unbalanced ``` What does it mean? How may I solve this problem?
`aov` is designed for balanced data ([link](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/aov.html)). Balanced design is: An experimental design where all cells (i.e. treatment combinations) have the same number of observations ([link](http://en.wikipedia.org/wiki/Analysis_of_variance)).
Basically, I currently have two ideas but am unsure which is correct for the following question: "The High-level Data Link Control protocol (HDLC) is a popular protocol used for point-to-point data communication. In HDLC, data are organised into frames which begin and end with the sequence 01111110. This sequence never occurs within the main body of the frame, only at the beginning and end (in order to avoid confusion). a) Design an NFA which recognises the language of binary strings which contain one or more HDLC frames." My possible solutions: [![enter image description here](https://i.stack.imgur.com/ejT9w.png)](https://i.stack.imgur.com/ejT9w.png) The next part is to convert to a DFA, but I first need to get this part right.
"Containing 01111110" is easily handled directly with a DFA. Start with something that just recognizes that string; then, for each state, work out where a mismatch should send you back to (e.g., if a 0 shows up before you have seen six 1s, that 0 can itself start a new attempt, so you fall back to the "just read a 0" state; if you read a seventh 1 instead of the closing 0, you have to start over at the beginning). After you've reached your target, anything goes.
I'm reading through "[An Introduction to Statistical Learning](http://www-bcf.usc.edu/%7Egareth/ISL/)" . In chapter 2, they discuss the reason for estimating a function $f$. > > **2.1.1 Why Estimate $f$?** > > > There are two main reasons we may wish to estimate *f* : *prediction* and *inference*. We discuss each in turn. > > > I've read it over a few times, but I'm still partly unclear on the difference between prediction and inference. Could someone provide a (practical) example of the differences?
**Prediction** uses the estimated **f** to forecast into the future. Suppose you observe a variable $y\_t$, maybe the revenue of a store. You want to make financial plans for your business, and need to forecast the revenue in the next quarter. You suspect that the revenue depends on the income of the population in this quarter, $x\_{1,t}$, and the time of the year, $x\_{2,t}$. So, you posit that it is a function: $$y\_t=f(x\_{1,t-1},x\_{2,t-1})+\varepsilon\_t$$ Now, if you get the data on income, say the personal disposable income series from the BEA, and construct the time-of-year variable, you may estimate the function **f**, then plug the latest values of the population income and the time of the year into this function. This will yield the prediction of the store's revenue for the next quarter. **Inference** uses the estimated function **f** to study the impact of the factors on the outcome, and to do other things of this nature. In my earlier example you might be interested in how much the season of the year determines the revenue of the store. So, you could look at the partial derivative $\partial f/\partial x\_{2t}$ - the sensitivity to the season. If **f** were in fact a linear model, this would be the regression coefficient $\beta\_2$ in the term $\beta\_2x\_{2,t-1}$. Prediction and inference may use the same estimation procedure to determine **f**, but they place different requirements on this procedure and the incoming data. A well-known case is so-called *collinearity*, where your input variables are highly correlated with each other. For instance, you measure the weight, height and belly circumference of obese people. It is likely that these variables are strongly correlated, though not necessarily linearly. It turns out that *collinearity* can be a serious issue for *inference*, but merely an annoyance for *prediction*. The reason is that when predictors $x$ are correlated, it is harder to separate the impact of one predictor from the impact of the others. For prediction this doesn't matter; all you care about is the quality of the forecast.
I was reading Operating Systems by Galvin and came across the below line, > > Not all unsafe states are deadlock, however. An unsafe state may lead > to deadlock > > > Can someone please explain how *deadlock != unsafe* state ? I also caught the same line [here](http://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/7_Deadlocks.html) > > If a safe sequence does not exist, then the system is in an unsafe > state, which MAY lead to deadlock. ( All safe states are deadlock > free, but not all unsafe states lead to deadlocks. ) > > >
A safe state is deadlock-free for sure, but if you cannot fulfill all the requirements to prevent deadlock, a deadlock might still occur. For example, two threads may deadlock when they start in the order A then B, but work fine when they start in the opposite order (B, A) - let me assume B is nicer ;) The state of the system is unsafe, but with a fortunate starting sequence it will keep working: no deadlock, but one is possible. If you synchronize them by hand - start them in a good order - it is still hazardous: for some reason they might not be started as you like, so the system is still unsafe (because a deadlock is possible), but there is a low probability of hitting it. In the case of some external events like freezing threads, or interrupts after which execution continues, it may fail. You have to realise that a safe state is a sufficient condition to avoid deadlock, but an unsafe state is only a necessary condition for deadlock. It is hard to write code off the top of my head right now, but I can search for some. I once encountered code in Ada that more than 99 times out of 100 worked perfectly for several weeks (and then stopped due to a server restart, not a deadlock), but once in a while it crashed into a deadlock after a few seconds. Let me add an easy comparison with division: if your function divides c / d and returns the result without checking whether d equals 0, there might be a division-by-zero error, so the code is unsafe (same naming intended); but until you actually perform such a division, everything is fine. After theoretical analysis, though, the code is unsafe and might fall into undefined behaviour that is not handled properly.
I am reading a book that on one page talks about the cdf of a random vector. This is from the book: > > Given $X=(X\_1,...,X\_n)$, each of the random variables $X\_1, ... ,X\_n$ can be characterized from a probabilistic point of view by its cdf. > > > However the cdf of each coordinate of a random vector does not completely describe the probabilistic behaviour of the whole vector. For instance, if $U\_1$ and $U\_2$ are two independent random variables with the same cdf $G(x)$, the vectors $X=(X\_1, X\_2)$ defined respectively by $X\_1=U\_1$, $X\_2=U\_2$ and $X\_1=U\_1$, $X\_2=U\_1$ have each of their coordinates with the same cdf, and they are quite different. My question is: in the very last paragraph, it says $U\_1$ and $U\_2$ come from the same cdf, and then they define $X=(X\_1, X\_2)$, but they say $X=(X\_1, X\_2)$ is different from $X=(X\_1, X\_1)$. I don't really understand why the two $X$ are different (i.e. why $X=(X\_1, X\_2)$ and $X=(X\_1, X\_1)$ are different). Isn't $X\_1$ the same as $X\_2$? So does it matter whether you put two copies of $X\_1$ to form $X=(X\_1, X\_1)$ or put one $X\_1$ and one $X\_2$ to form $X=(X\_1, X\_2)$? Shouldn't they be the same? Why does the author say they are "quite different"? Could someone explain why they are different?
Random objects can have the same distribution and be almost surely different. Take a look: [Can two random variables have the same distribution, yet be almost surely different?](https://stats.stackexchange.com/questions/24938/can-two-random-variables-have-the-same-distribution-yet-be-almost-surely-differ/24939#24939)
An additive model constructed using the exponential loss function $$L(y, f (x)) = \exp(−yf (x))$$ gives AdaBoost. How can we derive the corresponding additive model (known as LogitBoost) using the logistic loss function $$L(y, f (x)) = \log(1 + \exp(−yf (x)))$$? What steps should I take to do this derivation?
Just an extended comment, for those who didn't notice that *"This guy"* in the question is not the author of the linked blog, but refers to [Gregory Chaitin](http://en.wikipedia.org/wiki/Gregory_Chaitin). The sentence is from the lecture: *A Century of Controversy over the Foundations of Mathematics*; the transcription can be found [here](http://www.cs.auckland.ac.nz/~chaitin/lowell.html). It seems interesting (I'm going to read it now)! > > ... > > Okay, I'd like to talk about some crazy stuff. The general idea is that sometimes ideas are very powerful. I'd like to talk about theory, about the computer as a concept, a philosophical concept. > > > > We all know that the computer is a very practical thing out there in the real world! It pays for a lot of our salaries, right? But what people don't remember as much is that really---I'm going to exaggerate, but I'll say it---the computer was invented in order to help to clarify a question about the foundations of mathematics, a philosophical question about the foundations of mathematics. > > > > Now that sounds absurd, but there's some truth in it. There are actually lots of threads that led to the computer, to computer technology, which come from mathematical logic and from philosophical questions about the limits and the power of mathematics. > ... >
We have to construct a DFA over the alphabet {0, 1} for: > > Every substring of four symbols has at most two 0's. For example, 001110 and 011001 are in the language, but 10010 is not since one of its substrings, 0010, contains three zeros. > > > I haven't reached regular expressions yet in the book I am using; I am currently in unit 2 and regular expressions are covered in unit 3.
Are we talking about a dictionary containing words of some language? Then there are many, many tricks you can use. Your dictionary may contain 500,000 words, but people don't use that many. And they don't make arbitrary spelling mistakes, but typically only a small number. So you could have a second dictionary containing previous results. If I enter "wierd" you find "weird" after a lengthy search, but then you add "wierd" to a second dictionary. You can look at the word and decide what is most likely the correct spelling. Like "messsage" is probably "message", without consulting your dictionary. You could map "messsege" to "m, vowel, s, vowel, g, optional vowel", and have a second dictionary for mapped words, which would tell you that your word is likely either "message" or "massage". This will work best for complicated words that nobody knows how to spell correctly. If you know that your word was typed on a keyboard, there are errors that are more likely than others. If you know that your word was scanned by a scanner, there's a completely different set of errors, like "wam" might really be "warn" (nobody would make that mistake typing on their keyboard). For keyboard entry, the user's fingers might have moved to a different position on the keyboard. Like "leubpard" is "keyboard" with the right hand moved one position to the right. That's a case where simple algorithms fail completely. Split your dictionary into the 5,000 most common and the 495,000 less common words. Most likely you find a good match within the first 5,000 and can remove most items in the large list that cannot be better.
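As a rough sketch of the "second dictionary of previous results" and the coarse-key idea (toy word list and a made-up key function, just for illustration):

```
import difflib

dictionary = ["message", "massage", "weird", "keyboard", "warn"]  # toy word list
cache = {}   # second dictionary: misspellings we have already resolved

def collapse(word):
    """Coarse key: map vowels to '*' and drop repeated letters."""
    out, prev = [], None
    for ch in word.lower():
        ch = '*' if ch in "aeiou" else ch
        if ch != prev:
            out.append(ch)
        prev = ch
    return "".join(out)

# Pre-computed map from coarse keys to candidate words
coarse = {}
for w in dictionary:
    coarse.setdefault(collapse(w), []).append(w)

def correct(word):
    if word in cache:                                    # seen this misspelling before
        return cache[word]
    candidates = coarse.get(collapse(word), dictionary)  # narrow the search if we can
    best = difflib.get_close_matches(word, candidates, n=1, cutoff=0.0)
    cache[word] = best[0] if best else word
    return cache[word]

print(correct("messsage"))   # "message" (found among the words sharing its coarse key)
print(correct("wierd"))      # "weird"
```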
I am doing a regression analysis on the various factors which influence accident levels in my city. The 2 factors used in my regression model are covariates: i) the `urbanization` level of the city and ii) the `percentage of persons buying cars` within the past year in the city. From my regression analysis, both showed a significant positive association with my outcome variable. However, when I presented my regression model, one of my colleagues mentioned that my analysis, i.e., the impact of the 2 covariates on the dependent variable, could be flawed and might not reflect reality since I have left out an important covariate such as the `number of alcohol outlets/pubs in the city`. What statistical reasoning can I use to rebut his argument? I am not quite sure what to make of his argument or whether it is even valid. Suggestions/discussion are welcome. Cheers.
@PeterFlom's answer is very good and explains very well the general phenomenon known as [Simpson's paradox](https://en.wikipedia.org/wiki/Simpson%27s_paradox). However, he forgot to mention one important thing that is actually fundamental to our job: to be a confounding variable (one that, when omitted, invalidates inference in linear models) a variable has to:

* be associated with the outcome (and this is the case);
* be correlated with the studied variables.

In your case you could argue that drinking habits should not depend on urbanization nor on the number of people buying cars (or maybe a correlation exists but is negative; in that case the real effect of your study variables should be even stronger). Causal graphical models can also be used to decide which variables to include in the model, so that their effect is evaluated without the influence of factors that are supposed to be caused by them rather than causing them. Of course you can never guess the real, intricate relations among the variables that cause every crash, nor their aggregate measure. Linear models are always approximations, but with some solid reasoning even the smallest models can be shown to be useful.
I have to regress family income (`faminc`; in dollars) onto **husband's educational attainment** (`he`; in years), **wife's educational attainment** (`we`; in years), and **number of children less than 6 years old in the household** (`kl6`) using Stata (the file only contains data on the 4 factors above). I use OLS to estimate a model of the form: $$faminc = b\_1 + b\_2 \cdot he + b\_3 \cdot we + b\_4 \cdot kl6 + \epsilon $$

```
      Source |       SS       df       MS              Number of obs =     430
-------------+------------------------------           F(  3,   426) =   28.77
       Model |  1.4002e+11     3  4.6673e+10           Prob > F      =  0.0000
    Residual |  6.9100e+11   426  1.6221e+09           R-squared     =  0.1685
-------------+------------------------------           Adj R-squared =  0.1626
       Total |  8.3102e+11   429  1.9371e+09           Root MSE      =   40275

------------------------------------------------------------------------------
      faminc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          he |   3185.882   795.4493     4.01   0.000     1622.388    4749.376
          we |   4637.415   1059.177     4.38   0.000     2555.551    6719.279
         kl6 |  -8372.704   4343.059    -1.93   0.055     -16909.2    163.7893
       _cons |  -5998.224   11161.51    -0.54   0.591    -27936.72    15940.27
```

I have some questions: 1) The regression yields $b\_4<0$. Is this really plausible? Does it mean that the more young children a family has, the less income they earn? 2) Is this model good enough? Should I take the natural logarithm of income or add dummy variables to make it better?
As Glen\_b says that's probably wrong, because the reciprocal is a non-linear function. If you want an approximation to $E(1/X)$ **maybe** you can use a Taylor expansion around $E(X)$: $$ E \bigg( \frac{1}{X} \bigg) \approx E\bigg( \frac{1}{E(X)} - \frac{1}{E(X)^2}(X-E(X)) + \frac{1}{E(X)^3}(X - E(X))^2 \bigg) = \\ = \frac{1}{E(X)} + \frac{1}{E(X)^3}Var(X) $$ so you just need mean and variance of X, and if the distribution of $X$ is symmetric this approximation can be very accurate. EDIT: the **maybe** above is quite critical, see the comment from BioXX below.
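As a quick sanity check of the approximation (a sketch with a positive, fairly symmetric $X$, here a Gamma variable with mean 10 and variance 2):

```
import numpy as np

rng = np.random.default_rng(0)
# A positive, fairly symmetric X: Gamma(shape=50, scale=0.2), so E(X) = 10, Var(X) = 2
x = rng.gamma(shape=50.0, scale=0.2, size=1_000_000)

exact = np.mean(1.0 / x)                          # Monte Carlo estimate of E(1/X)
approx = 1.0 / x.mean() + x.var() / x.mean()**3   # second-order Taylor approximation
print(exact, approx)                              # both close to 0.102
```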
A graph property is called **hereditary** if it is closed with respect to deleting vertices. There are many interesting hereditary graph properties. Moreover, a number of nontrivial general facts are also known about hereditary classes of graphs, see "[Global properties of hereditary classes?](https://cstheory.stackexchange.com/questions/1005/global-properties-of-hereditary-classes)" Considering complexity, hereditary graph properties include both polynomial-time decidable and NP-complete ones. We know, however, that there are a number of natural problems in NP that are candidates for NP-intermediate status, see a nice collection in [Problems Between P and NPC](https://cstheory.stackexchange.com/questions/79/problems-between-p-and-npc). Among the numerous answers there, however, none of them looks like a hereditary graph property (unless I overlooked something). > > **Question:** Do you know a hereditary graph property that is a candidate for NP-intermediate status? Or else, is there a dichotomy theorem for hereditary graph properties? > > >
Is there a particular style of problem you are looking for, or anything related to a hereditary graph property? Two common types of problems would be (1) recognition: does a given $G$ have the hereditary property? or (2) find the largest (induced or not) subgraph $H$ in $G$ having the hereditary property. As I'm sure you are familiar, (2) is NP-complete (Mihalis Yannakakis: Node- and Edge-Deletion NP-Complete Problems. STOC 1978: 253-264) But I'm not sure if you are specifically only asking about problems of type (1) (recognition problems.) There are a few recognition problems of hereditary graph classes which are still open. I think the *one-in-one-out* graphs are open to recognize and clearly in NP. Graphclasses.org also reports that the related class of *opposition graphs* is still open to recognize (and these are also clearly in NP.) Apparently, so is the class of Domination graphs. A large list of open (and unknown) recognition status can be found on that site, and pretty much all of those properties appear to be hereditary. <http://www.graphclasses.org/classes/problem_Recognition.html> There is one recognition problem they list under GI-complete, which is not a hereditary property ... so it is interesting to think that perhaps deciding a hereditary problem may indeed have a dichotomy theorem.
This idea occurred to me as a kid learning to program and on first encountering PRNGs. I still don't know how realistic it is, but now there's stack exchange. Here's a 14-year-old's scheme for an amazing compression algorithm: Take a PRNG and seed it with seed `s` to get a long sequence of pseudo-random bytes. To transmit that sequence to another party, you need only communicate a description of the PRNG, the appropriate seed and the length of the message. For a long enough sequence, that description would be much shorter than the sequence itself. Now suppose I could invert the process. Given enough time and computational resources, I could do a brute-force search and find a seed (and PRNG, or in other words: a program) that produces my desired sequence (Let's say an amusing photo of cats being mischievous). PRNGs repeat after a large enough number of bits have been generated, but compared to "typical" cycles my message is quite short so this doesn't seem like much of a problem. Voila, an effective (if Rube-Goldbergian) way to compress data. So, assuming: * The sequence I wish to compress is finite and known in advance. * I'm not short on cash or time (Just as long as a finite amount of both is required) I'd like to know: * Is there a fundamental flaw in the reasoning behind the scheme? * What's the standard way to analyse these sorts of thought experiments? *Summary* It's often the case that good answers make clear not only the answer, but what it is that I was really asking. Thanks for everyone's patience and detailed answers. Here's my nth attempt at a summary of the answers: * The PRNG/seed angle doesn't contribute anything, it's no more than a program that produces the desired sequence as output. * The pigeonhole principle: There are many more messages of length > k than there are (message generating) programs of length <= k. So some sequences simply cannot be the output of a program shorter than the message. * It's worth mentioning that the interpreter of the program (message) is necessarily fixed in advance. And its design determines the (small) subset of messages which can be generated when a message of length k is received. At this point the original PRNG idea is already dead, but there's at least one last question to settle: * Q: Could I get lucky and find that my long (but finite) message just happens to be the output of a program of length < k bits? Strictly speaking, it's not a matter of chance since the meaning of every possible message (program) must be known in advance. Either it *is* the meaning of some message of < k bits *or it isn't*. If I choose a message of >= k bits at random (why would I?), I would in any case have a vanishing probability of being able to send it using less than k bits, and an almost certainty of not being able to send it at all using less than k bits. OTOH, if I choose a specific message of >= k bits from those which are the output of a program of less than k bits (assuming there is such a message), then in effect I'm taking advantage of bits already transmitted to the receiver (the design of the interpreter), which counts as part of the message transferred. Finally: * Q: What's all this [entropy](http://en.wikipedia.org/wiki/Entropy_%28information_theory%29)/[kolmogorov complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity) business? 
Ultimately, both tell us the same thing as the (simpler) pigeonhole principle tells us about how much we can compress: perhaps not at all, perhaps some, but certainly not as much as we fancy (unless we cheat).
You've got a brilliant new compression scheme, eh? Alrighty, then... ♫ Let's all play, the entropy game ♫ Just to be simple, I will assume you want to compress messages of exactly $n$ bits, for some fixed $n$. However, you want to be able to use it for longer messages, so you need some way of differentiating your first message from the second (it cannot be ambiguous what you have compressed). So, your scheme is to determine some family of PRNG/seeds such that if you want to compress, say, $01000111001$, then you just write some number $k$, which identifies some precomputed (and shared) seed/PRNG combo that generates those bits after $n$ queries. Alright. How many different bit-strings of length $n$ are there? $2^n$ (you have n choices between two items; $0$ and $1$). That means you will have to compute $2^n$ of these combos. No problem. However, you need to write out $k$ in binary for me to read it. How big can $k$ get? Well, it can be as big as $2^n$. How many bits do I need to write out $2^n$? $\log{2^n} = n$. Oops! Your compression scheme needs messages as long as what you're compressing! "Haha!", you say, "but that's in the worst case! One of my messages will be mapped to $0$, which needs only $1$ bit to represent! Victory!" Yes, but your messages have to be unambiguous! How can I tell apart $1$ followed by $0$ from $10$? Since some of your keys are length $n$, all of them must be, or else I can't tell where you've started and stopped. "Haha!", you say, "but I can just put the length of the string in binary first! That only needs to count to $n$, which can be represented by $\log{n}$ bits! So my $0$ now comes prefixed with only $\log{n}$ bits, I still win!" Yes, but now those really big numbers are prefixed with $\log{n}$ bits. Your compression scheme has made some of your messages even longer! And *half* of all of your numbers start with $1$, so *half* of your messages are that much longer! You then proceed to throw out more ideas like a terminating character, gzipping the number, and compressing the length itself, but all of those run into cases where the resultant message is just longer. In fact, for every bit you save on some message, another message will get longer in response. In general, you're just going to be shifting around the "cost" of your messages. Making some shorter will just make others longer. You really can't fit $2^n$ different messages in less space than writing out $2^n$ binary strings of length $n$. "Haha!", you say, "but I can choose some messages as 'stupid' and make them illegal! Then I don't need to count all the way to $2^n$, because I don't support that many messages!" You're right, but you haven't really won. You've just shrunk the set of messages you support. If you only supported $a=0000000011010$ and $b=111111110101000$ as the messages you send, then you can definitely just have the code $a\rightarrow 0$, $b\rightarrow 1$, which matches exactly what I've said. Here, $n=1$. The actual length of the messages isn't important, it's how many there are. "Haha!", you say, "but I can simply determine that those stupid messages are rare! I'll make the rare ones big, and the common ones small! Then I win on average!" Yep! Congratulations, you've just discovered [entropy](http://en.wikipedia.org/wiki/Entropy_%28information_theory%29)! If you have $n$ messages, where the $i$th message has probability $p\_i$ of being sent, then you can get your expected message length down to the entropy $H = \sum\_{i=1}^np\_i\log(1/p\_i)$ of this set of messages. 
That's a kind of weird expression, but all you really need to know is that it's biggest when all messages are equally likely, and smaller when some are more common than others. In the extreme, if you know basically every message is going to be $a=000111010101$, then you can use this super-efficient code: $a\rightarrow0$, $x\rightarrow1x$ otherwise. Then your expected message length is basically $1$, which is awesome, and that's going to be really close to the entropy $H$. However, $H$ is a lower bound, and you really can't beat it, no matter how hard you try. Anything that claims to beat entropy is probably not giving enough information to unambiguously retrieve the compressed message, or is just wrong. Entropy is such a powerful concept that we can lower-bound (and sometimes even *upper*-bound) the running time of some algorithms with it, because if they run really fast (or really slow), then they must be doing something that violates entropy.
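To see the bound numerically, here is a minimal sketch (assuming numpy; the message length and probabilities are made up). It computes $H=\sum\_i p\_i\log\_2(1/p\_i)$ for a uniform and for a highly skewed distribution over $2^n$ messages, and the expected length of the $a\rightarrow0$, $x\rightarrow1x$ code in the skewed case:

```
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.sum(p * np.log2(1.0 / p)))

n = 12                       # message length in bits
k = 2 ** n                   # number of distinct messages

# uniform messages: entropy equals n, so no code can beat n bits on average
print(entropy_bits(np.full(k, 1.0 / k)))              # 12.0

# one message is overwhelmingly likely: entropy collapses to about 0.2 bits
p_common = 0.99
p = np.full(k, (1.0 - p_common) / (k - 1))
p[0] = p_common
print(entropy_bits(p))                                # about 0.2

# expected length of the code a -> 0, x -> 1x (every other message costs n + 1 bits)
print(p_common * 1 + (1.0 - p_common) * (n + 1))      # 1.12 bits, above the entropy as required
```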
> > Prove: $n^{5}-3n^{4}+\log\left(n^{10}\right)∈\ Ω\left(n^{5}\right)$. > > > I always get stuck on these types of questions, where there is a negative term (something like $-x\cdot n^{y}$) in the expression. Whenever I see the solutions for these types of questions, I can't identify a single method that works every time, and it's frustrating. How do I approach these types of questions?
Let me suggest a direct, simple solution: the definition of $\Omega$ contains $2$ bound variables, $c$ and $N$. In simple cases, as in the OP, we can choose one and solve for the second from the inequality obtained from the definition. Obviously for the left side we need a constant less than one, so taking, for example, $c=\frac{1}{10}$, we need $$n^{5}-3n^{4}+\log\left(n^{10}\right) \geqslant \frac{1}{10} n^{5}$$ which gives $$9n^{5} \geqslant 30n^{4}-10\log\left(n^{10}\right) $$ Since the $\log$ term is nonnegative, it is enough to find $N$ for the inequality $9n^{5} \geqslant 30n^{4}$, which gives $N= \left\lceil \frac{30}{9} \right\rceil = 4$.
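For a quick sanity check of these constants (a throwaway sketch; the base of the logarithm is taken as natural, which only affects constants), one can verify the inequality numerically from $N=4$ onward:

```
import math

c, N = 1 / 10, math.ceil(30 / 9)      # the constants chosen in the argument above
for n in range(N, 1000):
    assert n**5 - 3 * n**4 + math.log(n**10) >= c * n**5
print("inequality holds for", N, "<= n < 1000")
```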
I'm interested in a variant of the Binary Channel, where the bits aren't *flipped* with some probability, but rather they are *completely erased*. Output words therefore vary in length. I'm quite sure that this subject is well researched, and all I need is the correct search term.
I think the term is [deletion channel](http://en.wikipedia.org/wiki/Deletion_channel). As the Wikipedia article says, this "should not be confused with the binary erasure channel".
Let $\Sigma$ be an alphabet, i.e., a nonempty finite set. A string is any finite sequence of elements (characters) from $\Sigma$. As an example, $ \{0, 1\}$ is the binary alphabet and $0110$ is a string for this alphabet. Usually, as long as $\Sigma$ contains more than 1 element, the exact number of elements in $\Sigma$ doesn't matter: at best we end up with a different constant somewhere. In other words, it doesn't really matter if we use the binary alphabet, the numbers, the Latin alphabet or Unicode. > > Are there examples of situations in which it matters how large the alphabet is? > > > The reason I'm interested in this is because I happened to stumble upon one such example: For any alphabet $\Sigma$ we define the random oracle $O\_{\Sigma}$ to be an oracle that returns random elements from $\Sigma$, such that every element has an equal chance of being returned (so the chance for every element is $\frac{1}{|\Sigma|}$). For some alphabets $\Sigma\_1$ and $\Sigma\_2$ - possibly of different sizes - consider the class of oracle machines with access to $O\_{\Sigma\_1}$. We're interested in the oracle machines in this class that behave the same as $O\_{\Sigma\_2}$. In other words, we want to convert an oracle $O\_{\Sigma\_1}$ into an oracle $O\_{\Sigma\_2}$ using a Turing machine. We will call such a Turing machine a conversion program. Let $\Sigma\_1 = \{ 0, 1 \}$ and $\Sigma\_2 = \{ 0, 1, 2, 3 \}$. Converting $O\_{\Sigma\_1}$ into an oracle $O\_{\Sigma\_2}$ is easy: we query $O\_{\Sigma\_1}$ twice, converting the results as follows: $00 \rightarrow 0$, $01 \rightarrow 1$, $10 \rightarrow 2$, $11 \rightarrow 3$. Clearly, this program runs in $O(1)$ time. Now let $\Sigma\_1 = \{ 0, 1 \}$ and $\Sigma\_2 = \{ 0, 1, 2 \}$. For these two alphabets, all conversion programs run in $O(\infty)$ time, i.e., there are no conversion programs from $O\_{\Sigma\_1}$ to $O\_{\Sigma\_2}$ that run in $O(1)$ time. This can be proven by contradiction: suppose there exists a conversion program $C$ from $O\_{\Sigma\_1}$ to $O\_{\Sigma\_2}$ running in $O(1)$ time. This means there is a $d \in \mathbb{N}$ such that $C$ makes at most $d$ queries to $O\_{\Sigma\_1}$. $C$ may make fewer than $d$ queries in certain execution paths. We can easily construct a conversion program $C'$ that executes $C$, keeping track of how many times an oracle query was made. Let $k$ be the number of oracle queries. $C'$ then makes $d-k$ additional oracle queries, discarding the results, returning what $C$ would have returned. This way, there are exactly $|\Sigma\_1|^d = 2^d$ execution paths for $C'$. Exactly $\frac{1}{|\Sigma\_2|} = \frac{1}{3}$ of these execution paths will result in $C'$ returning $0$. However, $\frac{2^d}{3}$ is not an integer, so we have a contradiction. Hence, no such program exists. More generally, if we have alphabets $\Sigma\_1$ and $\Sigma\_2$ with $|\Sigma\_1|=n$ and $|\Sigma\_2|=k$, then there exists a conversion program from $O\_{\Sigma\_1}$ to $O\_{\Sigma\_2}$ if and only if all the primes appearing in the prime factorisation of $k$ also appear in the prime factorisation of $n$ (so the exponents of the primes in the factorisation don't matter). A consequence of this is that if we have a random number generator generating a binary string of length $l$, we can't use that random number generator to generate a number in $\{0, 1, 2\}$ with exactly equal probability. I thought up the above problem when standing in the supermarket, pondering what to have for dinner. 
I wondered if I could use coin tosses to decide between choice A, B and C. As it turns out, that is impossible.
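Here is a small sketch of both situations in code (assuming Python's `random` module as a stand-in for the oracle over $\{0,1\}$; this only illustrates the argument above, it is not part of the original question). The conversion to four symbols uses a fixed number of queries, while the conversion to three symbols uses rejection sampling, which has constant *expected* running time but no worst-case bound $d$, exactly as the impossibility argument requires:

```
import random

def coin():
    """Stand-in for the random oracle over {0, 1}."""
    return random.randrange(2)

def base4():
    """{0,1} -> {0,1,2,3}: always exactly two oracle queries."""
    return 2 * coin() + coin()

def base3():
    """{0,1} -> {0,1,2} by rejection sampling: uniform output, constant
    expected number of queries, but no fixed worst-case bound."""
    while True:
        v = 2 * coin() + coin()
        if v < 3:
            return v

print([base4() for _ in range(10)])
print([base3() for _ in range(10)])
```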
There are some examples in formal language theory where 2-character and 3-character alphabets give qualitatively different behaviors. Kozen gives the following [nice example](http://www.cs.cornell.edu/~kozen/papers/2and3.pdf) (paraphrased): > > Let the alphabet be $\Sigma$ = {1,..,k} with the standard numerical ordering, and define sort(x) to be the permutation of the word x in which the letters of x appear in sorted order. Extend sort(A) = { sort(x) | x $\in$ A }, and consider the following claim: > > > > > > > If A is context-free then sort(A) is context-free. > > > > > > > > > This claim is true for k = 2, but false for k $\ge$ 3. > > >
With reference to features in languages like Ruby (and JavaScript), which allow a programmer to extend/override classes at any time after defining them (including classes like String), is it theoretically feasible to design a language which allows programs to extend its semantics later on? For example: Ruby does not allow multiple inheritance, but could I extend/override the default language behaviour to allow an implementation of multiple inheritance? Are there any other languages which allow this? Is this actually a subject of concern for language designers? Looking at the choice of Ruby for building the Rails framework for web application development, such languages may be very powerful for designing frameworks (or DSLs) for a wide variety of applications.
[Converge](http://convergepl.org/about.html) has some pretty impressive meta-programming facilities. > > At a simple level, this can be seen as a macro-like facility, although it is more powerful than most existing macro facilities as arbitrary code can be run at compile-time. Using this, one can interact with the compiler, and generate code safely and easily as ITrees (a.k.a. abstract syntax trees). > > > which is a step up from Scheme's [hygienic macros](http://community.schemewiki.org/?scheme-faq-macros) that allow referentially transparent macro definitions. Mechanisms like [quasiliterals](http://www.erights.org/elang/grammar/quasi-overview.html) have allowed constructing and destructuring of parse trees in other languages, but those are more often used for interacting with domain-specific languages (DSLs) instead of self-modification. --- [Newspeak's reflection](http://bracha.org/newspeak-spec.pdf) allow exceptions to be implemented as library code. > > 7.6 Exception Handling > ---------------------- > > > Because Newspeak provides reflective access (7.2) to the activation records(3.6), exception handling is purely a library issue. The platform will provide a standard > library that supports throwing, catching and resuming exceptions, much as in > Smalltalk. > > > --- [Perligata:Romana](http://www.csse.monash.edu.au/~damian/papers/HTML/Perligata.html) demonstrates how an entirely new syntax can be skinned onto a language. > > This paper describes a Perl module -- Lingua::Romana::Perligata -- that makes it possible to write Perl programs in Latin. > > > --- Arguably not semantically significant, PyPy is an interpreter generator for languages whose semantics are specified in a highly statically-analyzable subset of Python, and they use it to experiment with new language constructs in Python like adding [thunks](https://bitbucket.org/pypy/pypy/src/default/pypy/objspace/thunk.py) to the language. --- Also of interest might be [Ometa](http://www.vpri.org/pdf/tr2008003_experimenting.pdf). > > This dissertation focuses on experimentation in computer science. In particular, > I will show that new programming languages and constructs designed specifically to > support experimentation can substantially simplify the jobs of researchers and programmers alike. > > > I present work that addresses two very different kinds of experimentation. The first > aims to help programming language researchers experiment with their ideas, by making it easier for them to prototype new programming languages and extensions to existing languages. The other investigates experimentation as a programming paradigm, by > enabling programs themselves to experiment with different actions and possibilities— > in other words, it is an attempt to provide language support for what if...? or possible > worlds reasoning. > > > Alex Warth's dissertation demonstrates using an Ometa to define significantly new semantics (transactional semantics via worlds) in JavaScript+Ometa.
I have tried transforming the variable (x; *shown below*) using various methods, but nothing changes because 0 is so prominent. How would I handle the variable below in a multiple regression analysis, given that it is predominantly zeros and thus very asymmetrically distributed? Can it be transformed? Any help would be greatly appreciated! ``` x <- c(rep( x = 0, times = 5473 ),7,8,9,9,9.5,10,11.1,11.8,12,13,13,13,17.7,19,27) ```
I assume x is a predictor, not the dependent variable. Fit x as quadratic with an exception that allows for a discontinuity at zero, i.e., add an indicator variable for x > 0.
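A minimal sketch of that specification (assuming numpy; the response `y` is simulated only so the example runs, and all coefficients are made up): regress the outcome on an indicator for x > 0 together with linear and quadratic terms in x.

```
import numpy as np

x = np.array([0.0] * 5473 + [7, 8, 9, 9, 9.5, 10, 11.1, 11.8, 12,
                             13, 13, 13, 17.7, 19, 27])
rng = np.random.default_rng(0)
y = 1.0 + 2.0 * (x > 0) + 0.5 * x + rng.normal(size=x.size)   # toy response

# design matrix: intercept, indicator for x > 0, x, x^2
X = np.column_stack([np.ones_like(x), (x > 0).astype(float), x, x ** 2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 3))   # [intercept, jump at zero, linear term, quadratic term]
```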
I have a dataset with features A, B and C and an output label D. I want to perform the likelihood ratio test to see if feature C contributes to the performance of a regression model. Do I: * train two models $M\_1$ and $M\_2$, one using features A B C, one using only features A B, and perform the test using those two models (with weights $\theta\_{A,B}$ and bias $\theta\_0$ being different to a certain extent between the two models, *i.e.* $\theta\_A^{M\_1}\neq\theta\_A^{M\_2}$, $\theta\_B^{M\_1}\neq\theta\_B^{M\_2}$ and $\theta\_0^{M\_1}\neq\theta\_0^{M\_2}$) OR * train one model $M$ using A B C, then perform the test using this model and a model $M\_{\theta\_C=0}$ where the weight for C is set to zero (that is a model that does not take features C into account but with the two models having the same weights and biases, *i.e.* $\theta\_A^{M}=\theta\_A^{M\_{\theta\_C=0}}$, $\theta\_B^{M}=\theta\_B^{M\_{\theta\_C=0}}$ and $\theta\_0^{M\_1}=\theta\_0^{M\_2}$)?
Go with the first option. In the second option, you force the $C$ parameter to be $0$ without letting the other parameters compensate for the fact that this might be an awful value. You want nested models, not two different estimators of the same model (where one estimator takes the MLE and then sets one of the components to zero). In technical language, option #1 is what gives the asymptotic $\chi^2$ distribution with the claimed degrees of freedom. I’d be curious what a simulation reveals to be the distribution under the null hypothesis if you create the test statistic using option #2.
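A minimal sketch of option #1 for a linear model with Gaussian errors (assuming `statsmodels` and `scipy` are available; the data here are simulated and the effect sizes are made up). Fit the full and the reduced model separately, each with its own estimates, and compare twice the log-likelihood difference to a $\chi^2\_1$ distribution:

```
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 500
A, B, C = rng.normal(size=(3, n))
D = 1.0 + 0.5 * A - 0.3 * B + 0.2 * C + rng.normal(size=n)

X_full = sm.add_constant(np.column_stack([A, B, C]))
X_red = sm.add_constant(np.column_stack([A, B]))

m_full = sm.OLS(D, X_full).fit()      # M1: uses A, B, C with its own weights and bias
m_red = sm.OLS(D, X_red).fit()        # M2: uses A, B only, re-estimated from scratch

lr = 2 * (m_full.llf - m_red.llf)     # likelihood-ratio statistic
p_value = stats.chi2.sf(lr, df=1)     # one parameter restricted to zero
print(lr, p_value)
```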
> > Do not vote, one vote will not reverse the election result. What's > more, the probability of injury in a traffic collision on the way to the > ballot box is much higher than the probability of your vote reversing the election > result. What is even more, the probability that you would win the grand > prize of a lottery is higher than the probability that you would reverse the election > result. > > > What is wrong with this reasoning, if anything? Is it possible to statistically prove that one vote matters? I know that there are some arguments like "if everybody thought like that, it would change the election result". But everybody will not think like that. Even if 20% of the electorate copy you, a great number of people will still go, and the margin of victory of the winning candidate will be counted in hundreds of thousands. Your vote would count only in the case of a tie. Judging it by game-theoretic gains and costs, it seems that a better strategy for Sunday is horse-race gambling rather than going to the ballot box. **Update, March 3.** I am grateful for being provided with so much material and for keeping the answers related to the statistical part of the question. Not attempting to solve the stated problem but rather to share and validate my thinking path, I posted an [answer](https://stats.stackexchange.com/a/452214/9519). I have formulated there a few assumptions. * two candidates * unknown number of voters * each voter can cast a random vote on either candidate I have shown there a solution for 6 voters (could be a case in choosing a captain on a fishing boat). I would be interested in knowing what the odds are for each additional million of voters. **Update, March 5.** I would like to make it clear that I am interested in more or less realistic assumptions for calculating the probability of a decisive vote. More or less because I do not want to sacrifice simplicity for precision. I have just understood that my update of March 3 formulated unrealistic assumptions. These assumptions probably yield the highest possible probability of a decisive vote, but I would be grateful if you could confirm it. What is still unknown to me is what is meant by the number of voters in the provided formulas. Is it the maximum pool of voters or the exact number of voters? Say we have 1 million voters; is the probability calculated over all the cases from 1 to a million voters taking part in the election? **Adding more fuel to the discussion heat** In the USA, because the president is elected indirectly, your vote would be decisive if only one vote, your vote, were to reverse the electors of your state, and then, owing to the votes of your electors, there was a tie at the Electoral College. Of course, breaking this double tie condition hampers the chances that a single vote may reverse the election result, even more than discussed here so far. I have opened a separate thread about that [here](https://stats.stackexchange.com/q/452797/9519).
**It's wrong in part because it's based on a mathematical fallacy.** (It's even more wrong because it's such blatant voter-suppression propaganda, but that's not a suitable topic for discussion here.) The implicit context is one in which an election looks like it's on the fence. One reasonable model is that there will be $n$ voters (not including you) of whom approximately $m\_1\lt n/2$ will definitely vote for one candidate and approximately $m\_2\approx m\_1$ will vote for the other, leaving $n-(m\_1+m\_2)$ "undecideds" who will make up their minds on the spot randomly, as if they were flipping coins. Most people--including those with strong mathematical backgrounds--will guess that the chance of a perfect tie in this model is astronomically small. (I have tested this assertion by actually asking undergraduate math majors.) The correct answer is surprising. First, figure there's about a $1/2$ chance $n$ is odd, which means a tie is impossible. To account for this, we'll throw in a factor of $1/2$ in the end. Let's consider the remaining situation where $n=2k$ is even. The chance of a tie in this model is given by the Binomial distribution as $$\Pr(\text{Tie}) = \binom{n - m\_1 - m\_2}{k - m\_1} 2^{m\_1+m\_2-n}.$$ When $m\_1\approx m\_2,$ let $m = (m\_1+m\_2)/2$ (and round it if necessary). The chances don't depend much on small deviations between the $m\_i$ and $m,$ so writing $N=k-m,$ an *excellent* approximation of the Binomial coefficient is $$\binom{n - m\_1-m\_2}{k - m\_1} \approx \binom{2(k-m)}{k-m} = \binom{2N}{N} \approx \frac{2^{2N}}{\sqrt{N\pi}}.$$ The last approximation, due to [Stirling's Formula](https://math.stackexchange.com/questions/1256545), works well even when $N$ is small (larger than $10$ will do). Putting these results together, and remembering to multiply by $1/2$ at the outset, gives a good estimate of the chance of a tie as $$\Pr(\text{Tie}) \approx \frac{1}{2\sqrt{N\pi}}.$$ In such a case, your vote will tip the election. What are the chances? In the most extreme case, imagine a direct popular vote involving, say, $10^8$ people (close to the number who vote in a US presidential election). Typically about 90% of people's minds are clearly decided, so we might take $N$ to be on the order of $10^7.$ Now $$\frac{1}{2\sqrt{10^7\pi}} \approx 10^{-4}.$$ That is, your participation in a close election involving *one hundred million people* still has about a $0.01\%$ chance of changing the outcome! In practice, most elections involve between a few dozen and a few million voters. Over this range, your chance of affecting the results (under the foregoing assumptions, of course) ranges from about $10\%$ (with just ten undecided voters) to $1\%$ (with a thousand undecided voters) to $0.1\%$ (with a hundred thousand undecided voters). > > In summary, **the chance that your vote swings a closely-contested election tends to be inversely proportional to the square root of the number of undecided voters.** Consequently, voting is important even when the electorate is large. > > > --- **The history of US state and national elections supports this analysis.** Remember, for just one recent example, how the 2000 US presidential election was decided by a plurality in the state of Florida (with several million voters) that could not have exceeded a few hundred--and probably, if it had been checked more closely, would have been even narrower. 
If (based on recent election outcomes) it appears there is, say, a few percent chance that an election involving a few million people will be decided by at most a few hundred votes, then the chance that the next such election is decided by just one vote (intuitively) must be at least a hundredth of one percent. That is about one-tenth of what this inverse square root law predicts. But that means *the history of voting and this analysis are in good agreement,* because this analysis applies *only* to close races--and most are not close. For more (anecdotal) examples of this type, across the world, see the [Wikipedia article on close election results](https://en.wikipedia.org/wiki/List_of_close_election_results). It includes a table of about 200 examples. Unfortunately, it reports the margin of victory as a *proportion of the total.* As we have seen, regardless of whether all (or even most) assumptions of this analysis hold, a more meaningful measure of the closeness of an election would be the margin divided by the *square root* of the total. --- By the way, your chance of an injury due to driving to the ballot box (if you need to drive at all) can be estimated as the rate of injuries annually (about one percent) divided by the average number of trips (or distance-weighted trips) annually, which is several hundred. We obtain a number well below $0.01\%.$ Your chance of winning the lottery grand prize? Depending on the lottery, one in a million or less. **The quotation in the question is not only scurrilous, it is outright false.**
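To see how good the $1/(2\sqrt{N\pi})$ approximation is, here is a quick check against the exact binomial calculation (a sketch assuming scipy; here $2N$ is the number of undecided coin-flip voters, and the answer's extra factor of $1/2$ for the parity of $n$ is left out of the exact column):

```
import numpy as np
from scipy.stats import binom

for N in [10, 1_000, 100_000, 10_000_000]:
    exact = binom.pmf(N, 2 * N, 0.5)      # P(exact tie among 2N coin-flip voters)
    approx = 1.0 / np.sqrt(N * np.pi)     # Stirling-based approximation from the answer
    print(f"N = {N:>10,}  exact = {exact:.3e}  approx = {approx:.3e}")
```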
I got onto the topic of HDLs (Hardware Description Languages) from a [comment elsewhere on this site](https://cs.stackexchange.com/questions/539/visual-programming-languages#comment2020_545). The [Wikipedia article](https://en.wikipedia.org/wiki/Hardware_description_language) states, rather tantalizingly: > > One important difference between most programming languages and HDLs is that HDLs explicitly include the notion of time. > > > I tried to dig deeper in the article to better understand this statement, but I didn't get far. One thing I did find was this great diagram of a [flip flop](https://en.wikipedia.org/wiki/Flip-flop_(electronics)), which certainly demonstrates the *need* to consider timing in hardware design. Rather than dealing with explicit on/off states, you are also dealing with high/low voltage *propagation.* [![enter image description here](https://i.stack.imgur.com/dfysR.gif)](https://i.stack.imgur.com/dfysR.gif) Naively, though, my immediate thought was, "Huh? *All* imperative languages include the notion of time. Only declarative or functional programming languages could be said to exclude the notion of time, and only by certain specific interpretations." I think there *is* a valid point being made here, though, and I would very much like to better understand it. It's even stated to be an "important" difference. :) --- So, again, **what does it mean that HDLs "explicitly include the notion of time"?** How does that distinguish them from programming languages in general from a CS standpoint?
Programming languages usually don't have a notion of time. They describe how things are sequenced, but usually they don't go further. When they do, that is relegated to the library part of the language, where it serves to handle time computations or real-time measurement and delays. HDLs are used for two purposes: synthesis and simulation. For the first purpose, there is no more use of time than for a programming language. For the second purpose, time is inherently part of what is to be simulated. You need to know after which delay $Q$ will change following a change of $R$, and after which delay $\bar Q$ will do the same. The time here is thus not real time, but a simulated one, which has no easy relationship with the real time during which the simulation occurs. Your circuit will be described in a VHDL-like syntax: ``` Q <= not (R or QBar) delay 1ns; QBar <= not (S or Q) delay 1ns; ``` You can also write an oscillator, with no stable state: ``` Clock <= not Clock delay 5ns; ``` Time is thus present in the syntax of the language, and in its semantics as well (the language clearly states that $\bar Q$ will change 2ns of simulated time after $R$ does).
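To make the difference between simulated time and wall-clock time concrete, here is a toy event-driven simulation of the oscillator above in Python (only a sketch; real HDL simulators use delta cycles and much richer scheduling):

```
import heapq

def simulate_oscillator(delay_ns=5, stop_ns=50):
    """Pop events in simulated-time order and schedule the next toggle of Clock."""
    clock = 0
    events = [(delay_ns, 1 - clock)]            # (simulated time in ns, new value)
    while events:
        t, value = heapq.heappop(events)
        if t > stop_ns:
            break
        clock = value
        print(f"t = {t:3d} ns  Clock = {clock}")
        heapq.heappush(events, (t + delay_ns, 1 - clock))

simulate_oscillator()
```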
Irreversible computations can be intuitive. For example, it is easy to understand the roles of AND, OR, NOT gates and design a system without any intermediate, compilable layer. The gates can be used directly as they conform to human thinking. I have read a paper where it was stated that the obviously correct way is to code irreversibly and then compile to a reversible form (can't find the paper now). I am wondering if there exists a reversible model that is as easy to understand as the AND, OR, NOT model. The model should therefore be a "direct" use of reversibility, so no compilation. **But also:** no models of the form $f(a) \rightarrow (a,f(a))$ (i.e., models created by taking an irreversible function $f$ and making it reversible by keeping a copy of its input).
Ryan Williams surveyed and studied the complexity of reversibility in the paper [space efficient reversible simulations](http://www.cs.cmu.edu/~ryanw/spacesim9_22.pdf), which contains various constructions. The problem seems to be closely linked to lower bounds on time/space tradeoffs. In your question you seem to be making some kind of distinction between "reversibility" and "direct reversibility", but there seems to be no such distinction in the literature. One wonders whether your distinction can be formally defined.
I was hoping someone could provide clarity surrounding the following scenario. You are asked "What is the expected number of observed heads and tails if you flip a fair coin 1000 times". Knowing that coin flips are i.i.d. events, and relying on the law of large numbers you calculate it to be: $$N\_{heads} = 500 \; N\_{tails} = 500$$ Now, let us have observed/realized the first 500 flips to *all be heads*. We want to know the updated expected number of realizations of the remaining 500 flips. Because the first 500 events have been realized and they do not affect the underlying physical coin flipping process, we know that the expected numbers of heads and tails in the remaining 500 flips are: $$N\_{heads} = 250 \; N\_{tails} = 250$$ So, here is my question/confusion: I understand that each coin flip is independent and that any single individual coin flip has a probability of $\frac{1}{2}$ of coming up heads. However, based on the law of large numbers we know that (if we value tails as 0 and heads as 1) the mean of the tosses will approach $0.5$ as the number of tosses approaches $\infty$. So, based on that, if we have observed 500 heads in a row, why do we not statistically expect to realize more tails going forward? I fully realize the following thought is incorrect, but it *feels* like we are (statistically) *due* for a tails and that the probability of tails should be raised and heads lowered. Since this is not the case, it feels as though this conflicts with the original expectation of $N\_{heads} = 500$ and $N\_{tails} = 500$. Again, I realize that this thinking is incorrect, but I am hoping someone can help me understand why this past information (500 realizations of heads in a row) does not provide any new, updated information that updates the probability for the remaining flips? Clearly the coin does not *know* that it just came up heads $500$ times, so is the correct way to think about this that the law of large numbers doesn't imply that in the following 500 flips tails is more likely, but rather that as $N \rightarrow \infty$ we expect 50% of realizations to be heads and 50% to be tails? In which case is my error in reasoning based on applying a limit theorem, which holds only in the asymptotic limit, to a pre-asymptotic situation? I also feel like this has to do with a bit of confusion between *single events* (a single coin toss coming up heads), and the collective action of a set of events (1000 coin tosses) that exhibit nonrandom properties. After searching I came across a wonderful quote by Kolmogorov$^1$: > > "In reality, however, the epistemological value of the theory of probability is revealed only by limit theorems. ... In fact, all epistemological value of the theory of probability is based on this: that large-scale random phenomena in their collective action create strict, nonrandom regularity. The very concept of mathematical probability would be fruitless if it did not find its realization in the frequency of occurrence of events under large-scale repetition and uniform conditions." > > > I believe this quote clears up some of my confusion, but if anyone could elaborate on why realizations (based on a known statistical process) cannot be used to update subsequent probabilities, I would greatly appreciate it! 1. B. V. Gnedenko and A. N. Kolmogorov: Limit distributions for sums of independent random variables. Addison-Wesley Mathematics Series
The key thing to remember is that the throws are IID. Realizations can be taken into account when they are part of the design of your model. One example is if your model is a Markov model; in fact, many models that use a Bayesian framework include realizations when updating the probability. This is a great [example](https://stats.stackexchange.com/questions/47771/what-is-the-intuition-behind-beta-distribution) of what I mentioned earlier. The reason it does not apply in your case is that realizations are not included in the design of your model.
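A quick simulation of the conditional expectation (a sketch assuming numpy, scaled down to 20 flips because a run of 500 heads is far too rare to observe by rejection sampling): condition on the first 10 flips being all heads and look at how many heads show up in the remaining 10. Independence predicts an average of about 5, not fewer:

```
import numpy as np

rng = np.random.default_rng(0)
flips = rng.integers(0, 2, size=(1_000_000, 20), dtype=np.int8)   # 0 = tails, 1 = heads

all_heads_prefix = flips[:, :10].sum(axis=1) == 10   # condition on the observed prefix
remaining = flips[all_heads_prefix, 10:]

print(all_heads_prefix.mean())        # about 1/1024: the prefix itself is rare
print(remaining.sum(axis=1).mean())   # about 5 heads in the remaining 10 flips
```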
My question: do the non-regular languages have closure properties? For example, if the reverse of L is non-regular, then is L non-regular? Thank you :-)
Non regular languages are closed under reverse, because $L = (L^R)^R$. Same is true for complement. Non regular languages are not closed under most other basic operations though. Consider, for example, that $L \cup \overline{L} = \Sigma^\star$. Similarly, if $L = \{1^{x^2} | x > 1\}$ then $\overline{L} \circ \overline{L} = 1^\star$. There is no complete list of operations on languages, so a complete answer cannot be given.
Could somebody explain the difference between dependent types and refinement types? As I understand it, a refinement type contains all values of a type fulfilling a predicate. Is there a feature of dependent types which distinguishes them? If it helps, I came across Refined types via the Liquid Haskell project, and dependent types via Coq and Agda. That said, I'm looking for an explanation of how the theories differ.
The main differences are along two dimensions -- in the underlying theory, and in how they can be used. Let's just focus on the latter. As a user, the "logic" of specifications in LiquidHaskell, and in refinement type systems generally, is restricted to decidable fragments so that verification (and inference) is completely automatic, meaning one does not require "proof terms" of the sort needed in the fully dependent setting. This leads to significant automation. For example, compare insertion sort in LH: <http://ucsd-progsys.github.io/lh-workshop/04-case-study-insertsort.html#/ordered-lists> vs. in Idris <https://github.com/davidfstr/idris-insertion-sort/blob/master/InsertionSort.idr> However, the automation comes at a price. One cannot use arbitrary functions as specifications as one can in the fully dependent world, which restricts the class of properties one can write. Thus, one goal of refinement systems is to *extend* the class of what can be specified, while that of fully dependent systems is to *automate* what can be proved. Perhaps there is a happy meeting ground where we can get the best of both worlds!
I have one puzzle whose answer I have boiled down to finding the total number of permutations and which permutations they are. For example if the string is of length ten as $w = aabbbaabba$, the total number of permutations will be $\qquad \displaystyle \frac{|w|!}{|w|\_a! \cdot |w|\_b!} = \frac{10!}{5!\cdot 5!}$ Now had the string been of distinct characters, say $w'=abcdefghij$, I would have found the permutations by this algorithm: ``` for i = 1 to |w| w = rotate(w) w = rotate(w) return w.head + rotate(w.tail) ``` Can someone throw new ideas on this - how do I find the number of permutations for a string having repeated characters? Is there another mathematical/scientific name for what I am trying to do?
The total number of (different) permutations of strings with $n\_i$ characters of type $i$ is given by $$\frac{(\sum\_i n\_i)!}{\prod\_i n\_i!}.$$ In other words, if the length of the string is $n=n\_1 + n\_2 +...$, you have $\frac{n!}{n\_1!n\_2!n\_3!\ldots}$ different permutations. Note that the formula fits your simple case. Why is this formula correct? Because $n!$ is the number of permutations. But some permutations give the same string (e.g., `aa` with its two letters swapped is still `aa`). Then we need to "remove" all the permutations that give the same string. For a given string, for each letter $i$, there are $n\_i!$ ways to rearrange the identical copies of that letter without changing the string. So (for each possible letter $i$) we divide by this number to get the final result.
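A direct way to compute this count (a small sketch using only the standard library; `math.prod` needs Python 3.8+):

```
from collections import Counter
from math import factorial, prod

def count_distinct_permutations(s: str) -> int:
    """n! divided by the product of the factorials of the letter multiplicities."""
    counts = Counter(s)
    return factorial(len(s)) // prod(factorial(c) for c in counts.values())

print(count_distinct_permutations("aabbbaabba"))   # 10! / (5! * 5!) = 252
```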
When using A\* (or any other best path finding algorithm), we say that the heuristic used should be **admissible**, that is, it should never overestimate the actual solution path's length (or moves). How does an admissible heuristic ensure an optimal solution? I am preferably looking for an intuitive explanation. If you want you can explain using the **Manhattan** distance heuristic of the **8-puzzle.**
While Anton's answer is absolutely perfect, let me try to provide an alternative answer: being admissible means that the heuristic does not overestimate the effort to reach the goal, i.e., $h(n) \leq h^\*(n)$ for all $n$ in the state space (in the 8-puzzle, this means just for any permutation of the tiles and the goal you are currently considering), where $h^\*(n)$ is the optimal cost to reach the target. I think the most logical way to see why $A^\*$ provides optimal solutions when $h(n)$ is admissible is that it sorts all nodes in OPEN in ascending order of $f(n)=g(n)+h(n)$ and, also, that it does not stop when generating the goal but when expanding it: 1. Since nodes are expanded in ascending order of $f(n)$ you know that no other node is more promising than the current one. Remember: $h(n)$ is admissible, so having the lowest $f(n)$ means that it has an opportunity to reach the goal through a cheaper path than the other nodes in OPEN do. And this holds until you can prove the opposite, i.e., by expanding the current node. 2. Since $A^\*$ stops only when it proceeds to expand the goal node (as opposed to stopping when generating it), you are sure (from the first point above) that no other node leads to it through a cheaper path. And this is, essentially, all you will find in the original proof by Nilsson et al. Hope this helps,
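A compact sketch of both points (only an illustration; the toy graph and heuristic values below are made up, with the heuristic admissible by construction): nodes are kept in a priority queue ordered by $f(n)=g(n)+h(n)$, and the search stops only when the goal is popped for expansion, not when it is first generated.

```
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> list of (neighbor, edge_cost); h: admissible heuristic."""
    open_heap = [(h(start), 0, start, [start])]          # entries are (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)      # lowest f = g + h first
        if node == goal:                                 # stop when the goal is *expanded*
            return g, path
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 2)]}
h = {"A": 3, "B": 2, "C": 2, "D": 0}.get                 # never overestimates the true cost
print(a_star(graph, h, "A", "D"))                        # (4, ['A', 'B', 'C', 'D'])
```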
I have a random forest regression built using skl and I note that I yield different results based on setting the random seed to different values. If I use LOOCV to establish which seed works best, is this a valid method?
The answer is **no**. Your model gives a different result for each seed you use. This is a result of the non-deterministic nature of the model. Choosing a specific seed that maximizes the performance on the validation set means that you chose the "arrangement" that best fits this set. However, **this does not guarantee that the model with this seed would perform better on a separate test set**. This simply means that you have **overfit the model on the validation set**. This effect is the reason you see many people who rank high in competitions (e.g., Kaggle) on the public test set fall way off on the hidden test set. This approach is **not** by any means considered the correct one. --- Edit (not directly correlated to the answer, but I found it interesting) You can find an interesting study showing the influence of random seeds in computer vision [here](https://arxiv.org/pdf/2109.08203.pdf). The authors first show that you **can** achieve better results when using one seed rather than another, and offer the critique that many of the supposed SOTA solutions could merely reflect better seed selection. This is described in the same context as if it is *cheating*, which in all fairness it kind of is... **Better seed selection does not make your model inherently better, it just makes it appear better on the specific test set**.
I'm trying to solve this problem: > > Let $L$ be some infinite language, show that there exists a sub-language of $L$ that is not regular > > > But can this be correct? If I have the language $\{a\}^\*$ for example, that's infinite but you can make a DFA for any sub-language of it, right? There's a hint that this can be proved using diagonalization, but I think I must be misunderstanding the question.
The powerset of the language contains all of its sublanguages. Since $L$ is infinite, this powerset is uncountably infinite, but there are only countably many regular languages (for a particular, finite alphabet), so some sublanguage of $L$ must be non-regular.
What are the limitations of the Symbolic Model Checking? As far as I know, "state-space explosion" can still happen by this technique but it can explore much larger state space. So, is the Symbolic Model Checking always better than the conventional model checking? If not, when to apply which one? By the conventional model checking I mean: Clarke Jr, Edmund M., et al. Model checking. MIT press. And by Symbolic model checking I mean: McMillan, Kenneth L. "Symbolic model checking." Symbolic Model Checking. Springer, Boston, MA, 1993. 25-60.
No. Since the permutation language is finite, you can produce a grammar by enumerating the individual sentences which comprise the language. Some simplifications are possible, for example by finding common prefixes and/or suffixes. But there is no grammar which is significantly smaller than the language itself (for some definition of "significantly"). This is not really as dramatic a conclusion as it sounds. Most finite languages lack compact grammars; grammars achieve concision when they can use recursive constructs which represent infinite languages. In fact, in an interesting paper cited in [the answer to this question](https://cstheory.stackexchange.com/questions/5014/lower-bounds-on-the-size-of-cfgs-for-specific-finite-languages) on <https://cstheory.stackexchange.com/>, there's a proof that no regular expression for the language $P\_n$ consisting of the permutations of $\{1,2,\dots,n\}$ can be shorter than $2^n$. However, the (infinite) complement $\overline{P\_n}$ can be described by a regular expression of size $O(n^2)$. (There are lots of other interesting results in that paper.) [@yuval-filmus](https://cs.stackexchange.com/users/683/yuval-filmus) extended that result in [Lower Bounds for Context-Free Grammars](http://www.cs.toronto.edu/~yuvalf/CFG-LB.pdf) to a larger set of languages, including the language of permutations of the multiset of $n$ different elements in which each element appears $k$ times. In the halcyon era of SGML, whose syntax description language included a permutation operator, there was some investigation into how to optimise state machines to recognise such languages. IIRC, the final conclusion was that it can't be done and therefore the permutation operator is not a good idea. In practice, such languages are generally recognised in two phases: first, the input is matched against the infinite language of unrestricted repetitions, and then a second scan (not implemented with a CFG) is done to count repetitions of each element. But that algorithm is not completely sound. First, it assumes that there is a unique decomposition of a string into elements, which is not always the case, although it certainly was the case for SGML. Second, the algorithm greedily processes all successive elements. If the intention of the grammar were to recognise the longest *legal* permutation, leaving open the possibility that (over-)repeated elements be part of a different syntactic construct, then legal inputs would fail because of the greediness.
this is a piece of Assembly code ``` section .text global _start ;must be declared for using gcc _start: ;tell linker entry point mov edx, len ;message length mov ecx, msg ;message to write mov ebx, 1 ;file descriptor (stdout) mov eax, 4 ;system call number (sys_write) int 0x80 ;call kernel mov eax, 1 ;system call number (sys_exit) int 0x80 ;call kernel section .data msg db 'Hello, world!',0xa ;our dear string len equ $ - msg ;length of our dear string ``` Given a specific computer system, is it possible to predict precisely the actual run time of a piece of Assembly code.
Would the choice of "computer system" happen to include microcontrollers? Some microcontrollers have very predictable execution times; for example, the 8-bit PIC series has four clock cycles per instruction unless the instruction branches to a different address, reads from flash, or is a special two-word instruction. Interrupts will obviously disrupt this kind of timing, but it is possible to do a lot without an interrupt handler in a "bare metal" configuration. Using assembly and a special coding style, it is possible to write code that will always take the same time to execute. It isn't so common now that most PIC variants have multiple timers, but it is possible.
I understand the proof that $$Var(aX+bY) = a^2Var(X) +b^2Var(Y) + 2abCov(X,Y), $$ but I don't understand how to prove the generalization to arbitrary linear combinations. Let $a\_i$ be scalars for $i\in \{1,\dots ,n\}$, so we have a vector $\underline a$, and let $\underline X = (X\_1,\dots ,X\_n)$ be a vector of correlated random variables. Then $$ Var(a\_1X\_1 + \dots +a\_nX\_n) = \sum\_{i=1}^n a\_i^2 \sigma\_i^2 + 2 \sum\_{i=1}^n \sum\_{j>i}^n a\_i a\_j \text{ Cov}(X\_i,X\_j)$$ How do we prove this? I imagine there are proofs in the summation notation and in vector notation.
Here is a slightly different proof based on matrix algebra. **Convention:** a vector of the kind $(m,y,v,e,c,t,o,r)$ is a column vector unless otherwise stated. Let $a = (a\_1,\ldots,a\_n)$, $\mu = (\mu\_1,\ldots,\mu\_n) = E(X)$ and set $Y = a\_1X\_1+\ldots+a\_nX\_n = a^\top X$. Note first that, by the linearity of the integral (or sum) $$E(Y) = E(a\_1X\_1+\ldots+a\_nX\_n) = a\_1\mu\_1+\cdots +a\_n\mu\_n = a^\top \mu.$$ Then \begin{align} \text{var}(Y) &= E(Y-E(Y))^2 = E\left(a\_1X\_1+\ldots+a\_nX\_n-E(a\_1X\_1+\ldots+a\_nX\_n)\right)^2\\ & = E\left[(a^\top X - a^\top\mu)(a^\top X - a^\top\mu)\right]\\ & = E\left[(a^\top X - a^\top\mu)(a^\top X - a^\top\mu)^\top\right]\\ & = E\left[a^\top(X - \mu)(a^\top(X - \mu))^\top\right]\\ & = a^\top E\left[(X - \mu)(X - \mu)^\top a\right] \\ & = a^\top E\left[(X - \mu)(X - \mu)^\top\right]a \\\tag{\*} & = a^\top \operatorname{cov}(X)a. \end{align} Here $\operatorname{cov}(X) = [\operatorname{cov}(X\_i,X\_j)]$, is the covariance matrix of $X$ and with entries $\operatorname{cov}(X\_i,X\_j)$ such that $\operatorname{cov}(X\_i,X\_i) = \operatorname{var}(X\_i)$. Note the trick of placing a $^\top$ symbol in the third line of the last equation, which is valid since $r^\top = r$ for any real $r$. In passing from the 4th equality to the 5th equality and from the 5th equality to the 6th equality I have again used the linearity of the expectation. Straightforward matrix multiplication will reveal that the desired result is nothing but the expanded version of the quadratic form (\*).
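As a quick numerical check of the identity $\operatorname{var}(a^\top X) = a^\top \operatorname{cov}(X)\, a$ (a sketch assuming numpy; the mixing matrix is arbitrary and only serves to make the components of $X$ correlated):

```
import numpy as np

rng = np.random.default_rng(0)
n, dim = 1_000_000, 3
a = np.array([2.0, -1.0, 0.5])

M = rng.normal(size=(dim, dim))            # random mixing to induce correlation
X = rng.normal(size=(n, dim)) @ M.T

Y = X @ a                                  # Y = a_1 X_1 + ... + a_n X_n
lhs = Y.var(ddof=1)
rhs = a @ np.cov(X, rowvar=False) @ a      # a^T cov(X) a with the sample covariance
print(lhs, rhs)                            # identical up to floating-point error
```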
Why Do Computers Use the Binary Number System (0,1)? Why don't they use Ternary Number System (0,1,2) or any other number system instead?
Since we're in Computer Science, I'll answer this way: they don't. What do we mean by a "computer?" There are many definitions, but in computer science as a science, the most common is the Turing machine. A Turing machine is defined by several aspects: a state set, a transition table, a halting set, and, important for our discussion, an alphabet. This alphabet refers to the symbols which the machine can read as input, and that it can write to its tape. (You could have different input and tape alphabets, but let's not worry about that for now.) So, I can make a Turing machine with input alphabet $\{0,1\}$, or $\{a,b\}$, or $\{0,1,2\}$, or $\{\uparrow,\downarrow\}$. It doesn't matter. The fact is, I can use any alphabet I choose to encode data. So, I can say that $0001001$ is 9, or I can say that $\uparrow \uparrow \uparrow \downarrow \uparrow \uparrow \downarrow$ is 9. It doesn't matter, since they're just symbols we can distinguish. The trick is that binary is enough. Any sequence of bits can be interpreted as a number, so you can convert from binary to any other system and back. But, it turns out unary is enough too. You can encode 9 as 111111111. This isn't particularly efficient, but it has the same computational power. Things get even crazier when you look into alternate models of computation, like the Lambda calculus. Here, you can view numbers as functions. In fact, you can view everything as functions. Things are encoded not as bits, 0s and 1s, but as closed mathematical functions with no mutable state. See the [Church numerals](http://en.wikipedia.org/wiki/Church_numeral) for how you can do numbers this way. The point is that 0s and 1s are a completely hardware-specific issue, and the choice is arbitrary. What encoding you're using isn't particularly relevant to computer science, outside of a few subfields like operating systems or networking.
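Since the answer points to Church numerals, here is a tiny sketch of the idea in Python (only an illustration of "numbers as functions"; real lambda-calculus encodings are untyped terms, not Python closures):

```
# A Church numeral n is a function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three), to_int(add(three)(succ(zero))))   # 3 4
```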
What algorithm do computers use to compute the square root of a number? EDIT It seems there is a similar question here: [Finding square root without division and initial guess](https://cs.stackexchange.com/questions/113107/finding-square-root-without-division-and-initial-guess) But I like the answers provided here more. Also, the person asking the similar question phrased it in a personal way, while my question has easier-to-find wording. The similar one was asked with an insight into the inner mechanics of such an algorithm, as if it were something that had not yet been solved. I don't know if there is a way for a moderator to merge the two posts.
Have a look at the paper <https://www.lirmm.fr/arith18/papers/burgessn-divider.pdf> which describes the division / square root hardware implementation in some ARM processors. Roughly the idea: For a division unit, you can look up the initial bits of divisor and dividend in a table to get two bits of result. For double precision with a 53 bit mantissa, you repeat that 26 or 27 times and you have the result. For single precision with 24 bit mantissa, you need to repeat only 12 or 13 times. All the other bits, calculating the exponent etc., can be done in parallel. Now what does square root have to do with division? Simple: sqrt (x) = x / sqrt (x). You get the square root by dividing x by its square root. Now when you start this division, you don't know the square root yet. But as soon as you have say eight bits and you are very, very careful, you can produce two mantissa bits per cycle. And getting the first say eight bits is not hard, since unlike a division x / y, there is only one operand involved. The paper describes this in much more detail, and importantly describes why this is superior for example to using Newton-Raphson or similar. Note: It is superior for typical floating-point numbers with 24 or 53 bit mantissa. Since that is what is used 99.9999% of the time it is important, but for different situations things are different.
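For contrast with the table-driven hardware approach, here is the Newton-Raphson iteration that the answer mentions, as a small software sketch (this is not what the ARM unit described in the paper does; it is only the alternative it is being compared against):

```
import math

def newton_sqrt(x: float, iters: int = 6) -> float:
    """Heron / Newton-Raphson iteration: y <- (y + x / y) / 2.
    With an exponent-based initial guess, quadratic convergence makes a
    handful of iterations enough for double precision."""
    if x == 0.0:
        return 0.0
    _, e = math.frexp(x)          # x = m * 2**e with 0.5 <= m < 1
    y = 2.0 ** (e // 2)           # initial guess with roughly the right exponent
    for _ in range(iters):
        y = 0.5 * (y + x / y)
    return y

print(newton_sqrt(2.0), math.sqrt(2.0))
```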
> > The problem is to determine whether a patient referred to the clinic is hypothyroid. Therefore three classes are built: normal (not hypothyroid), hyperfunction and subnormal functioning. Because 92 percent of the patients are not hyperthyroid a good classifier must be significant better than 92%. > > > This is for some data I found for [Thyroid Disease](https://archive.ics.uci.edu/ml/datasets/Thyroid+Disease).
> > a good classifier must be significant better than 92%. > > > There are two issues here. * **Where does this statement come from?** It is not so clear what is meant by the 92%, but it seems to relate to accuracy. $$ \text{accuracy} = \frac{\text{good estimates}}{\text{total estimates}} $$ this 92% accuracy can already be achieved with a naive classifier that assigns every patient to the negative category. For the 92% of the patients that are negative it will be right and for the 8% of the patients that are positive it will be wrong. But also * **Is this statement right?** I disagree. The classifier does *not* need to be better than 92%. Imagine the following situation: your test could be 100% sensitive and classify the 8% of the people who have hyperthyroid disease as having it, but it might also accidentally classify 9% of the people as having hyperthyroid disease while they actually do not have it. Then you make a mistake in 9% of the cases and the accuracy is 91%, which is less than 92%. Did you do badly?
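Spelling out the arithmetic of that example per 100 patients (a trivial sketch; the counts mirror the numbers used above):

```
# per 100 patients: 8 truly positive, 92 truly negative
tp, fn = 8, 0        # 100% sensitive: every true positive is caught
fp, tn = 9, 83       # but 9 healthy patients are flagged by mistake

accuracy = (tp + tn) / 100        # 0.91 -> below the naive 92% benchmark
sensitivity = tp / (tp + fn)      # 1.0
print(accuracy, sensitivity)
```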
In university we used the [Sipser text](https://en.wikipedia.org/wiki/Introduction_to_the_Theory_of_Computation) and while at the time I understood most of it, I forgot most of it as well, so it of course didn't leave all too great an impression. I borrowed that book and don't have one in my collection, so I need one. So to the question: are there any other books which could be seen as better and possibly more complete? I didn't see a community wiki section here, so I couldn't note it as such.
I strongly recommend the book [Computational Complexity: A Modern Approach](http://www.cs.princeton.edu/theory/complexity/) by Arora and Barak. When I took computational complexity at the Master's level, the main textbook was [Computational Complexity](http://rads.stackoverflow.com/amzn/click/0201530821) by Papadimitriou. But, maybe due to my background in Software Engineering, I found the writing in Papadimitriou challenging at times. Whenever I had problems understanding Papadimitriou's book, I simply went back to Sipser, or read the draft of Arora and Barak. In retrospect, I really like Papadimitriou's book, and I often find myself looking things up in it. His book has plenty of exercises that are quite effective at connecting readers to research-level questions and open problems. In any case, you should have a look at both Papadimitriou and Arora-Barak. People also suggest [Oded Goldreich's textbook](http://rads.stackoverflow.com/amzn/click/052188473X), but I really prefer the organization of Arora-Barak.
Having learned a rough summary of Hoare logic (i.e. learning just the basic concept of Hoare triples and a few of the rules) I kept seeing a statement along these lines: 

> 
> The rule of consequence allows us to strengthen the precondition and weaken the postcondition. (paraphrased)
> 
> 

1. What does it actually mean to *strengthen the precondition* and to *weaken the postcondition*? 2. Is *and* used in its strict logical meaning in this sense, i.e. if we apply the rule of consequence we will always do *both*? Or can it be read as allowing us to choose to strengthen the precondition or weaken the postcondition? I get the (very basic) idea of the rule, but this statement implies the rule can be applied for a strategic reason that isn't clearly explained. Thanks.
Condition $A$ is *stronger than* condition $B$ if $A$ implies $B$. That is, if $B$ holds in all situations in which $A$ holds. Conversely, if $A$ is stronger than $B$, then $B$ is weaker than $A$. (For example, $x>5$ is stronger than $x>0$.) Note that, from the definition, $A$ is stronger and weaker than itself, since $A$ implies $A$. (We might prefer to say "at least as strong as" instead of "stronger than" but I guess that gets convoluted and it's easier to just remember that "stronger than" really means "at least as strong as".) So, to strengthen a condition is to replace it with a stronger one; to weaken a condition is to replace it with a weaker one. Because a condition is always stronger and weaker than itself, the phrase "strengthen the precondition and weaken the postcondition" does indeed mean that we can leave one (or even both!) of them unchanged.
**Given** $L\_1, L\_2 \in \mathsf{NP}$, $L\_1 \cup L\_2 \in \mathsf{P}$ and $L\_1 \cap L\_2 \in \mathsf{P}$, **Prove**: $\ L\_1, L\_2 \in \mathsf{coNP}$ What I've done so far is: $$ L\_1 \cup L\_2 \in \mathsf{P} \Rightarrow (L\_1 \cup L\_2) ^\complement \in \mathsf{P} \Rightarrow L\_1^\complement \cap L\_2^\complement \in \mathsf{P} $$ $$ L\_1 \cap L\_2 \in \mathsf{P} \Rightarrow (L\_1 \cap L\_2) ^\complement \in \mathsf{P} \Rightarrow L\_1^\complement \cup L\_2^\complement \in \mathsf{P} $$ How do I proceed from here?
You will have $O(n^2)$ in the worst case. Actually, what you do is really close to insertion sort. The worst case is attained when the array is in decreasing order. \begin{align} T(n) &= T(n-1) + n\\ &=T(n-2) + (n-1) + n \\ &= \vdots \\ &=T(n-k) + \sum\_{i=0}^{k-1} (n - i) \end{align} The expansion terminates at $k = n$, i.e. we get $$T(n) = T(0) + \sum\_{i=0}^{n-1} (n - i)$$ where $T(0) = O(1)$. So, upon a change of variable, we get $$T(n) = T(0) + \sum\_{i=1}^{n} i = \frac{n(n+1)}{2} + O(1) = O(n^2)$$
Lambda Calculus is very simple. Are there even simpler Turing-complete systems? Which is the simplest of them all?
Conway's FRACTRAN comes to mind: <http://en.wikipedia.org/wiki/FRACTRAN> A FRACTRAN program is a list $q\_1 , \ldots, q\_k$ of positive rational numbers. The current state is a natural number $n$, and a computation step consists in multiplying $n$ by the first $q\_i$ whose denominator divides $n$. Starting from the input, the computation step is repeated until no longer possible, i.e., until no $q\_i$ has a denominator dividing $n$. The current state at that point is the output.
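An interpreter fits in a few lines; here is a Python sketch (the `max_steps` cap is just a safety net I've added, since FRACTRAN programs need not terminate):

```python
from fractions import Fraction

def fractran(program, n, max_steps=10_000):
    # Repeatedly multiply n by the first fraction that gives an integer;
    # halt when no fraction applies.
    for _ in range(max_steps):
        for q in program:
            if (n * q).denominator == 1:
                n = int(n * q)
                break
        else:
            return n
    return n

# Classic warm-up program: the single fraction 3/2 turns 2**a * 3**b
# into 3**(a+b), i.e. it adds the two exponents.
a, b = 4, 3
print(fractran([Fraction(3, 2)], 2**a * 3**b))   # 2187 == 3**7
```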
Is there a well established adjective or name for an algorithm such that, given as input any of its own output, always outputs it unchanged? In other words, an algorithm such that it implements a function $f$ with the property $f\circ f=f$. What would be a reference? My application is a cryptographic security reduction. An example would be an algorithm which on input $X\in\{0,1\}^\*$, outputs $1^{|X|}$. Besides identity, an algorithm implementing $X\mapsto|X|$ in one of $\Bbb Z$, $\Bbb Q$, $\Bbb R$, $\Bbb C$ also comes to mind. --- *Stationary algorithm* or *circular algorithm* would do, but neither googles. In my experience that means quickly it will google here, bringing circularity to a new level!
In mathematics we would say $f$ is an [*idempotent function*](https://en.wikipedia.org/wiki/Idempotence#Idempotent_functions). It's a widely known term and I suppose most TCS people should also recognize it.
So for the last few months I've been doing a lot of forecasting for my company, and specifically I've been looking at monthly forecasts of the total weight of different categories of products output each month. I've been using time series models such as **Arima**, **ETS**, and **tslm()** within **R** to do my forecasting, as well as using cross validation to select a model. Over the last two days I've been presenting my results and discussing implementation of my forecasts. But I've been asked the same question multiple times and I don't know the answer to it, so let me ask you guys. If this has been asked before, I apologize. I'll write out a few questions that hopefully will make clear what I'm trying to understand. Also, I'd like to keep technical answers about the models in the context of R, since that's what I'm using.

* ***Do time series models take into account the number of days in a month?***
* ***Particularly, do time series models consider the number of business days in a month? (or is there a way to incorporate this?)***
* ***Do we even need to worry about this when forecasting using a time series model or does the model account for this?***

For instance, let's say in October of 2014 a certain category sold 35,000 lbs of a product, and that there were 31 days, 23 of which were business days. Well for this year, 2015, there are still 31 days, but now there are only 22 business days. Just some background on the data: I have monthly data that starts in August of 2008.

* *So would it possibly be better to average the weight per month by the number of business days and forecast out this way, since the # of business days changes month to month and year to year?*
This is a very common occurrence. The series data shows spikes due to the difference in the number of days per month. There is a technique called **Calendar Adjustment**, where instead of plotting the net value against time, the average value per time stamp is considered. Example: average value per month. So, when the time series is adjusted accordingly (by considering the average value per time stamp instead of the net overall value), the artificial variation caused by the different month lengths is removed. And on your question about: **Do we even need to worry about this when forecasting using a time series model or does the model account for this?** As you can see, the difference in the number of days per month increases the complexity of the series, so with the calendar adjustment technique one can simplify the model without losing the features and details of the data. So, now **we take into account the number of days**, **do not need to consider the number of business days in a month**, and also **do not need to worry about this when forecasting using a time series model**. This smooths the plot and effectively removes the variation due to the different month lengths, thus making the pattern simpler. [![The picture would make things more clear:](https://i.stack.imgur.com/oMTB6.png)](https://i.stack.imgur.com/oMTB6.png) This is excellently explained [here](https://www.otexts.org/fpp/2/4). The R code:

```
library("fpp")
monthdays <- rep(c(31,28,31,30,31,30,31,31,30,31,30,31),14)
monthdays[26 + (4*12)*(0:2)] <- 29
par(mfrow=c(2,1))
plot(milk, main="Monthly milk production per cow", ylab="Pounds",xlab="Years")
plot(milk/monthdays, main="Average milk production per cow per day", ylab="Pounds", xlab="Years")
```
This is a definition of the sufficient statistic from Wikipedia. 

> 
> A statistic $t = T(X)$ is sufficient for underlying parameter $θ$ precisely if the conditional probability distribution of the data $X$, given the statistic $t = T(X)$, does not depend on the parameter $θ$, i.e.,
> $$
> \text{Pr}(x|t,\theta) = \text{Pr}(x|t)
> $$
> 
> 

I have trouble interpreting the definition. I don't understand the meaning of a probability conditional on a "parameter" $\theta$. How can knowing the value of a parameter make a difference to the probability distribution? If $\theta$ is a "parameter", which is not a random variable and so has no uncertainty, shouldn't $Pr(x|\theta) = Pr(x)$ always hold? For example, let's consider throwing an unfair die. The probability of each face may be considered as a parameter. The result of the experiment of throwing the die does not depend on my knowledge of the probabilities of each face. The experiment is physically determined, and my knowledge of the die has nothing to do with the outcomes. So conditioning on $\theta$ is meaningless and we have trivially $\text{Pr}(x|t,\theta) = \text{Pr}(x|t)$ for any $t$ and $\theta$. What is my misconception here?
If your issue is with the use of a conditioning symbol "|" instead of a semi-colon or an index in the notation of a family of distributions $$\text{Pr}(x|\theta)\qquad\text{vs.}\qquad \text{Pr}(x;\theta) \qquad \text{vs.}\qquad\text{Pr}\_\theta(x)$$ a first answer is that it is a matter of notations and that once one has clearly set a notation for this family it can be used as such. (Notation-wise, using $\text{Pr(x)}$ for a distribution should be discouraged as it does not apply to continuous and mixed random variables.) The distribution (and the associated density) are dependent on the value of $\theta$, meaning that two different $\theta$'s [should] lead to two different densities. This function is thus defined conditional on the chosen value of $\theta$ and one could not use it without knowing this value of $\theta$. A second answer is that a conditional distribution sets the conditioning variable to a fixed value. In a conditional density $f(x|y)$ the value $y$ is constant while $x$ varies on the state space $\mathcal{X}$, e.g., $\mathbb{R}$. The value of $y$ taken by the conditioning variable $Y$ thus determines the function [of $x$] and for all purposes acts like a parameter. There is thus no [pragmatic] distinction to be made between $y$ and a parameter $\theta$ when using this conditional density. Furthermore, if $\theta$ becomes random, as for instance in a Bayesian analysis, adopting the conditional sign on $\theta$ makes notations coherent [with a joint distribution on $x$ and $\theta$].
I recently got asked the following question: > > A set of $n$ cities are numbered from 1 to $n$. Given a positive integer $g$, two cities are connected if their greatest common divisor is greater than $g$. The number $n$ may be as large as $10^5$. > > > Alice has a list of $q$ cities that she'd like to visit. Given Alice's list of $q$ origin cities and $q$ destination cities, return a boolean vector with a 1 in the $i$th position indicating that it's possible to reach the $i$th destination city from the $i$th origin city, and 0 if not. The length of the list $q$ may be as large as $10^5$ as well. > > > I solved this problem with a quadratic time solution. However, the large upper bound for $n$ and $q$ clearly implies there is a better runtime. I haven't yet seen a way to speed up the creation of the graph, however. My solution builds the graph in quadratic time, using a [union find](https://en.wikipedia.org/wiki/Disjoint-set_data_structure) data structure to handle the component merging, and Euclid's algorithm for computing gcd's. The fundamental question is whether there exists a faster than $O(n^2)$ algorithm to construct and query an undirected graph, when edges are defined by a pairwise rule, and it appears that every pair $(i, j)$ for $1 \leq i,j \leq n$ needs to be looked at.
Pseudo code for finding the minimum connected edges. ``` n=100 lst = range(1,101) g=10 ans = list(tuple()) for d in range(g+1,n+1): for i in range(1,n/d): ans.append((d*i,d*(i+1))) print ans print "size="+str(len(ans)) ``` Output: ``` [(11, 22), (22, 33), (33, 44), (44, 55), (55, 66), (66, 77), (77, 88), (88, 99), (12, 24), (24, 36), (36, 48), (48, 60), (60, 72), (72, 84), (84, 96), (13, 26), (26, 39), (39, 52), (52, 65), (65, 78), (78, 91), (14, 28), (28, 42), (42, 56), (56, 70), (70, 84), (84, 98), (15, 30), (30, 45), (45, 60), (60, 75), (75, 90), (16, 32), (32, 48), (48, 64), (64, 80), (80, 96), (17, 34), (34, 51), (51, 68), (68, 85), (18, 36), (36, 54), (54, 72), (72, 90), (19, 38), (38, 57), (57, 76), (76, 95), (20, 40), (40, 60), (60, 80), (80, 100), (21, 42), (42, 63), (63, 84), (22, 44), (44, 66), (66, 88), (23, 46), (46, 69), (69, 92), (24, 48), (48, 72), (72, 96), (25, 50), (50, 75), (75, 100), (26, 52), (52, 78), (27, 54), (54, 81), (28, 56), (56, 84), (29, 58), (58, 87), (30, 60), (60, 90), (31, 62), (62, 93), (32, 64), (64, 96), (33, 66), (66, 99), (34, 68), (35, 70), (36, 72), (37, 74), (38, 76), (39, 78), (40, 80), (41, 82), (42, 84), (43, 86), (44, 88), (45, 90), (46, 92), (47, 94), (48, 96), (49, 98), (50, 100)] size=101 ``` We can see that edges like (11,33), (11, 44) etc are not part of the set even though they are directly connected. After we get the minimum number of connected edges we can apply disjoints sets to solve the above question.
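The sub-quadratic construction the question is asking about can be built on the same observation: for every $d > g$, any two multiples of $d$ have gcd at least $d$, so it suffices to union consecutive multiples of each $d$, which is roughly $\sum\_{d>g} n/d = O(n \log n)$ union operations instead of $O(n^2)$ pairwise gcd checks. A rough Python sketch (function and variable names are just illustrative):

```python
def connected_cities(n, g, queries):
    # Union-Find over cities 1..n.  For every d > g, all multiples of d are
    # mutually connected (gcd of two multiples of d is at least d > g), so it
    # is enough to union consecutive multiples of d.
    parent = list(range(n + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for d in range(g + 1, n + 1):
        for m in range(2 * d, n + 1, d):
            union(m - d, m)

    return [1 if a == b or find(a) == find(b) else 0 for a, b in queries]

print(connected_cities(20, 3, [(4, 8), (5, 7), (3, 6)]))   # [1, 0, 0]
```

Each query is then answered in near-constant time, so the whole thing runs in about $O(n \log n + q)$.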
I have a generator of files with approximately 7 bits /byte entropy. The files are about 30KB each in length. I'd like to use these as sources of entropy to generate random numbers. Theoretically I should be able to achieve about 26 KB (max) of randomness per file. This is a hobby and I know about hashes, but they're primarily checksums. I'm looking for more sophisticated methods of extraction designed specifically for that purpose. I'm hoping to use this as a true random number generator. Supplemental: Following comments, and in order to solicit non hash based responses, I'm evaluating the following architecture... [![Extractor architecture](https://i.stack.imgur.com/KmWJQ.png)](https://i.stack.imgur.com/KmWJQ.png) It works like this:- The entropy is the 30KB file The compressor reduces the entropy to a target size of 26KB A seed is derived from all of the bytes of the entropy Some PRNG implementation produces 26KB of pseudo random output The two outputs are xored together to produce true random numbers So in summary, a 30KB file of 86% entropy is manipulated into a 26KB file of 100% entropy. Entropy is preserved throughout the extraction process, and all the output is totally dependant on the input. 26KB of full entropy goes in, and 26KB of full entropy comes out. I suggest that this is a good method to extract entropy from complete files. Or not.
In practice, in about 95% of cases, the correct answer is probably going to be: forget about those files, use `/dev/urandom` or `CryptGenRandom()` (or equivalent). They provide very high quality random numbers -- That's what they're designed for. So this is the most pragmatic answer, and it will provide excellent quality randomness. --- Let's say you can't do that, maybe because it's a headless embedded device with no access to entropy, or you want your random numbers to be a repeatable deterministic function of the files. Then in that case the best answer is likely to be -- guess what? -- hashing. In particular: concatenate all the files, take the SHA256 hash of the concatenation, and then use that as a key for AES-256-CTR; then use AES-256-CTR to generate as much output as you want. This is basically using a cryptographic hash to generate a seed for a cryptographic-quality pseudorandom generator, and it satisfies all your requirements. This is a solid answer, from the perspective of robust engineering and high quality random numbers. --- OK, let's say you are a theorist. If you are a theorist, you won't like the last answer, because it relies upon cryptographic assumptions that have not been mathematically proven to hold. Pragmatically, you probably shouldn't worry about that -- you rely upon those assumptions every day (e.g., when doing e-commerce or entering your password into a website), and other issues are far more likely to affect you in practice -- but let's say you are a theorist. From a theoretical perspective, the cryptographic solution is potentially unsatisfying, because it relies upon an unproven assumption, and that is... inelegant. So, it's interesting to ask what can be achieved that will provably work. If that's what you're interested in, you'll want to read about [randomness extractors](https://en.wikipedia.org/wiki/Randomness_extractor). If you care about pragmatics, randomness extractors are probably not the best possible solution: extractors need stronger assumptions about the distribution of the data in your files, they are more limited in how much random output they can produce, and they are more fragile. But they *do* come with provably good properties, and they are mathematically elegant and beautiful.
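A minimal sketch of the hash-then-expand construction described above, using Python's `hashlib` plus the third-party `cryptography` package for AES-256-CTR (the file names are placeholders):

```python
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def random_bytes_from_files(paths, n_bytes):
    # Condense: hash the concatenation of all entropy files into a 256-bit seed.
    h = hashlib.sha256()
    for p in paths:
        with open(p, "rb") as f:
            h.update(f.read())
    key = h.digest()

    # Expand: use the seed as an AES-256-CTR key and encrypt zeros to get
    # as much cryptographic-quality pseudorandom output as needed.
    nonce = b"\x00" * 16           # acceptable here because each key is used once
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return encryptor.update(b"\x00" * n_bytes)

# e.g. random_bytes_from_files(["entropy1.bin", "entropy2.bin"], 26 * 1024)
```

As noted, this relies on standard cryptographic assumptions about SHA-256 and AES rather than on provable extractor guarantees.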
How can I compute the significance test (p-value) for the correlation coefficient (r) using R or Matlab? I.e., can anybody help me with suitable code to compute the p-value for the correlation coefficient in R or Matlab? The outputs of the calculators available online for computing the p-value for r are totally different! That's why I am looking for trusted code to compute it using R or Matlab.
Say `r <- cor(x, y); n <- length(x)`. Two-tailed tests: 1. Testing using Student's t-distribution: `t <- r*sqrt((n-2)/(1-r^2)); p <- 2*pt(-abs(t), n-2)` 2. Using the Fisher transformation: `z <- (atanh(r)-0)*sqrt(n-3); p <- 2*pnorm(-abs(z))` (In R, `cor.test(x, y)` should give you the first of these p-values directly.)
A friend of mine has asked me to help him with predictive modelling of car traffic in a medium-sized parking garage. The garage has its busy and easy days, its peak hours and dead hours, and set opening hours (it is open for 12 hours on weekdays and 8 hours on weekends). The goal is to predict how many cars will enter the garage during a given day (say, tomorrow) and how these cars are expected to be distributed over the day. Please point me to general references (preferably publicly available) on strategies and techniques. Thank you.
The field that is relevant to the problem is [Queuing theory](http://en.wikipedia.org/wiki/Queueing_model); a particular sub-field is [Birth-death](http://en.wikipedia.org/wiki/Birth-death_process) processes. An article that in my opinion is helpful to your task is R.C. Larson and K. Satsunama (2010), [*Congestion Pricing: A Parking Queue Model*](http://www.jise.info/issues/volume4no1/01-V4N1-Full.pdf); following the links in its references would give more ideas on where to proceed. Note that the R package [queueing](http://cran.r-project.org/web/packages/queueing/) has recently been released (with a misprint in the title, however). Finally, I think that this link to [queuing software](http://web2.uwindsor.ca/math/hlynka/qsoft.html) could be helpful.
I am not sure if this question is more suitable for CS, theoretical CS, or math, so feel free to improve the description and migrate it. In a scenario very similar to popular binary classification machine learning contests, competitors are required to submit their answers for a test set containing N data points. The accuracy of the submission will be announced. The competitors can then submit a new answer, and the new accuracy will always be published, whether it improved or not. If the competitor has no information about the test set at all, what is the optimal way (in the sense that the fewest submissions are needed) to crack the challenge?
Your problem is tackled in Vaishampayan, [Query Matrices for Retrieving Binary Vectors Based on the Hamming Distance Oracle](https://arxiv.org/abs/1202.2794). Vaishampayan relates this problem to one considered in Lev and Yuster, [On the size of dissociated bases](https://arxiv.org/abs/1005.0155), and the upshot (if I understand things correctly) is that a random test set of size $O(N/\log N)$ (for an appropriate hidden constant) would do if you don't care about carrying out the decoding procedure efficiently. Vaishampayan describes a more structured solution using $o(N)$ tests which can be implemented efficiently.
I am trying to understand whether regular languages and their regular expressions are concepts of computer science in general, and whether they were discovered or invented by computer scientists in particular. In mathematical discourse there has always been a question like "Is mathematics discovered or invented?". I am quite sure regular languages and regexes are not a mathematical theory but a linguistics theory (even though they were first theorized by a mathematician, Stephen Cole Kleene). Is regex part of computer science, and if so, is it generally accepted as "discovered" or as "invented", and what could be one example of what's being further researched in regard to matching information in a computer document?
There are several things that are all called regular expressions. The answer to your question is different depending upon which thing you want to talk about. The three relevant distinctions for this question in my opinion are as follows: ### First The notion of [regular languages](https://en.wikipedia.org/wiki/Regular_language) and related things like [recursive enumerability](https://en.wikipedia.org/wiki/Recursively_enumerable_language). Individual regular languages are isomorphic (i.e. able to be losslessly transformed to and from) to [deterministic finite automata](https://en.wikipedia.org/wiki/Deterministic_finite_automaton), and reducing something to a regular language demonstrates results about that thing's computability, so I would argue it is part of computing science. If linguistics folks find the notion of regular languages useful, however, then we can share it. Human languages are (generally?) not regular languages but something more complex, so I would be surprised if that were the case. I think most interested people would give the same answer to whether regular languages are invented or discovered as they would about mathematics. ### Second The particular notation for describing regular languages, in the formal literature, developed by Kleene and others. This is clearly invented. There's no particular reason we had to use `*` and `+` in particular to represent those ideas. If you want to assign this notation to one or more fields, I would argue it belongs to any field that publishes papers using it. ### Third Computer programs which take as input an expression in a language and produce an automaton or something resembling one, whose result can then be used to perform operations on other input, including search and replace. The languages these programs accept usually contain some form of the Kleene star and Kleene plus, along with several other more complicated but useful constructions which are generally absent from the formal regular-language literature, to my knowledge. Backreferences would be one example of this. These programs are invented, as are their particular sets of notations. They are also instances of computer engineering rather than computing science per se, although of course they build on at least a certain amount of computing science, and computing-science results can be stated about these programs. For example, it is possible to construct regular expressions of this sort that produce exponential running times, and subsets of those notations can be defined where that is impossible.
Have been trying to get a working database connection to OrientDB (v2.1.16) from my Java-based Spark (v2.0.0) code but had no luck so far. The only available connector specifically for OrientDB/Spark seems to only work with Scala code: <https://github.com/metreta/spark-orientdb-connector> Since Java and Scala both execute on the JVM I imagine it should be possible to use it from Java but haven't seen how. OrientDB has a JDBC driver and I have also tried that route: ``` SparkSession spark = SparkSession.builder().master("local").appName("My Spark App").getOrCreate(); Map<String, String> options = new HashMap<String,String>(); options.put("driver", "com.orientechnologies.orient.jdbc.OrientJdbcDriver"); options.put("url", "jdbc:orient:remote:192.168.1.2/mydb"); options.put("dbtable", "Customer"); options.put("user", "user"); options.put("password", "password"); Dataset< org.apache.spark.sql.Row> df = spark.read().format("jdbc").options(options).load(); ``` This gives an error on all queries I've tried (supplying Spark with the OrientDB JDBC jar in spark-submit): ``` Exception in thread "main" java.sql.SQLException: No current record at com.orientechnologies.orient.jdbc.OrientJdbcResultSetMetaData.getCurrentRecord(OrientJdbcResultSetMetaData.java:202) at com.orientechnologies.orient.jdbc.OrientJdbcResultSetMetaData.getColumnCount(OrientJdbcResultSetMetaData.java:84) at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:130) at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:117) ``` Has anyone successfully accomplished this?
Daniel, What you did goes under the name of "oversampling." There is a sample of some "real" population, and you replace it with a sample from a "manufactured" population. The problem that makes sense in application is the estimation of $$P\_r(Y=1|X) = \text{probability of response=1 in the $\mathbf{real}$ population given the predictor $X$} $$ but by using an oversample you are estimating $$P\_m(Y=1|X) = \text{probability of response=1 in the $\mathbf{manufactured}$ population given the predictor $X$}$$ The two probabilities are related. I'll work out the details. I'll pretend the predictor $X$ is discrete. If $X$ takes numerical values one has to replace some probabilities by probability densities. $$\dots\dots\dots$$ To simplify the notation, let $\pi\_1 = P\_r(Y=1)$ and $\mu\_1 = P\_m(Y=1)$ be the probabilities of response in the real and manufactured populations, and let $$ L\_r = \frac{P\_r(X=x|Y=1)}{P\_r(X=x|Y=0)} = \frac{\frac{P\_r(Y=1|X=x)}{P\_r(Y=0|X=x)}}{\frac{\pi\_1}{1-\pi\_1} }$$ be the odds ratio of $Y=1$, i.e.: the ratio of the odds among cases with $X=x$ and the odds in the general $\mathbf{real}$ population. Finally, let $L\_m$ be the corresponding ratio in the $\mathbf{manufactured}$ population. By Bayes' Theorem: $$ P\_r(Y=1|X=x) = \frac{P\_r(Y=1,X=x)}{P\_r(X=x)} = \\ =\frac{P\_r(X=x|Y=1)\space \pi\_1}{P\_r(X=x|Y=1)\space\pi\_1 + P\_r(X=x|Y=0)\space (1 - \pi\_1)} = \\ =\frac{L\_r\space \pi\_1}{L\_r\space\pi\_1 + \space (1 - \pi\_1)} \tag{1} $$ In a similar way, we get an analogous result for the manufactured population: $$ P\_m(Y=1|X=x) = \frac{L\_m\space \mu\_1}{L\_m\space\mu\_1 + \space (1 - \mu\_1)} \tag{2} $$ Since the manufactured sample is a random sample, stratified by $Y$, the conditional distribution of $X$ within responders is the same as in the real population, and the same holds for non-responders, i.e.: $$ P\_m(X=x|Y=j) = P\_r(X=x|Y=j) $$ for $j=0,1.$ If the sample stratified by values of $Y$ were anything other than a random sample, these would not be true. It follows that $\boxed{ L\_r = L\_m }$. Next we solve for $L\_m$ in terms of $P\_m(Y=1|X)$ from (2) and replace in (1). $$\dots\dots\dots$$ $\mathbf{Digression}$: Here is my easy way to carry out the steps, without mess. Two non-zero vectors $\mathbf{v\_1}$, $\mathbf{v\_2}$ are parallel iff there is $\lambda \ne 0$ such that $\mathbf{v\_1}=\lambda \mathbf{v\_2}.$ Below I will use this idea, and I will not care about the exact value of $\lambda$, so I will be using "$\lambda$" as a short-hand for "$\mathbf{\text{some non-zero number}}$." $\mathbf{\text{End Digression}}$ $$\dots\dots\dots$$ The easy way to solve is to observe that for non-zero messy values of $\lambda$ (not the same in each occurrence!) one has: $$ \begin{bmatrix} P\_r(Y=1|X) \\ 1 \\ \end{bmatrix} = \lambda \begin{bmatrix} \pi\_1 &0\\ \pi\_1 &1-\pi\_1 \end{bmatrix} \begin{bmatrix} L\_r\\ 1 \end{bmatrix} , $$ and $$ \begin{bmatrix} P\_m(Y=1|X) \\ 1 \\ \end{bmatrix} = \lambda \begin{bmatrix} \mu\_1 &0 \\ \mu\_1 &1-\mu\_1 \end{bmatrix} \begin{bmatrix} L\_m\\ 1 \end{bmatrix} . $$ Therefore, $$ \begin{bmatrix} L\_m\\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \mu\_1 &0 \\ \mu\_1 &1-\mu\_1 \end{bmatrix}^{-1} \begin{bmatrix} P\_m(Y=1|X) \\ 1 \\ \end{bmatrix} , $$ and so (remember that here $\lambda$ stands for "some non-zero number") $ \space \begin{bmatrix} P\_r \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \pi\_1 &0 \\ \pi\_1 &1-\pi\_1 \end{bmatrix} \begin{bmatrix} L\_r \\ 1 \end{bmatrix} = \\ \text{ }=\lambda \begin{bmatrix} \pi\_1 &0 \\ \pi\_1 &1-\pi\_1 \end{bmatrix} \begin{bmatrix} \mu\_1 &0 \\ \mu\_1 &1-\mu\_1 \end{bmatrix}^{-1} \begin{bmatrix} P\_m \\ 1 \\ \end{bmatrix} = \\ \text{ }=\lambda \begin{bmatrix} \pi\_1 (1- \mu\_1) &0 \\ \pi\_1 - \mu\_1 & \mu\_1 (1- \pi\_1) \end{bmatrix} \begin{bmatrix} P\_m \\ 1 \\ \end{bmatrix} = \lambda \begin{bmatrix} \pi\_1 (1-\mu\_1) P\_m \\ (\pi\_1 - \mu\_1) \; P\_m + \mu\_1 (1- \pi\_1) \end{bmatrix}. $ Thus, $$ P\_r = \frac{\pi\_1 (1- \mu\_1) P\_m}{(\pi\_1 - \mu\_1) \; P\_m + \mu\_1 (1- \pi\_1) } $$ $$\dots\dots\dots$$ Example: Let's work out the details of a binomial model, $$P\_m(Y=1|X) = \frac{e^{\beta\_0 + \beta X}}{1+e^{\beta\_0 + \beta X}} $$ or in the "where $\lambda$ is some non-zero scalar" notation (I would not have digressed before if I did not have an ulterior motive.. :) ): $$ \begin{bmatrix} P\_m \\ 1 \\ \end{bmatrix} = \lambda \begin{bmatrix} e^{\beta\_0 + \beta X} \\ 1 + e^{\beta\_0 + \beta X} \end{bmatrix} $$ What is the implied model in the real population? $ \space \begin{bmatrix} P\_r \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} \pi\_1 (1- \mu\_1) &0 \\ \pi\_1 - \mu\_1 &\mu\_1 (1- \pi\_1) \end{bmatrix} \begin{bmatrix} P\_m \\ 1 \end{bmatrix} = $ $\;$ $ = \lambda \begin{bmatrix} \pi\_1 (1- \mu\_1) &0 \\ \pi\_1 - \mu\_1 &\mu\_1 (1- \pi\_1) \end{bmatrix} \begin{bmatrix} e^{\beta\_0 + \beta X} \\ 1 + e^{\beta\_0 + \beta X} \end{bmatrix}= $ $\;$ $ = \lambda \begin{bmatrix} \pi\_1 (1- \mu\_1) e^{\beta\_0 + \beta X} \\ \pi\_1 (1- \mu\_1) e^{\beta\_0 + \beta X} + \mu\_1 (1- \pi\_1) \end{bmatrix} = \lambda \begin{bmatrix} \frac{\pi\_1 (1- \mu\_1)}{\mu\_1 (1- \pi\_1)} e^{\beta\_0 + \beta X} \\ 1 + \frac{\pi\_1 (1- \mu\_1)}{\mu\_1 (1- \pi\_1)} e^{\beta\_0 + \beta X} \end{bmatrix} .$ If we let $\tau = \ln(\frac{\pi\_1 (1- \mu\_1)}{\mu\_1 (1- \pi\_1)})$, we can absorb this constant into the exponent to get: $$ \begin{bmatrix} P\_r \\ 1 \end{bmatrix} = \lambda \begin{bmatrix} e^{\tau + \beta\_0 + \beta X} \\ 1 + e^{\tau + \beta\_0 + \beta X} \end{bmatrix} .$$ Taking the ratio and cancelling the non-zero constant in numerator and denominator, we get that fitting a logistic model to the manufactured population results in an implied logistic model for the real population, $\mathbf{\text{with the same coefficients for X}}$ and with a difference in the constant (in the logistic model) given by: $$ \beta\_{real} = \tau + \beta\_0 $$ $$\dots$$ Note that, according to your reference, the ratio of $\gamma\_1 = Pr(Z=1|Y=1)$ and $\gamma\_0 = Pr(Z=1|Y=0)$ should come up. Indeed: $$ \gamma\_1 = Pr(Z=1|Y=1) = \frac{P(Z=1,Y=1)}{P(Y=1)} = \frac{P\_r(Y=1|Z=1)P\_r(Z=1)}{P\_r(Y=1)} = \frac{P\_m(Y=1)}{P\_r(Y=1)} P\_r(Z=1)= \frac{\mu\_1}{\pi\_1}P\_r(Z=1) $$ likewise (i.e. change $Y$ to $1-Y$), $$ \gamma\_0 = \frac{1-\mu\_1}{1-\pi\_1}P\_r(Z=1) $$ so $$ \ln\left(\frac{\gamma\_0}{\gamma\_1}\right) = \ln\left(\frac{\pi\_1 (1-\mu\_1)}{\mu\_1 (1-\pi\_1)}\right) = \tau $$ $$\dots\dots\dots$$ Notes for full disclosure: I worked with the probability model. When one works with finite samples, the example above suggests two ways of estimating the coefficients: (1) estimate the coefficients using the sample from the real population, or (2) estimate the coefficients using the manufactured population. It turns out that these two estimators are not the same (which is obvious if one considers that one estimator is based on more cases than the other). Both estimators are asymptotically consistent, but it can be shown that the one based on the manufactured population is more biased (I forgot the reference :( ). In the data science space we are more concerned with the quality of the predictions than with the values of the parameters used to make those predictions, so as long as you check results properly (e.g., using one data set to build models and another to validate them), the bias in the parameters should not deter us from using oversampling. $$\dots\dots\dots$$
From what I understand (which is very little, so please correct me where I err!), theory of programming languages is often concerned with "intuitionistic" proofs. In my own interpretation, the approach requires us to take seriously the consequences of *computation* on logic and provability. A proof cannot exist unless there exists an *algorithm* constructing the consequences from the hypotheses. We might reject as an axiom the principle of the excluded middle, for instance, because it exhibits some object, which is either $X$ or $\lnot X$, nonconstructively. The above philosophy might lead us to prefer intuitionistically valid proofs over ones that are not. However, I have not seen any concern about actually using intuitionistic logic in papers in other areas of theoretical CS. We seem happy to prove our results using classical logic. For example, one might imagine using the principle of the excluded middle to prove that an algorithm is correct. In other words, we care about and take seriously a computationally-limited universe in our results, but not necessarily in our proofs of these results. **1. Are researchers in theoretical CS ever concerned about writing intuitionistically valid proofs?** I could easily imagine a subfield of theoretical computer science that seeks to understand when TCS results, especially algorithmic ones, hold in intuitionistic logic (or more interestingly, when they don't). But I have not yet come across any. **2. Is there any philosophical argument that they should?** It seems like one could claim that computer science results ought to be proven intuitionistically when possible, and we ought to know which results require *e.g.* PEM. Has anyone tried to make such an argument? Or perhaps there is a consensus that this question is just not very important? **3.** As a side question, I am curious to know examples of cases where this actually matters: **Are there important TCS results known to hold in classical logic but not in intuitionistic logic?** Or suspected not to hold in intuitionistic logic. Apologies for the softness of the question! It may require rewording or reinterpretation after hearing from the experts.
It is worth thinking about WHY intuitionistic logic is the natural logic for computation, since all too often people get lost in the technical details and fail to grasp the essence of the issue. Very simply, classical logic is a logic of perfect information: all statements within the system are assumed to be known or knowable as unambiguously true or false. Intuitionistic logic, on the other hand, has room for statements with unknown and unknowable truth values. This is essential for computation since, thanks to the undecidability of termination in the general case, it will not always be certain what the truth value of some statements will be, or even whether or not a truth value can ever be assigned to certain statements. Beyond this, it turns out that even in strongly normalizing environments, where termination is always guaranteed, classical logic is still problematic, since double negation elimination $\neg\neg P \implies P$ ultimately boils down to being able to pull a value "out of thin air" rather than directly computing it. In my opinion, these "semantic" reasons are a much more important motivation for the use of intuitionistic logic for computation than any other technical reasons one could marshal.
Polynomial identity testing is the standard example of a problem known to be in **co-RP** but not known to be in **P**. Over arithmetic *circuits*, it does indeed seem hard, since the degree of the polynomial can be made exponentially large by repeated squaring. [This question](https://cs.stackexchange.com/questions/39610/polynomial-identity-testing-evaluating-a-polynomial-on-a-circuit) addresses the issue of how to work around this and keep the problem in randomized polynomial time. On the other hand, when the problem is initially presented (e.g. [here](http://www.math.uni-bonn.de/~saxena/papers/pit-survey09.pdf)), it is often illustrated over arithmetic *expressions* containing only constants, variables, addition, and multiplication. Such polynomials have total degree at most polynomial in the length of the input expression, and for any such polynomial the size of the output value is polynomial in the size of the input values. But since a polynomial of degree $d$ has at most $d$ roots, isn't this trivial? Just evaluate the polynomial over the rationals at *any* $d + 1$ distinct points and check whether the result is zero at each point. This should take only polynomial time. Is this correct? If so, why are arithmetic expressions without shared subexpressions often used as examples, when sharing is essential to the difficulty of the problem?
That [isn't known to be trivial](http://www.cs.sfu.ca/~kabanets/papers/poly_derand.pdf). The polynomial ​$x \cdot y$ ​ has infinitely many roots. (When either variable is zero, the other variable won't affect the polynomial's value.)
I am asking this question out of curiosity. I recently encountered this well-known paper (published in 2009) on [the hardness of Euclidean k-means](https://link.springer.com/content/pdf/10.1007/s10994-009-5103-0.pdf). The paper showed that the previous NP-hardness result ([link](https://www.cc.gatech.edu/~vempala/papers/dfkvv.pdf)) for Euclidean k-means (discovered in 2004, with a preliminary version appearing in 1999) was wrong. Note that it took around 5 years before somebody pointed out that the previously known result was incorrect. They also mention that many well-known papers (like the [kmeans++](https://dl.acm.org/doi/pdf/10.5555/1283383.1283494) paper) cited the incorrect hardness result until then. Even when I read a paper, I find some minor mistakes. However, they are easily fixable and do not change the main result very much. I want to ask whether there has been any fundamental or highly cited paper which was later found to be incorrect, and due to which the entire understanding of the field changed. **Edit:** After reading some of the answers, I want to point out another issue: why are these incorrect papers not updated after being pointed out as wrong? I mean, some kind of notice should be provided by the governing body that shares the link. In my case (for the example I gave above), it took me two years to figure out why people in the year 2002 were designing PTASs for the k-means problem (for fixed k) if the hardness result only came later, in the year 2009. It can be quite frustrating for a person who is not familiar with that field.
One example is the claimed proof of the Gilbert-Pollak conjecture on the Steiner ratio, which appeared in [FOCS'90](https://doi.org/10.1109/FSCS.1990.89526) and [Algorithmica](https://doi.org/10.1007/BF01758755). The conjecture is [now considered open](https://doi.org/10.1007%2Fs00453-011-9508-3). Other examples include a sequence of [algorithms on graph embedding](http://dx.doi.org/10.1016/j.jcss.2010.06.002). There are more examples, but without similarly published refutations they are difficult to list, due to the subtle nature of such errors.
On smaller window sizes, `n log n` sorting might work. Are there any better algorithms to achieve this?
If you maintain a length-k window of data as a sorted doubly linked list then, by means of a binary search (to insert each new element as it gets shifted into the window) and a circular array of pointers (to immediately locate elements that need to be deleted), each shift of the window requires O(log(k)) effort for inserting one element, only O(1) effort for deleting the element shifted out of the window, and only O(1) effort to find the median (because every time one element is inserted or deleted into the list you can update a pointer to the median in O(1) time). The total effort for processing an array of length N therefore is O((n-k)log(k)) <= O(n log(k)). This is better than any of the other methods proposed so far and it is not an approximation, it is exact.
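If you want to experiment with this, here is a Python sketch using the third-party `sortedcontainers` package; its `SortedList` is backed by a list of lists rather than the doubly linked list described above, but it gives the same indexable, sorted window and roughly the same $O(n \log k)$ overall behaviour:

```python
from sortedcontainers import SortedList   # pip install sortedcontainers

def sliding_median(a, k):
    # Keep the current window in an indexable sorted container: each shift
    # does one O(log k)-ish insert, one delete, and an O(1) median lookup.
    def median(w):
        return w[k // 2] if k % 2 else 0.5 * (w[k // 2 - 1] + w[k // 2])

    window = SortedList(a[:k])
    out = [median(window)]
    for i in range(k, len(a)):
        window.remove(a[i - k])   # element leaving the window
        window.add(a[i])          # element entering the window
        out.append(median(window))
    return out

print(sliding_median([5, 2, 8, 1, 9, 3, 7], 3))   # [5, 2, 8, 3, 7]
```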
In XGBoost, the objective function is $J(f\_t)=\sum\_{i=1}^{n}L(y\_i,\hat{y}\_i^{(t-1)}+f\_t(x\_i))+\Omega{(f\_t)}+C$. If we take a Taylor expansion of the objective function, we let $$g\_i=\frac{\partial{L(y\_i,\hat{y}\_i^{(t-1)})}}{\partial{\hat{y}\_i^{(t-1)}}}$$ and $$h\_i=\frac{\partial^2{L(y\_i,\hat{y}\_i^{(t-1)})}}{\partial{(\hat{y}\_i^{(t-1)})^2}}$$ If the loss function $L$ is the loss function of logistic regression, i.e. $L=-\sum\_{i=1}^{m}\left[y\_i\log(h\_i)+(1-y\_i)\log(1-h\_i)\right]$, then I think $$g\_i=\frac{\partial{L}}{\partial{h\_i}}=-\left(\frac{y\_i}{h\_i}+(1-y\_i)\cdot\frac{-1}{1-h\_i}\right)$$ that is $$g\_i=-\frac{y\_i-h\_i}{h\_i(1-h\_i)}$$ However, in the example given by the XGBoost package, they say the $g\_i$ of the logistic regression loss is $g\_i=h\_i-y\_i$. Here is the g and h definition:

```
def logregobj(preds, dtrain):
    labels = dtrain.get_label()
    preds = 1.0 / (1.0 + np.exp(-preds))
    grad = preds - labels
    hess = preds * (1.0-preds)
    return grad, hess
```

The full code can be found [here](https://gist.github.com/zhpmatrix/83846972e2631b94e38af4d25c2de797#file-custom_objective-py). I don't get it. Can anyone help? Thanks in advance! Well, it seems that there is another version which I think is correct.

```
def custom_loss(y_pre,D_label):
    label=D_label.get_label()
    penalty=2.0
    grad=-label/y_pre+penalty*(1-label)/(1-y_pre)
    hess=label/(y_pre**2)+penalty*(1-label)/(1-y_pre)**2
    return grad,hess
```

Although the penalty seems weird.
The discrepancy is due to the interpretation of $y\_i^{t-1}$. In your derivation, you're assuming it is the probability $h\_i$, whereas the code author has defined it as the log odds $logit(h\_i) = log(\frac{h\_i}{1-h\_i})$. Re-express the loss as a function of log odds instead of probability (define $O\_i = logit(h\_i)$): $$ L\_i = -y\_i O\_i + log(1 + exp(O\_i)) $$ And find the derivative with respect to the log odds: $$ g\_i = \frac{d L\_i}{d O\_i} = h\_i - y\_i $$ (Side note: As stated by @Sycorax, you're overloading the term $h\_i$ because the xgboost paper authors define it as the 2nd order gradient statistic)
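A quick numerical check (a sketch, not part of the xgboost example) confirms that the derivative of the loss with respect to the log odds is indeed $h\_i - y\_i$:

```python
import numpy as np

def loss(o, y):
    # logistic loss written as a function of the log odds o
    return -y * o + np.log1p(np.exp(o))

rng = np.random.default_rng(0)
o = rng.normal(size=5)                     # raw scores (log odds), as xgboost passes them
y = rng.integers(0, 2, size=5).astype(float)
h = 1.0 / (1.0 + np.exp(-o))               # predicted probabilities

eps = 1e-6
numeric_grad = (loss(o + eps, y) - loss(o - eps, y)) / (2 * eps)
print(np.allclose(numeric_grad, h - y, atol=1e-5))   # True
```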
What are the best practices to save, store, and share machine learning models? In Python, we generally store the binary representation of the model, using pickle or joblib. Models, in my case, can be ~100Mo large. Also, joblib can save one model to multiple files unless you set `compress=1` (<https://stackoverflow.com/questions/33497314/sklearn-dumping-model-using-joblib-dumps-multiple-files-which-one-is-the-corre>). But then, if you want to control access rights to models, and be able to use models from different machines, what's the best way to store them? I have a few choices: * Store them as files, and then put them in a repository using Git LFS * Store them in an SQL database as binary files: + For instance in Postgresql <https://wiki.postgresql.org/wiki/BinaryFilesInDB> + This is also the method recommended by the SQL Server team: - <https://docs.microsoft.com/en-us/sql/advanced-analytics/tutorials/walkthrough-build-and-save-the-model> - <https://microsoft.github.io/sql-ml-tutorials/python/rentalprediction/step/3.html> - <https://blogs.technet.microsoft.com/dataplatforminsider/2016/10/17/sql-server-as-a-machine-learning-model-management-system> * HDFS
I have faced this problem for many years (and still face it today). I really think that, if you don't provide detailed requirements, you can't expect a serious answer. Let me explain with examples from my own work:

* I regularly try multiple variations of the same model to find what parameters work best. It takes several days to train a single model, which produces some output that is later used for evaluation. To do so, I make a simple NumPy dump of the model (see the sketch below), since it is easy to share it between servers or colleagues. You should avoid pickle, since it stores much more (class instances, libraries...) than just the parameters learned by your model. Importing the model on another machine might not work if the Python environment differs slightly.
* When pushing a model into production, I need 1) a version of the model that I can load fast in case of a server breakdown (typically a binary format, storing only what is necessary, such as the weights of a neural network) and 2) a way to keep the model in RAM to quickly deal with the API requests. For two different purposes, I need three different formats.

Then, more generally speaking, the choice of the format depends on the tools you use. For example, if you work with TensorFlow, you might be interested in their [TensorFlow Serving](https://www.tensorflow.org/serving/) system.
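To make the "dump only the parameters, not the object" point concrete, here is a minimal sketch (the file name and weight names are placeholders, not a prescribed convention):

```python
import numpy as np

# Save only the learned parameters, not the Python object wrapping them.
weights = {"W1": np.random.randn(128, 64), "b1": np.zeros(64)}
np.savez_compressed("model_params.npz", **weights)

# On another machine (possibly a slightly different environment),
# rebuild the model from the raw arrays alone.
params = np.load("model_params.npz")
W1, b1 = params["W1"], params["b1"]
```

Unlike a pickle, this file contains nothing but arrays, so it does not break when class definitions or library versions change.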
Problem ------- Let $X\sim f\_X$, where $f\_X$ is the probability density function of $X$. Let $g: \mathbb{R} \to \mathbb{R}$ a strictly monotonic (decreasing or increasing) mapping. I aim to prove or disprove: $$ g[\mathrm{median} (X)] = \mathrm{median}[g(X)] $$ --- Try --- Let $x\_0 = \mathrm{median} (X)$, $Y=g(X)$, $y\_0 = g(x\_0)$. Then, we want to show: $$ 0.5 = \int\_{-\infty}^{y\_0} f\_Y (y) dy $$ WLOG, assume $g$: increasing. By change of variables, $f\_Y(y) = f\_X(x)/g'(x)$ and $dy=dx g'(x)$. Thus, $$ \begin{align} \int\_{-\infty}^{y\_0} f\_Y (y) dy &= \int\_{-\infty}^{y\_0} f\_X(x) dx \\ &\neq \int\_{-\infty}^{x\_0} f\_X(x) dx = 0.5 \end{align} $$ So, if $f\_X>0$ a.s., then the above statement holds **only if** $g(x\_0) = x\_0$. However, take an example of log-normal distribution, where $X\sim N(\mu, \sigma^2)$ and $g(x)=\exp(x)$. There is no real solution to $g(x\_0) = x\_0$, and $f\_X >0$ a.s. But it is still true that $\mathrm{median}[g(X)] = g(\mu)$. So my disproof is invalid? Any suggestions are welcome.
**Here is a generalization.** It is intended to reveal which properties of probability are involved in the result. It turns out that density functions are irrelevant. --- Any number $\mu$ determines two events relative to the random variable $X$: $\mathscr E^-\_\mu(X): X \le \mu$ and $\mathscr E^+\_\mu: X \ge \mu.$ Any (strictly) monotonic transformation $g$ either preserves these events or reverses them, in the sense that the two sets $g(E^{\pm}\_\mu(X))$ are the two sets $E^{\pm}\_{g(\mu)}(g(X)).$ This follows immediately from the definition of (strict) monotonicity as preserving equality and either preserving or reversing strict inequality. When $0\lt q \lt 1$ is a probability, it defines a *nonempty set* of numbers -- any one of which can be called the $q^{\text{th}}$ *quantile* of $X$ -- consisting of all $\mu\_q$ for which $\Pr(E^-\_{\mu\_q}(X)) \ge q$ and $\Pr(E^+\_{\mu\_q}(X)) \ge 1-q.$ (**Proof**: start with a very large number $\alpha$ and decrease it as long as $\Pr(E^-\_{\alpha}(X)) \ge q$. Start with a very small number $\beta$ and increase it as long as $\Pr(E^+\_{\beta}(X))\ge 1-q.$ If $\alpha \gt \beta,$ there exists $\gamma$ strictly between $\alpha$ and $\beta$ that, by construction, satisfies $\Pr(E^-\_{\gamma}(X)) \lt q$ (because $\gamma \lt \alpha$) and $\Pr(E^+\_{\gamma}(X)) \lt 1-q$ (because $\gamma \gt \beta$). But then $$1 = \Pr(\mathbb R) = \Pr(E^-\_{\gamma}(X)\cup E^+\_{\gamma}(X)) \lt \Pr(E^-\_{\gamma}(X)) + \Pr(E^+\_{\gamma}(X)) \lt q + 1-q = 1,$$ a contradiction. Notice how this *constructive* demonstration relies only on two simple probability axioms and a basic property of real numbers.) The $q$ quantiles of $X$ are defined to be the interval $[\alpha,\beta],$ which we have seen is nonempty. As a matter of temporary notation, denote such a set by $X[q].$ Putting these two observations together, we conclude that > > any strictly monotonically increasing transformation $g$ maps quantiles to quantiles (that is, $g(X[q]) \subseteq g(X)[q]$ for all $q\in(0,1)$) while a strictly monotonically decreasing transformation maps the $q$ quantiles into the $1-q$ quantiles (that is, $g(X[q])\subseteq g(X)[1-q]$). > > > (As @Ilmari Karonen kindly points out in comments, $g$ is guaranteed to preserve quantiles when it is continuous, for then it will have an inverse that maps quantiles back into quantiles.) **Applying this to the case $q=1/2=1-q$ shows that $g$ preserves medians *as sets*.** When the median is unique (a singleton set), the median of $g(X)$ therefore is $g$ applied to the median of $X.$ You may wish to show that when $X$ has a density function defined in a neighborhood of its $q$ quantile and is not identically zero in any such neighborhood, it has a unique $q$ quantile: that is, $X[q]$ is a singleton.
I am conducting an experiment investigating lineup accuracy and witness confidence. Long story short: we want to know what the pattern of false positives, hits and misses on a lineup task is under different lineup conditions, and how confidence may vary with, or independently of, accuracy. Logically, witness confidence may also be affected by the different conditions, and we'd like to know this as well. The between-subjects variables are: gender (male, female), ethnicity (Asian, Caucasian), and lineup type (sequential, where people see each lineup member one at a time and make a decision about each one, and simultaneous, where people see all the lineup members and make a decision about whether they see the perpetrator or not). The within-subjects variables are: photo type (same vs different photo of the person), lineup ethnicity (Asian vs. Caucasian lineups), and confidence (5 levels of a Likert scale from 1 "not confident at all" to 5 "extremely confident"). The dependent variable is accuracy in terms of hits, misses and false positives (these could be coded as 0 or 1?) and correct recognition (hits minus false positives). One of the problems is that we want to know the relationship between confidence and accuracy, which would necessitate that confidence is an independent variable; however, we also want to know if the other variables might affect confidence (such as ethnicity or lineup type), so I'm having trouble figuring out the best way to analyse this data. Does anyone have any answers for me? Someone suggested maybe logistic regression, but they weren't really sure. I'm really not used to dealing with categorical data, so am in need of help!
The relationship between confidence and accuracy might potentially be investigated by considering them as a multivariate response; that is, you could analyze the conditional correlation between the two, conditional on some set of predictors. The difficulty with that is that accuracy is a proportion while confidence is a Likert-scale item, which makes the usual multivariate analysis tricky. It might be possible to deal with the Likert item as a multinomial logit and with some multivariate binomial model relating those to the accuracy variable. Another possibility might be to look at partial least squares type models (but again, even if you treat the Likert item as interval, there's the problem of the binomial accuracy); yet another possibility is some Bayesian graphical model.
I am trying to detect patterns in huge code bases. I managed to filter the entire codebase into a tagged string, as in: ABACBABAABBCBABA The result should be: ABA \*3 CBA \*2 I'm trying to build / use an algorithm which will find ANY unknown repeating pattern inside the string. The length of the pattern, its composition, and the number of repeats are unknown. To count as a pattern, it must occur at least twice and have at least 2 items. Once I detect the patterns I can represent them back in their original context. I have tried iterating over each tag: for each tag, find the following tag in the string, and continue until adding a tag matches only one repeat, at which point there is no more pattern. I get lost in the implementation (in JS or Python) and I'm hoping there is a better way. Thanks.
Construct the [suffix tree](https://en.wikipedia.org/wiki/Suffix_tree) of your string, which takes time linear in the length of the string (assuming a finite alphabet). Every inner node represents a repeat, their respective descendant leaves encode the positions.
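If you just want to sanity-check results on short strings before implementing a suffix tree, a brute-force baseline is easy to write. This Python sketch simply enumerates every substring of length at least 2 and keeps those occurring at least twice (it also reports sub-patterns such as "AB", which you may want to filter out); it is quadratic, unlike the linear-time suffix tree above:

```python
from collections import defaultdict

def repeated_substrings(s, min_len=2):
    # O(n^2) substrings; a suffix tree gives the same information in
    # linear time, as described above.
    counts = defaultdict(int)
    n = len(s)
    for i in range(n):
        for j in range(i + min_len, n + 1):
            counts[s[i:j]] += 1
    return {sub: c for sub, c in counts.items() if c >= 2}

repeats = repeated_substrings("ABACBABAABBCBABA")
print(repeats["ABA"], repeats["CBA"])   # 3 2
```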
I am looking for an invertible discrete function $f:\{0,1,2,\dots,n-1\} \to \{0,1,2,\dots,n-1\}$ for some given integer $n$. I want $f(0),f(1),\dots,f(n-1)$ to return all the integers in range $[0..n)$ exactly once, but in a "messy", random-seeming arrangement. I anticipate that $n$ will be not bigger than $2^{30}$. I thought about finding a generator for the group `<Zn,*>`, but I'm not sure if it would work for any given $n$ (would it?). Any other ideas?
You are looking for a pseudorandom permutation on the set $\{0,1,2,\dots,n-1\}$. In cryptography, this has been studied under the (counter-intuitive) name "[format-preserving encryption](https://en.wikipedia.org/wiki/Format-preserving_encryption)". There are a number of constructions you could use for your purposes. There's a bunch of research literature on the problem, with different schemes that are optimized for different values of $n$. You can also find some summaries on Cryptography.SE. I recommend you start by reading the question and the answers at [Lazily computing a random permutation of the positive integers](https://cs.stackexchange.com/q/29822/755) and [Encrypting a 180-bit plaintext into a 180 bit ciphertext with a 128-bit block cipher](https://cs.stackexchange.com/q/47775/755) and [What are the examples of the easily computable "wild" permutations?](https://cs.stackexchange.com/q/41124/755).
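For a feel of how such constructions work, here is a toy Python sketch of the Feistel-plus-cycle-walking idea that underlies many format-preserving schemes. It is **not** one of the vetted constructions from the links (the round function and key below are placeholders), but it does define a bijection on $\{0,\dots,n-1\}$ that looks scrambled:

```python
import hashlib

def _round_fn(half, key, r, bits):
    # Keyed pseudo-random round function (illustrative only).
    data = key + bytes([r]) + half.to_bytes(8, "big")
    digest = hashlib.blake2b(data, digest_size=8).digest()
    return int.from_bytes(digest, "big") & ((1 << bits) - 1)

def _feistel(x, key, half_bits, rounds=4):
    # A balanced Feistel network is a permutation of [0, 2**(2*half_bits)).
    left, right = x >> half_bits, x & ((1 << half_bits) - 1)
    for r in range(rounds):
        left, right = right, left ^ _round_fn(right, key, r, half_bits)
    return (left << half_bits) | right

def permute(i, n, key=b"demo key"):
    # Cycle-walking: apply the permutation until the value lands in [0, n).
    # Since the underlying map is a bijection, so is this map on [0, n).
    half_bits = (max(2, (n - 1).bit_length()) + 1) // 2
    x = i
    while True:
        x = _feistel(x, key, half_bits)
        if x < n:
            return x

n = 10
print(sorted(permute(i, n) for i in range(n)) == list(range(n)))  # True: it is a permutation
print([permute(i, n) for i in range(n)])                          # scrambled order
```

Inverting it just means running the Feistel rounds backwards and cycle-walking with the inverse map; for $n$ up to $2^{30}$ the expected number of walking steps is small because the Feistel domain is at most about $4n$.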
I recently came across a strange concept and was wondering if this was a known / named concept in the realm of CS. The concept is that you evaluate some computation or logical circuit that takes in N number of binary inputs and gives an output. (and if doing multiple of these in parallel it could then be N inputs to M outputs) You can then change your mind after the fact about the values of the binary inputs you want the algorithm to have evaluated and essentially re-interpret the result of the original equation to get the answer for those different inputs. The benefit here would be that if there was some complex and lengthy computation, that you could do it once, and then just re-interpret the result to get the answer of that same algorithm for different inputs. It sounds strange I know, but are there any existing methods for this, or terminology at least? **Example:** As an example let's say you calculated a function $f(x) = x \* 8$ for $x=5$ in a specific way that gave you a resulting bit string. When an operation was done on the bit string, which was a function of the inputs (say, XOR against a number which was the function of the inputs for example), that the value came out to be 40. But then, you say "OK but what would it be for $f(6)$?". Since 6 is different than 5, the XOR constant changes, but you can use that new XOR constant against the result you already got from the previous calculations, to get the new correct answer of 48. The decoding process is the same regardless of the algorithm/function being evaluated. It is a function of the inputs, and has nothing to do with the details of the function itself (so isn't iterative computation). it almost seems a little related to a karnaugh map, in that you get something boiled down to the results of the algorithm, no matter how complex the steps were to get to that result originally.
It's difficult to know what you mean because you're staying at a level that's so high that there's nothing interesting. Specific cases could be very interesting, but the basic idea that having computed a function for one input can make it easier to compute it for other inputs is too general. It may be that you have a function $f$ such that having computed $f(x)$, you gain some information about $f(g(x))$ for certain choices of functions $g$: there's an equation of the form $\forall x, f(g(x)) = h(f(x))$. The general term for such properties is a [morphism](https://en.wikipedia.org/wiki/Morphism): $f$ is a morphism from $(D,g)$ to $(R,h)$ where $D$ is the domain of $f$ and $R$ is its range. The interesting part is finding useful algebraic structures with such properties. Sometimes the computation of $f(x)$ involves work that's the same for all possible inputs, or for a subset of possible inputs that are in some sense sufficiently similar. Splitting the computation into a common part and an input-specific part is known as [partial evaluation](https://en.wikipedia.org/wiki/Partial_evaluation). The interesting aspect is finding useful ways to partially evaluate a function, and useful ways to memorize and consult partial results without wasting more time consulting cached partial results than would have been spent recalculating the thing.
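As a tiny illustration of partial evaluation, here is a Python sketch that splits a function into work depending only on one argument (done once) and work depending on the per-call input:

```python
# Naive version: the normalization work is redone on every call.
def score(weights, x):
    norm = sum(w * w for w in weights) ** 0.5
    return sum((w / norm) * xi for w, xi in zip(weights, x))

# Partially evaluated version: the part that depends only on `weights`
# is computed once; the returned closure does only input-specific work.
def make_scorer(weights):
    norm = sum(w * w for w in weights) ** 0.5
    normalized = [w / norm for w in weights]
    def scorer(x):
        return sum(w * xi for w, xi in zip(normalized, x))
    return scorer

scorer = make_scorer([3.0, 4.0])
print(scorer([1.0, 1.0]), score([3.0, 4.0], [1.0, 1.0]))   # both ~1.4
```

Whether anything like this applies to your setting depends on the specific function and on which parts of its computation are actually shared between inputs.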
This seems related to these questions at a glance:

[What are some problems which are easily solved by human brain but which would take more time computers?](https://cs.stackexchange.com/questions/9911/what-are-some-problems-which-are-easily-solved-by-human-brain-but-which-would-ta)

[What would show a human mind is/is not reducible to a Turing machine?](https://cs.stackexchange.com/questions/24312/what-would-show-a-human-mind-is-is-not-reducible-to-a-turing-machine)

But not quite: I am not asking about "time", but about power. Also, I am not interested in the Turing test. That said, my question can also be expressed in two parts:

* *Is there a language, which cannot be recognized by any Turing machine, that can be recognized by a human?*
* *Is there a language, which cannot be decided by any Turing machine, that can be decided by a human?*

And vice versa. The "language" I am talking about is the "mathematical" notion of a language, not only a "human" or "programming" language: $$L \subseteq \Sigma^\*$$ Since this is a question about computational power, I would make the following assumptions:

* Humans do not make mistakes (here I mean mistakes like copying the wrong character or doing arithmetic incorrectly, typical human errors)
* There is no space limit (the Turing machine gets an infinite tape, you get an infinite medium to write on)
* There are no time constraints
* However, the recognition/decision must be achieved within finite time
* And, of course, in finite space

Please give an example if you have an answer. Remember, this is a theoretical question, so practical issues are not a concern.

**EDIT 1** OK, as someone pointed out, I will add the following assumptions. *Human* is probably not easy to define in precise mathematical terms, so let's just assume "you". By a human recognizing a string in a language, I mean performing the same task the Turing machine is "programmed" to do: given a string, whether you (a human) can recognize it when it conforms to a set of rules, or decide whether or not it conforms to a set of rules. I am not sure if I can make the point clear enough...

**EDIT 2** OK, to clarify, this *is* a question about models of computation, so yes, as André Souza Lemos mentioned, I am talking about "given a word $w$ and a language $L$, is the sentence $w\in L$ decidable". I am not talking about a physical computer.

**EDIT 3** OK, this is another idea I came up with. Does the theory of models of computation cover inputs that are themselves volatile, that is, inputs that change on their own? That is probably not the "language recognition" problem, though...
The *computational* problem of defining a language has led science to the concept of a **model of computation**. It is within the context of a model of computation that problems such as "given a word $w$ and a language $L$, is the sentence '$w \in L$' decidable?" can be tackled. The computer I am using now to write this answer is not a model of computation, per se. Which languages can be decided by my computer (or by any computer that has the same specifications) is a question that cannot be answered from a purely theoretical point of view. If my computer had unlimited memory, it would not be my computer; it would be something else, completely different. Should you (or anybody else) formulate a model of computation that is *inspired* by how the human brain (or body) works $-$ and models like this exist $-$ it could possibly help answer some version of your question, but probably not in a way that could satisfy your fundamental curiosity. Finally: a computational system can solve a problem by learning from its mistakes. Mistakes can be an essential part of the computational process, as defined by some models of computation.
Can anyone recommend a tool to quickly label several hundred images as an input for classification? I have ~500 microscopy images of cells. I would like to assign categories such as `'healthy'`, `'dead'`, `'sick'` manually for a training set and save those to a csv file. Basically, the same as described in [this](https://datascience.stackexchange.com/questions/13335/build-a-tool-for-manually-classifying-training-data-images) question, except I do not have proprietary images, so maybe that opens up additional possibilities?
Try this tool. It is very simple and does exactly what you want → assign label(s) to images in a given folder. <https://github.com/robertbrada/PyQt-image-annotation-tool> [![enter image description here](https://i.stack.imgur.com/iihhf.png)](https://i.stack.imgur.com/iihhf.png)
I am fairly new to statistics. Currently I am working through histograms, medians, the arithmetic mean and the other general basics, and I came across the fact/rule that the arithmetic mean is (always) larger than the median if the distribution is skewed to the right. Why is that? (I would appreciate a rather simple or easily understandable answer.)
Here is a simple answer: skew to the right means that the largest values are farther from the mean than the smallest values are (I know that isn't technically precise, but it gets the idea across). If the largest values are farther from the mean, they will influence the mean more than the smallest values will, thus making it larger. The effect on the median, however, is the same for the largest and smallest values. For example, let's start with some symmetrically distributed data: 1 2 3 4 5 mean = 3, median = 3. Now let's skew it to the right by making the largest values bigger (farther from the mean): 1 2 3 40 50 mean = 96/5 = 19.2 ... but the median is still 3.
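If you want to check this numerically, here is a quick Python sketch (my addition) using the standard-library `statistics` module on the same two toy datasets:

```
from statistics import mean, median

symmetric = [1, 2, 3, 4, 5]
right_skewed = [1, 2, 3, 40, 50]

print(mean(symmetric), median(symmetric))        # 3, 3
print(mean(right_skewed), median(right_skewed))  # 19.2, 3
```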
This is problem 2.44 from *Introduction to the Theory of Computation* by Michael Sipser.

> If $A$ and $B$ are languages, define $A \diamond B = \{xy: x \in A \land y \in B \land |x| = |y|\}$.
>
> Show that if $A$ and $B$ are regular languages, then $A \diamond B$ is a $CFL$.

**My try:** Let us define the following languages: $$L\_1 = \{ x\#y : |x| = |y| \}$$ $$L\_2 = \{ x\#y : x \in A \land y\in B \}$$ $L\_1$ is context-free, which can be proven in a way similar to what is done [here](https://cs.stackexchange.com/questions/307/show-that-xy-%E2%88%A3-x-y-x-%E2%89%A0-y-is-context-free). $L\_2$ is a concatenation of the regular languages $A$, $\{\#\}$ and $B$, and hence regular. Context-free languages are closed under intersection with regular languages, and hence $L\_1 \cap L\_2 = \{x\#y: x \in A \land y \in B \land |x| = |y|\}$ is context-free. Let us define the homomorphism $h$ such that $h(\#)=\epsilon$ and $h(\sigma)=\sigma$ for every other symbol $\sigma$. Then $h(L\_1 \cap L\_2)=A \diamond B$, and since context-free languages are closed under homomorphisms, we conclude the requested result.

---

Does my proof make sense?
There are many ways to prove this. Here is another one, which proves a stronger result: $A \diamond B$ is [linear](https://en.wikipedia.org/wiki/Linear_grammar). Suppose that both $A,B$ are languages over a common alphabet $\Sigma$. Let $\langle Q\_A,q\_{0A},\delta\_A,F\_A \rangle$ and $\langle Q\_B,q\_{0B},\delta\_B,F\_B \rangle$ be DFAs for $A^R$ (the reverse of $A$) and for $B$. We construct a linear grammar with nonterminals $\{S\} \cup Q\_A \times Q\_B$ and the following rules:

* For every $f\_A \in F\_A$ and $f\_B \in F\_B$, the rule $S \to \langle f\_A, f\_B \rangle$.
* For every $q\_A \in Q\_A$, $q\_B \in Q\_B$, and $\sigma\_A,\sigma\_B \in \Sigma$, the rule $\langle \delta\_A(q\_A,\sigma\_A), \delta\_B(q\_B,\sigma\_B) \rangle \to \sigma\_A \langle q\_A, q\_B \rangle \sigma\_B$.
* The rule $\langle q\_{0A},q\_{0B} \rangle \to \epsilon$.

In fact, your proof also shows that $A \diamond B$ is linear, since $\{a^n \# b^n : n \geq 0 \}$ is easily seen to be linear, and all the closure properties you use also hold for linear languages.
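To make the construction concrete, here is a small Python sketch (my own illustration, not part of the answer) that mechanically emits the productions for two toy languages: $A$ = words over $\{a,b\}$ ending in $a$ (so the first DFA below recognizes $A^R$, the words starting with $a$) and $B$ = words with an even number of $b$'s.

```
from itertools import product

SIGMA = "ab"

# DFA for A^R: state 0 = start, 1 = accepting (saw a leading 'a'), 2 = reject sink.
dfa_A_rev = {"start": 0, "accept": {1},
             "delta": {(0, "a"): 1, (0, "b"): 2,
                       (1, "a"): 1, (1, "b"): 1,
                       (2, "a"): 2, (2, "b"): 2}}

# DFA for B: the state is the parity of the number of b's seen so far; 0 accepts.
dfa_B = {"start": 0, "accept": {0},
         "delta": {(0, "a"): 0, (0, "b"): 1,
                   (1, "a"): 1, (1, "b"): 0}}

def linear_grammar(dA, dB, sigma=SIGMA):
    """Emit the productions of the linear grammar for A <> B described above."""
    rules = [f"S -> <{fa},{fb}>" for fa in dA["accept"] for fb in dB["accept"]]
    states_A = {q for (q, _) in dA["delta"]}
    states_B = {q for (q, _) in dB["delta"]}
    for qa, qb, sa, sb in product(states_A, states_B, sigma, sigma):
        pa, pb = dA["delta"][(qa, sa)], dB["delta"][(qb, sb)]
        rules.append(f"<{pa},{pb}> -> {sa} <{qa},{qb}> {sb}")
    rules.append(f"<{dA['start']},{dB['start']}> -> epsilon")
    return rules

for rule in linear_grammar(dfa_A_rev, dfa_B):
    print(rule)
```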
I have a prediction model that performs a binary classification. The model takes 3 independent predictor variables that can only be integers. I have calculated true positive rates (TPR) and false positive rates (FPR) for each combination of the three predictors. Each dot in the figure below represents the TPR (y-axis) vs the FPR (x-axis) of one combination. [![True positive rate vs False positive rate for all possible combinations of the three parameters](https://i.stack.imgur.com/oyPQW.png)](https://i.stack.imgur.com/oyPQW.png) Since there is more than one parameter, I cannot apply a conventional ROC curve; however, is there a way to represent the success of a model with multiple parameters? Or would it still be reasonable to construct a ROC curve for best- and worst-case scenarios, such as in the figure below? [![best case scenario](https://i.stack.imgur.com/ht5YO.png)](https://i.stack.imgur.com/ht5YO.png)
In practice, where a large number of probabilities must be estimated from data, I have my doubts, unless the word 'cause' is used loosely. It may help to think through the situation where there are only two variables in the dataset. What would the Bayes-ball algorithm reduce to? To me a more fruitful way of thinking about this is to put a Bayesian prior probability on causation. An excellent example is the cigarette smoking and lung cancer one in Nate Silver's book *The Signal and the Noise* which I highly recommend.
What are the basic complexity class separation and inclusion results that everybody should know? (I mean specifically results that are known and whose proofs can be understood by a non-expert.) It would be great to have a book or paper (or set of papers) which goes through and proves all the simplest and most fundamental results in this area. Where could I find it? Thank you.
I like [Papadimitriou's book](http://rads.stackoverflow.com/amzn/click/0201530821). In particular, it has "Class Review" sections at the end of several chapters, illustrating the relations among complexity classes. The relevant sections are:

* Section 10.4.1 (page 235): P, NP, coNP.
* Section 11.5.1 (page 272): P, ZPP, RP, coRP, BPP, NP, coNP, PP.
* Section 15.5.1 (page 385): AC, NC, RNC, P.
* Section 16.4.1 (page 405): NC$\_1$, L, SL, RL, SC, NL, PolyL, P.
* Section 17.3.1 (page 433): PH and PSPACE.
* Section 20.2.1 (page 499): P, NP, coNP, PSPACE, EXP, NEXP, coNEXP, EXPSPACE, 2-EXP, ELEMENTARY, R.

The book also includes a nice take on the time- and space-hierarchy theorems; see Chapter 7.

**EDIT:** I also recommend the Complexity Zoo's [Active Inclusion Diagram](http://www.math.ucdavis.edu/~greg/zoology/diagram.xml) (requires Firefox 1.5 or later, or Opera 9 or later). If you don't have that, you may try the Zoo's [Static Inclusion Diagram](http://www.math.ucdavis.edu/~greg/zoology/diagram.pdf).
I've been working on a classification problem and have some good results, but now I struggle with trying to put together a good plot to illustrate the probabilities for each prediction. Here is my current data:

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df_voting_output.head(n=5)

   prob      actual  pred  correct
0  0.460200  0       0     1
1  0.548478  1       1     1
2  0.270609  0       0     1
3  0.686557  0       1     0
4  0.527935  0       1     0
5  0.098687  0       0     1
```

I've been able to create a bar chart with probabilities using this code:

```
plt.bar(np.arange(len(voting_predictions[:,1])), voting_predictions[:,1])
plt.xlabel("record number")
plt.ylabel("probability")
plt.title("Classification Probabilities")
plt.show()
```

[![current chart](https://i.stack.imgur.com/SbJA2.png)](https://i.stack.imgur.com/SbJA2.png)

However, I'm thinking there has to be a better way to include more information and a key. I'd like the "correct" labels to be clearly visible, so from a visual standpoint you can see how the probabilities relate to a correct classification.
When considering how to clean the text, we should think about the data problem we are trying to solve. Here are a few more preprocessing steps that can improve your features:

1.) Use a good tokenizer (TextBlob, Stanford tokenizer).

2.) Try lemmatization; stemming does not always perform well on news articles.

3.) Word segmentation.

4.) Normalization (equivalence classing of terms).

**For selecting a model**

1.) In your example above, we classified the document by comparing the number of matching terms in the document vectors. In the real world numerous more complex algorithms exist for classification, such as **Support Vector Machines (SVMs)**, **Naive Bayes**, **Decision Trees** and **Maximum Entropy**.

2.) You can think of your problem as clustering news articles and extracting the semantic relationships between source articles from these clusters. You can try topic modelling (**LDA and LSA**) and **Doc2vec/word2vec** techniques to obtain vectors for documents/words and then use these vectors for the classification task.

Further, if you are unsure how to select an appropriate model for your problem, you can read [Choosing Machine Learning Algorithms: Lessons from Microsoft Azure](http://machinelearningmastery.com/choosing-machine-learning-algorithms-lessons-from-microsoft-azure/).
From my understanding of the proof that the halting problem is not computable, the problem is not computable because if we have a program P(x) which computes whether the program x halts or not, we get a paradox when giving P as input to P itself: P(P) tries to decide whether P halts by using P itself. So my question is: is the halting problem computable by a program P for all programs given as input except P itself? In other words: is the halting problem non-computable only in this special case, or is the proof more general and I'm missing something?
If $f$ is any computable function, then $g$, defined as $$ g(n) = \begin{cases} f(n) & \mbox{if } n \neq k \\ v & \mbox{otherwise} \end{cases} $$ is also computable, for any choice of $k,v$. Basically, if you have a program $P'$ which computes $g(n)$ for all $n$'s except for $n=k$, you can "fix" that case (e.g. using an `if then else`) and obtain another program $P$ which computes $g(n)$ for all $n$. Hence, if you could compute the halting function "except for one case", you could also compute the halting function (with no exceptions). From that, you can obtain a contradiction as usual. Conclusion: no, you can't decide the halting problem "except one case" (nor "except finitely many cases").
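To make the "fix one case" step concrete, here is a tiny Python sketch (my own illustration) of the general fact used above: patching a computable function at a single point keeps it computable, which is exactly why a halting decider "with one exception" would yield a full halting decider.

```
def patch(f, k, v):
    """Return a program for the function that agrees with f everywhere
    except that input k is mapped to v (the "if then else" fix)."""
    def g(n):
        return v if n == k else f(n)
    return g

square = lambda n: n * n
g = patch(square, 3, -1)    # still computable, even though it disagrees with square at 3
print(g(2), g(3), g(4))     # 4 -1 16
```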
I'm trying to analyze two negatively-correlated variables, `A` and `B` (where `A` is the independent variable) while somehow taking into account a categorical variable `C`, with the intention of highlighting data that deviates above expected values. For example, in the following subset of my data: ``` #, A, B, C 1, 14, 55, "X" 2, 12, 75, "X" 3, 10, 65, "X" 4, 14, 40, "Y" 5, 12, 30, "Y" 6, 10, 35, "Y" Average: A, B 14, 55 12, 60 10, 65 ``` I'd like to be able to highlight data point 2 because it deviates above the average value, but I'd also like to highlight data point 4, because although it deviates below the average value, it deviates above the expected value within its category. I know how to do a simple linear regression on `A` and `B`, but I don't know how to account for the categorical variable.
Using ***R*** you can just fit a linear model, such as `fit <- lm(A ~ B + C, data = your_data_name)`. This will produce two intercepts, one for each level ("X" and "Y") of the factor "C". From there you can perform diagnostics by simply invoking `plot(fit)`, or with specific commands, such as `dfbeta(fit)`.

**EDIT:**

Following is an example of *OLS* regression calculated both with linear algebra and with the *R* built-in functions, to kind of avoid leaving any stone unturned...

Scenario: We want to predict the money spent on self-indulgence in one year (don't ask...) in $US based on two parameters: demographics (*categorical* or *factor* variable with *levels* "ch" for child; "f" for female; and "m" for male) and height (in cm). I know... silly... But I'll stay out of controversy...

Here are the vectors to code the *y* variable (yearly expenditures) and to code for the demographics dummy variable:

```
y <- c(50, 60, 800, 1000, 1800, 2000)
ch <- c(1, 1, 0, 0, 0, 0) # Selecting the children
f <- c(0, 0, 1, 1, 0, 0) # Selecting the women
m <- c(0, 0, 0, 0, 1, 1) # Selecting the men
```

Let's construct the *design matrix A*:

```
A <- cbind(ch, f, m)
```

It looks like this:

```
     ch f m
[1,]  1 0 0
[2,]  1 0 0
[3,]  0 1 0
[4,]  0 1 0
[5,]  0 0 1
[6,]  0 0 1
```

And let's calculate the OLS model, remembering the formula: $\hat{\beta}=(A^{T}A)^{-1}A^{T}y$

```
solve(t(A) %*% A) %*% t(A) %*% y

   [,1]
ch   55
f   900
m  1900
```

Exactly the average expenditure for each demographic group! Makes sense...

Using the built-in formula in R:

```
lm(y ~ A)

(Intercept)         Ach          Af          Am
       1900       -1845       -1000          NA
```

We get the same results, with the (Intercept) corresponding to males (that's why "Am" is NA); children being 1900 - 1845 = 55; and women, 1900 - 1000 = 900.

Now let's add the variable height in cm:

```
H <- c(90, 80, 145, 150, 175, 180) #children, women, men
```

And let's add this vector to the design matrix:

```
A <- cbind(A, H)
```

And run the regression - we want to predict money spent based on demographic group and height:

```
round(solve(t(A) %*% A) %*% t(A) %*% y,3)

        [,1]
ch -1021.667
f   -968.333
m   -348.333
H     12.667
```

Now the intercepts make no intuitive sense, because we want a slope to fit the entire data cloud: H = 12.7. The built-in function $lm(y \sim A)$ results in:

```
Coefficients:
(Intercept)         Ach          Af          Am          AH
    -348.33     -673.33     -620.00          NA       12.67
```

corresponding to -348.33 - 673.33 = -1021.66 for children and -348.33 - 620.00 = -968.33 for females. Let's see if it sort of works:

```
fit <- lm(y ~ A)
round(predict(fit, A = as.data.frame(H)),0)

   1    2    3    4    5    6
 118   -8  868  932 1868 1932
```

compared to `y <- c(50, 60, 800, 1000, 1800, 2000)`... Not that bad...
Most resources on categorical notions in programming describe monads, but I've never seen a categorical description of monad transformers. **How could [monad transformers](https://en.wikibooks.org/wiki/Haskell/Monad_transformers) be described in terms of category theory?** In particular, I'd be interested in:

* the relationship between monad transformers and their corresponding base monads;
* the relationship between them and the monads they're transforming into new monads;
* monad transformer stacks.
Augmenting Andrej's answer: There is still no widespread agreement on the appropriate interface monad transformers should support in the functional programming context. Haskell's [MTL](https://hackage.haskell.org/package/mtl) is the de-facto interface, but Jaskelioff's [Monatron](https://hackage.haskell.org/package/Monatron) is an alternative. One of the earlier technical reports by Moggi, [an abstract view of programming languages](http://www.disi.unige.it/person/MoggiE/ftp/abs-view.pdf), discusses what should be the right notion of transformer to some extent (section 4.1). In particular, he discusses the notion of an operation for a monad, which he (20 years later) revisits with Jaskelioff in [monad transformers as monoid transformers](http://www.disi.unige.it/person/MoggiE/ftp/tcs10.pdf). (This notion of operation is different from Plotkin and Power's notion of an [algebraic operation for a monad](http://homepages.inf.ed.ac.uk/gdp/publications/sem_alg_ops.pdf), which amounts to a Kleisli arrow.)
Quote from Babe Ruth: > > Every strike brings me closer to the next home run. > > > As I understand memorylessness, this is meaningless. For every at-bat, there is a certain probability that he will strike, and there is a certain probability that he will hit a home run, and that's that. The likelihood of a home run at any particular point in time does not increase as strikes accrue. However, I have an *intuitive* understanding of what he means. Is there some statistically-rigorous way to express it or make sense of it? Maybe it makes sense for someone looking back on Babe Ruth's career with the benefit of hindsight. Or, maybe if we imagine an omniscient deity who can see the entire timeline of the universe at once. The deity can indeed see that, from any particular moment, there are N strikes remaining before Ruth hits the next home run. Another strike reduces that number to N-1. So, indeed, every strike brings him closer to the next home run. **Epilogue** If I could go back in time and rewrite this question, I would have omitted all the baseball references and simply described a guy rolling dice, hoping for a seven. He says, "I'm hoping for a seven, but I'm not bothered when I get something else, because every roll of the dice brings me closer to that seven!" Assuming he eventually rolls a seven, is his assertion the gambler's fallacy? Why or why not? Thanks to @Ben for articulating that this is *not* the gambler's fallacy. It would have been the gambler's fallacy if he had instead said, "Every roll of the dice which does not result in a seven makes it *more likely* that the next roll results in a seven." The guy didn't make any such statement, and he didn't make any statement at all about probability, merely about the passage of time. By assuming that there is a seven in his future, we have made it undeniably true that every roll of the dice brings him closer to the seven. In fact, it is trivially true. Every second that ticks by, even when he is sleeping, brings him closer to that seven.
Suppose that at at-bat $t$ Babe has access to the information in the filtration $\mathcal{F}\_t$. Write that Ruth's next home run will be at at-bat $N$. Further suppose that each at-bat has probability $p$ of being a home run. Based on the currently available information at at-bat $t$, our best guess of $N$ is $\mathbb{E}[N \mid \mathcal{F}\_t] = t + \frac{1-p}{p}$. At time $t+1$, our best guess is $\mathbb{E}[N \mid \mathcal{F}\_{t+1}] = t + 1 + \frac{1-p}{p}$. Notice that the expected home run time is always a constant $\frac{1-p}{p}$ at-bats in the future. Ruth is forgetting that the filtration updates.
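A quick simulation makes the point; the per-at-bat home-run probability below is made up purely for illustration. The conditional expected wait for the next home run is the same whether or not Ruth has just accumulated several strikes, so he gets "closer" only in the trivial sense that time has passed.

```
import random

random.seed(0)
p = 0.07                     # assumed per-at-bat home-run probability (illustrative)

def at_bats_until_homer():
    # Geometric waiting time: at-bats up to and including the next home run.
    n = 1
    while random.random() >= p:
        n += 1
    return n

samples = [at_bats_until_homer() for _ in range(100_000)]
print(sum(samples) / len(samples), 1 / p)      # both roughly 14.3

# Memorylessness: given that the first 5 at-bats were not home runs,
# the expected *remaining* wait is still roughly 1/p.
tail = [s - 5 for s in samples if s > 5]
print(sum(tail) / len(tail))
```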
For days I have been trying to figure out whether it is possible to find an item in an array which would be a kind of *weighted median* in linear time. It is very simple to do if you allow more than linear time. So let's say that we have an array where each item $x$ has two attributes: a price $c(x)$ and a weight $w(x)$. The goal is to find an item $x$ such that $$ \sum\_{y\colon c(y) < c(x)} w(y) \leq \frac{1}{2} \sum\_y w(y) \text{ and } \sum\_{y\colon c(y) > c(x)} w(y) \leq \frac{1}{2} \sum\_y w(y). $$ If the array were sorted by price, it would be simple: go through the items one by one, keep a running sum of the weights, and when the sum becomes greater than half the total weight, you have found the desired item. Could you give me a hint on how to find such an item in linear time?
Let $A$ be an input array containing $n$ elements, $a\_i$ the $i$-th element and $w\_i$ its corresponding weight. You can determine the weighted median in worst-case linear time as follows. If the array length is $\leq 2$, find the weighted median by exhaustive search. Otherwise, find the (lower) median element $a\_x$ using the worst-case $O(n)$ *selection* algorithm and then partition the array around it (using the worst-case $O(n)$ partition algorithm from QuickSort). Now determine the weight of each partition. If the weight of the left partition is $< \frac{1}{2}$ and the weight of the right partition is $\leq \frac{1}{2}$, then the weighted (lower) median is $a\_x$. If not, then the weighted (lower) median must necessarily lie in the partition with the larger weight. So you add the weight of the "lighter" partition to the weight of $a\_x$ and recursively continue searching in the "heavier" partition. Here is the algorithm's pseudocode (written in LaTeX).

```
FIND-WEIGHTED_LOWER_MEDIAN(A)

if $n$ == 1
    return $a_1$
elseif $n$ == 2
    if $w_1 \geq w_2$
        return $a_1$
    else
        return $a_2$
else
    determine $a_x$, the (lower) median of A
    partition A around $a_x$
    $W_{low}$ = $\sum\limits_{{a_i} < {a_x}} {{w_i}}$
    $W_{high}$ = $\sum\limits_{{a_i} > {a_x}} {{w_i}}$
    if $W_{low} < \frac{1}{2}$ AND $W_{high} \leq \frac{1}{2}$
        return $a_x$
    else
        if $W_{low} \geq \frac{1}{2}$
            $w_x = w_x + W_{high}$
            B = \{ a_i \in A: a_i \leq a_x \}
            FIND-WEIGHTED_LOWER_MEDIAN(B)
        else
            $w_x = w_x + W_{low}$
            B = \{ a_i \in A: a_i \geq a_x \}
            FIND-WEIGHTED_LOWER_MEDIAN(B)
```

Now let's analyze the algorithm and derive its worst-case complexity. The recurrence of this recursive algorithm is $T(n) = T(\frac{n}{2} + 1) + \Theta(n)$. Indeed, there is at most one recursive call, on half the elements plus the (lower) median. The initial exhaustive search on up to 2 elements costs $O(1)$, determining the (lower) median using the selection algorithm requires $O(n)$, and partitioning around the (lower) median requires $O(n)$ as well. Computing $W\_{low}$ or $W\_{high}$ is again $O(n)$. Solving the recurrence, we get the worst-case complexity of the algorithm, which is $O(n)$. Of course, a real implementation should not use the worst-case $O(n)$ selection algorithm, since it is well known that that algorithm is not practical (it is of theoretical interest only). However, the randomized $O(n)$ selection algorithm works pretty well in practice and can be used, since it's really fast.
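Here is a small runnable Python sketch of the same recursion (my own, not part of the answer). It uses `statistics.median_low` in place of a worst-case linear-time selection, so it is $O(n \log n)$ per level rather than truly linear, and it assumes the weights sum to 1:

```
from statistics import median_low

def weighted_lower_median(items):
    """items: list of (value, weight) pairs with weights summing to 1.
    Sketch of the recursive scheme above; median_low stands in for a
    worst-case linear-time selection, so this is not truly O(n)."""
    if len(items) == 1:
        return items[0][0]
    if len(items) == 2:
        return items[0][0] if items[0][1] >= items[1][1] else items[1][0]
    x = median_low(v for v, _ in items)
    low  = [(v, w) for v, w in items if v < x]
    high = [(v, w) for v, w in items if v > x]
    wx   = sum(w for v, w in items if v == x)
    w_low, w_high = sum(w for _, w in low), sum(w for _, w in high)
    if w_low < 0.5 and w_high <= 0.5:
        return x
    if w_low >= 0.5:
        return weighted_lower_median(low + [(x, wx + w_high)])
    return weighted_lower_median(high + [(x, wx + w_low)])

print(weighted_lower_median([(1, 0.1), (2, 0.2), (3, 0.3), (4, 0.4)]))  # 3
```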
I am running the following unit root test (Dickey-Fuller) on a time series using the `ur.df()` function in the `urca` package. The command is:

```
summary(ur.df(d.Aus, type = "drift", 6))
```

The output is:

```
############################################### 
# Augmented Dickey-Fuller Test Unit Root Test # 
############################################### 

Test regression drift 

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)

Residuals:
      Min        1Q    Median        3Q       Max 
-0.266372 -0.036882 -0.002716  0.036644  0.230738 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)   
(Intercept)  0.001114   0.003238   0.344  0.73089   
z.lag.1     -0.010656   0.006080  -1.753  0.08031 . 
z.diff.lag1  0.071471   0.044908   1.592  0.11214   
z.diff.lag2  0.086806   0.044714   1.941  0.05279 . 
z.diff.lag3  0.029537   0.044781   0.660  0.50983   
z.diff.lag4  0.056348   0.044792   1.258  0.20899   
z.diff.lag5  0.119487   0.044949   2.658  0.00811 **
z.diff.lag6 -0.082519   0.045237  -1.824  0.06874 . 
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 

Residual standard error: 0.06636 on 491 degrees of freedom
Multiple R-squared: 0.04211, Adjusted R-squared: 0.02845 
F-statistic: 3.083 on 7 and 491 DF,  p-value: 0.003445 

Value of test-statistic is: -1.7525 1.6091 

Critical values for test statistics: 
      1pct  5pct 10pct
tau2 -3.43 -2.86 -2.57
phi1  6.43  4.59  3.78
```

1. What do the significance codes (Signif. codes) mean? I noticed that some of them were written against z.lag.1, z.diff.lag.2, z.diff.lag.3 (the "." significance code) and z.diff.lag.5 (the "\*\*" significance code).
2. The output gives me two (2) values of the test statistic: -1.7525 and 1.6091. I know that the ADF test statistic is the first one (i.e. -1.7525). What is the second one then?
3. Finally, in order to test the hypothesis of a unit root at the 5% significance level, I need to compare my ADF test statistic (i.e. -1.7525) to a critical value, which I normally get from a table. The output here seems to give me the critical values, though. However, the question is: which critical value between "tau2" and "phi1" should I use?

Thank you for your response.
I found Jeramy's answer pretty easy to follow, but constantly found myself trying to walk through the logic correctly and making mistakes. I coded up an R function that interprets each of the three types of models, and gives warnings if there are inconsistencies or inconclusive results (I don't think there ever should inconsistencies if I understand the ADF math correctly, but I thought still a good check in case the ur.df function has any defects). Please take a look. Happy to take comments/correction/improvements. <https://gist.github.com/hankroark/968fc28b767f1e43b5a33b151b771bf9>
I have some questions about how to analyze ranked data. The data look like this: 4 groups of people with HIV and 16 other groups of people living in the same village were asked to rank 12 challenges for people with HIV according to importance (e.g. physical health - social acceptance - mental health - ...). How can I know whether a certain challenge is perceived differently by people with HIV than by other people? Another question: all respondents (120) were asked to pick individually, from the list of challenges, the 5 that were most challenging to themselves. How can I know whether people with HIV choose different challenges than other people? What is the best way to present the findings? Are there any statistical tests for this? Is Kruskal-Wallis possible? I've been looking all over the internet but I'm stuck.
For your second task, you could consider ordination methods, so that you are plotting the location of individuals within a multivariate 'challenge-space'. Distinct clusters for those with and without HIV would seem to support your hypothesis.
I am trying to use the '[density](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/density.html)' function in R to do kernel density estimates. I am having some difficulty interpreting the results and comparing various datasets as it seems the area under the curve is not necessarily 1. For any [probability density function (pdf)](https://secure.wikimedia.org/wikipedia/en/wiki/Probability_density_function) $\phi(x)$, we need to have the area $\int\_{-\infty}^\infty \phi(x) dx = 1$. I am assuming that the kernel density estimate reports the pdf. I am using [integrate.xy](http://rss.acs.unt.edu/Rdoc/library/sfsmisc/html/integrate.xy.html) from [sfsmisc](http://cran.r-project.org/web/packages/sfsmisc/index.html) to estimate the area under the curve. ``` > # generate some data > xx<-rnorm(10000) > # get density > xy <- density(xx) > # plot it > plot(xy) ``` ![plot of the density](https://i.stack.imgur.com/9zT9s.png) ``` > # load the library > library(sfsmisc) > integrate.xy(xy$x,xy$y) [1] 1.000978 > # fair enough, area close to 1 > # use another bw > xy <- density(xx,bw=.001) > plot(xy) ``` ![density with bw= .001](https://i.stack.imgur.com/agW3K.jpg) ``` > integrate.xy(xy$x,xy$y) [1] 6.518703 > xy <- density(xx,bw=1) > integrate.xy(xy$x,xy$y) [1] 1.000977 > plot(xy) ``` ![density with bw = 1](https://i.stack.imgur.com/6QQ6s.jpg) ``` > xy <- density(xx,bw=1e-6) > integrate.xy(xy$x,xy$y) [1] 6507.451 > plot(xy) ``` ![density with bw=1e-6](https://i.stack.imgur.com/SZjuJ.jpg) Shouldn't the area under the curve always be 1? It seems small bandwidths are a problem, but sometimes you want to show the details etc. in the tails and small bandwidths are needed. **Update/Answer:** It seems that the answer below about the overestimation in convex regions is correct as increasing the number of integration points seems to lessen the problem (I didn't try to use more than $2^{20}$ points.) ``` > xy <- density(xx,n=2^15,bw=.001) > plot(xy) ``` ![density with higher number of points to sample at](https://i.stack.imgur.com/f4rob.png) ``` > integrate.xy(xy$x,xy$y) [1] 1.000015 > xy <- density(xx,n=2^20,bw=1e-6) > integrate.xy(xy$x,xy$y) [1] 2.812398 ``` ---
Think about the trapezoid rule that `integrate.xy()` uses. For the normal distribution, it will *underestimate* the area under the curve in the interval (-1,1), where the density is concave (and hence the linear interpolation is below the true density), and *overestimate* it elsewhere (as the linear interpolation goes on top of the true density). Since the latter region is larger (in Lebesgue measure, if you like), the trapezoid rule tends to overestimate the integral. Now, as you move to smaller bandwidths, pretty much all of your estimate is piecewise convex, with a lot of narrow spikes corresponding to the data points and valleys between them. That's where the trapezoid rule breaks down especially badly.
**Can a Spearman correlation coefficient of 0.38 *for a specific parameter* be considered to demonstrate reasonable agreement between two biological cell types, more specifically a cell line and a primary cell type?** The parameter in question is the difference between percent of mRNA with and without last exon in two subcellular compartments, so for every gene (of which there are ~40000): $\text{param} = x\_{i} - x\_{j}$, where $x\_{i}$ is the proportion of RNA with last exon in compartment $i$ $x\_{j}$ is the proportion of RNA with last exon in compartment $j$. For each cell type, this results in a table of $\text{gene} - \text{param}$ And the Spearman correlation is found between the ranked order of this parameter in the two cell types. Sample size is number of isoforms considered, in the order of 40000. Number of biological replicates per cell line - three each (standard in the field).
Correlations, such as Pearson's product moment correlation or Spearman's rank correlation, are not measures of agreement, no matter what their values are (i.e., even if $r = 1.0$). Consider a simple case using Pearson's correlation: > > [![enter image description here](https://i.stack.imgur.com/WgmR7.png)](https://i.stack.imgur.com/WgmR7.png) > > A guy takes a woman on a date to what he thinks is a nice restaurant. Afterwards, they talk about the restaurant and give it a rating on a scale of $0 - 10$ for the overall experience (ambiance, service, food). The guy rates it a $7$, his date a $1$. So the next day, he takes her to a nicer place. He rates it an $8$; she rates it a $2$. The next day he takes her to an even nicer place. He rates it a $9$; she rates it a $3$. > > > Here are the ratings: ``` guy woman 7 1 8 2 9 3 ``` The correlation is $r = 1.0$. You can decide for yourself if you think they agree on the quality of the restaurants. I suspect this relationship isn't going to last. In essence, Pearson's correlation measures agreement with respect to the ordering of the ratings and the relative spacing between those ratings. Spearman's correlation measures the agreement on the ordering *only*. But people typically think of similar ratings as being at least as important as the ordering for there to be true agreement. For continuous ratings, [Lin's concordance coefficient](https://en.wikipedia.org/wiki/Concordance_correlation_coefficient) can be used as a measure of agreement that isn't subject to these flaws. For categorical ratings [Cohen's kappa](https://en.wikipedia.org/wiki/Cohen's_kappa) or [Bangdawala's $B$](https://en.wikipedia.org/wiki/Bangdiwala's_B) can be used to assess agreement instead of the chi-squared test for similar reasons.
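As a quick numerical check of the restaurant example (my addition, assuming NumPy and SciPy are available): both correlation coefficients are a perfect 1.0, while Lin's concordance correlation coefficient is close to zero.

```
import numpy as np
from scipy.stats import pearsonr, spearmanr

guy   = np.array([7, 8, 9], dtype=float)
woman = np.array([1, 2, 3], dtype=float)

print(pearsonr(guy, woman)[0])    # 1.0
print(spearmanr(guy, woman)[0])   # 1.0

def lins_ccc(x, y):
    # Lin's concordance correlation coefficient (population covariance/variances).
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

print(lins_ccc(guy, woman))       # ~0.036 -- essentially no agreement
```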
Setting:

* Longitudinal data on an outcome $Y\_{i,t}$ for a group of individuals $i=\{1,...,N\}$ over time $t=\{1,...,T\}$.
* On this group, a sequence of RCTs ($r=\{1,...,R\}$) staggered over time is applied.
* Each RCT measures the effect of a specific treatment and treats a small percentage of the population.
* Once an individual is treated in a certain round ($r=x$), he is no longer eligible for future RCTs (rounds).

I am trying to measure the specific impact of each treatment (eq. 1), as well as the average impact of the treatments (eq. 2):

1. $Y\_{i,t,r} = \alpha\_r \, \mathrm{TREATED}\_{i,r} + \beta\_r \, (\mathrm{TREATED}\_{i,r} \times \mathrm{POST}\_{t,r})$
2. $Y\_{i,t} = \alpha \, \mathrm{TREATED}\_{i} + \beta \, (\mathrm{TREATED}\_{i} \times \mathrm{POST}\_{t})$

where $\mathrm{POST}\_{t,r}$ is an indicator for the post-treatment period.

The control group of that same round ($r=x$), on the other hand, may be "contaminated" by being treated in a future RCT. Because trials treat a small percentage of the population each round (without replacement), and there aren't that many rounds, a significant proportion of the control group remains untreated over the entire period.

What is the correct statistical procedure to adopt here? Is there a technical term for this sort of setting? Any relevant literature I should look at?

EDIT: To try to collect more inputs, I cross-posted on Stata List [here](http://www.statalist.org/forums/forum/general-stata-discussion/general/1427618-proper-statistical-procedure-for-repeated-rcts-on-the-same-group-without-replacement)
You could do a chi-square test of goodness of fit. This is explained on this page: <http://stattrek.org/chi-square-test/goodness-of-fit.aspx?Tutorial=AP>

So, let's see what your expected value is. There is always a day that is 0 days after a marketing event, but there may seldom be a day that is 5 days after a marketing event (if the marketing events often occur 3 days apart). So we might end up with an uneven number of possible days since the event. The numbers of days can be converted to percentages and applied to the data, so we get an expected number of the 102 calls (I wanted to use a number different from 100, for clarity).

[![enter image description here](https://i.stack.imgur.com/1jpiM.jpg)](https://i.stack.imgur.com/1jpiM.jpg)

Since the days data are ordinal (4 days is longer than 3 days, etc.), you could also use a Kolmogorov-Smirnov one-sample test.
This is my solution for [Leetcode 395](https://leetcode.com/problems/longest-substring-with-at-least-k-repeating-characters/), and I'm wondering how I can come up with its time complexity: * Input: string $s = s\_1,\ldots,s\_n$, integer $k$ * Go over all symbols $s\_1,\ldots,s\_n$, one by one * For each symbol $s\_i$, check whether it appears less than $k$ times in $s$ * If all symbols appeared at least $k$ times, return $n$ * Otherwise, let $i$ be the first index such that $s\_i$ appears less than $k$ times * Call the procedure recursively on $s\_1,\ldots,s\_{i-1}$ and on $s\_{i+1},\ldots,s\_n$, and output the maximum Here is a C++ implementation: ``` int longestSubstring(string s, int k) { for (int i = 0; i < s.length(); i++) { if (count(s.begin(), s.end(), s[i]) < k) { string left = s.substr(0,i); string right = s.substr(i+1,s.size()-1); return max(longestSubstring(left,k),longestSubstring(right,k)); } } return s.length(); } ``` The `count` method runs in $O(n)$. I'm new to asymptotic complexity, but from what I understand, "half n' half" recursion has time $T(n) = 2T(n/2)+f(n)$ where $f(n)$ is the time the rest of the execution takes. At this point, we'd induct, and get a general pattern. Then, given that we usually know $T(0)$ and $T(1)$, we can get an expression in $n$, which would be our time complexity. Now, my problem is that my implementation doesn't split the string into exact halves (in the same way a BST traversal splits the input in halves, for example), so I can't really write $T(n/2)$, correct? Are $T(0)$ and $T(1)$, $O(1)$ and $O(n)$ respectively? Is there, perhaps, a faster way to solve this problem? Leetcode says it runs in 0ms, but I find that dubious. It takes a longer time to execute than one of the highest upvoted Python implementations, which is supposedly $O(n)$.
No. The Padding Lemma states that there is a primitive recursive function $\sf pad$ such that, if $n$ is a code for $f$, then ${\sf pad}(n)$ is another code for $f$ which is larger than $n$. Therefore, if $f$ has a code, it has infinitely many. The intuition is: if you have a TM $M$ computing $f$, you can modify the TM so that it starts with some useless steps (e.g. move right, then left), and then behaves as $M$. The modified TM has a few more states, and under the "usual encoding" it will have a larger code than $M$.
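To see the intuition in miniature, treat Python source strings as program codes (a stand-in for Gödel numbers); the sketch below is my own illustration of the padding idea, not part of the lemma's formal statement.

```
def pad(source: str) -> str:
    # Appending a no-op yields a strictly longer program text
    # that still computes exactly the same function.
    return source + "\npass\n"

prog = "def f(x):\n    return x * x\n"
for code in (prog, pad(prog), pad(pad(prog))):
    ns = {}
    exec(code, ns)              # "run" the program text
    assert ns["f"](7) == 49     # same function every time
    print(len(code))            # the codes keep getting longer
```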
In order to calibrate a confidence level to a probability in supervised learning (say to map the confidence from an SVM or a decision tree using oversampled data) one method is to use Platt's Scaling (e.g., [Obtaining Calibrated Probabilities from Boosting](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.5153&rep=rep1&type=pdf)). Basically one uses logistic regression to map $[-\infty;\infty]$ to $[0;1]$. The dependent variable is the true label and the predictor is the confidence from the uncalibrated model. What I don't understand is the use of a target variable other than 1 or 0. The method calls for creation of a new "label": > > To avoid overfitting to the sigmoid train set, an out-of-sample model is used. If there are $N\_+$ positive examples and $N\_-$ negative examples in the train set, for each training example Platt Calibration uses target values $y\_+$ and $y\_-$ (instead of 1 and 0, respectively), where > $$ > y\_+=\frac{N\_++1}{N\_++2};\quad\quad y\_-=\frac{1}{N\_-+2} > $$ > > > What I don't understand is how this new target is useful. Isn't logistic regression simply going to treat the dependent variable as a binary label (regardless of what label is given)? **UPDATE:** I found that in SAS changing the dependent from $1/0$ to something else reverted back to the same model (using `PROC GENMOD`). Perhaps my error or perhaps SAS's lack of versatility. I was able to change the model in R. As an example: ``` data(ToothGrowth) attach(ToothGrowth) # 1/0 coding dep <- ifelse(supp == "VC", 1, 0) OneZeroModel <- glm(dep~len, family=binomial) OneZeroModel predict(OneZeroModel) # Platt coding dep2 <- ifelse(supp == "VC", 31/32, 1/32) plattCodeModel <- glm(dep2~len, family=binomial) plattCodeModel predict(plattCodeModel) compare <- cbind(predict(OneZeroModel), predict(plattCodeModel)) plot(predict(OneZeroModel), predict(plattCodeModel)) ```
I suggest checking out the [Wikipedia page on logistic regression](http://en.wikipedia.org/wiki/Logistic_regression). It states that in the case of a binary dependent variable, logistic regression maps the predictors to the probability of occurrence of the dependent variable. Without any transformation, the probability used for training the model is either 1 (if y is positive in the training set) or 0 (if y is negative). So: instead of using the absolute values 1 for the positive class and 0 for the negative class when fitting $p\_i=\frac{1}{1+\exp(A f\_i+B)}$ (where $f\_i$ is the uncalibrated output of the SVM), Platt suggests using the mentioned transformation to allow the opposite label to appear with some probability. In this way some regularization is introduced. As the size of the dataset goes to infinity, $y\_+$ tends to 1 and $y\_{-}$ tends to 0. For details, see the [original paper of Platt](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1639).
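Here is a minimal NumPy sketch of the fit (my own, on made-up SVM-style margins); Platt's original implementation uses a more careful Newton-type optimizer, but plain gradient descent on the cross-entropy with the smoothed targets shows the idea.

```
import numpy as np

def platt_scale(scores, labels, iters=20000, lr=1e-3):
    """Fit p = 1 / (1 + exp(A*f + B)) (Platt's parameterisation) by plain
    gradient descent on the cross-entropy, using the smoothed targets
    y+ = (N+ + 1)/(N+ + 2) and y- = 1/(N- + 2) instead of hard 1/0 labels."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    n_pos, n_neg = (labels == 1).sum(), (labels == 0).sum()
    t = np.where(labels == 1, (n_pos + 1.0) / (n_pos + 2.0), 1.0 / (n_neg + 2.0))
    A, B = 0.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        # derivative of the cross-entropy w.r.t. z = A*f + B is (t - p)
        A -= lr * np.dot(t - p, scores)
        B -= lr * np.sum(t - p)
    return A, B

# Toy usage: uncalibrated SVM-like margins and true labels (synthetic data).
rng = np.random.default_rng(0)
margins = np.concatenate([rng.normal(1.0, 1.0, 200), rng.normal(-1.0, 1.0, 200)])
labels = np.concatenate([np.ones(200, dtype=int), np.zeros(200, dtype=int)])
A, B = platt_scale(margins, labels)
calibrated = 1.0 / (1.0 + np.exp(A * margins + B))
print(A, B, calibrated[:3])
```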
When doing time-series research in R, I found that `arima` provides only the coefficient values and their standard errors for the fitted model. However, I also want to get the p-values of the coefficients. I did not find any function that provides the significance of the coefficients, so I wish to calculate it myself, but I don't know the degrees of freedom of the t or chi-squared distribution of the coefficients. So my question is: how can I get the p-values for the coefficients of a fitted arima model in R?
Since `arima` uses maximum likelihood for estimation, the coefficients are asymptotically normal. Hence divide the coefficients by their standard errors to get the z-statistics and then calculate the p-values. Here is an example in R, using the first example from the `arima` help page:

```
> aa <- arima(lh, order = c(1,0,0))
> aa

Call:
arima(x = lh, order = c(1, 0, 0))

Coefficients:
         ar1  intercept
      0.5739     2.4133
s.e.  0.1161     0.1466

sigma^2 estimated as 0.1975:  log likelihood = -29.38,  aic = 64.76

> (1-pnorm(abs(aa$coef)/sqrt(diag(aa$var.coef))))*2
         ar1    intercept 
1.935776e-07 0.000000e+00 
```

The last line gives the p-values.
Suppose I have 2 possible algorithms: one runs in O(m+n) and the other runs in O(mn). Suppose also that the task is performed on a connected graph with m edges and n vertices. No other information is given about the graph. How do I know which algorithm is faster? EDIT: I don't think my question is a duplicate of the question quicksort refers to since that question asks for a definition and I'm asking for a calculation. I did search for a solution before I posted but couldn't find any sufficient solution to my question.
Since big O is only an upper bound, you cannot really tell which algorithm is faster. Let us therefore assume that the running times of the two algorithms are actually $\Theta(n+m)$ and $\Theta(nm)$. Since your graphs are all connected, you have $n-1 \leq m \leq \binom{n}{2}$, and so the first algorithm runs in time $\Theta(m)$ and the second in time $\Omega(m^{1.5})$. This shows that as $m\to\infty$ (equivalently, as $n\to\infty$), the first algorithm is asymptotically faster.
I have four samples $x\_1, x\_2$ and $y\_1, y\_2$ with $n\_{x1} \neq n\_{x2} \neq n\_{y1} \neq n\_{y2} $. I calculated, using a Wilcoxon rank sum test, that $x\_1$ is significantly different to $x\_2$ and $y\_1$ significantly different to $y\_2$. However, I would like to test whether the difference $x\_1 - x\_2$ differs significantly to $y\_1 - y\_2$ but I have no idea how do to that given the unequal sample sizes. Any ideas or suggestions would be really appreciated.
You can find the least-square estimate of the parameters using nonlinear regression. Example: ``` f=function(B,x) (B[1]-B[4])/(1+(x/B[3])^B[2])+B[4] LS=function(B,y,x) sum((y-f(B,x))^2) x=runif(100,0,5) B=c(1,5,2.5,5) y=f(B,x) plot(x,y) ### Estimate should be very close to B nlm(LS,c(1,1,1,1),x=x,y=y) ```