input | target
---|---
How do you implement STS (Semantic Textual Similarity) on an unlabelled dataset? The dataset contains the columns `Unique_id`, `text1` (a paragraph of text), and `text2` (a paragraph of text).
Ex: Column representation: Unique\_id | Text1 | Text2
**Unique\_id** 0
`Text1` public show for Reynolds suspension of his coaching licence. portrait Sir Joshua Reynolds portrait of omai will get a public airing following fears it would stay hidden because of an export wrangle.
`Text2` then requested to do so by Spain's anti-violence commission. The fine was far less than the expected amount of about £22 000 or even the suspension of his coaching license.
**Unique\_id** 1
`Text1` Groening. Gervais has already begun writing the script but is keeping its subject matter a closely guarded secret. he will also write a part for himself in the episode. I've got the rough idea but this is the most intimidating project of my career.
`Text2` Philadelphia said they found insufficient evidence to support the woman s allegations regarding an alleged incident in January 2004. The woman reported the allegations to Canadian authorities last month. Cosby s lawyer Walter m Phillips jr said the comedian was pleased with the decision.
In the above problem, I have to compare the two paragraphs of text, i.e. `Text1` and `Text2`, and judge their semantic similarity: if they are semantically similar the output should be '1', otherwise '0'.
Any reference implementation link or any suggestions!
Thanks in advance! | Let's go back to basics here.
It is not possible to use Keras without a backend such as Tensorflow, because **Keras is only a high-level interface that makes it easier to read and write machine learning programs**. None of the actual calculations needed to create models are implemented in Keras, which is why you need a backend library for anything to work.
When you are creating a model in Keras, **you are actually still creating a model using Tensorflow**, Keras just makes it easier to code. |
I have some time series data and want to test for the existence of and estimate the parameters of a linear trend in a dependent variable w.r.t. time, i.e. time is my independent variable. The time points cannot be considered IID under the null of no trend. Specifically, the error terms for points sampled near each other in time are positively correlated. Error terms for samples obtained at sufficiently different times can be considered IID for all practical purposes.
I do not have a well-specified model of how the error terms are correlated for points close to each other in time. All I know from domain knowledge is that they are positively correlated to some degree or another. Other than this issue, I believe the assumptions of ordinary least squares linear regression (homoskedasticity, linearity, normally distributed error terms) are met. Modulo the correlated error term issue, OLS would solve my problem.
I am a complete novice at dealing with time series data. Is there any "standard" way to proceed in these circumstances? | To add to the existing answers, if you are using R a simple way to proceed is to allow the ARMA errors to be modelled automatically using `auto.arima()`. If `x` is your time series, then you can proceed as follows.
```
library(forecast)  # provides auto.arima()

t <- 1:length(x)
auto.arima(x, xreg = t, d = 0)
```
This will fit the model $x\_t = a + bt + e\_t$ where $e\_t\sim\text{ARMA}(p,q)$ and $p$ and $q$ are selected automatically using the AIC.
The resulting output will give the value of $b$ and its standard error. Here is an example:
```
Series: x
ARIMA(3,0,0) with non-zero mean
Call: auto.arima(x = x, xreg = t)
Coefficients:
ar1 ar2 ar3 intercept t
-0.3770 0.1454 -0.2351 563.9654 0.0376
s.e. 0.1107 0.1190 0.1145 11.4725 0.2378
sigma^2 estimated as 5541: log likelihood = -475.85
AIC = 963.7 AICc = 964.81 BIC = 978.21
```
In this case, $p=3$ and $q=0$. The first three coefficients give the autoregressive terms, $a$ is the intercept and $b$ is in the `t` column. In this (artificial) example, the slope is not significantly different from zero.
The `auto.arima` function is using MLE rather than GLS, but the two are asymptotically equivalent.
The use of a Cochrane-Orcutt procedure only works if the error is AR(1). So the above is much more general and flexible. |
I have an NP-complete decision problem. Given an instance of the problem, I would like to design an algorithm that outputs YES if the instance is feasible and NO otherwise. (Of course, if the algorithm is not exact, it will make errors.)
I cannot find any approximation algorithms for such problems. I was looking specifically for SAT and I found on the Wikipedia page about [Approximation Algorithm](https://en.wikipedia.org/wiki/Approximation_algorithm) the following: *Another limitation of the approach is that it applies only to optimization problems and not to "pure" decision problems like satisfiability, although it is often possible to ...*
Why do we not, for example, define the approximation ratio to be something proportional to the number of mistakes that the algorithm makes? How do we actually solve decision problems in a greedy, sub-optimal manner? | The reason you don't see things like approximation ratios for decision problems is that they generally do not make sense in the context of the questions one typically asks about decision problems. In an optimization setting, it makes sense because it's useful to be "close." In many environments, it doesn't make sense. It doesn't make sense to see how often you are "close" in a discrete logarithm problem. It doesn't make sense to see how often you are "close" to finding a graph isomorphism. And likewise, in most decision problems, it doesn't make sense to be "close" to the right decision.
Now, in practical implementations, there are many cases where it's helpful to know what portion of the problems can be decided "quickly" and what portion cannot. However, unlike optimization, there's no one-size-fits-all way to quantify this. You can do it statistically, as you suggest, but *only* if you know the statistical distribution of your inputs. Most of the time, people who are interested in decision problems are not so lucky to have such distributions.
As a case study, consider the halting problem. The halting problem is known to be undecidable. It's a shame, because it's a really useful problem to be able to solve if you're making a compiler. In practice, however, we find that most programs are actually very easy to analyze from a halting-problem perspective. Compilers take advantage of this to generate optimized code in these circumstances. However, a compiler must recognize that there is a possibility that a particular block of code is *not* decidable. Any program which relies on code being "likely decidable" can get in trouble.
However, the metric used by compilers to determine how well they do at solving these particular cases of the halting problem is very different from the metric used by a cryptography program to test whether a particular pair of primes is acceptably hardened against attacks. There is no one-size-fits-all solution. If you want such a metric, you will want to tailor it to fit your particular problem space and business logic. |
I have just started learning Neural Networks for deep learning from cs231. I am trying to implement Neural Network in Python. I am looking at using Tensorflow or scikit-learn. What are some pros and cons of these libraries for this application? | Sklearn doesn't have much support for Deep Neural Networks. Among the two, since you are interested in *deep learning*, pick **tensorflow**.
However, I would suggest going with [keras](https://keras.io/), which uses tensorflow as a backend, but offers an easier interface. |
In country A, during the ten-year period $2002 - 2012$ judges sentenced convicts to the death penalty 45 times. In contrast, the number of such penalties in 2013 was $19$ and in $2014$ it was $4$. An analyst claimed that the evident change in the rate of conviction is not statistically significant because of the small sample size.
I am trying to formulate a possible hypothesis that the analyst might have had in mind, given ONLY these data, and carry out a test to check whether or not the result is statistically significant. My idea for the hypothesis was the following:
\begin{align\*}
H\_0&: \mu\_{x \le 2012} = \mu\_{x > 2012} \\
H\_1&: \mu\_{x \le 2012} < \mu\_{x > 2012}
\end{align\*}
where $\mu\_{x \le 2012}$ denotes the average annual number of convictions up to $2012$ and $\mu\_{x > 2012}$ denotes the average annual number of convictions after $2012$.
Assuming that this is a reasonable hypothesis, I am thinking of computing the likelihood ratio statistic and then proceeding from there. However, that statistic requires me to compute the likelihood function under the null and divide it by the maximum of the likelihood function under the full model. How am I to approach this since I am not given anything about the distribution of the observations? Any suggestions will be greatly appreciated, especially ones regarding the correctness of the stated hypothesis. | The simplest approach that I can think of that is theoretically valid would be to assume that these convictions are generated by a Poisson process - meaning that each death sentence is a rare event, independent of other death sentences, and that the probability distribution for the time between two such sentences is given by the exponential distribution.
With that, you can simply estimate the $\lambda$'s for the three different time periods and calculate approximate confidence intervals for the estimated rates.
2002-2012 (assuming it is a 10-year period as you said, and not 11):
$\widehat{\lambda} = \frac{45} {10} = 4.5$, with an approximate interval of $\widehat{\lambda} ~ \pm ~ \sqrt{\widehat{\lambda}/10} \approx 4.5 \pm 0.67 $ (one standard error of the rate estimate)
2013:
$\widehat{\lambda} = \frac{19} {1} = 19$, with an approximate interval of $\widehat{\lambda} ~ \pm ~ \sqrt{\widehat{\lambda}} \approx 19 \pm 4.36 $ (one standard error)
2014:
$\widehat{\lambda} = \frac{4} {1} = 4$, with an approximate interval of $\widehat{\lambda} ~ \pm ~ \sqrt{\widehat{\lambda}} = 4 \pm 2 $ (one standard error)
*(Note: these intervals assume that the number of events is large enough that the distribution of $\widehat{\lambda}$ can be approximated by a normal distribution. This assumption does not hold very well for the 2014 time period.)*
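For reference, here is a minimal Python sketch (not part of the original answer) that reproduces these rate estimates and their standard errors; an approximate interval is then the estimate plus or minus a multiple of the standard error.
```
import math

# Counts and period lengths (in years) taken from the question.
periods = {"2002-2012": (45, 10), "2013": (19, 1), "2014": (4, 1)}

for name, (count, years) in periods.items():
    rate = count / years            # estimated lambda (sentences per year)
    se = math.sqrt(count) / years   # standard error of the rate estimate
    print(f"{name}: lambda_hat = {rate:.2f}, standard error = {se:.2f}")
```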
You can then do pairwise comparisons. If you wish to check whether e.g. 2013 in particular stands out, then treat 2002-2012 and 2014 as a single 11-year period with 49 sentences. Clearly, the number of sentences in 2013 stands out as exceptional. |
I have access to data containing min, max and mode. Is it possible to estimate a frequency distribution only with this data? If yes, how? | You can give only some very crude information about your distribution. Specifically, you know the min, the max and the mode, and you know that the number of data points at any value (including the min and the max) is less than or equal to the number at the mode. So, precisely what you already know.
Apart from that, your distribution can look very different. Here are three possible histograms with 101 data points, a min of 1, a max of 10 and a mode of 7:
[![histograms](https://i.stack.imgur.com/9eVlW.png)](https://i.stack.imgur.com/9eVlW.png)
R code:
```
# three samples sharing min = 1, max = 10 and mode = 7, but with very different shapes
breaks <- seq(0.5,10.5,1)
par(mfrow=c(1,3),las=1)
hist(c(rep(1:10,each=4),7),xlab="",ylab="",main="",breaks=breaks,ylim=c(0,40))
hist(c(1,10,rep(7,39)),xlab="",ylab="",main="",breaks=breaks,ylim=c(0,40))
hist(c(rep(c(1,10),each=13),rep(7,14)),xlab="",ylab="",main="",breaks=breaks,ylim=c(0,40))
``` |
Given a two-dimensional data set where each point is labeled $ \{0,1\}$, I want to implement a sparse classifier with an $L\_p$ penalty $(0 < p \leq 1)$.
I have been reading on logistic regression and regularization. Let me give you an example of what I have been working on. The concrete example is: Let $\left((x^{(i)},y^{(i)} )\right)\_{i\in \{1,\dots, m\}} $ be my data set with $y^{(i)}\in \{0,1\} $ and $x^{(i)}\in \mathbb{R}^2$. And the cost function I minimized is
$ J(\theta) = - \frac{1}{m} \cdot \sum\_{i=1}^m \large[ y^{(i)}\ \log (h\_\theta (x^{(i)})) + (1 - y^{(i)})\ \log (1 - h\_\theta(x^{(i)}))\large] + \frac{\lambda}{2m}\sum\_{j=1}^n \theta\_j^2$.
where $h\_\theta(x) = \frac{1}{1+e^{-\theta^{T}x}}$. I thought that this would be a good introduction to sparse methods.
Currently I use a neural network and was wondering whether I am heading in the right direction in understanding sparse methods.
That leaves me with the question:
What is the definition of sparse classifiers? What would be an example? | I'll provide an example model using linear regression, however the idea translates to classification in a straightforward manner.
Sparse linear regression is used when we have a model $y = X\beta + \epsilon$ where $y \in \mathbb{R}^n$, $X \in \mathbb{R}^{n \times p}$, $\beta \in \mathbb{R}^p$ and $\epsilon \in \mathbb{R}^n$ when $n \ll p$. If we expect only a small number of columns in $X$ to actually contribute to $y$ then we can impose a penalty on $\beta$ such that "non-important" columns $X\_i$ have their corresponding $\beta\_i = 0$. We can formally write this as $$\arg \min\_{\beta} \|y - X\beta \|^2\_2 + \lambda\|\beta\|\_0$$
where $\|\cdot\|\_0$ is the $\ell\_0$ norm that counts the number of non-zero entries. Unfortunately fitting this model exactly is [difficult](http://web.stanford.edu/~yyye/lpmin_v14.pdf). We can approximate this objective by using the $\ell\_1$ norm [instead](https://stats.stackexchange.com/questions/45643/why-l1-norm-for-sparse-models). That is we find $\beta$ for $$\arg \min\_{\beta} \|y - X\beta \|^2\_2 + \lambda\|\beta\|\_1.$$ This model is known as LASSO in the context of linear regression and can be fit by a variety of methods in relatively little time.
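For concreteness, here is a minimal sketch of fitting the LASSO with scikit-learn; the library choice and the simulated data are my own assumptions, not part of the answer.
```
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 200                          # n << p
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 1.0, 2.0]   # only 5 of the 200 predictors matter
y = X @ beta + 0.1 * rng.normal(size=n)

fit = Lasso(alpha=0.1).fit(X, y)        # alpha plays the role of lambda
print("non-zero coefficients:", int(np.sum(fit.coef_ != 0)), "out of", p)
```
Most of the estimated coefficients should come out exactly zero, which is the sparsity property discussed above.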
All of this hinges on $n \ll p$. If your data are of the form where $n \approx p$ or $n > p$, I'm not sure sparsity will help much, as you should have enough data to guide inference to the true $\beta$ values (provided other assumptions hold: homoskedasticity, independence, etc.). The key takeaway is that you have a large number of predictors and you suspect a small number of them to actually characterize $y$. |
As we know, the [Cholesky decomposition](http://en.wikipedia.org/wiki/Cholesky_decomposition) of $A$ is $A = LL^T$.
I tried to write a simple function to compute the lower triangular matrix $L$. I know there is a function in the GSL C library, `gsl_linalg_cholesky_decomp`, that can do it. I read its manual but I do not quite understand it. Can anyone help?
---
I was asked to provide the manual page for the function: <http://www.gnu.org/software/gsl/manual/html_node/Cholesky-Decomposition.html> | The function you link to takes an input matrix, $A$ (symmetric positive definite), and gives back a matrix with $L$ and $L^T$ in it:
![enter image description here](https://i.stack.imgur.com/4YDW3.png)
Both $L$ and $L^T$ include the diagonal (marked in white above).
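To make the extraction concrete, here is a minimal NumPy sketch (NumPy and the example matrix are my assumptions; the GSL routine itself is a C function) that mimics the combined storage and copies $L$ back out:
```
import numpy as np

# A small symmetric positive-definite matrix (made up for illustration).
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])

# Conceptually, the GSL routine returns one matrix whose lower triangle
# (including the diagonal) is L and whose upper triangle is L^T.
L = np.linalg.cholesky(A)               # NumPy gives the lower-triangular L directly
combined = np.tril(L) + np.triu(L.T, k=1)

# Copying L back out of the combined matrix, as described above:
L_extracted = np.tril(combined)
assert np.allclose(L_extracted @ L_extracted.T, A)
```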
So if you need for some reason to have $L$ by itself (for many purposes you'll probably be able to use it directly from the returned matrix), you can copy the relevant elements out. |
Before I start with my question, I want to state some notation I am using. I fix some arbitrary but fixed enumeration of Turing Machines (TMs) and denote with $\Phi\_i : \mathbb{N}\to\mathbb{N}$ the function that is computed by the $i^\text{th}$ TM in this enumeration. Furthermore $\Phi\_i(x) \downarrow$ denotes that the computation of $\Phi\_i(x)$ terminates with some result $y$, i.e., $\Phi\_i(x) = y$. Furthermore, $\Phi\_i(x) \uparrow$ denotes that the computation of $\Phi\_i(x)$ never terminates.
I know that the following function is not computable:
$$
f(x) = \begin{cases}
1 & \text{if } \Phi\_x(x)\downarrow \\
0 & \text{otherwise}
\end{cases}
$$
Now, suppose we fix some $n \in \mathbb{N}$ and define:
$$
h(x) = \begin{cases}
1 & \text{if } \Phi\_x(x)\downarrow \text{ and } x \le n\\
0 & \text{otherwise}
\end{cases}
$$
I found some lecture notes which state that $h$ is computable because it is the characteristic function of a recursive set (because it is finite) and is therefore computable. Is this claim correct?
If $h$ is actually computable I can't think of an algorithm because one simply can't decide if $\Phi\_x(x)$ diverges to return $0$ in the case that $x \le n$. | Fix a value of $n$. For $b \in \{0,1\}^n$, consider the following algorithm $A\_b$:
>
> If $x \leq n$ then output $b\_x$, otherwise output $0$.
>
>
>
Clearly one of the $2^n$ algorithms of the form $A\_b$ computes your function $h$, hence $h$ is computable. |
I'm teaching myself about reinforcement learning, and trying to understand the concept of discounted reward. So the reward is necessary to tell the system which state-action pairs are good, and which are bad. But what I don't understand is why the discounted reward is necessary. Why should it matter whether a good state is reached soon rather than later?
I do understand that this is relevant in some specific cases. For example, if you are using reinforcement learning to trade in the stock market, it is more beneficial to make profit sooner rather than later. This is because having that money now allows you to do things with that money now, which is more desirable than doing things with that money later.
But in most cases, I don't see why the discounting is useful. For example, let's say you wanted a robot to learn how to navigate around a room to reach the other side, where there are penalties if it collides with an obstacle. If there was no discount factor, then it would learn to reach the other side perfectly, without colliding with any obstacles. It may take a long time to get there, but it will get there eventually.
But if we give a discount to the reward, then the robot will be encouraged to reach the other side of the room quickly, even if it has to collide with objects along the way. This is clearly not a desirable outcome. Sure, you want the robot to get to the other side quickly, but not if this means that it has to collide with objects along the way.
So my intuition is that any form of discount factor, will actually lead to a sub-optimal solution. And the choice of the discount factor often seems arbitrary -- many methods I have seen simply set it to 0.9. This appears to be very naive to me, and seems to give an arbitrary trade-off between the optimum solution and the fastest solution, whereas in reality this trade-off is very important.
Please can somebody help me to understand all this? Thank you :) | **TL;DR:** Discount factors are associated with time horizons. Longer time horizons have much more **variance** as they include more irrelevant information, while short time horizons are **biased** towards only short-term gains.
The discount factor essentially determines how much the reinforcement learning agent cares about rewards in the distant future relative to those in the immediate future. If $\gamma = 0$, the agent will be completely myopic and only learn about actions that produce an immediate reward. If $\gamma = 1$, the agent will evaluate each of its actions based on the sum total of all of its future rewards.
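As a small numeric illustration (the reward sequence is made up, not from the original answer), here is how the discounted return of the same rewards changes with $\gamma$:
```
import numpy as np

# A small immediate reward followed by a large reward that arrives late.
rewards = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 10.0])

for gamma in (0.0, 0.5, 0.9, 1.0):
    G = np.sum(gamma ** np.arange(len(rewards)) * rewards)
    print(f"gamma = {gamma}: discounted return = {G:.3f}")
```
With $\gamma = 0$ only the immediate reward counts; with $\gamma = 1$ the late reward dominates.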
So why wouldn't you always want to make $\gamma$ as high as possible? Well, most actions don't have long-lasting repercussions. For example, suppose that on the first day of every month you decide to treat yourself to a smoothie, and you have to decide whether you'll get a blueberry smoothie or a strawberry smoothie. As a good reinforcement learner, you judge the quality of your decision by how big your subsequent rewards are. If your time horizon is very short, you'll only factor in the immediate rewards, like how tasty your smoothie is. With a longer time horizon, like a few hours, you might also factor in things like whether or not you got an upset stomach. But if your time horizon lasts for the entire month, then every single thing that makes you feel good or bad for *the entire month* will factor into your judgement on whether or not you made the right smoothie decision. You'll be factoring in lots of irrelevant information, and therefore your judgement will have a huge variance and it'll be hard to learn.
Picking a particular value of $\gamma$ is equivalent to picking a time horizon. It helps to rewrite an agent's discounted reward $G$ as
$$
G\_t = R\_{t} + \gamma R\_{t+1} + \gamma^2 R\_{t+2} + \cdots \\
= \sum\_{k=0}^{\infty} \gamma^k R\_{t+k} = \sum\_{\Delta t=0}^{\infty} e^{-\Delta t / \tau} R\_{t+\Delta t}
$$
where I identify $\gamma = e^{-1/\tau}$ and $k \rightarrow \Delta t$. The value $\tau$ explicitly shows the time horizon associated with a discount factor; $\gamma = 1$ corresponds to $\tau = \infty$, and any rewards that are much more than $\tau$ time steps in the future are exponentially suppressed. You should generally pick a discount factor such that the time horizon contains all of the relevant rewards for a particular action, but not any more. |
What is the right approach and clustering algorithm for geolocation clustering?
I'm using the following code to cluster geolocation coordinates:
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.vq import kmeans2, whiten
coordinates= np.array([
[lat, long],
[lat, long],
...
[lat, long]
])
x, y = kmeans2(whiten(coordinates), 3, iter = 20)  # x: cluster centroids, y: cluster label for each point
plt.scatter(coordinates[:,0], coordinates[:,1], c=y);
plt.show()
```
Is it right to use K-means for geolocation clustering, as it uses Euclidean distance, and not [Haversine formula](https://en.wikipedia.org/wiki/Haversine_formula) as a distance function? | I am probably very late with my answer, but if you are still dealing with geo clustering, you may find [this study](https://cybergeo.revues.org/27035) interesting. It deals with comparison of two fairly different approaches to classifying geographic data: K-means clustering and latent class growth modeling.
One of the images from the study:
[![enter image description here](https://i.stack.imgur.com/DsOxx.jpg)](https://i.stack.imgur.com/DsOxx.jpg)
The authors concluded that the end results were overall similar, and that there were some aspects where LCGM outperformed K-means. |
>
> **Possible Duplicate:**
>
> [Testing hypothesis of no group differences](https://stats.stackexchange.com/questions/3038/testing-hypothesis-of-no-group-differences)
>
>
>
Suppose I have $k$ samples from 2 independent experiments (service times by 2 methods) and their means are similar. How do I statistically show that both methods have similar service times? | You have two distributions of service times. What you want is to compare those distributions and check whether they are really different.
There are a few statistical tests that can do this for you, each with different drawbacks (e.g. sensitivity to changes in scale, location, etc.).
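If you happen to work in Python, here is a minimal SciPy sketch of the two tests suggested just below (the service-time arrays are made-up placeholders):
```
import numpy as np
from scipy.stats import ks_2samp, mannwhitneyu

rng = np.random.default_rng(0)
times_method1 = rng.exponential(scale=5.0, size=40)   # service times from method 1
times_method2 = rng.exponential(scale=5.5, size=40)   # service times from method 2

print(ks_2samp(times_method1, times_method2))      # compares the whole distributions
print(mannwhitneyu(times_method1, times_method2))  # rank-based comparison, sensitive to location shifts
```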
Have a look at the [Kolmogorov-Smirnov](http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) test or [Mann–Whitney U](http://en.wikipedia.org/wiki/Wilcoxon_rank_sum) |
I'm aware that there are many different practices for initializing the weights when training a neural network. It seems that traditionally the *standard normal distribution* is the first choice. Most articles I found argue there are better ways to initialize the weights than using the normal distribution, but they did not explain why the normal distribution would at least work.
(1) I think restricting the weights to have `mean` 0 and `std` 1 keeps the weights as small as possible, which makes it convenient for regularization. **Am I understanding it correctly**?
(2) On the other hand, **what are the theoretical/practical reasons to use the normal distribution**? Why not sample random weights from some other arbitrary distribution? Is it because the normal distribution has the [maximum entropy](https://www.dsprelated.com/freebooks/sasp/Maximum_Entropy_Property_Gaussian.html) given the mean and variance? Having the maximum entropy means it is as chaotic as possible and thus makes the fewest assumptions about the weights. Am I understanding it correctly? | I agree with @Ben above, and I thought I would provide a simple example of where a Bayesian versus a Frequentist interval would be of value in the same circumstance.
Imagine a factory with parallel assembly lines. It is costly to stop a line, and at the same time, they want to produce quality products. They are concerned about both false positives and false negatives over time. To the factory, it is an averaging process: both power and guaranteed protection against false positives matter. Confidence intervals, as well as tolerance intervals, matter to the factory. Nonetheless, machines will go out of alignment, that is $\theta\ne\Theta$, and detection gear will observe spurious events. The average outcome matters while the specific outcome is an operational detail.
On the opposite side of this is a single customer purchasing a single product or a single lot of products. They do not care about the repetition properties of the assembly line. They care about the one product that they purchased. Let us imagine the customer is NASA and they need the product to meet a specification, say $\gamma\le\Gamma.$ They do not care about the quality of the parts they did not purchase. They need a Bayesian interval of some form.
Furthermore, a single failure could kill many astronauts and cost billions of dollars. They need to know that every single part purchased meets specifications. Averaging would be deadly. For a Saturn V rocket, a one percent defect rate would have implied 10,000 defective parts during the Apollo flights. They required 0% defects on all missions.
You worry about having a confidence interval when you are working in the sample space as a factory is doing. It is creating the sample space. You worry about credible intervals when you are working in the parameter space, as a customer would be doing. If you do not care about the observations outside yours, then you are Bayesian. If you do care about the samples that were not seen, but could have been seen, then you are a Frequentist.
Are you concerned with long-run averaging or the specific event? |
Question: How does one formulate a back propagation algorithm (either batch, gradient, anything that works) for a neural net, playing a game of Tic Tac Toe?
(Java is being utilized)
Scenario: There are 9 input neurons, one for each square. A value of -1 represents X, 1 represents O, and 0 represents an empty space. These values are stored in 1x10 array (the tenth input being bias). These lead to a single hidden layer, with n nodes (n selected on boot). The weights between the input and hidden layer are stored in a 10 x n array. A typical sigmoid activation function is used on the hidden layer. The hidden layer then leads to 9 output neurons, with weights stored in a n x 9 array. A separate function is used to normalize the output, dividing every point by the largest output, and whichever neuron is storing "1", the next move is made there. I've gotten this portion to work.
The cost function is as follows: every board has an "advantage", which is equal to 10(turns till loss) - 8(turns till win) - 100(boolean invalid, i.e. placing a piece over another piece). A negative advantage is bad for the AI, positive is good. Cost, therefore, is cost = 1/2(best possible delta(advantage) - the delta(advantage) that the AI comes up with)^2. Squared in order to make the function convex, multiplied by .5 so the power rule makes it cleaner.
Here lies the problem: I need to calculate the derivative, so I know which way is downhill. How do I calculate the derivative of the advantage function if it uses if statements, loops, etc.? It really is more of an operator than a function - is there a way to re-write a cost function for tic-tac-toe which would be more conducive to machine learning? Or is there a way to work around this?
I hope I explained that correctly. P.S. This is not for a class, rather my own blatant curiosity. | As D.W. points out, please consult a textbook on neural networks, or Wikipedia. The answer to your question lies in the insight that neural networks do not inspect the cost function or the algorithm used to compute it; training is an empirical process. That is, you will have to play many games of Tic Tac Toe (or generate configurations at random, whichever), compute the cost function yourself, and then feed each configuration with its cost to your neural network, which then does backpropagation, and repeat. Alternatively, you may collect a large batch of data and feed the batch to the neural network. Either way, the gradient of the network at a particular configuration is a value computed only from available data.
That said, while most numerical algorithms have no well-defined derivatives, there are edge cases, for example the identity function $f(x) = x$, but most have no practical use. There are also algorithms that estimate the derivative of a function such as the one you supplied from a large batch of data, which a neural net can use to aid its search by improving its gradient estimate, but that will get you into some very deep waters. Deep waters, but fun waters, so consult a textbook if you want to know about that. |
Let $L$ be a regular language.
Is the language $L\_2 = \{y : \exists x,z \text{ s.t. } |x|=|z| \text{ and } xyz \in L \}$ regular?
I know it's very similar to the [question here](https://cs.stackexchange.com/questions/7588/regularity-of-middles-of-words-from-regular-language), but the catch is that it's not a simple substring of a word in a regular language, but rather an "exact middle" - we have to count the prefix and suffix length.
Therefore, I assume it's not regular, but I couldn't find a way to prove it. I also couldn't think of any way to modify the NFA of $L$ to accept $L\_2$. | Hint: Consider some DFA for $L$. For every $n \geq 0$, let $A\_n$ be the set of states $s$ such that there is *some* word of length $n$ which leads the DFA from the initial state to $s$. Let $B\_n$ be the set of states $t$ such that there is *some* word of length $n$ which leads the DFA from $t$ to an accepting state. Finally, for any two states $s,t$, let $R\_{s,t}$ be the (regular) set of words leading the DFA from $s$ to $t$. We have
$$ L\_2 = \bigcup\_{n \geq 0} \bigcup\_{\substack{s \in A\_n \\ t \in B\_n}} R\_{s,t}. $$
Since there are only finitely many possibilities for $s,t$, the union is in fact finite, and so regular. |
I measured the change in three traits (*y1*, *y2*, *y3*) over time (*x*). The three traits each reached a maximum at different times and then declined. I am interested in the phase until the max and would like to fit Gompertz curves in the following form:
$$
y = \alpha \exp(\beta(1−\exp(−\gamma x)))
$$
as the estimated parameters have biological meaning and will be used for further calculations. One approach I found in an (old) paper is a two-phase (until and after the max) fitting of Gompertz curves.
**Major problems:**
I could not determine up to which time point of the data I should use for the fitting (i.e. where to discard the decline phase):
1. Within the same group, *y1*, *y2* and *y3* reached max at different times (graph below: *y2* reached max earlier than *y1* and *y3*).
2. The decline is greater in some groups than others (graph below: the decline phase is more obvious in treatment a than in treatment b).
The graph shows the Gompertz curves I fit to all data (*y1*) and data until x = 55 (*y2* and *y3*) for treatments a and b.
[![enter image description here](https://i.stack.imgur.com/9xupt.png)](https://i.stack.imgur.com/9xupt.png)
**Attempt:**
A solution I am thinking of is to first fit a broken-line regression to find the max point, and then fit the Gompertz curve to the data up to that point. However, it is difficult to fit Gompertz curves to fewer than four time points of observations (e.g. for *y3* in treatment b, the estimated upper asymptote of the Gompertz is biased upward).
Is there any better approach to fit the Gompertz curves? Any suggestions or comments would be greatly appreciated. Thanks a lot in advance.
---
**Edit:**
Sample data are pasted.
There are 7 time points, each with ~10 observations, which also differ slightly in time on a microscopic scale. In the dataset below, 3-4 observations per time point are provided. After reading the comments, I am abandoning the idea of fitting Gompertz curves to those data (I did not expect the declines at different times).
I would like to compare 1) time to max among treatments and among traits, 2) max *y* among treatments 3) growth rate and 4) decline rate.
For 2), as they reached max *y* at different times, would it be a fair comparison if I just take the max *y* regardless of *x*? For 3) and 4), the same problem with time remains. As the $\Delta x$ are different, would it be more suitable to compare the instantaneous initial growth rate (at *x* = 0) and final decline rate (at *x* = 100) rather than compare the averages from the initial to max point (or from max to final point)?
```
x y1 y2 y3
1 3.088 7.922 0.368 2.575
2 2.994 8.061 0.368 2.636
3 2.860 7.962 0.361 2.560
4 2.741 7.915 0.353 2.499
5 21.430 8.383 0.396 2.955
6 21.285 8.127 0.391 2.811
7 21.561 8.262 0.386 2.863
8 35.046 8.690 0.385 3.056
9 35.142 8.513 0.376 2.984
10 35.256 8.386 0.377 2.998
11 53.666 8.743 0.386 3.122
12 53.767 8.664 0.359 2.997
13 53.545 8.542 0.365 2.966
14 68.315 8.490 0.347 2.744
15 68.083 8.820 0.359 2.983
16 68.215 8.205 0.347 2.756
17 86.383 8.567 0.346 2.823
18 86.275 8.709 0.346 2.860
19 86.505 8.707 0.331 2.823
20 99.480 8.465 0.328 2.705
21 99.399 8.545 0.331 2.747
22 99.553 8.523 0.325 2.657
Obviously your data have to be modeled with a function which is increasing at low values of $x$ and decreasing at high values of $x$.
On the other hand the function $$y = \alpha \exp(\beta(1−\exp(−\gamma x)))\tag 1$$ is always increasing. Thus the fitting of such a function is wrong. That way the estimated parameters will have no physical meaning.
It is strange that the experimental measurements don't agree even roughly with the proposed theoretical law. You should clarify this point. But this is not the subject to be discussed here.
If you maintain that the function $(1)$ is a correct model only for low values of $x$ and that your measurements are false at high values of $x$ then I understand why you want to eliminate the points above a maximum.
I think that isn't a good way to obtain significant results. Nevertheless, you could first choose another model without physical meaning but using a function which has a maximum (for example a quadratic function). Fitting this "toy" model will give a maximum $(x\_m\:,\:y\_m)$. Then you could fit the function $(1)$ using only the points $x\leq x\_m$.
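As an illustration of that two-step idea, here is a minimal Python sketch; SciPy, the starting values and the rounded per-time means of the *y1* column above are my own choices, not the author's.
```
import numpy as np
from scipy.optimize import curve_fit

def gompertz(x, a, b, c):
    # y = a * exp(b * (1 - exp(-c * x)))  -- equation (1)
    return a * np.exp(b * (1.0 - np.exp(-c * x)))

# Approximate per-time means of the y1 column from the question (rounded).
x = np.array([3.0, 21.4, 35.1, 53.7, 68.2, 86.4, 99.5])
y = np.array([7.96, 8.26, 8.53, 8.65, 8.51, 8.66, 8.51])

# Step 1: fit the "toy" quadratic model and locate its maximum x_m.
q = np.polyfit(x, y, 2)
x_m = -q[1] / (2.0 * q[0])              # vertex of the parabola

# Step 2: fit the Gompertz curve using only the points with x <= x_m.
keep = x <= x_m
popt, _ = curve_fit(gompertz, x[keep], y[keep], p0=[8.0, 0.1, 0.05], maxfev=10000)
print("x_m =", x_m, " (alpha, beta, gamma) =", popt)
```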
I would have liked to test this proposed method with your data. But this was not possible with only your graphs and without numerical data. |
During my college years I studied C/C++, C#.Net and ASP.Net. Now a friend of mine suggested that I also learn Python and MATLAB. He wasn't very specific about why to learn these languages; he said it would be helpful for getting a job. I've also seen lots of good resumes; mostly they list Python (scripting language) and MATLAB, plus the languages I've studied. I have to appear for interviews in 3 months, so please guide me on what I should do. | Computer science and software engineering/development/etc. are not about the languages, but about creating solutions to meet a need (solve a problem). Different platforms and languages are tools you will learn to use as you learn and get more experience. All of the languages/platforms you've mentioned in your question are heavily used in software development these days (except maybe FoxPro). You should become knowledgeable about all of them (and many others you might come across), and become very proficient in a select few.
You will find that, having mastered one or two languages, becoming productive in others will get easier. |
I've been reading the decidability and undecidability chapters in Sipser's "Intro to the Theory of Computation"; however, I could not find an explanation of the existence of a language that is both non-context-free and decidable.
The only reference to this was a simple language hierarchy diagram showing where the decidable/recognisable bounds were in relation to language types.
I'm unsure as to how I should approach this, but I've thought about proving it by diagonalisation:
* Let $M$ be the set of all decidable Turing Machines, and $L$ the set
of all languages that are context-free. (Assume finite alphabet)
* By drawing up and filling the table where each language corresponds
to a Turing Machine, I was hoping that I could find a contradiction
in some $m \in M$ where there is no corresponding language $l \in L$.
I know that this will not work as both $M$ and $L$ are countable.
Any ideas on how I should approach this? | I do not have enough reputation in this community to leave a comment on @Umang's answer, so I will try to complete his answer in mine.
One way to show that the language $L=\{a^p: \text{p is a prime number}\}$ is not context-free is to use pumping lemma for CFLs in the following way:
If $L$ was a CFL, then given an arbitrary long string in this language, say $a^p$ with $p$ being greater than the pumping length of $L$, this string would be decomposed into five parts $uvxyz$ with $|vy|\geq 1$ and so that for any $i\geq 0$ the string $uv^ixy^iz$ would belong to $L$.
Let $l:=|vy|$, then the above discussion shows that all the following strings are part of the language $L$:
$$a^p, a^{p+l}, a^{p+2l}, \ldots, a^{p+nl}, \ldots$$
That is, the set of prime numbers would contain an infinite arithmetic progression, which is impossible: taking $i = p$, the number $p+pl = p(1+l)$ is divisible by $p$ and larger than $p$, hence not prime. |
The Wikipedia entry says, without a reference, that
"There are even some context-sensitive grammars whose fixed grammar recognition problem is PSPACE-complete."
This is stronger than saying that the general CSG recognition problem is PSPACE-complete.
But is this proved? where are the references? | $\mathrm{CSL}=\mathrm{NSPACE}(O(n))$. Thus, take your favourite PSPACE-complete problem. If it is decidable in $\mathrm{NSPACE}(O(n))$ (for example, QBF is), you are done. Otherwise, introduce a polynomial amount of padding to make it so. |
Here is some example code:
```
set.seed(3)
data1 = iris[sample(c(1:dim(iris)[1]), 30), ]
data2 = iris[sample(c(1:dim(iris)[1]), 50), ]
model1 = lm(Petal.Length ~ log(Petal.Width),
data = data1)
model2 = lm(data2$Petal.Length ~ log(data2$Petal.Width))
par(mfrow = c(1, 4))
plot(data1$Petal.Width,
data1$Petal.Length)
points(sort(data1$Petal.Width),
predict(model1, newdata = data1[order(data1$Petal.Width), ]),
col = "red",
type = "l")
plot(data2$Petal.Width,
data2$Petal.Length)
points(sort(data2$Petal.Width),
predict(model1, newdata = data2[order(data2$Petal.Width), ]),
col = "red",
type = "l")
summary(model1)
summary(model2)
```
As in the code, I fitted two different curves from two different datasets using simple regression in which the explanatory variable is in logarithmic form. As in the code, the datasets have the same variables but different sizes.
I want to check whether the two curves (model1 and model2) are statistically different or not.
How can I do it in R?
---- edited ----
I know that there are many questions about comparing regression lines, but my case is different from previous ones because I want to compare models built from different data sets, while the other cases compare models built from the same data set.
Let me state more about the context of my purpose to elaborate on the term "compare."
I made a model (the curve) for an experimental site and want to apply this model to many other sites (over the country). I acknowledge that it would not be proper to apply the model to other sites that it was not built on (the resulting curve for another site may be different from the previous one). So, I made one more curve for another study site and am trying to compare the two models built from different sites.
This is why the two data sets have the same variable list. And the common points of the two data sets in the example can be ignored in terms of my purpose.
Thank you! | There is no single magic bullet to estimate treatment effects in the context of confounding (note: "selection bias" can mean [something else](http://www.annualreviews.org/doi/10.1146/annurev-soc-071913-043455)). There is also no agreement in the field about the best method, and the best method for a given problem may differ from the best method for another (and neither will be immediately apparent). My understanding is that some of the best performing methods are the "multiply robust" methods, which include targeted minimum loss-based estimation (TMLE) and Bayesian additive regression trees (BART) with a BART propensity score. I describe these methods with references in [this post](https://stats.stackexchange.com/questions/415571/exact-matching-multiple-regression-on-high-dimensional-treatment-control-study/415598#415598).
These methods are multiply robust in that there are numerous forms of misspecification that they are robust to (i.e., they will give you an unbiased or low-error estimate even if you get some things wrong about the relationships among variables). The more standard doubly robust methods are those that give you two chances to correctly specify a model in order to arrive at an unbiased estimate of the treatment effect. Augmented inverse probability weighting (AIPW) with parametric outcome and propensity score models is one such example; if either the outcome model or propensity score model is correct, the effect estimate is unbiased. Multiply robust methods are robust to these misspecifications but also to misspecifications of the functional form of the relationship between the covariates and the treatment or outcome. They gain this property through flexible nonparameteric modeling of these relationships. Such methods are highly preferred because they require fewer untestable assumptions to get the right answer, in contrast to propensity score matching or regression, which require strong assumptions about functional form.
I would check out the best performers of the annual Atlantic Causal Inference Conference competition, as these represent the cutting edge of causal inference methods and are demonstrated to perform well in a variety of conditions. TMLE and BART were two of the best performers, and are both accessible and easy to use.
I'm not going to write off the other methods you mention, but they do require many assumptions that cannot easily be assessed or they have been demonstrated to perform poorly in a number of contexts. They are still the standards in the health sciences, but that is slowly changing as the advanced methods become better studied and more accessible. |
In my academic career, I've read quite a few academic papers on various computer science topics. Many of which involve an implementation and some assessment of that implementation, yet I have found that very few of them actually publish the code they used.
To me, the benefits of including the actual implementation would be significant, namely:
* Extension of trust or reproducability (just test it yourself!)
* Clarification of ambiguities (particularly for papers written by non-native speakers)
* Reuse of code for applications
So why is it that so few papers actually include any code?
I suppose that it might be the intention of the organization behind the paper to utilize the implementation in their own applications, and thus would not want to release it, but if that's the case, why even write the paper? | You believe that code should be published, but you ask why papers do not include code. These are two different things.
Most of the time, there is simply not enough room to publish a significant amount of code. In my research field (image processing) pseudocode or architecture information is often far more valuable and I have never found myself stuck due to the lack of code in a paper. It's often left as an exercise to the reader who grasped the article.
Yet there is a lot of code available to illustrate papers. Authors usually have a webpage and even if the reviewer doesn't get a chance to try and check the code itself, natural selection appears to work pretty well and authors who do not publish code are a lot less cited. |
This question is concerning a similar problem as mentioned in this [question](https://stats.stackexchange.com/questions/453396/finding-the-decision-boundary-between-two-gaussians). The only difference is that in my case the variances are **unequal**.
To recap, consider a two class scenario. At the decision boundary, the posterior probability of classifying a data point into two classes will be equal i.e. $p(y=1|x) = p(y=2|x)$
Posterior definition
$p(y=1|x) = \frac{p(x|y=1) \* P(y=1)}{p(x)}$
$p(y=2|x) = \frac{p(x|y=2) \* P(y=2)}{p(x)}$
where likelihoods are Gaussian i.e.
$p(x|y=1) = \mathcal{N}(x|\mu\_1, \sigma\_1)$
$p(x|y=2) = \mathcal{N}(x|\mu\_2, \sigma\_2)$
So at the decision boundary, $p(y=1|x^\*) = p(y=2|x^\*)$ where $x^\*$ is the threshold
$
\begin{align}
&p(y=1|x^\*) = p(y=2|x^\*) \\
&\Longrightarrow\frac{1}{\sqrt{2\pi\sigma\_1^2}}\exp(-\frac{(x^\* - \mu\_1)^2}{2\sigma\_1^2}) \* P(y=1) = \frac{1}{\sqrt{2\pi\sigma\_2^2}}\exp(-\frac{(x^\* - \mu\_2)^2}{2\sigma\_2^2})\* P(y=2)\\
\end{align}
$
Taking log on both sides,
$
\begin{align}
\Rightarrow & \small-\frac{(x^\* - \mu\_1)^2}{2\sigma\_1^2} -\log\sqrt{2\pi}\sigma\_1 + \log P(y=1)= -\frac{(x^\* - \mu\_2)^2}{2\sigma\_2^2} -\log\sqrt{2\pi}\sigma\_2 + \log P(y=2)\\
\end{align}
$
To get the threshold, we would solve for $x^\*$ in the above equation. But since the variances are unequal this will remain a quadratic equation and hence it is possible to get complex values for $x^\*$.
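For illustration, here is a small numeric sketch (the parameter values are made up) of expanding the log-equation above into $ax^2+bx+c=0$ and solving for $x^\*$:
```
import numpy as np

# Made-up class-conditional Gaussian parameters and priors.
mu1, s1, P1 = 0.0, 1.0, 0.5
mu2, s2, P2 = 3.0, 2.0, 0.5

# Coefficients of a*x^2 + b*x + c = 0 obtained by collecting terms in the log-equation.
a = 1.0 / (2 * s2**2) - 1.0 / (2 * s1**2)
b = mu1 / s1**2 - mu2 / s2**2
c = mu2**2 / (2 * s2**2) - mu1**2 / (2 * s1**2) + np.log((s2 * P1) / (s1 * P2))

roots = np.roots([a, b, c])
print(roots)   # two real crossings here; a negative discriminant (complex roots)
               # would mean the two weighted densities never intersect
```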
If this is the case, then what does it mean to have a complex threshold?
#### Further context:
I'm using a 2 component Gaussian mixture model and planning to find the threshold to create a mask as specified in the paper snippet below. As such, I was expecting the threshold to be real in order to create the mask. Hence my confusion.
[![enter image description here](https://i.stack.imgur.com/dDGb5m.jpg)](https://i.stack.imgur.com/dDGb5m.jpg) | In my opinion you cannot remain vague about "outliers" when asking such questions. The answer to your question will most likely depend on what you mean by outlier and what procedure will be used to deal with outliers. A few imaginary scenarios:
1. You have photographs of animals and some of them are damaged by technical errors. In this case you would simply discard them from the entire dataset as they would equally be discarded in, as you put it, real world scenario.
2. You have gene expression data and some genes have abnormally high expression levels. You decide to deal with this by capping the expression at some arbitrary threshold $c$. Since this is a within-sample procedure - meaning the results will be the same regardless of whether you process each sample one by one or all of them together - you can again perform this before splitting into training and testing.
3. You have similar gene expression data as before with some abnormally high values, but you decide to do a cross-validation to get an optimal threshold parameter $c$. Now you actually would have to do such an outlier "normalization" step not only separately for the testing and training data, but separately for each cross-validation fold.
4. You have customer data from an insurance company where samples can have missing features. You decide to impute those features using average values from the samples of the same class. Here you would have to perform this correction after splitting into training and testing. And again - if you do cross validation - separately in each cross-validation fold.
In summary, your general observation about checking whether this procedure would transfer to the "real world" setting is on point. Or alternatively - you could get intuition by pondering whether a certain procedure can be performed on a single sample (such procedures are called "in-sample" or "within-sample" procedures). As an example you cannot subtract a feature-wise mean from a single sample because you will get all 0s.
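To make the "learn on the training data only" rule described next concrete, here is a minimal scikit-learn sketch (the library and the random data are my assumptions, not from the answer) of centering and scaling inside each cross-validation fold:
```
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold

X = np.random.default_rng(0).normal(size=(100, 5))    # made-up feature matrix

for train_idx, test_idx in KFold(n_splits=5).split(X):
    scaler = StandardScaler().fit(X[train_idx])        # estimate means/scales on the training fold only
    X_train = scaler.transform(X[train_idx])
    X_test = scaler.transform(X[test_idx])             # apply the training-fold estimates to the test fold
    # ... fit the model on X_train and evaluate it on X_test
```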
When dealing with an "out-sample" (between-sample) procedure you have to make sure that any estimation (a.k.a. "learning") is always done using only the data that is being used for estimation ("training data"). Then, once you get a value in this training data you have to use the obtained values on the testing data. And yes - simple things like centering the data by subtracting a feature-wise mean is also "learning". So you get the mean in the training step and subtract this training-data-obtained mean in the testing stage. |
I understand how differential calculus is useful for basic Maximum Likelihood estimation techniques. However, my question is: what broad types of statistics require an understanding of integral calculus? | Continuous distributions have cumulative distribution functions that involve integrals. In general
$F\_X(x) = \int\_{-\infty}^x f\_X(t)\,dt$
where $f\_X$ is the PDF |
Can we study programming languages in the context of linguistics? Do programming languages evolve naturally in similar ways to natural languages?
Although full rationality, and mathematical consistency is essential to programming languages, there still is the need (especially modern languages) to make them readable and comfortable to humans.
Are programming languages evolving to become more linguistic and thus more natural? For example machine code, punch cards and assembly languages have given way to more readable languages like Ruby and Python etc.
When I say computer languages are becoming more natural, I don't mean they contain more 'words we have in English'; I mean they seem to be becoming more like a natural language in terms of their complexity of grammar and ability to express meaning (for example, being able to eloquently describe a query against a database in both rational and human-understandable ways).
What do you all think? Are programming languages becoming more like natural languages, and thus becoming applicable to the laws of Linguistics?
Or perhaps languages live on a spectrum, where on one side you have the extreme rational languages and the other the more creative. Maybe, programming and natural languages are identical and both just lie on this language spectrum (their only difference, perhaps being the 'thing' they are trying to give their meaning to).
Is there a connection between the (Babel Tower effect) separation of human languages and of computer langages? Do they become more diverse for the same reasons (i.e. to solve different problems within ever-evolving computer-systems/culture-systems etc.)? | Computer languages tend to do well with terseness and precision, somewhat like mathematical notation, which has shown no particular inclination to evolve towards natural language (that I'm aware of) over the past few thousand years.
I also doubt that if you communicated with your infant exclusively in Haskell for the first few years of his life he would develop natural language fluency. So, I think there is pretty sharp contrast between natural and computer languages.
Perhaps the wider spread of language construction techniques has improved the "naturalness" slightly over time, I suppose, since programmers "vote with their feet" by using languages that seem easier to them, and the number of people capable of creating languages has gone up with more practitioners and better tools, but this is a small effect around the edges and doesn't represent a fundamental transformation of programming languages into human ones. |
I am currently trying to understand the BPTT for Long Short Term Memory (LSTM) in TensorFlow. I get that the parameter `num_steps` is used for the range that the RNN is rolled out and the Error backpropagated. I got a general question of how this works.
For reference, a repetition of the formulas I'm referring to:
[Formulas LSTM](https://i.stack.imgur.com/3zKXt.png) (<https://arxiv.org/abs/1506.00019>)
**Question:**
What paths are backpropagated that many steps? The constant error carousel is created by formula 5, and the derivative for backpropagation ($s(t)\rightarrow s(t-1)$) is $1$ for all timesteps. This is why LSTMs capture long-range dependencies. I get confused by the dependencies of $g(t)$, $i(t)$, $f(t)$ and $o(t)$ on $h(t-1)$. In words: the current gates do not just depend on the input, but also on the last hidden state.
**Doesn't this dependency lead to the exploding/vanishing gradients problem again?**
If I backpropagate along these connections I get gradients that are not one. Peephole connections essentially lead to the same problem.
Thanks for your help! | >
> Doesn't this dependency lead to the exploding/vanishing gradients problem again?
>
>
>
***Absolutely***, and you had better have vanishing gradients, otherwise you have a training problem.
Vanishing gradient in this case is not a bad thing, it is a good thing (unlike in feedforward). Let $C(t)$ be the cost function evaluated at time $t$ and $ W(t)$ be some weight of the network at time $t$. What vanishing gradient in this case means is that $ dC(t)/dW(t-u)$ becomes smaller and smaller as u becomes bigger and bigger. That is good because $ dC(t)/dW = \sum\_{u=0}^{num\\_steps} dC(t)/dW(t-u) $, so if the gradients didn't vanish in time, then the gradients would explode. So $W$ gets a proper non-vanishing gradient even if the gradients in time vanish because the gradient for $W$ is the sum at all times of the gradient for $W$.
In LSTMs the gradients are sure to vanish in time because the activation functions are sigmoids and tanh's so their derivatives are less than or equal to one, so as they get multiplied they slowly become smaller.
This compares to what is normally called the vanishing gradient problem which occurs when gradients vanish while passing from top layers to bottom layers, because that means that $ dC/dW $ for $W $ of the lower layer is vanishing and so the lower layers don't get trained, only the upper layers get trained.
Also, as mentioned in the comments, the above applies to any RNN, not only LSTMs. What sets LSTMs apart from vanilla RNNs with regard to this question is the gating functions, which allow the LSTM to control what it remembers and what it forgets and how much of the new input it takes in. While the above is true in practice for LSTMs (and is true on average also in theory), in theory one could have a time step $t$ where the output has ignored the last 10 inputs and only depends on the input 11 timesteps back ($t-11$), in which case the gradient for the weights 11 timesteps ago will not have decayed. Of course that means that at the next time step ($t+1$) the gradients for 11 steps ago ($t+1 -11 = t-10$) will be zero because the input was totally disregarded at $t-10$. So on average it averages out and you still have the same situation for LSTMs. |
It says on [Wikipedia](https://en.wikipedia.org/wiki/Frequentist_probability#Alternative_views) that:
>
> the mathematics [of probability] is largely independent of any interpretation of probability.
>
>
>
**Question:** Then if we want to be mathematically correct, shouldn't we disallow *any* interpretation of probability? I.e., are both Bayesian and frequentism mathematically incorrect?
I don't like philosophy, but I do like math, and I want to work exclusively within the framework of Kolmogorov's axioms. If this is my goal, should it follow from what it says on Wikipedia that I should reject *both* Bayesianism and frequentism? If the concepts are purely philosophical and not at all mathematical, then why do they appear in statistics in the first place?
**Background/Context:**
[This blog post](https://web.archive.org/web/20210920223903/https://simplystatistics.org/2014/10/13/as-an-applied-statistician-i-find-the-frequentists-versus-bayesians-debate-completely-inconsequential/) doesn't quite say the same thing, but it does argue that attempting to classify techniques as "Bayesian" or "frequentist" is counter-productive from a pragmatic perspective.
If the quote from Wikipedia is true, then it seems like from a philosophical perspective attempting to classify statistical methods is also counter-productive -- if a method is mathematically correct, then it is valid to use the method when the assumptions of the underlying mathematics hold, otherwise, if it is not mathematically correct or if the assumptions do not hold, then it is invalid to use it.
On the other hand, a lot of people seem to identify "Bayesian inference" with probability theory (i.e. Kolmogorov's axioms), although I'm not quite sure why. Some examples are Jaynes's treatise on Bayesian inference called "Probability", as well as James Stone's book "Bayes' Rule". So if I took these claims at face value, that means I should prefer Bayesianism.
However, Casella and Berger's book seems like it is frequentist because it discusses maximum likelihood estimators but ignores maximum a posteriori estimators, but it also seems like everything therein is mathematically correct.
So then wouldn't it follow that the only mathematically correct version of statistics is that which refuses to be anything but entirely agnostic with respect to Bayesianism and frequentism? If methods with both classifications are mathematically correct, then isn't it improper practice to prefer some over the others, because that would be prioritizing vague, ill-defined philosophy over precise, well-defined mathematics?
**Summary:** In short, I don't understand what the mathematical basis is for the Bayesian versus frequentist debate, and if there is no mathematical basis for the debate (which is what Wikipedia claims), I don't understand why it is tolerated at all in academic discourse. | >
> I don't like philosophy, but I do like math, and I want to work
> exclusively within the framework of Kolmogorov's axioms.
>
>
>
How exactly would you apply [Kolmogorov's axioms](https://en.wikipedia.org/wiki/Probability_axioms) alone, without any interpretation? How *would* you interpret probability? What would you say to someone who asked you *"What does your estimate of probability $0.5$ mean?"* Would you say that your result is a number $0.5$, which is correct since it follows the axioms? Without any interpretation you couldn't say that this suggests how often we would expect to see the outcome if we repeated our experiment. Nor could you say that this number tells you how certain you are about the chance of an event happening. Nor could you answer that this tells you how likely you believe the event to be. How would you interpret expected value - as some numbers multiplied by some other numbers and summed together, which is valid since it follows the axioms and a few other theorems?
If you want to apply the mathematics to the real world, then you need to interpret it. The numbers alone without interpretations are... numbers. People do not calculate expected values to estimate expected values, but to learn something about reality.
Moreover, probability is abstract, while we apply statistics (and probability per se) to real-world happenings. Take the most basic example: a fair coin. In the frequentist interpretation, if you threw such a coin a large number of times, you would expect the same number of heads and tails. However, in a real-life experiment this would almost never happen exactly. So a probability of $0.5$ really has nothing to do with any particular coin thrown any particular number of times.
>
> Probability does not exist
>
>
>
-- Bruno de Finetti |
I've never liked how people typically analyze data from Likert scales as if error were continuous & Gaussian when there are reasonable expectations that these assumptions are violated at least at the extremes of the scales. What do you think of the following alternative:
If the response takes value $k$ on an $n$-point scale, expand that data to $n$ trials, $k$ of which have the value 1 and $n-k$ of which have the value 0. Thus, we're treating response on a Likert scale as if it is the overt aggregate of a covert series of binomial trials (in fact, from a cognitive science perspective, this is actually an appealing model for the mechanisms involved in such decision making scenarios). With the expanded data, you can now use a mixed effects model specifying respondent as a random effect (also question as a random effect if you have multiple questions) and using the binomial link function to specify the error distribution.
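For concreteness, the expansion step I have in mind might look like this (a minimal sketch in Python/pandas with hypothetical column names, not tied to any particular package):

```python
import pandas as pd

def expand_likert(df, n_points):
    # a response of k on an n-point scale becomes k ones and (n_points - k) zeros
    rows = []
    for _, r in df.iterrows():
        k = int(r["response"])
        for outcome in [1] * k + [0] * (n_points - k):
            rows.append({"respondent": r["respondent"],
                         "question": r["question"],
                         "y": outcome})
    return pd.DataFrame(rows)

likert = pd.DataFrame({"respondent": [1, 1, 2],
                       "question": ["q1", "q2", "q1"],
                       "response": [5, 2, 3]})
trials = expand_likert(likert, n_points=7)  # one 0/1 row per covert "trial"
```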
Can anyone see any assumption violations or other detrimental aspects of this approach? | I don't know of any articles related to your question in the psychometric literature. It seems to me that ordered logistic models allowing for random effect components can handle this situation pretty well.
I agree with @Srikant and think that a proportional odds model or an ordered probit model (depending on the link function you choose) might better reflect the intrinsic coding of Likert items, and their typical use as rating scales in opinion/attitude surveys or questionnaires.
Other alternatives are: (1) use of adjacent instead of proportional or cumulative categories (where there is a connection with log-linear models); (2) use of item-response models like the partial-credit model or the rating-scale model (as was mentioned in my response on [Likert scales analysis](https://stats.stackexchange.com/questions/2374/likert-scales-analysis/2375#2375)). The latter case is comparable to a mixed-effects approach, with subjects treated as random effects, and is readily available in the SAS system (e.g., [Fitting mixed-effects models for repeated ordinal outcomes with the NLMIXED procedure](http://brm.psychonomic-journals.org/content/34/2/151.full.pdf)) or R (see [vol. 20](http://www.jstatsoft.org/v20) of the *Journal of Statistical Software*). You might also be interested in the discussion provided by John Linacre about [Optimizing Rating Scale Category Effectiveness](http://www.winsteps.com/a/linacre-optimizing-category.pdf).
The following papers may also be useful:
1. Wu, C-H (2007). [An Empirical Study on the Transformation of Likert-scale Data to Numerical Scores](http://www.m-hikari.com/ams/ams-password-2007/ams-password57-60-2007/wuchienhoAMS57-60-2007.pdf). *Applied Mathematical Sciences*, **1(58)**: 2851-2862.
2. Rost, J and Luo, G (1997). [An Application of a Rasch-Based Unfolding Model to a Questionnaire on Adolescent Centrism](http://ipn.uni-kiel.de/aktuell/buecher/rostbuch/c26.pdf). In Rost, J and Langeheine, R (Eds.), *Applications of latent trait and latent class models in the social sciences*, New York: Waxmann.
3. Lubke, G and Muthen, B (2004). [Factor-analyzing Likert-scale data under the assumption of multivariate normality complicates a meaningful comparison of observed groups or latent classes](http://gseis.ucla.edu/faculty/muthen/Likart.pdf). *Structural Equation Modeling*, **11**: 514-534.
4. Nering, ML and Ostini, R (2010). *Handbook of Polytomous Item Response Theory Models*. Routledge Academic
5. Bender R and Grouven U (1998). Using binary logistic regression models for ordinal data with non-proportional odds. *Journal of Clinical Epidemiology*, **51(10)**: 809-816. (Cannot find the pdf but this one is available, [Ordinal logistic regression in medical research](http://www.rbsd.de/PDF/olr_mr.pdf)) |
I am looking for the English name of the following algorithm:
We are given an array `a` of numbers and we need to be able to efficiently retrieve the sum of a contiguous interval `[f,t]` of that array. In order to do that we precompute an array `sums` (of size `size(a) + 1`) that stores the sums of the prefixes of the initial array. More formally, `sums[i] = a[0] + a[1] + ... + a[i-1]`. This array can be constructed in linear time, and now in order to compute the sum of the numbers in the interval `[f,t]`, we simply compute `sums[t+1] - sums[f]`.
The direct translation of the name of this algorithm (or, more precisely, data structure) that I've seen used in Bulgaria is `prefix array`, but in my experience direct translations often turn out to be wrong when it comes to algorithms and data structures.
What is this algorithm (or data structure) called in English? | I think the array `sums` is the result of the prefix sum computation of the original array ([link](http://en.wikipedia.org/wiki/Prefix_sum)).
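For reference, a minimal sketch of the structure in Python (using 0-based indexing throughout, so the sum of `a[f..t]` inclusive is `sums[t+1] - sums[f]`):

```python
from itertools import accumulate

def build_prefix_sums(a):
    # sums[i] = a[0] + a[1] + ... + a[i-1], with sums[0] = 0
    return [0] + list(accumulate(a))

def range_sum(sums, f, t):
    # sum of a[f..t] inclusive, in O(1) after the O(n) precomputation
    return sums[t + 1] - sums[f]

a = [3, 1, 4, 1, 5, 9, 2, 6]
sums = build_prefix_sums(a)
print(range_sum(sums, 2, 5))  # 4 + 1 + 5 + 9 = 19
```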
Let $\mathcal A$ be an arbitrary language over $\Sigma^\*$
**Proof.**
To prove, $\mathcal A^{\*\*} = \mathcal A^\* $
$\mathcal A^{\*\*} = \left( \mathcal A^0 \cup \mathcal A^1 \cup {...} \cup \mathcal A^n \right)^\*$ by definition of **Kleene Star**
My idea is that the Kleene star operation distributes over the union of languages, but then I don't know what to do next.
I need some directions. | Since $L \subseteq L^\*$ for all $L$, we have $\mathcal{A}^\* \subseteq \mathcal{A}^{\*\*}$. In the other direction, suppose that $w \in \mathcal{A}^{\*\*}$. Then there exists an integer $n \geq 0$ and words $x\_1,\ldots,x\_n \in \mathcal{A}^\*$ such that $w = x\_1 x\_2 \ldots x\_n$. Since $x\_i \in \mathcal{A}^\*$, there exists an integer $m\_i$ such that $x\_i \in \mathcal{A}^{m\_i}$. Thus $w \in \mathcal{A}^{m\_1 + \cdots + m\_n} \subseteq \mathcal{A}^\*$, and it follows that $\mathcal{A}^{\*\*} \subseteq \mathcal{A}^\*$. |
I'm doing an assignment where the problem is to find the combination with the least number of elements from an array of integers, given an integer sum. I have solved this using a greedy algorithm which doesn't find the optimal solution; however, I'm having problems finding the optimal solution using dynamic programming.
The greedy algorithm I've written is:
```
function min_comb(array, value)
    min = 0
    for i in 1:length(array)
        if array[i] <= value
            min += div(value, array[i])   # integer division instead of floor()
            value = value % array[i]
        end
    end
    return min
end
```
which works fine for Example 1 below, but of course not for Example 2.
Example 1: If given an array $A=[1000,500,100,20,5,1]$ and a sum $S=1226$, the least number of combinations would be $N=6$ ($1000+100+100+20+5+1$).
Example 2: If given an array $A=[4,3,1]$ and a sum $S=6$, the least number of combinations would be $N=2$ ($3+3$).
How should I go about solving this problem? | Let $f(s, i)$ be the minimum number of elements (only the first $i$ elements of the array are considered) required to sum up to $s$, then we have
$$f(s,i)=\min\_{0\le j\le s/A[i]}\left\{j+f(s-jA[i], i-1)\right\}.$$
You can use this formula to compute $f(s,i)$ for all $s$ and $i$. With knowing $f$, you can figure out the optimal combination. This is left as an exercise for you. |
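For illustration, a bottom-up sketch in Python (names mine) that computes the same optimum as the recurrence above, using the equivalent one-dimensional form $f(s) = 1 + \min\_{c \le s} f(s - c)$ over the available values $c$; reconstructing the actual combination from the table is still left to you:

```python
import math

def min_comb(values, total):
    # f[s] = minimum number of elements summing to s (each value usable any number of times)
    f = [0] + [math.inf] * total
    for s in range(1, total + 1):
        for c in values:
            if c <= s and f[s - c] + 1 < f[s]:
                f[s] = f[s - c] + 1
    return f[total]  # math.inf means no combination exists

print(min_comb([1000, 500, 100, 20, 5, 1], 1226))  # 6
print(min_comb([4, 3, 1], 6))                      # 2
```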
I want to understand what a Turing machine that accepts only words of length greater than 100 would look like.
My idea: it will copy the word and move to the right 100 times.
If none of those cells was empty, it will accept.
Furthermore, if this is correct, I can also conclude that the language is decidable.
If there is no problem with my assertions so far, what would a Turing machine that prints the number of letters in a word look like? Is that possible as well? | It is just an LL(1) parser implemented with recursive descent.
Starts with:
```
AdditionExpression ::=
MultiplicationExpression
| AdditionExpression '+' MultiplicationExpression
| AdditionExpression '-' MultiplicationExpression
```
apply [left-recursion removal](https://cs.stackexchange.com/a/2720/584) to get an LL(1) grammar:
```
AdditionExpression ::=
MultiplicationExpression AdditionExpressionTail
AdditionExpressionTail ::=
| '+' MultiplicationExpression AdditionExpressionTail
| '-' MultiplicationExpression AdditionExpressionTail
```
write the corresponding functions:
```
function parse_AdditionExpression() {
parse_MultiplicationExpression()
parse_AdditionExpressionTail()
}
function parse_AdditionExpressionTail() {
if (has_token()) {
get_token()
if (current_token == PLUS) {
parse_MultiplicationExpression()
parse_AdditionExpressionTail()
} else if (current_token == MINUS) {
parse_MultiplicationExpression()
parse_AdditionExpressionTail()
} else {
unget_token()
}
}
}
```
remove tail recursion:
```
function parse_AdditionExpressionTail() {
while (has_token()) {
get_token()
if (current_token == PLUS)
parse_MultiplicationExpression()
else if (current_token == MINUS)
parse_MultiplicationExpression()
else {
unget_token()
return
}
}
}
```
inline:
```
function parse_AdditionExpression() {
parse_MultiplicationExpression()
while (has_token()) {
get_token()
if (current_token == PLUS)
parse_MultiplicationExpression()
else if (current_token == MINUS)
parse_MultiplicationExpression()
else {
unget_token()
return
}
}
}
```
and you have just to add the semantic processing to get your function. |
Can a valid Huffman tree be generated if the frequency of words is the same for all of them?
Example :
```
Value | Frequency
--------------------
Google | 2
Yahoo | 2
Microsoft | 2
Amazon | 2
``` | I'd like to expand on one point in Richerby's answer:
>
> When an input is given to the machine, it is either accepted or not.
>
>
>
The reason for this is that the Turing machine is deterministic: given the same input and starting state, it will always do the same thing every time you run it (either terminate in the same accept state or in the same reject state, or loop forever).
Additionally, we can easily prove that every Turing machine recognizes exactly one language:
Suppose, by contradiction, that a Turing machine M recognizes two distinct languages L1 and L2. Since L1 and L2 are distinct, there must exist a string S that is in L1 but not in L2 (without loss of generality - it could be the other way around but the proof would proceed in the same way from here with L1 and L2 exchanged). Now run M on S. If it accepts, then a contradiction is reached because then S would be in L2. If it doesn't accept (rejects or loops), then a contradiction is reached because S would not be in L1. |
I'm hoping that someone can explain, in layman's terms, what a characteristic function is and how it is used in practice. I've read that it is the Fourier transform of the pdf, so I guess I know *what* it is, but I still don't understand its purpose. If someone could provide an intuitive description of its purpose and perhaps an example of how it is typically used, that would be fantastic!
Just one last note: I have seen the [Wikipedia page](http://en.wikipedia.org/wiki/Characteristic_function_%28probability_theory%29), but am apparently too dense to understand what is going on. What I'm looking for is an explanation that someone not immersed in the wonders of probability theory, say a computer scientist, could understand. | @charles.y.zheng and @cardinal gave very good answers, I will add my two cents. Yes the characteristic function might look like unnecessary complication, but it is a powerful tool which can get you results. If you are trying to prove something with cumulative distribution function it is always advisable to check whether it is not possible to get the result with characteristic function. This sometimes gives very short proofs.
Although at first the characteristic function looks like an unintuitive way of working with probability distributions, there are some powerful results directly related to it, which imply that you cannot discard this concept as a mere mathematical amusement. For example, my favorite result in probability theory is that any [infinitely divisible distribution](http://en.wikipedia.org/wiki/Infinite_divisibility_%28probability%29) has the unique [Lévy–Khintchine representation](http://en.wikipedia.org/wiki/Levy-Khintchine_representation). Combined with the fact that infinitely divisible distributions are the only possible distributions for limits of sums of independent random variables (excluding bizarre cases), this is a deep result from which the central limit theorem can be derived.
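To make one standard use concrete: for independent variables, the characteristic function of the sum is the product of the characteristic functions, and you can check this numerically with the empirical characteristic function $\hat\varphi(t) = \frac{1}{n}\sum\_j e^{itx\_j}$. A small, purely illustrative sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.exponential(scale=1.0, size=n)
y = rng.normal(loc=0.0, scale=2.0, size=n)  # independent of x

def ecf(sample, t):
    # empirical characteristic function evaluated at t
    return np.mean(np.exp(1j * t * sample))

for t in [0.3, 1.0, 2.5]:
    lhs = ecf(x + y, t)          # CF of the sum
    rhs = ecf(x, t) * ecf(y, t)  # product of the individual CFs
    print(t, abs(lhs - rhs))     # small, up to Monte Carlo error
```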
We often hear of project management and design patterns in computer science, but less frequently in statistical analysis. However, it seems that a decisive step toward designing an effective and durable statistical project is to keep things organized.
I often advocate the use of R and a consistent organization of files in separate folders (raw data file, transformed data file, R scripts, figures, notes, etc.). The main reason for this approach is that it may be easier to run your analysis later (when you forgot how you happened to produce a given plot, for instance).
What are the **best practices for statistical project management**, or the recommendations you would like to give from your own experience? Of course, this applies to any statistical software. (*one answer per post, please*) | I am compiling a quick series of guidelines I found on [SO](http://www.stackoverflow.com) (as suggested by @Shane), [Biostar](http://biostar.stackexchange.com/) (hereafter, BS), and this SE. I tried my best to acknowledge ownership for each item, and to select first or highly upvoted answer. I also added things of my own, and flagged items that are specific to the [R] environment.
**Data management**
* Create a project structure for keeping all things at the right place (data, code, figures, etc., [giovanni](http://www.biostars.org/p/821/#825) /BS)
* Never modify raw data files (ideally, they should be read-only), copy/rename to new ones when making transformations, cleaning, etc.
* Check data consistency ([whuber](https://stats.stackexchange.com/questions/2768/what-is-a-consistency-check/2785#2785) /SE)
* Manage script dependencies and data flow with a build automation tool, like GNU make ([Karl Broman](http://kbroman.github.io/minimal_make/)/[Zachary Jones](http://zmjones.com/make/))
**Coding**
* organize source code in logical units or building blocks ([Josh Reich](https://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing/1434424#1434424)/[hadley](https://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing/1430569#1430569)/[ars](https://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs/1269808#1269808) /SO; [giovanni](http://www.biostars.org/p/821/#825)/[Khader Shameer](http://www.biostars.org/p/821/#828) /BS)
* separate source code from editing stuff, especially for large project -- partly overlapping with previous item and reporting
* Document everything, with e.g. [R]oxygen ([Shane](https://stackoverflow.com/questions/2284446/organizing-r-source-code/2284486#2284486) /SO) or consistent self-annotation in the source file -- a good discussion on Medstats, [Documenting analyses and data edits Options](http://groups.google.com/group/medstats/browse_thread/thread/601793e6ce36e789)
* [R] Custom functions can be put in a dedicated file (that can be sourced when necessary), in a new environment (so as to avoid populating the top-level namespace, [Brendan OConnor](https://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs/1319786#1319786) /SO), or a package ([Dirk Eddelbuettel](https://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs/1266400#1266400)/[Shane](https://stackoverflow.com/questions/2284446/organizing-r-source-code/2284486#2284486) /SO)
**Analysis**
* Don't forget to set/record the seed you used when calling RNG or stochastic algorithms (e.g. k-means)
* For Monte Carlo studies, it may be interesting to store specs/parameters in a separate file ([sumatra](http://neuralensemble.org/trac/sumatra) may be a good candidate, [giovanni](http://www.biostars.org/p/821/#825) /BS)
* Don't limit yourself to one plot per variable, use multivariate (Trellis) displays and interactive visualization tools (e.g. GGobi)
**Versioning**
* Use some kind of [revision control](http://en.wikipedia.org/wiki/Revision_control) for easy tracking/export, e.g. Git ([Sharpie](https://stackoverflow.com/questions/2712421/r-and-version-control-for-the-solo-data-analyst/2715569#2715569)/[VonC](https://stackoverflow.com/questions/2545765/how-can-i-email-someone-a-git-repository/2545784#2545784)/[JD Long](https://stackoverflow.com/questions/2286831/how-do-you-combine-revision-control-with-workflow-for-r/2290194#2290194) /SO) -- this follows from nice questions asked by @Jeromy and @Tal
* Backup everything, on a regular basis ([Sharpie](https://stackoverflow.com/questions/2712421/r-and-version-control-for-the-solo-data-analyst/2715569#2715569)/[JD Long](https://stackoverflow.com/questions/2286831/how-do-you-combine-revision-control-with-workflow-for-r/2290194#2290194) /SO)
* Keep a log of your ideas, or rely on an issue tracker, like [ditz](http://ditz.rubyforge.org/ditz/) ([giovanni](http://www.biostars.org/p/821/#825) /BS) -- partly redundant with the previous item since it is available in Git
**Editing/Reporting**
* [R] Sweave ([Matt Parker](https://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing/1430013#1430013) /SO) or the more up-to-date [knitr](http://yihui.name/knitr/)
* [R] Brew ([Shane](https://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing/1436809#1436809) /SO)
* [R] [R2HTML](http://cran.r-project.org/web/packages/R2HTML/index.html) or [ascii](http://cran.r-project.org/web/packages/ascii/index.html)
As a side note, Hadley Wickham offers a comprehensive overview of [R project management](http://github.com/hadley/devtools/wiki), including *reproducible exemplification* and an *unified philosophy of data*.
Finally, in his R-oriented [Workflow of statistical data analysis](http://www.kirchkamp.de/oekonometrie/pdf/wf-screen2.pdf) Oliver Kirchkamp offers a very detailed overview of why adopting and obeying a specific workflow will help statisticians collaborate with each other, while ensuring data integrity and reproducibility of results. It further includes some discussion of using a weaving and version control system. Stata users might find J. Scott Long's [The Workflow of Data Analysis Using Stata](http://www.stata.com/bookstore/workflow-data-analysis-stata/) useful too. |
In regression in general and in linear regression in particular, causal interpretation of parameters is sometimes permitted. At least in econometrics literature, but not only, when causal interpretation is permitted is not so clear. For a discussion, you can see *Regression and Causation: A Critical Examination of Six Econometrics Textbooks* - Chen and Pearl (2013).
For proper handling of causality in a statistical model the best way probably is to use Structural Causal Model as explained, for example (shortly), in *Trygve Haavelmo and the Emergence of Causal Calculus* – Pearl 2012 Feb.
However, these are currently not the standard approach in basic econometric models (classic multiple linear regression). Instead, the concept of a “true model” or “data generating process” is frequently used, and it sometimes has an explicitly causal meaning. In any case, I want to consider only the causal sense. Therefore, if we estimate the sample counterpart of the “true model”, we achieve a causal interpretation of the parameters.
Keeping in mind the above consideration, my attempt is to grasp
* the link between the concept of “true model” (of current econometrics textbooks) and structural causal model (of Pearl) … if any.
* The link between the previous point and the concept of a *randomized controlled experiment*, as used in the laboratory, which is sometimes the reference point for econometric observational studies (*as good as* a randomized experiment). For example, Stock and Watson (2013) devote a lot of discussion to this (particularly chapter 13). Moreover, on page 14 of Pearl 2012 Feb there is a review of the debate between “structuralists” and “experimentalists” that is strongly related to this point.
Can you explain to me something about these two points in the simplest possible scenario? | In the context of the Pearl paper you've given, what most econometricians would call a *true model* is input I-1 to the Structural Causal Model: a set of assumptions $A$ and a model $M\_A$ that encodes these assumptions, written as a system of structural equations (as in Models 1 and 2) and a list of statistical assumptions relating the variables. In general, the true model need not be recursive, so the corresponding graph can have cycles.
**What's an example of a true model?** Consider the relationship between schooling and earnings, described in Angrist and Pischke (2009), section 3.2. For individual $i$, what econometricians would call the *true model* is an assumed function mapping any level of schooling $s$ to an outcome $y\_{si}$:
$$
y\_{si} = f\_i(s).
$$
This is exactly the potential outcome. One could go further and assume a parametric functional form for $f\_i(s)$. For example, the linear constant effects causal model:
$$
f\_i(s) = \alpha + \rho s + \eta\_i.
$$
Here, $\alpha$ and $\rho$ are unobserved parameters. By writing it this way, we assume that $\eta\_i$ does not depend on $s$. In Pearl's language, this tells us what happens to expected earnings if we fix an individual's schooling at $s\_i = s\_0$, but we don't observe $\eta\_i$:
$$
E[y\_{si} \mid do(s\_i = s\_0)] = E[f\_i(s\_0)] = \alpha + \rho s\_0 + E[\eta\_i].
$$
We haven't said what queries we're interested in, or what data we have. So the "true model" is not a full SCM. (This is generally true, not just in this example.)
**What's the connection between a true model and a randomized experiment?** Suppose an econometrician wants to estimate $\rho$. Just observing $(s\_i, y\_i)$ for a bunch of individuals isn't sufficient. This is identical to Pearl's point about statistical conditioning. Here
$$
E[y\_{si} \mid s\_i = s\_0] = E[f\_i(s\_0) \mid s\_i = s\_0] = \alpha + \rho s\_0 + E[\eta\_i \mid s\_i = s\_0].
$$
As Angrist and Pischke point out, $\eta\_i$ may be correlated with $s\_i$ in observational data, due to selection bias: an individual's decision about schooling might depend on her value of $\eta\_i$.
Randomized experiments are one way to correct for this correlation. Using Pearl's notation loosely here, if we randomly assign our subjects to $do(s\_i = s\_0)$ and $do(s\_i = s\_1)$ then we can estimate $E[y\_{si} \mid do(s\_i = s\_1)]$ and $E[y\_{si} \mid do(s\_i = s\_0)]$. Then $\rho$ is given by:
$$
E[y\_{si} \mid do(s\_i = s\_1)] - E[y\_{si} \mid do(s\_i = s\_0)] = \rho(s\_1 - s\_0).
$$
With additional assumptions and data, there are other ways to correct for the correlation. A randomized experiment is only considered the "best" because we may not believe the other assumptions. For example, with the Conditional Independence Assumption and additional data, we could estimate $\rho$ by OLS; or we could bring in instrumental variables.
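To make the contrast concrete, here is a small simulation sketch (made-up numbers, not from Angrist and Pischke): when schooling depends on $\eta\_i$, the naive regression slope is biased away from $\rho$, while randomly assigned schooling recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, rho = 100_000, 1.0, 0.5
eta = rng.normal(size=n)

def ols_slope(s, y):
    # slope coefficient from regressing y on s
    return np.polyfit(s, y, 1)[0]

# Observational data: schooling partly chosen based on eta (selection bias)
s_obs = 12 + 2 * eta + rng.normal(size=n)
y_obs = alpha + rho * s_obs + eta
print(ols_slope(s_obs, y_obs))    # noticeably above 0.5

# Randomized experiment: schooling assigned independently of eta
s_rand = rng.choice([12.0, 16.0], size=n)
y_rand = alpha + rho * s_rand + eta
print(ols_slope(s_rand, y_rand))  # close to 0.5
```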
**Edit 2 (CIA)**: This is mainly a philosophical point, and Angrist and Pischke may disagree with my presentation here. The Conditional Independence Assumption (selection on observables) lets us correct for selection bias. It adds an assumption about joint distributions: that
$$
f\_i(s) \perp\!\!\!\perp s\_i \mid X\_i
$$
for all $s$. Using just conditional expectation algebra (see the derivation in Angrist and Pischke) it follows that we can write
$$
y\_i = f\_i(s\_i) = \alpha + \rho s\_i + X\_i' \gamma + v\_i
$$
with $E[v\_i \mid X\_i, s\_i] = 0$. This equation allows us to estimate $\rho$ in the data using OLS.
Neither randomization nor the CIA goes into the system of equations that defines the true model. They are statistical assumptions that give us ways to estimate parameters of a model we've already defined, using the data we have. Econometricians wouldn't typically consider the CIA part of the true model, but Pearl would include it in $A$. |
So I have a problem that I'm looking over for an exam that is coming up in my Theory of Computation class. I've had a lot of problems with the *pumping lemma*, so I was wondering if I might be able to get a comment on what I believe is a valid proof to this problem. From what I have seen online and in our review I don't think this is the customary answer to this problem so I want to know if I am applying the concepts behind the pumping lemma successfully. The problem is *not* a homework problem and can be found on my professor's previous exams [here](http://www.cs.ucf.edu/%7Edmarino/ucf/transparency/cot4210/exam/) under the fourth problem of his exam given in Fall of 2011, which is...
>
> Let $L = \{0^p \mid \text{\(p\) is a prime number}\}$. Prove that $L$ is not context-free using the pumping lemma for context-free languages.
>
>
>
So here is my proof:
>
> Assume that the pumping length is $m$, where $m+1$ is a prime number. I shall also assume that there is a string $uvxyz = 0^{(m/2)}00^{m/2} \in L$. There are two possible positions that do not violate conditions 2 and 3 of the pumping lemma for context languages, being $|vy| > 0$ and $|vxy| \leq m$. These are:
>
>
> 1. $u = 0^{(m/2)}, v = 0, x = 0^{m/2}$, pumping by one results in $0^{m/2}000^{m/2}$. Since m/2 + m/2 is m, which is one less than the prime number m+1, it is an even number. m+2 is also an even number and since $|0^{m/2}000^{m/2}| = m + 2$, this number of zeroes is also even and thus cannot be prime, resulting in a contradiction.
> 2. The other placement is to place the string on the symmetric opposite or $x = 0^{m/2}, y = 0, z = 0^{m/2}$. This results in the same contraction as in case 1.
>
>
>
The string cannot be placed in the center such that $v = 0^{m/2}, x = 0, y = 0^{m/2}$ as this would violate condition three or $|vxy| \leq m$, since $|vxy| = m + 1 > m$.
So my question is essentially, is this a valid proof and if not what is wrong with it? | If $L$ is context-free, then by [Parikh's theorem](http://en.wikipedia.org/wiki/Parikh's_theorem), the set $\{p \mid \text{$p$ is a prime number}\}$ is a finite union of arithmetic progressions. Therefore, there exists an infinite arithmetic progression $a, a+r, a+2r, \dotsm$ consisting only of prime numbers. In particular $a$ should be prime, but then $a + ar = a(r+1)$ is not prime. Contradiction. |
I just started taking a course on Data Structures and Algorithms and my teaching assistant gave us the following pseudo-code for sorting an array of integers:
```
void F3() {
for (int i = 1; i < n; i++) {
if (A[i-1] > A[i]) {
swap(i-1, i)
i = 0
}
}
}
```
It may not be clear, but here $n$ is the size of the array `A` that we are trying to sort.
In any case, the teaching assistant explained to the class that this algorithm is in $\Theta(n^3)$ time (worst-case, I believe), but no matter how many times I go through it with a reversely-sorted array, it seems to me that it should be $\Theta(n^2)$ and not $\Theta(n^3)$.
Would someone be able to explain to me why this is $Θ(n^3)$ and not $Θ(n^2)$? | This algorithm can be re-written like this
1. Scan `A` until you find an [inversion](https://en.wikipedia.org/wiki/Inversion_%28discrete_mathematics%29).
2. If you find one, swap and start over.
3. If there is none, terminate.
Now there can be at most $\binom{n}{2} \in \Theta(n^2)$ inversions and you need a linear-time scan to find each -- so the worst-case running time is $\Theta(n^3)$. A beautiful teaching example as it trips up the pattern-matching approach many succumb to!
*Nota bene:* One has to be a little careful: some inversions appear early, some late, so it is not per se trivial that the costs add up as claimed (for the lower bound). You also need to observe that swaps never introduce *new* inversions. A more detailed analysis of the case with the inversely sorted array will then yield something like the quadratic case of Gauss' formula.
As @gnasher729 aptly comments, it's easy to see the worst-case running time is $\Omega(n^3)$ by analyzing the running time when sorting the input $[1, 2, \dots, n, 2n, 2n-1, \dots, n+1]$ (though this input is probably not *the* worst case).
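For a quick empirical check, a faithful translation of the pseudocode that just counts loop iterations lets you compare how the cost grows on both of these inputs as $n$ doubles (a sketch in Python):

```python
def f3_steps(a):
    a = list(a)
    n = len(a)
    steps = 0
    i = 1
    while i < n:
        steps += 1
        if a[i - 1] > a[i]:
            a[i - 1], a[i] = a[i], a[i - 1]
            i = 0  # the increment below then restarts the scan at i = 1
        i += 1
    return steps

for n in [20, 40, 80, 160]:
    reverse_sorted = list(range(n, 0, -1))
    special = list(range(1, n + 1)) + list(range(2 * n, n, -1))  # [1..n, 2n..n+1]
    print(n, f3_steps(reverse_sorted), f3_steps(special))
```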
Be careful: don't assume that a reversely-sorted array will necessarily be the worst-case input for all sorting algorithms. That depends on the algorithm. There are some sorting algorithms where a reversely-sorted array isn't the worst case, and might even be close to the best case. |
I am working with a time series of **anomaly scores** (the background is anomaly detection in computer networks). Every minute, I get an anomaly score $x\_t \in [0, 5]$ which tells me how "unexpected" or abnormal the current state of the network is. The higher the score, the more abnormal the current state. Scores close to 5 are theoretically possible but occur almost never.
Now I want to come up with an algorithm or a formula which automatically determines a **threshold** for this anomaly time series. As soon as an anomaly score exceeds this threshold, an alarm is triggered.
The frequency distribution below is an example of an anomaly time series over one day. However, it is **not** safe to assume that every anomaly time series is going to look like that. In this special example, an anomaly threshold such as the .99-quantile would make sense since the few scores on the very right can be regarded as anomalies.
![enter image description here](https://i.stack.imgur.com/Ieo7b.jpg)
And the same frequency distribution as time series (it only ranges from 0 to 1 since there are no higher anomaly scores in the time series):
![time series](https://i.stack.imgur.com/0ZPoe.jpg)
Unfortunately, the frequency distribution might have shapes, where the .99-quantile is **not useful**. An example is below. The right tail is very low, so if the .99-quantile is used as threshold, this might result in many false positives. This frequency distribution **does not seem to contain anomalies** so the threshold should lie outside the distribution at around 0.25.
![enter image description here](https://i.stack.imgur.com/gZw9a.jpg)
Summing up, the difference between these two examples is that the first one seems to exhibit anomalies whereas the second one does not.
From my naive point of view, the algorithm should consider these two cases:
* If the frequency distribution has a large right tail (i.e. a couple abnormal scores), then the .99-quantile can be a good threshold.
* If the frequency distribution has a very short right tail (i.e. no abnormal scores), then the threshold should lie outside the distribution.
/edit: There is also no ground truth, i.e. labeled data sets available. So the algorithm is "blind" against the nature of the anomaly scores.
Now I am not sure how these observations can be expressed in terms of an algorithm or a formula. Does anyone have a suggestion how this problem could be solved? I hope that my explanations are sufficient since my statistical background is very limited.
Thanks for your help! | You might find [this paper](http://www.stat.duke.edu/~mw/Smith+West1983.pdf) of interest. See also more detailed presentation of similar models in [West & Harrison](http://rads.stackoverflow.com/amzn/click/0387947256). There are other examples of this sort of monitoring as well, many which are more recent, but this isn't exactly my wheelhouse :). Undoubtedly there are suitable implementations of these models, but I don't know what they might be offhand...
The basic idea is that you have a switching model where some observations/sequence of observations are attributed to abnormal network states while the rest are considered normal. A mixture like this could account for the long right tail in your first plot. A dynamic model could also alert you to abnormal jumps like at 8:00 and 4:00 in real-time by assigning high probability to new observations belonging to a problem state. It could also be easily extended to include things like predictors, periodic components (perhaps your score rises/falls a bit with activity) and that sort of thing.
Edit: I should also add, this kind of model is "unsupervised" in the sense that anomalies are caught either by showing a large mean shift or increase in variance. As you gather data you can improve the model with more informative prior distributions. But perhaps once you have enough data (and hard-won training examples by dealing with network problems!) you could devise some simple monitoring rules (thresholds, etc) |
Let's say we are working with a system that has 40 physical address bits. The total physical address space (assuming byte-addressable memory) is $2^{40}$ bytes, or 1 TiB. And if virtual addresses are 48 bits in length, that means there are more addresses available to virtual memory than there are locations in physical memory.
This makes sense to me, because the "excess" addresses could refer to hard disk locations as well. However, what I don't understand is how the translation between virtual and physical addresses occurs. I assume there is a mapping stored somewhere which links VAS locations to the physical locations. If there are more virtual address locations than physical locations, how can all of these mappings possibly be stored in memory? At minimum you would need 48 bits to store each virtual address, and then another 40 to store the physical location it maps to. So obviously you cannot just store a 1:1 mapping of each virtual address to its physical counterpart, as mapping every location would take more memory than physical memory itself.
What exactly am I missing here? | The trick to making this work is "paging." When bringing data from a hard disk into physical memory, you don't just bring a few bytes. You bring an entire page. 4k bytes is a very common page size.
If you only need to keep track of pages, not each individual byte, the mapping becomes much cheaper. If you have a 48-bit address space and 4096-byte pages, you only need to track 2^36 pages (roughly 69 billion pages). That's much easier! The record of where each page can be found is known as a "page table."
If you actually need 1-256 TiB of memory, then giving up a few gigabytes to store this page table isn't a big deal. In practice however, we'll do things like use [multi-level page tables](https://en.wikipedia.org/wiki/Page_table#Multilevel_page_table), which lets us be a bit more efficient, keeping pages only for regions of the address space that we are actually using. |
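As a concrete illustration with the numbers from this question (48-bit virtual addresses, 4 KiB pages), translating an address is just splitting it into a virtual page number and an offset, looking the page number up, and reattaching the offset. A toy sketch, with a plain dictionary standing in for the (multi-level) page table:

```python
PAGE_SIZE = 4096       # 4 KiB pages
OFFSET_BITS = 12       # log2(4096)

# Toy page table: virtual page number -> physical frame number
page_table = {0x12345: 0x00ABC}

def translate(virtual_address):
    vpn = virtual_address >> OFFSET_BITS        # virtual page number
    offset = virtual_address & (PAGE_SIZE - 1)  # offset within the page
    frame = page_table[vpn]                     # a missing entry would be a page fault
    return (frame << OFFSET_BITS) | offset

va = (0x12345 << OFFSET_BITS) | 0x7FF
print(hex(translate(va)))  # 0xabc7ff
```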
I'm reading Sedgewick and Wayne's book of Algorithm. When I read the following proof in the attached picture, I don't understand why it assumed the comparison number is lg(number of leaves). Any help is appreciated!![enter image description here](https://i.stack.imgur.com/qHTRv.png) | A sorting algorithm using at most $h$ comparisons on all inputs corresponds to a tree of height at most $h$. Such a tree has at most $2^h$ leaves. On the other hand, each permutation of $1,\ldots,N$ must land at a different leaf, and so there must be at least $N!$ leaves. Putting these together, we deduce that $2^h \geq N!$ and so $h \geq \log\_2 N! = \Omega(N\log N)$ (using Stirling's approximation). So every sorting algorithm must use at least $\log\_2 N! = \Omega(N\log N)$ comparisons in the worst case (on some inputs it can use less). |
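To get a feel for the bound, a short sketch comparing $\lceil \log\_2 N! \rceil$ (the minimum worst-case number of comparisons) with $N\log\_2 N$:

```python
import math

def log2_factorial(n):
    # log2(n!) as a sum of logs; its ceiling is the comparison lower bound
    return sum(math.log2(k) for k in range(2, n + 1))

for n in [10, 100, 1000]:
    print(n, math.ceil(log2_factorial(n)), round(n * math.log2(n)))
```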
Is it common practice in RL to have only one reward, given at the end of the task? Or is it also possible to introduce subtasks/intermediate goals, so that feedback is not so delayed and more rewards (or reward functions) are necessary?
> Is it common practice in RL to have only one reward function awarded when a task is fulfilled in the end?
>
>
>
This isn't quite the correct definition of a reward function. An MDP has a *single* reward function, $R(s,a,s'): S \times A \times S \mapsto \mathbb{R}$, where $S, A$ are the sets of states, actions in the problem. You'll sometimes see versions with fewer arguments, say $R(s,a)$ or $R(s)$.
$R$ returns rewards for *every* state transition. Many of them, or even all but one, can be zero. Or, other intermediate states can include positive or negative rewards. Both are possible, and dependent on the particular application.
This is the definition you'll find at the start of most reinforcement learning papers, e.g. [this one on reward shaping](http://www.cs.berkeley.edu/~pabbeel/cs287-fa09/readings/NgHaradaRussell-shaping-ICML1999.pdf), the related study of how one can alter the reward function without affecting the optimal policy.
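As a tiny illustration (a sketch, not tied to any particular library): a sparse reward that is zero except at the goal and a shaped reward with intermediate feedback are both just particular choices of the single function $R(s,a,s')$. The shaped version below is potential-based in the spirit of the linked paper (with $\gamma = 1$):

```python
GOAL = 10

def sparse_reward(s, a, s_next):
    # zero everywhere except on reaching the goal state
    return 1.0 if s_next == GOAL else 0.0

def shaped_reward(s, a, s_next):
    # same goal bonus, plus intermediate feedback for moving closer to the goal
    return sparse_reward(s, a, s_next) + 0.1 * (abs(GOAL - s) - abs(GOAL - s_next))
```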
Are there any other methods to update my belief in a hypothesis aside from the Bayesian update rule? | There are some alternatives, in fact, but they rely on using non-probabilistic methods. (The uniqueness of Bayes Law is implied by the uniqueness of a single probability measure, and the definition of joint probability - see [this other answer](https://stats.stackexchange.com/a/154377/35185) for details)
[Dempster Shafer theory](https://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory) is an alternative, as are more complex formalisms such as DSMT. Similarly, [Imprecise probabilities](https://en.wikipedia.org/wiki/Imprecise_probability) can do similar things - but they still use Bayes' law. Fuzzy Logic and other formalisms may also be of interest. |
I read about Big-O notation with modular arithmetic. So, time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, where an elementary operation takes a fixed amount of time to perform. Thus the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor. Some algorithms have a worst-case time of $O(n)$ or $O(n^2)$ or $O(n^3)$.
What I want to know is what is the best time complexity of an algorithm? And why? | The problem you are describing is indeed a shortcoming of asymptotic notation. Asymptotic notation has several advantages:
1. It doesn't depend as much on the underlying computation model. For example, the microprocessor executing the compiled code shouldn't affect the running time by more than a constant.
2. It avoids cumbersome exact results, for example instead of $n^2 + 3.5n\log n + 700\log n\lfloor \log\log n \rfloor$ we can write simply $O(n^2)$ (or even $\Theta(n^2)$).
3. In many cases it is hard to analyze exactly the running time (even ignoring the first issue), but easy to come up with asymptotics. Standard examples are recursive algorithms. In those case, however, it is often possible to come up with first-order estimates, that is, perhaps we can say that the running time satisfies $T(n) \sim 2n^2$, which is more accurate than $T(n) = \Theta(n^2)$.
4. Practically speaking, in many cases the constants hidden by the big O notation are "small" or at least "reasonable", and therefore for $n$ which is not too small, a $\Theta(n)$ algorithm will usually be faster than a $\Theta(n^2)$ algorithm.
5. Big O estimates function as ball park heuristics, which allow us to guess which algorithm is better or which part of an algorithm needs optimization the most. If in doubt, however, it is better to use profiling.
On the other hand, in some cases the problem you're indicating is serious. A classical case is fast matrix multiplication, where the trivial $O(n^3)$ algorithm often beats Strassen's $O(n^{\log\_2 7})$, and always beats all other algorithms (which have better asymptotic running time).
Summarizing, there is no easy answer. Asymptotic notation is a good model of reality in many cases, but not always. Still, it is useful since it is concise and allows quick comparison between different algorithms. |
First of all, sorry for the strange title; I had no idea how to describe my problem better. My issue is the following, and I think it is pretty much limited to the geosciences.
I have several properties for every sample, which are divided by depth.
For instance:
$ \qquad \displaystyle \small \begin{array} {r|rrr} \hline
ID & 1 & 2 &3 & ...\\ \hline
\text{var1}\_{0-20cm} & 2.3 &2.0 &1.0& ...\\
\text{var1}\_{20-50cm} & 2.1 &1.1 &0.0& ...\\
\text{var1}\_{50-100cm}& 2.6 &1.1 &0.0& ...\\ \hline
\text{var2}\_{0-20cm} & 10.5 &5.5 &3.5& ...\\
\text{var2}\_{20-50cm} & 10.9 &5.9 &1.9& ...\\
\text{var2}\_{50-100cm}& 15.0 &5.0 &1.0& ...\\ \hline
\vdots & \vdots & \vdots\\ \hline \end{array}
$
Basically these are geological layers going from surface down to 100 cm depth.
I am trying to decrease the number of variables, either with PCA or factor analysis.
The issue is that I would like to handle properties together, no matter what the depth is.
(For instance I do not want to get rid of a layer in between the surface and the bottom layer.)
Is there any way to handle them together, or group them for PCA or whatever. I tried to find some relevant information, but I think the problem is limited to a small portion of the science (maybe I am wrong), so I could not find anything useful. | What you could do is use [Multiple Factor Analysis](https://en.wikipedia.org/wiki/Multiple_factor_analysis). This method allows for factor analysis in which you consider multiple groups of variables.
If you set up your analysis so that each group is a depth, then it guarantees that all your depths will be 'preserved'.
EDIT : Maybe explaining a bit more would be useful
In MFA, as in PCA, you have coordinates for your individuals and your variables. But what's new with MFA is the groups of variables, for which you can compute coordinates too, so you can extract coordinates for all of your groups (depths) on the first few dimensions, effectively reducing the number of variables while keeping all your depths.
If you consider your individuals, you will have several sets of coordinates: one for each of the groups of variables (a description of the individuals by each group of variables, if you will), plus a set of coordinates which is the centroid of all the group coordinates (the partial representations). That last set of coordinates can be interpreted as how the individuals are described overall.
According to [Immerman](http://books.google.ca/books?id=kWSZ0OWnupkC&pg=PA224#v=onepage&q&f=false), the complexity class associated with [SQL](http://en.wikipedia.org/wiki/SQL) queries is exactly the class of *safe queries* in $\mathsf{Q(FO(COUNT))}$ (first-order queries plus counting operator): SQL captures safe queries. (In other words, all SQL queries have a complexity in $\mathsf{Q(FO(COUNT))}$, and all problems in $\mathsf{Q(FO(COUNT))}$ can be expressed as an SQL query.)
Based on this result, from a theoretical point of view, there are many interesting problems that can be solved efficiently but are not expressible in SQL. Therefore an extension of SQL which is still efficient seems interesting. So here is my question:
>
> Is there an **extension of SQL** (implemented and **used in the industry**) which **captures $\mathsf{P}$** (i.e. can express all polynomial-time computable queries and no others)?
>
>
>
I want a database query language which satisfies all three conditions. It is easy to define an extension of SQL which will capture $\mathsf{P}$. But my question is whether such a language makes sense from the practical perspective, so I want a language that is being used in practice. If this is not the case and there is no such language, then I would like to know if there is a reason that makes such a language uninteresting from the practical viewpoint. For example, are the queries that arise in practice usually simple enough that there is no need for such a language? | As for your main question, I recommend this [short survey](http://www2.informatik.hu-berlin.de/~grohe/pub/gro08b.pdf) by Martin Grohe.
---
>
> Are the queries that are needed in practice usually simple enough that there is no need for a stronger language?
>
>
>
I'd say this holds most of the time, given the fair amount of extensions added to common query languages (transitive closure, arithmetic operators, counting, etc.). This comes from the point of view of somebody who has done some work as a freelance developer of relatively simple web sites and other applications, so I'm not sure about the real uses of SQL in bigger industries/larger databases.
In the rare cases a more powerful language might be needed, my guess is that software developers deal with them by using the language in which they write the application, not the queries (like C++ or java). |
I have a problem regarding the following situation.
I have two arrays of numbers like this:
```
index/pos 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Array 1(i): 1 2 3 4 7 5 4 3 7 6 5 1 2 3 4 2
Array 2(j): 4 4 8 10 10 7 7 10 10 11 7 4 7 7 4
```
Now suppose the second array is very hard to compute, but I have noticed that if I add
A[i] + A[i+1]
in array 1, I get a number very close to the number A[j] in array 2.
1. Is my solution a heuristic or approximation?
2. If I had reason to believe that I will never overshoot the value of A[j] by ±x with this algorithm, and could prove it, would my solution then be a heuristic or an approximation?
Is there any literature that deals with heuristic vs. approximation questions for problems in P, where the solution can be achieved in polynomial time but the input is just too big for a polynomial-time algorithm to be practical?
Thank you! | A *heuristic* is essentially a hunch, i.e., the case you describe ("I noticed it is near", you don't have a *proof* that it is so) is a heuristic. So is solving the [traveling salesman problem](http://en.wikipedia.org/wiki/Travelling_salesman_problem) by starting at a random vertex and going to the nearest not-yet-visited vertex at each step. It is a *plausible* idea that *should not* give a too bad solution. In this case, it can be shown that it won't always give a good solution.
An *approximation algorithm* is usually understood to give an approximate solution, with some kind of guarantee of performance (i.e., it solves TSP, and the total cost is never off by more than a factor of 2; or even better, it solves TSP and, depending on <some parameters that can be varied> the solution is never worse than optimal by more than a factor $1 + \epsilon$, where $\epsilon$ depends on <parameters>). |
I have 3 input variables and the output for all 8 possible combinations is 0 (false). When making a circuit, how would I show this using gates, or no gates at all? Thanks! | The key property that we want from (non-cryptographic) pseudorandom numbers is that they "look" independent. In particular, say you have some algorithm that requires a PRNG to perform well and you give it a current-time function as a PRNG. Then, if the algorithm repeatedly queries what is supposed to be a PRNG, it will actually get the same value many times in a row, until it eventually increments by one. These values are very, very far from random and can be detrimental to algorithmic performance.
Consider the following scenario:
Alice subscribes to a video rental service that allows her to watch movies. Every time A watches a movie, she rates it either thumbs up (1) or thumbs down (0), and she then chooses the next movie she wants to watch. Every movie belongs to exactly one director, and a director can have directed many movies. The question is, what is the best way to determine who A's "favorite" director is?
My initial thought was do something like:
* For each director of at least one movie that A watched, calculate the lower bound of some binomial confidence interval (e.g. [Wilson score interval](http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Wilson_score_interval)) as A's "favorability" score for that director
However, this binomial approach seems flawed because it ignores a seemingly crucial piece of information: Alice has an entire universe of movies to choose from, and if she consistently chooses to watch movies from a certain director, then doesn't that tell us something about her preference for that director, even if she then rates that director's movies below her average? I feel like there must be some element of "voting with one's feet" that is ignored if we only consider ratings on the movies that were watched.
What is the best way to combine both the selection of movie/director with the ratings on individual movies to determine who is A's favorite director? It seems like A's preference for director D has to be a function of A's ratings on D's movies that she watched, and also the % of all of D's movies that A chose to watch.
UPDATE: I should make clear, the problem I'm dealing with isn't quite as simple as the thumbs up/thumbs down case, it's really more like "A watches a movie and then checks a box if she liked it." So each viewing does result in a 0 or a 1, but the absence of checking a box isn't quite the same thing as a "thumbs down" because the viewer might only feel compelled to check the "approve" box if she *really* likes something. All the more reason that the choice of what to watch has to factor into preferences | A really simple answer is the modal director, but that does not adjust for the composition, since some directors may be more prolific or simply older.
For each user, I would consider the ratio of **liked** movies by director $i$ to all movies watched by director $i$, scaled by the ratio of all movies watched to **all** movies made by director $i$. When a user has not seen any movies directed by $i$, this ratio is undefined, and can be reset to zero. The director with the highest value of this quantity is the favorite. I got the geometric intuition for this formula from a Venn diagram. I think this controls for the size of each director's corpus and is bounded between 0 and 1.
Here are some examples with made up numbers. Alice has liked 5 movies by Herzog out of the 10 he has made and all of which she has seen. She has watched 20 movies total. The Herzog score is $\frac{5}{10}/{\frac{20}{10}}=0.25$. Suppose she has only seen seven Herzogs. The score jumps to $\frac{5}{7}/{\frac{20}{10}}=0.35$.
People also tend to watch movies in groups, and so they see movies they dislike. This needs to be accounted for. Suppose Alice has only watched Herzog to humor her husband Bob and she liked none of them. The score is now $\frac{0}{10}/{\frac{20}{10}}=0$. To me, this is the sensible interpretation of watching and disliking. It does not depend on how many Herzogs she has seen.
This approach does not explicitly use the sequence of movies. It probably matters if she watched all the Greenway films before all the Herzogs. On the other hand, people develop tastes over time, so maybe the order is less interesting, though maybe you can use the timing to break ties. It also does not use the "clumpiness". If she watched every Herzog in a row after seeing the first one, that's a strong signal she likes his work, compared to if they were all scattered throughout her viewing history. Maybe you can scale the score above by an entropy measure, but I don't know enough about this to really help. |
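A small sketch of this score as code (names mine), reproducing the Herzog numbers above:

```python
def director_score(liked, watched_by_director, made_by_director, watched_total):
    # (share of this director's watched movies that were liked),
    # scaled down by (total movies watched / size of this director's corpus)
    if watched_by_director == 0:
        return 0.0
    return (liked / watched_by_director) / (watched_total / made_by_director)

print(director_score(5, 10, 10, 20))  # 0.25
print(director_score(5, 7, 10, 20))   # ~0.36
print(director_score(0, 10, 10, 20))  # 0.0
```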
Using R or SAS, I want to fit the following Gaussian model:
$$
\begin{pmatrix}
y\_{1j1} \\ y\_{1j2} \\ y\_{1j3} \\ y\_{2j1} \\ y\_{2j2} \\ y\_{2j3}
\end{pmatrix}
\sim\_{\text{i.i.d.}}
{\cal N}
\left(
\begin{pmatrix}
\mu\_1 \\
\mu\_1 \\
\mu\_1 \\
\mu\_2 \\
\mu\_2 \\
\mu\_2
\end{pmatrix}
, \Sigma \right), j=1, \ldots n
$$
with covariance matrix having the following structure:
$$
\Sigma =
\begin{pmatrix}
\Sigma\_0 & M \\
M & \Sigma\_0
\end{pmatrix}
$$
with $\Sigma\_0$ a "compound symmetry" (exchangeable) covariance matrix and $M=\delta \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}$, $\delta \in \mathbb{R}$.
Importantly, I need a general exchangeable matrix $\Sigma\_0$, with possibly negative correlation.
EDIT: In view of some comments (and even an answer) given below I should add a precision: I am not a beginner with PROC MIXED in SAS and nlme in R, and I know how to consult the documentations. But in spite of my experience I am not able to fit this model. | I think you may have to go for direct optimisation of the loglikelihood, especially if you want to allow negative correlations in the diagonal block. At least in R; I don't speak SAS.
I would follow what `lme4` does in profiling out the $\mu$ terms and an overall scale factor and then feeding the -2 log profile likelihood to `minqa::bobyqa`, which is a fast, derivative-free optimiser for smooth functions. That's what I've done for composite-likelihood mixed models.
The profile likelihood calculations are in the `lme4` "Computational Methods" vignette. |
Basically, I currently have two ideas but unsure on which is correct for the following question:
"The High level data link control protocol (HDLC), is a popular protocol used for point-to-point data communication. In HDLC, data are organised into frames which begin and end with the sequence 01111110. This sequence never occurs within the main body of the frame, only at the beginning and end (in order to avoid confusion).
a.)Design an NFA which recognises the language of binary strings which contain one or more HDLC frames"
My possible solutions:
[![enter image description here](https://i.stack.imgur.com/ejT9w.png)](https://i.stack.imgur.com/ejT9w.png)
The next part is to convert to DFA, but I first need to get this part right. | The previous answers seem to be incorrect. Let $A = \{0,1\}$. Since you don't want to have the pattern $0110$ as a strict internal factor, the language you are looking can be written as $01111110 A^\* 01111110 \setminus 0A^\* 01111110 A^\*0$. You may use the question [Designing a regular expression for the set of all binary that contain strings without a particular sub string (i.e. 110)](https://cs.stackexchange.com/questions/11787/designing-a-regular-expression-for-the-set-of-all-binary-that-contain-strings-wi) but you will still have to work a little bit to find the minimal DFA of this language, which has 24 states. The unique final state is 24.
```
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
0 2 0 0 0 0 0 0 9 17 17 17 17 17 17 0 17 17 17 17 17 17 17 24 0
1 0 3 4 5 6 7 8 0 10 11 12 13 14 15 16 16 18 19 20 21 22 23 16 0
``` |
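To check the table, here is a small Python sketch that simulates it directly (state 0 is the dead state, 1 the start state, and 24 the unique accepting state; the transition rows are copied verbatim from the table above):

```
row0 = "2 0 0 0 0 0 0 9 17 17 17 17 17 17 0 17 17 17 17 17 17 17 24 0"
row1 = "0 3 4 5 6 7 8 0 10 11 12 13 14 15 16 16 18 19 20 21 22 23 16 0"
delta = {("0", s + 1): t for s, t in enumerate(map(int, row0.split()))}
delta.update({("1", s + 1): t for s, t in enumerate(map(int, row1.split()))})

def accepts(word):
    state = 1
    for ch in word:
        if state == 0:            # dead state: reject
            return False
        state = delta[(ch, state)]
    return state == 24

print(accepts("01111110" + "0101" + "01111110"))   # a flag-delimited frame -> True
print(accepts("01111110"))                         # the flag alone -> False
```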
These days I'm using (and discovering) a lot of nice ways to use k-means clustering. For example, clustering word embeddings (word2vec vectors) to find synonyms or clustering doc vectors (doc2vec) to classify topics.
But for these tasks I always need to specify the number of clusters (the 'k'). Is there some unsupervised algorithm that estimates a good number of clusters and (optionally) also does the clustering task? | Yet another approach is to use the [Mean Shift](https://en.wikipedia.org/wiki/Mean_shift) algorithm. You must specify a radius around points to explore, but it determines the number of clusters.
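A minimal scikit-learn sketch; the random matrix is just a placeholder for your embedding vectors, and `estimate_bandwidth` plays the role of choosing that radius:

```
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

X = np.random.default_rng(0).normal(size=(300, 50))   # placeholder for word/doc vectors
bandwidth = estimate_bandwidth(X, quantile=0.2)       # the "radius" to explore
labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
print(len(set(labels)))                               # number of clusters is found, not chosen
```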
In Data Science, many seem to be using [**pandas**](http://pandas.pydata.org/index.html) dataframes as the datastore. What are the features of pandas that make it a superior datastore compared to **regular relational databases** like [MySQL](https://www.mysql.com/), which are used to store data in many other fields of programming?
While pandas does provide some useful functions for data exploration, you can't use SQL and you lose features like query optimization or access restriction. | I think the premise of your question has a problem. Pandas is not a "datastore" in the way an RDBMS is. Pandas is a Python library for manipulating data that will fit in memory. Disadvantages:
* Pandas does not persist data by itself. It does, however, have a (slow) method called `to_sql` that will persist your pandas data frame to an RDBMS table (see the sketch after this list).
* Pandas will only handle results that fit in memory, which is easy to fill. You can either use dask to work around that, or you can work on the data in the RDBMS (which uses all sorts of tricks like temp space) to operate on data that exceeds RAM. |
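A minimal pandas sketch of both points above (the SQLite file and table name are arbitrary placeholders):

```
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("sqlite:///example.db")
df = pd.DataFrame({"id": [1, 2, 3], "value": [10.0, 20.0, 30.0]})

df.to_sql("measurements", engine, if_exists="replace", index=False)          # persist to an RDBMS
back = pd.read_sql("SELECT * FROM measurements WHERE value > 15", engine)    # query it with SQL
print(back)
```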
I want to know if there are any examples of real-life applications of the Laplace and Cauchy density functions. How do they differ in their applications?
This [related post](https://stats.stackexchange.com/q/33776/119261), however, does not answer my question. | One example is using them as [robust priors](https://stats.stackexchange.com/questions/177210/why-is-laplace-prior-producing-sparse-solutions/177217#177217) for regression parameters, where Laplace prior corresponds to LASSO (Tibshirani, 1996) , but $t$-distribution, or Cauchy are other alternatives (Gelman et al, 2008).
Moreover, you can have [L1 regularized](https://stats.stackexchange.com/questions/163388/l2-regularization-is-equivalent-to-gaussian-prior/163413#163413) regression with Laplace errors (i.e. minimizing absolute error).
Another example: [Laplace noise](https://stats.stackexchange.com/questions/223494/what-is-meant-by-laplace-noise) is used in the currently trendy field of [differential-privacy](/questions/tagged/differential-privacy "show questions tagged 'differential-privacy'").
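As an illustration of the differential-privacy use, here is a minimal Python sketch of the Laplace mechanism; the function name and the counting-query example with sensitivity 1 and $\epsilon = 0.5$ are my own illustrative choices:

```
import numpy as np

rng = np.random.default_rng()

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Release true_value with Laplace noise of scale sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

noisy_count = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```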
---
Tibshirani, R. (1996). [Regression shrinkage and selection via the lasso.](http://www.gautampendse.com/software/lasso/webpage/Tibshirani1996.pdf) Journal of the Royal Statistical Society. Series B (Methodological), 267-288.
Gelman, A., Jakulin, A., Pittau, G.M., and Su, Y.-S. (2008). [A weakly informative default prior distribution for logistic and other regression models.](http://projecteuclid.org/euclid.aoas/1231424214) The Annals of Applied Statistics, 2(4), 1360-1383. |
Will software eventually make statisticians obsolete? What is done that can't be programmed into a computer? | Academic studies which look at the probability of automation of different occupations or task do not think that statisticians will be soon substituted by computers. See for example the controversial [Frey & Osborne (2013)](http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf) study which ranks occupations according to their probability of computerization, statisticians are ranked low 213 out of 702 with a probability of 22% (see table in the appendix). If you are further interested, see also the [Slate article here](http://www.slate.com/blogs/future_tense/2013/09/27/researchers_claim_many_jobs_at_risk_for_automation_here_s_what_they_missed.html).
[Arntz et al. (2016)](http://www.oecd-ilibrary.org/the-risk-of-automation-for-jobs-in-oecd-countries_5jlz9h56dvq7.pdf;jsessionid=1p412y3oq5k5o.x-oecd-live-02?contentType=%2Fns%2FWorkingPaper&itemId=%2Fcontent%2Fworkingpaper%2F5jlz9h56dvq7-en&mimeType=application%2Fpdf&containerItemId=%2Fcontent%2Fworkingpaperseries%2F1815199x&accessItemIds=) ([here](http://www.economist.com/news/finance-and-economics/21699930-reasons-be-less-afraid-about-march-machines-im-afraid-i-cant-do) is an article in The Economist) look at tasks rather than occupations for the European Union and come to a similar conclusion: Doing "Complex Math or Statistics" is statistically significantly negatively related to job automatability (see Table 3).
But some caution is advisable: academics and/or economists have not always been very good at predicting the future (the Nobel laureate Robert Lucas, for example, concluded in 2003, a few years before the financial crisis, that the ["central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades."](http://pages.stern.nyu.edu/~dbackus/Taxes/Lucas%20priorities%20AER%2003.pdf)). Both studies appear to be working papers, which are widely discussed but have not been published in standard peer-reviewed journals.
Regarding the academic debate, [here](http://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.29.3.3) you can find an overview article about the state of research about automation. |
To demonstrate the importance of algorithms (e.g. to students and professors who don't do theory or are even from entirely different fields) it is sometimes useful to have ready at hand a list of examples where core algorithms have been deployed in commercial, governmental, or widely-used software/hardware.
I am looking for such examples that satisfy the following criteria:
1. The software/hardware using the algorithm should be in wide use right now.
2. The example should be specific. Please give a reference to a specific system and a specific algorithm. E.g., in "algorithm X is useful for image processing" the term "image processing" is not specific enough; in "Google search uses graph algorithms" the term "graph algorithms" is not specific enough.
3. The algorithm should be taught in typical undergraduate or Ph.D. classes in algorithms or data structures. Ideally, the algorithm is covered in typical algorithms textbooks. E.g., "well-known system X uses little-known algorithm Y" is not good.
---
### Update:
Thanks again for the great answers and links!
Some people comment that it is hard to satisfy the criteria
because core algorithms are so pervasive that it's hard to point to a specific use.
I see the difficulty.
But I think it is worthwhile to come up with specific examples because
in my experience telling people:
"Look, algorithms are important because they are *just about everywhere*!" does not work. | [PageRank](http://en.wikipedia.org/wiki/Pagerank) is one of the best-known such algorithms. Developed by Google co-founder Larry Page and co-authors, it formed the basis of Google's original search engine and is [widely credited](http://blog.computationalcomplexity.org/2013/07/altavista-versus-google.html) with helping them to achieve better search results than their competitors at the time.
We imagine a "random surfer" starting at some webpage, and repeatedly clicking a random link to take him to a new page. The question is, "What fraction of the time will the surfer spend at each page?" The more time the surfer spends at a page, the more important the page is considered.
More formally, we view the internet as a graph where pages are nodes and links are directed edges. We can then model the surfer's action as a random walk on a graph or equivalently as a Markov Chain with transition matrix $M$. After dealing with some issues to ensure that the Markov Chain is ergodic (where does the surfer go if a page has no outgoing links?), we compute the amount of time the surfer spends at each page as the steady state distribution of the Markov Chain.
The algorithm itself is in some sense trivial - we just compute $M^k \pi\_0$ for large $k$ and arbitrary initial distribution $\pi\_0$. This just amounts to repeated matrix-matrix or matrix-vector multiplication. The algorithmic content is mainly in the set-up (ensuring ergodicity, proving that an ergodic Markov Chain has a unique steady state distribution) and convergence analysis (dependence on the spectral gap of $M$).
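To make this concrete, here is a minimal numpy sketch of the power iteration; the damping factor (0.85) and the tiny three-page graph are my own illustrative choices, with the damping term playing the ergodicity-fixing role described above:

```
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=100):
    # Power iteration on the "Google matrix" built from a 0/1 adjacency matrix
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    M = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg[:, None], 1),   # follow a random outgoing link
                 1.0 / n)                                  # dangling pages jump uniformly
    G = damping * M + (1 - damping) / n                    # ensures ergodicity
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = pi @ G
        if np.abs(new - pi).sum() < tol:
            break
        pi = new
    return pi

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
print(pagerank(adj))   # steady-state distribution = PageRank scores
```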
Factor analysis has several rotation methods, such as varimax, quartimax, equamax, promax, oblimin, etc. I am unable to find any information that relates their names to their actual mathematical or statistical doings. Why is it called "equa-max" or "quarti-max"? In what way are the axes or matrices rotated so they have such name?
Unfortunately, most of them were invented in 1950s-1970s, so I cannot contact their authors. | This answer succeeds [this](https://stats.stackexchange.com/q/151653/3277) general question on rotations in factor analysis (please read it) and briefly describes a number of specific methods.
Rotations are performed iteratively and on every pair of factors (columns of the loading matrix). This is needed because the task to optimize (*max*imize or *min*imize) the objective criterion simultaneously for all the factors would be mathematically difficult. However, in the end the final rotation matrix $\bf Q$ is assembled so that you can reproduce the rotation yourself with it, multiplying the extracted loadings $\bf A$ by it, $\bf AQ=S$, getting the rotated factor structure matrix $\bf S$. The objective criterion is some property of the elements (loadings) of resultant matrix $\bf S$.
**Quartimax** orthogonal rotation seeks to *max*imize the *sum of all loadings raised to power 4* in $\bf S$. Hence its name ("quarti", four). It was shown that reaching this mathematical objective corresponds well enough to satisfying Thurstone's 3rd criterion of "simple structure", which reads: *for every pair of factors there are several (ideally >= m) variables with loadings near zero for one of the two factors and far from zero for the other*. In other words, there will be many large and many small loadings; and points on the loading plot drawn for a pair of rotated factors would, ideally, lie close to one of the two axes. Quartimax thus **minimizes the number of factors needed to explain a variable**: it "simplifies" the rows of the loading matrix. But quartimax often produces the so-called "general factor" (which most of the time is not desirable in FA of variables; it is more desirable, I believe, in the so-called Q-mode FA of respondents).
**Varimax** orthogonal rotation tries to *max*imize the *variance of the squared loadings in each factor* in $\bf S$. Hence its name (*var*iance). As a result, **each factor has only a few variables with large loadings on the factor**. Varimax directly "simplifies" columns of the loading matrix and by that it greatly facilitates the interpretability of factors. On the loading plot, points are spread wide along a factor axis and tend to polarize themselves into near-zero and far-from-zero. This property seems to satisfy a mixture of Thurstone's simple structure points to an extent. Varimax, however, is not safe from producing points lying far away from the axes, i.e. "complex" variables loaded high by more than one factor. Whether this is bad or ok depends on the field of study. Varimax performs well mostly in combination with the so-called *Kaiser's normalization* (equalizing communalities temporarily while rotating); it is advised to always use it with varimax (and recommended to use it with any other method, too). It is the most popular orthogonal rotation method, especially in psychometrics and social sciences.
**Equamax** (rarely, Equimax) orthogonal rotation can be seen as a method sharpening some properties of varimax. It was invented in attempts to further improve it. *Equa*lization refers to a special weighting which Saunders (1962) introduced into a working formula of the algorithm. Equamax *self-adjusts for the number* of factors being rotated. It tends to distribute variables (highly loaded) more uniformly between factors than varimax does and thus is even less prone to giving "general" factors. On the other hand, equamax wasn't conceived to give up the quartimax's aim to simplify rows; equamax is *rather a combination of varimax and quartimax* than their in-between. However, equamax is claimed to be considerably less "reliable" or "stable" than varimax or quartimax: for some data it can give disastrously bad solutions while for other data it gives perfectly interpretable factors with simple structure. One more method, similar to equamax and even more venturesome in the quest for simple structure, is called **parsimax** ("maximizing parsimony") (see Mulaik, 2010, for discussion).
I am sorry for stopping now and not reviewing the oblique methods - **oblimin** ("oblique" with "minimizing" a criterion) and **promax** (unrestricted *pro*crustes rotation after vari*max*). The oblique methods would probably require longer paragraphs to describe them, but I didn't plan any long answer today. Both methods are mentioned in Footnote 5 of [this answer](https://stats.stackexchange.com/a/151688/3277). I may refer you to Mulaik, *Foundations of factor analysis* (2010); Harman's classic old book *Modern factor analysis* (1976); and whatever pops up on the internet when you search.
See also [The difference between varimax and oblimin rotations in factor analysis](https://stats.stackexchange.com/q/113003/3277); [What does “varimax” mean in SPSS factor analysis?](https://stats.stackexchange.com/q/144966/3277)
Later addendum, with the history and the formulae, for the meticulous
-----------------------------------------------------------------
**Quartimax**
In the 1950s, several factor analysis experts tried to embody Thurstone’s qualitative features of a “simple structure” (See footnote 1 [here](https://stats.stackexchange.com/a/151688/3277)) into strict, quantitative criteria:
* Ferguson reasoned that the most parsimonious disposition of points (loadings) in the space of factors (axes) will be when, for most pairs of factors, each of the two axes pierces its own clot of points, thus maximizing its own coordinates and minimizing the coordinates onto the perpendicular axis. So he suggested to minimize the products of loadings for each variable in pairs of factors (i,j), summed across all variables: $\sum^p\sum\_{i,j;i<j}(a\_i a\_j)^2$ ($a$ is a loading, an element of a `p variables x m factors` loading matrix $\bf A$, in this case we mean, the final loadings - after a rotation).
* Carroll also thought of pairs of factors (i,j) and wanted to minimize $\sum\_{i,j;i<j}\sum^p(a\_i^2 a\_j^2)$. The idea was that for each pair of factors, the loadings should mostly be unequal-sized or both small, ideally a zero one against a nonzero or zero one.
* Neuhaus and Wrigley wanted to maximize the *variance* of the squared values of loadings in the whole $\bf A$, in order the loadings to split themselves into big ones and near-zero ones.
* Kaiser also chose variance, but variance of the squared loadings in rows of $\bf A$; and wanted to maximize the sum of these variances across the rows.
* Saunders offered to maximize the *kurtosis* in the doubled distribution of the loadings (i.e., every loading from $\bf A$ is taken twice - with positive and with negative sign, since the sign of a loading is basically arbitrary). High kurtosis in this distribution, symmetric around zero, implies maximization of the share (contribution) of extreme (big) loadings as well as of near-zero loadings, at the expense of the moderate-size loadings.
It then occurred (and it can be shown mathematically) that, in the milieu of orthogonal rotation, the optimization of all these five criteria is in fact equivalent from the “argmax” point of view, and they all can boil down to the maximization of
$Q= \sum^p\sum^m a^4$,
the overall sum of the 4th powers of the loadings. The criterion was therefore called **quartimax**. To repeat what was said at the beginning of the answer, quartimax minimizes the number of factors needed to explain a variable: it "simplifies" the rows of the loading matrix. But quartimax not infrequently produces the so-called "general factor".
**Varimax**
Having observed that quartimax simplifies well rows (variables) but is prone to “general factor”, Kaiser suggested to simplify $\bf A$’s columns (factors) instead. It was put above, that Kaiser’s idea for quartimax was to maximize the summed variance of squared loadings in rows of $\bf A$. Now he transposed the proposal and suggested to maximize the summed variance of squared loadings in columns of $\bf A$. That is, to maximize $\sum^m[\frac{1}{p} \sum^p (a^2)^2 - \frac{1}{p^2} (\sum^p a^2)^2]$ (the bracketed part is the formula of the variance of `p` squared values $a$), or, if multiplied by $p^2$, for convenience:
$V = \sum^m[p \sum^p (a^2)^2 - (\sum^p a^2)^2] = p \sum^m\sum^p (a^4) - \sum^m(\sum^p a^2)^2 = pQ - W$
where $V$ is the **varimax** criterion, $Q$ is the quartimax criterion, and $W$ is the sum of squared variances of the factors (after the rotation) [a factor's variance is the sum of its squared loadings].
[I’ve remarked that Kaiser obtained varimax by simply transposing the quartimax’s problem - to simplify columns in place of rows, - and you may switch places of `m` and `p` in the formula for $V$, to get the symmetric corresponding expression, $mQ – W^\*$, for quartimax. Since we are rotating columns, not rows, of the loading matrix, the quartimax’s term $W^\*$, the sum of squared communalities of the variables, does not change with rotation and therefore can be dropped from the objective statement; after which can also drop multiplier `m` - and stay with sole $Q$, what quartimax is. While in case of varimax, term $W$ changes with rotations and thus stays an important part of the formula, to be optimized along with it.]
*Kaiser normalization*. Kaiser felt dissatisfied with the fact that variables with large communalities dictate the rotation by the $V$ criterion much more than variables with small communalities do. So he introduced normalizing all communalities to unit before launching the procedure maximizing $V$ (and, of course, de-normalizing back after the performed rotation - communalities don't change in an orthogonal rotation). Per tradition, Kaiser normalization is often recommended - mainly with varimax, but sometimes along with quartimax and other rotation methods too, because, logically, it is not tied to varimax solely. Whether the trick is really beneficial is an unsettled issue. Some software does it by default, some only for varimax by default, and some don't set it as a default option at all. (At the end of this answer, I have a remark on the normalization.)
Such was varimax, which maximizes the variances of squared loadings in columns of $\bf A$ and therefore simplifies the factors - in exact opposition to quartimax, which does that in rows of $\bf A$, simplifying the variables. Kaiser demonstrated that, if the population factor structure is relatively sharp (i.e., variables tend to cluster together around different factors), varimax is more robust (stable) than quartimax to the removal of some variables from the rotation operation.
**Equamax** and **Parsimax**
Saunders decided to play up the fact that quartimax and varimax are actually one formula, $pQ - cW$, where $c=0$ (and then `p` traditionally is dropped) for quartimax and $c=1$ for varimax. He experimented with factor analytic data in search of a greater value for coefficient $c$ in order to accentuate the varimaxian, non-quartimaxian side of the criterion. He found that $c=m/2$ often produces factors that are more interpretable than after varimax or quartimax rotations. He called $pQ - \frac{m}{2}W$ **equamax**. The rationale for making $c$ dependent on `m` was that as the number of factors grows while `p` does not, the *a priori* expected proportion of variables to be loaded by any one factor diminishes; and to compensate for this, we should raise $c$.
In a similar pursuit of further “bettering” the generic criterion, Crawford arrived at yet another coefficient value, $c = p(m−1)/(p+m−2)$, depending both on `m` and `p`. This version of the criterion was named **parsimax**.
It is possible further to set $c=p$, yielding criterion **facpars**, “factor parsimony”, which, as I’m aware, is very seldom used.
(I think) It is still an open question whether equamax or parsimax are really better than varimax, and if so, in what situations. Their dependence on the parameters `m` (and `p`) makes them self-tuning (for advocates) or capricious (for critics). From a purely mathematical or general-data point of view, raising $c$ simply means pushing factors in the direction of more equal final variances - and not at all making the criterion "more varimax than varimax" or "balanced between varimax and quartimax" w.r.t. their objective goals, for both varimax and quartimax optimize well to the limit what they were meant to optimize.
The considered generic criterion of the form $pQ - cW$ (where Q is quartimax, $\sum^p\sum^m a^4$, and W is the sum of squared factor variances, $\sum^m(\sum^p a^2)^2$) is known as **orthomax**. Quartimax, varimax, equamax, parsimax, and facpars are its particular versions. In general, coefficient $c$ can take on any value. When close to `+infinity`, it produces factors of completely equal variances (so use that, if your aim is such). When close to `-infinity`, you get loadings equal to what you get if you rotate your loading matrix into its principal components by means of PCA (without centering the columns). So, the value of $c$ is the parameter stretching the dimension "great general factor vs all factors of equal strength".
In their important paper of 1970, Crawford & Ferguson extend the varying $c$ criterion over to the case of nonorthogonal factor rotations (calling that more general coefficient kappa).
Literature
* Harman, H.H. Modern factor analysis. 1976.
* Mulaik, S.A. Foundations of factor analysis. 2010.
* Clarkson, D.B. Quartic rotation criteria and algorithms // Psychometrika, 1988, 53, 2, p. 251-259.
* Crawford, C.B., Ferguson, G.A. A general rotation criterion and its use in orthogonal rotation // Psychometrika, 1970, 35, 3, p. 321-332.
Comparing main characteristics of the criteria
----------------------------------------------
I’ve been generating `p variables x m factors` loading matrices as values from uniform distribution (so yes, that was not a sharp, clean factor structure), 50 matrices for each combination of `p` and `m/p` proportion, and rotating each loading matrix by quartimax (Q), varimax (V), equamax (E), parsimax (P), and facpars (F), all methods accompanied by Kaiser normalization. Quartimax (Q0) and varimax (V0) were also tried without Kaiser normalization. Comparisons between the criteria on three characteristics of the rotated matrix are displayed below (for each matrix generated, the 7 values of the post-rotational characteristic were rescaled into the 0-1 range; then means across the 50 simulations and 95% CI are plotted).
**Fig.1. Comparing the sum of variances of squared loadings in rows (maximizing this is the quartimax’s prerogative):**
[![enter image description here](https://i.stack.imgur.com/D0Hlm.png)](https://i.stack.imgur.com/D0Hlm.png)
Comment: The superiority of quartimax over the other criteria tends to grow as `p` increases or as `m/p` increases. Varimax is second best most of the time. Equamax and parsimax are quite similar.
**Fig.2. Comparing the sum of variances of squared loadings in columns (maximizing this is the varimax’s prerogative):**
[![enter image description here](https://i.stack.imgur.com/5E9fO.png)](https://i.stack.imgur.com/5E9fO.png)
Comment: The superiority of varimax over the other criteria tends to grow as `p` increases or as `m/p` increases. Quartimax's tendency is the opposite: as the parameters increase it loses ground. In the bottom-right part, quartimax is the worst, that is, with large-scale factor analysis it fails to do the "varimaxian" job. Equamax and parsimax are quite similar.
**Fig.3. Comparing inequality of factor variances (this is driven by coefficient $c$); the variance used as the measure of “inequality”:**
[![enter image description here](https://i.stack.imgur.com/ThsnJ.png)](https://i.stack.imgur.com/ThsnJ.png)
Comment: Yes, with growing $c$, that is, along the line Q V E P F, the inequality of factor variances falls. Q is the leader in inequality, which tells of its propensity for a "general factor", and its gap with the other criteria widens as `p` grows or `m/p` grows.
**Comparing inequality of factor variances (this is driven by coefficient $c$); proportion “sum of absolute loadings of the strongest factor / average of such sums across the rest m-1 factors” was used as the measure of “inequality”:**
This is another and more direct test for the presence of “general factor”. The configuration of results was almost the same as on the previous picture **Fig.3**, so I’m not showing a picture.
Disclaimer. These tries, on which the above pics are based, were done on loading matrices with random nonsharp factor structures, i.e. there were no specially preset clear clusters of variables or other specific structure among the loadings.
**Kaiser normalization**. From the above **Fig.1-2** one can learn that versions of quartimax and varimax without the normalization perform the two tasks (the maximizations) markedly better than when accompanied by the normalization. At the same time, absence of the normalization is a little bit more prone to “general factor” (**Fig.3**).
The question whether Kaiser normalization should be used (and when) still seems open to me. Perhaps one should try both, with and without the normalization, and see where the applied factor *interpretation* is more satisfying. When we don't know what to choose on mathematical grounds, it's time to resort to "philosophical" considerations, which, as usual, come as contrasted positions. I could imagine two positions:
* Contra normalization. A variable with small communality (high uniqueness) is not much help in any rotation. It contains only traces of the totality of the `m` factors, so it lacks a chance to get a large loading on any of them. But we interpret factors mostly by large loadings, and the smaller the loading, the harder it is to see the essence of the factor in the variable. It would even be justified to exclude a variable with small communality from the rotation. Kaiser normalization is counter-directed to such a motive.
* Pro normalization. Communality (non-uniqueness) of a variable is the amount of its inclination to the space of `m` factors from the outside (i.e., it is the magnitude of its projection into that space). Rotation of axes inside that space is not related to that inclination. The rotation - settling the question of which of the `m` factors will and which will not load the variable - concerns a variable with any size of communality equally, because the initial suspense of that "internal" decision is equally sharp for all variables, whatever their "external" inclination. So, as long as we choose to speak of the variables and not their projections inside, there's no reason to give them weights depending on their inclinations in the act of rotation. And to manage to discern the essence of a factor in a variable under any size of loading is a desideratum (and theoretically a must) for an interpreter of factors.
Orthogonal analytic rotations (Orthomax) algorithm pseudocode
-------------------------------------------------------------
```
Shorthand notation:
* matrix multiplication (or simple multiplication, for scalars)
&* elementwise (Hadamard) multiplication
^ exponentiation of elements
sqrt(M) square roots of elements in matrix M
rsum(M) row sums of elements in matrix M
csum(M) column sums of elements in matrix M
rssq(M) row sums of squares in matrix M, = rsum(M^2)
cssq(M) column sums of squares in matrix M, = csum(M^2)
msum(M) sum of elements in matrix M
make(nr,nc,val) create nr x nc matrix populated with value val
A is p x m loading matrix with m orthogonal factors, p variables
If Kaiser normalization is requested:
h = sqrt(rssq(A)). /*sqrt(communalities), column vector
A = A/(h*make(1,m,1)). /*Bring all communalities to unit
R is the orthogonal rotation matrix to accrue:
Initialize it as m x m identity matrix
Compute the initial value of the criterion Crit;
the coefficient c is: 0 for Quartimax, 1 for Varimax, m/2 for Equamax,
p(m-1)/(p+m-2) for Parsimax, p for Facpars; or you may choose arbitrary c:
Q = msum(A^4)
If “Quartimax”
Crit = Q
Else
W = rssq(cssq(A))
Crit = p*Q – c*W
Begin iterations
For each pair of factors (columns of A) i, j (i<j) do:
ai = A(:,i) /*Copy out the
aj = A(:,j) /*two factors
u = ai^2 – aj^2
v = 2 * ai &* aj
@d = 2 * csum(u &* v)
@c = csum(u^2 – v^2)
@a = csum(u)
@b = csum(v)
Compute the angle Phi of rotation of the two factors in their space
(coefficient c as defined above):
num = @d – c * 2*@a*@b/p
den = @c – c * (@a^2 - @b^2)/p
Phi4 = arctan(num/den) /*4Phi (in radians)
If den>0 /*4Phi is in the 1st or the 4th quadrant
Phi = Phi4/4
Else if num>0 /*4Phi is in the 2nd quadrant (pi is the pi value)
Phi = (pi + Phi4)/4
Else /*4Phi is in the 3rd quadrant
Phi = (Phi4 – pi)/4
Perform the rotation of the pair (rotate if Phi is not negligible):
@sin = sin(Phi)
@cos = cos(Phi)
r_ij = {@cos,-@sin;@sin,@cos} /*The 2 x 2 rotation matrix
A(:,{i,j}) = {ai,aj} * r_ij /*Rotate factors (columns) i and j in A
R(:,{i,j}) = R(:,{i,j}) * r_ij /*Update also the columns of the being accrued R
Go to consider next pair of factors i, j, again copying them out, etc.
When all pairs are through, compute the criterion:
Crit = … (see as defined above)
End iterations if Crit has stopped growing any much (say, increase not greater than
0.0001 versus the previous iteration), or the stock of iterations (say, 50) exhausted.
If Kaiser normalization was requested:
A = A &* (h*make(1,m,1)) /*De-normalize
Ready. A has been rotated. A(input)*R = A(output)
Optional post-actions, for convenience:
1) Reorder factors by decreasing their variances (i.e., their cssq(A)).
2) Switch sign of the loadings so that positive loadings prevail in each factor.
Quartimax and Varimax are always positive values; others can be negative.
All the criteria grow on iterations.
``` |
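For comparison, here is a compact numpy sketch of the same orthomax family using the SVD-based update instead of the pairwise scheme above; the random loading matrix is only a placeholder, and `gamma` plays the role of the coefficient $c$ in the text ($0$ = quartimax, $1$ = varimax, $m/2$ = equamax, $p(m-1)/(p+m-2)$ = parsimax):

```
import numpy as np

def orthomax(A, gamma=1.0, max_iter=100, tol=1e-8):
    # SVD-based orthomax rotation of a p x m loading matrix A
    p, m = A.shape
    R = np.eye(m)
    d_old = 0.0
    for _ in range(max_iter):
        L = A @ R
        G = A.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        u, s, vt = np.linalg.svd(G)
        R = u @ vt                     # closest orthogonal matrix to the gradient
        d = s.sum()
        if d_old != 0 and d < d_old * (1 + tol):
            break
        d_old = d
    return A @ R, R

A = np.random.default_rng(0).normal(size=(10, 3))   # placeholder loadings
h = np.sqrt((A ** 2).sum(axis=1))                   # sqrt(communalities)
S, R = orthomax(A / h[:, None], gamma=1.0)          # Kaiser-normalized varimax
S = S * h[:, None]                                  # de-normalize
print(S)
```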
When performing a linear regression on my dataset, the Durbin-Watson statistic was very low (0.276). I found a tutorial online that suggested performing a Prais-Winsten regression to correct for the autocorrelation. The tutorial came with screenshots of SPSS on how to perform the analysis. The screenshots, however, are from SPSS 14. I have SPSS 16, and I am not able to find the same test anywhere in my menu structure.
Please see the tutorial's screenshots (above) and mine (below) here: ![http://i.imgur.com/gN0jG.png](https://i.stack.imgur.com/4ygJ2.png)
Can someone direct me to where the Prais-Winsten autoregression can be found in SPSS 16? | Yes, you can say that the algorithm is converging because it is increasing the objective value on average.
The most tricky part of stochastic gradient descent (SGD) is the 'learning rate'. Common choices are 1/t, 1/sqrt(t), D/(G\*t) where t is the iteration number, D is the max diameter of your feasible set, and G is the infinity-norm of the current gradient. You should experiment with these. You can even split your data into two for cross validation of the learning rate.
Another thing you can try is mini-batch SGD. In this variant, instead of using a single data point to compute the gradient, you use a batch of points like 10,20. This way, the objective-versus-trial plot (the first plot) will be smoother and will look like the second plot you have. |
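A small numpy sketch of mini-batch SGD with the 1/t and 1/sqrt(t) schedules mentioned above; the least-squares objective, the batch size of 20 and the other defaults are arbitrary choices for illustration:

```
import numpy as np

def minibatch_sgd(X, y, lr0=0.1, schedule="invsqrt", batch_size=20, epochs=50,
                  rng=np.random.default_rng(0)):
    # Least-squares objective, mini-batch gradients, decaying step size
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            t += 1
            b = idx[start:start + batch_size]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            lr = lr0 / t if schedule == "inv" else lr0 / np.sqrt(t)
            w -= lr * grad
    return w

X = np.random.default_rng(1).normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5])
print(minibatch_sgd(X, y))   # should approach (1, -2, 0.5)
```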
I am seeking some directions for a proper path to research the solve for this problem:
My company made all our employees take a "StrengthFinders" test, which results in every employee being assigned their top five (ordered) "strengths" from a possible list of 34 strengths. We have 500 employees. I am supposed to identify all the employees that match each other for the same 5 strengths (order not important), and also for employees that match each other for 4 out of 5 strengths (again, order doesn't matter). I could potentially have multiple groups matching on different sets of strengths, e.g.:
Group 1: Billy, Sally, Michael have strengths A, H, I, K, Z
Group 2: Bobby and Suzy have strengths A, B, L, S, W
For the case where strengths match for 4 out of 5, I might have the same people from Group 1 above, plus Joe, whose strengths are A, H, M, K, Z; and
Seth, whose strengths are A, H, G, K, Z. I would expect more groupings for the case of 4 out of 5 than the 5 out of 5 case.
The strengths are categorical in nature, so what I've read so far has largely revolved around clustering of continuous numerical variables.
I am looking for an algorithmic way to identify clusters and the members of those clusters for this situation. I think I could do this brute force by repeatedly sorting data in Excel, but I'm confident that a better way must exist, and I ask you to point me in that direction. Thank you. | You have just 500 data points...
Excel of course is the worst possible tool though.
Anyway, build a dictionary. Put everybody in there 6 times: once with all five strengths, and 5 times with one strength omitted. Then you can easily identify the largest groups, and you can also perform various completion operations easily: if you have identified a group with strengths A B C D E, you can add all that have ABCD etc. using the dictionary.
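A minimal Python sketch of this dictionary idea (the employee data is a made-up placeholder):

```
from collections import defaultdict
from itertools import combinations

employees = {                                   # name -> set of 5 strengths (placeholder data)
    "Billy": {"A", "H", "I", "K", "Z"},
    "Sally": {"A", "H", "I", "K", "Z"},
    "Joe":   {"A", "H", "M", "K", "Z"},
    "Seth":  {"A", "H", "G", "K", "Z"},
}

groups = defaultdict(set)
for name, strengths in employees.items():
    groups[frozenset(strengths)].add(name)              # key for the full 5-of-5 match
    for subset in combinations(sorted(strengths), 4):
        groups[frozenset(subset)].add(name)             # keys for each 4-of-5 match

exact = {k: v for k, v in groups.items() if len(k) == 5 and len(v) > 1}
four_of_five = {k: v for k, v in groups.items() if len(k) == 4 and len(v) > 1}
print(exact)
print(four_of_five)
```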
Is there such a formula? Given a set of data for which the mean, variance, skewness and kurtosis is known, or can be measured, is there a single formula which can be used to calculate the probability density of a value assumed to come from the aforementioned data? | This sounds like a ['moment-matching' approach](http://www.johndcook.com/blog/2010/09/20/skewness-andkurtosis/) to fitting a distribution to data. It is generally regarded as not a great idea (the title of John Cook's blog post is 'a statistical dead end'). |
After months of study I still do not get it. I apologize (see: [Estimating values knowing their Pearson's r and their means and standard deviations](https://stats.stackexchange.com/questions/355221/estimating-values-knowing-their-pearsons-r-and-their-means-and-standard-deviati))
Imagine, for example, I have two bivariate normal populations:
* A: mean 100, standard deviation 10
* B: mean 100, standard deviation 10
* Correlation: 0.9
**How can I calculate the probability of obtaining a value of "95" or greater sampling from both A and B (that is, to obtain two "95"s or greater at the same time)?**
From a simulation I performed in R, the probability might be around `62.9%`
**Edit:**
Thank you, user2974951. The probability might be around `69.2 %`
**Edit:**
Thank you Xi'an. Now it is "Correlation: 0.9". | I would recommend not performing feature selection with SVMs if you are interested in generalisation performance. The SVM approximately implements a bound on generalisation performance that is independent of the number of features. However, as soon as you try to optimise the kernel (or equivalently select features), then all of the theory underpinning the bound is violated and the bound no longer holds. It is easy to over-fit feature selection criteria, and this can make performance much worse. See, for example, my answer [here](https://stats.stackexchange.com/questions/27750/feature-selection-and-cross-validation/27751#27751).
If you are interested in finding the most important features as an end in itself (rather than to improve performance) then I would recommend using a model that has feature selection built in, such as the LASSO or LARS.
I can strongly recommend the paper of [Guyon and Elisseeff](https://www.jmlr.org/papers/volume3/guyon03a/guyon03a.pdf) for guidance on the use of feature selection (and some of the potential pitfalls). |
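As an illustration of the built-in-selection suggestion above, here is a small scikit-learn sketch using the classification analogue of the LASSO (L1-penalised logistic regression); the synthetic dataset and the value of `C` are arbitrary:

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=30, n_informative=5, random_state=0)
l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(l1.coef_.ravel() != 0)   # features kept by the L1 penalty
print(selected)
```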
What is the big-O of the function $2\log(\log(n)) + 3n\log(n) + 5\log(n)$?
Is it just $O(n\log(n))$ for the whole function? I'm not sure how to represent $2\log(\log(n))$. | Please note that "the big-O of the function" [isn't a correct formulation](https://cs.stackexchange.com/questions/53044/what-is-the-big-theta-of-log-n2-9-log-n7/53085#53085). We assume $\log$ is the binary logarithm $\log\_2$. But actually the proof can be extended to any base.
---
We have
$$\log(x)\lt x, \forall x>0$$
and, if we plug in $\log(n)$ for $x$
$$\log(\log(n))\lt \log(n).\tag 1$$
Recall the [definition](https://en.wikipedia.org/wiki/Big_O_notation#Formal_definition)
of $f(n)=O(g(n))$:
$$|f(n)|\le M |g(n)|,\forall n\ge n\_0$$
for appropriate $M$ and $n\_0$.
So if we choose
$f(n)=\log(\log(n))$, $g(n)=\log(n)$, $M=1$, $n\_0=2$
we see that $(1)$ is
$$\log(\log(n))=O(\log(n))$$
and of course
$$\log(\log(n))=O(n\log(n)).$$
So all three functions in your expression are $O(n\log(n))$, and therefore every linear combination of them
$$a\log(\log(n)) + b \, n\log(n) + c\log(n), \quad a,b,c \in \mathrm R$$
is $O(n\log(n))$. |
I was reading about learning curve and [in a page](http://www.astroml.org/sklearn_tutorial/practical.html#cross-validation-and-testing), this curve is shown:
![enter image description here](https://i.stack.imgur.com/UYUfJ.png)
But I think something is wrong with it. If an estimator tunes its parameters on the validation set, then the validation error should be lower than the training error, because we have tuned the estimator's hyperparameters to achieve the best result on the validation error.
Why is training error lower than validation error in this figure? | The variable X is distributed according to the Normal distribution with mean vector $\mu$ and standard deviation $\sigma$. |
I am playing a little with convnets. Specifically, I am using the kaggle cats-vs-dogs dataset which consists on 25000 images labeled as either cat or dog (12500 each).
I've managed to achieve around 85% classification accuracy on my test set, however I set a goal of achieving 90% accuracy.
My main problem is overfitting. Somehow it always ends up happening (normally after epoch 8-10). The architecture of my network is loosely inspired by VGG-16, more specifically my images are resized to $128x128x3$, and then I run:
```
Convolution 1 128x128x32 (kernel size is 3, strides is 1)
Convolution 2 128x128x32 (kernel size is 3, strides is 1)
Max pool 1 64x64x32 (kernel size is 2, strides is 2)
Convolution 3 64x64x64 (kernel size is 3, strides is 1)
Convolution 4 64x64x64 (kernel size is 3, strides is 1)
Max pool 2 32x32x64 (kernel size is 2, strides is 2)
Convolution 5 16x16x128 (kernel size is 3, strides is 1)
Convolution 6 16x16x128 (kernel size is 3, strides is 1)
Max pool 3 8x8x128 (kernel size is 2, strides is 2)
Convolution 7 8x8x256 (kernel size is 3, strides is 1)
Max pool 4 4x4x256 (kernel size is 2, strides is 2)
Convolution 8 4x4x512 (kernel size is 3, strides is 1)
Fully connected layer 1024 (dropout 0.5)
Fully connected layer 1024 (dropout 0.5)
```
All the layers except the last one have relus as activation functions.
Note that I have tried different combinations of convolutions (I started with simpler convolutions).
Also, I have augmented the dataset by mirroring the images, so that in total I have 50000 images.
Also, I am normalizing the images using min max normalization, where X is the image
$X = \frac{X - 0}{255 - 0}$
The code is written in tensorflow and the batch sizes are 128.
The mini-batches of training data end up overfitting and having an accuracy of 100% while the validation data seems to stop learning at around 84-85%.
I have also tried to increase/decrease the dropout rate.
The optimizer being used is AdamOptimizer with a learning rate of 0.0001
At the moment I have been playing with this problem for the last 3 weeks and 85% seems to have set a barrier in front of me.
For the record, I know I could use transfer learning to achieve much higher results, but I am interested in building this network as a self-learning experience.
**Update:**
I am running the SAME network with a different batch size, in this case I am using a much smaller batch size (16 instead of 128) so far I am achieving 87.5% accuracy (instead of 85%). That said, the network ends up overfitting anyway. Still I do not understand how a dropout of 50% of the units is not helping... obviously I am doing something wrong here. Any ideas?
**Update 2:**
Seems like the problem had to do with the batch size, as with a smaller size (16 instead of 128) I am achieving now 92.8% accuracy on my test set, with the smaller batch size the network still overfits (the mini batches end up with an accuracy of 100%) however, the loss (error) keeps decreasing and it is in general more stable. The cons are a MUCH slower running time, but it is totally worth the wait. | Ok, so after a lot of experimentation I have managed to get some results/insights.
**In the first place, everything else being equal, smaller training batches help a lot to increase the general performance** of the network; as a downside, the training process is much slower.
Second point: data is important. Nothing new here, but as I learned while fighting this problem, more data always seems to help a bit.
Third point: dropout is useful in large networks with lots of data and lots of iterations. In my network I applied dropout to the final fully connected layers only; the convolution layers did not get dropout.
Fourth point (and this is something I am learning over and over): neural networks take A LOT of time to train, even on good GPUs (I trained this network on floydhub, which uses quite expensive NVIDIA cards), so **PATIENCE is key**.
Final conclusion: batch sizes are more important than one might think; apparently it is easier to hit a local minimum when batches are larger.
The code I wrote is available as a [python notebook](https://github.com/moriano/loco-learning/blob/master/cats-vs-dogs/cats-vs-dogs.ipynb), I think it is decently documented. |
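To make these conclusions concrete, here is a condensed Keras sketch, much shallower than the network in the question, just to show dropout on the dense layers only, the Adam learning rate, and the small batch size; `train_images`/`train_labels` are placeholders:

```
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 3)):
    m = models.Sequential([
        layers.Conv2D(32, 3, padding="same", activation="relu", input_shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(1024, activation="relu"),
        layers.Dropout(0.5),                      # dropout on the dense layers only
        layers.Dense(1, activation="sigmoid"),
    ])
    m.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
    return m

model = build_model()
# model.fit(train_images, train_labels, batch_size=16, validation_split=0.1, epochs=30)
```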
I am currently doing a Ph.D. in Theoretical Computer Science, and any research paper I encountered so far has the author's names in alphabetical order of their surnames.
For example consider the most fundamental book on algorithms: "Introduction to algorithms" by Thomas Cormen, Charles Leiserson, Ronald Rivest, and Clifford Stein.
Also consider the book: "Parameterized Algorithms" by Marek Cygan, Fedor V. Fomin,
Lukasz Kowalik, Daniel Lokshtanov,
Dániel Marx, Marcin Pilipczuk,
Michal Pilipczuk and Saket Saurabh
Basically take any paper in the TCS domain, all follow this pattern. This pattern is not followed in other domains where the authorship is decided based on the individual contribution of the authors. In other words, a person having the most contribution to the paper is given the first authorship. Likewise, a person with less contribution would have his/her name appear later in the list of authors.
I consider this norm fundamentally flawed. Can somebody provide a good reason as to why such a norm is followed in the TCS domain? | The American Mathematical Society has released a [statement](http://www.ams.org/comm-prof/CultureStatement04.pdf) (pdf link) about the commonly accepted practice of listing authors in alphabetical order in mathematics. Their reasoning applies, in my opinion, to TCS as well.
This kind of theoretical research is often done in small groups of 2-5 people. In close collaboration it can be difficult to determine who came up with which idea and/or whose ideas are more important for the main results. Using the alphabetical order lets us circumvent arguments about it and default to "everyone contributed equally", which is often (but not always) more or less the case anyway. It's also easy to explain to non-mathematicians by e.g. showing the above statement. When you apply to math/TCS positions, the hiring committee will be aware of this convention and won't hold your last name against you. My impression is that researchers in fields close to math/TCS tend to know about it as well.
In biology and other fields where research is based on experiments, papers are written by large groups whose members have distinct roles. A professor has an idea and obtains funding, one researcher designs the experiments, some lab assistant runs them, someone else analyzes the results, others write the paper. In this context it makes more sense to differentiate the authors by their contribution. |
I am trying to understand the key differences between GBM and XGBOOST. I tried to google it, but could not find any good answers explaining the differences between the two algorithms and why xgboost almost always performs better than GBM. What makes XGBOOST so fast? | Quote from the author of `xgboost`:
>
> Both xgboost and gbm follows the principle of gradient boosting. There are however, the difference in modeling details. Specifically, xgboost used a more regularized model formalization to control over-fitting, which gives it better performance.
>
>
> We have updated a comprehensive tutorial on introduction to the model, which you might want to take a look at. [Introduction to Boosted Trees](https://xgboost.readthedocs.io/en/latest/tutorials/model.html)
>
>
> The name xgboost, though, actually refers to the engineering goal to push the limit of computations resources for boosted tree algorithms. Which is the reason why many people use xgboost. For model, it might be more suitable to be called as regularized gradient boosting.
>
>
>
Edit: There's a detailed [guide](https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/) of xgboost which shows more differences.
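To make the "more regularized model formalization" concrete, here is a small sketch using xgboost's scikit-learn wrapper on synthetic data; the regularisation knobs are the `reg_lambda`, `reg_alpha` and `gamma` arguments, and all values below are arbitrary:

```
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(
    n_estimators=200,
    learning_rate=0.1,
    max_depth=4,
    reg_lambda=1.0,   # L2 penalty on leaf weights
    reg_alpha=0.0,    # L1 penalty on leaf weights
    gamma=0.0,        # minimum loss reduction required to make a split
)
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))
```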
References
----------
<https://www.quora.com/What-is-the-difference-between-the-R-gbm-gradient-boosting-machine-and-xgboost-extreme-gradient-boosting>
<https://xgboost.readthedocs.io/en/latest/tutorials/model.html> |
Given a data matrix $X$ of say 1000000 observations $\times$ 100 features,
is there a fast way to build a tridiagonal approximation
$A \approx cov(X)$ ?
Then one could factor $A = L L^T$,
$L$ all 0 except $L\_{i\ i-1}$ and $L\_{i i}$,
and do fast decorrelation (whitening) by solving
$L x = x\_{white}$.
(By "fast" I mean $O( size\ X )$.)
(Added, trying to clarify): I'm looking for a quick and dirty whitener
which is faster than full $cov(X)$ but better than diagonal.
Say that $X$ is $N$ data points $\times Nf$ features, e.g. 1000000$\times$ 100,
with features 0-mean.
1) build $Fullcov = X^T X$, Cholesky factor it as $L L^T$,
solve $L x = x\_{white}$ to whiten new $x$ s.
This is quadratic in the number of features.
2) diagonal: $x\_{white} = x / \sigma(x)$
ignores cross-correlations completely.
One *could* get a tridiagonal matrix from $Fullcov$
just by zeroing all entries outside the tridiagonal,
or not accumulating them in the first place.
And here I start sinking: there must be a better approximation,
perhaps hierarchical, block diagonal → tridiagonal ?
---
(Added 11 May): Let me split the question in two:
1) is there a fast approximate $cov(X)$ ?
No (whuber), one must look at all ${N \choose 2}$ pairs
(or have structure, or sample).
2) given a $cov(X)$, how fast can one whiten new $x$ s ?
Well, factoring $cov = L L^T$, $L$ lower triangular, once,
then solving $L x = x\_{white}$
is pretty fast; scipy.linalg.solve\_triangular, for example, uses Lapack.
I was looking for a yet faster whiten(), still looking. | If @EEE's concern can be addressed and you proceed with an hypothesis test, then rather than logistic regression I'd recommend a chi-square test. For a person fairly new to statistical testing, it'll be dramatically easier to conduct, interpret, and explain to an audience. Plus I think it'll give you just about as much information. |
I was reading up on something called the PRAM model without bit operations.
What exactly does it mean that this PRAM model cannot do bit operations?
I can't find a straightforward definition anywhere.
Surely the different processors will still be able to do all sorts of stuff with bits. The reason why I am asking is that the max-flow problem (which is in P) cannot be solved in this model using a polynomial number of processors. So in fact it is a non-trivial implication of the conjecture
that $P \neq NC$. So it is further "evidence" of the $P \neq NC$ conjecture. | Without further information, it's hard to tell exactly. However, to the best of my knowledge, bit operations may be intended as follows. Consider the problem of finding the maximum element in an array of $n$ real numbers. This problem can be solved by an EREW algorithm in $\Theta(\lg n)$ time; moreover, no CREW algorithm can do asymptotically better.
However, this problem can be solved in $O(1)$ time with $n^2$ processors using a common-CRCW algorithm (in this model, when several processors write to the same location, they all write the same value). The key point here is that a CRCW PRAM is capable of performing a boolean AND of $n$ variables in $O(1)$ time with $n$ processors (which allows overcoming the $\Omega(\lg n)$ lower bound). I think that bit operations may refer to this powerful AND capability of a common-CRCW.
Additional information regarding the FAST-MAX algorithm can be found in the first edition of CLRS Introduction to Algorithms, chapter 30 (ALGORITHMS FOR PARALLEL COMPUTERS). |
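A sequential numpy sketch of the FAST-MAX idea (the row-wise boolean AND stands in for the concurrent writes that a common-CRCW PRAM would perform in $O(1)$ time with $n^2$ processors):

```
import numpy as np

def fast_max(A):
    # Processor (i, j) "knocks out" element i whenever A[i] < A[j];
    # the element never knocked out is the maximum.
    ge = A[:, None] >= A[None, :]     # all n^2 comparisons "in parallel"
    is_max = ge.all(axis=1)           # boolean AND across each row
    return A[np.argmax(is_max)]

print(fast_max(np.array([3, 9, 2, 9, 5])))   # 9
```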
It's common short-hand in neural networks literature to refer to *categorical* cross-entropy loss as simply "cross-entropy." However, this terminology is ambiguous because different probability distributions have different cross-entropy loss functions.
So, in general, how does one move from an assumed probability distribution for the target variable to defining a cross-entropy loss for your network? What does the function require as inputs? (For example, the categorical cross-entropy function for one-hot targets requires a one-hot binary vector and a probability vector as inputs.)
A good answer will discuss the general principles involved, as well as worked examples for
* categorical cross-entropy loss for one-hot targets
* Gaussian-distributed target distribution and how this reduces to the usual MSE loss
* A less common example such as a gamma distributed target, or a heavy-tailed target
* Explain the relationship between minimizing cross entropy and maximizing log-likelihood. | Suppose that we are trying to infer the parametric distribution $p(y|\Theta(X))$, where $\Theta(X)$ is a vector output inverse [link function](https://en.wikipedia.org/wiki/Generalized_linear_model) with $[\theta\_1,\theta\_2,...,\theta\_M]$.
We have a neural network at hand with some topology we decided. The number of outputs at the output layer matches the number of parameters we would like to infer (it may be less if we don't care about all the parameters, as we will see in the examples below).
[![enter image description here](https://i.stack.imgur.com/jynZU.jpg)](https://i.stack.imgur.com/jynZU.jpg)
In the hidden layers we may use whatever activation function we like. What's crucial are the output activation functions for each parameter as they have to be compatible with the support of the parameters.
[![enter image description here](https://i.stack.imgur.com/bXQcA.png)](https://i.stack.imgur.com/bXQcA.png)
Some example correspondence:
* Linear activation: $\mu$, mean of Gaussian distribution
* Logistic activation: $\mu$, mean of Bernoulli distribution
* Softplus activation: $\sigma$, standard deviation of Gaussian distribution, shape parameters of Gamma distribution
Definition of cross entropy:
$$H(p,q) = -E\_p[\log q(y)] = -\int p(y) \log q(y) dy$$
where $p$ is ideal truth, and $q$ is our model.
Empirical estimate:
$$H(p,q) \approx -\frac{1}{N}\sum\_{i=1}^N \log q(y\_i)$$
where $N$ is number of independent data points coming from $p$.
Version for conditional distribution:
$$H(p,q) \approx -\frac{1}{N}\sum\_{i=1}^N \log q(y\_i|\Theta(X\_i))$$
Now suppose that the network output is $\Theta(W,X\_i)$ for a given input vector $X\_i$ and all network weights $W$, then the training procedure for expected cross entropy is:
$$W\_{opt} = \arg \min\_W -\frac{1}{N}\sum\_{i=1}^N \log q(y\_i|\Theta(W,X\_i))$$
which is [equivalent to Maximum Likelihood Estimation](https://en.wikipedia.org/wiki/Cross_entropy#Relation_to_log-likelihood) of the network parameters.
Some examples:
* Regression: [Gaussian distribution](https://en.wikipedia.org/wiki/Normal_distribution) with heteroscedasticity
$$\mu = \theta\_1 : \text{linear activation}$$
$$\sigma = \theta\_2: \text{softplus activation\*}$$
$$\text{loss} = -\frac{1}{N}\sum\_{i=1}^N \log [\frac{1} {\theta\_2(W,X\_i)\sqrt{2\pi}}e^{-\frac{(y\_i-\theta\_1(W,X\_i))^2}{2\theta\_2(W,X\_i)^2}}]$$
under homoscedasticity we don't need $\theta\_2$ as it doesn't affect the optimization and the expression simplifies to (after we throw away irrelevant constants):
$$\text{loss} = \frac{1}{N}\sum\_{i=1}^N (y\_i-\theta\_1(W,X\_i))^2$$
* Binary classification: [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution)
$$\mu = \theta\_1 : \text{logistic activation}$$
$$\text{loss} = -\frac{1}{N}\sum\_{i=1}^N \log [\theta\_1(W,X\_i)^{y\_i}(1-\theta\_1(W,X\_i))^{(1-y\_i)}]$$
$$= -\frac{1}{N}\sum\_{i=1}^N y\_i\log [\theta\_1(W,X\_i)] + (1-y\_i)\log [1-\theta\_1(W,X\_i)]$$
with $y\_i \in \{0,1\}$.
* Regression: [Gamma](https://en.wikipedia.org/wiki/Gamma_distribution) response
$$\alpha \text{(shape)} = \theta\_1 : \text{softplus activation\*}$$
$$\beta \text{(rate)} = \theta\_2: \text{softplus activation\*}$$
$$\text{loss} = -\frac{1}{N}\sum\_{i=1}^N \log [\frac{\theta\_2(W,X\_i)^{\theta\_1(W,X\_i)}}{\Gamma(\theta\_1(W,X\_i))} y\_i^{\theta\_1(W,X\_i)-1}e^{-\theta\_2(W,X\_i)y\_i}]$$
* Multiclass classification: [Categorical distribution](https://en.wikipedia.org/wiki/Categorical_distribution)
Some constraints cannot be handled directly by plain vanilla neural network toolboxes (but these days they seem to do very advanced tricks). This is one of those cases:
$$\mu\_1 = \theta\_1 : \text{logistic activation}$$
$$\mu\_2 = \theta\_2 : \text{logistic activation}$$
...
$$\mu\_K = \theta\_K : \text{logistic activation}$$
We have a constraint $\sum \theta\_i = 1$. So we fix it before we plug them into the distribution:
$$\theta\_i' = \frac{\theta\_i}{\sum\_{j=1}^K \theta\_j}$$
$$\text{loss} = -\frac{1}{N}\sum\_{i=1}^N \log [\Pi\_{j=1}^K\theta\_j'(W,X\_i)^{y\_{i,j}}]$$
Note that $y$ is a vector quantity in this case. Another approach is the [Softmax](https://en.wikipedia.org/wiki/Softmax_function).
\*ReLU is unfortunately not a particularly good activation function for $(0,\infty)$, for two reasons. First, it has a dead-derivative zone on the negative side, which causes optimization algorithms to get trapped. Second, at a value of exactly 0, many distributions would go singular for that value of the parameter. For this reason it is common practice to add a small value $\epsilon$ to assist off-the-shelf optimizers and for numerical stability.
As suggested by @Sycorax Softplus activation is a much better replacement as it doesn't have a dead derivative zone.
[![enter image description here](https://i.stack.imgur.com/O043I.png)](https://i.stack.imgur.com/O043I.png)
**Summary:**
1. Plug the network output to the parameters of the distribution and
take the -log then minimize the network weights.
2. This is equivalent to Maximum Likelihood Estimation of the
parameters. |
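As a tiny numeric illustration of the heteroscedastic Gaussian case from the list above, here is a numpy sketch in which the two "network outputs" are hard-coded raw values fed through the stated activations and plugged into the negative log-likelihood:

```
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def gaussian_nll(y, raw_mu, raw_sigma):
    # mu uses a linear activation, sigma a softplus activation (keeps sigma > 0)
    mu = raw_mu
    sigma = softplus(raw_sigma)
    return np.mean(0.5 * np.log(2 * np.pi * sigma ** 2)
                   + (y - mu) ** 2 / (2 * sigma ** 2))

y = np.array([1.0, 1.2, 0.8])
print(gaussian_nll(y, raw_mu=np.array([1.0, 1.0, 1.0]),
                      raw_sigma=np.array([-1.0, -1.0, -1.0])))
```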
It seems that lots of people (including me) like to do exploratory data analysis in Excel. Some limitations, such as the number of rows allowed in a spreadsheet, are a pain but in most cases don't make it impossible to use Excel to play around with data.
[A paper by McCullough and Heiser](http://www.pages.drexel.edu/~bdm25/excel2007.pdf), however, practically screams that you will get your results all wrong -- and probably burn in hell as well -- if you try to use Excel.
Is this paper correct or is it biased? The authors do sound like they hate Microsoft. | An interesting paper about using Excel in a Bioinformatics setting is:
>
> Mistaken Identifiers: Gene name errors
> can be introduced inadvertently when
> using Excel in bioinformatics, BMC
> Bioinformatics, 2004 ([link](http://www.biomedcentral.com/1471-2105/5/80)).
>
>
>
This short paper describes the problem of automatic type conversions in Excel (in particular [date](http://www.biomedcentral.com/1471-2105/5/80/figure/F1) and floating point conversions). For example, the gene name Sept2 is converted to 2-Sept. You can actually find this error in [online databases](http://www.biomedcentral.com/1471-2105/5/80/figure/F2).
Using Excel to manage medium to large amounts of data is dangerous. Mistakes can easily creep in without the user noticing. |
I know finite-state machines can be used to solve yes/no kind of problems, such as finding whether a word is in language or not. Can somebody tell me exactly what FSM cannot do and why? | The answer depends on what you mean by FSM.
If by FSM you mean finite automaton, then there's a lot that it can't do. As others have pointed out, you can't remember more than a constant amount of information using a finite automaton; the memory must be encoded in the state. This means you can't do things like recognize $a^nb^n$, since you'd need to remember how many $a$ symbols you've seen, which is not possible unless you have an arbitrary amount of memory. The pumping lemma for regular languages and, especially, the Myhill-Nerode theorem offer insight into the kinds of things that can and cannot be done.
If by FSM you mean any automaton formalism which is limited to using finitely many states, then Turing machines are among the most computationally capable FSM models available. Indeed, anything which is effectively computable is computable by a Turing machine, according to the Church-Turing thesis. However, there are still limitations to what a Turing machine can decide: there are problems it cannot answer, such as the halting problem.
Models which aren't limited to a finite number of states aren't particularly interesting for a few reasons, most prominent of which are the following: they are of limited practical importance since we cannot construct real computers with infinitely many configurations; any language can be recognized in such a model by mapping strings in the language to unique accepting states, and strings not in the language to unique non-accepting states. |
I want to prove that
$$\operatorname{Var}\left(\sum\limits\_{i=1}^m{X\_i}\right) \leq m\sum\limits\_{i=1}^m{\operatorname{Var}(X\_i)} \,. \>$$
A too complicated proof is to write
$$
a\_{ij}=\sqrt {Cov(X\_i,X\_j)} \,,
$$
$$\operatorname{Var}\left(\sum\limits\_{i=1}^m{X\_i}\right) = \sum a\_{ij}^2 \leq \sum a\_{ii}a\_{jj} \leq \sum\sum a\_{ii}^2$$
By Cauchy-Schwarz and then the permutation inequality.
I'm sure it can be shorter, but how? | The inequality is true in general under the assumption $\textrm{Var}(X\_i)<\infty$ for all $i$ (otherwise both sides are infinite). Applying Cauchy-Schwarz to the random quantities $X\_i-\mathbb E X\_i$ yields that
$$
\mathbb P\left[\left(\sum\_{i=1}^m X\_i-\mathbb E X\_i\right)^2\leq m\sum\_{i=1}^m (X\_i-\mathbb E X\_i)^2\right]=1.
$$
Consequently,
$$
\mathbb E\left(\sum\_{i=1}^m X\_i-\mathbb EX\_i\right)^2\leq m\ \mathbb E\sum\_{i=1}^m (X\_i-\mathbb EX\_i)^2.
$$
Using the definition of variance, we see this is exactly the desired inequality. |
What are some interesting techniques that can be used to analyze Likert data?
As a frame of reference, I have constructed a survey with about 50 items which are intended to assess the attitudes of the survey participants towards two very broad categories. Each item is on a 5 point scale.
The survey will be offered multiple times, thus pre and post testing is possible, although not necessarily the aim of the survey. | It looks like you want to do something like **principal component analysis** and find out two components from these 50 items. A very good technique for Likert data will be conducting **Non-linear Principal Component Analysis** or **Categorical PCA** instead of usual PCA. Papers have shown that it works better than usual PCA for ordinal data.
Key papers are from Jacqueline J. Meulman, see e.g. [Nonlinear principal components analysis: Introduction and application](https://doi.org/10.1037/1082-989X.12.3.336) (Psychol Methods. 2007 12(3):336-58) or [PCA with nonlinear optimal scaling transformations for ordinal and nominal data](http://www.corwin.com/upm-data/5040_Kaplan_Final_Pages_Chapter_3.pdf) (SAGE Handbook of Quantitative Methodology for the Social Sciences, 2004). |
I don't know if there is a canonical problem reducing my practical problem, so I will just try to describe it the best that I can.
I would like to cluster files into the specified number of groups, where each groups size (= the sum of sizes of files in this group) is as close as possible to each other group in this cluster (but not in other clusters). Here are the requirements:
1. The first group always contain one file, which is the biggest of all groups in the cluster.
2. Any other group but the first can have multiple files in it.
3. The number of groups in each cluster is constrained to a maximum specified by user (but there can be less groups if it's better or there's not enough files).
4. There's no constraint on the number of clusters (there can be as little or as many as necessary).
5. **Goal (objective function): minimize the space left in all groups (of all clusters) while maximizing the number of groups per cluster (up to the maximum specified).**
The reason behind these requirements is that I am encoding files together, and any remaining space in any group will need to be filled by null bytes, which is a waste of space.
Clarification on the objective and constraints that follow from the requirements and the problem statement:
* Input is a list of files with their respective size.
* Desired output: a list of clusters with each clusters being comprised of groups of files, each group having one or several concatenated files.
* There must be at least 2 groups per cluster (except if no file is remaining) and up to a maximum of G groups (specified by user).
* Each file can be assigned to any group whatsoever and each group can be assigned to any cluster.
* The number of clusters can be chosen freely.
Here is a schema that shows one wrong and one good example of clustering schemes on 5 files (1 big file, and 4 files of exactly half the size of the big file) with a number of groups = 2:
![Clustering schemes example](https://i.stack.imgur.com/JuTWW.png)
The solution needs not be optimal, it can be sub-optimal as long as it's good enough, so greedy/heuristics algorithms are acceptable as long as their complexity is good enough.
Another concrete example to be clear: let's say I have this list of 10 files with their sizes, this is the input (in Python):
```
{'file_3': 7,
'file_8': 11,
'file_6': 14,
'file_9': 51,
'file_1': 55,
'file_4': 58,
'file_5': 67,
'file_2': 68,
'file_7': 83,
 'file_0': 85}
```
The final output is a list of clusters like this (constrained here to 3 groups per cluster):
```
{1: [['file_0'], ['file_7'], ['file_2', 'file_6']],
2: [['file_5'], ['file_4', 'file_3'], ['file_1', 'file_8']],
3: [['file_9']]}
```
And for example here (this is not a necessary output, it's just to check) the total size of each groups (ie, sum of file sizes for each group) for each cluster:
```
{1: [85, 83, 82], 2: [67, 65, 66], 3: [51]}
```
If the problem is NP-complete and/or impossible to solve in polynomial time, I can accept a solution to a reduction of the problem, dropping the first and fourth requirements (no clusters at all, only groups):
![Reduced clustering problem](https://i.stack.imgur.com/REeLt.png)
Here is the algorithm I could come up with for the full problem, but it's running in about O(n^g) where n is the length of the list of files, and g the number of groups per cluster:
```
Input: number of groups G per cluster, list of files F with respective sizes
- Order F by descending size
- Until F is empty:
- Create a cluster X
- A = Pop first item in F
- Put A in X[0] (X[0] is thus the first group in cluster X)
For g in 1..G-1:
- B = Pop first item in F
- Put B in X[g]
- group_size := size(B)
If group_size != size(A):
While group_size < size(A):
- Find next item C in F which size(C) <= size(A) - group_size
- Put C in X[g]
- group_size := group_size + size(C)
```
How can I do better? Is there a canonical problem? It seems like it's close to scheduling of parallel tasks (with tasks instead of files and time instead of size), but I'm not quite sure? Maybe a divide-and-conquer algorithm exists for this task? | Finally I could devise a better algorithm by turning the problem upside down (ie, using bottom up construction instead of top-down).
In the algorithm from my question, I first create a cluster and its groups, and then I walk through the whole files list until I either completely fill the group sizes or there is no remaining file small enough to fit.
Here it's the other way around: I walk through the files list, and for each file I either assign it to a group if it fits, or if it's too big I create a cluster and init it with this file. To do that, I continually maintain a list of groups sizes to fill, using insertion sort so that when I pick a file to organize, I just need to check its size against the first item of the "to-fill" list.
Here's the algorithm, running in O(n log(g)) (thanks to insertion sort or binary search trees):
```
For each file:
- If to-fill list is empty or file.size > first-key(to-fill):
* Create cluster c with file in first group g1
* Add to-fill[file.size].append([c, g2], [c, g3], ..., [c, gn])
- Else:
* ksize = first-key(to-fill)
* c, g = to-fill[ksize].popitem(0)
* Add file to cluster c in group g
* nsize = ksize - file.size
* if nsize > 0:
. to-fill[nsize].append([c, g])
. sort to-fill if not an automatic ordering structure
```
Since it's running in O(n log(g)), the number of groups has little impact on the running time, contrary to the algo in the OP. Maybe it's possible to do better, but for now it is fast enough for me, since it can reasonably work on lists of 10M files under 20 seconds.
For those interested, here's a working implementation in Python (I used the [sortedcontainers module](http://www.grantjenks.com/docs/sortedcontainers/) for the sorted list, which runs in O(log(g)) instead of O(g) for insertion sort):
```
from collections import OrderedDict
from random import randint
from sortedcontainers import SortedList
def gen_rand_fileslist(nbfiles=100, maxvalue=100):
fileslist = {}
for i in xrange(nbfiles):
fileslist["file_%i" % i] = randint(1, maxvalue)
return fileslist
def group_files_by_size_fast(fileslist, nbgroups, mode=1):
    '''Given a files list with sizes, output a list where the files are grouped in nbgroups per cluster.
For each file:
- If to-fill list is empty or file.size > first-key(to-fill):
* Create cluster c with file in first group g1
* Add to-fill[file.size].append([c, g2], [c, g3], ..., [c, gn])
- Else:
* ksize = first-key(to-fill)
* c, g = to-fill[ksize].popitem(0)
* Add file to cluster c in group g
* nsize = ksize - file.size
* if nsize > 0:
. to-fill[nsize].append([c, g])
. sort to-fill if not an automatic ordering structure
'''
ftofill = SortedList()
ftofill_pointer = {}
fgrouped = [] # [] or {}
ford = sorted(fileslist.iteritems(), key=lambda x: x[1])
last_cid = -1
while ford:
fname, fsize = ford.pop()
#print "----\n"+fname, fsize
#if ftofill: print "beforebranch", fsize, ftofill[-1]
#print ftofill
if not ftofill or fsize > ftofill[-1]:
last_cid += 1
#print "Branch A: create cluster %i" % last_cid
fgrouped.append([])
#fgrouped[last_cid] = []
fgrouped[last_cid].append([fname])
if mode==0:
for g in xrange(nbgroups-1, 0, -1):
fgrouped[last_cid].append([])
if not fsize in ftofill_pointer:
ftofill_pointer[fsize] = []
ftofill_pointer[fsize].append((last_cid, g))
ftofill.add(fsize)
else:
for g in xrange(1, nbgroups):
try:
fgname, fgsize = ford.pop()
#print "Added to group %i: %s %i" % (g, fgname, fgsize)
except IndexError:
break
fgrouped[last_cid].append([fgname])
diff_size = fsize - fgsize
if diff_size > 0:
if not diff_size in ftofill_pointer:
ftofill_pointer[diff_size] = []
ftofill_pointer[diff_size].append((last_cid, g))
ftofill.add(diff_size)
else:
#print "Branch B"
ksize = ftofill.pop()
c, g = ftofill_pointer[ksize].pop()
#print "Assign to cluster %i group %i" % (c, g)
fgrouped[c][g].append(fname)
nsize = ksize - fsize
if nsize > 0:
if not nsize in ftofill_pointer:
ftofill_pointer[nsize] = []
ftofill_pointer[nsize].append((c, g))
ftofill.add(nsize)
return fgrouped
def grouped_count_sizes(fileslist, fgrouped):
'''Compute the total size per group and total number of files. Useful to check that everything is OK.'''
fsizes = {}
total_files = 0
allitems = None
for fkey, cluster in enumerate(fgrouped):
fsizes[fkey] = []
for subcluster in cluster:
tot = 0
for fname in subcluster:
tot += fileslist[fname]
total_files += 1
fsizes[fkey].append(tot)
return fsizes, total_files
if __name__ == '__main__':
nbfiles = 10000000
nbgroups = 10
fileslist = gen_rand_fileslist(nbfiles)
    fgrouped = group_files_by_size_fast(fileslist, nbgroups)
fsizes, total_files = grouped_count_sizes(fileslist, fgrouped)
print total_files
print fsizes
``` |
When considering machine models of computation, the Chomsky hierarchy is normally characterised by (in order), finite automata, push-down automata, linear bound automata and Turing Machines.
For the first and last levels1 (regular languages and recursively enumerable languages), it makes no difference to the power of the model whether we consider deterministic or nondeterministic machines, i.e. DFAs are equivalent to NFAs, and DTMs are equivalent to NTMs2.
However for PDAs and LBAs, the situation is different. Deterministic PDAs recognise a strictly smaller set of languages than nondeterministic PDAs. It is also a significant open question whether deterministic LBAs are as powerful as nondeterministic LBAs or not [1].
This prompts my question:
>
> Is there a machine model that characterises the context-free languages, but for which non-determinism adds no extra power? (If not, is there some property of CFLs which suggests a reason for this?)
>
>
>
It seems unlikely (to me) that it would be provable that context-free languages somehow *need* nondeterminism, but there doesn't seem to be a (known) machine model for which deterministic machines are sufficient.
The extension question is the same, but for context-sensitive languages.
**References**
1. S.-Y. Kuroda, ["Classes of Languages and Linear Bound Automata"](http://ac.els-cdn.com/S0019995864901202/1-s2.0-S0019995864901202-main.pdf?_tid=5c0f3ca0-0819-11e5-9096-00000aab0f6c&acdnat=1433134251_344737eed5fdf70b02d9f3814e984188), Information and Control, 7:207-223, 1964.
**Footnotes**
1. Side question for the comments, is there a reason for the levels (ordered by set inclusion) of the Chomsky hierarchy to be number 3 to 0, instead of 0 to 3?
2. To be clear, I'm talking about the languages that can be recognised only. Obviously questions of complexity are radically affected by such a change. | As far as I know, we do not know whether Non- deterministic context-sensitive languages are more powerful than deterministic context-sensitive languages. And the simplest reason behind this is: Till now no-one has been able to give a language that is accepted by the non-deterministic LBA and not accepted by the deterministic LBA. So if someone can come up with one such language that is accepted by ND LBA and not accepted by deterministic LBA then only this will be proven. As we know this method of proving is called proof by contradiction.
Because proving something is TRUE is a hard problem but proving something FALSE is easy, just get one instance where it fails and we're done. |
The example below is taken from the lectures in [deeplearning.ai](http://deeplearning.ai) shows that the result is the sum of the **element-by-element product** (or "element-wise multiplication". The red numbers represent the weights in the filter:
$(1\*1)+(1\*0)+(1\*1)+(0\*0)+(1\*1)+(1\*0)+(0\*1)+(0\*0)+(1\*1)
= 1+0+1+0+1+0+0+0+1
= 4 $
[![enter image description here](https://i.stack.imgur.com/MkFSC.png)](https://i.stack.imgur.com/MkFSC.png)
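For reference (illustrative NumPy, not part of the original post), summing the element-by-element products of the patch and the filter gives exactly the dot product of the two flattened arrays, using the values from the worked example above:

```
import numpy as np

patch = np.array([[1, 1, 1],
                  [0, 1, 1],
                  [0, 0, 1]])   # image values from the example above
kern = np.array([[1, 0, 1],
                 [0, 1, 0],
                 [1, 0, 1]])    # the red filter weights

elementwise_sum = np.sum(patch * kern)           # sum of element-by-element products
dot_of_flattened = patch.ravel() @ kern.ravel()  # dot product of the flattened arrays

print(elementwise_sum, dot_of_flattened)         # both are 4
```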
HOWEVER, most resources say that it's the ***dot product*** that's used:
>
> "…we can re-express the output of the neuron as , where is the bias
> term. In other words, we can compute the output by y=f(x\*w) where b is
> the bias term. In other words, we can compute the output by
> performing the dot product of the input and weight vectors, adding in
> the bias term to produce the logit, and then applying the
> transformation function."
>
>
>
Buduma, Nikhil; Locascio, Nicholas. Fundamentals of Deep Learning: Designing Next-Generation Machine Intelligence Algorithms (p. 8). O'Reilly Media. Kindle Edition.
>
> "We take the 5*5*3 filter and slide it over the complete image and
> along the way take the dot product between the filter and chunks of
> the input image. For every dot product taken, the result is a
> scalar."
>
>
>
[The best explanation of Convolutional Neural Networks on the Internet!](https://medium.com/technologymadeeasy/the-best-explanation-of-convolutional-neural-networks-on-the-internet-fbb8b1ad5df8)
>
> "Each neuron receives some inputs, performs a dot product and
> optionally follows it with a non-linearity."
>
>
>
[CS231n Convolutional Neural Networks for Visual Recognition](https://cs231n.github.io/convolutional-networks/)
>
> "The result of a convolution is now equivalent to performing one large
> matrix multiply np.dot(W\_row, X\_col), which evaluates the dot product
> between every filter and every receptive field location."
>
>
>
[CS231n Convolutional Neural Networks for Visual Recognition](https://cs231n.github.io/convolutional-networks/)
However, when I research [how to compute the dot product of matrices](https://www.khanacademy.org/math/precalculus/precalc-matrices/properties-of-matrix-multiplication/a/matrix-multiplication-dimensions), it seems that the dot product is **not** the same as summing the element-by-element multiplication. What operation is actually used (element-by-element multiplication or the dot product?) and what is the primary difference? | Have a look at [The HASYv2 dataset](https://arxiv.org/pdf/1701.08380.pdf). I tried to do as much of the exploratory work as possible to make sure that others can directly try more interesting things with the dataset.
Image format specific stuff
---------------------------
* (Minimum / Median / Mean / Maximum) (width / height / area)
* Image formats
* Timestamps
* EXIF metadata
For this kind of stuff, you might want to have a look at [`edapy`](https://github.com/MartinThoma/edapy).
Image/ML specific stuff
-----------------------
Things you can do with images:
* Compute the mean image (see the sketch after this list)
+ Mean image by class
* [Eigenfaces](https://en.wikipedia.org/wiki/Eigenface) (or rather "Eigenimages")
* [Fisher-Faces](https://en.wikipedia.org/wiki/Facial_recognition_system#Traditional)
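For the first item, a minimal NumPy sketch of a per-class mean image (names and array shapes are assumptions, not tied to any particular dataset loader):

```
import numpy as np

def mean_images_by_class(images, labels):
    """images: array of shape (n_samples, height, width); labels: shape (n_samples,).
    Returns a dict mapping each class label to its pixel-wise mean image."""
    return {c: images[labels == c].mean(axis=0) for c in np.unique(labels)}
```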
You can compute the correlation of pixels, e.g. [Figure 3](https://arxiv.org/pdf/1701.08380.pdf):
[![enter image description here](https://i.stack.imgur.com/cOhqp.jpg)](https://i.stack.imgur.com/cOhqp.jpg)
Classification-specific stuff
-----------------------------
* Plot the distribution of classes.
* Behaviour of standard classification algorithms (CNNs, VGG-16)
* [Confusion Matrix Ordering](https://arxiv.org/pdf/1707.09725.pdf) (page 48 - 52, especially Figure 5.12 and 5.13): Find similar classes
[![enter image description here](https://i.stack.imgur.com/UPoVg.png)](https://i.stack.imgur.com/UPoVg.png) |
I was just wondering if there exists problems that are solvable in polynomial time (a correct solution can be found in polynomial time) but not verifiable in polynomial time. My professor says no, but doesn't rlly give a clear explanation besides saying you can just use the solution to verify, but I fail to see how this can be if there are more than just one solution.
Are we allowed to make the argument that any problem with a poly algorithm can be modified in polynomial time so that it produces the particular polynomial solution we are trying to verify?
Edit: Sorry if the problem was confusing as I threw in P and NP, the title has been changed to reflect this. | It appears you're asking about problems for which there may be more than one correct answer for a given input (not decision problems).
In that case, there may exist a polynomial-time algorithm that finds a solution, but no algorithm that can verify arbitrary solutions. For example, given any string, it's easy to find a program that prints that string and then halts, but impossible in general to determine whether a program prints that string and then halts. |
I want to compress file size through making my own numbering system which is 80-based number, I do really want to know whether this even possible ? I learnt that Hexadecimal uses symbols like A, B, C, D, E, F to represent 10,11,12,13,14,15 -- and that's what i want to do to my own numbering system but in a bigger scale . Please correct me if i'm missing something .
Is it possible ? | There are several ways to interpret the question. What I *think* you might be asking is that you have a sequence of $n$ letters in an alphabet $\Sigma$ where $\left| \Sigma \right| = 80$. You want to store this in as few as possible bits. We will assume that the letters in the alphabet are uniformly distributed.
The information-theoretic amount of space required to store this is $n \log\_2 \left| \Sigma \right|$ bits. Using arithmetic coding, you can do this in linear time, using $O(\log n)$ bits of intermediate space. (Remember, that's the logarithm of the number of symbols, in bits! If the size of the sequence fits in a machine word, the intermediate storage required is a constant number of machine words at the most.)
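To see where the $n \log\_2 80 \approx 6.32\,n$ bits come from, here is the naive big-integer packing (illustrative Python, not arithmetic coding proper) that already achieves that bound, at the cost of having to decode everything in order to read a single symbol:

```
def pack(symbols, radix=80):
    """Pack a sequence of base-`radix` digits into a single integer."""
    value = 0
    for s in symbols:
        assert 0 <= s < radix
        value = value * radix + s
    return value

def unpack(value, n, radix=80):
    """Recover the n digits packed by `pack` (most significant first)."""
    digits = []
    for _ in range(n):
        value, d = divmod(value, radix)
        digits.append(d)
    return digits[::-1]

msg = [3, 79, 0, 41, 12]
assert unpack(pack(msg), len(msg)) == msg
```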
So that's pretty good. But what about if we want random access?
It turns out that it can be done. The first technique to do it was only discovered about four years ago. We can store the sequence in $n \log\_2 \left| \Sigma \right|$ bits, such that *reading or writing* any entry takes $O(1)$ time. If you think about it, this is a remarkable result, because it means that a computer which works with any radix is, in a sense, equivalent to a binary one.
Here's the paper: Yevgeniy Dodis, Mihai Pătraşcu, and Mikkel Thorup, [An Alternative to Arithmetic Coding with Local Decodability](http://people.csail.mit.edu/mip/papers/trits/paper.pdf), STOC 2010.
By the way, remember the name Mihai Pătraşcu. He was and is the closest thing we have to a modern-day Évariste Galois. He died very young, of a brain tumor at the age of 29. But in his short career as a computer scientist, his work revolutionised the field of analysis of algorithms in ways that will take decades to fully understand. |
I need to create a recursive algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable. This is an assignment for my data structures class.
So far I have
```
void BST(tree T) {
if (T == null) return
if ( T.left and T.right) {
if (T.left.data < T.data or T.right.data > T.data) {
count = count + 1
BST(T.left)
BST(T.right)
}
}
}
```
But I can't really figure this one out. I know that this algorithm won't solve the problem because the count will be zero if the second if statement isn't true.
Could anyone help me out on this one? | As others have already indicated in comments, you really have two unrelated functions here: testing whether the tree is a search tree, and counting the complete branches. Unless the assignment specifically calls for it, I would write two separate functions.
Let's see about counting the complete branches first. That means counting the nodes that have both a left child and a right child. Then you need to increment the counter (`count = count + 1`) when both `T.left` and `T.right` are non-null (not `T.left.data` and `T.right.data`: the data doesn't matter for this task).
```
if (T.left and T.right) {
    count = count + 1
}
```
Furthermore, you need to explore the left subtree even if the right subtree is empty, and you need to explore the right subtree even if the left subtree is empty. So watch where you put the recursive calls.
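For concreteness, here is a sketch of just the counting half in Python (returning the count rather than using the assignment's global variable; the node layout is assumed):

```
def count_complete_branches(node):
    # a node contributes a "complete branch" when it has both a left and a right child
    if node is None:
        return 0
    here = 1 if (node.left is not None and node.right is not None) else 0
    return (here
            + count_complete_branches(node.left)
            + count_complete_branches(node.right))
```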
To test whether the tree is a search tree, you need to inspect the data values. You've already got something close to the right comparison; not quite right. **Write a few example trees with various shapes** (not very big, 2 to 5 nodes) and run your algorithm on them to see what happens.
You still need to find some place to put the result of the validity check. Again, watch where you put the recursive calls (if you only do this part, there are several solutions, but at this stage don't worry if you only see one).
Finally, once you've managed to write both functions separately, and you've tested them on a few examples, put them together carefully (if required by the assignment). |
in the company where I work we retrain ML models regularly every day. Now we started to experiment with retraining a model by new observations with labels predicted by model itself.
I've tried to search on the Internet if someone else uses this technique but I didn't find much.
My gut feeling tells me that it is not the right approach as the labels do not represent ground truth any more and therefore the model might deviate and learn an incorrect hypothesis in time.
Am I right that this approach is not correct? Could you please direct me to some sources where I can read more about this topic?
Thank you,
Tomas | That problem is often framed as a [maximum subarray problem](https://en.wikipedia.org/wiki/Maximum_subarray_problem) and solved with [Kadane's algorithm](https://en.wikipedia.org/wiki/Maximum_subarray_problem#Kadane%27s_algorithm). Kadane's algorithm only stores 2 scalar values so it uses very little space. |
Coming from the field of computer vision, I've often used the [RANSAC](http://en.wikipedia.org/wiki/RANSAC) (Random Sample Consensus) method for fitting models to data with lots of outliers.
However, I've never seen it used by statisticians, and I've always been under the impression that it wasn't considered a "statistically-sound" method. Why is that so? It is random in nature, which makes it harder to analyze, but so are bootstrapping methods.
Or is simply a case of academic silos not talking to one another? | For us, it is just one example of a robust regression -- I believe it is used by statisticians also, but maybe not so wide because it has some better known alternatives. |
Im trying to understand how people use R packages and was wondering if there are documented cases where R packages have produced different answers.
Clarification: The motivation behind this question comes from an effort I'm involved in where the goal is to understand the importance of provenance in the analytical methods and how it facilitates reproducible research. While R is big in the science community at present, and R packages are versioned in CRAN, without detailed information [especially version numbers], someone trying to reproduce a body of work in the future might come to a different conclusion than the original work (even with the original data).
Example: Paper by John Doe says "we used R 2.3.1 and package glmulti to fit our models". 10 years from now, someone might use a new version of glmulti (no one knows what version was used in the original) which might produce a much different conclusion. My question: Are there examples of such a thing happening already? Version 2 of an R package produces a much different result than version 1. | This will vary package to package, but the general answer is **yes**. Outputs can vary, and even basic usage too (input/output args). This is why, when I do an important analysis, I always like to document what versions were used with `R.version` and `sessionInfo()`. Even if things change, old versions are retained on CRAN, so you can get the old versions if you need them. |
I have two random variables and I am trying to see if and how much they are correlated. One of them (say X) is discrete and the other (say Y) is continuous. I used Seaborn to do the linear regression on the scatter plot for X-Y and I got the following:
[![enter image description here](https://i.stack.imgur.com/HcJhT.png)](https://i.stack.imgur.com/HcJhT.png)
In this case, the correlation between the two comes out to be 0.5. I then took the mean of all Y values for a given X and did the regression again and got this:
[![enter image description here](https://i.stack.imgur.com/ojjo1.png)](https://i.stack.imgur.com/ojjo1.png)
Now I get the correlation to be close to 1! My question is, which one of these provides realistic description of the datasets and why? | The averages may be close to linearly related but individual points are not.
There is deviation from a linear relationship, and a correlation coefficient reflects such deviation in two ways: 1) scatter around the line (which is deviation from linear), and 2) a relationship whose shape is not linear (which is, clearly, deviation from linear).
What you found out is that the function describing the relationship of the averages is pretty much linear (your second analysis, which shows that the 'line' versus 'averages of multiple points', is nearly the same model), but the linear relation has a lot of scatter on top of it (the first analysis). This means that, the two coordinates for a point are *not* strongly linearly related.
The correlation tells how much, for individual points, one variable is (linearly) related to the other. Your first image clearly shows that this is not the case (or at least not *so* much, since you end up in the middle between 0 and 1).
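A quick simulation (illustrative NumPy, not the OP's data) shows the same effect: the correlation over raw points is moderate, while the correlation over group means is close to 1.

```
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(1, 6, size=500)                 # discrete predictor, like 'dist'
y = 0.5 * x + rng.normal(scale=1.0, size=500)    # linear trend plus scatter

r_points = np.corrcoef(x, y)[0, 1]               # correlation over raw points
levels = np.unique(x)
y_means = np.array([y[x == v].mean() for v in levels])
r_means = np.corrcoef(levels, y_means)[0, 1]     # correlation over group means

print(round(r_points, 2), round(r_means, 2))     # roughly 0.6 vs nearly 1.0
```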
The best image to show the relationship would be the combination of **both** images. In that way another observer could directly see what is going on (which is two things at the same time, there is a linear trend, but still there is also some, non uniform, scatter). I would plot the averages, or maybe boxplots, with estimates *and* the data points on top of it (some jittering would be nice, it is currently difficult to see how many points you have in your 4 and 5 'dist' levels). Although this can be done in multiple ways. It depends on what your readers would like to see, or can handle.
An interesting point is that the lines are not the same in the two cases. This is because of the different weights. In the second image each average carries the same weight; in the first image this is not the case. It seems to me that there are four heavy points, the three at assort=0 and the one at dist = 4 and assort = 0.75, which dominate the line.
It looks like the linear relationship is mostly in the variation of the scatter among the different 'dist' levels. It might be interesting to see what happens with the median or with other percentile levels (you chose the mean in your second study, but you may wonder whether it correctly shows what is going on). Also showing the variation of the scatter (and not just the means) seems to me a correct way to present the data. |
I am investigating a curvilinear effect between X and Y by using a hierarchic regression analysis. To test for curvilinear effects, the squared term for X was computed (I mean center also variable X).
In model 1, the control variables were entered. In Model 2, X (linear) was entered. In Model 3, X (quadratic) was entered.
In Model 2, X linear is significant. When the squared term is entered in Model 3, the quadratic term is significant but the linear term is not. Does this prove a curvilinear effect? Or is it essential that in Model 3 both (linear and quadratic) are significant?
When I do not mean center the independent variable, Model 3 stated X linear and X quadratic significant. The problem here is multicollinearity issues. | No, it is not essential that both the linear and quadratic terms be significant. Only the quadratic term need be significant.
In fact, it is important to note that the linear term takes on a somewhat different interpretation in the context of a model that also includes the quadratic term. In such a model, the linear term now represents the slope of the line tangent to x at the y-intercept, that is, the predicted slope of x *when and only when x = 0*. So a test of the linear term in a model like this is not in general testing the same thing as in a model that just includes the linear term without the quadratic. |
I was reading [this](http://scikit-learn.org/stable/modules/linear_model.html#ridge-regression) when I came across the term collinearity. I tried looking up what it is but top results are related to multicollinearity.
I could find the following about multicollinearity:
>
> multicollinearity refers to predictors that are correlated with other predictors in the model
>
>
>
It is my assumption (based on their names) that multicollinearity is a type of collinearity but not sure. Do these 2 terms differ or are they synonyms? | In statistics, the terms [collinearity and multicollinearity are overlapping](https://en.wikipedia.org/wiki/Multicollinearity). **Collinearity** is a linear association between **two** explanatory variables. Multicollinearity in a multiple regression model are highly linearly related associations between **two or more** explanatory variables.
In case of perfect multicollinearity the design matrix $X$ has less than full rank, and therefore the moment matrix $X^{\mathsf{T}}X$ cannot be matrix inverted. Under these circumstances, for a general linear model $y = X \beta + \epsilon$, the ordinary least-squares estimator $\hat{\beta}\_{OLS} = (X^{\mathsf{T}}X)^{-1}X^{\mathsf{T}}y$ does not exist. |
I hope you'll suffer a question from a novice :)
I'm trying to find the best way to graph (in Excel) the month-by-month share of site-wide visits and pageviews in five referral sources, grouping the remaining share as one: "other sources."
My thought was to do a clustered, stacked column chart, where each month has a cluster of two columns—visits and pageviews. Each column in a cluster would be stacked by source. The visits column would use one y-axis, and pageviews would use a secondary y-axis—pageviews have much higher values than visits. Please forgive me if any of this terminology is wrong.
Am I on the right track or is there a better way to visualize this data? If this is the way to go, any direction on how to arrange my data to coax Excel into getting all these parameters into one chart? Or does that question belong on a different StackExchange site?
Thanks! | I built on @Andy W's R-code and hope my changes are useful to someone else.
I mainly changed it, so that it
* obeys the new syntax (no more opts) in ggplot2, so no more warnings
* adds the correlations as text
* now correlation text size reflects its effect size
* colour scheme shows the type of correlation (hetero/mono-trait/method).
* put the legend in the empty upper right triangle
The function also contains my way for creating the data in the right format from a dataframe or correlation table. This depends on having your trait and method encoded in the variable name and you'd probably want to extract CFA loadings for a more solid look at the matter. In my case I first wanted to eyeball the correlations with a bit more visual structure. If you have your correlations/loadings in long format already it should be easy to adapt the function or to cast the long to wide.
### Edit:
I put this in a package on Github. You can get it using `devtools::install_github("rubenarslan/formr")`, the function is then `formr::mtmm`.
![MTMM](https://i.imgur.com/ejdUeHA.png)
```
## function for rendering a multi trait multi method matrix
mtmm = function (
variables, # data frame of variables that are supposed to be correlated
reliabilities = NULL, # reliabilties: column 1: scale, column 2: rel. coefficient
split_regex = "\\.", # regular expression to separate construct and method from the variable name. the first two matched groups are chosen
cors = NULL
) {
library(stringr); library(Hmisc); library(reshape2); library(ggplot2)
if(is.null(cors))
cors = cor(variables, use="pairwise.complete.obs") # select variables
var.names = colnames(cors)
corm = melt(cors)
corm = corm[ corm[,'Var1']!=corm[,'Var2'] , ] # substitute the 1s with the scale reliabilities here
if(!is.null(reliabilities)) {
rel = reliabilities
names(rel) = c('Var1','value')
rel$Var2 = rel$Var1
rel = rel[which(rel$Var1 %in% var.names), c('Var1','Var2','value')]
corm = rbind(corm,rel)
}
if(any(is.na(str_split_fixed(corm$Var1,split_regex,n = 2))))
{
print(unique(str_split_fixed(corm$Var1,split_regex,n = 2)))
stop ("regex broken")
}
corm[, c('trait_X','method_X')] = str_split_fixed(corm$Var1,split_regex,n = 2) # regex matching our column naming schema to extract trait and method
corm[, c('trait_Y','method_Y')] = str_split_fixed(corm$Var2,split_regex,n = 2)
corm[,c('var1.s','var2.s')] <- t(apply(corm[,c('Var1','Var2')], 1, sort)) # sort pairs to find dupes
corm[which(
corm[ ,'trait_X']==corm[,'trait_Y']
& corm[,'method_X']!=corm[,'method_Y']),'type'] = 'monotrait-heteromethod (validity)'
corm[which(
corm[ ,'trait_X']!=corm[,'trait_Y']
& corm[,'method_X']==corm[,'method_Y']), 'type'] = 'heterotrait-monomethod'
corm[which(
corm[ ,'trait_X']!=corm[,'trait_Y']
& corm[,'method_X']!=corm[,'method_Y']), 'type'] = 'heterotrait-heteromethod'
corm[which(
corm[, 'trait_X']==corm[,'trait_Y']
& corm[,'method_X']==corm[,'method_Y']), 'type'] = 'monotrait-monomethod (reliability)'
corm$trait_X = factor(corm$trait_X)
corm$trait_Y = factor(corm$trait_Y,levels=rev(levels(corm$trait_X)))
corm$method_X = factor(corm$method_X)
corm$method_Y = factor(corm$method_Y,levels=levels(corm$method_X))
corm = corm[order(corm$method_X,corm$trait_X),]
corm = corm[!duplicated(corm[,c('var1.s','var2.s')]), ] # remove dupe pairs
#building ggplot
mtmm_plot <- ggplot(data= corm) + # the melted correlation matrix
geom_tile(aes(x = trait_X, y = trait_Y, fill = type)) +
geom_text(aes(x = trait_X, y = trait_Y, label = str_replace(round(value,2),"0\\.", ".") ,size=log(value^2))) + # the correlation text
facet_grid(method_Y ~ method_X) +
ylab("")+ xlab("")+
theme_bw(base_size = 18) +
theme(panel.background = element_rect(colour = NA),
panel.grid.minor = element_blank(),
axis.line = element_line(),
strip.background = element_blank(),
panel.grid = element_blank(),
legend.position = c(1,1),
legend.justification = c(1, 1)
) +
scale_fill_brewer('Type') +
scale_size("Absolute size",guide=F) +
scale_colour_gradient(guide=F)
mtmm_plot
}
data.mtmm = data.frame(
'Ach.self report' = rnorm(200),'Pow.self report'= rnorm(200),'Aff.self report'= rnorm(200),
'Ach.peer report' = rnorm(200),'Pow.peer report'= rnorm(200),'Aff.peer report'= rnorm(200),
'Ach.diary' = rnorm(200),'Pow.diary'= rnorm(200),'Aff.diary'= rnorm(200))
reliabilities = data.frame(scale = names(data.mtmm), rel = runif(length(names(data.mtmm))))
mtmm(data.mtmm, reliabilities = reliabilities)
``` |
The worst case height of AVL tree is $1.44 \log n$. How do we prove that?
I read somewhere about Fibonacci quicks but did not understand it. | We want to show that the number of nodes $n$ in a height-balanced binary tree with height $h$ grows exponentially with $h$ and at least as fast as the Fibonacci sequence.
Let $N\_h$ denote the minimum number of nodes in a height-balanced binary tree having height $h$. Recall that in a height-balanced binary tree of height $h$, the subtree rooted at one of the children of the root has height $h-1$, and the subtree rooted at the other child of the root has height $h-1$ or $h-2$. Thus, $N\_h > N\_{h-1} + N\_{h-2}$. Thus, $N\_h$ is at least $f\_h$, the $h$th term of the Fibonacci sequence, where $f\_h \approx \phi^h / \sqrt{5}$ and $\phi$ is the golden ratio $\frac{1+\sqrt{5}}{2}$.
So, if $n$ is the number of nodes in an AVL tree of height $h$, we have $n \ge \phi^h / \sqrt{5}$. Taking $\log\_2$ of both sides, we get $h \le \frac{\log\_2 n}{\log\_2 \phi} + c = 1.4404 \log\_2 n + c$, for some constant $c$. Thus, an AVL tree has height $h = O(\log n)$
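A quick numerical check of the bound (illustrative Python), computing the minimum node counts from the recurrence $N\_h = N\_{h-1} + N\_{h-2} + 1$ with $N\_0 = 1$, $N\_1 = 2$ (taking the height of a single node to be 0):

```
import math

N = [1, 2]                      # minimum node counts for heights 0 and 1
for h in range(2, 40):
    N.append(N[-1] + N[-2] + 1)

for h, n in enumerate(N):
    # the sparsest AVL tree with n nodes has height h; check h <= ~1.44 log2(n) + c
    assert h <= 1.4405 * math.log2(n) + 2, (h, n)
```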
An easier proof, if you don't care about the constants as much, is to observe that $N\_h > N\_{h-1}+N\_{h-2} > 2N\_{h-2}$. Hence, $N\_h$ grows at least as fast as $\sqrt{2}^h$. So the number of nodes $n$ in a height-balanced binary tree of height $h$ satisfies $n > \sqrt{2}^h$. So $h \log\_2 \sqrt{2} < \log n$, which implies $h < 2 \log n$. |
I was asked to give an example of a graph that has edges with negative weight, but Dijkstra's algorithm will still give us the correct output. It was part of a prove/disprove question. The claim was.. *"If a graph has negative edges, Dijkstra's algorithm will return the wrong output"*. | Dijkstra's algorithm sometimes works when there are negative weights, and sometimes not, others have given several examples. Allow me to give a few ideas that may make it easier to find an example on your own.
Dijkstra's algorithm *never* works when there is a negative weight cycle. If a graph has negative weights, but no negative weight cycles, it is possible to modify the graph into a graph where Dijkstra can be applied and the results transformed to find shortest paths in the original graph. One such technique is known as [Johnson's algorithm](https://en.wikipedia.org/wiki/Johnson%27s_algorithm). I think this fact makes it more intuitive that negative weights are not an essential problem for Dijkstra's algorithm, and so we should expect some examples where it just works. |
In most basic probability theory courses your told moment generating functions (m.g.f) are useful for calculating the moments of a random variable. In particular the expectation and variance. Now in most courses the examples they provide for expectation and variance can be solved analytically using the definitions.
Are there any real life examples of distributions where finding the expectation and variance is hard to do analytically and so the use of m.g.f's was needed? I'm asking because I feel like I don't get to see exactly why they are important in the basic courses. | You are right that mgf's can seem somewhat unmotivated in introductory courses. So, some examples of use. First, in discrete probability problems often we use the probability generating function, but that is only a different packaging of the mgf, see [What is the difference between moment generating function and probability generating function?](https://stats.stackexchange.com/questions/297711/what-is-the-difference-between-moment-generating-function-and-probability-genera/297713#297713). The pgf can be used to solve some probability problems which could be hard to solve otherwise, for a recent example on this site, see [PMF of the number of trials required for two successive heads](https://stats.stackexchange.com/questions/303960/pmf-of-the-number-of-trials-required-for-two-successive-heads/304003#304003) or [sum of $N$ gamma distributions with $N$ being a poisson distribution](https://stats.stackexchange.com/questions/367158/sum-of-n-gamma-distributions-with-n-being-a-poisson-distribution/367197#367197). Some not-so-obvious applications which still could be used in an introductory course, is given in [Expectation of reciprocal of a variable](https://stats.stackexchange.com/questions/80874/expectation-of-reciprocal-of-a-variable/296879#296879), [Expected value of $1/x$ when $x$ follows a Beta distribution](https://stats.stackexchange.com/questions/284357/expected-value-of-1-x-when-x-follows-a-beta-distribution/284623#284623) and [For independent RVs $X\_1,X\_2,X\_3$, does $X\_1+X\_2\stackrel{d}{=}X\_1+X\_3$ imply $X\_2\stackrel{d}{=}X\_3$?](https://stats.stackexchange.com/questions/303525/for-independent-rvs-x-1-x-2-x-3-does-x-1x-2-stackreld-x-1x-3-imply-x/303587#303587) .
Another kind of use is constructing approximations of probability distributions, one example is the saddlepoint approximation, which take as starting point the natural logarithms of the mgf, called the cumulant generating function. See [How does saddlepoint approximation work?](https://stats.stackexchange.com/questions/191492/how-does-saddlepoint-approximation-work/191781#191781) and for some examples, see [Bound for weighted sum of Poisson random variables](https://stats.stackexchange.com/questions/183313/bound-for-weighted-sum-of-poisson-random-variables/183355#183355) and [Generic sum of Gamma random variables](https://stats.stackexchange.com/questions/72479/general-sum-of-gamma-distributions/137318#137318)
Mgf's can also be used to prove limit theorems, for instance the poisson limit of binomial distributions [Intuitively understand why the Poisson distribution is the limiting case of the binomial distribution](https://stats.stackexchange.com/questions/261119/intuitively-understand-why-the-poisson-distribution-is-the-limiting-case-of-the) can be proved via mgf's.
Some examples (exercise sets with solutions) of actuarial use of mgf's can be found here: [https://faculty.math.illinois.edu/~hildebr/370/370mgfproblemssol.pdf](https://faculty.math.illinois.edu/%7Ehildebr/370/370mgfproblemssol.pdf) Search the internet with "moment generating function actuarial" will give lots of similar examples. The actuaries seem to be using mgf's to solve some problems (that arises for instances in premium calculations) that is difficult to solve otherwise. One example in [section 3.5 page 21](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=11&ved=2ahUKEwid6Jjyn6HeAhUOposKHduEDi44ChAWMAB6BAgJEAI&url=https%3A%2F%2Fwww.jstatsoft.org%2Farticle%2Fview%2Fv025i07%2Fv25i07.pdf&usg=AOvVaw063IRPxxBLuWzWEAbBRERN) and [books about actuarial risk theory](https://rads.stackoverflow.com/amzn/click/com/3642034071). One source of (estimated) mgf's for such applications could be [empirical mgf's](https://math.stackexchange.com/questions/1539072/uniform-convergence-of-empirical-moment-generating-function) (strangely, I cannot find even one post here about empirical moment generating functions).
A recent illuminating example is at out sister site: <https://mathoverflow.net/questions/435496/why-should-the-logarithmic-series-distribution-model-the-number-of-items-bough> |
I have 70,000 observations for my dependent variable. I have 12 independent variables. After removing zero value and error and missing value form my data set, my data reduced to 4000. Can I still do the multiple linear regression with this data set? I think 4000 data is more than enough for 12 independent variables, but I am not sure whether removing almost 90% of observations will harm my regression or not? | Because the bootstrapped statistic is one further abstraction away from your population parameter. You have your population parameter, your sample statistic, and only on the third layer you have the bootstrap. The bootstrapped mean value is not a better estimator for your population parameter. It's merely an estimate of an estimate.
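A minimal simulation (illustrative NumPy) of that layering: the bootstrap distribution centres on the sample mean, not on the population mean, which is why it estimates the statistic's variability rather than improving the estimate itself.

```
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=5.0, scale=2.0, size=50)   # population mean is 5.0

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

print(sample.mean())      # the sample statistic, our estimate of 5.0
print(boot_means.mean())  # centres on the sample mean, not on 5.0
print(boot_means.std())   # useful as an estimate of the standard error
```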
As $n \rightarrow \infty$ the bootstrap distribution containing all possible bootstrapped combinations centers around the sample statistic much like the sample statistic centers around the population parameter under the same conditions. This paper [here](http://www.stat.rutgers.edu/home/mxie/rcpapers/bootstrap.pdf) sums these things up quite nicely and it's one of the easiest I could find. For more detailed proofs follow the papers they're referencing. Noteworthy examples are [Efron (1979)](http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.aos/1176344552) and [Singh (1981)](http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.aos/1176345636)
The bootstrapped distribution of $\theta\_B - \hat\theta$ follows the distribution of $\hat \theta - \theta$ which makes it useful in the estimation of the standard error of a sample estimate, in the construction of confidence intervals, and in the estimation of a parameter's bias. It does not make it a better estimator for the population's parameter. It merely offers a sometimes better alternative to the usual parametric distribution for the statistic's distribution. |
I am trying my first 'project' concerning machine learning and I am a bit stuck.
However, I am not sure if it's even possible but here goes my question.
What I want to achieve is clustering user groups based on the amount of visits a user does on a certain website.
So I started out with this feature matrix:
```
USER abc.be abc.be/a abc.be/b xyz.be xyz.be/a
123 0 0 0 0 1
456 1 0 1 0 0
789 2 3 1 0 0
321 1 0 1 0 1
654 1 1 1 1 1
987 0 1 0 3 0
```
So I got in this example 5 features (my 5 different websites).
So then I used PCA to come to 2 dimensions, so I could plot it and see how it went.
My feature matrix (in my example) is 5 columns \* 6 rows.
My PCA matrix is 2 columns \* 6 rows.
I came to this plot (please note that this plot uses different data then the example but the idea is the same)
![PCA points and k-means centroids](https://i.stack.imgur.com/Tl0Qv.png)
The green points are my PCA points
The red circles are my K-Means centroids.
But the part I am struggling with is this: I've got my clusters (red circles), but how can I use this to say: "Looks like most users go to either site A or site B"?
So how can I couple my clusters to a feature label from my feature matrix?
Or how does one approach this?
Any help is appreciated :) | **So how can I couple my clusters to a feature label from my feature matrix?**
Principal components are not intuitive features. What seems common here is to cluster users based on the PCs and then investigate the clusters in terms of the original features, i.e. extract the different clusters and plot the data for different subsets of features, using different colors for different clusters. It might give you some intuition.
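A minimal scikit-learn sketch of that workflow (toy data; the column names are just taken from the question for illustration):

```
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# toy visit counts per user; columns play the role of the five sites
X = pd.DataFrame(
    np.random.default_rng(0).poisson(1.0, size=(100, 5)),
    columns=["abc.be", "abc.be/a", "abc.be/b", "xyz.be", "xyz.be/a"],
)

pcs = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)

# interpret the clusters in the *original* feature space
print(X.groupby(labels).mean())   # average visits per site, per cluster
```

A cluster whose row shows high averages for the abc.be columns and low ones elsewhere would be your "mostly visits site A" group.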
These two paths can be seen in many PCA results where the information varies with, e.g., time.
For instance, in your case a new user visits less than an established one, and there are most probably more new users than old ones, so the data will be denser around the origin and become sparser the farther you get from it. Such a phenomenon affects your PCs as well, so you'll see some paths along different lines in PC space.
Hope it helps :) |
In the following function, let $n \geq m$.
```
int gcd(int n, int m)
{
if (n % m == 0) return m;
n = n % m;
return gcd(m, n);
}
```
How many recursive calls are made by this function?
* $\Theta (\log\_2 n)$
* $\Omega (n)$
* $\Theta (\log\_2(\log\_2 n))$
* $\Theta ( \sqrt{n} )$
I think the answer is $\Theta (\log\_2(\log\_2n))$, but my book is saying $\Theta (\log\_2 n)$.
My reasoning is as follows. Here we are not dividing the number. If there were a division then it would be $\log n$. But here the operation is $\bmod$. So we will get a very small number after the first call. So it must be $\log \log n$. Am I thinking correctly? | Hint: on input $F\_{n+1},F\_n$, *gcd* makes a recursive call with inputs $F\_n,F\_{n-1}$. Now use the asymptotic formula $F\_n = \phi^n/\sqrt{5} + O(1)$, where $\phi = (1+\sqrt{5})/2 > 1$.
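An empirical check of the hint (illustrative Python), counting the calls made on consecutive Fibonacci numbers, which are the worst case: the count grows linearly in the Fibonacci index, i.e. logarithmically in $n$.

```
def gcd_calls(n, m):
    # count the recursive calls made by the gcd from the question
    if n % m == 0:
        return 1
    return 1 + gcd_calls(m, n % m)

a, b = 1, 1
while b < 10**12:
    a, b = b, a + b
    print(b, gcd_calls(b, a))   # the call count grows like log_phi(b)
```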
Also, while we're at it, $\Theta(\log\_2 n) = \Theta(\log n)$, and similarly $O(\log\_2 n) = O(\log n)$ (and the same for $\log\log n$); try to figure out why (here the base of $\log n$ can be arbitrary, but for definiteness you can choose base $e$). |
Given the general definition of Covariance between two random variables $x$ and $y$:
\begin{equation\*}
\text{Cov}(x,y)=\frac{1}{n}\sum\_{i=1}^n(x\_i-\bar{x})(y\_i-\bar{y})
\end{equation\*}
Does the above definition implicitly assume that every bivariate observation $(x\_i,y\_i)$ has the same relative frequency, namely equal to $\frac{1}{n}$?
I would expect that for bivariate observations with different relative frequency, the above definition (given that the value set of component random variable $x$ has $r$ values while that of $y$ has $s$ values) would become equal to:
\begin{equation}
\text{Cov}(x,y)=\sum\_{i=1}^r\sum\_{j=1}^sf\_{x,y}(x\_i,y\_i)(x\_i-\bar{x})(y\_i-\bar{y})
\end{equation}
with $f\_{x,y}(x\_i,y\_i)$ denotes the relative frequency of the pair $(x\_i, y\_i)$.
Is my reasoning correct or am I wrong? Why? | I interpret this to mean that you are unhappy if you have a data set like this.
$$
X, Y\\
1,1\\
2,3\\
1,1\\
0, -1
$$
In this case, the $(1,1)$ is repeated, so you want to weight it double. However, that is covered by the formula.
$$
cov(X, Y) = \frac{1}{4}\sum\_{i = 1}^4 (X\_i - \bar X)(Y\_i-\bar Y)\\
=\dfrac{(1-1)(1-1) + (2 - 1)(3 - 1) + (1 - 1)(1 - 1) + (0 - 1)(\text{-}1 - 1)
}{4}
$$
The values get repeated in the sum, so there is no need to weight observations according to how many times they appear. |
I'm interested in learning how to create the type of visualizations you see at <http://flowingdata.com> and informationisbeautiful. EDIT: Meaning, visualizations that are interesting in of themselves -- kinda like the NY Times graphics, as opposed to a quick something for a report.
What kinds of tools are used to create these -- is it mostly a lot of Adobe Illustrator/Photoshop? What are good resources (books, websites, etc.) to learn how to use these tools for data visualization in particular?
I know *what* I want visualizations to look like (and I'm familiar with design principles, e.g., from Tufte's books), but I have no idea *how* to create them. | Already mentioned processing has a nice set of books available. See: [1](http://rads.stackoverflow.com/amzn/click/0262182629), [2](http://rads.stackoverflow.com/amzn/click/144937980X), [3](http://rads.stackoverflow.com/amzn/click/0123736021), [4](http://rads.stackoverflow.com/amzn/click/159059617X), [5](http://rads.stackoverflow.com/amzn/click/0470375485), [6](http://rads.stackoverflow.com/amzn/click/0596514557), [7](http://rads.stackoverflow.com/amzn/click/1568817169)
You will find lots of stuff on the web to help you start with R. As next step then ggplot2 has excellent web [documentation](http://had.co.nz/ggplot2/). I also found Hadley's [book](http://rads.stackoverflow.com/amzn/click/0387981403) very helpful.
Python might be another way to go. Especially with tools like:
* [matplotlib](http://matplotlib.sourceforge.net/)
* [NetworkX](http://networkx.lanl.gov/)
* [igraph](http://cneurocvs.rmki.kfki.hu/igraph/)
* [Chaco](http://code.enthought.com/chaco/)
* [Mayavi](http://code.enthought.com/projects/mayavi/)
All projects are well documented on the web. You might also consider peeking into [some](http://rads.stackoverflow.com/amzn/click/1430218436) [books](http://rads.stackoverflow.com/amzn/click/1847197906).
Lastly, [Graphics of Large Datasets](http://rads.stackoverflow.com/amzn/click/0387329064) book could be also some help. |
An undirected graph is a Near-Clique if adding one more edge would make it a clique. Formally, a graph $G=(V,E)$ contains a near-clique of size $k$ if there exists $S\subseteq V$ and $u,v\in S$ where $|S|=k$, $(u,v)\notin E$, and $S$ forms a clique in $(V,E\cup\{(u,v)\})$. How can I show a direct reduction from Near-Clique to Clique? | Hint: Given an instance of clique, add two new vertices $u,v$, and connect each of them to all of the original vertices. |
I'm graduated now, and swear I remember this exact kind of problem coming up in my Bayesian statistics class, but I can't remember what the answer was.
**The Problem**
So my wife's brother is red/green colorblind (xY), but her father is not(XY), and her mother is not, so her mother must be a carrier(Xx). Red/green color-blindness is recessive on the X Chromosome(x), which means in order to have the phenotype (actually have it), all of your X chromosomes must have the gene(xx or xY). This means women must get it from both their mother and father, while men must get it from their mother.
As a result, my wife has a 50% chance of being a carrier(Xx or XX), while all of my sons have a 25% chance of being red/green colorblind(XY or xY).
What I'm trying to figure out is if my first son ends up NOT being red-green colorblind (XY), does that change the probability that my wife is a carrier, since I've now observed one datapoint that refutes that possibility? If so, by how much?
**My Thoughts**
I think that Frequentists would say that it remains a 50% chance due to them being Independent events, but again, I can't recall exactly how it worked. I may be spacing it, but if my memory serves, Bayesians would include the original assumption(50%) as essentially a datapoint, but would subjectively give it more or less weight. I'm wondering if there's an objective way to handle this.
Of course, I can always calculate the odds of her being a carrier and how unlikely it is that she would have X sons without color-blindness (simple Geometric Distribution), but that doesn't tell me anything about her original probability. | Here is my simple attempt.
We have a prior probability of your wife being a carrier, $\pi(Y = 1) = 0.5$, and the probability of your first son *not* being colourblind ($X = 0$): $p(X = 0|Y = 1) = 0.5,\ p(X = 0|Y = 0) = 1$. Now, using the simple Bayes rule, we have:
$$\pi(Y = 1|X = 0) = \frac{p(X = 0| Y = 1)\pi(Y = 1)}{p(X = 0)} = \frac{p(X = 0| Y = 1)\pi(Y = 1)}{p(X = 0| Y = 1)\pi(Y = 1) + p(X = 0| Y = 0)\pi(Y = 0)} = \frac{\frac{1}{2} \* \frac{1}{2}}{\frac{1}{2} \* \frac{1}{2} + 1 \* \frac{1}{2}} = \frac{1}{3}$$ |
I recently started reading about Descriptive Complexity, the branch of Complexity Theory that studies the logical languages needed to express complexity classes. The main milestone in the area seems to be Neil Immerman's book, but this is already quite old. It seems like this line of research is dead. Is this the case? If so, why? | I also have the impression that Descriptive Complexity is a less active area of research nowadays. Nevertheless, there are some topics in which people are still active:
1. **Rank logics:**
* [Rank Logic is Dead, Long Live Rank Logic! by Grädel and Pakusa](https://arxiv.org/abs/1503.05423)
* [Symmetric Circuits for Rank Logic by Dawar and Wilsenach](https://arxiv.org/abs/1804.02939)
* [Separating Rank Logic from Polynomial Time by Lichter](https://arxiv.org/abs/2104.12999)
2. **Choiceless Polynomial Time:**
* [Canonization for Bounded and Dihedral Color Classes in Choiceless Polynomial Time by Lichter and Schweitzer](https://arxiv.org/abs/2010.12182)
* [Choiceless Logarithmic Space by Grädel and Schalthöfer](https://drops.dagstuhl.de/opus/volltexte/2019/10975/)
3. **Dynamic Complexity:**
* [Dynamic Complexity of Parity Exists Queries by Vortmeier and Zeume](https://arxiv.org/abs/1910.06004)
* [Reachability Is in DynFO by Datta, Kulkarni, Mukherjee, Schwentick and Zeume](https://dl.acm.org/doi/abs/10.1145/3212685)
* [PhD thesis of Thomas Zeume](https://d-nb.info/110226881X/34)
4. **Other interesting things:**
* [Descriptive Complexity for Counting Complexity Classes by Arenas Muñoz and Riveros](https://ieeexplore.ieee.org/abstract/document/8005150)
* [Descriptive complexity of real computation and probabilistic independence logic by Hannula, Kontinen, Van den Bussche and Virtema](https://arxiv.org/abs/2003.00644)
* [Descriptive Complexity of Deterministic Polylogarithmic Time by Ferrarotti et al](https://link.springer.com/chapter/10.1007/978-3-662-59533-6_13)
* [On the Power of Symmetric Linear Programs by Atserias, Dawar and Ochremiak](https://ieeexplore.ieee.org/abstract/document/8785792)
* [Traversal-invariant characterizations of logarithmic space by Bhaskar, Lindell and Weinstein](https://arxiv.org/abs/2006.07067)
The list is not supposed to be complete; it is just meant to give you a glimpse of the kinds of problems people are looking at.
I know that Assembly is the lowest level of the stack besides machine code, but are the languages above it all considered "high-level languages", or is there a hierarchy amongst these languages? For instance, are C++, Ruby, and Python all "high-level languages"? | There are supposedly generations of languages. According to [Wikipedia](https://en.wikipedia.org/wiki/Fifth_generation_computer):
* First generation: Machine language.
* Second generation: Low-level programming languages such as assembly language.
* Third generation: Structured high-level programming languages such as C, COBOL and FORTRAN.
* Fourth generation: Domain-specific high-level programming languages such as SQL (for database access) and TeX (for text formatting).
The fifth generation was supposed to be developed by Japan's fifth generation project in the 1980s but didn't really go anywhere. Some people consider constraint-based languages to be fifth generation ([Wikipedia](https://en.wikipedia.org/wiki/Fifth-generation_programming_language)).
There is also a supposed hierarchy of language power, expressed by Paul Graham as the "[Blub Paradox](https://en.wikipedia.org/wiki/Paul_Graham_(programmer)#The_Blub_paradox)" in his essay [Beating the Averages](http://www.paulgraham.com/avg.html). In this controversial approach, languages such as Basic and Cobol are low in the hierarchy, Python and Perl would be in the middle, and Lisp would be at the top. |
I plotted the ROC curve for a RandomForest classifier and this is what I get:
[![enter image description here](https://i.stack.imgur.com/jAyF2.png)](https://i.stack.imgur.com/jAyF2.png)
The shape looks weird to me. Can somebody help me make sense of it? And is this shape 'common', not to say normal?
Thank you. | Imagine you have a blue line drawn from the point (0,0) to the point (0, 0.51). Do you see a common shape now?
That happens because you have few or no predicted probabilities above roughly 0.9, so the point with zero false positives is never reached.
The ROC curve you are looking at is perfectly normal.
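For reference, here is a minimal, self-contained scikit-learn sketch (on synthetic data standing in for the asker's dataset) showing how such a curve is usually produced; printing `proba.max()` and the first few `(fpr, tpr)` pairs from `roc_curve` makes it easy to see where the plotted curve starts:

```
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; replace with your own X and y.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]        # predicted probability of the positive class

fpr, tpr, thresholds = roc_curve(y_test, proba)
print("highest predicted probability:", proba.max())
print(list(zip(fpr, tpr))[:3])                 # where the plotted curve begins

plt.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y_test, proba):.3f}")
plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance diagonal
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```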
The empirical cumulative distribution function of a random variable, given observations $x_{(k)} > x_{(k-1)}$, $k \in \mathbb N$, $k \le n$, is defined as $F_{emp}(x_{(k)} > X \ge x_{(k-1)}) = \frac{k}{n+1}$ and $F_{emp}(X \ge x_{(n)})=1$.
Why? As long as we're interpolating, wouldn't it make sense to use some interpolation method with less error? A simple nearest neighbour or piecewise average interpolant would be an improvement, and a cubic interpolant would get us a differentiable empirical density function, too.
The above definition won't even give you the piecewise infimum of the cdf, because the variable is random. It certainly approaches the true function as $n\to\infty$, but then so would any other interpolant. Surely at least linear interpolants were considered. | The EDF is the CDF of the population constituted by the data themselves. This is *exactly* what you need to describe and analyze any resampling process from the dataset, including nonparametric bootstrapping, jackknifing, cross-validation, etc. Not only that, it's perfectly general: any kind of interpolation would be invalid for discrete distributions. |
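A small numerical sketch of the point above: the step EDF (here the usual $k/n$ version) is exactly the CDF of a single draw made uniformly at random from the data, i.e. of one bootstrap resample point. All names and the sample itself are invented for illustration:

```
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=30)                     # any observed sample

def edf(x, data):
    """Fraction of observations <= x -- the usual step-function EDF."""
    return np.mean(data[None, :] <= np.asarray(x)[:, None], axis=1)

grid = np.linspace(data.min() - 1, data.max() + 1, 7)
resamples = rng.choice(data, size=100_000, replace=True)   # bootstrap draws

for g, f in zip(grid, edf(grid, data)):
    print(f"x = {g:6.2f}   EDF = {f:.3f}   P(resample <= x) = {np.mean(resamples <= g):.3f}")
```

Any smoothed or interpolated version would put probability on values that were never observed, which is exactly what fails for discrete distributions.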
Is there a Box-Cox-like transformation for independent variables? That is, a transformation that optimizes the $x$ variable so that `y ~ f(x)` gives a more reasonable fit for a linear model?
If so, is there a function to perform this with `R`? | There are many advantages to making the estimation of covariate transformations a formal part of the estimation process. This recognizes the number of parameters involved and produces good confidence interval coverage and type I error preservation. Regression splines are among the best approaches, and splines work with zero and negative values of $X$, unlike logarithmic approaches.
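The question asks about `R`, where regression splines can be used directly inside a model formula via the base `splines` package (e.g. `lm(y ~ ns(x, df = 4))`). Purely to illustrate the idea of replacing a guessed power transformation with a flexible spline basis, here is a minimal Python sketch on invented data (it assumes a scikit-learn version that includes `SplineTransformer`):

```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

# Invented data with a nonlinear x-y relationship; x spans negative and
# positive values, which a log transform could not handle but a spline can.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=300)
y = np.sin(x) + 0.5 * x + rng.normal(scale=0.3, size=x.size)

# Cubic regression-spline basis for x, followed by an ordinary linear model.
model = make_pipeline(SplineTransformer(degree=3, n_knots=6), LinearRegression())
model.fit(x.reshape(-1, 1), y)

grid = np.linspace(-3, 3, 5).reshape(-1, 1)
print(np.round(model.predict(grid), 2))        # fitted f(x) at a few points
```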