input | target
---|---
I want to determine the minimum and maximum number of leaves of a complete tree(not necessarily a binary tree) of height $h$.
I already know how to find the minimum ($h+1$) and maximum ($2^{h+1}-1$) number of **nodes** from the height, but what about *leaves*? Is there a way to determine them knowing nothing but the height of the tree? | You need to use one intersection operation. It is a known closure property that if two languages $A, B$ are regular, then the intersection $A \cap B$ will also be regular. In this case, $A$ is the given regular language. The other one, $B = \textrm{Even}(\Sigma^\*)$, is the language of all even-length strings. The language you want to prove regular is the language of all strings that are in $A$ AND have even length (i.e., are in $B$).
It is easy to show that $B$ is regular as well. Its regular expression is `(..)*`, where the period is the metacharacter that matches any character in the alphabet.
Proving your language regular thus amounts to showing that $B$ is regular and then concluding that $A \cap B$ is also regular. However, actually determining a regular expression for the intersection can take [doubly exponential time](https://math.stackexchange.com/a/913165/573291). |
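As a quick illustration of the closure argument (not part of the original answer; the alphabet, the transition table for $A$, and the state names below are made up), here is a minimal Python sketch of the product construction behind "$A \cap B$ is regular":

```python
# Product construction: run a DFA for A and the two-state "even length" DFA in
# lockstep; accept iff both accept. Hypothetical A: words over {a, b} ending in 'a'.
delta_A = {("q0", "a"): "q1", ("q0", "b"): "q0",
           ("q1", "a"): "q1", ("q1", "b"): "q0"}
accept_A = {"q1"}

def accepts_intersection(word):
    state, parity = "q0", 0           # start state of A, and length parity 0 (even)
    for ch in word:
        state = delta_A[(state, ch)]  # step the DFA for A
        parity ^= 1                   # step the even-length DFA
    return state in accept_A and parity == 0

print(accepts_intersection("ba"))     # True: ends in 'a' and has even length
print(accepts_intersection("a"))      # False: odd length
```

The product automaton has only $|Q\_A| \times 2$ states, which is why closure under intersection is cheap even though a regular *expression* for the intersection can blow up.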
I have two sets of 2-dimensional coordinates on an integer grid, $A$ and $B$
$A = \{(x\_{A1},y\_{A1}), (x\_{A2}, y\_{A2}), (x\_{A3}, y\_{A3}), \dots\}$
$B = \{(x\_{B1},y\_{B1}), (x\_{B2}, y\_{B2}), (x\_{B3}, y\_{B3}), \dots\}$
I need to find all coordinates in $B$ that are directly above, below, or to the left or right of any coordinate in $A$.
So they differ by one of $\{(1,0), (-1,0), (0,1), (0,-1)\}$ from the matching coordinate(s) in $A$.
As an example,
$A = \{(0,0), (1,2), (2,2)\}$
$B = \{(0,1), (1,2), (0,3)\}$
would result in
$R = \{(0,1), (1,2)\}$
How can I efficiently do this? | Store $B$ in a hashtable.
For each element of $A$, check whether it is in the hashtable. Also, for each element of $A$, for each of its four neighbors, check whether that neighbor is in the hashtable.
The expected running time will be approximately $O(|A|+|B|)$.
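A minimal Python sketch of this (using the example sets from the question; a `set` plays the role of the hashtable, and only the four-neighbor lookups are shown):

```python
# Find all points of B that are a unit step away from some point of A.
A = {(0, 0), (1, 2), (2, 2)}
B = {(0, 1), (1, 2), (0, 3)}

B_set = set(B)                      # the "hashtable" of B
offsets = [(1, 0), (-1, 0), (0, 1), (0, -1)]

result = set()
for (x, y) in A:
    for (dx, dy) in offsets:        # look at the four neighbors of each A point
        if (x + dx, y + dy) in B_set:
            result.add((x + dx, y + dy))

print(result)                       # {(0, 1), (1, 2)}
```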
---
If $A,B$ are sparse, a further constant-factor speed-up is possible.
Given a location $(x,y)$ in $A$, compute $x'= \lfloor x/2 \rfloor$, $y'=\lfloor y/2 \rfloor$, and then store $(x,y)$ in a hashtable keyed on $(x',y')$. This basically breaks the grid up into $2 \times 2$ blocks, with all locations in the same $2 \times 2$ block treated identically and hashed identically. Now given a location $(u,v)$ in $B$, you can find three $2 \times 2$ blocks that cover $(u,v)$ and its neighborhood. So, look up those three blocks in the hashtable, check whether any of them contain any points of $A$, and if so, check whether those points are in the neighborhood of $(u,v)$. In the average case, you'll need to do about 3 hashtable lookups per point of $B$, rather than 5.
The expected running time is still $O(|A|+|B|)$, but the constant factor is reduced by about a ratio of $5/3$. |
I have a conceptual question about why (processing power/storage aside) you would ever just use a regular linear regression without adding polynomial features. It seems like adding polynomial features (without overfitting) would always produce better results? I know linear regression can fit more than just a line, but that is only once you decide to add polynomial features, correct? My experience is with Python, using sklearn's libraries. | The trouble with this is that, yes, 500 heads and 500 tails is awful evidence against $H\_0: p=0.5$. However, that is also awful evidence against $H\_0: p=0.50000001$.
Well which is it, $0.5$ or $0.50000001?$ Those numbers aren’t equal. Sure, they’re close, but they’re not equal.
You don’t know which it is, so you don’t really have evidence in favor of $p=0.5$.
(And $0.49999$. And $0.500103$. And $0.500063$. So many other values of $p$ are totally plausible for 500 heads and 500 tails.)
What you can do is something like two one-sided tests: TOST. The gist of TOST is to show that $p>0.501$ is unlikely and $p<0.499$ is unlikely, so you have confidence that $p\in(0.499,0.501)$.
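A minimal sketch of the TOST idea for a binomial proportion using `scipy.stats.binomtest` (the bounds 0.499/0.501 are the ones from the paragraph above; note that with such narrow bounds, 1000 flips are far too few to reject either one-sided test):

```python
from scipy.stats import binomtest

k, n = 500, 1000          # 500 heads out of 1000 flips
low, high = 0.499, 0.501  # equivalence bounds

# Two one-sided tests: reject "p <= low" and reject "p >= high".
p_lower = binomtest(k, n, p=low,  alternative="greater").pvalue
p_upper = binomtest(k, n, p=high, alternative="less").pvalue

tost_p = max(p_lower, p_upper)  # the TOST p-value is the larger of the two
print(p_lower, p_upper, tost_p)
```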
<https://en.wikipedia.org/wiki/Equivalence_test> |
**Background**
I'm in the process of attempting to improve part of our data storage and analysis architecture. Without getting into a lot of details, at a certain part of our data analysis process we have a need to store large quantities (~100s of millions) of small pieces of unique data. The data looks like this:
```
ID, 20 bytes (immutable)
Hits, unsigned 64bit int (mutable)
Value1, arbitrary length byte array (immutable)
Value2, arbitrary length byte array (immutable)
```
I currently have this data stored in two parts, a B+Tree index which maps the keys to unsigned 64bit integer values. Those values are file offsets in a data file which contains a structure like:
```
[Hits] UInt64
[LengthOfValue1] UInt32
[Value1DataBlob] N-bytes
[LengthOfValue2] UInt32
[Value2DataBlob] N-bytes
```
As new values are posted to this data store, the code performs a lookup in the B+Tree. If the tree contains the value, the hit count is incremented in the data file. If the value is not there, a new entry is appended to the end of the data file, and its start offset is then inserted into the B+Tree.
Later, after this process is complete, we will enumerate the data performing more processing on it. What is important here though is that, if the key is already in our system, we are incrementing the hits on that key. This is essentially a cache, which is tracking hits on each piece of data as it's encountered.
What we are finding is that as the B+Tree grows larger, insert times become VERY slow. Lookup remains very fast (as you might expect).
**Question**
So -- does anyone know of another way to do this, where unique checks are lightning fast, and so are inserts? We really don't care about later search performance, because once we go through the initial build of this dataset, all we will use it for is to enumerate the results. We're not going to be doing random searches against the dataset in the long term.
Please do not suggest any kind of off-the-shelf database system. We've tried a lot of them, and this custom solution is faster than any of them, with a smaller data storage footprint.
We're just trying to improve our custom solution, and have hit a wall with our collective CS knowledge. Maybe this is the fastest way to do this, or maybe a different structure would perform better than the B+Tree for inserts at this scale? | * If you are willing to change your model, quite a few lower bounds in data structures are tight. See [Lower Bounds for Data Structures](https://cstheory.stackexchange.com/questions/5517/lower-bounds-for-data-structures) for pointers to good references for lower bounds in data structures.
* From the $\Omega(n \log n)$ bound for sorting in the comparison model that some people have mentioned here, you can obtain a similar bound for the convex hull problem by considering the case where the input is composed of points along the graph of an increasing function in the first quadrant of the plane. |
My research question is : What’s the relationship between the education of a student’s parents and one’s SAT scores?
I will carry out my survey by interviewing students about their final SAT score and their parents' education, categorized into below high school, high school, college, grad school, and PhD.
Should I use Chi-squared, regression, or Pearson’s Correlation Coefficient? | Regression is a superior modeling technique because the association measure has a scale that reflects the units of the two measures you are relating. The slope coefficient is interpreted as an expected difference in SAT scores comparing two different parental education levels. The 95% CI summarizes the uncertainty in this estimate in a manner that is helpful for understanding both the effect size and the statistical significance when making inference on a population level association. |
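A hedged sketch of how such a regression could be fit in Python with `statsmodels` (the data frame and its column names `sat` and `parent_edu` are hypothetical, not from the question):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student, with SAT score and parental education.
df = pd.DataFrame({
    "sat": [1050, 1200, 1310, 980, 1400, 1150, 1500, 1100],
    "parent_edu": ["high school", "college", "college", "below high school",
                   "grad school", "high school", "phd", "college"],
})

# Parental education enters as a categorical predictor; each coefficient is the
# expected SAT difference relative to the reference category.
model = smf.ols("sat ~ C(parent_edu)", data=df).fit()
print(model.params)
print(model.conf_int(alpha=0.05))  # 95% CIs for those expected differences
```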
I know that every model has assumptions and works best when those assumptions hold. So, theoretically, there are problem domains where linear regression works best. My question is rather this: can you give a real-life problem domain where linear regression is known to perform better (i.e., have higher predictive accuracy) than more sophisticated methods like neural networks, support vector machines, or random forests? | If the underlying process is linear, linear regression will be superior. What comes to mind is estimating the calories in food; the total calories are strictly linear in the components (i.e., they won't interact with each other). A neural network (etc.) could be trained to do this, but it would be more computationally demanding and prone to overfitting.
That said, I think that people will choose linear regression over machine learning less because of the problem domain and more because of their own goals. If you want an accurate prediction, then a well-trained model will be superior most of the time and linear regression wouldn't be considered seriously. In other cases, predictive accuracy isn't really interesting: social scientists will frequently publish regression models with accuracy (that is to say, $R^2$) well below .1, because the interest is in the contribution of individual factors to the outcome rather than predicting the value of Y given X. A neural network, rather, doesn't really care about the marginal effect of changing an input, it just runs the model again and returns its prediction. |
If a language is accepted by both a DPDA and an NPDA, is it then regular? Is that correct? I'm confused because somewhere I studied that a regular language is exactly one accepted by a finite automaton... | If it's accepted by a DPDA, then clearly it is also accepted by an NPDA with exactly one possible transition for each configuration.
However, there are languages accepted by DPDAs that are not accepted by any DFA. The canonical example of a non-regular language, $L = \{a^n b^n \mid n \geq 0\}$, is accepted by a DPDA by empty stack (designing such a DPDA is an easy exercise).
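A minimal Python sketch of the deterministic single-stack idea behind such a DPDA (a plain simulation that pushes on $a$'s and pops on $b$'s, not a formal PDA definition):

```python
def accepts_anbn(word):
    """Deterministically recognize { a^n b^n : n >= 0 } with a single stack."""
    stack = []
    seen_b = False
    for ch in word:
        if ch == 'a':
            if seen_b:            # an 'a' after a 'b' can never be accepted
                return False
            stack.append('A')     # push one symbol per 'a'
        elif ch == 'b':
            seen_b = True
            if not stack:         # more b's than a's
                return False
            stack.pop()           # pop one symbol per 'b'
        else:
            return False
    return not stack              # accept iff the stack is empty ("empty stack")

print(accepts_anbn("aabb"), accepts_anbn("aab"), accepts_anbn(""))  # True False True
```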
You're right that every finite automaton accepts a language that is regular. However, PDAs are not finite-state in the larger sense of "state" that includes not only the internal control state but also the contents of the stack. The stack is unbounded in size, hence PDAs are not finite-state. |
Regarding [p-value](/questions/tagged/p-value "show questions tagged 'p-value'")s, I am wondering why $1$% and $5$% seem to be the gold standard for `"statistical significance"`. Why not other values, like $6$% or $10$%?
Is there a fundamental mathematical reason for this, or is this just a widely held convention? | I have to give a non-answer (same as [here](https://stats.stackexchange.com/a/783/442)):
>
> "... surely, God loves the .06 nearly as much as the .05. Can there be
> any doubt that God views the strength of evidence for or against the
> null as a fairly continuous function of the magnitude of p?" (p.1277)
>
>
>
Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. *American Psychologist*, 44(10), 1276-1284. [pdf](http://socrates.berkeley.edu/~maccoun/PP279_Rosnow.pdf)
The paper contains some more discussion on this issue. |
>
> I know that if it exists, a regular, unbiased estimator $T$ for $\tau(\theta)$ attains the Cramér-Rao Lower Bound (next, CRLB) if and only if I can decompose the score function as follows: $S(\theta)=\frac{\partial}{\partial\theta}\log f\_\mathbf{x}(\mathbf{x};\theta)=k(\theta,n)[T(\mathbf{X})-\tau(\theta)]$, where $k(\theta,n)$ is a generic function.
>
>
>
In particular, what is the link between the CRLB, this last property, and MLEs ${\hat{\theta}}$? I mean, is it possible that something like this holds: ${\hat{\theta}}$ always satisfies the decomposition above and thus always attains the CRLB? | It's difficult to identify the correct level of rigor for an answer. I added the "regularity" condition to your question, since there are [unbiased estimators](https://en.wikipedia.org/wiki/Hodges%27_estimator) that beat the Cramer-Rao bound.
Regular exponential families have score functions for parameters that take this linear form. So we have some idea that this notation is not arbitrary; it comes from estimating "usual" things that produce reasonable outcomes.
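A standard textbook illustration of that linear form (not part of the original answer) is the $N(\mu,\sigma^2)$ model with $\sigma^2$ known:

$$S(\mu)=\frac{\partial}{\partial\mu}\sum\_{i=1}^n \log f(x\_i;\mu)=\sum\_{i=1}^n\frac{x\_i-\mu}{\sigma^2}=\underbrace{\frac{n}{\sigma^2}}\_{k(\mu,n)}\Big[\underbrace{\bar{x}}\_{T(\mathbf{x})}-\underbrace{\mu}\_{\tau(\mu)}\Big],$$

so $T(\mathbf{X})=\bar{X}$ is unbiased for $\tau(\mu)=\mu$, its variance $\sigma^2/n$ equals the CRLB $1/\big(nI\_1(\mu)\big)$, and here $\bar{X}$ also happens to be the MLE.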
As you know, obtaining the maximum of a functional (like a likelihood or log-likelihood) involves finding the root of its derivative when it is smooth and the derivative is continuous. For regular exponential families, the linear form means the solution is obtainable in closed form.
When the score has that form, its expectation is 0 and its variance is the (Fisher) information. It was a revelation to me to think of the score as a random variable, but indeed it's a function of $X$. Using the Cauchy–Schwarz inequality, you can show that any biased estimator is the sum of an unbiased estimator and the bias of the original estimator. Therefore the variance is greater for the sum of these two functions. |
I'm wondering if someone could suggest good starting points when it comes to performing community detection/graph partitioning/clustering on a graph that has **weighted**, **undirected** edges. The graph in question has approximately 3 million edges and each edge expresses the degree of similarity between the two vertices it connects. In particular, in this dataset vertices are individuals and edges are a measure of the similarity of their observed behavior.
In the past I followed a suggestion I got here on stats.stackexchange.com and used igraph's implementation of Newman's modularity clustering and was satisfied with the results, but that was on an unweighted dataset.
Are there any specific algorithms I should be looking at? | Gephi implements the Louvain Modularity method: <http://wiki.gephi.org/index.php/Modularity>
cheers |
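Since the question mentions igraph, here is a hedged sketch (not from the answer above) of the same Louvain method via python-igraph's `community_multilevel`; the toy edge list and weights are made up:

```python
import igraph as ig

# Toy weighted, undirected similarity graph: (source, target, weight) tuples.
edges = [("a", "b", 5.0), ("b", "c", 4.0), ("a", "c", 4.5),
         ("d", "e", 3.0), ("e", "f", 3.5), ("d", "f", 4.0),
         ("c", "d", 0.2)]                      # weak bridge between two groups

g = ig.Graph.TupleList(edges, weights=True)

# Louvain modularity optimization, taking edge weights into account.
clusters = g.community_multilevel(weights=g.es["weight"])
print(clusters.membership)                     # e.g. [0, 0, 0, 1, 1, 1]
print(clusters.modularity)
```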
I have a longitudinal (panel) dataset for investment growth for 120 countries covering the time from 1960-2008. Essentially it's viewed as 120 time series.
What I am interested in is grouping countries based on the shape of their growth curves over time. Thus, whether they share a similar curve shape is the only criterion I need for grouping those countries.
I have tried the KmL package (K-means for Longitudinal Data), but it seems that (please correct me if I am wrong) this methodology produces results that group countries exhibiting similar (investment growth) mean values (or magnitudes), not exactly according to similar shape. For example, KmL tends to group countries with high investment growth, median investment growth, low investment growth, etc. The countries within those groups may have very different shapes of curves over time.
What I am looking for disregards the absolute value of investment growth: as long as two countries exhibit a similar pattern in their growth-over-time curve, they should be grouped together in one group.
Could anyone tell me a way to implement this clustering? I have noticed from previous posts that a cointegration test may work. Any suggestions will be greatly appreciated! | All of the recommendations so far rely on the standard moment-based approaches to time series analysis, and all are a type of HAC model. The question, though, specifically queried the *patterning* or shape in the data. Andreas Brandmaier at the Max Planck Institute has developed a non-moment-based, information- and complexity-theoretic pattern analysis time series model that he calls *permutation distribution clustering*. He's written an R module to test for similarities in shape. PDC has a long history in biostatistics as an approach to two-group similarities. Brandmaier's dissertation was on PDC and structural equation modeling trees.
*pdc: An R Package for Complexity-Based Clustering of Time Series*, J Stat Software, Andreas Brandmaier
*Permutation Distribution Clustering and Structural Equation Model Trees*, Brandmaier dissertation PDF
In addition, there is Eamonn Keogh's machine-learning *iSAX* method for this.
<http://www.cs.ucr.edu/~eamonn/> |
Will the IBM PC ever move towards 128 bit architecture? Is that even possible or is the 64 bit architecture we have now the ceiling? | >
> Will the IBM PC ever move towards 128 bit architecture? Is that even possible or is the 64 bit architecture we have now the ceiling?
>
>
>
Those are really two completely separate questions, so I am going to answer them separately.
>
> Is that even possible or is the 64 bit architecture we have now the ceiling?
>
>
>
That really depends on how you define "IBM PC". The IBM PC is a 16 bit architecture, and has been dead since the early 1990s. So, technically speaking, no, you can't have a 128 bit IBM PC, because the IBM PC is 16 bit, and if it were 128 bit, then it wouldn't be an IBM PC anymore.
However, there is nothing stopping anyone from doing what Intel did when they introduced the IA-32 architecture by extending the x86 architecture from 16 bit to 32 bit, or what AMD did when they introduced the AMD64 architecture by extending the IA-32 architecture from 32 bit to 64 bit.
There is also nothing stopping anybody from introducing a 65 bit or 96 bit or 512 bit architecture.
So, yes, it is definitely possible to design an architecture that is a careful extension of the current mainstream AMD64-based "PC successor" architecture. It wouldn't be an IBM PC, though, just like what we had since the 1990s weren't IBM PCs either.
>
> Will the IBM PC ever move towards 128 bit architecture?
>
>
>
"ever" is a really long time, but I doubt it.
The current (and all currently known future planned) implementations of the AMD64 architecture, regardless of whether they are being designed and manufactured by Intel, by AMD, or someone else, are limited to a virtual address space of 48 bit and a physical address space of 48 bit. This means that the limit for both virtual memory and physical memory is 256 TiByte. However, there are currently no known or planned motherboards or chipsets that support even close to that amount of memory. (Also, I believe that while current CPUs support 48 bit physical addresses, they typically do not actually have 48 address pins, so the actual maximum is even lower.)
The virtual address space can theoretically be extended up to the full amount of 64 bit. The physical address space is limited by the page table format to 52 bit (4 PiByte).
So, a more conservative first step before we move to 128 bit would be to redesign the page table format such that the full 64 bit can be used.
At the moment, the 48 bit limitation does not seem to be a problem, even remotely. But even if we assume that we are going to run out of 48 bit address space tomorrow, and that address space requirements will grow similar to Moore's Law, doubling every 1.5–2.5 years, it will take another 20–40 years until we run out of 64 bit address space.
The largest supercomputer in the world, [Fugaku](https://wikipedia.org/wiki/Fugaku_(supercomputer)) consists of 158976 compute nodes. Each node has 32 GiByte of RAM, for a total of 4.85 PiByte. Each group of 16 nodes has a 1.6 TByte SSD as level 1 storage, for a total of 15.53 PByte. Plus, there is a shared 150 PByte cluster filesystem (Lustre) for the whole cluster.
So, the total amount of storage of the largest computer in the world currently is about 171.4 PByte or 152.2 PiByte, which could be byte-addressed with 57.24 bits, and even if you want to address every single individual bit, you would only need 60.24 bits. So, even if we assume that all supercomputers are this big, 64 bit would still be enough to address every single individual bit of the total combined storage of the TOP 10 supercomputers in the world. But note that this is the total sum of storage (not just RAM, but hard disk and network filesystem) in the entire cluster (not just each node), and that we normally address bytes, not bits.
In reality, there is not a single OS kernel running on the entire cluster that needs to address all that storage, each of the 158976 nodes is running its own OS kernel, and only needs to address its 32 GiByte of local RAM.
There is a general trend in the industry, where computers aren't getting "bigger" but are instead getting "more". For example, the total amount of RAM in my home has grown by a factor of 200 over the last 20 years, but 20 years ago, it was all in my desktop, and now it is distributed among my two laptops, two phones, two tablets, router, and NAS, so even my "largest" computer only has about 50 times as much RAM as my old desktop. (I am cheating a bit because I also use cloud services heavily, but those are actually also not a big computer but hundreds of thousands of medium sized ones, each with their own individual address space.)
The clock frequency per core in the current top supercomputers is only about 10 times that from 2000. The memory per node is about 10–50 times that from 2000. (For example, the top supercomputer in 2000, [ASCI White](https://wikipedia.org/wiki/ASCI_White), had 12 GiByte per node, Fugaku has 32 GiByte per node, so only less than 3 times the amount of RAM.) But the number of cores is about 1000–10000 times that of 2000! Each Fugaku node has 48 cores, meaning Fugaku has 7.6 *million* custom ARM64 cores. The supercomputer with the most number of cores in the [TOP 10 supercomputers of November, 2000](https://top500.org/lists/top500/2000/11/) has 9632 cores (interestingly, that one is not the fastest), and there are even two supercomputers in the TOP 10 from November, 2000 that only have 100 and 112 cores (and again, interestingly, they are not the slowest).
So, in 20 years, the physical address space requirements for the world's top supercomputer have only grown by less than 1.5 bit from 33.6 bit for 12 GiByte to 35 bit for 32 GiByte.
The thing is, humans are very bad at understanding exponential growth and tend to severely underestimate it. When Intel moved from 8 bit to 16 bit, they didn't double the address space, they increased it by a factor of 256. When Intel moved from 16 bit to 32 bit, they didn't double the address space, they increased it by a factor of 65536. When AMD moved from 32 bit to 64 bit, they didn't double the address space, they increased it by a factor of over 4 billion.
So, I personally doubt that we will ever see a 128 bit architecture. We *might* see more than 64 bit someday, but I believe it is more likely to be an 80 bit or 96 bit architecture than 128 bit.
Note that this does not mean that there might not be "labels" that need more than 64 bit. For example, IPv6 addresses are 128 bit. The IBM AS/400 (which still exists to this day as IBM i, after many name changes) had 128 bit object labels even back in the 1980s, but these contain not just a memory address but also type information, ownership information, access rights, bookkeeping data, etc. The actual CPU architecture, however, was never 128 bit. It was originally a custom 48 bit CISC architecture specially designed for the AS/400, which was later replaced with a slightly extended 64 bit PowerPC architecture and has now been merged into the POWER architecture.
While I believe it is possible that we might see bigger-than-64 bit architectures in the future, I seriously doubt that we will see another big change to the "PC successor" architecture. All current mainstream Operating Systems are highly portable (for example, Linux runs on a dozen architectures or more, both macOS and Windows NT run on AMD64 and ARM64, and have run on even more architectures in the past, e.g. macOS on PowerPC and m68k, Windows NT on Sparc, PowerPC, MIPS, Alpha, and i860). Which means that Operating Systems aren't really tightly tied to a specific architecture anymore. And the rise of platforms such as Java and .NET, the rise of high-level languages like, well, pretty much every language except C, C++, and (maybe) Rust, the rise of Web Applications and the Cloud mean that switching architectures is rather painless. (And actually, a lot of modern C and C++ code tends to be rather high-level and mostly platform-independent as well.)
Even for native code that we have lost the source for, modern emulation and re-engineering technologies make it possible to move them to a new architecture. Heck, I am writing this very answer from an ARM64 laptop that executes native AMD64 code in emulation almost as fast, sometimes even faster than my twice as expensive AMD64 laptop!
So, it simply does not make sense to keep piling band-aid after band-aid on a 1970s architecture, when we could just as easily design a 2030s architecture instead. |
Suppose we are given an array of positive integers $P = [p\_1, p\_2, \dots, p\_N]$ where each $p\_i$ represents the price of a product on a different day $i = 1 \dots N$.
I would like to design an algorithm to find the maximum profit that you can given this array of prices. Profit is made by buying at a given date $i$ and selling at a later date $j$ so that $i \leq j$.
One easy solution is the following "exhaustive algorithm":
```
profit = 0
for i = 1 to N-1
for j = i+1 to N
if P(j) - P(i) > profit
profit = P(j) - P(i)
```
The issue with this however is that it takes time $\Omega(N^2)$.
Can anyone think of something faster? | The first observation is that the strategy of buying at the lowest price or selling at the highest price does not always maximize the profit. As you also note, the simple brute-force method works by trying every possible pair of buy and sell dates in which the buy date precedes the sell date. A period of $n$ days has $n \choose 2$ such pairs of dates, and $n \choose 2$ is $\Theta(n^2)$.
To achieve $o(n^2)$ running time, a simple transformation is applied to the input array. Instead of looking at the daily prices given, we will instead work with the daily *change* in price, where change on day $i$ is the difference between the prices after day $i-1$ and after day $i$. With a transformed input array like this, we now want to find the nonempty, contiguous subarray whose values have the largest sum. This contiguous subarray is called the [maximum subarray](http://en.wikipedia.org/wiki/Maximum_subarray_problem).
For a detailed divide-and-conquer algorithm running in $\Theta(n \log n)$ time, see for example Chapter 4 of the [Cormen et al. book](http://en.wikipedia.org/wiki/Introduction_to_Algorithms), 3rd edition, page 68-74. The [Wikipedia page](http://en.wikipedia.org/wiki/Maximum_subarray_problem) also mentions Kadane's linear time algorithm and gives pseudocode. |
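A minimal Python sketch of the Kadane-style linear-time approach on the daily changes (the example price lists are made up):

```python
def max_profit(prices):
    """Maximum profit from one buy at day i and one sell at day j >= i.

    Works on the daily changes prices[i] - prices[i-1]: the best buy/sell pair
    corresponds to the maximum-sum contiguous run of changes (Kadane's
    algorithm), and an empty run (buy day = sell day) gives profit 0.
    """
    best = 0      # allowing i == j means the profit is never negative
    running = 0   # best sum of a run of changes ending at the current day
    for today, yesterday in zip(prices[1:], prices):
        change = today - yesterday
        running = max(change, running + change)
        best = max(best, running)
    return best

print(max_profit([7, 1, 5, 3, 6, 4]))  # 5  (buy at 1, sell at 6)
print(max_profit([5, 4, 3, 2]))        # 0  (prices only fall)
```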
In a hotel booking scenario, we are using a clustering model to cluster people's booking behaviors. Here are two of our 15 features:
Feature1: booking\_counts\_yearly, which indicates how many bookings a specific customer has made in a year. E.g., customerA booked 25 times while customerB booked 2 times in 2017, so customerA's `booking_counts_yearly=25` while customerB's `booking_counts_yearly=2`.
Feature2: booking\_5\_stars\_rate, which indicates the proportion of a specific customer's bookings that were 5-star hotels. E.g., customerA booked 25 times in total and 5 out of 25 bookings were 5-star hotels, so customerA's `booking_5_stars_rate` is `5/25=0.2`. CustomerB booked 2 times in total and none of the bookings were 5-star hotels, so customerB's `booking_5_stars_rate` is `0/2=0`.
Let's discuss this case:
1. Customer1 with `booking_counts_yearly=1`,
`booking_5_stars_rate=100%(1)`
2. Customer2 with
`booking_counts_yearly=1`, `booking_5_stars_rate=100%(1)`
3. Customer3
with `booking_counts_yearly=1`, `booking_5_stars_rate=100%(1)`
4. Customer4 with `booking_counts_yearly=15`,
`booking_5_stars_rate=100%(1)`
5. Customer5 with
`booking_counts_yearly=100`, `booking_5_stars_rate=70%(0.7)`
In my case, customer 4 and customer 5 should be in one cluster; it looks like they are the same type of person: "made some bookings through the whole year and, the majority of the time, booked high-class hotels", while customers 1, 2, 3 are more like: "booked once a year, and it happened to be 5-star".
What should I do in feature engineering to make sure these two types of person are well separated? Currently, in the clusters my clustering model gives me, I have a lot of customers 1, 2, 3 (considered as noise) mixed with customer 4 in one single cluster.
Is there any way I can combine these two features so that my model would take the counts and rates both into consideration? | A colleague and I have conducted some preliminary studies on the performance differences between pandas and data.table. You can find the study (which was split into two parts) on our [Blog](https://www.statworx.com/de/blog/pandas-vs-data-table-a-study-of-data-frames/) (You can find part two [here](https://www.statworx.com/de/blog/pandas-vs-data-table-a-study-of-data-frames-part-2/)).
We figured that there are some tasks where pandas clearly outperforms data.table, but also cases in which data.table is much faster. You can check it out yourself and let us know what you think of the results.
EDIT:
If you don't want to read the blogs in detail, here is a short summary of our setup and our findings:
**Setup**
We compared `pandas` and `data.table` on 12 different simulated data sets on the following operations (so far), which we called scenarios.
* Data retrieval with a select-like operation
* Data filtering with a conditional select operation
* Data sort operations
* Data aggregation operations
The computations were performed on a machine with an Intel i7 2.2GHz with 4 physical cores, 16GB RAM and a SSD hard drive. Software Versions were OS X 10.13.3, Python 3.6.4 and R 3.4.2. The respective library versions used were 0.22 for pandas and 1.10.4-3 for data.table
**Results in a nutshell**
* `data.table` seems to be faster when selecting columns (`pandas` on average takes 50% more time)
* `pandas` is faster at filtering rows (roughly 50% on average)
* `data.table` seems to be considerably faster at sorting (`pandas` was sometimes 100 times slower)
* adding a new column appears faster with `pandas`
* aggregating results are completely mixed
Please note that I tried to simplify the results as much as possible to not bore you to death. For a more complete visualization read the studies. If you cannot access our webpage, please send me a message and I will forward you our content. You can find the code for the complete study on [GitHub](https://github.com/STATWORX/blog/tree/master/pandas_vs_datatable). If you have ideas how to improve our study, please shoot us an e-mail. You can find our contacts on GitHub. |
I'm creating Poisson GLMs in R. To check for overdispersion I'm looking at the ratio of residual deviance to degrees of freedom provided by `summary(model.name)`.
Is there a cutoff value or test for this ratio to be considered "significant?" I know that if it's >1 then the data are overdispersed, but if I have ratios relatively close to 1 [for example, one ratio of 1.7 (residual deviance = 25.48, df=15) and another of 1.3 (rd = 324, df = 253)], should I still switch to quasipoisson/negative binomial? I found [here](http://data.princeton.edu/R/glms.html "here") this test for significance: 1-pchisq(residual deviance,df), but I've only seen that once, which makes me nervous. I also read (I can't find the source) that a ratio < 1.5 is generally safe. Opinions? | In the R package AER you will find the function `dispersiontest`, which implements a [Test for Overdispersion](http://www.sciencedirect.com/science/article/pii/030440769090014K) by Cameron & Trivedi (1990).
It follows a simple idea: In a Poisson model, the mean is $E(Y)=\mu$ and the variance is $Var(Y)=\mu$ as well. They are equal. The test simply tests this assumption as a null hypothesis against an alternative where $Var(Y)=\mu + c \* f(\mu)$, where the constant $c < 0$ means underdispersion and $c > 0$ means overdispersion. The function $f(.)$ is some monotonic function (often linear or quadratic; the former is the default). The resulting test is equivalent to testing $H\_0: c=0$ vs. $H\_1: c \neq 0$ and the test statistic used is a $t$ statistic which is asymptotically standard normal under the null.
Example:
```
R> library(AER)
R> data(RecreationDemand)
R> rd <- glm(trips ~ ., data = RecreationDemand, family = poisson)
R> dispersiontest(rd,trafo=1)
Overdispersion test
data: rd
z = 2.4116, p-value = 0.007941
alternative hypothesis: true dispersion is greater than 0
sample estimates:
dispersion
5.5658
```
Here we clearly see that there is evidence of overdispersion (c is estimated to be 5.57) which speaks quite strongly against the assumption of equidispersion (i.e. c=0).
Note that if you do not use `trafo=1`, it will actually do a test of $H\_0: c^\*=1$ vs. $H\_1: c^\* \neq 1$ with $c^\*=c+1$, which of course has the same result as the other test apart from the test statistic being shifted by one. The reason for this, though, is that the latter corresponds to the common parametrization in a quasi-Poisson model. |
>
> Do you know interesting consequences of (standard) conjectures in complexity theory in other fields of mathematics (i.e. outside of theoretical computer science)?
>
>
>
I would prefer answers where:
* the complexity theory conjecture is as general and standard as possible; I am ok with consequences of the hardness of specific problems too, but it would be nice if the problems are widely believed to be hard (or at least have been studied in more than a couple of papers)
* the implication is a statement that is not known to be true unconditionally, or other known proofs are considerably more difficult
* the more surprising the connection the better; in particular, the implication should not be a statement explicitly about algorithms
"If pigs could fly, horses would sing" type of connections are ok, too, as long as the flying pigs come from complexity theory, and the singing horses from some field of math outside of computer science.
This question is in some sense "the converse" of a [question](https://cstheory.stackexchange.com/q/1920/4896) we had about surprising uses of mathematics in computer science. Dick Lipton had a [blog post](http://rjlipton.wordpress.com/2009/03/11/factoring-could-be-easy/) exactly along these lines: he writes about consequences of the conjecture that factoring has large circuit complexity. The consequences are that certain diophantine equations have no solutions, a kind of statement that can very hard to prove unconditionally. The post is based on work with Dan Boneh, but I cannot locate a paper.
**EDIT:** As Josh Grochow notes in the comments, [his question](https://cstheory.stackexchange.com/a/163/4896) about applications of TCS to classical math is closely related. My question is, on one hand, more permissive, because I do not insist on the "classical math" restriction. I think the more important difference is that I insist on a proven implication from a complexity conjecture to a statement in a field of math outside TCS. Most of the answers to Josh's question are not of this type, but instead give techniques and concepts useful in classical math that were developed or inspired by TCS. Nevertheless, at least [one answer](https://cstheory.stackexchange.com/a/163/4896) to Josh's question is a perfect answer to my question: [Michael Freedman's paper](http://arxiv.org/abs/0810.0033) which is motivated by a question identical to mine, and proves a theorem in knot theory, conditional on $\mathsf{P}^{\#P} \ne \mathsf{NP}$. He argues the theorem seems out of reach of current techniques in knot theory. By Toda's theorem, if $\mathsf{P}^{\#P} = \mathsf{NP}$ then the polynomial hierarchy collapses, so the assumption is quite plausible. I am interested in other similar results. | Here's another example from graph theory. The graph minor theorem tells us that, for every class $\mathcal{G}$ of undirected graphs that is closed under minors, there is a finite obstruction set $\mathcal{Obs(G)}$ such that a graph is in $\mathcal{G}$ if and only if it does not contain a graph in $\mathcal{Obs(G)}$ as a minor. However, the graph minor theorem is inherently nonconstructive and does not tell us anything about how big these obstruction sets are, i.e., how many graphs it contains for a particular choice of $\mathcal{G}$.
In [Too Many Minor Order Obstructions](http://www.jucs.org/jucs_3_11/too_many_minor_order), Michael J. Dinneen showed that under a plausible complexity-theoretic conjecture, the sizes of several of such obstruction sets can be shown to be large. For example, consider the parameterized class $\mathcal{G}\_k$ of graphs of genus at most $k$. As $k$ increases, we can expect the obstruction sets $\mathcal{Obs}(\mathcal{G}\_k)$ to become more and more complicated, but how much so? Dinneen showed that if the polynomial hierarchy does not collapse to its third level then there is no polynomial $p$ such that the number of obstructions in $\mathcal{Obs}(\mathcal{G}\_k)$ is bounded by $p(k)$. Since the number of minor obstructions for having genus zero (i.e. being planar) is just two ($\mathcal{Obs}(\mathcal{G}\_0) = \{K\_5, K\_{3,3}\}$), this superpolynomial growth is not immediately obvious (although I believe it can be proven unconditionally). The nice thing about Dinneen's result is that it applies to the sizes of obstruction sets corresponding to *any* parameterized set of minor ideals $\mathcal{G}\_k$ for which deciding the smallest $k$ for which $G \in \mathcal{G}\_k$ is NP-hard; in all of such parameterized minor ideals the obstruction set sizes must grow superpolynomially. |
I recently came across this toy problem:
You have two sticks of unknown lengths $a>b$ and a measuring device with constant variance $1$ that you can only use **twice**. How can you construct estimators $\hat a,\hat b$ with minimal total MSE?
The solution involves measuring $\hat x=a+b,\hat y=a-b$ (sum and difference of the sticks) and combining them as $\hat a=\frac{\hat x+\hat y}2,\hat b=\frac{\hat x-\hat y}2$, and then each of $\hat a$, $\hat b$ has variance $1/2$. This is the same as what you would get by separately measuring each of $a,b$ twice, so it's optimal. In addition, if you make some boilerplate Gaussian assumptions, it's easy to show that
$$(\hat a,\hat b)\sim MVN((a,b),
\begin{bmatrix}
\frac12 & 0 \\
0 & \frac12
\end{bmatrix})$$
so they actually have the same distribution!
I have two follow-up questions:
1. What's a good intuitive explanation of why we can get a "free" variance reduction with no trade-offs? This is open-ended and multiple answers/analogies to other concepts are welcome! Bonus points for tying this in with sufficient statistics.
2. Can this be generalized to $n$ sticks (ordering is not important) and $n$ uses of the device? | Interesting example. I think some key intuition is right there in your post: You get to measure each stick twice. The magic is not so much about statistics or probability but about how you cleverly arrange the measurements so that you get the nuisance terms to cancel:
Let's simply say that the measured quantity will differ from the true one by some amount $\epsilon\_t$ that differs for every measurement.
Measuring $a$ twice and taking the average gives $\hat{a} = \frac{a + a + \epsilon\_1 + \epsilon\_2}{2}$
Measuring $a+b$ and $a-b$, then adding the two measurements and halving, gives $\hat{a} = \frac{a + b + \epsilon\_1 + a - b + \epsilon\_2}{2} = \frac{a + a + \epsilon\_1 + \epsilon\_2}{2}$ |
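A quick numerical check of this (not in the original answer; the true lengths and simulation size are made up, and the device noise is taken to be standard normal):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n_sim = 5.0, 3.0, 200_000

# Each use of the measuring device adds independent noise with variance 1.
eps1 = rng.normal(0.0, 1.0, n_sim)
eps2 = rng.normal(0.0, 1.0, n_sim)

x_hat = (a + b) + eps1      # first use: measure the sum
y_hat = (a - b) + eps2      # second use: measure the difference

a_hat = (x_hat + y_hat) / 2
b_hat = (x_hat - y_hat) / 2

print(a_hat.var(), b_hat.var())     # both close to 0.5
print(np.cov(a_hat, b_hat)[0, 1])   # close to 0, as in the MVN display above
```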
How could I randomly split a data matrix and the corresponding label vector into a `X_train`, `X_test`, `X_val`, `y_train`, `y_test`, `y_val` with scikit-learn?
As far as I know, `sklearn.model_selection.train_test_split` is only capable of splitting into two, not into three... | The best answer above does not mention that splitting twice using `train_test_split` without adjusting the partition sizes won't give the initially intended partitions:
```
x_train, x_remain = train_test_split(x, test_size=(val_size + test_size))
```
Then **the proportions of the validation and test sets within x\_remain change** and can be computed as
```
new_test_size = np.around(test_size / (val_size + test_size), 2)
# To preserve (new_test_size + new_val_size) = 1.0
new_val_size = 1.0 - new_test_size
x_val, x_test = train_test_split(x_remain, test_size=new_test_size)
```
This way, all of the initially intended partition proportions are preserved. |
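For completeness, a hedged end-to-end sketch with the labels included (the arrays and the 60/20/20 fractions are just examples):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

val_size, test_size = 0.2, 0.2   # desired final fractions

# First split off the training set; the remainder holds validation + test.
X_train, X_remain, y_train, y_remain = train_test_split(
    X, y, test_size=val_size + test_size, random_state=42)

# Rescale the test fraction relative to the remainder, as explained above.
new_test_size = test_size / (val_size + test_size)
X_val, X_test, y_val, y_test = train_test_split(
    X_remain, y_remain, test_size=new_test_size, random_state=42)

print(len(X_train), len(X_val), len(X_test))   # 6 2 2
```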
Logic often states that by overfitting a model, its capacity to generalize is limited, though this might only mean that overfitting stops a model from improving after a certain complexity. Does overfitting cause models to become worse regardless of the complexity of data, and if so, why is this the case?
---
**Related:** Follow-up to the question above, "[When is a Model Underfitted?](https://datascience.stackexchange.com/questions/361/when-is-a-model-underfitted)" | Overfitting, in a nutshell, means taking into account **too much** information from your data and/or prior knowledge and using it in a model. To make it more straightforward, consider the following example: you're hired by some scientists to provide them with a model to predict the growth of some kind of plant. The scientists have given you information collected from their work
with such plants throughout a whole year, and they shall continuously give you information on the future development of their plantation.
So, you run through the data received and build up a model out of it. Now suppose that, in your model, you considered as many characteristics as possible to always reproduce the exact behavior of the plants you saw in the initial dataset. Now, as the production continues, you'll always take those characteristics into account and will produce very *fine-grained* results. However, if the plantation eventually suffers from some seasonal change, the results you receive may fit your model in such a way that your predictions will begin to fail (either saying that the growth will slow down while it will actually speed up, or the opposite).
Apart from being unable to detect such small variations, and usually classifying your entries incorrectly, the *fine grain* of the model, i.e., the great number of variables, may make the processing too costly. Now, imagine that your data is already complex. Overfitting your model to the data will not only make the classification/evaluation very complex, but will most probably also make your predictions err over the slightest variation in the input.
**Edit**: [This](https://www.youtube.com/watch?v=DQWI1kvmwRg) might as well be of some use, perhaps adding dynamicity to the above explanation :D |
Can anyone explain to me the benefits of the genetic algorithm compared to other traditional search and optimization methods? | * Concept is easy to understand
* Modular, separate from application
* Supports multi-objective optimization
* Good for "noisy" environments
* Always an answer; answer gets better with time
* Inherently parallel; easily distributed |
I came across a proof that an AVL tree has $O(\log n)$ height and there's one step which I do not understand.
Let $N\_h$ represent the minimum number of nodes that can form an AVL tree of height $h$. Since we're looking for the **minimum** number of nodes, let its children's number of nodes be $N\_{h-1}$ and $N\_{h-2}$.
Proof:
$$N\_h = N\_{h-1} + N\_{h-2} + 1 \tag{1}$$
$$N\_{h-1} = N\_{h-2} + N\_{h-3} + 1 \tag{2}$$
$$ N\_h = (N\_{h-2} + N\_{h-3} + 1) + N\_{h-2} + 1 \tag{3}$$
$$ N\_h > 2N\_{h-2} \tag{4}$$
$$N\_h > 2^{h/2} \tag{5} $$
I do not understand how we went from (4) to (5). If anyone could explain, that'd be great. | Assuming $h$ even, by induction
$$N\_h>2N\_{h-2}>2^2N\_{h-4}>2^3N\_{h-6}>\cdots>2^{h/2}N\_0$$
because you go down two levels at a time, $h/2$ times; since $N\_0 = 1$, this gives (5). |
I know that a DFA has to have exactly one transition for each symbol in the alphabet, but is it allowed to have two symbols on the same arrow? If, for example, I have a DFA with states $q\_0$ and $q\_1$, can I have one arrow from $q\_0$ to $q\_1$ with both $a$ and $b$?
This may be a stupid question, but I need to be completely sure that this is allowed (I believe it is). | The transition graph (as a drawing) is merely a representation of an Automaton, which is a well-defined model.
Formally, a DFA is a tuple $(Q,\Sigma,\delta,q\_0,F)$, where the "type" of the transition function is $\delta:Q\times \Sigma\to Q$.
Thus, if you have $\delta(q\_0,a)=\delta(q\_0,b)=q\_1$, that's fine.
In the graphic representation, you will either have two arrows from $q\_0$ to $q\_1$, labeled $a$ and $b$, or you can just put both letters on the same arrow, it's not a formal thing anyway. |
I need to show that the following problem is in P:
$$\begin{align\*}\text{HALF-2-SAT} = \{ \langle \varphi \rangle \mid \, &\text{$\varphi$ is a 2-CNF formula and there exists an assignment} \\
& \text{that satisfies at least half of the clauses} \}\end{align\*}$$
I know that 2-SAT is in P, hence it has a decider, and I wanted to use it for HALF-2-SAT, but I got stuck trying to find how to extract the right half of the clauses that will be satisfied in a polynomial way.
Is there an official way to choose half of the clauses that satisfy the formula? | Your language consists of all valid encodings of a 2-CNF formula. Consider a random assignment to the variables of $\varphi$, i.e. each variable is assigned True/False with probability $\frac{1}{2}$. Then the expected fraction of satisfied clauses is $\frac{3}{4}$ (use linearity of expectation and note that each clause is unsatisfied with probability $\frac{1}{4}$), which means that there exists an assignment which satisfies at least $\frac{3}{4}$ of the clauses.
The above observation yields a simple deterministic algorithm for finding such an assignment. Let $S\_{\varphi}$ denote the random variable counting the number of satisfied clauses relative to a random assignment.
Note that for any variable $x\_i$ we have:
$\mathbb{E}\big[S\_\varphi\big]=
\frac{1}{2}\mathbb{E}\big[S\_\varphi\big| x\_i = 1 \big]+
\frac{1}{2}\mathbb{E}\big[S\_\varphi\big| x\_i = 0 \big]$.
Now, you can simply pick the assignment $b\in\{0,1\}$ to $x\_i$ which maximizes $\mathbb{E}\big[S\_\varphi | x\_i = b\big]$ (you can compute the expectation in linear time). To see why this works, suppose that $\mathbb{E}\big[S\_\varphi| x\_i = 1 \big] \ge \mathbb{E}\big[S\_\varphi| x\_i = 0 \big]$; then, since $\mathbb{E}[S\_\varphi]=\frac{3}{4}m$ (where $m$ is the number of clauses), it must hold that $\mathbb{E}\big[S\_\varphi| x\_i = 1 \big]\ge\frac{3}{4}m$, which means that there exists an assignment for the variables of $\varphi$ which assigns $1$ to $x\_i$ and satisfies $\ge\frac{3}{4}m$ clauses. |
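A minimal Python sketch of this derandomization by conditional expectations (clauses are pairs of literals, with literal $+i$ meaning $x\_i$ and $-i$ meaning $\lnot x\_i$; the example formula is made up):

```python
def expected_satisfied(clauses, assignment):
    """Expected number of satisfied clauses when every unassigned variable is
    set uniformly at random; `assignment` maps variable index -> 0/1."""
    total = 0.0
    for clause in clauses:
        p_unsat = 1.0
        for lit in clause:
            var, want = abs(lit), 1 if lit > 0 else 0
            if var in assignment:
                if assignment[var] == want:   # literal already true
                    p_unsat = 0.0
                    break
                # literal already false: factor 1, clause can still fail
            else:
                p_unsat *= 0.5                # unassigned literal is false w.p. 1/2
        total += 1.0 - p_unsat
    return total

def half_sat_assignment(clauses, num_vars):
    assignment = {}
    for v in range(1, num_vars + 1):
        # Fix x_v to whichever value keeps the conditional expectation highest.
        e1 = expected_satisfied(clauses, {**assignment, v: 1})
        e0 = expected_satisfied(clauses, {**assignment, v: 0})
        assignment[v] = 1 if e1 >= e0 else 0
    return assignment

clauses = [(1, 2), (-1, 3), (-2, -3), (2, 3)]   # hypothetical 2-CNF, m = 4
best = half_sat_assignment(clauses, 3)
print(expected_satisfied(clauses, best))        # 3.0, i.e. >= 3/4 of the clauses
```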
On the Wikipedia page for [quantum algorithm](https://en.wikipedia.org/wiki/Quantum_algorithm) I read that
>
> [a]ll problems which can be solved on a quantum computer can be solved on a classical computer. In particular, problems which are undecidable using classical computers remain undecidable using quantum computers.
>
>
>
I expected that the fundamental changes a quantum computer brings would make it possible not only to solve problems that could already be solved with a classical computer, but also new problems that could not be solved before. Why is it that a quantum computer can only solve the same problems? | Because a quantum computer can be simulated using a classical computer: it's essentially just linear algebra. Given the joint state vector of amplitudes over all the qubits, you can keep track of how each quantum gate (a unitary matrix) modifies that vector as time progresses. This isn't very efficient (which is why people want to build actual quantum computers), but it works. |
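A tiny numpy sketch of what "just linear algebra" means here: the joint state of $n$ qubits is a vector of $2^n$ amplitudes, and each gate is a unitary matrix applied to it (the single Hadamard gate below is only an illustration):

```python
import numpy as np

# State vector of 2 qubits: 4 complex amplitudes, starting in |00>.
state = np.zeros(4, dtype=complex)
state[0] = 1.0

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
I = np.eye(2, dtype=complex)

# Apply H to the first qubit: the full 4x4 gate is the tensor product H (x) I.
state = np.kron(H, I) @ state

probs = np.abs(state) ** 2   # measurement probabilities for |00>, |01>, |10>, |11>
print(probs)                 # [0.5 0.  0.5 0. ]
```

The catch is that the state vector has $2^n$ entries, so this classical simulation becomes exponentially expensive as the number of qubits grows, which is the "isn't very efficient" part of the answer.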
Following the recent questions we had [here](https://stats.stackexchange.com/questions/1818/how-to-determine-the-sample-size-needed-for-repeated-measurement-anova/1823#1823).
I was hoping to know whether anyone has come across, or can share, **R code for performing a custom power analysis based on simulation for a linear model?**
Later I would obviously like to extend it to more complex models, but `lm` seems the right place to start. | I'm not sure you need simulation for a simple regression model. For example, see the paper [Portable Power](http://www.jstor.org/stable/1267939) by Robert E. Wheeler (Technometrics, May 1974, Vol. 16, No. 2). For more complex models, specifically mixed effects, the [pamm](http://cran.r-project.org/web/packages/pamm/index.html) package in R performs power analyses through simulations. Also see Todd Jobe's [post](http://toddjobe.blogspot.de/2009/09/power-analysis-for-mixed-effect-models.html) which has R code for simulation. |
Complexity theory uses a large number of unproven conjectures. There are several hardness conjectures in David Johnson's [NP-Completeness Column 25](http://www2.research.att.com/~dsj/columns/col25.pdf). What are the other major conjectures not mentioned in the above article? Did we achieve some progress towards proving one of these conjectures? Which conjecture do you think would require completely different techniques from the currently known ones? | This isn't mentioned in the article, but the [exponential time hypothesis](http://en.wikipedia.org/wiki/Exponential_time_hypothesis) is very useful for proving exponential lower bounds on the running time of hard problems. |
I have the following time series:
[![enter image description here](https://i.stack.imgur.com/yGTaj.png)](https://i.stack.imgur.com/yGTaj.png)
Data is aviable here [data](http://www.sharecsv.com/s/f2b58c4304ffa69599fb5ef09f70eb6e/janssen_assorbimento.csv)
The time series represents an hourly electricity load. It starts at 2018-09-13 19:00:00 and ends at 2018-12-23 15:00:00.
I want to predict the next 36 hours of values.
I tried several methods but without success.
This is my code:
```
load.msts <- msts(df$Power, seasonal.periods = c(7, 365.25))
load.tbats <- tbats(load.msts)
load.pred <- forecast(load.tbats, h = 100)
```
The result of prediction is:
[![enter image description here](https://i.stack.imgur.com/xyDWi.png)](https://i.stack.imgur.com/xyDWi.png)
Then I tried:
```
load.stlm <- stlm(load.msts, s.window = 'periodic', method = 'ets')
load.pred <- forecast(load.stlm, h = 100)
```
The result of prediction is:
[![enter image description here](https://i.stack.imgur.com/bpvSd.png)](https://i.stack.imgur.com/bpvSd.png)
I have also tried Facebook prophet:
```
load.prophet.df <-prophet(load.df,yearly.seasonality=TRUE)
load.prophet.model <- make_future_dataframe(load.prophet.df, periods =
200, freq = 3600)
load.prophet.pred <- predict(load.prophet.df, load.prophet.model)
```
Results:
[![enter image description here](https://i.stack.imgur.com/2ppPi.png)](https://i.stack.imgur.com/2ppPi.png)
I think that the problem is related to the amount of data. I don't have enough data ( only one year of data).
How can I improve my forecasting? Thanks. | Your problem is not (so much) a lack of data. Your problem is that the data-generating process changes abruptly multiple times. First there is a step change around Sep 20, then there is a period of strangely low variability at the beginning of November, almost two weeks of missing data at the beginning of December, and finally a precipitous drop at the end of December.
The last is a particular problem for your models, and it will be for *any* model. Your models fit either a downward trend, which they extrapolate (TBATS and Prophet), or another step change (ETS). Which one makes more sense? We don't know, since we don't know what happened recently, whether the downward trend will continue, or whether your series has reached a new equilibrium, or whether it will increase again to the level it showed before the drop.
I'd very much recommend you find out what happened to your series in the past, and include this in any model. For instance, you could regress your series on explanatory variables and fit your time series to residuals. This is related to [How to know that your machine learning problem is hopeless?](https://stats.stackexchange.com/q/222179/1352) |
I've found a problem that boils down to this: I need to find the non-rooted MST of a directed weighted graph. In other words, I need to find the minimal set of edges such that from any one node in the graph you can get to all others.
This is similar to the rooted MST digraph problem, which the [Chu-Liu](http://www.ce.rit.edu/~sjyeec/dmst.html) algorithm solves quite nicely. My intuition is to calculate the rooted MST for all nodes using Chu-Liu and then merge them, removing redundancies along the way. However, I don't believe that would be optimal.
Has anybody been working on this? Can you point me towards some papers that I should read?
Thanks. | As has been explained repeatedly above, finding the minimum-weight strongly-connected spanning subgraph ("SCSS") of a given weighted digraph is NP-complete, even in the special case when all weights are one and even when you only want to distinguish the case that there is a subgraph of weight n from the case that the optimal weight is larger than n.
However, to follow up on a comment of Warren Schudy: there is a simple polynomial-time approximation algorithm that gets an approximation ratio of 2: choose an arbitrary vertex s, find the minimum-weight arborescence out of s (subgraph that includes a path from s to all other vertices), find the minimum-weight arborescence into s, and take the union of the two arborescences. I believe this idea is due to Frederickson and JáJá, "Approximation algorithms for several graph augmentation problems", SIAM J. Comput. 1981.
Better approximations are known for the unweighted problem e.g. 1.64 by [Khuller, Raghavachari, and Young, SIAM J. Comput. 1995](http://arxiv.org/abs/cs.DS/0205040). I don't know of improvements for the weighted version but that may merely indicate that I didn't search hard enough. |
Given a computer that takes 1 microsecond for an operation, I'm trying to find the amount of operations this computer can perform in one second, given an algorithm with complexity $O(n\log n)$. I've tried to solve it by the following ways but always get stuck. Note that one second equals $10^6$ microseconds.
[What I've tried](https://i.stack.imgur.com/hVCEk.png)
------------------------------------------------------
I've found [this](https://math.stackexchange.com/questions/188637/how-can-i-solve-for-n-in-the-equation-n-log-n-c) post where it's stated that *"there is no simple way"* to solve that but I still want to ask here if there is any different approach to the question from a "computer science perspective". | It looks like you got it, but what's actually going on here is that you are working in [homogeneous coordinates](https://en.wikipedia.org/wiki/Homogeneous_coordinates).
In this system, a *point* in space is represented as a 4-tuple, $(x,y,z,1)$. You can multiply this 4-tuple by any non-zero constant and it represents the same point, so this is equivalent to $(wx,wy,wz,w)$ for any $w \ne 0$.
A *vector* is the difference between two points, and this is represented as a 4-tuple with a zero in the fourth component, $(x,y,z,0)$.
Right-multiplying that 4x4 matrix by a point has the effect of adding the translation component, and right-multiplying by a vector has the effect of not adding the translation component. |
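A small numpy sketch of that last point (the rotation part is the identity and the translation values are arbitrary):

```python
import numpy as np

# 4x4 transform: identity rotation plus a translation of (10, 20, 30).
T = np.array([[1, 0, 0, 10],
              [0, 1, 0, 20],
              [0, 0, 1, 30],
              [0, 0, 0,  1]], dtype=float)

point  = np.array([1, 2, 3, 1], dtype=float)  # w = 1: a point
vector = np.array([1, 2, 3, 0], dtype=float)  # w = 0: a direction

print(T @ point)    # [11. 22. 33.  1.]  -> translation is applied
print(T @ vector)   # [ 1.  2.  3.  0.]  -> translation is ignored
```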
This is a GRE practice question.
![BST n=8](https://i.stack.imgur.com/dUavw.png)
If a node in the binary search tree above is to be located by binary tree search, what is the expected number of comparisons required to locate one of the items (nodes) in the tree chosen at random?
(A) 1.75
(B) 2
(C) 2.75
(D) 3
(E) 3.25
My answer was 3 because $n=8$ and $\lg(n)$ comparisons should be made, and $\lg(8) = 3$. But the correct answer is 2.75. Can someone explain the correct answer? Thanks! | Find the number of comparisons for every element and total them, i.e., 22 comparisons for the 8 elements, so on average 22/8 = 2.75 per element. |
Are there any publications focusing on solving TSP with ant colony optimization that consider small changes in the graph's nodes or vertices?
So what I have is:
* a traveling salesman problem (TSP)
* and a single solution for the TSP
* a slight change in the graph afterwards (extra/removed node for example)
and what I want is:
* An ant algorithm to solve the new TSP (on a subset-basis) using the original solution
* And a benchmark
Thanks for your help so far. | Yes, there are several papers on this topic, and related ones, in the context of ant colony optimization algorithms for routing on mobile ad hoc networks (MANETs). In a MANET, the nodes in the network graph are mobile, and if they move too far away from their neighbors, they fall out of range, hence the communication link is broken (i.e., the edge disappears from the network graph). The routing problem essentially is: how do I keep all the vertices connected, and maintain efficient pathways between nodes, when links can disappear and reappear? Of course, if a node moves out of range of all other nodes, it is the same as if the node were deleted from the network graph.
The paper [An ant colony optimization routing based on robustness for ad hoc networks with GPSs](http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B7576-4W38RJN-2-13&_cdi=12890&_user=965532&_pii=S1570870509000389&_origin=search&_coverDate=01%2F31%2F2010&_sk=999919998&view=c&wchp=dGLbVzb-zSkzS&md5=ffee75b0f9e291db16023d89b304e378&ie=/sdarticle.pdf) by Kadono et al. contains a "related work" section you will probably find interesting. In this paper I've linked, there is an assumption of the availability of some GPS information, which probably does not apply to you; I chose it mainly for its discussion of other papers. However, all these approaches assume *something* about how nodes can be deleted (or suddenly appear), in order to construct an efficient algorithm. You'll have to decide what formal assumptions hold for the problem you are trying to solve.
Search phrases like "ant colony optimization MANET" or "ant colony optimization self-stabilization" may turn up other papers of interest to you. |
I recently attended an introductory lecture on computational complexity and I am looking to find out more. I haven't studied computer science or discrete mathematics at university, and I was wondering if anyone could recommend any relatively simple books or reading material that could help me understand the topic better. | Classic texts on computational complexity include Hopcroft & Ullman's text {1} and Sipser's text {2}. With patience, they should both be accessible.
---
{1} Hopcroft, John E., R. Motwani, and J. D. Ullman. "Introduction to Automata Theory, Languages, and Computation (2nd edition)". Addison-Wesley, Reading (2001).
{2} Sipser, Michael. Introduction to the Theory of Computation. Cengage Learning, 2012. |
Let's say $\sum\_{n \ge 1} a\_{n}x^n$ is the generating function for a regular language $L$, where $a\_{n}$ is the number of words of length $n$.
Find an example of a generating function which doesn't correspond to any regular language.
My attempt: I use the fact that for any regular language there exist $n\_{0}$, $\lambda\_{i}$ and $p\_{i}$ such that $a\_{n} = \sum\_{i} \lambda\_{i}^{n} p\_{i}$ for all $n \ge n\_{0}$, and as an example take a language with $a\_{n} = C\_{n}$, where $C\_{n}$ is the Catalan number. My teacher said that's not obvious and told me to find an easier example. But I don't know a useful criterion for constructing an example that contradicts regularity. Any ideas? | An *NP-hard* problem can be beyond NP. The polynomial-time reduction from your X to any problem in NP does not necessarily have a polynomial-time inverse. If the inverse is harder, then the verification is harder.
An **NP-complete** problem, on the other hand, is one that is NP-hard and itself in NP. For these, of course, there exist polynomial time verifications. |
Can someone give me a hint about a good approach to finding frequent patterns in a single sequence?
For example there is the single sequence
```
3 6 1 2 7 3 8 9 7 2 2 0 2 7 2 8 4 8 9 7 2 4 1 0 3 2 7 2 0 3 8 9 7 2 0
```
I am looking for a method that can detect frequent patterns in this ordered sequence:
```
3 6 1 [2 7] 3 [8 9 7 2] 2 0 [2 7] 2 8 4 [8 9 7 2] 4 1 0 3 [2 7] 2 0 3 [8 9 7 2] 0
```
Also other information would be interesting like:
* What is the probability that 7 comes after 2
* When each number has a timestamp assigned to it, what is the estimated time interval that 7 occurs after 2
The sequential pattern mining methods I found require multiple sequences, but I have one large sequence where I want to detect regularities.
Thanks for any help! | Calculate a histogram of N-grams and threshold at an appropriate level. In Python:
```
from scipy.stats import itemfreq
s = '36127389722027284897241032720389720'
N = 2 # bi-grams
grams = [s[i:i+N] for i in xrange(len(s)-N)]
print itemfreq(grams)
```
The N-gram calculation (lines three and four) is from [this](https://stackoverflow.com/questions/17531684/n-grams-in-python-four-five-six-grams) answer.
The example output is
```
[['02' '1']
['03' '2']
['10' '1']
['12' '1']
['20' '2']
['22' '1']
['24' '1']
['27' '3']
['28' '1']
['32' '1']
['36' '1']
['38' '2']
['41' '1']
['48' '1']
['61' '1']
['72' '5']
['73' '1']
['84' '1']
['89' '3']
['97' '3']]
```
So 72 is the most frequent two-digit subsequence in your example, occurring a total of five times. You can run the code for any $N$ you are interested in.
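To address the second bullet in the question (the probability that 7 comes after 2), you can estimate it from the same bigram counts. Here is a small Python 3 follow-up (my own sketch, working on the space-separated sequence rather than the digit string above):

```
from collections import Counter

seq = [3,6,1,2,7,3,8,9,7,2,2,0,2,7,2,8,4,8,9,7,2,4,1,0,3,2,7,2,0,3,8,9,7,2,0]

pairs = Counter(zip(seq, seq[1:]))            # bigram counts
count_2 = sum(1 for v in seq[:-1] if v == 2)  # how often a 2 has a successor
print(pairs[(2, 7)] / count_2)                # 3/9 = 0.33... = estimated P(next = 7 | current = 2)
```

The timing question (how long after a 2 the 7 tends to occur) would need the timestamps, but the same idea applies: collect the time differences for each occurrence of the pair and summarize their distribution.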
I wonder if there is always a maximizer for any maximum (log-)likelihood estimation problem? In other words, is there some distribution and some of its parameters, for which the MLE problem does not have a maximizer?
My question comes from a claim of an engineer that the cost function (likelihood or log-likelihood, I am not sure which was intended) in MLE is always concave and therefore it always has a maximizer.
Thanks and regards! | Perhaps the engineer had in mind canonical exponential families: in their natural parametrization, the parameter space is convex and the log-likelihood is concave (see Thm 1.6.3 in Bickel & Doksum's *Mathematical Statistics, Volume 1*). Also, under some mild technical conditions (basically that the model be "full rank", or equivalently, that the natural parameter by identifiable), the log-likelihood function is strictly concave, which implies there exists a unique maximizer. (Corollary 1.6.2 in the same reference.) [Also, the lecture notes cited by @biostat make the same point.]
Note that the natural parametrization of a canonical exponential family is usually different from the standard parametrization. So, while @cardinal points out that the log-likelihood for the family $\mathcal{N}(\mu,\sigma^2)$ is not convex in $\sigma^2$, it will be concave in the natural parameters, which are $\eta\_1 = \mu / \sigma^2$ and $\eta\_2 = -1/\sigma^2$. |
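To make this concrete, here is a sketch (my own illustration, not taken from the cited references) of the canonical form for a single observation $x$ from $\mathcal{N}(\mu,\sigma^2)$ with the natural parameters above:
$$
\log f(x\mid \eta\_1,\eta\_2) = \eta\_1 x + \frac{\eta\_2}{2} x^2 - A(\eta\_1,\eta\_2),
\qquad
A(\eta\_1,\eta\_2) = -\frac{\eta\_1^2}{2\eta\_2} - \frac{1}{2}\log(-\eta\_2) + \frac{1}{2}\log(2\pi),
$$
with $\eta\_2 < 0$. Since $A$ is a log-partition (cumulant) function it is convex, so the log-likelihood is concave in $(\eta\_1, \eta\_2)$, even though it need not be concave in the standard parametrization $(\mu, \sigma^2)$.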
I am a bit confused about ensemble learning. In a nutshell, it runs k models and gets the average of these k models. How can it be guaranteed that the average of the k models would be better than any of the models by themselves? I do understand that the bias is "spread out" or "averaged". However, what if there are two models in the ensemble (i.e. k = 2) and one of them is worse than the other - wouldn't the ensemble be worse than the better model? | In your example, an ensemble of two models could indeed be worse than the better single model. But the example is artificial: we generally build many more than two models in an ensemble.
There is no absolute guarantee that an ensemble performs better than an individual model. But if you build many of them, and each individual classifier is **weak** (only slightly better than chance), your overall performance should be better than that of any individual model.
In machine learning, training multiple models and combining them generally outperforms training a single model; the combined model effectively has more parameters to tune and can represent a richer set of decision boundaries.
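As a quick illustration of the "many weak, roughly independent models" intuition, here is a small simulation sketch in Python (my own example; it assumes the models' errors are independent, which real ensembles only approximate):

```
import numpy as np

rng = np.random.default_rng(0)
n_models, n_cases, p_correct = 25, 100_000, 0.6   # 25 weak classifiers, each 60% accurate

# correct[i, j] = True if model i classifies case j correctly
correct = rng.random((n_models, n_cases)) < p_correct

single   = correct[0].mean()                            # one weak model: ~0.60
ensemble = (correct.sum(axis=0) > n_models / 2).mean()  # majority vote: ~0.85
print(single, ensemble)
```

With correlated errors the gain shrinks, which is why ensemble methods go out of their way (bagging, random feature subsets, different architectures) to decorrelate their members.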
I am not an expert in this field, but I have read that the existence of one-way functions implies $P \neq NP$. Since there seem to be so many different definitions of one-way-functions and I have not seen a proof I wanted to ask if someone knows where to read this, or knows the proof.
Please, if you give a proof, also give a precise definition of which version of "one-way function" you use. The proof can of course be detailed; the more detailed, the better! :)
Thanks! | $P \ne NP$ if and only if worst-case one-way functions exist.
Reference:
Alan L. Selman. A survey of one-way functions in complexity theory. Mathematical systems theory, 25(3):203–221, 1992. |
It's well known that planar graphs form a minor-closed family with forbidden minors $K\_{3,3}$ and $K\_{5}$; graphs with bounded treewidth are likewise a minor-closed family, excluding some $H\_{k}$ as a minor.
I assume that graphs with bounded max cut also form a minor-closed family. Given an arbitrary graph $G$ that doesn't contain $H$ as a minor, how can one find the max cut approximately?
Thanks!
**Addendum:**
The relevant topic can be found in *On the complexity of the Maximum Cut problem*, Chapter 6, "Graphs with bounded treewidth". The PTAS begins by making modifications to the tree decomposition without increasing its treewidth.
1) $T$ is a binary tree.
2) If a node $i \in I$ has two children $j\_{1}$ and $j\_{2}$, then $X\_{i}=X\_{j1}=X\_{j2}$.
3) If a node $i \in I$ has one child $j$, then either $X\_{j} \subset X\_{i}$ and $|X\_{i}-X\_{j}|=1$, or $X\_{i} \subset X\_{j}$ and $|X\_{j}-X\_{i}|=1$.
In my opinion this is a very strong modification, and I don't really get the idea behind it. Regarding the 2nd condition, if I understood it right: if a node has two children, then all of them actually contain the same set of vertices, but what for?
See also [this WG 2010 paper](http://rutcor.rutgers.edu/~mkaminski/preprints/Max-Cut%20and%20containment%20relations%20in%20graphs.pdf) and [slides](http://rutcor.rutgers.edu/~mkaminski/slides/Max-Cut%20and%20containment%20relations%20in%20graphs.pdf) by [Marcin Kamiński](http://rutcor.rutgers.edu/~mkaminski/research.html). |
R has many libraries which are aimed at data analysis (e.g. JAGS, BUGS, ARULES, etc.), and is mentioned in popular textbooks such as J. Kruschke, *Doing Bayesian Data Analysis*, and B. Lantz, *Machine Learning with R*.
I've seen a guideline of 5TB for a dataset to be considered as Big Data.
My question is: Is R suitable for the amount of Data typically seen in Big Data problems?
Are there strategies to be employed when using R with this size of dataset? | R is great for "big data"! However, you need a workflow since R is limited (with some simplification) by the amount of RAM in the operating system. The approach I take is to interact with a relational database (see the `RSQLite` package for creating and interacting with a SQLite databse), run SQL-style queries to understand the structure of the data, and then extract particular subsets of the data for computationally-intensive statistical analysis.
This just one approach, however: there are packages that allow you to interact with other databases (e.g., Monet) or run analyses in R with fewer memory limitations (e.g., see `pbdR`). |
I recently started my master's course. Last semester I took courses from different disciplines like networks, software engineering, architecture, etc. Recently, after taking an advanced course in algorithms and data structures, I think I found the course which interests me most (including other similar topics like programming languages, etc.).
How do I find a research topic - specific data structure or algorithm I can work on for my thesis, and possibly follow up into a PhD? I am currently looking at some of the research done at my university in the same disciplines.
**edit**
I think some people confused the Question due to ambiguous framing from my side.
I want to find a topic for my master's thesis; I am still somewhat far away from starting a PhD (if I do one). | It's somewhat like shopping for shoes: before you go shopping and try your luck, it's hard to tell what you will end up with.
That said, there are some nice blog posts about this. For example I found this useful back in grad school: [Finding Problems to Work On](http://blog.computationalcomplexity.org/2003/04/finding-problems-to-work-on.html). |
Parity and $AC^0$ are like inseparable twins. Or so it has seemed for the last 30 years. In the light of Ryan's result, there will be renewed interest in the small classes.
Furst Saxe Sipser to Yao to Hastad are all parity and random restrictions. Razborov/Smolensky is approximate polynomial with parity (ok, mod gates). Aspnes et al use weak degree on parity. Further, Allender Hertrampf and Beigel Tarui are about using Toda for small classes. And Razborov/Beame with decision trees. All of these fall into the parity basket.
1) What are other natural problems (apart from parity) that can be shown directly not to be in $AC^0$?
2) Does anyone know of a drastically different approach to lower bounds on $AC^0$ that has been tried? | [Benjamin Rossman](http://www.mit.edu/~brossman/)'s result on the $AC^0$ lower bound for k-clique from STOC 2008.
---
References:
* Paul Beame, "[A Switching Lemma Primer](http://www.cs.washington.edu/homes/beame/papers/primer.ps)", Technical Report 1994.
* Benjamin Rossman, "[On the Constant-Depth Complexity of k-Clique](http://www.mit.edu/~brossman/k-clique-stoc.pdf)", STOC 2008. |
I saw [this article](http://www.walesonline.co.uk/sport/football/football-news/maths-genius-worked-out-exactly-11120318) about the expected number of stickers required to complete the Panini Euro 2016 album, where stickers are sold in packets of 5 distinct stickers. The author presents the case where stickers are bought singly, then adds in the restriction that each 5 in a packet are distinct. For the latter case, he calculates the expected number of stickers to be the sum:
680/680 + 680/680 + 680/680 + 680/680 + 680/680
+ 680/675 + 680/675 + 680/675 + 680/675 + 680/675
+ 680/670 + . . . . . . .
+ 680/5 + 680/5 + 680/5 + 680/5 + 680/5
However, I can't see why this is the case. Surely the second sticker from the first packet has probability 679/679 of being required, because it can't be the same as the first sticker? Of course, for the first packet, this makes no difference, but come the second packet it does. In the second packet the first sticker has probability 675/680 of being required, but the second has probability 674/679. Therefore, why isn't the sum this?
680/680 + 679/679 + 678/678 + 677/677 + 676/676
+ 680/675 + 679/674 + 678/673 + 677/672 + 676/671
+ 680/670 + . . . . . . .
+ 680/5 + 679/4 + 678/3 + 677/2 + 676/1
Could somebody point out where I'm going wrong? Thanks. | **Probability problems can be tricky. Whenever possible, reduce them to steps that are justified by basic principles and axioms.**
Expectation problems get a little easier because you don't have to keep track of all the individual chances. This particular problem is a nice illustration.
**To get going, let's establish notation.** I like $n=680$ for the total number of cards to collect and $k=5$ for the packet size. After you have begun collecting cards you will keep track of how many more you need: let's call that quantity $m$ (which means you already have $n-m \ge 0$ distinct cards).
**What happens when you obtain a new packet?** There are up to $k+1$ possibilities, depending on whether it contains $0, 1, \ldots,$ through $k$ new cards. To keep track of these, let the expected number of packets you need to buy *in addition to those you currently have* be written $e(m; n,k)$. Let $X$ be the random variable giving the number of new cards you collect and let its probability distribution be given by $\Pr(X=j|m,n,k)$. Two things happen:
1. You pay for another packet: this raises the expectation by $1$.
2. You change the expectation depending on $X$. By the rules of conditional expectation, for any $j$ between $0$ and $k$, we have to weight the new expectations by the probabilities and add them up:
$$e(m;n,k) = 1 + \sum\_{j=0}^k \Pr(X=j|m,n,k) e(m-j;n,k).$$
To make this practicable, we have to overcome the difficulty that $e(m;n,k)$ appears on both sides (it shows up for $j=0$ on the right hand side). Just do the usual algebra to solve:
$$e(m;n,k) = \frac{1}{1 - \Pr(X=0|m,n,k)}\left(1 + \sum\_{j=1}^k \Pr(X=j|m,n,k) e(m-j;n,k)\right).$$
(Notice that the sum begins at $j=1$ now.) The formula for $\Pr(X=j|m,n,k)$ is well known: it's a Hypergeometric distribution,
$$\Pr(X=j|m,n,k) = \frac{\binom{n-m}{k-j}\binom{m}{j}}{\binom{n}{k}}.$$
The initial conditions are easily determined: there's nothing more to be done once $m$ has been reduced to $0$ or less:
$$e(m;n,k) = 0\text{ if }m \le 0.$$
This algorithm finds $e(m;n,k)$ in terms of the $k+1$ preceding values. It therefore requires only $O(k)$ storage and $O(mk)$ time for the computation (assuming all those binomial coefficients can be obtained in $O(1)$ time each--which they can). To illustrate, here is `R` code:
```R
n <- 680 # Distinct cards
k <- 5 # Packet size (1 or greater)
# Hypergeometric probabilities
hyper <- function(j,m,n,k) exp(lchoose(n-m, k-j) + lchoose(m,j) - lchoose(n,k))
# Initialize
e <- c(rep(0, k), rep(NA, n)) # The index offset is `k`!
names(e) <- paste((1-k):n)
# The algorithm
for (m in 1:n)
e[m+k] <- (1 + sum(hyper(k:1,m,n,k) * e[(m-k):(m-1) + k])) / (1 - hyper(0,m,n,k))
print(e[n+k], digits=12)
```
The output, $963.161719772$, errs only in the last digit (it is "2" rather than "3" due to accumulated floating point roundoff). In the case $n=4,k=2$ it yields the answer $3.8$: it can be instructive to trace through the code as it computes that answer.
---
**As far as where the arguments went wrong,**
* The argument in the article is useless because it implicitly assumes there is no overlap among packets. That's the only possible way the calculation could be broken down into multiples of five. For instance, it's possible that the second pack you buy has a card you already collected. Afterwards there will be 671 cards to collect--but that formula has no terms corresponding to this possibility.
* Your argument refers to "probabilities of being required." It's unclear what these might be. Nevertheless, let's suppose your argument is correct, at least initially. It appears to say that if you ever get to the point of needing one last card, you will expect to buy $676/1$ packets to do that. They comprise $5\times 676=3380$ cards. Now that this has been pointed out, do you really think you need to buy so many? My intuition says the value should be very close to $680$ *divided* by $5$, or $136$, because by then we would expect to see each card once on average--and that's *exactly* the right answer. (You can see it by printing out the array `e` in the code: it starts out
```
-4 -3 -2 -1 0 1 2 3 4
0.0000 0.0000 0.0000 0.0000 0.0000 136.0000 203.7991 248.9989 282.8988
```
Those last few values tell you how many more packets you expect to buy when you have $4, 3, 2,$ or $1$ cards left to collect (reading from the right side in): 283, 249, 204, and 136.
---
**One moral is,** don't trust newspaper articles that describe the computations of so-called "geniuses" unless there's evidence the writer understood the procedure. (That's pretty rare.)
**Another moral** is revealed by inspecting all of `e`. Because $e(19;680,5)=481.47$ is almost exactly half of $e(680;680,5)$, *you're only halfway done when you have only $19$ more cards to collect!* This is characteristic of carnival games that lure suckers in by letting them score high in their first few attempts but where attaining the final few points to win a prize is almost impossible. The only thing that saves it from being a complete fraud is the possibility of trading cards. (And let's not go into the possibility that one or more cards appear with much smaller chances than the others... .) |
In statistics class, we learnt that alpha = type 1 error rate, and type 1 error rate is the probability of wrongly rejecting our null hypothesis when it is true. It is equal to the red area of the following figure.[![enter image description here](https://i.stack.imgur.com/R0ncP.png)](https://i.stack.imgur.com/R0ncP.png)
However, very rarely do we have just ONE comparison in a study. For example, in a typical psychology study, we often produce something like this:
[![enter image description here](https://i.stack.imgur.com/6YRII.png)](https://i.stack.imgur.com/6YRII.png)
I counted for you, there are 25 ANOVAs and 250 post hoc independent sample t-tests there... And as indicated by the researchers, they consider the p-value 0.05 to be significant. And if the researchers had used Bonferroni correction, I am afraid nothing would have been considered statistically significant...
Therefore, I want to know can I (or should I) calculate the overall Type 1 Error rate for a STUDY, but not just for a single test? Also, I want to know whether other methods to control for Type 1 Error rate exist.
Thank you very much. | We should normally have $$MAE \leq RMSE$$
This is a consequence of the [Cauchy-Schwarz inequality](https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality) (in a similar way as ['square of the mean values < mean of the squared values'](https://math.stackexchange.com/questions/2272353/how-can-i-prove-that-the-mean-of-squared-data-points-is-greater-than-the-square))
$$
\overbrace{\left(\frac{\vert r\_1 \vert + \vert r\_2\vert + \dots + \vert r\_n \vert}{n}\right)^2}^{MAE^2} \leq \overbrace{\frac{\vert r\_1 \vert^2 + \vert r\_2\vert^2 + \dots + \vert r\_n \vert^2}{n}}^{RMSE^2}$$
We can derive this more explicitly as following by expressing the absolute value of the error terms $\vert r\_i \vert$ as a sum of two components: the mean of the absolute value of the error terms and the variation relative to that mean, $\vert r\_i \vert = \mu + \delta\_{i}$.
$$\begin{array}{}
\overbrace{(r\_1^2 + r\_2^2 + \dots + r\_n^2)}^{n \cdot RMSE^2} &=& \vert r\_1 \vert^2 + \vert r\_2\vert^2 + \dots + \vert r\_n \vert^2 \\
& = & (\mu+\delta\_1)^2 + (\mu+\delta\_2)^2 +\dots + (\mu+\delta\_n)^2\\
& = & n \mu^2 + 2 \mu (\delta\_1 + \delta\_2 +\dots + \delta\_n) + \delta\_1^2 + \delta\_2^2 + \dots + \delta\_n^2\\
& = & n \mu^2 + \delta\_1^2 + \delta\_2^2 + \dots + \delta\_n^2 \geq \underbrace{n\mu^2}\_{n \cdot MAE^2}
\end{array}$$
*The step where $2 \mu (\delta\_1 + \delta\_2 +\dots + \delta\_n)$ is removed comes from $\overline{\delta\_{\vert r\_i \vert}}=0$. Note that this must be the case for $\mu$ to be the mean of ${\vert r\_i \vert}$, since $\overline{\vert r\_i \vert} = \overline{\mu + \delta\_i} = \overline{\mu} + \overline{\delta\_i} = \mu + \overline{\delta\_i}$)*
The equality arises when you have all $\delta\_i=0$, this is the case when $\vert r\_i \vert = \mu$, or when $r\_i = \pm \mu$.
---
### Possible exception for different definition of 'mean'
Normally you compute the mean in RMSE and MAE by dividing by $n$, and then you get that the above inequality becomes
$$\frac{(r\_1^2 + r\_2^2 + \dots + r\_n^2)}{n} \geq \mu^2 = \left( \frac{\vert r\_1\vert + \vert r\_2\vert + \dots + \vert r\_n\vert }{n} \right)^2$$
But it is possible that you use a division by $n-1$ or $n-p$. And you compare
$$\frac{(r\_1^2 + r\_2^2 + \dots + r\_n^2)}{n-p} \quad\text{versus} \quad \left( \frac{\vert r\_1\vert + \vert r\_2\vert + \dots + \vert r\_n\vert }{n-p} \right)^2$$
In this case it is possible that the right side is larger than the left side. Maybe this is the case for your code where you mention different methods to compute MAE and RMSE.
Example: let $r\_1 = r\_2 = 1$ and $r\_3 = r\_4 = -1$, and let $n=4$ and $p=1$; then
$$\begin{array}{rcccl}
RMSE &=& \sqrt{\frac{1^2+1^2+(-1)^2+(-1)^2}{3}} &=& \sqrt{\frac{4}{3}} &\approx& 1.1547 \\
MAE &=& \frac{\vert 1 \vert+\vert 1 \vert+\vert -1 \vert+\vert -1 \vert}{3} &=& {\frac{4}{3}} &\approx& 1.3333
\end{array}$$ |
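A quick numerical check of both points, as a sketch in Python (my own illustration, not part of the original derivation):

```
import numpy as np

rng = np.random.default_rng(1)
r = rng.normal(size=1000)

mae  = np.mean(np.abs(r))
rmse = np.sqrt(np.mean(r ** 2))
print(mae <= rmse)                                  # True: holds when both divide by n

# with division by n - p the ordering can flip, as in the example above
r = np.array([1.0, 1.0, -1.0, -1.0]); p = 1
mae_p  = np.sum(np.abs(r)) / (len(r) - p)           # 4/3       ~ 1.3333
rmse_p = np.sqrt(np.sum(r ** 2) / (len(r) - p))     # sqrt(4/3) ~ 1.1547
print(mae_p > rmse_p)                               # True
```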
I have some survey data, where the first question is something like, "rate how you are feeling on a scale of 1 - 5". The next group of questions are something like, "do you smoke?" or "how much exercise do you get per day: 0, 15, 30, 45, 60, 60+?"
I'm looking for a way to visualize this data, where each question is compared to how the respondent is feeling. Any suggestions? I came across a correlation matrix, but it seems I can't have the "how you are feeling" scale on the x-axis and the questions on the y-axis.
```
feeling_scale, smokes, exercise_frequency
5, N, 15
3, Y, 60
5, Y, 0
``` | Try the `lattice` package and maybe box-and-whisker plots.
```
# Make up some data
set.seed(1)
test = data.frame(feeling_scale = sample(1:5, 50, replace=TRUE),
smokes = sample(c("Y", "N"), 50, replace=TRUE),
exercise_frequency = sample(seq(0, 60, 15),
50, replace = TRUE))
library(lattice)
bwplot(exercise_frequency ~ feeling_scale | smokes, test)
```
![enter image description here](https://i.stack.imgur.com/R3pq0.png)
I would also think that a basic barchart would be fine for this type of data.
```
barchart(xtabs(feeling_scale ~ exercise_frequency + smokes, test),
stack=FALSE, horizontal=FALSE, auto.key=list(space = "right"))
```
![enter image description here](https://i.stack.imgur.com/5mLnf.png)
A third option I can think of is the bubble plot, but I'll leave it up to you to decide on how to scale the circles appropriately. It also requires that you first get the frequencies of the different combinations of `exercise_frequency` and `feeling_scale`.
```
test2 = data.frame(with(test,
table(feeling_scale,
exercise_frequency, smokes)))
par(mfrow = c(1, 2))
lapply(split(test2, test2$smokes),
function(x) symbols(x$feeling_scale, x$exercise_frequency,
circles = x$Freq, inches=1/4))
```
![enter image description here](https://i.stack.imgur.com/n1CTG.png) |
In a supervised learning approach, the training data set is already labelled with correct values. So, what is the purpose of evaluating the accuracy of the classifier function after it gets trained with the training data set?
Wouldn't the accuracy be 100%, since all the training data being fed in will be correctly classified for it to learn from?
example:
```
classifier = tf.estimator.LinearClassifier(feature_columns=feature_columns,n_classes=3,model_dir="/tmp/iris_model")
classifier.train(input_fn=input_fn(training_set),steps=1000)
accuracy_score = classifier.evaluate(input_fn=input_fn(test_set), steps=100)["accuracy"]
``` | No, most certainly not! In fact, if you got 100%, you should not trust your model, because you are most definitely [overfitting](https://en.wikipedia.org/wiki/Overfitting), which is a bad thing.
Machine learning models learn from the data that you provide them, as you stated. However, what you **want** them to do is to learn so that they can fit data **beyond** the data you supplied. This is typically checked via some sort of assessment, like the accuracy that you mention.
---
Often, but not always, this is done by holding out some of the data, fitting the model on the rest, and then measuring the accuracy on the held-out data. There are different ways to do this, but 2 common ways are:
1. [Bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics))
2. [Cross-Validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) |
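As a minimal illustration of the held-out evaluation idea, here is a sketch using scikit-learn rather than the TensorFlow estimator from the question (the dataset and model here are just placeholders):

```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)

# single held-out split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("train accuracy:", model.score(X_tr, y_tr))   # usually optimistic
print("test accuracy: ", model.score(X_te, y_te))   # what you actually care about

# 5-fold cross-validation averages several such splits
print("cv accuracy:   ", cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```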
If your logistic regression fit has coefficients with the following attributes, do you look at the values of `Pr(Z>|z|)` are smaller than 0.95 to determine whether that variable is needed at a 5% level of significance?
ie. If `Pr(>|z|)` is 0.964, this variable is not needed at 5% significance.
[![enter image description here](https://i.stack.imgur.com/6UTBa.png)](https://i.stack.imgur.com/6UTBa.png) | Firstly, the p-value given for the Z-statistic would have to be interpreted as how likely it is that a result as extreme or more extreme than that observed would have occured under the null hypothesis. I.e. 0.96 would in principle mean that the data are providing very little evidence that the variable is needed (while small values such as, say, $p\leq 0.05$ would provide evidence for the likely relevance of the variable, as pointed out by others already). However, a lack of clear evidence that the variable is needed in the model to explain the this particular data set would not imply evidence that the variable is not needed. That would require a difference approach and with a very larege standard error one would not normally be able to say that the variable does not have an effect. Also, it is a very bad idea to decide which variables are to be included in a model based on p-values and then fitting the model with or without them as if no model selection had occurred.
Secondly, as also pointed out by others, when you get this huge a coefficient (corresponds to an odds ratio of $e^{-14.29}$) and standard error from logistic regression, you typically have some problem. E.g. the algorithm did not converge or there is complete separation in the data. If your model really did only include an intercept, then perhaps there are no events at all, and all records did not have an outcome? If so, then a standard logistic regression may not be able to tell you a lot. There are some alternatives for such sparse data situations (e.g. a Bayesian analysis including the available prior information). |
I have a series of objects for which I know the probability of belonging to 10 classes. This probability can be null (see example below with 4 classes: A,B,C,D).
```
A B C D
1 0.4 0.0 0.2 0.4
2 0.1 0.3 0.4 0.2
3 0.0 0.0 0.0 1.0
```
In order to get, for each object, information about the quality of the classification, I wanted to calculate Shannon's entropy, but it does not work when one of the classes has a probability equal to zero (log(0) = -Inf).
My question: Is there a measure similar to Shannon's entropy (or an adaptation) which handles probabilities equal to zero? | Ignore the zero probabilities, and carry on summation using the same equation.
```
for each object {
    double e = 0.0;
    for (int i = 0; i < n; i++) {
        if (p[i] != 0)
            e = e + p[i] * Log(p[i], 2);
    }
    e = -1 * e;
    print e;
}
```
The entropy for rows 1, 2 and 3 is 1.52, 1.85, and 0.00, respectively.
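For reference, the same computation as a sketch in Python/NumPy (my own illustration), which handles zero probabilities by masking them out:

```
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                            # drop zero probabilities (0 * log 0 -> 0)
    return -np.sum(p * np.log2(p)) + 0.0    # + 0.0 avoids a "-0.0" result for degenerate rows

rows = [[0.4, 0.0, 0.2, 0.4],
        [0.1, 0.3, 0.4, 0.2],
        [0.0, 0.0, 0.0, 1.0]]
print([round(entropy(r), 2) for r in rows])   # [1.52, 1.85, 0.0]
```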
Is it (always) true that
$$\mathrm{Var}\left(\sum\limits\_{i=1}^m{X\_i}\right) = \sum\limits\_{i=1}^m{\mathrm{Var}(X\_i)} \>?$$ | The answer to your question is "Sometimes, but not in general".
To see this let $X\_1, ..., X\_n$ be random variables (with finite variances). Then,
$$ {\rm var} \left( \sum\_{i=1}^{n} X\_i \right) = E \left( \left[ \sum\_{i=1}^{n} X\_i \right]^2 \right) - \left[ E\left( \sum\_{i=1}^{n} X\_i \right) \right]^2$$
Now note that $(\sum\_{i=1}^{n} a\_i)^2 = \sum\_{i=1}^{n} \sum\_{j=1}^{n} a\_i a\_j $, which is clear if you think about what you're doing when you calculate $(a\_1+...+a\_n) \cdot (a\_1+...+a\_n)$ by hand. Therefore,
$$ E \left( \left[ \sum\_{i=1}^{n} X\_i \right]^2 \right) = E \left( \sum\_{i=1}^{n} \sum\_{j=1}^{n} X\_i X\_j \right) = \sum\_{i=1}^{n} \sum\_{j=1}^{n} E(X\_i X\_j) $$
similarly,
$$ \left[ E\left( \sum\_{i=1}^{n} X\_i \right) \right]^2 = \left[ \sum\_{i=1}^{n} E(X\_i) \right]^2 = \sum\_{i=1}^{n} \sum\_{j=1}^{n} E(X\_i) E(X\_j)$$
so
$$ {\rm var} \left( \sum\_{i=1}^{n} X\_i \right) = \sum\_{i=1}^{n} \sum\_{j=1}^{n} \big( E(X\_i X\_j)-E(X\_i) E(X\_j) \big) = \sum\_{i=1}^{n} \sum\_{j=1}^{n} {\rm cov}(X\_i, X\_j)$$
by the definition of covariance.
Now regarding *Does the variance of a sum equal the sum of the variances?*:
* **If the variables are uncorrelated, yes**: that is, ${\rm cov}(X\_i,X\_j)=0$ for $i\neq j$, then $$ {\rm var} \left( \sum\_{i=1}^{n} X\_i \right) = \sum\_{i=1}^{n} \sum\_{j=1}^{n} {\rm cov}(X\_i, X\_j) = \sum\_{i=1}^{n} {\rm cov}(X\_i, X\_i) = \sum\_{i=1}^{n} {\rm var}(X\_i) $$
* **If the variables are correlated, no, not in general**: For example, suppose $X\_1, X\_2$ are two random variables each with variance $\sigma^2$ and ${\rm cov}(X\_1,X\_2)=\rho$ where $0 < \rho <\sigma^2$. Then ${\rm var}(X\_1 + X\_2) = 2(\sigma^2 + \rho) \neq 2\sigma^2$, so the identity fails.
* **but it is possible for certain examples**: Suppose $X\_1, X\_2, X\_3$ have covariance matrix $$ \left( \begin{array}{ccc} 1 & 0.4 &-0.6 \\
0.4 & 1 & 0.2 \\
-0.6 & 0.2 & 1 \\
\end{array} \right) $$ then ${\rm var}(X\_1+X\_2+X\_3) = 3 = {\rm var}(X\_1) + {\rm var}(X\_2) + {\rm var}(X\_3)$
Therefore **if the variables are uncorrelated** then the variance of the sum is the sum of the variances, but converse is **not** true in general. |
Is it the case that for each virtual address there exists an address on the hard disk? But the hard disk can be larger than the virtual memory?
EDIT: Or can some virtual addresses always map to physical memory, and some map to the hard drive, which is then loaded into physical addresses? | In a typical modern operating system, a page frame of RAM can essentially be one of four things:
* Free. Unallocated. Available for use at any time.
* Used by the operating system for its own internal purposes. An OS has data structures, and they need to live somewhere. Page tables might also fall under this general heading.
* A page that represents part of a memory-mapped file. As well as files memory-mapped by user programs, the text and read-only data of running programs are invariably memory-mapped from their executable files. The other main example is the buffer cache, where the OS keeps commonly-used data from secondary storage around if it looks like it might be needed.
* "Anonymous memory" (this is the Unix term), which essentially means memory allocated by user programs. The reason why it's called "anonymous memory" is that it's memory that doesn't have a "name", in the sense of a file on disk. Anonymous memory includes user heap-allocated memory, call stacks, and memory shared between processes.
We'll ignore the first two for the moment, because they probably aren't relevant to your question, and just look at the last two.
If the page frame of memory represents part of a file, then it needs to be kept in sync with the copy in secondary storage. If it is a read-only mapping (which is the case for program text), then this is trivial.
If the OS needs more free page frames, and it determines that some piece of read-only file mapping is a good candidate for ejection, it can just be removed from everyone's virtual memory mapping and then freed. If it's read-write, then it may need to be written first, if it's a dirty copy.
If the page frame is anonymous memory, then there is no "file" that it is a copy of. However, all modern operating systems have support for *swap space*, where secondary storage can be used to store anonymous memory in a place other than RAM.
Some older operating systems (e.g. 4.3BSD) did essentially the same thing with anonymous memory as it did with memory-mapped files. All allocated memory was a copy, or cache, of swap space. This meant that you needed at least as much secondary storage dedicated to swap as you had RAM. At some point in the late 80s to mid 90s, this tradeoff made a lot of sense.
Modern operating systems don't do this anymore, and can handle having less swap than RAM, including no swap at all. An operating system may still internally pretend that anonymous memory is a kind of memory-mapped page from a pseudo-device called "swap", but the way it's managed is different.
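To see the file-backed vs. anonymous distinction from user space, here is a small sketch using Python's `mmap` module (my own example; the OS-level behaviour described above is independent of it):

```
import mmap
import tempfile

# File-backed mapping: the page frames cache the file's contents,
# and dirty pages are eventually written back to the file.
with tempfile.TemporaryFile() as f:
    f.truncate(4096)
    backed = mmap.mmap(f.fileno(), 4096)
    backed[:5] = b"hello"          # dirties a page of the file mapping
    backed.close()

# Anonymous mapping: no file behind it; if memory gets tight, these
# pages can only stay resident or go to swap.
anon = mmap.mmap(-1, 4096)
anon[:5] = b"world"
anon.close()
```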
**EDIT**
OK, so that's the perspective of RAM. Now let's talk about the perspective of virtual memory.
Virtual memory is typically organised as a bunch of *segments*. A segment is a contiguous collection of pages which represent a contiguous region of a virtual memory "object". Different operating systems have a different idea about what a virtual memory object can be, but this *usually* means either a file, or anonymous memory.
If a user program tries to access a page which is not part of a segment, then this is what Unix famously calls a segmentation violation.
If a user program tries to access a page which *is* part of a segment, then this is a valid operation. (Assuming that the access itself is valid; you can't write to a read-only segment, for example.)
However, within a segment, a page may or may not be "valid", in the sense that its entry in the CPU's page tables currently points to a page frame of RAM. When a user program tries to access it, this causes a page fault, which traps to the operating system so it can intervene.
In the case of anonymous memory, there are lots of reasons why a page might not be valid:
* When memory is allocated by a program, the OS sets up a segment of anonymous memory, but it does *not* have to be mapped to actual RAM yet, and this can be desirable for speed. It is sufficient that enough memory (whether RAM or swap) to satisfy the allocation *exists*. All modern operating systems let user programs control this.
* The page may be swapped out. In this case, the OS will suspend the thread and read it in.
* The page may be copy-on-write. A page might be shared in such a way that if you only ever read it, you can safely share that copy, but writing to it must force a private copy to be made. I'll give a common example in a moment.
* The OS might just want to do it for its own housekeeping purposes. See [this previous answer](https://cs.stackexchange.com/questions/66541/what-is-the-need-for-valid-invalid-bit-in-paged-memory-technique/66544#66544) for some examples of when this might occur.
**NOTE** What follows is an explanation of one of the more common uses of copy-on-write memory. This doesn't really answer your question, but I'm including it because you might find it helpful, but feel free to skip or skim.
The use is in program executables. I'm going to go into a bit of detail as to how executable files are understood by a modern virtual memory system to make this somewhat self-contained, but if you already know this part, skip down to "read-write data".
A modern executable file (e.g. ELF, PE/COFF) is typically arranged as a bunch of segments. A typical layout is the following segments in roughly this order:
* "Text", which is executable code. This segment should be mapped as read-only and executable.
* "Read only data", which is data that the user programs shouldn't modify. This typically includes data structures generated by the compiler for its own purposes (e.g. data structures to implement virtual calls, exception handling, static initialisation, etc), or truly read-only data such as static strings. This segment should be mapped as read-only and not executable.
* "Read-write data", which is data that is initialised but user programs can write to. This segment should be mapped as read-write, not executable, but also copy-on-write. Why copy-on-write? Because writing to that data should not modify the executable file that you're running!
* "[BSS](https://en.wikipedia.org/wiki/.bss)", is static data that should be initialised to zero bytes and therefore doesn't need to take up any space in the executable file itself. This should be mapped as anonymous zero-fill memory.
When you run a program, this is how the operating system sets up a new virtual address space: it maps the segments from the executable in whatever way the executable requests.
By the way, the term "BSS" deserves some explanation. It was a pseudo-operation implemented on a particular assembler for the IBM 704 at some point in the mid-1950s, and it stood for "block starting symbol". By a series of historical accidents, the term has stuck and lives on today. Peter van der Linden suggested that "Better Save Space" might be a more suitable mnemonic. |
I thought this problem had a trivial solution (a couple of for loops and some fancy counters), but apparently it is rather more complicated.
So my question is: how would you write (in C) a function that traverses a square matrix in diagonal strips?
Example:
```
1 2 3
4 5 6
7 8 9
```
Would have to be traversed in the following order:
[3],[2,6],[1,5,9],[4,8],[7] | Straight lines are defined by linear equations. Assuming arrays indexed from zero, as in C, you want the lines $x-y=c$ for $c=2,\,1,\,0,\,-1,\,-2$. This gives something like
```
for c := 2 to -2 step -1
for x := 0 to 2
y := x-c
if 0<=y<=2
print A[x,y]
``` |
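The question asks for C, but as a quick check of the index arithmetic, here is the same loop as a sketch in Python (my own illustration). Note that with this convention `A[x,y]` is read as column `x`, row `y`:

```
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
n = 3

for c in range(n - 1, -n, -1):        # c = 2, 1, 0, -1, -2
    strip = []
    for x in range(n):                # x is the column index
        y = x - c                     # y is the row index
        if 0 <= y < n:
            strip.append(A[y][x])
    print(strip)                      # [3], [2, 6], [1, 5, 9], [4, 8], [7]
```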
I want an algorithm to list all possible ways to map a series of integers $M = \{1,2,3,...,m\}$ to another series of integers $N = \{1,2,3,...,n\}$ where $m > n$, subject to the constraint that only contiguous integers in $M$ map to the same integer in $N$.
E.g.: 5 -> 2:
(1, 2, 3, 4), (5)
(1), (2, 3, 4, 5)
(1, 2, 3), (4, 5)
(1, 2), (3, 4, 5)
This seems like it should be a standard problem with a well known solution, but I can't find it.
To clarify, I am not looking for the total number of solutions, but for pseudocode or the name of a corresponding algorithm to exhaustively list all solutions. | Suppose that the image of your mapping has size $k$. This implies a division of $\{1,\ldots,M\}$ into $k$ contiguous intervals. By identifying each such interval with its largest point, we see that there are $\binom{M-1}{k-1}$ such partitions (since $M$ is always one of the $k$ largest points). For the actual colors chosen, the number of choices is $N(N-1)\cdots(N-k+1) = N!/(N-k)!$. The total number of mappings is thus
$$
\sum\_{k=1}^N \frac{N!}{(N-k)!} \binom{M-1}{k-1}.
$$
Using this argument, it is not too hard to construct an algorithm to list all such mappings; the algorithm completely mirrors the argument. |
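A sketch of that enumeration in Python (my own code, mirroring the counting argument: choose the image size $k$, choose the $k-1$ interior breakpoints, then assign distinct values of $N$ to the blocks; if your problem allows different blocks to share a value, drop the distinctness requirement and adjust accordingly):

```
from itertools import combinations, permutations

def contiguous_mappings(m, n):
    """Yield each mapping as a tuple (f(1), ..., f(m)): integers mapped
    to the same value form contiguous blocks."""
    for k in range(1, min(m, n) + 1):
        for cuts in combinations(range(1, m), k - 1):   # breakpoints between blocks
            bounds = (0,) + cuts + (m,)
            sizes = [bounds[i + 1] - bounds[i] for i in range(k)]
            for values in permutations(range(1, n + 1), k):
                f = []
                for size, v in zip(sizes, values):
                    f.extend([v] * size)
                yield tuple(f)

# m = 5, n = 2: 2 mappings with one block + 8 with two blocks = 10 in total
print(sum(1 for _ in contiguous_mappings(5, 2)))
```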
On the surface, this sounds like a pretty stupid question. However, I've spent the day poking around various sources and can't find an answer.
Let me make the question more clear.
Take this classic image:
[![enter image description here](https://i.stack.imgur.com/QkT6H.png)](https://i.stack.imgur.com/QkT6H.png)
Clearly, the input layer is a vector with 3 components. Each of the three components is propagated to the hidden layer. Each neuron, in the hidden layer, sees the same vector with 3 components -- all neurons see the same data.
So we are at the hidden layer now. From what I read, this layer is normally just ReLus or sigmoids.
Correct me if I'm wrong, but a ReLu is a ReLu. Why would you need 4 of the exact same function, all seeing the exact same data?
What makes the red neurons in the hidden layer different from each other? Are they supposed to be different? I haven't read anything about tuning or setting parameters or perturbing different neurons to have them be different. But if they aren't different...then what's the point?
[Text under the image above](https://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks) says, "A neural network is really just a composition of perceptrons, connected in different ways." They all look connected in the exact same way to me. | To explain using the sample neural network you have provided:
1. Purpose of the multiple inputs: Each input represents a feature of the input dataset.
2. Purpose of the hidden layer: Each neuron learns a different set of weights to represent different functions over the input data.
3. Purpose of the output layer: Each neuron represents a given class of the output (label/predicted variable).
If you used only a single neuron and no hidden layer, this network would only be able to learn linear decision boundaries. To learn non-linear decision boundaries when classifying the output, multiple neurons are required. By learning different functions approximating the output dataset, the hidden layers are able to reduce the dimensionality of the data as well as identify more complex representations of the input data. If they all learned the same weights, they would be redundant and not useful.
The way they will learn different "weights" and hence different functions when fed the same data, is that when backpropagation is used to train the network, the errors represented by the output are different for each neuron. These errors are worked backwards to the hidden layer and then to the input layer to determine the most optimum value of weights that would minimize these errors.
This is why when implementing backpropagation algorithm, one of the most important steps is to randomly initialize the weights before starting the learning. If this is not done, then you would observe a large no. of neurons learning the exact same weights and give sub-optimal results.
---
Edited to answer additional questions:
* The only reason the neurons aren't redundant is because they've all been "trained" with different set of weights, hence, give a different output when presented with the same data. This is achieved by random initialization and back-propagation of errors.
* The outputs from the Orange neurons (use your diagram as an example), are "squashed" by each Blue neuron by applying the sigmoid or Relu function with the trained weights and the output of the orange neurons. |
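Here is a small sketch in NumPy (my own toy example, not the network in the figure) showing the symmetry problem: if two hidden units start with identical weights, they receive identical gradients and stay identical, whereas random initialization breaks the tie:

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))              # 8 samples, 3 input features
y = rng.normal(size=(8, 1))

def one_gradient_step(W1, W2, lr=0.01):
    H = np.tanh(X @ W1)                  # hidden layer activations
    err = H @ W2 - y                     # output error
    dW2 = H.T @ err
    dW1 = X.T @ ((err @ W2.T) * (1 - H ** 2))
    return W1 - lr * dW1, W2 - lr * dW2

# identical initialization: the two hidden units remain clones
W1, W2 = np.ones((3, 2)), np.ones((2, 1))
W1, W2 = one_gradient_step(W1, W2)
print(np.allclose(W1[:, 0], W1[:, 1]))   # True -> redundant units

# random initialization breaks the symmetry
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(2, 1))
W1, W2 = one_gradient_step(W1, W2)
print(np.allclose(W1[:, 0], W1[:, 1]))   # False
```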
I am looking to construct a predictive model where the outcome variable is binary and the input is a time series. To make it more concrete, the model will predict if a customer churns (left the company; coded as 1 or 0) based on the amount they spent with the company in the prior 60 days. So, the data is one customer per row and the columns are an outcome factor (1 or 0) and 60 additional columns for the amount spent in time t-1, t-2, ..., t-60.
Here is some example data:
```
#create the data a series of length 60 and a class ID
sc <- read.table("http://kdd.ics.uci.edu/databases/synthetic_control/synthetic_control.data", header=F, sep="")
#binary class lable
classId <- as.factor(c(rep(0,300), rep(1,300)))
newSc <- data.frame(cbind(classId, sc))
newSc$ID<-seq(1,600,1)
```
The actual model may have many of these series for each customer, so I need to reduce the dimensionality of the data for the series, e.g. instead of using 60 values, I need to reduce this down to a handful. Of course, I can use the mean, min, max etc of the series but I have been reading about using Discrete Fourier Transform.
**Questions:**
1. Is the DFFT in [R](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/fft.html) a proper method to use for my purpose? Any information on how it works would be appreciated.
2. Assuming this R function is correct, how do you extract just the most meaningful coefficients to achieve dimensionality reduction?
ADD:
There seems to be a consensus that using DFFT for dimension reduction is not a wise choice, but it seems that in data mining, this function, DWT and SVD are all commonly used:
[Time Series Mining](http://www.cs.gmu.edu/~jessica/BookChapterTSMining.pdf) starting on page 20. | I'm not sure that I'd classify a Fourier transform as a dimensionality reduction technique *per se*, though you can certainly use it that way.
As you probably know, a Fourier transform converts a **time-domain** function $f(t)$ into a **frequency-domain** representation $F(\omega)$. In the original function, the $t$ usually denotes time: for example, f(1) might denote someone's account balance on the first day, or the volume of the first sample of a song's recording, while f(2) indicates the following day's balance/sample value. However, the argument $\omega$ in $F(\omega)$ usually denotes frequency: F(10) indicates the extent to which the signal fluctuates at 10 cycles/second (or whatever your temporal units are), while F(20) indicates the extent to which it fluctuates twice as fast. The Fourier transform "works" by reconstructing your original signal as a weighted sum of sinusoids (you actually get a "weight", usually called the amplitude, and a "shift", typically called the phase, for each frequency component). The Wikipedia article is a bit complex, but there are a bunch of decent tutorials floating around the web.
The Fourier transform, by itself, doesn't get you any dimensionality reduction. If your signal is of length $N$, you'll get about $N/2$ amplitudes and $N/2$ phases back (1), which is clearly not a huge savings. However, for some signals, most of those amplitudes are close to zero or are *a priori* known to be irrelevant. You could then throw out the coefficients for these frequencies, since you don't need them to reconstruct the signal, which can lead to a considerable savings in space (again, depending on the signal). This is what the linked book is describing as "dimensionality reduction."
A Fourier representation could be useful if:
1. Your signal is periodic, and
2. Useful information is encoded in the periodicity of the signal.
For example, suppose you're recording a patient's vital signs. The electrical signal from the EKG (or the sound from a stethoscope) is a high-dimensional signal (say, 200+ samples/second). However, for some applications, you might be more interested in the subject's heart *rate*, which is likely to be the location of the peak in the FFT, and thus representable by a single digit.
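As a concrete sketch of that idea in Python (my own example with a synthetic signal, not real EKG data), the "heart rate" shows up as the location of the largest peak in the spectrum:

```
import numpy as np

fs = 200                                      # samples per second
t = np.arange(0, 10, 1 / fs)                  # 10 seconds of signal
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.normal(size=t.size)  # ~72 "beats"/minute

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = freqs[spectrum[1:].argmax() + 1]       # skip the DC component
print(f"dominant frequency: {peak:.2f} Hz = {peak * 60:.0f} per minute")
```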
A major limitation of the FFT is that it considers the whole signal at once--it cannot localize changes within it. For example, suppose you look at the coefficient associated with 10 cycles/second. You'll get similar amplitude values if
1. There is consistent, but moderate-sized 10 Hz oscillation in the signal,
2. That oscillation is twice as large in the first half of the signal, but totally absent in the 2nd half, and
3. The oscillation is totally absent in the first half, but twice as large as #1 in the 2nd half.
4. (and so on)
I obviously don't know much about your business, but I'd imagine these could be very relevant features. Another major limitation of the FFT is that it operates on a single time scale. For example, suppose one customer religiously visits your business every other day: he has a "frequency" of 0.5 visits/day (or a period of 2 days). Another customer might also consistently come for two days in a row, take two off, and then visit again for the next two. Mathematically, the second customer is "oscillating" twice as slowly as the first, but I'd bet that these two are equally likely to churn.
A time-frequency approach helps get around these issues by localizing changes in both frequency and time. One simple approach is the short-time FFT, which divides your signal into little windows, and then computes the Fourier transform of each window. This assumes that the signal is stationary within a window, but changes across them. Wavelet analysis is a more powerful (and mathematically rigorous) approach. There are lots of wavelet tutorials around--the charmingly named [Wavelets for Kids](http://gtwavelet.bme.gatech.edu/wp/kidsA.pdf) is a good place to start, even if it is a bit much for all but the smartest actual children. There are several wavelet packages for R, but their syntax is pretty straightforward (see page 3 of the [wavelets package](http://cran.r-project.org/web/packages/wavelets/wavelets.pdf) documentation for one). You need to choose an appropriate wavelet for your application--this ideally looks something like the fluctuation of interest in your signal, but a Morlet wavelet might be a reasonable starting point. Like the FFT, the wavelet transform itself won't give you much dimensionality reduction. Instead, it represents your original signal as a function of two parameters ("scale", which is analogous to frequency, and "translation", which is akin to position in time). Like the FFT coefficients, you can safely discard coefficients whose amplitude is close to zero, which gives you some effective dimensionality reduction.
---
Finally, I want to conclude by asking you if dimensionality reduction is really what you want here. The techniques you've been asking about are all essentially ways to reduce the size of the data while preserving it as faithfully as possible. However, to get the best classification performance we typically want to collect and transform the data to make relevant features as explicit as possible, while discarding everything else.
Sometimes, Fourier or Wavelet analysis is exactly what is needed (e.g., turning a high dimensional EKG signal into a single heart rate value); other times, you'd be better off with completely different approaches (moving averages, derivatives, etc). I'd encourage you to have a good think about your actual problem (and maybe even brainstorm with sales/customer retention folks to see if they have any intuitions) and use those ideas to generate features, instead of blindly trying a bunch of transforms. |
I'm studying discrete math for tomorrow's exam and got stuck on the question below. I tried to google it and couldn't find anything useful.
Prove the following sum is $\Theta (n^2)$ (we have to find $O(n^2)$ and $\Omega (n^2)$)
1. $P(n)= 1+2+3+4\cdots + n$
2. $P(n) = n+(n+1)+(n+2)+\cdots +2n$
Note you **cannot** use any formula; you have to do it by **algebraic manipulation**.
This might be simple but I am not getting any clue right now and I dont have solution of it. | Another approach:
* $n^2/4=(n/2)^2= \underbrace{n/2 + n/2 +\cdots + n/2}\_{n/2 \text{ times}} \le 1+2+\cdots + n \le \underbrace{n + n +\cdots + n}\_{n \text{ times}} = n^2 $
* $n^2= \underbrace{n + n +\cdots + n}\_{n \text{ times}} \le n+(n+1)+\cdots + 2n \le \underbrace{2n + 2n +\cdots + 2n}\_{n \text{ times}} = 2n^2 $ |
Along the same thinking as this statement by [Andrej Bauer](https://cs.stackexchange.com/users/1329/andrej-bauer) in this [answer](https://cs.stackexchange.com/a/9763/268)
>
> The Haskell community has developed a number of techniques inspired by
> category theory, of which [monads](http://en.wikipedia.org/wiki/Monad_%28category_theory%29) are best known but should not be
> confused with [monads](http://en.wikipedia.org/wiki/Monad_%28functional_programming%29).
>
>
>
What is the relation between [functors](http://en.wikipedia.org/wiki/Standard_ML#Module_system) in SML and [functors](http://en.wikipedia.org/wiki/Functor) in Category theory?
Since I don't know about the details of functors in other languages such as Haskell or OCaml, if there is info of value then please also add sections for other languages. | Categories form a (large) category whose objects are the (small) categories and whose morphisms are functors between small categories. In this sense functors in category theory are "higher size morphisms".
ML functors are not functors in the categorical sense of the word. But they are "higher size functions" in a type-theoretic sense.
Think of concrete datatypes in a typical programming language as "small". Thus `int`, `bool`, `int -> int`, etc. are small, classes in Java are small, as are structs in C.
We may collect all the datatypes into a large collection called `Type`. A type constructor, such as `list` or `array` is a function from `Type` to `Type`. So it is a "large" function.
An ML functor is just a slightly more complicated large function: it accepts as an argument several small things and it returns several small things. "Several small things put together" is known as *structure* in ML. In terms of Martin-Löf type theory we have a *universe* `Type` of small types. The large types are usually called *kinds*. So we have:
1. values are elements of types (example: `42 : int`)
2. types are elements of `Type` (example: `int : Type`)
3. ML signatures are kinds (example: [`OrderedType`](http://caml.inria.fr/pub/docs/manual-ocaml/libref/Map.OrderedType.html))
4. type constructors are elements of kinds (example: `list : Type -> Type`)
5. ML stuctures are elements of kinds (example: [`String : OrderedType`](http://caml.inria.fr/pub/docs/manual-ocaml/libref/String.html))
6. ML functors are functions between kinds (example: [`Map.Make : Map.OrderedType -> Make.S`](http://caml.inria.fr/pub/docs/manual-ocaml/libref/Map.Make.html))
Now we can draw an analogy between ML and categories, under which functors correspond to functors. But we also notice that datatypes in ML are like "small categories without morphisms", in other words they are like sets more than they are like categories. We could use an analogy between ML and set theory then:
1. datatypes are like sets
2. kinds are like set-theoretic classes
3. functors are like class-sized functions |
I am confused by the very notion of epochs in neural networks (as well as number of trees in gradient boosting).
Gradient descent method (as most optimization algorithms) keep going until the loss function is "stable", i.e. not changing (within some tolerance) for a certain number of steps.
The **tolerance** and the **number of steps** over which the loss function must be stable before we stop iterating are indeed what I would call *external parameters*, but why should the number of passes over the dataset (a.k.a. **epochs**) or the number of boosted trees be fixed *a priori*?
My feeling is that the training should just keep going until convergence (in a global or local minimum of the loss function). Where am I wrong?
This question came to me when dealing with **early stopping**, where you actually stop the training *before convergence* when a metric computed *out-of-sample* has reached a stationary point. And this is clear to me, since the training is optimizing *in-sample*, but you want to stop before to avoid overfitting. But why you need to specify a number of epochs before training is obscure to me. | Answering with a question: *how would you know that the model has "converged"?* Would you wait for test error equal to zero? What if it would be impossible? If the test error would not decrease for 10 epochs would it mean convergence? Or maybe 100? Or maybe 10000? An hour of training? A week? Or maybe a year? "Not decrease" means the difference equal to zero? Or 0.01 is acceptable? Or rather 1e-7?
We need some stopping rule, and a fixed number of epochs is the simplest one. By fixing the number of epochs you simply decide to wait as long as you can afford to wait. If the algorithm finds the minimum faster, you have wasted some time. If it doesn't, and a longer wait was unacceptable to you, then you'd nonetheless have to stop. Nobody says it's the optimal approach.
I am looking for an algorithm to move a subarray to just before an element (which is not part of that subarray) of the same array, where the last element of the subarray is the last element of the array, using O(1) extra space and O(n) runtime.
e.g. where \*p1 = 5 and \*p2 = 3:
1 2 5 6 7 3 4
becomes
1 2 3 4 5 6 7
This is what I have so far (written in C programming language). Trouble arises when p1 reaches p2.
```
void swap(long* p1, long* p2, long* array_end) {
long* p2_i = p2;
while (p1 < array_end) {
if (p2_i > array_end) {
p2_i = p2;
}
// swap *p1 and *p2_i
long temp = *p1;
*p1 = *p2_i;
*p2_i = temp;
++p1;
++p2_i;
}
}
``` | We can assume without loss of generality that $y$ is the first position, and that the first position is zero. Suppose that there are $n$ elements in the array. Then we want to apply the permutation $\pi(i) = (i+x) \pmod n$. Two elements $i,j$ are in the same cycle of $\pi$ if $i \equiv j \pmod {(n,x)}$, which means that the cycle leaders (the minimum elements of each cycle) are $0,\ldots,(n,x)-1$. In other words, for each $j < (n,x)$, we need to consider the elements
$$
A[j],A[j+x],A[j+2x],\ldots,A[j+(n/(n,x)-1)x]
$$
and rotate them one step to the right. This can be done using constant memory, and takes time $O(n/(n,x))$. Overall, the running time will be $O(n)$.
You don't actually need to determine $(n,x)$. Instead, you can simply count how many elements were touched. For example, you can consider the following pseudocode:
```
touched = 0
current = 0
while touched < n:
origin = current
element = A[current]
repeat:
current = (current + x) mod n
swap A[current] and element
touched = touched + 1
until current = origin
current = current + 1
end while
``` |
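For what it's worth, here is my own Python transcription of the pseudocode above; it rotates the whole array right by $x$ (in the original problem you would apply it to the relevant subarray), keeping the variable names of the pseudocode:

```py
def rotate_right_by_x(A, x):
    """Rotate the list A right by x positions in place with O(1) extra space,
    following the pseudocode above (variable names kept the same)."""
    n = len(A)
    if n == 0:
        return A
    x %= n
    touched = 0
    current = 0
    while touched < n:
        origin = current
        element = A[current]
        while True:                         # the repeat ... until loop
            current = (current + x) % n
            A[current], element = element, A[current]
            touched += 1
            if current == origin:
                break
        current += 1
    return A

print(rotate_right_by_x([1, 2, 3, 4, 5, 6], 2))   # [5, 6, 1, 2, 3, 4]
```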
I am working with a large dataset of behavioral data that I am treating (post-hoc) as a time-series experimental design to look for reliable change in a single dependent variable as a result of a treatment. The data comes from users' interaction with a website over 10 years. There is an overall improvement from time 1 to time 2, p<.001. But there is a [regression to the mean effect](http://en.wikipedia.org/wiki/Regression_toward_the_mean) (also see [here](http://www-users.york.ac.uk/~mb55/talks/regmean.htm)) such that those with low DV at time 1 increase at time 2, and those with high DV at time 1 decrease at time 2. (This can be seen clearly on graphs.)
I don't know how to proceed with the analysis. Can I quantify the regression to the mean effect and from that determine how it is that the treatment effect exceeds the regression effect?
Here are some additional details about the study:
* the data for the study come from a support group website where users write about life problems. volunteer counsellors read the entries and respond to the users with support and advice. there is 10 years worth of data; n=~200,000.
* my research is a [natural experiment](http://en.wikipedia.org/wiki/Natural_experiment) because it works with the website data that was not collected with research in mind. biggest issue with that: no control group.
* the volunteer counsellors also tag (privately) the user's written entry with topic, attribute, and severity labels. I conducted a survey of the counsellors asking them to rate the relative severity of these tags (e.g. 'depression-panic'~0, 'school-worry'~2, 'relationships-happy'~5). The survey results and the tags applied to each writing entry were used to derive a simple proxy for the user's state at time of writing. this proxy was normalized across the sample, has a quasi-normal distribution, and it is treated as the IV.
* the treatment is simply use of the site (writing about life problems & receiving social support), so DV from writing entry 1 to entry 2. the main effect is that IV does increase overall from entry 1 to 2, but as described there is a regression to the mean effect.
* after establishing a main effect, I am interested in looking into a variety of interacting variables: user's language choices, details of website interaction, timing of counsellor response, etc. | Update: if you have a true regression to the mean effect, because both it and treatment effects co-occur over time **and have the same directionality for people needing treatment**, the regression to the mean is confounded with treatment, and so you will not be able to estimate the "true" treatment effect.
This is an interesting set of data, and I think you can do some analyses with it, however you will not be able to treat the method used to generate the data as an experiment. I think you have what [is outlined on Wikipedia as a natural experiment](http://en.wikipedia.org/wiki/Natural_experiment) and, while useful, these types of studies have some issues not found in controlled experiments. In particular, natural experiments suffer from a lack of control over independent variables, so cause-and-effect relationships may be impossible to identify, although it is still possible to draw conclusions about correlations.
In your case, I would be worried about [confounding variables](http://en.wikipedia.org/wiki/Confound). This is a list of possible factors that could influence the results:
1. Possibly your largest confound is that you don't know what else is going on in users' lives away from the website. On the basis of what they write on the website, one user may realise how bad their situation is, they may draw on resources around them (family, friends) for support and therefore the help is not limited to that received on the website. Another user, perhaps due to their life issues, may be alienated from family and friends and the website is all the support they have. We may expect that the time-to-positive-outcome will be different for these two users, but we can't distinguish between them.
2. I'm assuming that the website users accessed the website when they wanted to (which is great for them) but means that the results you have for their problems may not be reflective of the number and severity of their life issues, because I assume they didn't access the site regularly (unlike face-to-face counselling appointments which tend to be scheduled regularly).
3. The level of detail in their writing will be reflective of their written style, and is not likely to be equivalent to what they would express in a face-to-face counselling session. There are also no non-verbal cues which a face-to-face counsellor would also use to help assess the state of their client. Were the changes over time more pronounced in users who wrote less and had less tags applied to their content?
4. If there were a number of lower-score and high-score tags in the same post (e.g. someone is having problems with study and they're in a happy relationship), how was the proxy affected by this, for example was a simple average score taken across all tag scores for each post? This could be affecting your scores if there is a particular very negative issue that the person is facing, but much of what else they mention is positive. In a face-to-face setting, the counsellor can focus on the negative and find out, for example, why the person is so depressed even though much of their life seems to be going well, but in the website situation you only have what they write. So it is possible that the way users have written their posts means that taking an overall proxy may not work too well.
5. If the website is for users with life problems, I'm not sure why you wish to include users who scored as being very (happy? successful?) in their first post. These people do not seem to be the target audience for the website and I'm not sure of why you would want to include them in the same group as people who had issues. For example, the happy(?) people do not seem to need treatment, so there is no reason I can think of why the website intervention would be suitable for them. I'm not sure if users were assigned to the website as a treatment after, for example, seeing a counsellor. If that was the case, I would wonder why people who were upset enough to see a counsellor would then do a very positive post on a website designed to help them improve their mental state. Assuming there was this pre-counselling stage, maybe all they needed was that one counselling appointment. Regardless, I think this is quite a different group to the ones that gave initial posts that showed life issues, and for the moment I would omit them as they seem to be a "sampling error". Normally when assessing treatment effects, we only select people who appear to need treatment (e.g. we don't include happy contented people in trials of antidepressants).
6. There may be some social desirability bias in the user posts.
7. Have you undertaken any inter-rater reliability testing with the tags? If not, could some of the issues with scoring be related to bias with some tags? In particular, there could be some quality issues when the counsellor has just started and is learning how to tag posts, just like there are quality issues when any of us learn something new. Also, did some counsellors tend to place more tags, and did some tend to place few tags? Your analysis requires tag consistency across all the posts.
These are just suggestions based on your post, and I could well have misunderstood some of your study, or made some incorrect assumptions. I think that the factors you mention at the end of your post - user's language choices, details of website interaction, timing of counsellor response - are all very important.
Best wishes with your study. |
I would like to ask about a special case of the question “[Deciding if a given NC0 circuit computes a permutation](https://cstheory.stackexchange.com/questions/8664/deciding-if-a-given-mathsfnc0-circuit-computes-a-permutation)” by QiCheng that has been left unanswered.
A Boolean circuit is called an NC0*k* circuit if each output gate syntactically depends on at most *k* input gates. (We say that an output gate *g* *syntactically depends on* an input gate *g*′ when there is a directed path from *g*′ to *g* in the circuit as viewed as a directed acyclic graph.)
In the aforementioned question, QiCheng asked about the complexity of the following problem, where *k* is a constant:
*Instance*: An NC0*k* circuit with *n*-bit input and *n*-bit output.
*Question*: Does the given circuit compute a permutation on {0, 1}*n*? In other words, is the function computed by the circuit a bijection from {0, 1}*n* to {0, 1}*n*?
As Kaveh commented on that question, it is easy to see that the problem is in coNP. In an answer, I showed that the problem is coNP-complete for *k*=5 and that it is in P for *k*=2.
>
> **Question**. What is the complexity for *k*=3?
>
>
>
*Clarification on May 29, 2013*: “A permutation on {0, 1}*n*” means a bijective mapping from {0, 1}*n* to itself. In other words, the problem asks whether every *n*-bit string is the output of the given circuit for some *n*-bit input string. | This problem with $k=3$ is coNP-hard (and therefore coNP-complete).
To prove this, I will reduce from 3-SAT to the complement of this problem (for a given $NC\_3^0$ circuit, does the circuit enact a non-bijective function).
First a preliminary definition that will be helpful:
We define a labeled graph to be a directed graph, some of whose edges are labeled with literals, with the property that every vertex has either one unlabeled incoming edge, one labeled incoming edge, or two unlabeled incoming edges.
The reduction
-------------
Suppose we have a 3-SAT formula $\phi$ consisting of $m$ clauses, each containing three literals. The first step is to construct a labeled graph $G$ from $\phi$. This labeled graph contains one copy of the following gadget (sorry for the terrible diagram) for each clause in $\phi$. The three edges labeled L1, L2, and L3 are instead labeled with the literals in the clause.
```
|
| |
| |
| O<-----\
| ^ |
| | |
| | |
| /----->O |
| | ^ |
| | | |
| | | |
| O O O
| ^ ^ ^
| | | |
| |L1 |L2 |L3
| | | |
| O O O
| ^ ^ ^
| | | |
| | | |
| \------O------/
| ^
| |
| |
| O
| ^
| |
|
```
The gadgets (one for each clause) are all arranged in one big cycle with the bottom of one gadget linking to the top of the next.
Note that this arrangement of gadgets does in fact form a labeled graph (every vertex has indegree 1 or 2 with only edges leading to vertices of indegree 1 being labeled).
From the formula $\phi$ and the labeled graph $G$ (which was constructed from $\phi$) we next construct an $NC\_3^0$ circuit (this will conclude the reduction). The number of inputs and outputs for this circuit is $n+v$ where $n$ is the number of variables in $\phi$ and $v$ is the number of vertices in $G$. One input and one output is assigned to each variable in $\phi$ and to each vertex in $G$. If $x$ is some variable in $\phi$ then we will refer to the input and output bits associated with $x$ as $x\_{in}$ and $x\_{out}$. Furthermore, if $l$ is a literal with $l = x$ then we define $l\_{in} = x\_{in}$ and if $l$ is a literal with $l = \neg x$ then we define $l\_{in} = \neg x\_{in}$. Finally, if $v$ is some vertex in $G$ then we will refer to the input and output bits associated with $v$ as $v\_{in}$ and $v\_{out}$.
There are four types of output bits:
1) For every variable $x$ in $\phi$, $x\_{out} = x\_{in}$. Note that this output depends on only one input bit.
2) For every vertex $v$ in the labeled graph with exactly one incoming edge $(u, v)$ such that the edge is unlabeled, $v\_{out} = v\_{in} \oplus u\_{in}$. Note that this output depends on only two input bits.
3) For every vertex $v$ in the labeled graph with exactly one incoming edge $(u, v)$ such that the edge is labeled $l$, $v\_{out} = v\_{in} \oplus (u\_{in} \land l\_{in})$. Note that this output depends on only three input bits since $l\_{in}$ depends only on $x\_{in}$ for whatever variable $x$ is used in the literal $l$.
4) For every vertex $v$ in the labeled graph with exactly two incoming edges $(u, v)$ and $(w, v)$, $v\_{out} = v\_{in} \oplus (u\_{in} \lor w\_{in})$. Note that this output depends on only three input bits.
Since in all cases the output depends on only three inputs, the circuit we build is in $NC\_3^0$ as desired.
Correctness proof case 1: $\phi$ is satisfiable
-----------------------------------------------
Suppose there exists a satisfying assignment for $\phi$. Then construct the following two sets of values for the inputs.
1) The inputs associated with the variables of $\phi$ are given the values of the satisfying assignment. All of the inputs associated with vertices of $G$ are given the value 0.
2) The inputs associated with the variables of $\phi$ are given the values of the satisfying assignment. Consider the vertices in one clause gadget in $G$. If the value of a label is 0 (under the satisfying assignment), the input associated with the vertex at the target endpoint of the edge labeled with that label is given a value of 0. If both L1 and L2 have value 0 then the second-top vertex in the gadget (as shown above) is also given a value of 0. All other vertices are given a value of 1.
We wish to show that these two sets of inputs yield identical outputs and therefore that the $NC\_3^0$ circuit does not encode a permutation.
Consider the four types of output bits:
1) For every variable $x$ in $\phi$, $x\_{out} = x\_{in}$. Since $x\_{in}$ is the same for both sets of inputs, outputs of this form will always be the same across the two sets of inputs.
2) For every vertex $v$ in the labeled graph with exactly one incoming edge $(u, v)$ such that the edge is unlabeled, $v\_{out} = v\_{in} \oplus u\_{in}$. Examining the gadget whose copies make up $G$, we see that all such edges consist only of pairs of vertices whose input values are always 1s under the second set of inputs. Thus $v\_{out} = v\_{in} \oplus u\_{in} = 0 \oplus 0 = 0$ under the first set of inputs and $v\_{out} = v\_{in} \oplus u\_{in} = 1 \oplus 1 = 0$ under the second set of inputs. Thus outputs of this form will always be the same (and in fact zero) across the two sets of inputs.
3) For every vertex $v$ in the labeled graph with exactly one incoming edge $(u, v)$ such that the edge is labeled $l$, $v\_{out} = v\_{in} \oplus (u\_{in} \land l)$. If $l$ is false under the assignment then $v\_{in}$ is 0 under both sets of inputs; then $v\_{out} = v\_{in} \oplus (u\_{in} \land l) = v\_{in} \oplus (u\_{in} \land 0) = v\_{in} = 0$ under both sets of inputs. If $l$ is true under the assignment, $v\_{in}$ is 0 under the first set of inputs and 1 under the second; also note that in the gadget, the only labeled edges $(u, v)$ have vertices $u$ which always have $u\_{in} = 1$ under the second set of inputs. As a result we see that under both sets of inputs, $u\_{in} = v\_{in}$ whenever $l$ is true; then $v\_{out} = v\_{in} \oplus (u\_{in} \land l) = v\_{in} \oplus (u\_{in} \land 1) = v\_{in} \oplus u\_{in} = v\_{in} \oplus v\_{in} = 0$. Thus outputs of this form will always be the same (and in fact zero) across the two sets of inputs.
4) For every vertex $v$ in the labeled graph with exactly two incoming edges $(u, v)$ and $(w, v)$, $v\_{out} = v\_{in} \oplus (u\_{in} \lor w\_{in})$. There are two such vertices in each gadget. The top vertex and the second-from top vertex. We consider those two cases separately.
4a) When $v$ is the second-top vertex in a gadget, $u$ and $w$ are the two target endpoints of the edges labeled L1 and L2. Under the first set of inputs, $v\_{out} = v\_{in} \oplus (u\_{in} \lor w\_{in}) = 0 \oplus (0 \lor 0) = 0$. Under the second set of inputs, $u\_{in}$ is 0 iff L1 has value 0 under the satisfying assignment (aka $u\_{in} = L1$); similarly, $w\_{in}$ is 0 iff L2 has value 0 under the satisfying assignment (aka $w\_{in} = L2$); and finally, $v\_{in}$ is defined to be 0 iff both L1 and L2 have value 0 (aka $v\_{in} = L1 \lor L2$). Thus under the second set of inputs, $v\_{out} = v\_{in} \oplus (u\_{in} \lor w\_{in}) = (L1 \lor L2) \oplus (L1 \lor L2) = 0$. Thus outputs of this form will always be the same (and in fact zero) across the two sets of inputs.
4b) When $v$ is the top vertex in a gadget, $u$ is the second-top vertex and $w$ is the target endpoint of the edge labeled L3. Under the first set of inputs, $v\_{out} = v\_{in} \oplus (u\_{in} \lor w\_{in}) = 0 \oplus (0 \lor 0) = 0$. Under the second set of inputs, $u\_{in}$ is 0 iff both L1 and L2 have value 0 (aka $u\_{in} = L1 \lor L2$); $w\_{in}$ is 0 iff L3 has value 0 (aka $w\_{in} = L3$); and finally $v\_{in} = 1$. Thus under the second set of inputs, $v\_{out} = v\_{in} \oplus (u\_{in} \lor w\_{in}) = 1 \oplus ((L1 \lor L2) \lor L3) = 1 \oplus (L1 \lor L2 \lor L3) = 1 \oplus 1 = 0$ where the equality $(L1 \lor L2 \lor L3) = 1$ holds by definition in a satisfying assignment for every clause. Thus outputs of this form will always be the same (and in fact zero) across the two sets of inputs.
Clearly, we see that the outputs are the same for two different sets of inputs and therefore that the $NC\_3^0$ circuit enacts a non-bijective function.
Correctness proof case 2: $\phi$ is unsatisfiable
-------------------------------------------------
Suppose now that there exists no satisfying assignment for $\phi$. Then assume for the sake of contradiction that some two different sets of inputs lead to the $NC\_3^0$ circuit having the same output.
Clearly, the two inputs must have the same values for $x\_{in}$ for every variable $x$ in $\phi$. Thus we may now unambiguously refer to the value of $x$.
Define $S$ to be the set of vertices $v$ in $G$ such that $v\_{in}$ is different in the two sets of input values.
We will prove the following lemmas below:
Lemma 1: If in some gadget all three vertices at the target endpoints of the labeled edges are not in $S$ then no vertices above those three in the gadget are in $S$.
Lemma 2: If in some gadget the top vertex is not in $S$ then in the next gadget up no vertex is in $S$.
Since the gadgets form a loop, this implies that if in any gadget all three vertices at the target endpoints of the labeled edges are not in $S$ then no vertex in $G$ is in $S$ (in other words $S$ is empty).
However, consider a gadget associated with a clause $(L1 \lor L2 \lor L3)$ that is not satisfied. In this gadget all three labels have value 0. We know that edge $(u, v)$ labeled $L$ must satisfy $v\_{out} = v\_{in} \oplus (u\_{in} \land L)$, but $L = 0$, so $v\_{out} = v\_{in} \oplus (u\_{in} \land L) = v\_{in} \oplus (u\_{in} \land 0) = v\_{in} \oplus 0 = v\_{in}$. Thus since the output is the same for both inputs, the values of $v\_{in}$ must also be the same across the two sets of inputs. In other words, we have shown that $v$ is not in $S$. Thus we see that in this particular gadget, the three vertices at the target endpoints of the labeled edges are not in $S$.
As a result, we conclude that $S$ is empty. This however, implies that between the two sets of inputs, there were no differences, which contradicts the assumption that these input sets are different. As a result, we see that the function enacted by the $NC\_3^0$ circuit is injective and therefore a bijection.
All that's left is to prove the lemmas.
To do this, we note that for every type of vertex in $G$ (indegree 1 with label, indegree 1 without label, and indegree 2), if all incoming edges come from vertices not in $S$ then the vertex in question is also not in $S$. This is because in all three cases $v\_{out} = v\_{in} \oplus X$ where $X$ is some function of the inputs associated with variables and/or vertices with edges to $v$. Since all such vertices are not in $S$ by assumption, the value of $X$ must be the same under both sets of inputs. Therefore $v\_{in} = v\_{out} \oplus X$ is also the same under both sets of inputs. In other words $v$ is not in $S$.
Now that we have the rule that a vertex is not in $S$ whenever all of its predecessors are not in $S$, the lemmas follow simply by applying the rule repeatedly to the gadget diagram above. |
Consider this language: $$L = \{w \in \{a,c\}^\* \mid 3\nmid\#a(w)\land\#c(w)>0\}$$.
Here is an automaton for the first part of the language, but I do not know how to devise and attach the second part of the condition $\#c(w) > 0$, because $c$ can occur in any state.
[![enter image description here](https://i.stack.imgur.com/7JvPL.png)](https://i.stack.imgur.com/7JvPL.png) | The product construction is a way of taking two DFAs for languages $L\_1,L\_2$, and constructing a new DFA for the language $L\_1 \cup L\_2$ or $L\_1 \cap L\_2$. In your case, $L\_1$ consists of all words in which the number of $a$s is not a multiple of 3, and $L\_2$ consists of all words containing at least one $c$. You have already constructed a DFA for $L\_1$. Construct one for $L\_2$, and use the product construction to construct one for $L\_1 \cap L\_2$. |
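If it helps to see this concretely, below is a small Python sketch of the product construction for exactly these two languages; the tuple/dictionary encoding of a DFA is just one convenient choice of mine, not part of any standard library:

```py
from itertools import product

def intersect_dfas(dfa1, dfa2, alphabet="ac"):
    """Product construction: states are pairs of states, a pair accepts
    iff both components accept, and transitions act componentwise."""
    states1, start1, accept1, delta1 = dfa1
    states2, start2, accept2, delta2 = dfa2
    states = list(product(states1, states2))
    start = (start1, start2)
    accept = {(p, q) for p in accept1 for q in accept2}
    delta = {((p, q), a): (delta1[(p, a)], delta2[(q, a)])
             for (p, q) in states for a in alphabet}
    return states, start, accept, delta

# L1: the number of a's is not a multiple of 3 (track #a mod 3, accept if nonzero)
d1 = ([0, 1, 2], 0, {1, 2},
      {**{(s, "a"): (s + 1) % 3 for s in range(3)},
       **{(s, "c"): s for s in range(3)}})
# L2: the word contains at least one c
d2 = ([0, 1], 0, {1},
      {(0, "a"): 0, (0, "c"): 1, (1, "a"): 1, (1, "c"): 1})

states, start, accept, delta = intersect_dfas(d1, d2)

def accepts(word):
    state = start
    for ch in word:
        state = delta[(state, ch)]
    return state in accept

print(accepts("ac"))    # True: one a (3 does not divide 1) and at least one c
print(accepts("aaa"))   # False: no c, and 3 divides #a
```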
I'm trying to solve a problem and I'm not used to how little data I'm given. I'm trying to compare results from an actual system I've created, and a value given to me. I've found that the value of Model A's mean is 10.871, for instance, and I have also found the standard deviation, sample size, and half width of Model A. I'm supposed to compare Model A's mean to another mean from the "real" mean, which is appx 14. I'm not given the standard deviation, sample size, or any information about the "real" mean, other than it being 14. I need to use the level of significance alpha = 0.05, but none of the t-tests or z-tests seem to work since I don't know standard deviation or sample size for the "real" mean. How can I carry out my test to see whether or not the "real" mean is consistent with the mean I've found from Model A? | You are comparing one sample (mean = 10.871, sd known) with a specified mean (14). If you really want to do a null hypothesis significance test (many experienced statisticians deprecate them) and can assume [approximate] Normality, then a one-sample Student's t-test is appropriate, you can Google that for more detail but briefly:
The test statistic is $ t = \frac{m-μ}{s/\sqrt n } $
where
m = sample mean
μ = hypothesized population mean
s = sample standard deviation
n = sample size
The p-value depends on the alternative hypothesis
$H\_1: m > μ$ then $P(t\_{n-1} \ge t)$
$H\_1: m < μ$ then $P(t\_{n-1} \le t)$
$H\_1: m \ne μ$ then $2 \cdot P(t\_{n-1} \ge |t|)$ |
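For illustration, the computation is a few lines in Python with scipy; the sd and n below are made up (the question does not state them), so substitute your own:

```py
from scipy import stats

m, mu = 10.871, 14.0   # Model A's mean and the hypothesized "real" mean
s, n = 1.2, 30         # hypothetical sd and sample size: substitute your own values

t = (m - mu) / (s / n ** 0.5)
p_two_sided = 2 * stats.t.sf(abs(t), df=n - 1)
print(t, p_two_sided)
```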
I have a problem embracing the benefits of labeling a model factor as random for a few reasons. To me it appears like in almost all cases the optimal solution is to treat all of the factors as fixed.
First, the distinction of fixed vs random is quite arbitrary. The standard explanation is that, if one is interested in the particular experimental units per se, then one should use fixed effects, and, if one is interested in the population represented by the experimental units, one should use random effects. This is not of much help because it implies one can alternate between fixed and random views even if the data and experimental design remain the same. Also, this definition promotes an illusion that, if a factor is labeled as random, the inference drawn from the model is somehow more applicable to the population than in case when the factor is labeled as fixed. Finally, [Gelman shows that the fixed-random distinction is confusing](http://andrewgelman.com/2005/01/25/why_i_dont_use/) even at the definition level because there are four more definitions of what fixed and random effects are.
Second, the estimation of mixed models is quite complicated. As opposed to a "purely fixed” model, there are more than a few ways to obtain the p-values. Prof. Bates who implemented REML estimation in the lme4 package in R went so far as to refuse to report the p-values altogether.
Third, there is a murky issue of how many implicit parameters are introduced by a random factor. The following example is my adaptation of that in Burnham & Anderson, *Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach*. From the bias-variance tradeoff perspective, the role of random effects can be illustrated as follows. Consider a one-way ANOVA with $K$ treatments and $K$ main factor effects, of which $K - 1$ are estimable. The error term has $\mathcal N(0, \sigma^2)$ distribution. If the number of observations is fixed, the bias-variance tradeoff will deteriorate as $K$ goes up. Suppose we say that the $K$ main effects are drawn from $\mathcal N(0, \sigma\_K)$ distribution. The corresponding model will have a complexity that is somewhere in between the fixed (overfitted) version and the underfitted model that contains the intercept only. The number of effective parameters in the fixed model is
$$1 \:\:\mathrm{intercept} + (K - 1) \:\:\mathrm{main\: effects} + 1 \:\:\sigma = K + 1.$$
The number of effective parameters in the random model is at least three: $ \mathrm{intercept}, \sigma, \sigma\_K$. In addition, the random model has a number of “hidden” parameters implied by the distributional (normal in this case) restriction imposed on the main effects.
In particular, if there is a factor with two levels, it doesn’t make sense to call it random, even if we know for sure that its levels have been sampled at random from some population. That is because the fixed effect version has three parameters, and the random effect version has over three parameters. In this case, the random model turns out to have more complexity than the fixed version. Apparently, a switch from the fixed to random version is more grounded for larger $K$. However, the number of “hidden” parameters in the random model is unknown, so it is impossible to compare the fixed and random versions based on the information criteria such as AIC. Therefore, while this example illuminates the contribution of random effects (the possibility of a better bias-variance tradeoff), it also shows that it is hard to say when it is justifiable to relabel the factor from fixed to random.
None of the above problems are present in a “purely fixed” model. Therefore, I am willing to ask:
1. Can anyone provide an example when something very bad happened when a random factor was used as if it were fixed? I believe there should be some simulation studies that address the issue explicitly.
2. Is there a proven quantitative method to decide when it makes sense to switch from fixed to random label? | ***1.*** A famous example in psychology and linguistics is described by Herb Clark (1973; following Coleman, 1964): "The language-as-fixed-effect fallacy: A critique of language statistics in psychological research."
Clark is a psycholinguist discussing psychological experiments in which a sample of research subjects make responses to a set of stimulus materials, commonly various words drawn from some corpus. He points out that the standard statistical procedure used in these cases, based on repeated-measures ANOVA, and referred to by Clark as $F\_1$, treats participants as a random factor but (perhaps implicitly) treats the stimulus materials (or "language") as fixed. This leads to problems in interpreting the results of hypothesis tests on the experimental condition factor: naturally we want to assume that a positive result tells us something about both the population from which we drew our participant sample as well as the theoretical population from which we drew the language materials. But $F\_1$, by treating participants as random and stimuli as fixed, only tells us about the effect of the condition factor across other similar participants responding to *the exact same stimuli*. Conducting the $F\_1$ analysis when both participants and stimuli are more appropriately viewed as random can lead to Type 1 error rates that substantially exceed the nominal $\alpha$ level--usually .05--with the extent depending on factors such as the number and variability of stimuli and the design of the experiment. In these cases, the more appropriate analysis, at least under the classical ANOVA framework, is to use what are called quasi-$F$ statistics based on ratios of *linear combinations of* mean squares.
Clark's paper made a splash in psycholinguistics at the time, but failed to make a big dent in the wider psychological literature. (And even within psycholinguistics the advice of Clark became somewhat distorted over the years, as documented by Raaijmakers, Schrijnemakers, & Gremmen, 1999.) But in more recent years the issue has seen something of a revival, due in large part to statistical advances in mixed-effects models, of which the classical mixed model ANOVA can be seen as a special case. Some of these recent papers include Baayen, Davidson, & Bates (2008), Murayama, Sakaki, Yan, & Smith (2014), and (*ahem*) Judd, Westfall, & Kenny (2012). I'm sure there are some I'm forgetting.
***2.*** Not exactly. There *are* methods of getting at whether a factor is better included as a random effect or not in the model at all (see e.g., Pinheiro & Bates, 2000, pp. 83-87; ***however*** see Barr, Levy, Scheepers, & Tily, 2013). And of course there are classical model comparison techniques for determining if a factor is better included as a fixed effect or not at all (i.e., $F$-tests). But I think that determining whether a factor is better considered as fixed or random is generally best left as a conceptual question, to be answered by considering the design of the study and the nature of the conclusions to be drawn from it.
One of my graduate statistics instructors, Gary McClelland, liked to say that perhaps the fundamental question of statistical inference is: ***"Compared to what?"*** Following Gary, I think we can frame the conceptual question that I mentioned above as: *What is the reference class of hypothetical experimental results that I want to compare my actual observed results to?* Staying in the psycholinguistics context, and considering an experimental design in which we have a sample of Subjects responding to a sample of Words that are classified in one of two Conditions (the particular design discussed at length by Clark, 1973), I will focus on two possibilities:
1. The set of experiments in which, for each experiment, we draw a new sample of Subjects, a new sample of Words, and a new sample of errors from the generative model. Under this model, Subjects and Words are both random effects.
2. The set of experiments in which, for each experiment, we draw a new sample of Subjects, and a new sample of errors, but we *always use the same set of Words*. Under this model, Subjects are random effects but Words are fixed effects.
To make this totally concrete, below are some plots from (above) 4 sets of hypothetical results from 4 simulated experiments under Model 1; (below) 4 sets of hypothetical results from 4 simulated experiments under Model 2. Each experiment views the results in two ways: (left panels) grouped by Subjects, with the Subject-by-Condition means plotted and tied together for each Subject; (right panels) grouped by Words, with box plots summarizing the distribution of responses for each Word. All experiments involve 10 Subjects responding to 10 Words, and in all experiments the "null hypothesis" of no Condition difference is true in the relevant population.
***Subjects and Words both random: 4 simulated experiments***
![both_random](https://i.stack.imgur.com/AcK7x.png)
Notice here that in each experiment, the response profiles for the Subjects and Words are totally different. For the Subjects, we sometimes get low overall responders, sometimes high responders, sometimes Subjects that tend to show large Condition differences, and sometimes Subjects that tend to show small Condition difference. Likewise, for the Words, we sometimes get Words that tend to elicit low responses, and sometimes get Words that tend to elicit high responses.
***Subjects random, Words fixed: 4 simulated experiments***
![subs_random](https://i.stack.imgur.com/XhIBz.png)
Notice here that across the 4 simulated experiments, the Subjects look different every time, but the responses profiles for the Words look basically the same, consistent with the assumption that we are reusing the same set of Words for every experiment under this model.
Our choice of whether we think Model 1 (Subjects and Words both random) or Model 2 (Subjects random, Words fixed) provides the appropriate reference class for the experimental results we actually observed can make a big difference to our assessment of whether the Condition manipulation "worked." We expect more chance variation in the data under Model 1 than under Model 2, because there are more "moving parts." So if the conclusions that we wish to draw are more consistent with the assumptions of Model 1, where chance variability is relatively higher, but we analyze our data under the assumptions of Model 2, where chance variability is relatively lower, then our Type 1 error rate for testing the Condition difference is going to be inflated to some (possibly quite large) extent. For more information, see the References below.
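To make the Type 1 error point concrete, here is a small simulation sketch of my own (in Python for brevity, and not taken from the papers below): data are generated under Model 1 with a true Condition effect of exactly zero, but analyzed with the by-subject test that implicitly treats Words as fixed, and the rejection rate comes out well above the nominal .05.

```py
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def f1_rejection_rate(n_sims=2000, n_subj=10, n_words=10,
                      sd_subj=1.0, sd_word=1.0, sd_err=1.0, alpha=0.05):
    """Generate data under Model 1 (Subjects AND Words random, true Condition
    effect = 0), then analyze with the by-subject 'F1' test that averages over
    Words within Condition, i.e. implicitly treats Words as fixed."""
    cond_of_word = np.repeat([0, 1], n_words // 2)   # half the words per condition
    rejections = 0
    for _ in range(n_sims):
        subj_eff = rng.normal(0, sd_subj, n_subj)    # new Subjects each experiment
        word_eff = rng.normal(0, sd_word, n_words)   # new Words each experiment
        y = (subj_eff[:, None] + word_eff[None, :]
             + rng.normal(0, sd_err, (n_subj, n_words)))
        m0 = y[:, cond_of_word == 0].mean(axis=1)    # by-subject means, condition 0
        m1 = y[:, cond_of_word == 1].mean(axis=1)    # by-subject means, condition 1
        _, p = stats.ttest_rel(m0, m1)
        rejections += p < alpha
    return rejections / n_sims

print(f1_rejection_rate())   # typically far above the nominal .05
```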
**References**
Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of memory and language, 59(4), 390-412. [PDF](http://webcom.upmf-grenoble.fr/LIP/Perso/DMuller/M2R/R_et_Mixed/documents/Baayen-2008-JML.pdf)
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255-278. [PDF](http://idiom.ucsd.edu/~rlevy/papers/barr-etal-2013-jml.pdf)
Clark, H. H. (1973). The language-as-fixed-effect fallacy: A critique of language statistics in psychological research. Journal of verbal learning and verbal behavior, 12(4), 335-359. [PDF](http://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/clark-jvlvb-1973.pdf)
Coleman, E. B. (1964). Generalizing to a language population. Psychological Reports, 14(1), 219-226.
Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: a new and comprehensive solution to a pervasive but largely ignored problem. Journal of personality and social psychology, 103(1), 54. [PDF](http://jakewestfall.org/publications/JWK.pdf)
Murayama, K., Sakaki, M., Yan, V. X., & Smith, G. M. (2014). Type I Error Inflation in the Traditional By-Participant Analysis to Metamemory Accuracy: A Generalized Mixed-Effects Model Perspective. Journal of Experimental Psychology: Learning, Memory, and Cognition. [PDF](http://jakewestfall.org/Murayama_et_al_2014_JEPLMC.pdf)
Pinheiro, J. C., & Bates, D. M. (2000). Mixed-effects models in S and S-PLUS. Springer.
Raaijmakers, J. G., Schrijnemakers, J., & Gremmen, F. (1999). How to deal with “the language-as-fixed-effect fallacy”: Common misconceptions and alternative solutions. Journal of Memory and Language, 41(3), 416-426. [PDF](http://www.raaijmakers.edu.fmg.uva.nl/PDFs/Raaijmakers%20et%20al%20MinF%20paper.pdf) |
Arora and Barak show that $\mathsf{AM}$ can be expressed as $\mathsf{BP}\cdot \mathsf{NP}$ i.e the set of languages that have randomized reductions to 3SAT. $\mathsf{MA}$ is also a natural randomized generalization of $\mathsf{NP}$ in that you replace the deterministic verifier by a randomized one.
Is there a sense in which one of these is a closer fit in the "P is to BPP as NP is to ?" relation? | Here is a point for AM: For a complexity class C, almost-C is defined to be the set of languages that are in C relative to almost every oracle (almost = Probability 1). Then almost-P=BPP and almost-NP=AM. |
I would like to develop a test to identify which variables in my data set have a variation higher than the "average variability".
I've been struggling with this for days, and I have also looked in vain for help in other forums.
I have data from biological experiments, that look like this:
```
v1 2 1.8 1.5 1.9 2.1 1.78 1.95 2.0 2.1
v2 2 100 -5.2
v3 1 -1.3 -2 2.3
v4 1 1.5 1.6 1.9 2.1 2.0 2.4 -1.1 2.3 1.5 1.6 1.9 1.8 1.6
```
These represent gene expressions.
Now, I would expect all values of each variable (gene) to be more or less similar, since the values are repeated measurements of the same gene.
Having a variable with such a huge difference, as v2, doesn't make sense, because the repeated measurements should give consistent values. Therefore, it has to come from a methodological error and the variable (gene) has to be discarded.
I was looking for a method (possibly a statistical test) in R which could identify the "average variability" among my samples and report which variables (genes) have significantly greater variability. This means that for these genes my data are not good enough to estimate the expression, and I have to discard them.
I would really appreciate any suggestions/links/advice/methods on tests I could use for my purpose. | I just looked at this.
**My approach was:**
* compute the mean, standard deviation, and count for each set of samples
* compute the critical t-threshold given alpha, the sample size, and the nature of the fit (quadratic). I was using excel so I used "[T.inv](https://support.office.com/en-us/article/T-INV-function-2908272b-4e61-4942-9df9-a25fec9b0e2e?ui=en-US&rs=en-US&ad=US)".
* transform the data by subtracting the mean, then dividing by the standard deviation, then comparing the absolute value to the t-threshold.
* If it is above the threshold then it is classified as an outlier
Note: alpha is a parameter. If you want to make your fit "wider" then use a smaller value. If you want more data to be classified as possible outlier then use a higher value. It is exceptionally good if you can take the time to understand what "alpha" means in the statistical sense of this threshold.
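A rough Python translation of this procedure (my own sketch; it ignores the fit-related degrees-of-freedom detail and uses the two-sided critical t in place of Excel's T.INV):

```py
import numpy as np
from scipy import stats

def flag_outliers(values, alpha=0.05):
    """Standardize each measurement against its gene's own mean and sd, then
    flag values whose absolute standardized deviation exceeds the two-sided
    critical t value (the analogue of the Excel T.INV step above)."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    return z > t_crit

# illustrative values loosely based on the question's v1 and v2 rows
print(flag_outliers([2, 1.8, 1.5, 1.9, 2.1, 1.78, 1.95, 2.0, 2.1]))
print(flag_outliers([2, 100, -5.2]))
```

Notice that the three-replicate row flags nothing even though 100 is clearly absurd; with so few values the standard deviation is blown up by the outlier itself, which ties in with the warning about small samples below.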
**I notice you have rows with 3 samples - that is dangerous:**
Having two samples and computing the standard deviation is like having one sample and computing the mean. The math gives you a number, but it is as sample-sparse as mathematics can go and still give a value - it is on the edge of the cliff of oblivion and is not very informative. Get more samples.
There are rules of thumb that say 5, 10, 30, 100 or 300 are sufficient. If you are going below 5 then you had best have a great defense for why the math isn't bad. |
While reading about model explainability and model accountability, the term surrogate model keeps appearing. I had an idea about what it is but it does not seem to make sense anymore:
* **What is a surrogate models?**
* **Why are surrogate models good for explainability?** | Here is a solution based on @NoahWeber's and @etiennedm's answers. It is based on a juxtaposition of two splittings: 1) a repeated k fold splitting (to get training customers and testing customers), and 2) a time series split applied within each k fold split.
This strategy is based on a time series' splitting using a custom CV split iterator on dates (whereas usual CV split iterators are based on sample size / folds number).
An implementation within sklearn ecosystem is provided.
Let's restate the problem.
Say you have 10 periods and 3 customers indexed as follows :
```py
example_data = pd.DataFrame({
'index': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
'customer': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
'date': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
})
```
We do a repeated k fold with 2 folds and 2 iterations (4 folds in total), and within each k fold split we split again with a time series split such that each time series split has 2 folds:
kfold split 1 : training customers are [0, 1] and testing customers are [2]
kfold split 1 time series split 1 : train indices are [0, 1, 2, 3, 10, 11, 12, 13] and test indices are [24, 25, 26]
kfold split 1 time series split 2 : train indices are [0, 1, 2, 3, 4, 5, 6, 10, 11, 12, 13, 14, 15, 16] and test indices are [27, 28, 29]
kfold split 2 : training customers are [2] and testing customers are [0, 1]
kfold split 2 time series split 1 : train indices are [20, 21, 22, 23] and test indices are [4, 5, 6, 14, 15, 16]
kfold split 2 time series split 2 : train indices are [20, 21, 22, 23, 24, 25, 26] and test indices are [7, 8, 9, 17, 18, 19]
kfold split 3 : training customers are [0, 2] and testing customers are [1]
kfold split 3 time series split 1 : train indices are [0, 1, 2, 3, 20, 21, 22, 23] and test indices are [14, 15, 16]
kfold split 3 time series split 2 : train indices are [0, 1, 2, 3, 4, 5, 6, 20, 21, 22, 23, 24, 25, 26] and test indices are [17, 18, 19]
kfold split 4 : training customers are [1] and testing customers are [0, 2]
kfold split 4 time series split 1 : train indices are [10, 11, 12, 13] and test indices are [4, 5, 6, 24, 25, 26]
kfold split 4 time series split 2 : train indices are [10, 11, 12, 13, 14, 15, 16] and test indices are [7, 8, 9, 27, 28, 29]
Usually, cross-validation iterators, such as those in sklearn, are based on the number of folds, i.e., on the sample size in each fold. These are unfortunately not suited to our kfold / time series split with real data. In fact, nothing guarantees that the data are perfectly distributed over time and over groups (as we assumed in the previous example).
For instance, the 4th observation in the customer training sample (say customers 0 and 1 in kfold split 1 in the example) may come after the 4th observation in the test sample (say customer 2). This violates condition 1.
Here is one CV splits strategy based on dates by fold (not by sample size or the number of folds).
Say you have the previous data but with random dates. Define an initial\_training\_rolling\_months and a rolling\_window\_months, say for example 6 months and 1 month.
kfold split 1 : training customers are [0, 1] and testing customers are [2]
kfold split 1 time series split 1 : train sample is the first 6 months of customers [0, 1] and test sample is the month starting after the train sample for customers [2]
kfold split 1 time series split 2 : train sample is the first 7 months of customers [0, 1] and test sample is the month starting after the train sample for customers [2]
Below a suggestion of implementation to build such a time series split iterator.
The returned iterator is a list of tuples that you can use as another cross-validation iterator.
With simple generated data, as in our previous example, to debug the fold generation, note that customer 1's (resp. customer 2's) data begins at index 366 (resp. 732).
```py
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
df = generate_happy_case_dataframe()
grouped_ts_validation_iterator = build_grouped_ts_validation_iterator(df)
gridsearch = GridSearchCV(estimator=RandomForestClassifier(), cv=grouped_ts_validation_iterator, param_grid={})
gridsearch.fit(df[['feat0', 'feat1', 'feat2', 'feat3', 'feat4']].values, df['label'].values)
gridsearch.predict([[0.1, 0.2, 0.1, 0.4, 0.1]])
```
With randomly generated data like in @etiennedm's example (to debug split, I covered simple cases such as when the test sample begins before the training samples or just after).
```py
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
df = generate_fake_random_dataframe()
grouped_ts_validation_iterator = build_grouped_ts_validation_iterator(df)
gridsearch = GridSearchCV(estimator=RandomForestClassifier(), cv=grouped_ts_validation_iterator, param_grid={})
gridsearch.fit(df[['feat0', 'feat1', 'feat2', 'feat3', 'feat4']].values, df['label'].values)
gridsearch.predict([[0.1, 0.2, 0.1, 0.4, 0.1]])
```
The implementation :
```py
import pandas as pd
import numpy as np
from sklearn.model_selection import RepeatedKFold
def generate_fake_random_dataframe(start=pd.to_datetime('2015-01-01'), end=pd.to_datetime('2018-01-01')):
fake_date = generate_fake_dates(start, end, 500)
df = pd.DataFrame(data=np.random.random((500,5)), columns=['feat'+str(i) for i in range(5)])
df['customer_id'] = np.random.randint(0, 5, 500)
df['label'] = np.random.randint(0, 3, 500)
df['dates'] = fake_date
df = df.reset_index() # important since df.index will be used as split index
return df
def generate_fake_dates(start, end, n):
start_u = start.value//10**9
end_u = end.value//10**9
return pd.DatetimeIndex((10**9*np.random.randint(start_u, end_u, n, dtype=np.int64)).view('M8[ns]'))
def generate_happy_case_dataframe(start=pd.to_datetime('2019-01-01'), end=pd.to_datetime('2020-01-01')):
dates = pd.date_range(start, end)
length_year = len(dates)
lenght_df = length_year * 3
df = pd.DataFrame(data=np.random.random((lenght_df, 5)), columns=['feat'+str(i) for i in range(5)])
df['label'] = np.random.randint(0, 3, lenght_df)
df['dates'] = list(dates) * 3
df['customer_id'] = [0] * length_year + [1] * length_year + [2] * length_year
return df
def build_grouped_ts_validation_iterator(df, kfold_n_split=2, kfold_n_repeats=5, initial_training_rolling_months=6, rolling_window_months=1):
rkf = RepeatedKFold(n_splits=kfold_n_split, n_repeats=kfold_n_repeats, random_state=42)
CV_iterator = list()
for train_customers_ids, test_customers_ids in rkf.split(df['customer_id'].unique()):
print("rkf training/testing with customers : " + str(train_customers_ids)+"/"+str(test_customers_ids))
this_k_fold_ts_split = split_with_dates_for_validation(df=df,
train_customers_ids=train_customers_ids,
test_customers_ids=test_customers_ids,
initial_training_rolling_months=initial_training_rolling_months,
rolling_window_months=rolling_window_months)
print("In this k fold, there is", len(this_k_fold_ts_split), 'time series splits')
for split_i, split in enumerate(this_k_fold_ts_split) :
print("for this ts split number", str(split_i))
print("train ids is len", len(split[0]), 'and are:', split[0])
print("test ids is len", len(split[1]), 'and are:', split[1])
CV_iterator.extend(this_k_fold_ts_split)
print('***')
return tuple(CV_iterator)
def split_with_dates_for_validation(df, train_customers_ids, test_customers_ids, initial_training_rolling_months=6, rolling_window_months=1):
start_train_df_date, end_train_df_date, start_test_df_date, end_test_df_date = \
fetch_extremas_train_test_df_dates(df, train_customers_ids, test_customers_ids)
start_training_date, end_training_date, start_testing_date, end_testing_date = \
initialize_training_dates(start_train_df_date, start_test_df_date, initial_training_rolling_months, rolling_window_months)
ts_splits = list()
while not stop_time_series_split_decision(end_train_df_date, end_test_df_date, end_training_date, end_testing_date, rolling_window_months):
# The while implies that if testing sample is les than one month, then the process stops
this_ts_split_training_indices = fetch_this_split_training_indices(df, train_customers_ids, start_training_date, end_training_date)
this_ts_split_testing_indices = fetch_this_split_testing_indices(df, test_customers_ids, start_testing_date, end_testing_date)
if this_ts_split_testing_indices:
# If testing data is not empty, i.e. something to learn
ts_splits.append((this_ts_split_training_indices, this_ts_split_testing_indices))
start_training_date, end_training_date, start_testing_date, end_testing_date =\
update_testing_training_dates(start_training_date, end_training_date, start_testing_date, end_testing_date, rolling_window_months)
return ts_splits
def fetch_extremas_train_test_df_dates(df, train_customers_ids, test_customers_ids):
train_df, test_df = df.loc[df['customer_id'].isin(train_customers_ids)], df.loc[df['customer_id'].isin(test_customers_ids)]
start_train_df_date, end_train_df_date = min(train_df['dates']), max(train_df['dates'])
start_test_df_date, end_test_df_date = min(test_df['dates']), max(test_df['dates'])
return start_train_df_date, end_train_df_date, start_test_df_date, end_test_df_date
def initialize_training_dates(start_train_df_date, start_test_df_date, initial_training_rolling_months, rolling_window_months):
start_training_date = start_train_df_date
# cover the case where test consumers begins long after (initial_training_rolling_months after) train consumers
if start_training_date + pd.DateOffset(months=initial_training_rolling_months) < start_test_df_date:
start_training_date = start_test_df_date - pd.DateOffset(months=initial_training_rolling_months)
end_training_date = start_training_date + pd.DateOffset(months=initial_training_rolling_months)
start_testing_date = end_training_date
end_testing_date = start_testing_date + pd.DateOffset(months=rolling_window_months)
return start_training_date, end_training_date, start_testing_date, end_testing_date
def stop_time_series_split_decision(end_train_df_date, end_test_df_date, end_training_date, end_testing_date, rolling_window_months):
no_more_training_data_stoping_condition = end_training_date + pd.DateOffset(months=rolling_window_months) > end_train_df_date
no_more_testing_data_stoping_condition = end_testing_date + pd.DateOffset(months=rolling_window_months) > end_test_df_date
stoping_condition = no_more_training_data_stoping_condition or no_more_testing_data_stoping_condition
return stoping_condition
def update_testing_training_dates(start_training_date, end_training_date, start_testing_date, end_testing_date, rolling_window_months):
start_training_date = start_training_date
end_training_date += pd.DateOffset(months=rolling_window_months)
start_testing_date += pd.DateOffset(months=rolling_window_months)
end_testing_date += pd.DateOffset(months=rolling_window_months)
return start_training_date, end_training_date, start_testing_date, end_testing_date
def fetch_this_split_training_indices(df, train_customers_ids, start_training_date, end_training_date):
train_df = df.loc[df['customer_id'].isin(train_customers_ids)]
in_training_period_df = train_df.loc[(train_df['dates'] >= start_training_date) & (train_df['dates'] < end_training_date)]
this_ts_split_training_indices = in_training_period_df.index.to_list()
return this_ts_split_training_indices
def fetch_this_split_testing_indices(df, test_customers_ids, start_testing_date, end_testing_date):
test_df = df.loc[df['customer_id'].isin(test_customers_ids)]
in_testing_period_df = test_df.loc[(test_df['dates'] >= start_testing_date) & (test_df['dates'] < end_testing_date)]
this_ts_split_testing_indices = in_testing_period_df.index.to_list()
return this_ts_split_testing_indices
``` |
Given a list of numbers as `L`, how do you find the minimum value `m` such that `L` can be made into a strictly ascending list by adding or subtracting values from `[0,m]` from each element of `L` except the last? (Strictly ascending means the new list can't have any duplicates.)
Example #1:
```
for L = [5, 4, 3, 2, 8] the minimum value for `m` is 3.
5 - 3 = 2 # subtract by 3
4 - 1 = 3 # subtract by 1
3 + 1 = 4 # add by 1
2 + 3 = 5 # add by 3
8 untouched # nothing
result = [2, 3, 4, 5, 8]
```
Example #2:
```
for L = [5, 4, 3, 0, 8], minimum value for `m` is 4
```
**NOTE**: I'm not looking for a complete solution just give me few thoughts and clue. | Below I try to prove that the greedy algorithm ($\mathcal{A}$) given by [@norbertpy](https://cs.stackexchange.com/a/41507/4911) (and [@Bergi](https://stackoverflow.com/a/19076632/1833118)) is correct. *Please check it.*
---
**Problem Definition:**
The algorithm $\mathcal{A}$ of @norbertpy is for a variant of the original problem:
>
> To find the minimum positive number $2m$ such that for each item in the array, **adding** a number from $[0, 2m]$ can lead to a strictly ascending array?
>
>
>
The solutions to these two problems can be reduced to each other (by $\pm m$).
Note that I have ignored the "except the last" part.
---
**A lemma for the property of the algorithm $\mathcal{A}$:**
Let $L'[n]$ be the last element of the resulting strictly ascending list of any feasible solution to $L[1 \ldots n]$. We first claim that:
>
> **Lemma:** $\mathcal{A}$ gives the *smallest* value of $L'[n]$ among all the feasible solutions to $L[1 \ldots n]$.
>
>
>
This lemma can be proved by mathematical induction on the length $n$ of $L$.
---
**Now we prove that $\mathcal{A}$ always gives the optimal solution.**
**Base case:** $n = 1$ and $n = 2$ are trivial.
**Inductive Hypothesis:** Suppose that for any $L[1 \cdots n-1]$ of length $n-1$, the algorithm $\mathcal{A}$ gives us the optimal solution $m$.
**Inductive Step:** Consider the $n$-th iteration of the greedy algorithm $\mathcal{A}$: it compares `a = head + 1 - L[n]` with $m$, and takes $M = \max(m,a)$ as the feasible solution to $L[1 \cdots n]$.
We aim to prove that
>
> $M$ is the optimal solution to $L[1 \cdots n]$.
>
>
>
Suppose, by contradiction, that there is another feasible solution to $L[1 \cdots n]$, denoted by $M' < M$.
First, $m \le M'$: otherwise, $M'$ would be a *smaller* feasible solution to $L[1 \cdots n-1]$, which contradicts the induction hypothesis.
Thus we have $m \le M' < M$. Because $M = \max(m,a)$, this forces $M = a$, and hence $M' < a$.
By the lemma above, $L'[n-1]$ in the solution corresponding to $M'$ is not less than that in the solution corresponding to $m$. However, $L'[n]$ (for $M'$) is less than that for $M$ (because $M' < a$). According to how $a$ is chosen in $\mathcal{A}$, the resulting array (for $M'$) is not strictly ascending. Thus $M'$ cannot be a solution. Contradiction. |
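For reference, here is my own short Python transcription of the greedy algorithm $\mathcal{A}$ analyzed above (it follows the answer in ignoring the "except the last" caveat and assumes integer entries):

```py
def min_m(L):
    """Greedy algorithm A: scan left to right, keeping `head`, the value the
    previous element ends up at.  Element i needs an addition of
    a = head + 1 - L[i] when it is not already larger than head; the answer to
    the [0, 2m] variant is the largest such a, and the original m is half of it."""
    head = L[0]
    M = 0
    for v in L[1:]:
        a = head + 1 - v
        if a > 0:
            M = max(M, a)
            head = head + 1   # raise this element just above the previous one
        else:
            head = v          # already larger: no addition needed
    return M / 2

print(min_m([5, 4, 3, 2, 8]))   # 3.0, matching example #1
print(min_m([5, 4, 3, 0, 8]))   # 4.0, matching example #2
```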
How do you predict data that contains multiple levels of nearly constant data?
Simple linear models even with weights (exponential) did not cut it.
I experimented with some clustering and then robust linear regression but my problem is that the relationship between these levels of constant data is lost.
Here is the data:
```
structure(list(date = structure(c(32L, 10L, 11L, 14L, 5L, 6L,
1L, 2L, 12L, 9L, 19L, 13L, 4L, 17L, 15L, 3L, 18L, 7L, 8L, 21L,
16L, 22L, 28L, 29L, 30L, 26L, 27L, 31L, 20L, 23L, 24L, 25L), .Label = c("18.02.13",
"18.03.13", "18.11.13", "19.08.13", "19.11.12", "20.01.13", "20.01.14",
"20.02.14", "20.05.13", "20.08.12", "20.09.12", "21.04.13", "21.07.13",
"21.10.12", "21.10.13", "22.04.14", "22.09.13", "22.12.13", "23.06.13",
"25.01.15", "25.03.14", "25.05.14", "26.02.15", "26.03.15", "26.04.15",
"26.10.14", "26.11.14", "27.07.14", "27.08.14", "28.09.14", "28.12.14",
"29.03.10"), class = "factor"), amount = c(-4, -12.4, -9.9, -9.9,
-9.94, -14.29, -9.97, -9.9, -9.9, -9.9, -9.9, -9.9, -9.9, -9.9,
-9.9, -9.9, -9.9, -4, -4, -11.9, -11.9, -11.9, -11.9, -11.98,
-11.98, -11.9, -13.8, -11.64, -11.96, -11.9, -11.9, -11.9)), .Names = c("date",
"amount"), class = "data.frame", row.names = c(NA, -32L))
```
[![regression for multiple levels](https://i.stack.imgur.com/DWypm.jpg)](https://i.stack.imgur.com/DWypm.jpg)
revisiting rollmedian
=====================
@Gaurav - you asked: Have you tried building a model with moving averages? as ARIMA didn't work - I did not try it. But I have now.
```
zoo::rollmedian(rollTS, 5)
```
Seems to get the pattern of the data. However I wonder now how to reasonably forecast it. Is this possible?
[![rollmedian](https://i.stack.imgur.com/dPhK8.png)](https://i.stack.imgur.com/dPhK8.png) | Your data is a classic example of data where there is more noise than signal, and it is therefore unpredictable: no matter what data mining / time series approach you use, it is going to give you poor predictions unless you know a priori, from domain knowledge, what *caused* the level shifts and outliers. Also, techniques like ARIMA and exponential smoothing need equally spaced time series, which you do not have in your example. That said, two reasonable approaches:
1. Model it deterministically; again, this needs knowledge of the outliers.
2. Use the last value for all future predictions (this is simple exponential smoothing); see the sketch below. |
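Regarding point 2, a minimal sketch in R (reusing the `rollTS` series from the question; the 12-step horizon is an arbitrary choice):

```
library(zoo)
sm <- rollmedian(rollTS, 5)             # the rolling median already computed in the question
fc <- rep(tail(coredata(sm), 1), 12)    # carry the last smoothed value forward as the forecast
```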
I have a Cox model (estimated with coxph), built using "some" variables. There is an extra variable that I have not measured, so it is not in my dataset, but I do know the effect of that variable on cohorts similar to mine from the literature (let's say the effect is well-established). If I want to "adjust" my model by including this effect, how do I do it?
My previous conviction was that this will lead to a biased model as we do not know about the interactions of this variable with the others in the cohort. But I have seen some papers do something similar, such as [Wishart 2012](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3425970/), where the authors adjusted their prognostic model for an extra variable that wasn't available in their cohort.
I am open to changing the base model (to a parametric or Bayesian), if that makes any difference.
Edit (for context): My goal is similar to Predict, to create a nomogram, which ideally I would like to adjust for an effect that is not in my dataset (large population-level data, >10,000 observations, >500 events), but is well-established in the literature. | I'll provide a step-by-step answer. In case you want to try it yourself, please stop at each point and try from there.
1. Define what you are looking for. What is an unbiased estimator?
>
> An unbiased estimator, $t(x)$, of a quantity $\theta$ (which in this case is $\theta = e^{-2\lambda}$), is such that:
> $$ \mathbb{E}t(x) = \theta = e^{-2\lambda} $$
>
>
>
2. Plug-in information available to start solving the problem
>
> we know $t(x)=(-1)^{x}$ and $x\sim \text{Poisson}(\lambda)$. Then we want to compute the expected value of this quantity:
> $$ \mathbb{E}\left[(-1)^x\right]. $$
> Recall that expected value (for a discrete random variable) is obtained by summing all possible values the random variable can take multiplied by their probability.
> Recall further that if $x\sim \text{Poisson}(\lambda)$ then $P(x=i)=\frac{\lambda^{i}e^{-\lambda}}{i!}$. Thus, just using definitions:
> $$ \mathbb{E}\left[(-1)^x\right] = \sum\_{i=0}^{\infty} (-1)^{i}P(x=i) $$
>
>
>
3. Solve the above.
>
> Using the Poisson law, we have
> $$ \sum\_{i=0}^{\infty} (-1)^{i}P(x=i) = \sum\_{i=0}^{\infty} (-1)^{i}\frac{\lambda^{i}e^{-\lambda}}{i!}=e^{-\lambda}-\lambda e^{-\lambda}+\lambda^{2} \frac{e^{-\lambda}}{2!}-\lambda^{3} \frac{e^{-\lambda}}{3!}+\ldots$$
>
>
>
4. What is the right-hand side term equal to?
>
> Note that we can further simplify by taking common factor $e^{-\lambda}$
> $$ e^{-\lambda}\left(1-\lambda+ \frac{\lambda^{2}}{2!}- \frac{\lambda^{3}}{3!}+\ldots\right) $$
> Recall Taylor series? You can use them to rewrite $e^{-\lambda}$ around $0$ (which is actually the definition of exponential):
> $$ e^{-\lambda}= e^{-0} +\frac{\partial e^{-\lambda}}{\partial\lambda}|\_{\lambda=0}\frac{(\lambda-0)}{1!}+\frac{\partial^{2} e^{-\lambda}}{\partial\lambda^{2}}|\_{\lambda=0}\frac{(\lambda-0)^{2}}{2!} \ldots$$
> solving the derivatives, you see that the above is:
> $$ e^{-\lambda}= 1 -\lambda+\frac{\lambda^{2}}{2!}-\frac{\lambda^{3}}{3!} \ldots$$
> So, we use this fact to rewrite:
> $$ e^{-\lambda}\left(1-\lambda+ \frac{\lambda^{2}}{2!}- \frac{\lambda^{3}}{3!}+\ldots\right)=e^{-\lambda}e^{-\lambda}=e^{-2\lambda}. $$
>
>
>
Remember where we started from?
$$ \mathbb{E}t(x) = e^{-\lambda}\left(1-\lambda+ \frac{\lambda^{2}}{2!}- \frac{\lambda^{3}}{3!}+\ldots\right)=e^{-\lambda}e^{-\lambda}=e^{-2\lambda} $$
Done! |
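As a quick numerical sanity check, here is a small sketch in R (the value of $\lambda$ is arbitrary): the sample mean of $(-1)^x$ should land close to $e^{-2\lambda}$.

```
set.seed(123)
lambda <- 1.3
x <- rpois(1e6, lambda)     # a large Poisson sample
mean((-1)^x)                # sample mean of the estimator t(x) = (-1)^x
exp(-2 * lambda)            # the target quantity, about 0.0743
```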
I am looking for a theoretically rigorous textbook on Bayesian econometrics, assuming a solid understanding of frequentist econometrics.
I would like to suggest one work per answer, so that recommendations can be voted up or down individually. | ***[Bayesian Econometrics](http://rads.stackoverflow.com/amzn/click/0470845678)*, by Gary Koop (2003)** is a modern rigorous coverage of the field that I recommend. It is in addition completed by a book of exercises: ***[Bayesian Econometric Methods (Econometrics Exercises)](http://rads.stackoverflow.com/amzn/click/0521671736)* by Gary Koop, Dale J. Poirier and Justin L. Tobias (2007)**. |
Let $X$ follow a uniform distribution and $Y$ follow a normal distribution. What can be said about $\frac X Y$? Is there a distribution for it?
I found the ratio of two normals with mean zero is Cauchy. | Let random variable $X \sim \text{Uniform}(a,b)$ with pdf $f(x)$:
[![enter image description here](https://i.stack.imgur.com/2E9fd.png)](https://i.stack.imgur.com/2E9fd.png)
where I have assumed $0<a<b$ (this nests the standard $\text{Uniform}(0,1)$ case). [ Different results will be obtained if say parameter $a<0$, but the procedure is exactly the same. ]
Further, let $Y \sim N(\mu, \sigma^2)$, and let $W=1/Y$ with pdf $g(w)$:
[![enter image description here](https://i.stack.imgur.com/c51it.png)](https://i.stack.imgur.com/c51it.png)
Then, we seek the pdf of the product $V = X\*W$, say $h(v)$, which is given by:
[![enter image description here](https://i.stack.imgur.com/4vGsL.png)](https://i.stack.imgur.com/4vGsL.png)
where I am using *mathStatica*'s `TransformProduct` function to automate the nitty-gritties, and where `Erf` denotes the Error function: <http://reference.wolfram.com/language/ref/Erf.html>
All done.
**Plots**
Here are two plots of the pdf:
* Plot 1: $\mu = 0$, $\sigma = 1$, $b = 3$ ... and ... $a = 0, 1, 2$
[![enter image description here](https://i.stack.imgur.com/eCw0b.png)](https://i.stack.imgur.com/eCw0b.png)
* Plot 2: $\mu = {0,\frac12,1}$, $\sigma = 1$, $a=0$, $b = 1$
[![enter image description here](https://i.stack.imgur.com/FH1NH.png)](https://i.stack.imgur.com/FH1NH.png)
**Monte Carlo check**
Here is a quick Monte Carlo check of the Plot 2 case, just to make sure no errors have crept in:
$\mu = \frac12$, $\sigma = 1$, $a=0$, $b = 1$
[![enter image description here](https://i.stack.imgur.com/hJeEi.png)](https://i.stack.imgur.com/hJeEi.png)
The blue line is the empirical Monte Carlo pdf, and the red dashed line is the theoretical pdf $h(v)$ above. Looks fine :) |
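For readers without *mathStatica*, a rough plain-R sketch of the same Monte Carlo check for the Plot 2 case ($\mu = \frac12$, $\sigma = 1$, $a=0$, $b=1$); the ratio has heavy tails because $Y$ can be close to $0$, so the plot is truncated for display:

```
set.seed(1)
v <- runif(1e6, 0, 1) / rnorm(1e6, mean = 0.5, sd = 1)   # V = X / Y = X * W
plot(density(v[abs(v) < 10]),
     main = "Empirical pdf of V (truncated to |v| < 10)")
```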
I have a question about the possible outcome of a trained model. Imagine that I would like to classify 2 different models of Ferrari and the dataset of these 2 models is small (for example, a few hundred images per model).
In the [Keras blog](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), the issue was discussed, but in the example they are classifying dogs and cats, 2 classes that are very general and distinct from each other, and there are many cats and dogs already included in the original ImageNet model (let's say car/bus/truck are not included in the ImageNet output classes).
Will it be better to start training the model from COCO/ImageNet weights, or to start from the weights of a model previously trained to classify cars only? | Using a pretrained model as a starting point would give better results even if the classes you want to classify are not present in the original dataset. The first layers of a CNN learn primitive features such as lines and circles, and these are relevant for other image classification tasks and for classes not present in the ImageNet dataset too, so I don't think the answers stating that there is no transfer learning if the class is not present are true. Low-level representations are roughly the same for almost every object. That is the reason why transfer learning is successful in image-related tasks.
But of course you should adapt the pretrained network to your task by replacing the original classification layer with a fully connected layer that classifies your classes, a binary classifier in this case, and you should also train the layers close to the end so that they extract high-level representations from your dataset. As I said before, the low-level representations already learned by the first layers are fine, so make the weights of the last few layers and the fully connected layer trainable and freeze everything else. This is called fine-tuning.
The model you already have might also have been trained in this way, so you should compare the results of both to get an idea of which one is better. |
In an underdetermined linear regression, where the parameters to estimate outnumber the observations, why is it the case that the residual sum of squares, defined as:
$$
RSS = \sum\_{i=1}^n (y\_i-\widehat{y}\_i)^2
$$
equals zero? | That need not be the case.
It is routine to say that a polynomial of degree greater than the sample size will give a perfect fit to the data. However, this will not be the case if two points have equal feature ($X$) values but unequal outcome ($y$) values. No function can accommodate that. At least one of those points will have a fitted value $\hat y\_i$ not equal to its observed $y\_i$ value, thus a strictly positive squared residual.
**EDIT**
For example...
```
X Y
1 1
2 2
3 3
3 4
4 5
5 6
```
No matter what a model predicts for `x = 3`, that model will make a prediction that is incorrect, resulting in a positive sum of squared residuals.
This really has nothing to do with $p>n$. If you have two instances with equal feature values and unequal outcomes, the model cannot ever get both right.
```
x1 x2 x3 x4 x5 x6 y
0 0 0 0 0 0 1.1
0 0 1 1 1 0 0.8
0 0 0 0 1 1 0.3
0 1 0 1 0 0 1.1
0 1 0 1 0 0 0.5
```
The final two rows have identical feature values yet different outcomes, so a model will get at least one of those wrong, leading to a positive sum of squared residuals. |
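To make this concrete, here is a small sketch in R using the first example above; even the richest polynomial the data allow cannot drive the residual sum of squares to zero:

```
x <- c(1, 2, 3, 3, 4, 5)
y <- c(1, 2, 3, 4, 5, 6)
fit <- lm(y ~ poly(x, 4))   # degree is capped by the 5 distinct x values
sum(residuals(fit)^2)       # 0.5: both rows with x = 3 are fitted at 3.5
```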
From my understanding of the proof that the halting problem is not computable, this problem is not computable because if we have a program P(x) which computes whether the program x halts or not, we get a paradox when giving P as an input to P itself: P(P) tries to decide whether P halts using P itself.
So my question is: is halting problem computable by program P for all other programs used as input but P itself? In other words: is halting problem not computable only in this special case or the proof is more general and I'm missing something? | There are algorithms to show that certain classes of programs do or don't halt. For example,
* It is possible to determine algorithmically whether a program that models a finite-state machine halts.
* It is possible to determine algorithmically whether a linear-bounded Turing machine halts
* If you know what complexity class a program is in, then you know that the program halts for finite inputs.
While there's no algorithm to determine if an arbitrary program halts, the majority of code is designed either to halt (like most subroutines) or to not halt (like an infinite loop to handle events), and it's possible to determine algorithmically which is which. In other words, you *can* have an algorithm that answers either "halts", "doesn't halt", or "I dunno", and such an algorithm can be designed to cover enough programs that it would be useful. |
I am going over some practice questions for the Major field exam and it asks:
A processor with a word-addressable memory has a two-way set-associative cache. A cache line is one word, so a cache entry contains a set of two words. If there are *M* words of memory and *C* cache entries, how many words of memory map to the same cache entry?
I am not sure how to even approach this problem since I am not too familiar with computer organization. Any help would be appreciated. Thanks! | The total cache size is C entries of two words each = 2C words.
The total memory is M words.
Assuming that the same number of words are mapped to each cache entry (and this is a very very safe assumption), there are M / (2C) words mapped to each entry.
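For a concrete (made-up) example: with $M = 2^{20}$ words of memory and $C = 2^{10}$ cache entries, $M/(2C) = 2^{20}/2^{11} = 2^{9} = 512$ words of memory map to each cache entry.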
There is another answer claiming this is the same question as in some test, with a different answer. Since the answer is wrong, I assume that the questions were not quite the same, but the link in that answer doesn't work.
PS. The number of words of memory mapped to each cache line is usually quite irrelevant. It's a measurement that changes if you double, quadruple or halve the memory size, which is all irrelevant to the cache performance. |
A famous aphorism by cosmologist Martin Rees(\*) goes *"absence of evidence is not evidence of absence"*. On the other hand, quoting Wikipedia:
>
> In carefully designed scientific experiments, even null results can be evidence of absence. For instance, a hypothesis may be falsified if a vital predicted observation is not found empirically. (At this point, the underlying hypothesis may be rejected or revised and sometimes, additional ad hoc explanations may even be warranted.) Whether the scientific community will accept a null result as evidence of absence depends on many factors, including the detection power of the applied methods, the confidence of the inference, as well as confirmation bias within the community.
>
>
>
Therefore, for the sake of scientific progress, we end up accepting the absence of evidence as evidence of absence. This is also at the heart of two very famous analogies, namely [Russell's teapot](https://en.wikipedia.org/wiki/Russell%27s_teapot) and [Carl Sagan's Dragon in the garage](https://rationalwiki.org/wiki/The_Dragon_in_My_Garage).
**My question is:** how can we formally justify, based on Bayesian probability theory, that absence of evidence can legitimately be used as evidence of absence? Under which conditions is that true? (the answer is expected to depend on the specific details of the problem such as the model we assume, the information gain provided by our observations given the model, or the prior probabilities of the competing hypotheses involved).
(\*) the origin of the aphorism seems to be much older, see e.g. [this](https://quoteinvestigator.com/2019/09/17/absence/). | There's an important point missing here, but it's not strictly speaking a statistical one.
Cosmologists can't run experiments. Absence of evidence in cosmology means there's no evidence available to us here on or near earth, observing the cosmos through instruments.
Experimental scientists have a lot more freedom to generate data. We could have an absence of evidence because no one has run the appropriate experiment yet. That isn't evidence of absence. We could also have it because the appropriate experiment was run, which should have produced evidence if the phenomenon in question was real, but it didn't. This is evidence for absence. This is the idea formalised by the more mathematical answers here, in one form or another. |
Basically, I using an algorithm called 'miranda' to look at miRNA targets and it only runs on a single thread. It compares everything in one file against everything in another file, produces a file as an output and runs off of the command line in terminal. The process took roughly 20 hours to create the output file.
I was advised by my supervisor that if i split one of the files up into say 4 equally sized parts, and ran them in four separate terminal windows this would decrease the overall time it took for the process to be completed.
I found that when I was using a single terminal window, the process would take up about 100-120% of the CPU. However, when running four terminal windows, each individual process only takes between 30-40% of the CPU.
How much effect does splitting the file up like this have in the overall time it takes to run the process? Although I split it across four threads, will the effect only be an increase in speed of about 1.5 times? | What you describe (split job into parts, total CPU usage stays the same) indicates that your task is (or gets) I/O (or perhaps RAM) bound. Try splitting among several machines, not just several processes on the same machine. If the program you are using comes from somewhere else, see if there are newer versions, bug reports on performance, or perhaps tricks and suggestions, on its webpage or user sites.
Use the tools on your operating system to see what the bottleneck really is. As you don't say what operating system this is, I can't give concrete suggestions. Perhaps look into unix.stackexchange.com, superuser.com, or serverfault.com (all part of the Stack Exchange network). |
In my previous question ( [Can Turing machines be converted into equivalent Lambda Calculus expressions with a systematic approach?](https://cs.stackexchange.com/questions/48622/is-there-an-algorithm-for-converting-turing-machines-into-equivalent-lambda-expr) ), I got the answer that it is indeed possible.
And as I have read before, every program written in any programming language is convertible to a Turing machine. And of course, since there are no side effects and no order in calculating a lambda expression, parallelization is infinitely possible, and the computation can be broken down so that each lambda function is computed on a separate machine.
So with having these three facts in mind, An interesting question comes to mind. Since every program written in every programming language has an equivalent Turing machine, Turing machines are convertible to Lambda Calculus expression through an algorithm, and Lambda expressions are infinitely parallelizable, can every program be parallelized automatically and infinitely?
**EDIT**: I think I have to clear up one thing. By infinitely parallelizing, I mean parallelizing up to the point where it benefits us, so arguments about the size of the parallelization are not valid. For example, with five CPU cores, one can utilize all of the cores with this approach. | If you're working in the strict lambda calculus, everything can be automatically parallelized. In particular, when evaluating a function application, the function and the argument can always be evaluated in parallel.
However, it cannot be infinitely parallelized. There are inherent data dependencies: the result of a function application can't be determined until both the argument and the function have been evaluated, meaning that you need to wait for your threads to both finish, then synchronize.
This is still relevant with your clarified definition of infinitely. For example, if you have 5 processors, it's possible that a particular program can only ever use 4 processors, because of the data dependencies.
Moreover, while this is automatic, it is not "performance for free." In practice, there is non-trivial overhead to creating and synchronizing threads. Moreover, it's difficult to do this in a way that scales only to the current number of processors: if you have 5 cores, the automatic parallelization might generate 6 threads, and in general, it's not possible to know at compile-time how many threads will be active at a given time.
So, you can automatically make a program that runs massively parallel, but with the current state of affairs, it will likely be slower than your original.
It's also worth mentioning that, in practice, this becomes difficult with shared access to resources and IO. For example, a real world program might write to a disk, which can cause problems if done in parallel without control. This is something that can't be modeled by the standard lambda calculus. |
Will word2vec fail if sentences contain only similar words, or in other words, if the window size is equal to the sentence size? I suppose this question boils down to whether word2vec considers words from other sentences as negative samples, or only words from the same sentence but outside of the window. | A good method for identifying mislabeled data is [Confident Learning](https://arxiv.org/abs/1911.00068). It can use predictions from any trained classifier to automatically identify which data is incorrectly labeled. Since Confident Learning directly estimates which datapoints have label errors, it can also estimate what portion of these labels were incorrectly labeled by the curator.
This approach can work for most types of classification data (image, text, tabular, audio, etc) and both binary or multi-class applications. This method was previously used to discover tons of [label errors in many major ML benchmark datasets](https://labelerrors.com/).
Intuitively, a baseline solution could be flagging any example where the classifier's prediction differs from the given label. However this baseline performs poorly if the classifier makes mistakes (something typically inevitable in practice). Confident Learning also accounts for the classifier's confidence-level in each prediction and its propensity to predict certain classes (eg. some classifiers may incorrectly predict class A overly much due to a training shortcoming), in a theoretically principled way that ensures one can still identify most label errors even with an imperfect classifier. |
I appreciate the usefulness of the bootstrap in obtaining uncertainty estimates, but one thing that's always bothered me about it is that the distribution corresponding to those estimates is the distribution defined by the sample. In general, it seems like a bad idea to believe that our sample frequencies look exactly like the underlying distribution, so why is it sound/acceptable to derive uncertainty estimates based on a distribution where the sample frequencies define the underlying distribution?
On the other hand, this may be no worse (possibly better) than other distributional assumptions we typically make, but I'd still like to understand the justification a bit better. | There are several ways that one can conceivably apply the bootstrap. The two most basic approaches are what are deemed the "nonparametric" and "parametric" bootstrap. The second one assumes that the model you're using is (essentially) correct.
Let's focus on the first one. We'll assume that you have a random sample $X\_1, X\_2, \ldots, X\_n$ distributed according to the distribution function $F$. (Assuming otherwise requires modified approaches.) Let $\hat{F}\_n(x) = n^{-1} \sum\_{i=1}^n \mathbf{1}(X\_i \leq x)$ be the empirical cumulative distribution function. Much of the motivation for the bootstrap comes from a couple of facts.
**Dvoretzky–Kiefer–Wolfowitz inequality**
$$
\renewcommand{\Pr}{\mathbb{P}}
\Pr\big( \textstyle\sup\_{x \in \mathbb{R}} \,|\hat{F}\_n(x) - F(x)| > \varepsilon \big) \leq 2 e^{-2n \varepsilon^2} \> .
$$
What this shows is that the empirical distribution function converges *uniformly* to the true distribution function *exponentially fast* in probability. Indeed, this inequality coupled with the Borel–Cantelli lemma shows immediately that $\sup\_{x \in \mathbb{R}} \,|\hat{F}\_n(x) - F(x)| \to 0$ almost surely.
There are no additional conditions on the form of $F$ in order to guarantee this convergence.
Heuristically, then, if we are interested in some functional $T(F)$ of the distribution function that is *smooth*, then we expect $T(\hat{F}\_n)$ to be close to $T(F)$.
**(Pointwise) Unbiasedness of $\hat{F}\_n(x)$**
By simple linearity of expectation and the definition of $\hat{F}\_n(x)$, for each $x \in \mathbb{R}$,
$$
\newcommand{\e}{\mathbb{E}}
\e\_F \hat{F}\_n(x) = F(x) \>.
$$
Suppose we are interested in the mean $\mu = T(F)$. Then the unbiasedness of the empirical measure extends to the unbiasedness of linear functionals of the empirical measure. So,
$$
\e\_F T(\hat{F}\_n) = \e\_F \bar{X}\_n = \mu = T(F) \> .
$$
So $T(\hat{F}\_n)$ is correct on average and since $\hat{F\_n}$ is rapidly approaching $F$, then (heuristically), $T(\hat{F}\_n)$ rapidly approaches $T(F)$.
To construct a confidence interval (*which is, essentially, what the bootstrap is all about*), we can use the central limit theorem, the consistency of empirical quantiles and the delta method as tools to move from simple linear functionals to more complicated statistics of interest.
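As a minimal illustration (a sketch in R, with an arbitrary sample), the nonparametric bootstrap percentile interval for the mean looks like this:

```
set.seed(1)
x <- rexp(50)                                   # a sample from some unknown F
boot_means <- replicate(2000,                   # resample from the empirical distribution F_hat_n
                        mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))           # percentile 95% CI for T(F) = mean
```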
Good references are
1. B. Efron, [Bootstrap methods: Another look at the jackknife](http://projecteuclid.org/euclid.aos/1176344552), *Ann. Stat.*, vol. 7, no. 1, 1–26.
2. B. Efron and R. Tibshirani, *[An Introduction to the Bootstrap](http://rads.stackoverflow.com/amzn/click/0412042312)*, Chapman–Hall, 1994.
3. G. A. Young and R. L. Smith, *[Essentials of Statistical Inference](http://rads.stackoverflow.com/amzn/click/0521548667)*, Cambridge University Press, 2005, **Chapter 11**.
4. A. W. van der Vaart, *[Asymptotic Statistics](http://rads.stackoverflow.com/amzn/click/0521784506)*, Cambridge University Press, 1998, **Chapter 23**.
5. P. Bickel and D. Freedman, [Some asymptotic theory for the bootstrap](http://projecteuclid.org/euclid.aos/1176345637). *Ann. Stat.*, vol. 9, no. 6 (1981), 1196–1217. |
When we say that in a random graph we add an edge with a certain fixed probability, what do we actually mean?
For example, if the probability is 0.5, does that mean that we can just add two edges to the graph, because 0.5+0.5=1? | Suppose you wish to generate the random graph $G(n,p)$, that is, the graph with $n$ vertices where each edge is added with probability $p$.
Suppose you have a coin that gives tails with probability $p$ and heads with probability $1-p.$
Then what you do is take $\{1,...,n\}$ to be the vertex set of your graph, and for each pair $(i,j) \in { \{1,\ldots,n\} \choose 2}$ you flip your coin. If it comes up tails, you add the edge $(i,j)$ to your graph; otherwise you don't. |
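A short sketch in R of exactly this coin-flipping construction (the choice of $n=10$, $p=0.5$ is arbitrary):

```
sample_gnp <- function(n, p) {
  pairs <- t(combn(n, 2))                  # every pair (i, j) of vertices
  keep  <- runif(nrow(pairs)) < p          # one biased coin flip per pair
  pairs[keep, , drop = FALSE]              # edge list of the sampled graph
}
edges <- sample_gnp(10, 0.5)
nrow(edges)   # on average choose(10, 2) * 0.5 = 22.5 edges -- not two edges
```

So with $p = 0.5$ each possible edge gets its own coin flip, and you typically end up with about half of all ${n \choose 2}$ possible edges, not two edges.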
I'm new to data mining and I'm trying to train a decision tree against a data set which is highly unbalanced. However, I'm having problems with poor predictive accuracy.
The data consists of students studying courses, and the class variable is the course status which has two values - Withdrawn or Current.
* Age
* Ethnicity
* Gender
* Course
...
* Course Status
In the data set there are many more instances which are Current than Withdrawn. Withdrawn instances only accounting for 2% of the total instances.
I want to be able to build a model which can predict the probability that a person will withdraw in the future. However when testing the model against the training data, the accuracy of the model is terrible.
I've had similar issues with decision trees where the data is dominated by one or two classes.
What approach can I use to solve this problem and build a more accurate classifier? | This is an interesting and very frequent problem in classification - not just in decision trees but in virtually all classification algorithms.
As you found empirically, a training set consisting of different numbers of representatives from either class may result in a classifier that is biased towards the majority class. When applied to a test set that is similarly imbalanced, this classifier yields an optimistic accuracy estimate. In an extreme case, the classifier might assign every single test case to the majority class, thereby achieving an accuracy equal to the proportion of test cases belonging to the majority class. This is a well-known phenomenon in binary classification (and it extends naturally to multi-class settings).
This is an important issue, because an imbalanced dataset may lead to inflated performance estimates. This in turn may lead to false conclusions about the significance with which the algorithm has performed better than chance.
The machine-learning literature on this topic has essentially developed three solution strategies.
1. You can restore balance on the training set by *undersampling* the large class or by *oversampling* the small class, to prevent bias from arising in the first place.
2. Alternatively, you can modify the costs of misclassification, as noted in a previous response, again to prevent bias.
3. An additional safeguard is to replace the accuracy by the so-called [balanced accuracy](http://en.wikipedia.org/wiki/Accuracy_and_precision#In_binary_classification). It is defined as the arithmetic mean of the class-specific accuracies, $\phi := \frac{1}{2}\left(\pi^+ + \pi^-\right),$ where $\pi^+$ and $\pi^-$ represent the accuracy obtained on positive and negative examples, respectively. If the classifier performs equally well on either class, this term reduces to the conventional accuracy (i.e., the number of correct predictions divided by the total number of predictions). In contrast, if the conventional accuracy is above chance *only* because the classifier takes advantage of an imbalanced test set, then the balanced accuracy, as appropriate, will drop to chance (see sketch below).
![Accuracy vs. balanced accuracy](https://i.stack.imgur.com/qarV0.png)
I would recommend considering at least two of the above approaches in conjunction. For example, you could oversample your minority class to prevent your classifier from acquiring a bias in favour of the majority class. Following this, when evaluating the performance of your classifier, you could replace the accuracy by the balanced accuracy. The two approaches are complementary. When applied together, they should help you both prevent your original problem and avoid false conclusions following from it.
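For concreteness, here is a small sketch in R of the balanced accuracy from point 3 above (the toy labels are made up):

```
balanced_accuracy <- function(truth, pred) {
  sens <- mean(pred[truth == 1] == 1)   # accuracy on positives (pi+)
  spec <- mean(pred[truth == 0] == 0)   # accuracy on negatives (pi-)
  (sens + spec) / 2
}
truth <- c(rep(1, 98), rep(0, 2))       # 98% majority class
pred  <- rep(1, 100)                    # classifier that always predicts the majority
mean(pred == truth)                     # conventional accuracy: 0.98
balanced_accuracy(truth, pred)          # balanced accuracy: 0.5, i.e. chance level
```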
I would be happy to post some additional references to the literature if you would like to follow up on this. |
I'm wondering if there is a standard way of measuring the "sortedness" of an array? Would an array which has the median number of possible inversions be considered maximally unsorted? By that I mean it's basically as far as possible from being either sorted or reverse sorted. | No, it depends on your application. The measures of sortedness are often refered to as *measures of disorder*, which are functions from $N^{<N}$ to $\mathbb{R}$, where $N^{<N}$ is the collection of all finite sequences of distinct nonnegative integers. The survey by Estivill-Castro and Wood [1] lists and discusses 11 different measures of disorder in the context of adaptive sorting algorithms.
The number of inversions might work for some cases, but is sometimes insufficient. An example given in [1] is the sequence
$$\langle \lfloor n/2 \rfloor + 1, \lfloor n/2 \rfloor + 2, \ldots, n, 1, \ldots, \lfloor n/2 \rfloor \rangle$$
that has a quadratic number of inversions, but only consists of two ascending runs. It is nearly sorted, but this is not captured by inversions.
---
[1] [Estivill-Castro, Vladmir, and Derick Wood. "A survey of adaptive sorting algorithms." ACM Computing Surveys (CSUR) 24.4 (1992): 441-476.](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.8017) |
I have just started using mixed effect models, and I apologize if my question seems intuitive to most.
If there are between subject regressors in the model, is it still acceptable to include random slopes?
Not sure if this question even makes sense so let me contextualize it:
If the treatment in my study is repeated measure, but I also want to look at whether the subject's extraversion has an effect on the outcome variable, is it acceptable to include both extraversion and random intercepts/slopes in the same model?
If we are already using the random intercept to capture the subject to subject variation in the outcome, would it wash out the effect the covariate might have on the outcome? | This is discussed here:
<https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3881361/>
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 10.1016/j.jml.2012.11.001. <http://doi.org/10.1016/j.jml.2012.11.001>
(See the section: "Random effects in LMEMs and ANOVA: The same principles apply")
A model specifying random slopes for a between subjects variable would be unidentifiable. The data do not contain the information needed to estimate random slope variance, because each subject belongs to one and only one condition of the manipulation. Without observing subjects across conditions, variability by subject in the effect of the manipulation cannot be distinguished from residual error. |
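In `lme4` syntax (variable names here are hypothetical), the identifiable analogue of the model in the question keeps a random slope only for the repeated-measures treatment, while the between-subject extraversion score enters as a fixed effect:

```
library(lme4)
# 'treatment' varies within subject (repeated measures);
# 'extraversion' is measured once per subject (between-subject)
fit <- lmer(outcome ~ treatment * extraversion +  # fixed effects, incl. the between-subject covariate
              (1 + treatment | subject),          # random intercept and random slope for treatment only
            data = dat)
```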
Hello, I'm having trouble understanding how an intersection/union of regular languages can be regular in one case and non-regular in another.
Can someone please give me some good examples? | For every word $w$, there's a language $\{ w\}$, which is regular, and $\Sigma^\* \setminus \{ w\}$, which is also regular.
But, we can express every language (regular or not) $L$ as an infinite union: $L = \bigcup\_{w \in L} \{ w \}$, which is an infinite union of regular languages.
For intersection, you do the opposite:
$L = \bigcap\_{w \not \in L} (\Sigma^\* \setminus \{w \})$.
So, we know that there are regular languages, and non-regular languages, and they can all be expressed as infinite unions or intersections of regular languages. |
I have a sample $X$ and two normal distributions $\mathcal{N}\_1$ and $\mathcal{N}\_2$. I would like to determine from which of these distributions $X$ was more likely sampled.
However, $p(x | \mathcal{N}\_1)$ and $p(x | \mathcal{N}\_2)$ are both $0$ as the normal distribution is continuous. A sneaky (but maybe wrong?) work around would be to define a small $\epsilon$ and integrating from $x-\epsilon$ to $x+\epsilon$ under both distributions and using that as the respective probability of generating the sample $X$.
Is this a correct approach or should I be doing something else? | The account of this "downhill simplex algorithm" in the original versions of *Numerical Recipes* is particularly lucid and helpful. I will therefore quote relevant parts of it. Here is the background:
>
> In one-dimensional minimization, it was possible to bracket a minimum... . Alas! There is no analogous procedure in multidimensional space. ... The best we can do is give our algorithm a starting guess; that is, an $N$-vector of independent variables as the first point to try. The algorithm is then supposed to make its own way downhill through the unimaginable complexity of an $N$-dimensional topography until it encounters an (at least local) minimum.
>
>
> The downhill simplex method must be started not just with a single point, but with $N+1$ points, defining an initial simplex. [You can take these points to be an initial starting point $P\_0$ along with] $$P\_i = P\_0 + \lambda e\_i\tag{10.4.1}$$ where the $e\_i$'s are $N$ unit vectors and where $\lambda$ is a constant which is your guess of the problem's characteristic length scale. ...
>
>
> Most steps just [move] the point of the simplex where the function is largest ("highest point") through the opposite face of the simplex to a lower point. ...
>
>
>
**Now for the issue at hand, terminating the algorithm.** Note the generality of the account: the authors provide intuitive and useful advice for terminating *any* multidimensional optimizer and then show specifically how it applies to this particular algorithm. The first paragraph answers the question before us:
>
> Termination criteria can be delicate ... . We typically can identify one "cycle" or "step" of our multidimensional algorithm. It is then possible to terminate when the vector distance moved in that step is fractionally smaller in magnitude than some tolerance `TOL`. Alternatively, we could require that the decrease in the function value in the terminating step be fractionally smaller than some tolerance `FTOL`. ...
>
>
> Note well that either of the above criteria might be fooled by a single anomalous step that, for one reason or another, failed to get anywhere. Therefore, it is frequently a good idea to *restart* a multidimensional minimization routine at a point where it claims to have found a minimum. For this restart, you should reinitialize any ancillary input quantities. In the downhill simplex method, for example, you should reinitialize $N$ of the $N+1$ vertices of the simplex again by equation $(10.4.1)$, with $P\_0$ being one of the vertices of the claimed minimum.
>
>
> Restarts should never be very expensive; your algorithm did, after all, converge to the restart point once, and now you are starting the algorithm already there.
>
>
>
[Pages 290-292.]
The code accompanying this text in *Numerical Recipes* clarifies the meaning of "fractionally smaller": the difference between values $x$ and $y$ (either values of the argument or values of the function) is "fractionally smaller" than a threshold $T\gt 0$ when
$$\frac{|x - y|}{f(x,y)} = 2\frac{|x-y|}{|x| + |y|} \lt T\tag{1}$$
with $f(x,y) = (|x|+|y|)/2$.
The left hand side of $(1)$ is sometimes known as the "relative absolute difference." In some fields it is expressed as a percent, where it is called the "relative percent error." See the Wikipedia article on [Relative change and difference](https://en.wikipedia.org/wiki/Relative_change_and_difference) for more options and terminology.
### Reference
William H. Press *et al.*, *Numerical Recipes: The Art of Scientific Computing.* Cambridge University Press (1986). Visit <http://numerical.recipes/> for the latest editions. |
In the year 2000, Judea Pearl published *Causality*. What controversies surround this work? What are its major criticisms? | Reading answers and comments I feel the opportunity to add something.
The accepted answer, by rje42, is focused on DAG’s and non-parametric systems; strongly related concepts. Now, capabilities and limitations of these tools can be argued, however we have to say that linear SEMs are part of the Theory presented in Pearl manual (2000 or 2009). Pearl underscores limitations of linear systems but they are part of the Theory presented.
The comment of Paul seems crucial to me: “*The whole point of Pearl's work is that different causal models can generate the same probabilistic model and hence the same-looking data. So it's not enough to have a probabilistic model, you have to base your analysis and causal interpretation on the full DAG to get reliable results*.” Let me say that the last phrase can be rewritten as: **so it's not enough to have a probabilistic model, you need a structural causal equation/model (SCM)**. Pearl asked us to keep in mind a **demarcation line between statistical and structural/causal concepts**. The former alone can never be enough for proper causal inference; we need the latter too, and here lies the root of most problems. In my opinion this clear distinction, and his defense of it, represent the most important merit of Pearl.
Moreover Pearl suggests some tools such as: *DAG, d-separation, do-operator, backdoor and front door criterion*, among others.
All of them are important, and express his theory, but all come from the demarcation line mentioned above, and all help us to work according to it. Put differently, it is not so tremendously relevant to argue the pros and cons of one specific tool; it is rather about the necessity of the demarcation line. If the demarcation line disappears, all of Pearl's theory goes down, or, at best, adds just a bit of language to what we already have. However, this seems to me an unsustainable position. If some authors today still seriously argue so, please give me some reference about it.
I'm not yet expert enough to challenge the capability of all these tools, but they seem clear to me, and, until now, it seems to me that they work. I come from the econometrics side and, about the tools therein, I think the opposite. I can say that econometrics is very widespread, very useful, very practical, very empirical, very challenging, and a very well-regarded subject; and one of its most interesting topics is *causality*. In econometrics some causal issues can be fruitfully addressed with RCT tools. However, unfortunately, we can show that the econometrics literature has too often addressed causal problems improperly. In short, this happened due to theoretical flaws. The dimensions of this problem emerge in their full width in:
*Regression and Causation: A Critical Examination of Six Econometrics Textbooks* - Chen and Pearl (2013) and
*Trygve Haavelmo and the Emergence of causal calculus* - Pearl; Econometric Theory (2015)
In these related discussions some point are addressed:
[Under which assumptions a regression can be interpreted causally?](https://stats.stackexchange.com/questions/493211/under-which-assumptions-a-regression-can-be-interpreted-causally?noredirect=1&lq=1)
[Difference Between Simultaneous Equation Model and Structural Equation Model](https://stats.stackexchange.com/questions/63417/difference-between-simultaneous-equation-model-and-structural-equation-model/400279#400279)
I don't know whether the "equilibrium problems" invoked by Dimitriy V. Masterov can be addressed properly with Pearl's SCMs, but from here:
*Eight Myths About Causality and Structural Equation Models* - Handbook of Causal Analysis for Social Research, Springer (2013)
it emerges that some frequently invoked limitations are false.
Finally, the argument of Matt seems to me not relevant, but not because of the "citation evidence" argued by Neil G. In two words, Matt's point is
“Pearl's theory can be good for itself but not for the purpose of practice”.
This seems to me a definitely flawed argument. The matter of fact is that Pearl presented a theory. So, it suffices to mention the motto “nothing can be more practical and useful than a good theory”. It is obvious that the examples in the book are as simple as possible; good didactic practice demands this. Making things more complicated is always possible and not difficult; on the other hand, proper simplifications are usually hard to make. The possibility of facing simple problems, or of rendering them simpler, seems to me strong evidence in favor of Pearl's theory.
That said, the fact that no real issues are solved by Pearl's theory (if it is true) is neither his responsibility nor the responsibility of his theory. He himself complains that professors and researchers haven't spent enough time on his theory and tools (and on causal inference in general). This fact could be justified only in the face of a clear theoretical flaw in Pearl's theory and the clear superiority of another one. It is curious to see that probably the opposite is true; note that Pearl argued that RCTs boil down to SCMs. The problem that Matt underscores is the responsibility of professors and researchers.
I think that in the future Pearl's Theory will become common in econometrics too. |
Monty had perfect knowledge of whether the Door had a goat behind it (or was empty). This fact allows Player to Double his success rate over time by switching “guesses” to the other Door.
What if Monty’s knowledge was less than perfect? What if sometimes the Prize truly WAS in the same Doorway as the Goat? But you could not see it until after you chose and opened YOUR door?
Can you please help me to understand how to calculate IF— and by how much — Player can improve his success when Monty’s accuracy rate is less than 100%?
For example: what if Monty is wrong — on Average-50% of the time?
Can the Player STILL benefit from switching his Guess/Door?
I imagine that if Monty has less than 33.3% chance of being correct that Prize is NOT behind the Door, then Player's best option is to NOT Switch his Door choice.
Can you please provide me with a way to calculate the potential benefit of switching by inserting different Probabilities of Monty being Correct about the Prize NOT being behind the Door? I have nothing beyond High School math, and am 69 years old, so please be gentle.
---
Thanks for the insights and formulae provided. It appears that if "Fallible Monty" is only 66% accurate in predicting the absence of a Prize/Car, there is ZERO benefit to switching from your original choice of doors....because his 33% error rate is the default base rate for the Prize being behind ANY door. One assumes, though, that IF Monty gets better than 66% at predicting where there is NO PRIZE THEN switching derives greater Utility. I will be trying to apply this reasoning to a game where an "Expert" makes an "expert prediction" that one of three roughly equally probable options will be the correct one. I have little faith in the Expert being correct, and I am quite certain that his "hit rate" will be less than 33% - more like 15%. My conclusion from this will be that when the "Expert" chooses a different option from my choice, it will be beneficial for me to switch my choice to the other option. And, when the Expert chooses the *same* option as me, I am probably wrong for sure, and should change to one of the other two! ;-) | This should be a fairly simple variation of the problem (though I note your limited maths background, so I guess that is relative). I would suggest that you first try to determine the solution *conditional on* whether Monty is infallible, or fully fallible. The first case is just the ordinary Monty Hall problem, so no work required there. In the second case, you would treat the door he picks as being random over all the doors, including the door with the prize (i.e., he might still pick a door with no prize, but this is now random). If you can calculate the probability of a win in each of these cases then you can use the [law of total probability](https://en.wikipedia.org/wiki/Law_of_total_probability) to determine the relevant win probabilities in the case where Monty has some specified level of fallibility (specified by a probability that he is infallible versus fully fallible). |
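If you prefer to see it numerically rather than algebraically, here is a rough simulation sketch in R. The assumption (one of several possible readings of "fallible") is that with probability `p_infallible` Monty knowingly opens an empty door, and otherwise he opens one of the two unchosen doors at random; games in which he accidentally reveals the prize are discarded:

```
simulate_monty <- function(p_infallible, n = 1e5) {
  wins_switch <- 0; wins_stay <- 0; valid <- 0
  for (i in seq_len(n)) {
    prize  <- sample(3, 1)
    choice <- sample(3, 1)
    others <- setdiff(1:3, choice)                 # the two doors Monty can open
    goats  <- setdiff(others, prize)
    opened <- if (runif(1) < p_infallible) {
      goats[sample.int(length(goats), 1)]          # knowing Monty: always an empty door
    } else {
      others[sample.int(2, 1)]                     # fallible Monty: a random unchosen door
    }
    if (opened == prize) next                      # prize revealed: game discarded
    valid <- valid + 1
    wins_stay   <- wins_stay   + (choice == prize)
    wins_switch <- wins_switch + (setdiff(others, opened) == prize)
  }
  c(switch = wins_switch / valid, stay = wins_stay / valid)
}
simulate_monty(1)   # perfect Monty: about 2/3 vs 1/3 -- always switch
simulate_monty(0)   # coin-flip Monty: about 1/2 vs 1/2 -- switching gains nothing
```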
The following seems to me to be relevant to [this question](https://cs.stackexchange.com/questions/18897/complexity-classes-that-are-closed-under-subtraction/112022?noredirect=1#comment239251_112022), but to me is an interesting exercise, especially since I have not formally worked with complexity before, but I want to learn more:
Suppose that A is NP-Complete, but B is in P. I claim that A\B is NP-Complete and B\A is NP-Complete as well. To see this, assume first that A\B is in P, and let X and Y be polynomial-time algorithms for B and for A\B, respectively. "Concatenating" X and Y as follows yields an algorithm Z for A:
Given L, test L using X;
if X outputs "yes", test using Y;
if Y yields "yes", output "no" and stop;
if X yields "no", output "no" and stop; output "yes" otherwise and stop.
This algorithm Z runs in polynomial time, because if the (polynomial time) complexity exponent of X is k and the (polynomial time) complexity exponent of Y is n, then this algorithm clearly has (polynomial time) complexity exponent m=max(k,n). This would provide proof that P=NP, so A\B is NP-Complete.
Now suppose that B\A is in P. This time, let Y' be a polynomial-time algorithm for B\A and let X be as above. We construct an algorithm Z' for A, as follows:
Given L, test L using X;
if X outputs "yes", test using Y';
if Y yields "no", output "yes" and stop;
if X yields "no", test using Y'; if Y yields "no", output "yes" and stop;
output "no" otherwise and stop.
This yields a polynomial-time algorithm for A, and so again, this would entail that P=NP, so B\A also is NP-Complete.
+++++++++End of Example++++++++++++
While I don't see anything wrong with the above at the moment, perhaps I have a mistake or complexity miscalculation? ...because for a while, as I was writing the second algorithm, I began to think it was odd and perhaps impossible that I can be right about B\A also being NP-Complete...
As I said, I'm somewhat new to this area, so feedback would be appreciated. | [Newton's method](https://en.wikipedia.org/wiki/Newton%27s_method) is pretty good for this. Specifically, a good strategy is to first calculate an approximation $x \approx \dfrac{1}{b}$ so that $ax$ is then a good approximation for $\dfrac{a}{b}$. To do this, we will use Newton's method to approximate a root of the function $f(x) = \dfrac{1}{x} - b$.
Newton's method asks us to recursively calculate approximations $x\_n$ given by the formula
$$x\_{n+1} = x\_n - \frac{f(x\_n)}{f'(x\_n)}$$
A priori, this looks like using division but since $f'(x) = \dfrac{-1}{x^2}$, the formula boils down to
$$x\_{n+1} = x\_n + x\_n^2\left(\frac{1}{x\_n} - b\right) = x\_n + (x\_n - bx\_n^2) = (2 - bx\_n)x\_n,\tag{$\*$}$$ which only involves addition and multiplication.
In order to ensure good convergence of this method, it helps to have $b$ very close to $1$. Of course, there are very few integers that are close to $1$. However, if $b$ is an $n$-bit integer then $b\_0 = \dfrac{b}{2^{n-1}}$ is between $1$ and $2$. Since the denominator is a power of $2$ this is really just a bit shift and not a division per se. Incorporating this in the formula ($\*$) we obtain the iteration
$$
x\_{n+1} = (2 - b\_0x\_n)x\_n = \left(2 - \frac{bx\_n}{2^{n-1}}\right)x\_n = \frac{(2^n - bx\_n)x\_n}{2^{n-1}}
\tag{$\dagger$}$$
which still involves no division. Note that this gives a method for approximating $1/b\_0$ but we can recover a good approximation to $\dfrac{1}{b} = \dfrac{1/b\_0}{2^{n-1}}$ by shifting.
Further analysis shows that, starting with $x\_0 = 0.75$, the absolute error $\varepsilon\_n = \left|x\_n - \dfrac{2^{n-1}}{b}\right|$ for $(\dagger)$ satisfies $\varepsilon\_{n+1} \leq \varepsilon\_n^2$. So the number of correct bits doubles at each step!
Let's illustrate by approximating $22/7$. Then $n=3$ and $b\_0 = 7/4 = 1.75$. Then $(\dagger)$ gives the approximations
$$\begin{aligned}
x\_0 &= 0.750\,000\,000 \\
x\_1 &= 0.515\,625\,000 \\
x\_2 &= 0.565\,979\,004 \\
x\_3 &= 0.571\,376\,600 \\
x\_4 &= 0.571\,428\,567 \\
x\_5 &= 0.571\,428\,571 \\
\end{aligned}$$
All the digits shown for $x\_5$ are exact for
$$
\frac{1}{b\_0} = \frac{4}{7} = 0.571\,428\,571\,428\,571\,428\,571\,428\ldots
$$
Thus
$$
\frac{22x\_5}{4} = 3.142\,857\,141
$$
is a good approximation to
$$
\frac{22}{7} = 3.142\,857\,142\,857\,142\,857\,142\,857\ldots
$$ |
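For completeness, a small sketch in R of the iteration $(\dagger)$; apart from the scaling by a power of two (which would be a bit shift in hardware), only multiplication and subtraction are used:

```
approx_divide <- function(a, b, iters = 6) {
  n  <- ceiling(log2(b + 1))     # number of bits of b, so b0 = b / 2^(n-1) lies in [1, 2)
  b0 <- b / 2^(n - 1)            # a shift, not a general division
  x  <- 0.75                     # starting guess for 1/b0
  for (k in seq_len(iters)) {
    x <- (2 - b0 * x) * x        # Newton step: multiplication and subtraction only
  }
  a * x / 2^(n - 1)              # undo the scaling -- again just a shift
}
approx_divide(22, 7)             # 3.142857...
```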
I use a variation of a 5-cross median filter on image data
on a small embedded system, i.e.
```
x
x x x
x
```
The algorithm is really simple: read 5 unsigned integer values, get the highest 2, do some calculations on those and write back the unsigned integer result.
What is nice is that the 5 integer input values are all in the range of 0-20. The calculated integer values are also in the 0-20 range!
Through profiling, I have figured out that getting the largest two numbers is the bottleneck so I want to speed this part up. What is the fastest way to perform this selection?
The current algorithm uses a 32 bit mask with 1 in the position given by the 5 numbers and a HW-supported CLZ function.
I should say that the CPU is a proprietary one, not available outside of my company. My compiler is GCC but tailor made for this CPU.
I have tried to figure out if I can use a lookup-table but I have failed to generate a key that I can use.
I have $21^5$ combinations for the input but order isn't important, i.e. `[5,0,0,0,5]` is the same as `[5,5,0,0,0]`.
It happens that the hash-function below produces a perfect hash without collisions!
```
def hash(x):
h = 0
for i in x:
h = 33*h+i
return h
```
But the hash is huge and there is simply not enough memory to use that.
Is there a better algorithm that I can use?
Is it possible to solve my problem using a lookup-table and generating a key? | In my [other answer](https://cs.stackexchange.com/a/39792/98) I suggest that
conditional jumps might be the main impediment to efficiency. As a consequence,
[sorting networks](https://en.wikipedia.org/wiki/Sorting_network) come to mind:
they are data agnostic, that is the same sequence of comparisons is executed no
matter the input, with only the swaps being conditional.
Of course, sorting may be too much work; we only need the biggest two numbers.
Lucky for us, *selection* networks have also been studied. Knuth tells us
that finding the two smallest numbers out of five² can be done with
$\hat{U}\_2(5) = 6$ comparisons [1, 5.3.4 ex 19] (and at most as many swaps).
The network he gives in the solutions (rewritten to zero-based arrays) is
$\qquad\displaystyle [0:4]\,[1:4]\,[0:3]\,[1:3]\,[0:2]\,[1:2]$
which implements -- after adjusting the direction of the comparisons -- in
pseudocode as
```
def selMax2(a : int[])
a.swap(0,4) if a[0] < a[4]
a.swap(1,4) if a[1] < a[4]
a.swap(0,3) if a[0] < a[3]
a.swap(1,3) if a[1] < a[3]
a.swap(0,2) if a[0] < a[2]
a.swap(1,2) if a[1] < a[2]
return (a[0], a[1])
end
```
Now, naive implementations still have conditional jumps (across the swap code).
Depending on your machine you can cirumvent them with conditional instructions,
though. x86 seems to be its usual mudpit self; ARM looks more promising since
apparently [most operations are conditional](https://en.wikipedia.org/wiki/Arm_instruction_set#Instruction_set)
in themselves. If I understand the [instructions](http://www.heyrick.co.uk/assembler/qfinder.html)
correctly, the first swap translates to this, assuming our array values have been
loaded to registers `R0` through `R4`:
```
CMP   R0, R4        ; compare a[0] and a[4]
MOVLT R5, R0        ; if a[0] < a[4]: save old a[0]
MOVLT R0, R4        ;                 a[0] := a[4]
MOVLT R4, R5        ;                 a[4] := old a[0]
```
Yes, yes, of course you can use [XOR swapping](http://en.wikipedia.org/wiki/XOR_swap_algorithm)
with [EOR](http://www.heyrick.co.uk/assembler/mov.html#eor).
I just hope your processor has this, or something similar. Of course, if you *build* the thing for this purpose, maybe you can get the network hard-wired on there?
This is probably (provably?) the best you can do in the classical realm, i.e.
without making use of the limited domain and performing wicked intra-word magicks.
---
1. [Sorting and Searching](http://www-cs-faculty.stanford.edu/~uno/taocp.html)
by Donald E. Knuth; *The Art of Computer Programming* Vol. 3 (2nd ed, 1998)
2. Note that this leaves the two selected elements unordered. Ordering them requires an extra comparison, that is $\hat{W}\_2(5) = 7$ many in total [1, p234 Table 1]. |
I have data from a school questionnaire. It contains 35 questions on various topics (influence of friends etc.).
Possible answers for the 35 questions are "definitely yes", "mostly yes", "mostly no" and "definitely no".
I did hierarchical clustering using `hclust` in R. Then I used `cutree` to cut the dendrogram.
How can I visualize the clusters from `cutree`? I wrote a function to export information about the clusters to CSV, but I want to display the information graphically.
Thanks | This is the most straightforward way to do this:
```
# Ward Hierarchical Clustering
d <- dist(mydata, method = "euclidean") # distance matrix
fit <- hclust(d, method="ward")
plot(fit) # display dendrogram
groups <- cutree(fit, k=5) # cut tree into 5 clusters
# draw dendrogram with red borders around the 5 clusters
rect.hclust(fit, k=5, border="red")
```
for more info you may want to check out this link:
<http://www.statmethods.net/advstats/cluster.html> |
Suppose you have a machine learning model that predicts if a given client is going to buy or not. The model has `N` features.
One of those features, `N[i]`, when present, strongly indicates that the client is going to buy.
There are 2 options:
1) If `N[i]` is present predict 1, if not present predict with the model.
2) Feed `N[i]` as another feature of the model and let it do it's job, trusting that it will detect the strong relation.
Which one would you pick, and when? What if the presence of `N[i]` is a **really strong** indicator that the client will buy, like **99.99% of the time**, would you leave it to the model to figure it out then? Why not?
EDIT:
Note that if the variable is present you know the client buys but if it's not you have no certainty (it's not like the weight in kg/lb example described in one of the answers.) | **Neither**.
Let's try to understand how machine learning works in **practice**. Machine learning is not something you run in isolation, you don't just take your raw data and feed everything into it. Instead, you would develop a **pipeline** where you process or transform your data. Machine learning or predictive modelling is simply part of the pipeline, usually after the filtering process. Not everything in the raw data would go into your model. Your outcome depends on how exactly the pipeline is implemented.
What does that mean for your example?
* The pipeline would simply use the "cheating" variable (if present) to make a prediction. No machine learning here, just some if statements in your code. Very standard programming.
* If the cheating variable is not present, feed the data to your trained model.
Thus, you should train a model without the cheating variable. This is not something statistics can tell you, but it's how the industry works. Maintaining a model with a cheating variable like your example costs money and complicates the modelling. |
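A minimal sketch in R of such a pipeline (all names here are hypothetical): the deterministic rule handles the "cheating" feature, and the model, trained without it, handles everything else.

```
# model trained without the cheat variable
model <- glm(buy ~ . - cheat_flag, data = train, family = binomial)

predict_buy <- function(new_data) {
  ifelse(new_data$cheat_flag == 1,
         1,                                                      # deterministic rule, no ML
         predict(model, newdata = new_data, type = "response"))  # otherwise use the model
}
```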
Consider an filesystem targeted at some embedded devices that does little more than store files in a hierarchical directory structure. This filesystem lacks many of the operations you may be used to in systems such as unix and Windows (for example, its access permissions are completely different and not tied to metadata stored in directories). This filesystem does not allow any kind of hard link or soft link, so every file has a unique name in a strict tree structure.
Is there any benefit to storing a link to the directory itself and to its parent in the on-disk data structure that represents a directory?
Most unix filesystems have `.` and `..` entries on disk. I wonder why they don't handle those at the VFS (generic filesystem driver) layer. Is this a historical artifact? Is there a good reason, and if so, which precisely, so I can determine whether it's relevant to my embedded system? | Having links to the parent directory makes good sense to me. If you didn't have them, you would always need to work with a whole list of directories. So, for example, `/home/svick/Documents/` would have to be represented as `{ /, /home/, /home/svick/, /home/svick/Documents }`. If you didn't do that, you wouldn't be able to find the parent directory at all (or it would be very expensive). This is not only inefficient, but also dangerous. If you have two such lists that overlap, they could easily desynchronize if you moved some directory.
On the other hand, if you have a reference to the parent directory, it's more efficient and safer.
I don't see any reason to actually have a link to the current directory. If you have a structure that represents some directory and you want to access that directory, using `.` is always completely unnecessary. Because of that, I would expect that the `.` link doesn't actually exist in the filesystem structure and is only virtual. |
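As a rough illustration of why the parent link helps (and why `.` can stay virtual), here is a small in-memory sketch in Python; the class and method names are made up for this example.

```python
# Sketch of a directory node that stores a reference to its parent.
# With the parent link, resolving ".." is a single pointer follow;
# "." never needs an on-disk entry, because the node already *is*
# the current directory.

class DirNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # None for the root
        self.children = {}            # name -> DirNode

    def mkdir(self, name):
        child = DirNode(name, parent=self)
        self.children[name] = child
        return child

    def resolve(self, component):
        if component == ".":
            return self                 # virtual: no stored entry needed
        if component == "..":
            return self.parent or self  # treat the root's parent as itself
        return self.children[component]

root = DirNode("/")
docs = root.mkdir("home").mkdir("svick").mkdir("Documents")
assert docs.resolve("..").name == "svick"
```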
Most of today's encryption, such as RSA, relies on integer factorization, which is not believed to be an NP-hard problem, but it belongs to BQP, which makes it vulnerable to quantum computers. I wonder why there has not been an encryption algorithm based on a known NP-hard problem. It sounds (at least in theory) like it would make a better encryption algorithm than one which is not proven to be NP-hard. | There have been.
One such example is [McEliece cryptosystem](http://en.wikipedia.org/wiki/McEliece_cryptosystem) which is based on hardness of decoding a linear code.
A second example is [NTRUEncrypt](http://en.wikipedia.org/wiki/NTRUEncrypt) which is based on the shortest vector problem which I believe is known to be NP-Hard.
Another is [Merkle-Hellman knapsack cryptosystem](http://en.wikipedia.org/wiki/Merkle%E2%80%93Hellman_knapsack_cryptosystem) which has been broken.
Note: I have no clue if the first two are broken/how good they are. All I know is that they exist, and I got those from doing a web search. |
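For a feel of how such a scheme works, here is a toy Python sketch of the (long-broken) Merkle-Hellman construction with small assumed key values; it is for illustration only and must never be used for real encryption.

```python
# Toy sketch of the (broken) Merkle-Hellman knapsack cryptosystem.
# Key values below are small assumed examples.

from math import gcd

# private key: superincreasing sequence w, modulus q > sum(w), r coprime to q
w = [2, 7, 11, 21, 42, 89, 180, 354]
q = 881
r = 588
assert q > sum(w) and gcd(r, q) == 1

# public key
b = [(r * wi) % q for wi in w]

def encrypt(bits):
    # ciphertext is the subset sum of public-key elements selected by the bits
    return sum(bi for bit, bi in zip(bits, b) if bit)

def decrypt(c):
    # multiply by r^{-1} mod q to recover the subset sum over w,
    # then solve it greedily (works because w is superincreasing)
    c2 = (c * pow(r, -1, q)) % q
    bits = []
    for wi in reversed(w):
        if wi <= c2:
            bits.append(1)
            c2 -= wi
        else:
            bits.append(0)
    return list(reversed(bits))

msg = [0, 1, 1, 0, 0, 1, 0, 1]
assert decrypt(encrypt(msg)) == msg
```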
It is not uncommon to see students starting their PhDs with only a limited background in mathematics and the formal aspects of computer science. Obviously it will be very difficult for such students to become theoretical computer scientists, but it would be good if they could become savvy with using formal methods and reading papers that contain formal methods.
>
> What is a good short-term path that starting PhD students could follow to gain the exposure required to get them reading papers involving formal methods and eventually writing papers that use such formal methods?
>
>
>
For context, I'm thinking mainly of Theory B and formal verification as the kinds of things that they should learn, but also classical TCS topics such as automata theory. | In the preface of his book “Mathematical Discovery, On Understanding, Learning, and Teaching Problems Solving” George Pólya writes:
>
> Solving problems is a practical art, like swimming, or skiing, or
> playing the piano: you can learn it only by imitation and practice.
> This book cannot offer you a magic key that opens all the doors and
> solves all the problems, but it offers you good examples for imitation
> and many opportunities for practice: if you wish to learn swimming you
> have to go into the water, and if you wish to become a problem solver
> you have to solve problems.
>
>
>
I think there is no short path, especially for reaching the state of writing papers. It requires practice, a lot of it.
Some pointers:
If “limited background in mathematics and the formal aspects” means “has never conceived a proof and written it down” then something like [this](http://cheng.staff.shef.ac.uk/proofguide/proofguide.pdf) might be a start.
If something on the [Theoretical Computer Science Cheat Sheet](http://www.tug.org/texshowcase/cheat.pdf) makes the student feel uneasy, then a refresher course of the according branch of mathematics would be advisable.
There are many sources for mathematical writing: The [lecture notes](http://tex.loria.fr/typographie/mathwriting.pdf) of the 1978 Stanford University CS209 course perhaps. Or [this](http://www.math.uh.edu/~tomforde/Books/Halmos-How-To-Write.pdf) article by Paul Halmos. |
Does having one larger L1 cache instead of separate L1 and L2 caches make computation faster? Also, will this make the CPU more expensive to make? | Whether a shallower memory hierarchy provides better performance depends on the workload, the microarchitecture, and the implementation technology.
A workload that has a high miss rate for a "conventionally-sized" L1 would, when run on such a processor, incur more overhead in transferring cache blocks from L2 to L1 and (if writeback) dirty blocks from L1 to L2. In addition, if access to L2 is only started once a miss in L1 has been determined, then the effective latency of L2 is larger (even if L1 in the shallow-hierarchy processor used the same memory array implementation as L2).
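To make that tradeoff concrete, here is a back-of-the-envelope average memory access time (AMAT) comparison in Python. Every latency and miss rate below is an assumption chosen only to show how the conclusion flips with the workload's L1 miss rate; it ignores bandwidth, threading, and the other factors discussed here.

```python
# Back-of-the-envelope AMAT comparison (all numbers are illustrative assumptions).

def amat_two_level(l1_hit, l1_mr, l2_hit, l2_local_mr, mem):
    # L2 is probed only after an L1 miss is detected (serial lookup).
    return l1_hit + l1_mr * (l2_hit + l2_local_mr * mem)

def amat_single_l1(hit, mr, mem):
    return hit + mr * mem

MEM = 200  # assumed memory latency in cycles

# Workload A: modest miss rate in the small L1 -> the deep hierarchy wins.
print(amat_two_level(3, 0.10, 12, 0.30, MEM))   # 3 + 0.10*(12 + 60) = 10.2
print(amat_single_l1(6, 0.03, MEM))             # 6 + 0.03*200       = 12.0

# Workload B: high miss rate in the small L1, same global miss rate
# -> the single large (slower) L1 wins.
print(amat_two_level(3, 0.30, 12, 0.10, MEM))   # 3 + 0.30*(12 + 20) = 12.6
print(amat_single_l1(6, 0.03, MEM))             # 6 + 0.03*200       = 12.0
```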
A microarchitecture suited to modest-sized L1 caches would be biased (relative to one suited to huge L1 caches) toward speed-demon rather than brainiac design, exploiting the lower latency of a cache hit. (Workload, implementation technology, and other factors influence other tradeoffs with respect to speed-demon vs. brainiac.)
If the targeted workload can benefit from extensive hardware threading, providing a larger (and more associative) L1 can be advantageous as the thread-level parallelism can hide greater L1 access latency.
If the implementation technology reduces the latency penalty of a larger cache, the variety of workloads where the tradeoffs (in miss handling overheads and other factors) weigh in favor of a shallower memory hierarchy will increase. For example, in the future it is conceivable that 3D integration might be used for L1 cache and the memory used for this might have relatively high cell access latency (for low power to avoid thermal issues associated with 3D integration), so reduced distance (from 3D integration) and slower memory cells could increase the incentive for a larger L1 cache.
These types of questions become even more complex when one considers that the definition of L1 cache can get somewhat less clear. For example, if what would conventionally be called L2 cache is accessed in parallel with L1 but with higher latency, is it an L2 cache or part of a non-uniform cache architecture L1, especially if some cache blocks are never allocated to the smaller portion of the cache (cache bypassing has been proposed as a mechanism to better utilize capacity and bandwidth). (Even the NUCA L2 cache proposals allowed for transfers between slow and fast portions of L2.) Way prediction can also introduce variable latency for L1; if the prediction mechanism included consideration of expected criticality (and such was used for allocation to near or distant memory arrays within a NUCA design), one might reasonably consider the far memory arrays part of the L1 cache even though their inherent access latency is greater.
(Itanium 2 did not even probe the small L1 data cache for floating-point register loads, so for floating-point data one could almost consider the L2 cache as an L1 cache.) |
Say 2% of the population dies before a drug is consumed and 1% of the population dies after the drug is consumed. Then a non-mathematician will say yes, the drug was effective. What better can we say instead?
How should I design this same experiment in a better way to say something statistically? My total population can be assumed to remain the same (say it is very large, so despite 2% dying it doesn't change much). But I cannot repeat this experiment so I cannot design a $t$ test around it. I don't even know how I can develop a CI if I cannot repeat this experiment. | This alternative acceptance probability is [Barker’s formula](https://arxiv.org/pdf/1709.07710.pdf) which got published in the Australian Journal of Physics at the beginning of Barker’s PhD at the University of Adelaide.
[![enter image description here](https://i.stack.imgur.com/yzRtK.png)](https://i.stack.imgur.com/yzRtK.png)
As shown in the above screenshot, the basis of Barker’s algorithm is indeed [Barker’s acceptance probability](https://www.math.kth.se/matstat/gru/sf2955/2017/material//E4.pdf), albeit written in a somewhat confusing way since the current value of the chain is kept if a Uniform variate is smaller than what is actually the rejection probability.
As in Metropolis et al. (1953), the analysis is made on a discretised (finite) space, building the Markov transition matrix, stating the detailed balance equation (called microscopic reversibility). Interestingly, while Barker acknowledges that there are other ways of assigning the transition probability, his is the “most rapid” in terms of mixing. And equally interestingly, he discusses the scale of the random walk in the [not-yet-called] Metropolis-within-Gibbs move as major, targeting 0.5 as the right acceptance rate, and suggesting to adapt this scale on the go.
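For illustration, here is a minimal Python sketch of a random-walk sampler using Barker's acceptance probability with a symmetric proposal, where the rule reduces to $\pi(y)/(\pi(x)+\pi(y))$; the standard-normal target and the step scale are assumptions made just for this example.

```python
# Random-walk sampler with Barker's acceptance rule pi(y) / (pi(x) + pi(y)).
import math
import random

def target(x):
    return math.exp(-0.5 * x * x)   # unnormalised standard normal density

def barker_chain(n_steps, scale=1.0, x0=0.0):
    x = x0
    chain = [x]
    for _ in range(n_steps):
        y = x + random.gauss(0.0, scale)                     # symmetric proposal
        accept_prob = target(y) / (target(x) + target(y))    # Barker's rule
        if random.random() < accept_prob:
            x = y
        chain.append(x)
    return chain

samples = barker_chain(10_000)
print(sum(samples) / len(samples))   # should be near 0 for this target
```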
What's the distribution of $\bar{X}^{-1}$, where the $X\_i$ are iid continuous random variables, uniformly distributed? Can I use the CLT here? | In the absence of a response on the questions, I'll make some mention of both possibilities for the order of the mean and reciprocal, and discuss why the limits on the domain of the Uniform matter.
Let $X$ have a continuous uniform distribution on $[a,b]\,, b>a>0$
Let $Y = 1/X$.
Then $f\_Y(y) = \frac{1}{(b-a)y^2}$ for $y \in [1/b, 1/a]$.
$\text{E}(Y) = \frac{1}{(b-a)} \int\_a^b y^{-1} dy = \frac{\ln(b)-\ln(a)}{(b-a)}$
The mean doesn't exist if $a$ is not bounded above zero, given $b$ is positive. More generally, you need both limits on the same side of zero and both bounded away from it for the mean to exist.
If the mean doesn't exist, the CLT doesn't apply.
If the mean and variance exist ($b$ and $a$ on the same side of 0 and both bounded away from it), then the CLT should apply to $Y$, and $\sqrt{n}(\overline{Y}-\mu\_Y)$ should be asymptotically normal.
But what about $\overline{X}^{-1}$? Note that, again, if $b$ and $a$ are on the same side of 0 and both bounded away from it, then $\overline{X}$ will also lie between $a$ and $b$, on the same side of 0 and bounded away from it, and so its reciprocal will have a mean and variance. While the CLT will apply to $X$ (so $\sqrt{n}(\overline{X}-\mu\_X)$ would be asymptotically normal), here you take its reciprocal. At sufficiently large sample sizes, the reciprocal should also be approximately normal (see the [Delta method](http://en.wikipedia.org/wiki/Delta_method#Univariate_delta_method)).
However, if $b>0$ and $a = 0$ then the CLT applies to $X$ but the reciprocal has no mean or variance at any finite sample size.
<http://en.wikipedia.org/wiki/Reciprocal_distribution> |
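A quick simulation sketch (in Python, with assumed parameters) of the two regimes discussed above: the reciprocal of the mean behaves nicely when $a$ is bounded away from zero, while the mean of the reciprocal misbehaves when $a = 0$.

```python
# (i) X ~ Uniform(1, 3): 1/Xbar is approximately normal (delta method).
# (ii) X ~ Uniform(0, 3): Y = 1/X has no mean, so averages of Y never settle down.
import random
import statistics

def sample_mean_uniform(a, b, n):
    return sum(random.uniform(a, b) for _ in range(n)) / n

# (i) reciprocal of the mean, a bounded away from zero
recips = [1.0 / sample_mean_uniform(1.0, 3.0, 200) for _ in range(5000)]
print(statistics.mean(recips))    # close to 1/mu = 1/2
print(statistics.stdev(recips))   # small; shrinks like 1/sqrt(n)

# (ii) mean of the reciprocal, a = 0: E[1/X] does not exist, so the running
# average is dominated by occasional huge values of 1/X
ys = [1.0 / random.uniform(0.0, 3.0) for _ in range(200_000)]
print(statistics.mean(ys))        # unstable from run to run
```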
What is a pseudocode of try-catch? Is it this?
```
**Try** SOMETHING
If you succeed
Do nothing special
Else, if you didn't succeed
Do what's described in the **Catch** block
Anyway, either if you succeeded or failed
Do what's described in the **Finally** block (if there is one)
```
Update
------
I aim to find the pseudocode that will reflect the logic of every `try...catch` pattern in essentially any modern programming language (such as JavaScript, PHP, Python, etc.).
I might have found it already (the one above), and I basically ask for an affirmation. | The pseudocode for `try`-`catch` is
```
try:
〈some-code〉
catch:
〈exception-handling-code〉
```
My point is that *pseudocode* is not a well-defined concept but an informal way of displaying algorithms. You thus have the freedom to adapt the pseudocode so that it reflects the control-flow and other features of the computation that you would like to present. |
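For instance, one concrete Python rendering of the question's pattern (the file name is a made-up example, and `OSError` stands in for "something went wrong") could look like this:

```python
try:
    with open("config.txt") as handle:        # SOMETHING that may fail
        first_line = handle.readline()
        print("succeeded:", first_line)
except OSError as err:                        # the Catch block
    print("could not open file:", err)
finally:                                      # runs whether it failed or not
    print("done either way")
```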
I have two neural networks. If I take only weights (the activation functions for both are the same), is there a way to tell the percent similarity of these two networks? | You can try to estimate the similarity between two units, using their weights and the similarity matrix between units from which they receive inputs. This would lead to a process similar to back-propagation, but going from bottom to top. At first you estimate the similarity between each pair of 1st layer units, then you estimate the similarity between the second layer units, and so on.
In the end you will not have an exact answer. The best you can hope for is an estimate that is not too far from the truth.
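A rough NumPy sketch of that bottom-up estimate, assuming two fully-connected networks with identical layer shapes; the cosine-similarity recursion and the final averaging step are illustrative choices of mine, not a standard algorithm.

```python
import numpy as np

def cosine_matrix(A, B):
    # pairwise cosine similarity between rows of A and rows of B
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-12)
    B = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-12)
    return A @ B.T

def network_similarity(weights_a, weights_b):
    # weights_a, weights_b: lists of weight matrices of shape (n_out, n_in);
    # both networks must share the same input dimension.
    n_in = weights_a[0].shape[1]
    sim = np.eye(n_in)                 # the input units are identical in both nets
    for Wa, Wb in zip(weights_a, weights_b):
        # re-express net B's weights via the similarity of its input units to
        # net A's input units, then compare the layers unit by unit
        sim = cosine_matrix(Wa, Wb @ sim.T)
    # summarise: average best-match similarity over the top-layer units
    return float(np.mean(np.max(sim, axis=1)))

rng = np.random.default_rng(0)
net_a = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
net_b = [w + 0.05 * rng.normal(size=w.shape) for w in net_a]
print(network_similarity(net_a, net_b))   # near 1 for nearly identical nets
```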
A human child at age 2 needs around 5 instances of a car to be able to identify it with reasonable accuracy regardless of color, make, etc. When my son was 2, he was able to identify trams and trains, even though he had seen just a few. Since he usually confused one with the other, apparently his neural network was not trained enough, but still.
What is it that artificial neural networks are missing that prevents them from learning so much more quickly? Is transfer learning an answer? | As pointed out by others, the data-efficiency of artificial neural networks varies quite substantially, depending on the details. As a matter of fact, there are many so-called one-shot learning methods that can solve the task of labelling trams with quite good accuracy, using only a single labelled sample.
One way to do this is by so-called transfer learning: a network trained on other labels is usually very easy to adapt to new labels, since the hard work is in breaking down the low-level components of the image in a sensible way.
But we do not in fact need such labeled data to perform this task, much like babies don't need nearly as much labeled data as the neural networks you are thinking of do.
For instance, one such unsupervised method, which I have also successfully applied in other contexts, is to take an unlabeled set of images, randomly rotate them, and train a network to predict which side of the image is 'up'. Without knowing what the visible objects are, or what they are called, this forces the network to learn a tremendous amount of structure about the images; and this can form an excellent basis for much more data-efficient subsequent labeled learning.
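A tiny sketch of that rotation pretext task, generating (rotated image, rotation label) pairs from unlabeled images; the random arrays below stand in for real photos.

```python
import numpy as np

def make_rotation_batch(images, rng):
    # images: list of numpy arrays; returns rotated copies and labels 0..3,
    # where label k means the image was rotated by k * 90 degrees
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))
        xs.append(np.rot90(img, k))
        ys.append(k)
    return xs, ys

rng = np.random.default_rng(0)
dummy_images = [rng.random((32, 32, 3)) for _ in range(8)]   # stand-ins for real photos
xs, ys = make_rotation_batch(dummy_images, rng)
print(ys)   # the "free" labels a network can be trained to predict
```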
While it is true that artificial networks are quite different from real ones in probably meaningful ways, such as the absence of an obvious analogue of backpropagation, it is very probably true that real neural networks make use of the same tricks: trying to learn the structure in the data implied by some simple priors.
One other example which almost certainly plays a role in animals, and has also shown great promise in understanding video, is the assumption that the future should be predictable from the past. Just by starting from that assumption, you can teach a neural network a whole lot. Or on a philosophical level, I am inclined to believe that this assumption underlies almost everything we consider to be 'knowledge'.
I am not saying anything new here; but it is relatively new in the sense that these possibilities are too young to have found many applications yet, and have not yet percolated down to the textbook understanding of 'what an ANN can do'. So to answer the OP's question: ANNs have already closed much of the gap that you describe.