id | title | body | tags | label |
---|---|---|---|---|
2 | Does the 'difference' operation add expressiveness to a query language that already includes 'join'? | <p>The set difference operator (e.g., <code>EXCEPT</code> in some SQL variants) is one of the many fundamental operators of relational algebra. However, there are some databases that do not support the set difference operator directly, but which support <code>LEFT JOIN</code> (a kind of outer join), and in practice this can be used instead of a set difference operation to achieve the same effect.</p>

<p>Does this mean that the expressive power of a query language is the same even without the set difference operator, so long as the <code>LEFT JOIN</code> operator is maintained? How would one prove this fact?</p>
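
<p>To make the simulation concrete, here is the rewriting I have in mind, sketched in relational algebra (my own formulation, assuming rows are identified by a key attribute $k$ common to both relations, and writing $\bowtie^{L}$ for the left outer join):
$$R \setminus S \;=\; \pi_{R.*}\big(\sigma_{S.k \text{ IS NULL}}(R \bowtie^{L}_{R.k = S.k} S)\big)$$
Rows of $R$ with no partner in $S$ come back from the outer join padded with nulls, and the null test keeps exactly those.</p>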
 | database theory relational algebra finite model theory | 1 |
3 | Why is quicksort better than other sorting algorithms in practice? | <p>In a standard algorithms course we are taught that <strong>quicksort</strong> is <span class="math-container">$O(n \log n)$</span> on average and <span class="math-container">$O(n^2)$</span> in the worst case. At the same time, other sorting algorithms are studied which are <span class="math-container">$O(n \log n)$</span> in the worst case (like <strong>mergesort</strong> and <strong>heapsort</strong>), and even linear time in the best case (like <strong>bubblesort</strong>), although some of them (like mergesort) need additional memory.</p>
<p>After a quick glance at <a href="http://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_of_algorithms" rel="noreferrer">some more running times</a> it is natural to say that quicksort <strong>should not</strong> be as efficient as others.</p>
<p>Also, consider that students learn in basic programming courses that recursion is not really good in general because it could use too much memory, etc. Therefore (and even though this is not a real argument), this gives the idea that quicksort might not be really good because it is a recursive algorithm.</p>
<p><strong>Why, then, does quicksort outperform other sorting algorithms in practice?</strong> Does it have to do with the structure of <em>real-world data</em>? Does it have to do with the way memory works in computers? I know that some memories are way faster than others, but I don't know if that's the real reason for this counter-intuitive performance (when compared to theoretical estimates).</p>
<hr />
<p><strong>Update 1:</strong> a canonical answer is saying that the constants involved in the <span class="math-container">$O(n\log n)$</span> of the average case are smaller than the constants involved in other <span class="math-container">$O(n\log n)$</span> algorithms. However, I have yet to see a proper justification of this, with precise calculations instead of intuitive ideas only.</p>
<p>In any case, it seems like the real difference occurs, as some answers suggest, at memory level, where implementations take advantage of the internal structure of computers, using, for example, that cache memory is faster than RAM. The discussion is already interesting, but I'd still like to see more detail with respect to memory-management, since it appears that <em>the</em> answer has to do with it.</p>
<hr />
<p><strong>Update 2:</strong> There are several web pages offering a comparison of sorting algorithms, some fancier than others (most notably <a href="http://www.sorting-algorithms.com/" rel="noreferrer">sorting-algorithms.com</a>). Other than presenting a nice visual aid, this approach does not answer my question.</p>
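
<p>For reference, here is a minimal benchmark sketch of the kind of comparison I mean (Python; both implementations are naive and untuned, and the names and input size are only illustrative):</p>

<pre><code>import random, time

def quicksort(a):
    # Naive out-of-place quicksort; real implementations partition in place.
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

def mergesort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [random.random() for _ in range(100_000)]
for sort in (quicksort, mergesort):
    start = time.perf_counter()
    result = sort(data)
    elapsed = time.perf_counter() - start
    assert result == sorted(data)
    print(sort.__name__, elapsed)
</code></pre>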
 | algorithms sorting | 1 |
5 | Does cooperative scheduling suspend processes when they perform an I/O operation? | <p>Many operating systems references say that with cooperative (as opposed to preemptive) multitasking, a process keeps the CPU until it explicitly voluntarily suspends itself. If a running process performs an I/O request that cannot be immediately satisfied (e.g., requests a key stroke that is not yet available), does the scheduler suspend it, or does it really keep the CPU until the request can be serviced?</p>

<p>[Edited to replace "blocks on i/o" with "performs an I/O request that cannot be immediately satisfied."]</p>
 | operating systems process scheduling | 1 |
7 | Which method is preferred for storing large geometric objects in a quadtree? | <p>When placing geometric objects in a quadtree (or octree), you can place objects that are larger than a single node in a few ways:</p>

<ol>
<li>Placing the object's reference in every leaf node that it overlaps</li>
<li>Placing the object's reference in the deepest node in which it is fully contained</li>
<li>Both #1 and #2</li>
</ol>

<p>For example:</p>

<p><img src="https://i.stack.imgur.com/Z2Bj7.jpg" alt="enter image description here"></p>

<p>In this image, you could either place the circle in all four of the leaf nodes (method #1) or in just the root node (method #2) or both (method #3).</p>

<p>For the purposes of querying the quadtree, which method is more commonplace and why?</p>
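
<p>To make method #2 concrete, here is a rough Python sketch of the insertion logic I am picturing (the node layout, the <code>contains</code> test and the depth cap are assumptions of mine, not taken from any particular engine; it assumes the object's bounding box lies inside the root cell):</p>

<pre><code>class Node:
    def __init__(self, x, y, size):
        # Axis-aligned square cell with corner (x, y) and side length `size`.
        self.x, self.y, self.size = x, y, size
        self.children = None   # None for a leaf, else four sub-cells
        self.objects = []      # references stored at this node

    def contains(self, bbox):
        # bbox = (x, y, w, h); true iff the box fits entirely in this cell.
        bx, by, bw, bh = bbox
        return (self.x <= bx and self.y <= by and
                bx + bw <= self.x + self.size and
                by + bh <= self.y + self.size)

    def subdivide(self):
        h = self.size / 2
        self.children = [Node(self.x,     self.y,     h),
                         Node(self.x + h, self.y,     h),
                         Node(self.x,     self.y + h, h),
                         Node(self.x + h, self.y + h, h)]

def insert(node, obj, bbox, max_depth=8):
    # Method #2: descend while exactly one child fully contains the object.
    depth = 0
    while depth < max_depth:
        if node.children is None:
            node.subdivide()
        inside = [c for c in node.children if c.contains(bbox)]
        if len(inside) != 1:
            break              # the object straddles a split line: stop here
        node = inside[0]
        depth += 1
    node.objects.append(obj)
</code></pre>

<p>With this logic, the circle in the image would straddle the very first split and therefore stay at the root, matching method #2.</p>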
 | graphics data structures computational geometry | 0 |
11 | Generating Combinations from a set of pairs without repetition of elements | <p>I have a set of pairs. Each pair is of the form (x,y), where x and y are integers in the range <code>[0,n)</code>.</p>

<p>So, if n is 4, then I have the following pairs:</p>

<pre><code>(0,1) (0,2) (0,3)
(1,2) (1,3) 
(2,3) 
</code></pre>

<p>I already have the pairs. Now, I have to build a combination using <code>n/2</code> pairs such that none of the integers are repeated (in other words, each integer appears exactly once in the final combination). Below are examples of a valid and an invalid combination, for better understanding:</p>

<pre><code> 1. (0,1)(1,2) [Invalid as 3 does not occur anywhere]
 2. (0,2)(1,3) [Correct]
 3. (1,3)(0,2) [Same as 2]
</code></pre>

<p>Can someone suggest a way to generate all possible combinations, once I have the pairs?</p>
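
<p>For illustration, here is a brute-force recursive sketch of the kind of enumeration I mean (Python; the choice of always pairing the smallest unused integer first is my own way of avoiding duplicate orderings such as example 3):</p>

<pre><code>def matchings(elems):
    # Yield all perfect matchings of the sorted tuple `elems`.
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        # Pair the smallest remaining integer with each possible partner;
        # this canonical choice produces each combination exactly once.
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + m

n = 4
for m in matchings(tuple(range(n))):
    print(m)
# [(0, 1), (2, 3)]   [(0, 2), (1, 3)]   [(0, 3), (1, 2)]
</code></pre>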
 | algorithms | 1 |
14 | What is the significance of context-sensitive (Type 1) languages? | <p>Seeing that in the <a href="http://en.wikipedia.org/wiki/Chomsky_hierarchy">Chomsky Hierarchy</a> Type 3 languages can be recognised by a state machine with no external memory (i.e., a finite automaton), Type 2 by a state machine with a <em>single</em> stack (i.e. a push-down automaton) and Type 0 by a state machine with <em>two</em> stacks (or, equivalently, a tape, as is the case for Turing Machines), how do Type 1 languages fit into this picture? And what advantages does it bring to determine that a language is not only Type 0 but Type 1?</p>
 | formal languages applied theory computability automata formal grammars | 1 |
20 | Evaluating the average time complexity of a given bubblesort algorithm. | <p>Considering this pseudo-code of a bubblesort:</p>

<pre><code>FOR i := 0 TO arraylength(list) STEP 1 
 switched := false
 FOR j := 0 TO arraylength(list)-(i+1) STEP 1
 IF list[j] > list[j + 1] THEN
 switch(list,j,j+1)
 switched := true
 ENDIF
 NEXT
 IF switched = false THEN
 break
 ENDIF
NEXT
</code></pre>

<p>What would be the basic ideas I would have to keep in mind to evaluate the average time complexity? I have already calculated the worst and best cases, but I am stuck on how to evaluate the average complexity of the inner loop, in order to form the equation.</p>

<p>The worst case equation is:</p>

<p>$$
\sum_{i=0}^n \left(\sum_{j=0}^{n -(i+1)}O(1) + O(1)\right) = O(\frac{n^2}{2} + \frac{n}{2}) = O(n^2)
$$</p>

<p>in which the inner sigma represents the inner loop, and the outer sigma represents the outer loop. I think that I need to change both sigmas due to the "if-then-break"-clause, which might affect the outer sigma but also due to the if-clause in the inner loop, which will affect the actions done during a loop (4 actions + 1 comparison if true, else just 1 comparison).</p>

<p>For clarification on the term average time: this sorting algorithm will need different amounts of time on different lists (of the same length), as the algorithm might need more or fewer steps through/within the loops until the list is completely in order. I am trying to find a mathematical (non-statistical) way of evaluating the average number of rounds needed.</p>

<p>For this I assume every ordering to be equally likely.</p>
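
<p>A possibly useful fact for this (the standard counting argument, not a full derivation): the number of swaps bubblesort performs equals the number of inversions of the input, and in a uniformly random permutation of $n$ elements each of the $\binom{n}{2}$ pairs is inverted with probability $1/2$, so
$$E[\text{swaps}] = \frac{1}{2}\binom{n}{2} = \frac{n(n-1)}{4},$$
which already pins the average case to $\Theta(n^2)$; what remains is averaging the early-exit behaviour of the outer loop.</p>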
 | algorithms time complexity sorting average case | 1 |
27 | Clever memory management with constant time operations? | <p>Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:</p>

<ul>
<li>allocation of one block</li>
<li>freeing a previously allocated block which is not used anymore.</li>
</ul>

<p>Also, as a requirement, the memory management system is not allowed to move around currently allocated blocks: their index/address must remain unchanged.</p>

<p>The most naive memory management algorithm would increment a global counter (with initial value 0) and use its new value as the address for the next allocation.
However, this never allows the segment to be shortened when only a few allocated blocks remain.</p>

<p>Better approach: Keep the counter, but maintain a list of deallocated blocks (which can be done in constant time) and use it as a source for new allocations as long as it's not empty.</p>
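
<p>For reference, a minimal sketch of that counter-plus-free-list scheme (Python; the class layout is only illustrative):</p>

<pre><code>class Allocator:
    def __init__(self):
        self.top = 0    # next never-used address
        self.free = []  # stack of freed addresses; O(1) push and pop

    def alloc(self):
        if self.free:
            return self.free.pop()  # reuse a freed block first
        addr = self.top
        self.top += 1
        return addr

    def dealloc(self, addr):
        self.free.append(addr)
        # `top` never decreases, which is exactly the remaining weakness:
        # the segment cannot shrink.
</code></pre>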

<p>What next? Is there something clever that can be done, still with constraints of constant time allocation and deallocation, that would keep the memory segment as short as possible?</p>

<p>(A goal could be to track the currently non-allocated block with the smallest address, but it doesn't seem to be feasible in constant time…)</p>
 | time complexity memory allocation operating systems | 1 |
33 | Rice's theorem for non-semantic properties | <p><a href="http://en.wikipedia.org/wiki/Rice%27s_theorem">Rice's theorem</a> tell us that the only <em>semantic</em> properties of <a href="http://en.wikipedia.org/wiki/Turing_machines">Turing Machines</a> (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).</p>

<p>But there are other properties of Turing Machines that are not decidable. For example, the property that there is an unreachable state in a given Turing machine is undecidable$^{\dagger}$.</p>

<p>Is there a similar theorem to Rice's theorem that categorizes the decidability of similar properties? I don't have a precise definition. Any known theorem that covers the example I have given would be interesting for me.</p>

<p>$^\dagger$ it is easy to prove that this set is undecidable using <a href="http://en.wikipedia.org/wiki/Kleene%27s_recursion_theorem">Kleene's Recursion/Fixed Point theorems</a>.</p>
 | computability undecidability | 1 |
43 | Language theoretic comparison of LL and LR grammars | <p>People often say that <a href="https://en.wikipedia.org/wiki/LR_parser">LR(k)</a> parsers are more powerful than <a href="https://en.wikipedia.org/wiki/LL_parser">LL(k)</a> parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So what is the actual situation? In particular, I am interested in how LL(*) fits in.</p>

<p>As far as I know, the respective sets of grammars LL and LR parsers accept are orthogonal, so let us talk about the languages generated by the respective sets of grammars. Let $LR(k)$ denote the class of languages generated by grammars that can be parsed by an $LR(k)$ parser, and similar for other classes.</p>

<p>I am interested in the following relations:</p>

<ul>
<li>$LL(k) \overset{?}{\subseteq} LR(k)$</li>
<li>$\bigcup_{k=1}^{\infty} LL(k) \overset{?}{\subseteq} \bigcup_{k=1}^{\infty} LR(k)$</li>
<li>$\bigcup_{k=1}^{\infty} LL(k) \overset{?}{=} LL(*)$</li>
<li>$LL(*) \overset{?}{\circ} \bigcup_{k=1}^{\infty} LR(k)$</li>
</ul>

<p>Some of these are probably easy; my goal is to collect a "complete" comparison. References are appreciated.</p>
 | formal languages formal grammars parsers reference question | 1 |
57 | How does one know which notation of time complexity analysis to use? | <p>In most introductory algorithm classes, notations like $O$ (Big O) and $\Theta$ are introduced, and a student would typically learn to use one of these to find the time complexity.</p>

<p>However, there are other notations, such as $o$, $\Omega$ and $\omega$. Are there any specific scenarios where one notation would be preferable to another?</p>
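
<p>For reference, the definitions I am working from (standard, as far as I know), for $f, g : \mathbb{N} \to \mathbb{R}_{\ge 0}$:
$$\begin{align}
f \in O(g) &\iff \exists c > 0 \ \exists n_0 \ \forall n \ge n_0: f(n) \le c \, g(n) \\
f \in \Omega(g) &\iff \exists c > 0 \ \exists n_0 \ \forall n \ge n_0: f(n) \ge c \, g(n) \\
f \in \Theta(g) &\iff f \in O(g) \text{ and } f \in \Omega(g) \\
f \in o(g) &\iff \forall c > 0 \ \exists n_0 \ \forall n \ge n_0: f(n) \le c \, g(n) \\
f \in \omega(g) &\iff \forall c > 0 \ \exists n_0 \ \forall n \ge n_0: f(n) \ge c \, g(n)
\end{align}$$
so $o$ and $\omega$ are the strict versions of $O$ and $\Omega$.</p>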
 | algorithms terminology asymptotics landau notation reference question | 1 |
62 | Characterization of lambda-terms that have union types | <p>Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):</p>

<p>$$
\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}
 {\Gamma \vdash M : T_1 \wedge T_2}
 (\wedge I)
\qquad\qquad
\dfrac{}
 {\Gamma \vdash M : \top}
 (\top I)
$$</p>

<p>Intersection types have interesting properties with respect to normalization:</p>

<ul>
<li>A lambda-term can be typed without using the $\top I$ rule iff it is strongly normalizing.</li>
<li>A lambda-term admits a type not containing $\top$ iff it has a normal form.</li>
</ul>

<p>What if instead of adding intersections, we add unions?</p>

<p>$$
\dfrac{\Gamma \vdash M : T_1}
 {\Gamma \vdash M : T_1 \vee T_2}
 (\vee I_1)
\qquad\qquad
\dfrac{\Gamma \vdash M : T_2}
 {\Gamma \vdash M : T_1 \vee T_2}
 (\vee I_2)
$$</p>

<p>Does the lambda-calculus with simple types, subtyping and unions have any interesting similar property? How can the terms typable with union be characterized?</p>
 | lambda calculus type theory logic | 1 |
72 | Random Sudoku generator | <p>I want to generate a completely random <a href="http://en.wikipedia.org/wiki/Sudoku">Sudoku</a>.</p>

<p>Define a Sudoku grid as a $9\times9$ grid of integers between $1$ and $9$ where some elements can be omitted. A grid is a valid puzzle if there is a <strong>unique</strong> way to complete it to match the Sudoku constraints (each row, column and aligned $3\times3$ square has no repeated element) and it is minimal in that respect (i.e. if you omit any further element, the puzzle has multiple solutions).</p>

<p>How can I generate a random Sudoku puzzle, such that all Sudoku puzzles are equiprobable?</p>
 | algorithms randomness sudoku | 0 |
73 | Which kind of branch prediction is more important? | <p>I have observed that there are two different settings in which branch prediction matters.</p>

<ol>
<li><p>In superscalar execution, where branch prediction is very important and the cost lies mainly in execution delay rather than fetch delay.</p></li>

<li><p>In the instruction pipeline, where fetching is more of a problem, since instructions do not actually get executed until later.</p></li>
</ol>

<p>Which of these is more important (as in, which of them really matters in CPUs nowadays)? If both are equally important, or in case the second one is more important, then why do we not have two instruction pipelines (perhaps each of half the length), and then, depending on the branch, just choose one of them and start filling it again from the beginning?</p>
 | cpu pipelines computer architecture | 1 |
74 | Is Smoothed Analysis used outside academia? | <p>Did <a href="http://en.wikipedia.org/wiki/Smoothed_analysis">smoothed analysis</a> find its way into mainstream analysis of algorithms? Is it common for algorithm designers to apply smoothed analysis to their algorithms?</p>
 | algorithms complexity theory algorithm analysis | 1 |
75 | Time spent on requirement and its effect on project success and development time | <p>Is there any evidence suggesting that time spent on writing up, or thinking about, the requirements has an effect on the development time of a project? A study by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of projects. Are there any studies showing that time spent on requirements analysis has an effect on the development time of a project, or on how successful the project is?</p>
 | software engineering | 0 |
78 | Natural candidates for the hierarchy inside NPI | <p>Let's assume that $\mathsf{P} \neq \mathsf{NP}$. $\mathsf{NPI}$ is the class of problems in $\mathsf{NP}$ which are neither in $\mathsf{P}$ nor $\mathsf{NP}$-hard. You can find a list of problems conjectured to be $\mathsf{NPI}$ <a href="https://cstheory.stackexchange.com/questions/79/problems-between-p-and-npc/">here</a>.</p>

<p><a href="https://cstheory.stackexchange.com/questions/799/generalized-ladners-theorem">Ladner's theorem</a> tells us that if $\mathsf{NP}\neq\mathsf{P}$ then there is an infinite hierarchy of $\mathsf{NPI}$ problems, i.e. there are $\mathsf{NPI}$ problems which are harder than other $\mathsf{NPI}$ problems.</p>

<blockquote>
 <p>I am looking for candidates of such problems, i.e. I am interested in pairs of problems<br>
 - $A,B \in \mathsf{NP}$,<br>
 - $A$ and $B$ are conjectured to be $\mathsf{NPI}$,<br>
 - $A$ is known to reduce to $B$,<br>
 - but there are no known reductions from $B$ to $A$.</p>
</blockquote>

<p>Even better if there are arguments for supporting these, e.g. there are results that $B$ does not reduce to $A$ assuming some conjectures in complexity theory or cryptography.</p>

<p>Are there any <em>natural</em> examples of such problems?</p>

<p>Example: the Graph Isomorphism problem and the Integer Factorization problem are conjectured to be in $\mathsf{NPI}$ and there are arguments supporting these conjectures. Are there any decision problems harder than these two but not known to be $\mathsf{NP}$-hard?</p>
 | complexity theory np hard | 0 |
81 | Applying the graph mining algorithm Leap Search in an unlabeled setting | <p>I am reading <a href="http://www.google.ch/url?sa=t&rct=j&q=leap%20search&source=web&cd=5&ved=0CE8QFjAE&url=http://dl.acm.org/ft_gateway.cfm?id=1376662&type=pdf&ei=sSVXT569JYjkiAL2hLyiCw&usg=AFQjCNEOkxyk31CeifLNr72Cv_it7IATbg&cad=rja">Mining Significant Graph Patterns by Leap Search</a> (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.</p>

<p>On page 436 however, the authors clearly state that "In the following presentation, we are going to use the second setting (Figure 3) to illustrate the main idea. Nevertheless, the proposed technique can also be applied to the 1st [unlabeled] setting."</p>
 | data mining | 0 |
102 | Is there any nongeneral CFG parsing algorithm that recognises EPAL? | <p>EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:</p>
<blockquote>
<p><span class="math-container">$S \rightarrow a a$</span></p>
<p><span class="math-container">$S \rightarrow b b$</span></p>
<p><span class="math-container">$S \rightarrow a S a$</span></p>
<p><span class="math-container">$S \rightarrow b S b$</span></p>
</blockquote>
<p>EPAL is the 'bane' of many parsing algorithms: I have yet to encounter a parsing algorithm for unambiguous CFGs that can parse EPAL using any grammar describing it. It is often used to show that there are unambiguous CFGs that cannot be parsed by a particular parser. This inspired my question:</p>
<blockquote>
<p>Is there some parsing algorithm accepting only unambiguous CFGs that works on EPAL?</p>
</blockquote>
<p>Of course, one can design an ad-hoc two-pass parser for the grammar that parses the language in linear time. I'm interested in parsing methods that have not been designed specifically with EPAL in mind.</p>
 | formal languages formal grammars parsers | 1 |
103 | Clock synchronization in a network with asymmetric delays | <p>Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.</p>

<p>The simple method is that the computer sends a query to a time server, noting the local time $B + C_1$. The time server receives the query at a time $T$ and sends a reply containing $T$ back to the client, which receives it at a time $B + C_2$. Then $B + C_1 \le T \le B + C_2$, i.e. $T - C_2 \le B \le T - C_1$.</p>

<p>If the network transmission time and the server processing time are symmetric, then $B = T - \dfrac{C_1 + C_2}{2}$. As far as I know, <a href="http://en.wikipedia.org/wiki/Network_Time_Protocol">NTP</a>, the time synchronization protocol used in the wild, operates on this assumption.</p>
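
<p>As a toy, self-contained illustration of that computation (Python; the sign convention is mine: the client clock reads true time plus $B$, and the delays are faked):</p>

<pre><code>def estimate_offset(l1, t, l2):
    # l1, l2: client's local clock at send/receive; t: the server's time.
    # B lies in [l1 - t, l2 - t]; the midpoint is exact when the request
    # and reply delays are equal.
    return (l1 + l2) / 2.0 - t

B, d_req, d_rep = 5.0, 0.02, 0.08  # true offset; asymmetric delays
t = 100.0                          # true time when the server replies
l1 = (t - d_req) + B               # local clock when the query left
l2 = (t + d_rep) + B               # local clock when the reply arrived
print(estimate_offset(l1, t, l2))  # 5.03: off by (d_rep - d_req) / 2
</code></pre>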

<p>How can the precision be improved if the delays are not symmetric? Is there a way to measure this asymmetry in a typical Internet infrastructure?</p>
 | clocks distributed systems computer networks | 0 |
104 | Recursive definitions over an inductive type with nested components | <p>Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.</p>

<pre><code>Inductive LTree : Set := Node : list LTree -> LTree.
</code></pre>

<p>The naive way of defining a recursive function over these trees by recursing over trees and lists of trees does not work. Here's an example with the <code>size</code> function that computes the number of nodes.</p>

<pre><code>Fixpoint size (t : LTree) : nat := match t with Node l => 1 + (size_l l) end
with size_l (l : list LTree) : nat := match l with
 | nil => 0
 | cons h r => size h + size_l r
 end.
</code></pre>

<p>This definition is ill-formed (error message excerpted):</p>

<pre><code>Error:
Recursive definition of size_l is ill-formed.
Recursive call to size has principal argument equal to
"h" instead of "r".
</code></pre>

<p>Why is the definition ill-formed, even though <code>r</code> is clearly a subterm of <code>l</code>? Is there a way to define recursive functions on such a data structure?</p>

<hr>

<p>If you aren't fluent in Coq syntax: <code>LTree</code> is an inductive type corresponding to the following grammar.</p>

<p>$$\begin{align}
 \mathtt{LTree} ::= & \\
 \vert & \mathtt{list}(\mathtt{LTree}) \\
\end{align}$$</p>

<p>We attempt to define the <code>size</code> function by induction over trees and lists. In OCaml, that would be:</p>

<pre><code>type t = Node of t list
let rec size = function Node l -> 1 + size_l l
and size_l = function [] -> 0
 | h::r -> size h + size_l r
</code></pre>
 | logic coq type theory recursion proof assistants | 0 |
105 | Algorithm to test whether a binary tree is a search tree and count complete branches | <p>I need to create a recursive algorithm to see if a binary tree is a binary search tree, as well as count how many complete branches there are (a parent node with both left and right child nodes), with an assumed global counting variable. This is an assignment for my data structures class.</p>

<p>So far I have</p>

<pre><code>void BST(tree T) {
 if (T == null) return
 if ( T.left and T.right) {
 if (T.left.data < T.data or T.right.data > T.data) {
 count = count + 1
 BST(T.left)
 BST(T.right)
 }
 }
}
</code></pre>

<p>But I can't really figure this one out. I know that this algorithm won't solve the problem because the count will be zero if the second if statement isn't true.</p>

<p>Could anyone help me out on this one? </p>
 | algorithms recursion trees | 1 |
108 | Equivalence of Büchi automata and linear $\mu$-calculus | <p>It's a known fact that every LTL formula can be expressed by a Büchi $\omega$-automaton. But, apparently, Büchi automata are a strictly more expressive model. I've heard somewhere that Büchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and only one temporal operator: $\mathbf{X}$).</p>

<p>Is there an algorithm (constructive proof) of this equivalence?</p>
 | logic automata formal methods linear temporal logic buchi automata | 0 |
109 | Are there inherently ambiguous and deterministic context-free languages? | <p>Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.</p>

<p>Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous, and unambiguous otherwise.</p>

<p>An example of a deterministic, unambiguous language is the language: $$\{a^{n}b^{n} \in \{a, b\}^{*} | n \ge 0\}$$
An example of a nondeterministic, unambiguous language is the language: 
$$\{w \in \{a, b\}^{*} | w = w^{R}\}$$</p>

<p>From <a href="http://en.wikipedia.org/wiki/Ambiguous_grammar#Inherently_ambiguous_languages">Wikipedia</a>, an example of an inherently ambiguous context-free language is the following union of context-free languages, which must also be context-free: 
$$L = \{a^{n}b^{m}c^{m}d^{n} \in \{a, b, c, d\}^{*} | n, m \ge 0\} \cup \{a^{n}b^{n}c^{m}d^{m} \in \{a, b, c, d\}^{*} | n, m \ge 0\}$$</p>

<p>Now for the questions:</p>

<ol>
<li>Is it known whether there exists a deterministic, inherently ambiguous context-free language? If so, is there an (easy) example?</li>
<li>Is it known whether there exists a nondeterministic, inherently ambiguous context-free language? If so, is there an (easy) example?</li>
</ol>

<p>Clearly, since an inherently ambiguous context-free language exists ($L$ is an example), the answer to one of these questions is easy, if it is known whether $L$ is deterministic or nondeterministic. I also assume that it's true that if there's a deterministic one, there's bound to be a nondeterministic one as well... but I've been surprised before. References are appreciated, and apologies in advance if this is a well-known, celebrated result (in which case, I'm completely unaware of it).</p>
 | formal languages automata formal grammars pushdown automata | 1 |
110 | Determining capabilities of a min-heap (or other exotic) state machines | <p><em>See the end of this post for some clarification on the definition(s) of min-heap automata.</em></p>

<p>One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two or more stacks or tapes, have been shown to be equivalent in power to Turing machines.</p>

<p>Imagine a min-heap machine. It works exactly like a push-down automaton, with the following exceptions:</p>

<ol>
<li>Instead of getting to look at the last thing you added to the heap, you only get to look at the smallest element (with the ordering defined on a per-machine basis) currently on the heap.</li>
<li>Instead of getting to remove the last thing you added to the heap, you only get to remove an instance of the smallest element (with the ordering defined on a per-machine basis) currently on the heap.</li>
<li>Instead of getting to add an element to the top of the heap, you can only add an element to the heap, with its position being determined according to the other elements in the heap (with the ordering defined on a per-machine basis).</li>
</ol>

<p>This machine can accept all regular languages, simply by not using the heap. It can also accept the language $\displaystyle \{a^{n}b^{n} \in \{a, b\}^{*} \mid n \ge 0\}$ by adding $a$'s to the heap, and removing $a$'s from the heap when it reads $b$'s. It can accept a variety of other context-free languages. However, it cannot accept, for instance, $\displaystyle \{w \in \{a, b\}^{*} \mid w = w^{R}\}$ (stated without proof). EDIT: or can it? I don't think it can, but I've been surprised before, and I'm sure I'll keep being surprised by what my assumptions keep making of me... well.</p>
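
<p>(As a sanity check of the $a^{n}b^{n}$ claim, here is a tiny Python simulation of that particular machine; the encoding is mine and only mimics the heap discipline, not the full definition below.)</p>

<pre><code>import heapq

def accepts_anbn(word):
    # Min-heap-automaton-style recognizer for { a^n b^n : n >= 0 }.
    heap, reading_as = [], True
    for ch in word:
        if ch == 'a':
            if not reading_as:
                return False     # an 'a' after a 'b': reject
            heapq.heappush(heap, 'a')
        elif ch == 'b':
            reading_as = False
            if not heap:
                return False     # nothing left to match
            heapq.heappop(heap)  # remove a minimal element (an 'a')
        else:
            return False
    return not heap              # accept iff the heap is empty

print([w for w in ('', 'ab', 'aabb', 'aab', 'ba') if accepts_anbn(w)])
# ['', 'ab', 'aabb']
</code></pre>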

<blockquote>
 <p>Can it accept any context-sensitive or Turing-complete languages?</p>
</blockquote>

<p>More generally, what research, if any, has been pursued in this direction? What results are there, if any? I am also interested in other varieties of exotic state machines, possibly those using other data structures for storage or various kinds of restrictions on access (e.g., how LBAs are restricted TMs). References are appreciated. I apologize in advance if this question is demonstrating ignorance.</p>

<hr>

<p><strong>Formal Definition:</strong></p>

<p>I provide some more detailed definitions of min-heap automata here in order to clarify further discussion in questions which reference this material.</p>

<p>We define a <em>type-1 nondeterministic min-heap automaton</em> as a 7-tuple $$(Q, q_0, A, \Sigma, \Gamma, Z_0, \delta)$$ where...</p>

<ol>
<li>$Q$ is a finite, non-empty set of states;</li>
<li>$q_0 \in Q$ is the initial state;</li>
<li>$A \subseteq Q$ is the set of accepting states;</li>
<li>$\Sigma$ is a finite, non-empty input alphabet;</li>
<li>$\Gamma$ is a finite, non-empty heap alphabet, where the weight of a symbol $\gamma \in \Gamma$, $w(\gamma) \in \mathbb{N}$, is such that $w(\gamma_1) = w(\gamma_2) \iff \gamma_1 = \gamma_2$;</li>
<li>$Z_0 \notin \Gamma$ is the special bottom-of-the-heap symbol;</li>
<li>$\delta : Q \times (\Sigma \cup \{\epsilon\}) \times (\Gamma \cup \{Z_0\}) \rightarrow \mathcal{P}({Q \times \Gamma^*})$ is the transition function.</li>
</ol>

<p>The transition function works by assuming an initially empty heap consisting of only $Z_0$. The transition function may add to the heap an arbitrary collection (finite, but possibly empty or with repeats) of elements $\gamma_1, \gamma_2, ..., \gamma_k \in \Gamma$. Alternatively, the transition function may remove an instance of the element $\gamma$ with the lowest weight $w(\gamma)$ of all elements remaining on the heap (i.e., the element on top of the heap). The transition function may only use the top-most (i.e., of minimal weight) symbol instance in determining any given transition.</p>

<p>Further, define a <em>type-1 deterministic min-heap automaton</em> to be a type-1 nondeterministic min-heap automaton which satisfies the following property: for all strings $x{\sigma}y \in \Sigma^*$ such that $|x| = n$ and $\sigma \in \Sigma$, $|\delta^{n+1}(q_0, x{\sigma}y, Z_0)| \leq 1$.</p>

<p>Define also a <em>type-2 nondeterministic min-heap automaton</em> exactly the same as a type-1 nondeterministic min-heap automaton, except for the following changes:</p>

<ol>
<li>$\Gamma$ is a finite, non-empty heap alphabet, where the weight of a symbol $\gamma \in \Gamma$, $w(\gamma) \in \mathbb{N}$, is such that $w(\gamma_1) = w(\gamma_2)$ does not necessarily imply $\gamma_1 = \gamma_2$; in other words, different heap symbols can have the same weight.</li>
<li>When instances of distinct heap symbols with same weight are added to the heap, their relative order is preserved according to a last-in, first-out (LIFO) stack-like ordering.</li>
</ol>

<p>Thanks to Raphael for pointing out this more natural definition, which captures (and extends) the context-free languages. </p>

<hr>

<p><strong>Some results demonstrated so far:</strong></p>

<ol>
<li>Type-1 min-heap automata recognize a set of languages which is neither a subset nor a superset of the context-free languages. [<a href="https://cs.stackexchange.com/a/114/98">1</a>,<a href="https://cs.stackexchange.com/a/115/98">2</a>]</li>
<li>Type-2 min-heap automata, by their definition, recognize a set of languages which is a proper superset of the context-free languages, as well as a proper superset of the languages accepted by type-1 min-heap automata.</li>
<li>Languages accepted by type-1 min-heap automata appear to be closed under union, concatenation, and Kleene star, but not under complementation [<a href="https://cs.stackexchange.com/a/415/98">1</a>], intersection, or difference;</li>
<li>Languages accepted by type-1 nondeterministic min-heap automata appear to be a proper superset of languages accepted by type-1 deterministic min-heap automata.</li>
</ol>

<p>There may be a few other results I have missed. More results are (possibly) on the way.</p>

<hr>

<p><strong>Follow-up Questions</strong></p>

<ol>
<li><a href="https://cs.stackexchange.com/q/390/98">Closure under reversal?</a> -- Open</li>
<li><a href="https://cs.stackexchange.com/q/393/98">Closure under complementation?</a> -- No!</li>
<li><a href="https://cs.stackexchange.com/q/394/98">Does nondeterminism increase power?</a> -- Yes?</li>
<li><a href="https://cs.stackexchange.com/q/933/69">Is $HAL \subsetneq CSL$ for type-2?</a> -- Open</li>
<li><a href="https://cs.stackexchange.com/q/934/69">Does adding heaps increase power for type-1?</a> -- $HAL^1 \subsetneq HAL^2 = HAL^k$ for $k > 2$ (?)</li>
<li><a href="https://cs.stackexchange.com/q/944/69">Does adding a stack increase power for type-1?</a> -- Open</li>
</ol>
 | formal languages automata | 1 |
118 | Are there improvements on Dana Angluin's algorithm for learning regular sets | <p>In her 1987 seminal paper Dana Angluin presents a polynomial time algorithm for learning a DFA from membership queries and theory queries (counterexamples to a proposed DFA).</p>
<p>She shows that if you are trying to learn a minimal DFA with <span class="math-container">$n$</span> states, and your largest counterexample is of length <span class="math-container">$m$</span>, then you need to make <span class="math-container">$O(mn^2)$</span> membership queries and at most <span class="math-container">$n - 1$</span> theory queries.</p>
<p>Have there been significant improvements on the number of queries needed to learn a regular set?</p>
<hr />
<h3>References and Related Questions</h3>
<ul>
<li><p>Dana Angluin (1987) "Learning Regular Sets from Queries and Counterexamples", Information and Computation 75: 87-106</p>
</li>
<li><p><a href="https://cstheory.stackexchange.com/q/10958/1037">Lower bounds for learning in the membership query and counterexample model</a></p>
</li>
</ul>
 | algorithms learning theory machine learning | 1 |
119 | How fundamental are matroids and greedoids in algorithm design? | <p>Initially, <a href="http://en.wikipedia.org/wiki/Matroid" rel="nofollow noreferrer">matroids</a> were introduced to generalize the notion of linear independence to a collection $I$ of subsets of some ground set $E$. Certain problems that contain this structure permit greedy algorithms to find optimal solutions. The concept of <a href="http://en.wikipedia.org/wiki/Greedoid" rel="nofollow noreferrer">greedoids</a> was later introduced to generalize this structure to capture more problems that allow for optimal solutions to be found by greedy methods.</p>

<p>How often do these structures arise in algorithm design? </p>

<p>Furthermore, more often than not a greedy algorithm will not be able to fully capture what is necessary to find optimal solutions, but may still find very good approximate solutions (Bin Packing, for example). Given that, is there a way to measure how "close" a problem is to a greedoid or matroid?</p>
 | algorithms optimization combinatorics greedy algorithms matroids | 1 |
122 | What Is The Complexity of Implementing a Particle Filter? | <p>In a <a href="http://www.youtube.com/watch?feature=player_embedded&v=4S-sx5_cmLU#!" rel="noreferrer">video</a> discussing the merits of <a href="http://en.wikipedia.org/wiki/Particle_filter" rel="noreferrer">particle filters</a> for localization, it was implied that there is some ambiguity about the complexity cost of particle filter implementations. Is this correct? Could someone explain this?</p>
 | computational geometry knowledge representation reasoning statistics | 1 |
125 | How to define quantum Turing machines? | <p>In quantum computation, what is the equivalent model of a Turing machine? 
It is quite clear to me how quantum <strong>circuits</strong> can be constructed out of quantum gates, but how can we define a quantum Turing machine (QTM) that can actually benefit from quantum effects, namely, operate on high-dimensional systems?</p>
 | quantum computing turing machines computation models | 1 |
129 | Do subqueries add expressive power to SQL queries? | <p>Does SQL need subqueries?</p>

<p>Imagine a sufficiently generalized implementation of the structured query language for relational databases. Since the structure of the canonical SQL <code>SELECT</code> statement is actually pretty important for this to make sense, I don't appeal directly to relational algebra, but you could frame this in those terms by making appropriate restrictions on the form of expressions.</p>

<p>An SQL <code>SELECT</code> query generally consists of a projection (the <code>SELECT</code> part) some number of <code>JOIN</code> operations (the <code>JOIN</code> part), some number of <code>SELECTION</code> operations (in SQL, the <code>WHERE</code> clauses), and then set-wise operations (<code>UNION</code>, <code>EXCEPT</code>, <code>INTERSECT</code>, etc.), followed by another SQL <code>SELECT</code> query.</p>

<p>Tables being joined can be the computed results of expressions; in other words, we can have a statement such as:</p>

<pre><code>SELECT t1.name, t2.address
 FROM table1 AS t1 
 JOIN (SELECT id, address 
 FROM table2 AS t3 
 WHERE t3.id = t1.id) AS t2
 WHERE t1.salary > 50000;
</code></pre>

<p>We will refer to the use of a computed table as part of an SQL query as a subquery. In the example above, the second (indented) <code>SELECT</code> is a subquery.</p>

<p>Can all SQL queries be written in such a way as to not use subqueries? The example above can:</p>

<pre><code>SELECT t1.name, t2.address
 FROM table1 AS t1 
 JOIN table2 AS t2
 ON t1.id = t2.id
 WHERE t1.salary > 50000;
</code></pre>

<p>This example is somewhat spurious, or trivial, but one can imagine instances where considerably more effort might be required to recover an equivalent expression. In other words, is it the case that for every SQL query $q$ with subqueries, there exists a query $q'$ without subqueries such that $q$ and $q'$ are guaranteed to produce the same results for the same underlying tables? Let us limit SQL queries to the following form:</p>

<pre><code>SELECT <attribute>,
 ...,
 <attribute>
 FROM <a table, not a subquery>
 JOIN <a table, not a subquery>
 ...
 JOIN <a table, not a subquery>
WHERE <condition>
 AND <condition>
 ...
 AND <condition>

UNION
 -or-
EXCEPT
 -or-
<similar>

SELECT ...
</code></pre>

<p>And so on. I think left and right outer joins don't add much, but if I am mistaken, please feel free to point that out... in any event, they are fair game as well. As far as set operations go, I guess any of them are fine... union, difference, symmetric difference, intersection, etc... anything that is helpful. Are there any known forms to which all SQL queries can be reduced? Do any of these eliminate subqueries? Or are there some instances where no equivalent, subquery-free query exists? References are appreciated... or a demonstration (by proof) that they are or aren't required would be fantastic. Thanks, and sorry if this is a celebrated (or trivial) result of which I am painfully ignorant.</p>
 | database theory relational algebra | 1 |
130 | What are the conditions for a NFA for its equivalent DFA to be maximal in size? | <p>We know that DFAs are equivalent to NFAs in expressive power; there is also a well-known algorithm for converting NFAs to DFAs (the powerset construction, due to Rabin and Scott), which in the worst case gives us $2^S$ states, if our NFA had $S$ states.</p>

<p>My question is: what is determining the worst case scenario?</p>

<hr>

<p>Here's a transcription of an algorithm in case of ambiguity:</p>

<p>Let $A = (Q,\Sigma,\delta,q_0,F)$ be a NFA. We construct a DFA $A' = (Q',\Sigma,\delta',q'_0,F')$ where </p>

<ul>
<li>$Q' = \mathcal{P}(Q)$, </li>
<li>$F' = \{S \in Q' | F \cap S \neq \emptyset \}$,</li>
<li>$\delta'(S,a) =\bigcup_{s \in S} (\delta(s,a) \cup \hat \delta(s,\varepsilon))$, and</li>
<li>$q'_0 = \{q_0\} \cup \hat \delta(q_0, \varepsilon)$,</li>
</ul>

<p>where $\hat\delta$ is the extended transition function of $A$.</p>
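
<p>A compact Python rendering of this construction, in case it helps (my own sketch; it assumes no $\varepsilon$-transitions, and only reachable subsets are materialized):</p>

<pre><code>def nfa_to_dfa(alphabet, delta, q0, finals):
    # delta[(q, a)] is the set of NFA successors of q on symbol a.
    start = frozenset([q0])
    dfa_delta, seen, todo = {}, {start}, [start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(t for s in S for t in delta.get((s, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_finals = {S for S in seen if S & finals}
    return seen, dfa_delta, start, dfa_finals

# Worst-case-flavoured example: "the n-th symbol from the end is an a".
n = 3
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
for i in range(1, n):
    delta[(i, 'a')] = {i + 1}
    delta[(i, 'b')] = {i + 1}
states, _, _, _ = nfa_to_dfa('ab', delta, 0, {n})
print(len(states))  # 2**n = 8 reachable DFA states
</code></pre>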
 | formal languages automata regular languages finite automata nondeterminism | 1 |
134 | Analyzing load balancing schemes to minimize overall execution time | <p>Suppose that a certain parallel application uses a master-slave design to process a large number of workloads. Each workload takes some number of cycles to complete; the number of cycles any given workload will take is given by a known random variable $X$. Assume that there are $n$ such workloads and $m$ equivalent slaves (processing nodes). Naturally, a more general version of this question addresses the case of slaves of differing capabilities, but we ignore this for now.</p>

<p>The master cannot process workloads, but can distribute workloads to slave nodes and monitor progress of slave nodes. Specifically, the master can perform the following actions:</p>

<ol>
<li>Instantaneously begin processing of any $k$ workloads on any free node.</li>
<li>Instantaneously receive confirmation of the completion by a node of a previously initiated batch of $k$ workloads.</li>
<li>At any point in time, and instantaneously, determine the state of all nodes (free or busy) as well as the number of workloads completed and the number of workloads remaining.</li>
</ol>

<p>For simplicity's sake, assume $k$ divides $n$.</p>

<p>There are at least two categories of load balancing strategies for minimizing the total execution time of all workloads using all slaves (to clarify, I'm talking about the makespan or wall-clock time, not the aggregate process time, which is independent of the load-balancing strategy being used under the simplifying assumptions being made in this question): static and dynamic. In a static scheme, all placement decisions are made at time $t = 0$. In a dynamic scheme, the master can make placement decisions using information about the progress being made by some slaves, and as such, better utilization can be attained (in practice, there are overheads associated with dynamic scheduling as compared to static scheduling, but we ignore these). Now for some questions:</p>

<ol>
<li>Is there a better way to statically schedule workloads than to divide batches of $k$ workloads among the $m$ slaves as evenly as possible (we can also assume, for simplicity's sake, that $m$ divides $n/k$, so batches could be statically scheduled completely evenly)? If so, how?</li>
<li>Using the best static scheduling policy, what should the mean and standard deviation be for the total execution time, in terms of the mean $\mu$ and standard deviation $\sigma$ of $X$? </li>
</ol>

<p>A simple dynamic load balancer might schedule $i$ batches of $k$ workloads to each slave initially, and then, when nodes complete the initial $i$ batches, schedule an additional batch of $k$ workloads to each slave on a first-come, first-served basis. So if two slave nodes are initially scheduled 2 batches of 2 workloads each, and the first slave finishes its two batches, an additional batch is scheduled to the first slave, while the second slave continues working. If the first slave finishes the new batch before the second slave finishes its initial work, the master will continue scheduling to the first slave. Only when the second slave completes executing its work will it be issued a new batch of workloads. Example:</p>

<pre><code> DYNAMIC STATIC
 POLICY POLICY

 slave1 slave2 slave1 slave2
 ------ ------ ------ ------

t<0 -- -- -- --

t<1 batch1 batch3 batch1 batch3
 batch2 batch4 batch2 batch4
 batch5 batch7
 batch6 batch8

t=1 -- batch3 batch5 batch3
 batch4 batch6 batch4
 batch7
 batch8

t<2 batch5 batch3 batch5 batch3
 batch4 batch6 batch4
 batch7
 batch8

t=2 -- batch4 batch6 batch4
 batch7
 batch8

t<3 batch6 batch4 batch6 batch4
 batch7
 batch8

t=3 -- -- -- batch7
 batch8

t<4 batch7 batch8 -- batch7
 batch8

t=4 -- -- -- batch8

t<5 -DONE- -- batch8

t=5 -- --

t < 6 -DONE-
</code></pre>

<p>For clarification, batches 1 and 2 take 1/2 second each to be processed, batch 3 takes 2 seconds to be processed, and batches 4-8 take 1 second each to be processed. This information is not known a-priori; in the static scheme, all jobs are distributed at t=0, whereas in the dynamic scheme, distribution can take into account what the actual runtimes of the jobs "turned out" to be. We notice that the static scheme takes one second longer than the dynamic scheme, with slave1 working 3 seconds and slave2 working 5 seconds. In the dynamic scheme, both slaves work for the full 4 seconds.</p>
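
<p>For concreteness, this is the kind of simulation that produces such numbers (Python; every structural choice is mine, and the dynamic policy here hands out one batch at a time from an empty start, so it yields 4.5 rather than the 4.0 above, where each slave is seeded with two batches first):</p>

<pre><code>import heapq

def static_makespan(durations, m, chunk):
    # Deal out chunks of `chunk` batches round-robin at t = 0.
    loads = [0.0] * m
    for i in range(0, len(durations), chunk):
        loads[(i // chunk) % m] += sum(durations[i:i + chunk])
    return max(loads)

def dynamic_makespan(durations, m):
    # First-come, first-served: each finishing slave grabs the next batch.
    free = [0.0] * m  # min-heap of the times at which slaves become free
    heapq.heapify(free)
    for d in durations:
        t = heapq.heappop(free)  # earliest-available slave
        heapq.heappush(free, t + d)
    return max(free)

batches = [0.5, 0.5, 2, 1, 1, 1, 1, 1]  # batches 1-8 from the example
print(static_makespan(batches, 2, 2))   # 5.0, as in the table
print(dynamic_makespan(batches, 2))     # 4.5
</code></pre>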

<p>Now for the question that motivated writing this:</p>

<ol>
<li>Using the dynamic load balancing policy described above, what should the mean and standard deviation be for the total execution time, in terms of the mean $\mu$ and standard deviation $\sigma$ of $X$?</li>
</ol>

<p>Interested readers have my assurances that this isn't homework, although it probably isn't much harder than what one might expect to get as homework in certain courses. Given that, if anyone objects to this being asked and demands that I show some work, I will be happy to oblige (although I don't know when I'll have time in the near future). This question is actually based on some work that I never got around to doing a semester or two ago, and empirical results were where we left it. Thanks for help and/or effort, I'll be interested to see what you guys put together.</p>
 | scheduling distributed systems parallel computing | 1 |
135 | Extension of SQL capturing $\mathsf{P}$ | <p>According to <a href="http://books.google.ca/books?id=kWSZ0OWnupkC&pg=PA224#v=onepage&q&f=false">Immerman</a>, the complexity class associated with <a href="http://en.wikipedia.org/wiki/SQL">SQL</a> queries is exactly the class of <em>safe queries</em> in $\mathsf{Q(FO(COUNT))}$ (first-order queries plus counting operator): SQL captures safe queries. (In other words, all SQL queries have a complexity in $\mathsf{Q(FO(COUNT))}$, and all problems in $\mathsf{Q(FO(COUNT))}$ can be expressed as an SQL query.)</p>

<p>Based on this result, from a theoretical point of view, there are many interesting problems that can be solved efficiently but are not expressible in SQL. Therefore an extension of SQL which is still efficient seems interesting. So here is my question:</p>

<blockquote>
 <p>Is there an <strong>extension of SQL</strong> (implemented and <strong>used in the industry</strong>) which <strong>captures $\mathsf{P}$</strong> (i.e. can express all polynomial-time computable queries and no others)?</p>
</blockquote>

<p>I want a database query language which satisfies all three conditions. It is easy to define an extension which would extend SQL and will capture $\mathsf{P}$. But my question is if such a language makes sense from the practical perspective, so I want a language that is being used in practice. If this is not the case and there is no such language, then I would like to know if there is a reason that makes such a language uninteresting from the practical viewpoint? For example, are the queries that arise in practice usually simple enough that there is no need for such a language?</p>
 | database theory complexity theory finite model theory descriptive complexity | 0 |
138 | Natural occurrences of monads that make use of the category-theoretical framework | <p>Today, a talk by Henning Kerstan ("Trace Semantics for Probabilistic Transition Systems") confronted me with category theory for the first time. He has built a theoretical framework for describing probablistic transition systems and their behaviour in a general way, i.e. with uncountably infinite state sets and different notions of traces. To this end, he goes up through several layers of abstraction to finally end up with the notion of <a href="https://en.wikipedia.org/wiki/Monad_%28category_theory%29">monads</a> which he combines with measure theory to build the model he needs.</p>

<p>In the end, it took him 45 minutes to (roughly) build a framework to describe a concept he initially explained in 5 minutes. I appreciate the beauty of the approach (it <em>does</em> generalise nicely over different notions of traces) but it strikes me as an odd balance nevertheless.</p>

<p>I struggle to see what a monad really <em>is</em> and how so general a concept can be useful in applications (both in theory and practice). Is it really worth the effort, result-wise?</p>

<p>Therefore this question: </p>

<blockquote>
 <p>Are there problems that are natural (in the sense of CS) to which
 the abstract notion of monads can be applied and where it helps (or is even
 instrumental) in deriving desired results (at all, or in a nicer way
 than without)?</p>
</blockquote>
 | applied theory category theory | 1 |
143 | Is this finite graph problem decidable? What factors make a problem decidable? | <p>I want to know if the following problem is decidable, and how to find out. For every problem I see, I can say "yes" or "no" to it, so are most problems and algorithms decidable, except a few (such as those listed <a href="http://en.wikipedia.org/wiki/List_of_undecidable_problems">here</a>)?</p>

<blockquote>
 <p>Input: A directed and finite graph $G$, with $v$ and $u$ as vertices<br>
 Question: Does a path in $G$ with $u$ as initial vertex and $v$ as final vertex exist?</p>
</blockquote>
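
<p>(For instance, the following breadth-first search sketch in Python always terminates on a finite graph and answers exactly this question, which is what makes me think "decidable"; the adjacency-list representation is my choice:)</p>

<pre><code>from collections import deque

def reachable(adj, u, v):
    # adj: dict mapping each vertex to a list of out-neighbours.
    seen, frontier = {u}, deque([u])
    while frontier:
        x = frontier.popleft()
        if x == v:
            return True
        for y in adj.get(x, ()):
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return False

print(reachable({'u': ['a'], 'a': ['v'], 'v': []}, 'u', 'v'))  # True
</code></pre>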
 | algorithms computability graphs undecidability | 1 |
146 | Equivalence of Kolmogorov-Complexity definitions | <p>There are many ways to define the <a href="https://en.wikipedia.org/wiki/Kolmogorov_complexity">Kolmogorov complexity</a>, and usually all these definitions are equivalent up to an additive constant. That is, if $K_1$ and $K_2$ are Kolmogorov complexity functions (defined via different languages or models), then there exists a constant $c$ such that for every string $x$, $|K_1(x) - K_2(x)| < c$. I believe this is because for every Kolmogorov complexity function $K$ and for every $x$ it holds that $K(x) \le |x| + c$, for some constant $c$.</p>

<p>I'm interested in the following definitions for $K$, based on Turing-machines</p>

<ol>
<li><strong>number of states</strong>: Define $K_1(x)$ to be the minimal number $q$ such that a TM with $q$ states outputs $x$ on the empty string.</li>
<li><strong>Length of Program</strong>: Define $K_2(x)$ to be the shortest "program" that outputs $x$. Namely, fix a way to encode TMs into binary strings; for a machine $M$ denote its (binary) encoding as $\langle M \rangle$. $K_2(x) = \min |\langle M \rangle|$ where the minimum is over all $M$'s that output $x$ on empty input.</li>
</ol>

<p>Are $K_1$ and $K_2$ equivalent? What is the relation between them, and which one better captures the concept of Kolmogorov complexity, if they are not equivalent?</p>

<p>What especially bugs me is the rate at which $K_2$ increases with $x$: it does not seem to be bounded by $|x| + c$, but at best by $C|x|$ for some constant $C > 1$, or worse.
Consider the simplest TM that outputs $x$, namely the one that just encodes $x$ as part of its states and transition function. It is immediate to see that
$K_1(x) \le |x| + 1$. However, the encoding of the same machine is much larger, and the trivial bound I get is $K_2(x) = O(|x| \log |x|)$.</p>
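
<p>The only relation I can derive is the standard encoding-size argument (assuming the encoding essentially lists the transition table): a machine with $q$ states over a fixed alphabet has a table with $O(q)$ entries of $O(\log q)$ bits each, and conversely any such encoding spends at least one bit per state, so
$$K_1(x) \le K_2(x) \le c \, K_1(x) \log K_1(x)$$
for some constant $c$. This matches the $|x| \log |x|$ bound above, but leaves open whether the logarithmic factor is necessary.</p>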
 | computability kolmogorov complexity | 1 |
148 | Type-checking algorithms | <p>I am starting personal bibliographic research on type-checking algorithms and want some tips. What are the most commonly used type-checking algorithms, strategies and general techniques?</p>

<p>I am particularly interested in complex type-checking algorithms that are implemented in widely known, strongly and statically typed languages such as, for example, C++, Java 5+, Scala or others; i.e., type-checking algorithms that are not very simple due to very simple typing in the underlying language (like Java 1.4 and below).</p>

<p>I am not per se interested in a specific language X, Y or Z. I am interested in type-checking algorithms regardless of the language that they target. If you provide an answer like "language L that you never heard about, which is strongly typed with complex typing, has a type-checking algorithm that does A, B and C by checking X and Y using algorithm Z", or "the strategy X and Y used for Scala and a variant Z of A used for C# are cool because of the R, S and T features that work in that way", then those are good answers.</p>
 | algorithms programming languages reference request type checking | 1 |
149 | Analysis of and references for Koch-snowflake-like (and other exotic) network topologies | <p>In computer networking and high-performance cluster computer design, network topology refers to the design of the way in which nodes are connected by links to form a communication network. Common network topologies include the mesh, torus, ring, star, tree, etc. These topologies can be studied analytically to determine properties related to their expected performance; such characteristics include diameter (maximal distance between a pair of nodes, in terms of the number of links which must be crossed if such nodes communicate), the average distance between nodes (over all pairs of nodes in the network), and the bisection bandwidth (the worst-case bandwidth between two halves of the network). Naturally, other topologies and metrics exist.</p>

<p>Consider a network topology based on the Koch snowflake. The simplest incarnation of such a topology consists of three nodes and three links in a fully-connected setup. The diameter is 1, average distance is 1 (or 2/3, if you include communications inside a node), etc.</p>

<p>The next incarnation of the topology consists of 12 nodes and 15 links. There are three clusters of three nodes, each cluster fully connected by three links. Additionally, there are the three original nodes, connecting the three clusters using six additional links.</p>

<p>In fact, the number of nodes and links in incarnation $k$ are described by the following recurrence relations:
$$N(1) = 3$$
$$L(1) = 3$$
$$N(k+1) = N(k) + 3L(k)$$
$$L(k+1) = 5L(k)$$
Hopefully, the shape of this topology is clear; incarnation $k$ looks like the $k^{th}$ incarnation of the Koch snowflake. (A key difference is that for what I have in mind, I am actually keeping the link between the 1/3 and 2/3 nodes on successive iterations, so that each "triangle" is fully connected and the above recurrence relations hold).</p>
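
<p>For reference, these recurrences solve to the closed forms (easily checked by induction)
$$L(k) = 3 \cdot 5^{k-1}, \qquad N(k) = 3 + \tfrac{9}{4}\left(5^{k-1}-1\right),$$
so $N(2) = 12$ and $L(2) = 15$ as above, and the link-to-node ratio tends to $4/3$: the network stays sparse at every scale.</p>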

<p>Now for the question:</p>

<blockquote>
 <p>Has this network topology been studied, and if so, what is it called? If it has been studied extensively, are there any references? If not, what are the diameter, average distance and bisection bandwidth of this topology? How do these compare to other kinds of topologies, in terms of cost (links) & benefit?</p>
</blockquote>

<p>I have heard of a "star of stars" topology, which I think is similar, but not identical, to this. If anything, this seems to be more of a "ring of rings", or something along those lines. Naturally, tweaks could be made to the definition of this topology, and more advanced questions could be asked (for instance, we could assign different bandwidths to links introduced at earlier stages, or discuss scheduling or data placement for such a topology). More generally, I am also interested in any good references for exotic or little-studied network topologies (regardless of practicality). </p>

<p>Again, apologies if this demonstrates ignorance of relevant research results, and any insights are appreciated.</p>
 | computer networks network topology | 0 |
151 | How to determine if a database schema violates one of the less known normal forms? | <p>In database normalization, 1NF (no multivalued attributes), 2NF (no non-key attribute depends on a proper subset of a candidate key) and 3NF (no non-key attribute depends transitively on a key) are widely known. The 4NF (no non-trivial multivalued dependencies except on superkeys) is less known, but still reasonably known.</p>

<p>Much less known are the 5NF, 6NF and the intermediates EKNF (Elementary Key normal form), BCNF (Boyce-Codd normal form - 3.5) and DKNF (Domain/Key normal form - 5.5). What exactly are they? Given a database schema, how do I determine whether any table violates one of these much less known normal forms?</p>
 | database theory databases | 0 |
154 | Is there an equivalent of van Emde Boas trees for ropes? | <p>Someone I know is planning on implementing a text editor in the near future, which prompted me to think about what kind of data structures are fast for a text editor. The most used structures are apparently <a href="http://en.wikipedia.org/wiki/Rope_%28computer_science%29">ropes</a> or <a href="http://en.wikipedia.org/wiki/Gap_buffer">gap buffers</a>.</p>

<p><a href="http://en.wikipedia.org/wiki/Van_Emde_Boas_tree">Van Emde Boas trees</a> are just about the fastest priority queues around, if you don't mind an upper bound on the number of items you can put into it and a large initialization cost. My question is whether there exists some data structure that is just as fast as the van Emde Boas tree, but supports text editor operations.</p>

<p>We only need to support up to $m$ characters in our data structure (so if $\log m = 32$, then we support up to 4GB worth of ASCII characters). We are allowed $\sqrt{m}$ time to initialize a new data structure. We'd like to support the following operations:</p>

<ul>
<li>Insert a character at position $i$ in $O(\log \log m)$ (and thereby increasing the position of every subsequent character by 1).</li>
<li>Delete a character at position $i$ in $O(\log \log m)$.</li>
<li>Return the character at position $i$ in $O(\log \log m)$.</li>
</ul>

<p>So, insert(0,'a') followed by insert(0,'b') results in "ba".</p>

<p>Even better would be this:</p>

<ul>
<li>Return a 'pointer' to some index $i$ in $O(\log \log m)$.</li>
<li>Given a 'pointer', return the character at this position in $O(1)$.</li>
<li>Given a 'pointer', remove the character at this position in $O(1)$.</li>
<li>Given a 'pointer', add a character at this position in $O(1)$ and return a pointer to the following position.</li>
<li>(optional) Given a 'pointer', return a 'pointer' to the next/previous character in $O(1)$.</li>
</ul>
 | data structures | 1 |
155 | Ratio of decidable problems | <p>Consider decision problems stated in some “reasonable” formal language. Let's say formulae in higher-order Peano arithmetic with one free variable as a frame of reference, but I'm equally interested in other models of computation: Diophantine equations, word problems for rewriting systems, Turing machines, etc. An answer expressed in any classical formalization would be fine, though if you know how much the choice of formalization influences the answer, that would also be interesting.</p>

<p>Given the length $N$ of the statement of a decision problem, we can define the number $D(N)$ of decidable statements of length $N$ and the number $U(N)$ of undecidable statements of length $N$.</p>

<p>What is known about the relative growth of $U(N)$ and $D(N)$? In other words, if I take a well-formed decision problem at random, what is the probability of its being decidable for a given statement length?</p>

<p><sub> Inspired by <a href="https://cs.stackexchange.com/questions/143/what-factors-make-a-problem-decidable">this question</a> which asks whether “most problems and algorithms [are] decidable”. Well, if you don't filter by interest, are they? </sub> </p>
 | computability undecidability | 0 |
156 | A sufficient and necessary condition about regularity of a language | <blockquote>
 <p>Which of the following statements is correct? </p>
 
 <ol>
 <li>Sufficient and necessary conditions for regularity of a language exist, but have not been discovered yet.</li>
 <li><p>There's no sufficient and necessary condition for regularity of a language.</p></li>
 <li><p>Pumping lemma is a necessary condition for non-regularity of a
 language.</p></li>
 <li>Pumping lemma is a sufficient condition for non-regularity of a
 language.</li>
 </ol>
</blockquote>

<p>I know <a href="http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages#Converse_of_lemma_not_true" rel="noreferrer">#(4) is correct and #(3) is false</a> because "the converse of this statement is not true: a language that satisfies these conditions may still be non-regular", but what can be said about (1) and (2)?</p>
 | formal languages regular languages | 1 |
163 | Minimum number of clues to fully specify any sudoku? | <p>We know from <a href="http://arxiv.org/abs/1201.0749" rel="nofollow">this paper</a> that there does not exist a puzzle that can be uniquely solved starting with 16 or fewer clues, but it implies that there does exist a puzzle that can be uniquely solved from 17 clues. Can all valid sudoku puzzles be specified in 17 clues? If not, what is the minimum number of clues that can completely specify every valid puzzle? More formally, does there exist a valid sudoku puzzle (or, I guess it would be a set of puzzles) that cannot be uniquely solved from only 17 clues? If so, then what is the minimum number of clues, $C$, such that every valid sudoku puzzle can be uniquely specified in $C$ or fewer clues?</p> | sudoku combinatorics | 0 |
 | sudoku combinatorics | 0 |
164 | What are the possible sets of word lengths in a regular language? | <p>Given a language $L$, define the length set of $L$ as the set of lengths of words in $L$:
$$\mathrm{LS}(L) = \{|u| \mid u \in L \}$$</p>

<p>Which sets of integers can be the length set of a regular language?</p>
 | formal languages computability regular languages finite automata | 1 |
165 | Efficient encoding of sudoku puzzles | <p>Specifying any arbitrary 9x9 grid requires giving the position and value of each square. A naïve encoding for this might give 81 (x, y, value) triplets, requiring 4 bits for each x, y, and value (1-9 = 9 values = 4 bits) for a total of 81x4x3 = 972 bits. By numbering each square, one can reduce the positional information to 7 bits, dropping a bit for each square and giving a total of 891 bits. By specifying a predetermined order, one can reduce this more drastically to just the 4 bits for each value, for a total of 324 bits.</p>

<p>However, a sudoku can have missing numbers. This provides the potential for reducing the number of numbers that have to be specified, but may require additional bits for indicating positions. Using our 11-bit encoding of (position, value), we can specify a puzzle with $n$ clues with $11n$ bits, e.g. a minimal (17) puzzle requires 187 bits. The best encoding I've thought of so far is to use one bit for each space to indicate whether it's filled and, if so, the following 4 bits encode the number. This requires $81+4n$ bits, 149 for a minimal puzzle ($n=17$).</p>

<p>Is there a more efficient encoding, preferably without a database of each valid sudoku setup? (Bonus points for addressing a general $N \times N$ puzzle)</p>
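<p>A small Python sketch (my own, purely illustrative) reproducing the bit counts above:</p>

<pre><code>import math

n = 17                                  # clues in a minimal puzzle
naive       = 81 * (4 + 4 + 4)          # (x, y, value) triplets: 972
numbered    = 81 * (7 + 4)              # 7-bit index + 4-bit value: 891
fixed_order = 81 * 4                    # predetermined order: 324
pos_value   = 11 * n                    # 11-bit (position, value) pairs: 187
bitmap      = 81 + 4 * n                # presence bit + 4 bits per clue: 149

print(naive, numbered, fixed_order, pos_value, bitmap)
print(math.log2(6670903752021072936960))   # ~72.5 bits for a grid index
</code></pre>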

<p>It just occurred to me that many puzzles will be a rotation of another, or have a simple permutation of digits. Perhaps that could help reduce the bits required. </p>

<p>According to <a href="http://en.wikipedia.org/wiki/Sudoku#Mathematics_of_Sudoku">Wikipedia</a>, </p>

<blockquote>
 <p>The number of classic 9×9 Sudoku solution grids is 6,670,903,752,021,072,936,960 (sequence A107739 in OEIS), or approximately $6.67×10^{21}$.</p>
</blockquote>

<p>If I did my math right ($\frac{ln{(6,670,903,752,021,072,936,960)}}{ln{(2)}}$), that comes out to 73 (72.498) bits of information for a lookup table.</p>

<p>But:</p>

<blockquote>
 <p>The number of essentially different solutions, when symmetries such as rotation, reflection, permutation and relabelling are taken into account, was shown to be just 5,472,730,538[15] (sequence A109741 in OEIS).</p>
</blockquote>

<p>That gives 33 (32.35) bits, so it's possible that a clever method of indicating which permutation to use could get below the full 73 bits.</p>
 | combinatorics modelling information theory sudoku | 0 |
168 | Efficient compression of unlabeled trees | <p>Consider unlabeled, rooted binary trees. We can <em>compress</em> such trees: whenever there are pointers to subtrees $T$ and $T'$ with $T = T'$ (interpreting $=$ as structural equality), we store (w.l.o.g.) $T$ and replace all pointers to $T'$ with pointers to $T$. See <a href="https://cs.stackexchange.com/a/177/98">uli's answer</a> for an example.</p>
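<p>To make the compression concrete, here is a small Python sketch (my own illustration of structural sharing; it relies on hashing, so it is not meant as an answer to the question below) that counts the distinct subtrees of a tree:</p>

<pre><code># A tree is either None (absent child) or a pair (left, right).
# Structurally equal subtrees receive the same canonical id, so the
# number of ids equals the number of nodes surviving compression.
def compressed_size(tree):
    ids = {}                            # shape -> canonical id

    def visit(t):
        if t is None:
            return 0
        key = (visit(t[0]), visit(t[1]))
        if key not in ids:
            ids[key] = len(ids) + 1     # fresh id for a new shape
        return ids[key]

    visit(tree)
    return len(ids)

leaf = (None, None)
t = ((leaf, leaf), (leaf, leaf))        # both children share one shape
print(compressed_size(t))               # 3: leaf, (leaf, leaf), root
</code></pre>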

<p>Give an algorithm that takes a tree in the above sense as input and computes the (minimal) number of nodes that remain after compression. The algorithm should run in time $\cal{O}(n\log n)$ (in the uniform cost model) with $n$ the number of nodes in the input.</p>

<p>This has been an exam question and I have not been able to come up with a nice solution, nor have I seen one.</p>
 | algorithms data structures trees binary trees | 1 |
178 | Is there a "natural" undecidable language? | <p>Is there any "natural" language which is undecidable?</p>

<p>by "natural" I mean a language defined directly by properties of strings, and not via machines and their equivalent. In other words, if the language looks like
$$ L = \{ \langle M \rangle \mid \ldots \}$$
where $M$ is a TM, DFA (or regular-exp), PDA (or grammar), etc.., then $L$ <strong>is not</strong> natural. However $L = \{xy \ldots \mid x \text{ is a prefix of y} \ldots \}$ is natural.</p>
 | formal languages automata computability undecidability | 1 |
192 | How to come up with the runtime of algorithms? | <p>I've not gone much deep into CS. So, please forgive me if the question is not good or out of scope for this site.</p>

<p>I've seen on many sites and in books the big-O notation, like $O(n)$, which describes the time taken by an algorithm. I've read a few articles about it, but I'm still not able to understand how to calculate it for a given algorithm.</p>
 | algorithms algorithm analysis runtime analysis reference question | 0 |
194 | The time complexity of finding the diameter of a graph | <blockquote>
 <p>What is the time complexity of finding the diameter of a graph
 $G=(V,E)$?</p>
 
 <ul>
 <li>${O}(|V|^2)$</li>
 <li>${O}(|V|^2+|V| \cdot |E|)$</li>
 <li>${O}(|V|^2\cdot |E|)$</li>
 <li>${O}(|V|\cdot |E|^2)$</li>
 </ul>
</blockquote>

<p>The diameter of a graph $G$ is the maximum of the set of shortest path distances between all pairs of vertices in a graph.</p>
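<p>For reference, a minimal Python sketch (my own) of the brute-force approach on an unweighted graph, a BFS from every vertex; how its cost relates to the options above is the point of the exercise:</p>

<pre><code>from collections import deque

# Diameter of a connected, unweighted graph: the largest eccentricity,
# found by running one BFS per source vertex.
def diameter(graph):
    best = 0
    for source in graph:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))  # eccentricity of source
    return best

print(diameter({1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}))  # path graph: 3
</code></pre>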

<p>I have no idea how to approach this; I need a complete analysis of how to solve a problem like this.</p>
 | algorithms time complexity graphs | 1 |
196 | A Case Distinction on Dynamic Programming: Example Needed! | <p>I have been working on dynamic programming for some time. The canonical way to evaluate a dynamic programming recursion is by creating a table of all necessary values and filling it row by row. See for example <a href="http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=11866" rel="nofollow noreferrer">Cormen, Leiserson et al: "Introduction to Algorithms"</a> for an introduction.</p>

<p>I focus on the table-based computation scheme in two dimensions (row-by-row filling) and investigate the structure of cell dependencies, i.e. which cells need to be done before another can be computed. We denote with $\Gamma(\mathbf{i})$ the set of indices of cells the cell $\mathbf{i}$ depends on. Note that $\Gamma$ needs to be cycle-free.</p>

<p>I abstract from the actual function that is computed and concentrate on its recursive structure. Formally, I consider a recurrence $d$ to be <em>dynamic programming</em> if it has the form</p>

<p>$\qquad d(\mathbf{i}) = f(\mathbf{i}, \widetilde{\Gamma}_d(\mathbf{i}))$</p>

<p>with $\mathbf{i} \in [0\dots m] \times [0\dots n]$, $\widetilde{\Gamma}_d(\mathbf{i}) = \{(\mathbf{j},d(\mathbf{j})) \mid \mathbf{j} \in \Gamma_d(\mathbf{i}) \}$ and $f$ some (computable) function that does not use $d$ other than via $\widetilde{\Gamma}_d$.</p>

<p>When restricting the granularity of $\Gamma_d$ to rough areas (to the left, top-left, top, top-right, ... of the current cell) one observes that there are essentially three cases (up to symmetries and rotation) of valid dynamic programming recursions that inform how the table can be filled:</p>

<p><img src="https://i.stack.imgur.com/AhnK7.png" alt="Three cases of dynamic programming cell dependencies"></p>

<p>The red areas denote (overapproximations of) $\Gamma$. Cases one and two admit subsets, case three is the worst case (up to index transformation). Note that it is not strictly required that the <em>whole</em> red areas are covered by $\Gamma$; <em>some</em> cells in every red part of the table are sufficient to paint it red. White areas are explicitly required to <em>not</em> contain any required cells.</p>

<p>Examples for case one are <a href="https://en.wikipedia.org/wiki/Edit_distance" rel="nofollow noreferrer">edit distance</a> and <a href="https://en.wikipedia.org/wiki/Longest_common_subsequence_problem#Code_for_the_dynamic_programming_solution" rel="nofollow noreferrer">longest common subsequence</a>, case two applies to <a href="https://en.wikipedia.org/wiki/Bellman%E2%80%93Ford_algorithm" rel="nofollow noreferrer">Bellman & Ford</a> and <a href="https://en.wikipedia.org/wiki/CYK" rel="nofollow noreferrer">CYK</a>. Less obvious examples include such that work on the diagonals rather than rows (or columns) as they can be rotated to fit the proposed cases; see <a href="https://cs.stackexchange.com/a/211/98">Joe's answer</a> for an example.</p>
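<p>To make case one concrete, here is a minimal Python sketch (my own) of edit distance: cell $(i,j)$ depends only on its left, top and top-left neighbours, so filling the table row by row is valid:</p>

<pre><code># Edit distance, a case-one recursion filled row by row; only the
# previous row and the current row's prefix are ever consulted.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))     # row 0 of the table
    for i, ca in enumerate(a, 1):
        curr = [i]                     # column 0 of row i
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))   # 3
</code></pre>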

<p>I have no (natural) example for case three, though! So my question is: What are examples for case three dynamic programming recursions/problems?</p>
 | algorithms dynamic programming | 0 |
206 | Is an infinite union of context-free languages always context-free? | <p>Let $L_1$, $L_2$, $L_3$, $\dots$ be an infinite sequence of context-free languages, each of
which is defined over a common alphabet $Σ$. Let $L$ be the infinite union of $L_1$, $L_2$, $L_3$, $\dots $;
i.e., $L = L_1 \cup L_2 \cup L_3 \cup \dots $. </p>

<p>Is it always the case that $L$ is a context-free language? </p>
 | formal languages context free closure properties | 1 |
210 | Why polynomial time is called "efficient"? | <p>Why, in computer science, is any complexity that is at most polynomial considered efficient?</p>

<p>For any practical application<sup>(a)</sup>, algorithms with complexity $n^{\log n}$ are way faster than algorithms that run in time, say, $n^{80}$, but the first is considered inefficient while the latter is efficient. Where's the logic?!</p>

<p><sup>(a) Assume, for instance, the number of atoms in the universe is approximately $10^{80}$.</sup></p>
 | algorithms complexity theory terminology efficiency | 1 |
219 | Implementing the GSAT algorithm - How to select which literal to flip? | <p>The GSAT algorithm is, for the most part, straightforward: you get a formula in conjunctive normal form and flip variables until you find an assignment that satisfies the formula or you reach the max_tries/max_flips limit and find no solution.</p>

<p>I'm implementing the following algorithm:</p>

<pre><code>procedure GSAT(A, Max_Tries, Max_Flips)
    A: a CNF formula
    for i := 1 to Max_Tries do
        S <- instantiation of variables
        for j := 1 to Max_Flips do
            if A satisfiable by S then
                return S
            endif
            V <- the variable whose flip yields the largest increase in the number of satisfied clauses;
            S <- S with V flipped;
        endfor
    endfor
    return the best instantiation found
end GSAT
</code></pre>

<p>I'm having trouble interpreting the following line: </p>

<pre><code>V <- the variable whose flip yields the largest increase in the number of satisfied clauses;
</code></pre>

<p>Isn't the maximum number of satisfied clauses what we're looking for? It seems to me that we're trying to use the solution or approximations to it to find the solution. </p>
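<p>For concreteness, here is how I currently read that line, as a minimal Python sketch (mine, so possibly not what the authors intended):</p>

<pre><code># One GSAT selection step: pick the variable whose flip maximises the
# number of satisfied clauses. A clause is a list of ints; literal v
# means "variable v is true", -v means "variable v is false".
def satisfied_count(formula, assignment):
    return sum(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in formula)

def best_flip(formula, assignment):
    def score(v):
        assignment[v] = not assignment[v]   # tentatively flip v
        s = satisfied_count(formula, assignment)
        assignment[v] = not assignment[v]   # undo the flip
        return s
    return max(assignment, key=score)

formula = [[1, -2], [2, 3], [-1, -3]]
assignment = {1: False, 2: False, 3: False}
v = best_flip(formula, assignment)
assignment[v] = not assignment[v]
print(v, satisfied_count(formula, assignment))   # 3 3
</code></pre>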

<p>I've thought of some ways to do this, but it'd be good to hear other points of view (the assumption being that a variable is flipped once it is selected):</p>

<ul>
<li>Generate a state space with all possible flips and search the space for a literal that results in the best approximation to the goal state.</li>
<li>Randomly select the variable that I will flip starting with the literals that are more common.</li>
<li>Pick a random literal.</li>
</ul>
 | algorithms satisfiability 3 sat | 1 |
221 | Identifying events related to dates in a paragraph | <p>Is there an <s><em>algorithmic</em></s> approach to identify that dates given in a paragraph correlate to particular events (phrases) in the paragraph?</p>
<p>Example, consider the following paragraph:</p>
<blockquote>
<p>In June 1970, the great leader took the oath. But it was only after May 1972, post the death of the Minister of State, that he took over the reins of the country. While he enjoyed popular support until Mid-1980, his influence began to fall thereafter.</p>
</blockquote>
<p>Is there an algorithm (deterministic or stochastic)# that can generate a 2-tuple (date, event), where the <em>event</em> is implied, by the paragraph, to have occurred on the <em>date</em>? In the above case:</p>
<ul>
<li><p>(June 1970, great leader took oath)</p>
</li>
<li><p>(May 1972, took over the reins)</p>
<p>or better yet</p>
</li>
<li><p>(May 1972, <em>the great leader</em> took over the reins)</p>
</li>
<li><p>(1980, fall in influence)</p>
</li>
</ul>
<hr />
<p>#Later addition</p>
 | algorithms data mining natural language processing | 1 |
222 | Easy reduction from 3SAT to Hamiltonian path problem | <p>There is a reduction in Sipser's book "Introduction to the theory of computation" on page 286 from 3SAT to the Hamiltonian path problem. </p>

<blockquote>
 <p>Is there a simpler reduction?</p>
</blockquote>

<p>By simpler I mean a reduction that would be easier to understand (for students).</p>

<blockquote>
 <p>Is there a reduction that uses a linear number of variables?</p>
</blockquote>

<p>The reduction in Sipser uses $O(kn)$ variables where $k$ is the number of clauses and $n$ is the number of variables. In other words, it is possible for the reduction to blow the size from $s$ to $O(s^2)$. Is there a simple reduction where the size of the output of the reduction is linear in the size of its input? </p>

<p>If it is not possible, is there a reason? Would that imply an unknown result in complexity/algorithms?</p>
 | complexity theory np hard | 1 |
223 | Stability for couples in the Stable Matching Problem | <p>In the <a href="http://en.wikipedia.org/wiki/Stable_marriage_problem" rel="nofollow">Stable Matching Problem</a>, it is stated that there can exist cases where the list of men $m$ can be content with their matches, yet the list of women $f$ cannot, when the algorithm is run with the men proposing.</p>

<p>From what I read, an unstable match occurs when $m$ and $f$ prefer each other to their current partners.</p>

<p>I am a little lost in the definition of Stable Matching for this case. I'm going over the slides <a href="http://www.cs.princeton.edu/~wayne/kleinberg-tardos/01stable-matching-2x2.pdf" rel="nofollow">here</a>.</p>

<blockquote>
 <p>Is a pair $(m, f)$ stable as long as the men are content, even though the females' preferences have not been matched?</p>
</blockquote>
 | combinatorics | 1 |
224 | Are two-level schedulers only useful to manage swapping? | <p><a href="http://en.wikipedia.org/wiki/Two-level_scheduling">Two-level scheduling</a> is useful when a system is running more processes than fit in RAM: a lower-level scheduler switches between resident processes, and a higher-level scheduler swaps groups of processes in and out.</p>

<p>I find no other mention of two-level scheduling in Andrew Tanenbaum's <em>Operating Systems: Design and Implementation</em>, 1st ed. Exercise 2.22 asks why two-level scheduling might be used; I don't know whether it's there as a reading comprehension check or there are other reasons not prominently mentioned in the text.</p>

<p>Is two-level scheduling useful to manage other resource contentions, besides memory?</p>
 | operating systems process scheduling | 0 |
226 | Round-robin scheduling: allow listing a process multiple times? | <p>In a round-robin scheduler, adding a process multiple times to the process list is a cheap way to give it higher priority.</p>

<p>I wonder how practical an approach this might be. What benefit does it have over other techniques such as giving the process a longer time slice (benefit: less switching time) or maintaining a separate list of high-priority processes? In particular, how does listing a process multiple times influence fairness and reactivity?</p>

<p>(From exercise 2.16 in Andrew Tanenbaum's <em>Operating Systems: Design and Implementation</em> 1st ed.)</p>
 | operating systems process scheduling | 1 |
227 | Why store self and parent links (. and ..) in a directory entry? | <p>Consider a filesystem targeted at some embedded devices that does little more than store files in a hierarchical directory structure. This filesystem lacks many of the operations you may be used to in systems such as unix and Windows (for example, its access permissions are completely different and not tied to metadata stored in directories). This filesystem does not allow any kind of hard link or soft link, so every file has a unique name in a strict tree structure.</p>

<p>Is there any benefit to storing a link to the directory itself and to its parent in the on-disk data structure that represents a directory?</p>

<p>Most unix filesystems have <code>.</code> and <code>..</code> entries on disk. I wonder why they don't handle those at the VFS (generic filesystem driver) layer. Is this a historical artifact? Is there a good reason, and if so, which precisely, so I can determine whether it's relevant to my embedded system?</p>
 | operating systems filesystems | 0 |
231 | Problems Implementing Closures in Non-functional Settings | <p>In programming languages, closures are a popular and often desired feature. <a href="https://en.wikipedia.org/wiki/Closure_%28computer_science%29">Wikipedia</a> says (emphasis mine):</p>

<blockquote>
 <p>In computer science, a closure (...) is a <strong>function together with a referencing environment</strong> for the non-local variables of that function. A closure allows a function to access variables outside its immediate lexical scope.</p>
</blockquote>

<p>So a closure is essentially an (anonymous?) function value which can use variables outside of its own scope. In my experience, this means it can access variables that are in scope at its definition point.</p>
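<p>For instance, in Python (a minimal example of my own), the inner function below captures <code>n</code> from the enclosing scope and keeps it alive after the outer function has returned:</p>

<pre><code># A closure: the returned function carries its defining environment.
def make_adder(n):
    def add(x):
        return x + n   # n is non-local: captured from make_adder's scope
    return add

add5 = make_adder(5)
print(add5(2))         # 7; n = 5 survives although make_adder has returned
</code></pre>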

<p>In practice, the concept seems to be diverging, though, at least outside of functional programming. Different languages implement different semantics; there even seem to be wars of opinion over what closures should be. Many programmers do not seem to know what closures are, viewing them as little more than anonymous functions.</p>

<p>Also, there seem to exist major hurdles when implementing closures. Most notably, Java 7 was supposed to include them, but the feature was pushed back to a future release.</p>

<p>Why are closures so hard (to understand and) to realise? This is too broad and vague a question, so let me focus it more with these interconnected questions: </p>

<ul>
<li>Are there problems with expressing closures in common semantic formalisms (small-step, big-step, ...)? </li>
<li>Are existing type systems not suited for closures and can not be extended easily?</li>
<li>Is it problematic to bring closures in line with a traditional, stack-based procedure translation?</li>
</ul>

<p>Note that the question relates mostly to procedural, object-oriented and scripting languages in general. As far as I know, functional languages do not have any problems.</p>
 | programming languages semantics | 1 |
234 | Parsing arbitrary context-free grammars, mostly short snippets | <p>I want to parse user-defined domain specific languages. These languages are typically close to mathematical notations (I am not parsing a natural language). Users define their DSL in a BNF notation, like this:</p>

<pre><code>expr ::= LiteralInteger
       | ( expr )
       | expr + expr
       | expr * expr
</code></pre>

<p>Input like <code>1 + ( 2 * 3 )</code> must be accepted, while input like <code>1 +</code> must be rejected as incorrect, and input like <code>1 + 2 * 3</code> must be rejected as ambiguous.</p>

<p>A central difficulty here is coping with ambiguous grammars in a user-friendly way. Restricting the grammar to be unambiguous is not an option: that's the way the language is — the idea is that writers prefer to omit parentheses when they are not necessary to avoid ambiguity. As long as an expression isn't ambiguous, I need to parse it, and if it isn't, I need to reject it.</p>

<p>My parser must work on any context-free grammar, even ambiguous ones, and must accept all unambiguous input. I need the parse tree for all accepted input. For invalid or ambiguous input, I ideally want good error messages, but to start with I'll take what I can get.</p>

<p>I will typically invoke the parser on relatively short inputs, with the occasional longer input. So the asymptotically faster algorithm may not be the best choice. I would like to optimize for a distribution of around 80% inputs less than 20 symbols long, 19% between 20 and 50 symbols, and 1% rare longer inputs. Speed for invalid inputs is not a major concern. Furthermore, I expect a modification of the DSL around every 1000 to 100000 inputs; I can spend a couple of seconds preprocessing my grammar, not a couple of minutes.</p>

<p>What parsing algorithm(s) should I investigate, given my typical input sizes? Should error reporting be a factor in my selection, or should I concentrate on parsing unambiguous inputs and possibly run a completely separate, slower parser to provide error feedback?</p>

<p>(In the project where I needed that (a while back), I used <a href="http://en.wikipedia.org/wiki/CYK_algorithm">CYK</a>, which wasn't too hard to implement and worked adequately for my input sizes but didn't produce very nice errors.)</p>
 | formal languages parsers compilers | 1 |
245 | Influence of the dimension of cellular automata on complexity classes | <p>Let's take as an example the 3d → 2d reduction: What's the cost of simulating a 3d cellular automaton by a 2d cellular automaton?</p>

<p>Here is a bunch of more specific questions:</p>

<ol>
<li><p>What kind of algorithms will have their time complexity changed, by how much?</p></li>
<li><p>What would be the basic idea for the encoding; how is a 3d grid efficiently (or not efficiently…) mapped to a 2d grid? (The challenge seems to be achieving communication between two cells that were originally neighbors on the 3d grid, but are not neighbors anymore on the 2d grid.) </p></li>
<li><p>In particular, I'm interested in the complexity drift for exponential complexity algorithms (which I guess remains exponential whatever the dimension; is that the case?)</p></li>
</ol>

<p>Note: I'm not interested in low complexity classes for which the chosen I/O method has an influence on complexities. (Maybe the best is to assume that the I/O method is dimensionless: done locally on one specific cell during a variable number of time steps.) </p>

<hr>

<p><em>Some context: I'm interested in parallel local graph rewriting, but those graphs are closer to 3d (or maybe ωd…) grids than to 2d grids, and I'd like to know what to expect of a hardware implementation on a 2-dimensional silicon chip.</em></p>
 | complexity theory time complexity cellular automata | 1 |
246 | Reflection on Concurrency | <p><a href="http://en.wikipedia.org/wiki/Reflection_%28computer_programming%29" rel="nofollow">Reflection</a> is a common mechanism for accessing and changing the structure of a program at run-time, found in many dynamic programming languages such as Smalltalk, Ruby and Python, and in impoverished form in Java and (hence) Scala. Functional languages such as LISP and Scheme also support a good reflection framework. </p>

<p>Modern languages support concurrency, either by adding threads on top of the existing language constructs, or by designing from the ground up with concurrency in mind.</p>

<p>My question is: </p>

<blockquote>
 <p>What models of reflection for the concurrency aspects in concurrent languages (multi-threaded, actor-based, active-object-based) exist? </p>
</blockquote>
 | programming languages semantics concurrency reflection | 1 |
248 | Analyzing a modified version of the card-game "War" | <p>A simple game usually played by children, the game of War is played by two people using a standard deck of 52 playing cards. Initially, the deck is shuffled and all cards are dealt to the two players, so that each has 26 random cards in a random order. We will assume that players are allowed to examine (but not change) both decks, so that each player knows the cards and orders of cards in both decks. This is typically not done in practice, but it would not change anything about how the game is played, and it helps keep this version of the question completely deterministic.</p>

<p>Then, players reveal the top-most cards from their respective decks. The player who reveals the larger card (according to the usual order: 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack, Queen, King, Ace) wins the round, placing first his card (the high card) at the bottom of his deck, and then his opponent's card (the low card) at the bottom of the deck (typically, the order of this isn't enforced, but to keep the first version of this question deterministic, such an ordering will be enforced).</p>

<p>In the event of a tie, each player reveals four additional cards from the top of their decks. If the fourth card shown by one player is higher than the fourth card shown by another player, the player with the higher fourth card wins all cards played during the tie-breaker, in which case the winner's cards are first placed at the bottom of the winner's deck (in first-in, first-out order; in other words, older cards are placed at the bottom first), followed by the loser's cards (in the same order).</p>

<p>In the event of subsequent ties, the process is repeated until a winner of the tie is determined. If one player runs out of cards and cannot continue breaking the tie, the player who still has cards is declared the winner. If both players run out of cards at the same time, the game is declared a tie.</p>

<p>Rounds are played until one player runs out of cards (i.e., has no more cards in his deck), at which point the player who still has cards is declared the winner.</p>

<p>As the game has been described so far, neither skill nor luck is involved in determining the outcome. Since there are a finite number of permutations of 52 cards, there are a finite number of ways in which the decks may be initially dealt, and it follows that (since the only state information in the game is the current state of both players' decks) the outcome of each game configuration can be decided a priori. Certainly, it is possible to win the game of War, and by the same token, to lose it. We also leave open the possibility that a game of War might result in a Tie or in an infinite loop; for the completely deterministic version described above, such may or may not be the case.</p>

<p>Several variations of the game exist which attempt to make it more interesting (and no, not all of them involve making it into a drinking game). One way which I have thought of to make the game more interesting is to allow players to declare automatic "trumps" at certain rounds. At each round, either player (or both players) may declare "trump". If one player declares "trump", that player wins the round regardless of the cards being played. If both players declare "trump", then the round is treated as a tie, and play continues accordingly.</p>

<p>One can imagine a variety of rules limiting players' ability to trump (unlimited trumping would always result in a Tie game, as players would trump every turn). I propose two versions (just off the top of my head; more interesting versions along these lines are probably possible) of War based on this idea but using different trump limiting mechanisms:</p>

<ol>
<li>Frequency-War: Players may only trump if they have not trumped in the previous $k$ rounds.</li>
<li>Revenge-War: Players may only trump if they have not won a round in the previous $k$ rounds.</li>
</ol>

<p>Now for the questions, which apply to each of the versions described above:</p>

<blockquote>
 <ol>
 <li>Is there a strategy such that, for some set of possible initial game configurations, the player using it always wins (strongly winning strategy)? If so, what is this strategy? If not, why not?</li>
 <li>Is there a strategy such that, for some set of possible initial game configurations, the player using it can always win or force a tie (winning strategy)? If so, what is this strategy? If not, why not?</li>
 <li>Are there initial game configurations such that there is no winning strategy (i.e., a player using any fixed strategy $S$ can always be defeated by a player using fixed strategy $S'$)? If so, what are they, and why?</li>
 </ol>
</blockquote>

<p>To be clear, I am thinking of a "strategy" as a fixed algorithm which determines at what rounds the player using the strategy should trump. For instance, the algorithm "trump whenever you can" is a strategy (a heuristic one). Another way of putting what I'm asking is this:</p>

<blockquote>
 <p>Are there any good (or provably optimal) heuristics for playing these games?</p>
</blockquote>

<p>References to analyses of such games are appreciated (I am unaware of any analysis of this version of War, or of essentially equivalent games). Results for any $k$ are interesting and appreciated (note that, in both cases, $k=0$ leads to unlimited trumping, which I have already discussed).</p>
 | algorithms optimization | 1 |
249 | (When) is hash table lookup O(1)? | <p>It is often said that hash table lookup operates in constant time: you compute the hash value, which gives you an index for an array lookup. Yet this ignores collisions; in the worst case, every item happens to land in the same bucket and the lookup time becomes linear ($\Theta(n)$).</p>
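<p>(To fix terminology, here is a toy Python sketch of my own of a chained table, whose lookup cost is exactly the length of the probed bucket:)</p>

<pre><code># Chained hash table: lookup scans one bucket, so it costs O(1) on
# average under a good hash, but Theta(n) if every key collides.
class ChainedTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def insert(self, key, value):
        self.buckets[hash(key) % len(self.buckets)].append((key, value))

    def lookup(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:        # linear in the bucket length
            if k == key:
                return v
        raise KeyError(key)

t = ChainedTable()
t.insert("a", 1)
print(t.lookup("a"))   # 1
</code></pre>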

<p>Are there conditions on the data that can make hash table lookup truly $O(1)$? Is that only on average, or can a hash table have $O(1)$ worst case lookup?</p>

<p><em>Note: I'm coming from a programmer's perspective here; when I store data in a hash table, it's almost always strings or some composite data structures, and the data changes during the lifetime of the hash table. So while I appreciate answers about perfect hashes, they're cute but anecdotal and not practical from my point of view.</em></p>

<p>P.S. Follow-up: <a href="https://cs.stackexchange.com/questions/477/for-what-kind-of-data-are-hash-table-operations-o1">For what kind of data are hash table operations O(1)?</a></p>
 | algorithm analysis data structures runtime analysis hash tables | 0 |
256 | Are today's massive parallel processing units able to run cellular automata efficiently? | <p>I wonder whether the massively parallel computation units provided in graphic cards nowadays (one that is programmable in <a href="http://en.wikipedia.org/wiki/OpenCL">OpenCL</a>, for example) are good enough to simulate 1D cellular automata (or maybe 2D cellular automata?) efficiently.</p>

<p>If we choose whatever finite grid would fit inside the memory of the chip, can we expect one transition of a cellular automaton defined on this grid to be computed in (quasi)constant time?</p>

<p>I assume 2D cellular automata would require more bandwidth for communication between the different parts of the chips than 1D automata.</p>

<p>I'd also be interested by the same question in the case of FPGA programming or custom chips.</p>
 | computer architecture parallel computing cellular automata | 0 |
257 | How is a JIT compiler different from an ordinary compiler? | <p>There's been a lot of hype about JIT compilers for languages like Java, Ruby, and Python. How are JIT compilers different from C/C++ compilers, and why are the compilers written for Java, Ruby or Python called JIT compilers, while C/C++ compilers are just called compilers?</p>
 | compilers | 1 |
258 | Is it possible to solve the halting problem if you have a constrained or a predictable input? | <p>The halting problem cannot be solved in the general case. Is it possible to come up with defined rules that restrict the allowed inputs so that the halting problem can be solved for that special case?</p>

<p>For example, it seems likely that for a language that does not allow loops, it would be very easy to tell whether a program halts or not.</p>

<p>The problem I'm trying to solve right now is that I'm trying to make a script checker that checks the validity of a program. Can the halting problem be solved if I know exactly what to expect from script writers, meaning very predictable inputs? If this cannot be solved exactly, what are some good approximation techniques to solve this?</p>
 | computability undecidability software engineering formal methods | 0 |
261 | How to determine likely connections in a social network? | <p>I am curious in determining an approach to tackling a "suggested friends" algorithm.</p>

<p><a href="http://facebook.com" rel="nofollow noreferrer">Facebook</a> has a feature in which it will recommended individuals to you which it thinks you may be acquainted with. These users normally (excluding the edge cases in <a href="http://www.facebook.com/help/?faq=154758887925123#How-do-I-suggest-a-friend-to-someone?" rel="nofollow noreferrer">which a user specifically recommends a friend</a>) have a highly similar network to oneself. That is, the number of friends in common are high. I assume Twitter follows a similar path for their "Who To Follow" mechanism.</p>

<p><a href="https://stackoverflow.com/a/6851193/321505">Stephen Doyle (Igy)</a>, a Facebook employee suggested that the related newsfeed that uses <a href="http://www.quora.com/How-does-Facebook-calculate-weight-for-edges-in-the-EdgeRank-formula" rel="nofollow noreferrer">EdgeRank formula</a> which seems to indicate that more is to valued than friends such as appearance is similar posts. Another user suggested the Google Rank system. </p>

<p>Facebook states their News Feed Optimization as $\sum u_{e}w_{e}d_{e}$ where</p>

<p>$u_{e}$ = affinity score between viewing user and edge creator<br>
$w_{e}$ = weight for this edge (create, comment, like, tag, etc)<br>
$d_{e}$ = time decay factor based on how long ago the edge was created </p>

<p>Summing these items is supposed to give an object's rank, which I assume (as Igy hinted) means something in a similar format is used for suggested friends.</p>

<p>So I'm guessing that this is the way in which connections for all types are done in general via a rank system?</p>
 | algorithms machine learning modelling social networks | 1 |
265 | How to prove that a language is not context-free? | <p>We learned about the class of context-free languages $\mathrm{CFL}$. It is characterised by both <a href="https://en.wikipedia.org/wiki/Context-free_grammar">context-free grammars</a> and <a href="https://en.wikipedia.org/wiki/Pushdown_automata">pushdown automata</a> so it is easy to show that a given language is context-free.</p>

<p>How do I show the opposite, though? My TA has been adamant that in order to do so, we would have to show for <em>all</em> grammars (or automata) that they can not describe the language at hand. This seems like a big task!</p>

<p>I have read about the pumping lemma, but it looks really complicated.</p>
 | formal languages context free proof techniques reference question | 1 |
266 | Why are the total functions not enumerable? | <p>We learned about the concept of enumerations of functions. In practice, they correspond to programming languages.</p>

<p>In a passing remark, the professor mentioned that the class of all total functions (i.e. the functions that always terminate for every input) is <em>not</em> enumerable. That would mean that we can not devise a programming language that allows us to write all total functions but no others---which would be nice to have!</p>

<p>So how is it that we (apparently) have to accept the potential for non-termination if we want decent computational power?</p>
 | computability semi decidability enumeration | 1 |
269 | Why would anyone want CISC? | <p>In our computer systems lecture we were introduced to the MIPS processor. It was (re)developed over the course of the term and has in fact been quite easy to understand. It uses a <a href="https://en.wikipedia.org/wiki/Reduced_instruction_set_computing">RISC</a> design, that is its elementary commands are regularly encoded and there are only few of them in order to keep the wires simple.</p>

<p>It was mentioned that <a href="https://en.wikipedia.org/wiki/Complex_instruction_set_computing">CISC</a> follows a different philosophy. I looked briefly at the <a href="https://en.wikipedia.org/wiki/X86_instruction_listings">x86 instruction set</a> and was shocked. I cannot imagine how anyone would want to build a processor that uses so complex a command set!</p>

<p>So I figure there have to be good arguments why large portions of the processor market use CISC architectures. What are they? </p>
 | computer architecture | 1 |
270 | Hash tables versus binary trees | <p>When implementing a dictionary ('I want to look up customer data by their customer IDs'), the typical data structures used are hash tables and binary search trees. I know for instance that the C++ STL library implements dictionaries (they call them maps) using (balanced) binary search trees, and the .NET framework uses hash tables under the hood.</p>

<blockquote>
 <p>What are the advantages and disadvantages of these data structures? Is there some other option that is reasonable in certain situations?</p>
</blockquote>

<p>Note that I'm not particularly interested in cases where the keys have a strong underlying structure, say, they are all integers between 1 and n or something.</p>
 | algorithms data structures binary trees hash tables | 1 |
271 | Is there an abstract machine that can capture power consumption? | <p>When reporting algorithmic complexity of an algorithm, one assumes the underlying computations are performed on some abstract machine (e.g. RAM) that approximates a modern CPU. Such models allow us to report time and space complexity of algorithms. Now, with the spread out of <a href="http://en.wikipedia.org/wiki/GPGPU">GPGPUs</a>, one wonders whether there are well known models where one can take into account power consumption as well.</p>

<p>GPUs are well known to consume a considerable amount of power, and certain instructions fall into different categories of power consumption based on their complexity and location on the sophisticated chip. Hence instructions, from an energy point of view, are not of unit (or even fixed) cost. A trivial extension would be assigning weights to operation cost, but I'm looking for a powerful model where an operation/instruction might cost <em>non-constant</em> units of energy, e.g. a polynomial amount (or something even more complex, e.g. a function of the time elapsed since the start of the algorithm; or taking into account the probability of failure of the cooling system, which will heat up the chips and slow down the clock frequency, etc.)</p>

<p>Are there such models where non-trivial costs and faults can be incorporated?</p>
 | complexity theory computer architecture power consumption machine models | 1 |
273 | Decidable non-context-sensitive languages | <p>It is arguable that most languages created to describe everyday problems are context-sensitive. On the other hand, it is possible, and not hard, to find languages that are not recursive or even not recursively enumerable.</p>

<p>Between these two types are the recursive non-context-sensitive languages. Wikipedia gives one example <a href="http://en.wikipedia.org/wiki/Context-sensitive_language">here</a>:</p>

<blockquote>An example of recursive language that is not context-sensitive is any recursive language whose decision is an EXPSPACE-hard problem, say, the set of pairs of equivalent regular expressions with exponentiation.</blockquote>

<p>So the question: what other problems exist that are decidable yet not context-sensitive? Is this class of problems the same as the decidable EXPSPACE-hard problems?</p>
 | formal languages complexity theory formal grammars | 1 |
285 | Decidability of prefix language | <p>At the midterm there was a variant of the following question:</p>

<blockquote>
 <p>For a decidable $L$ define $$\text{Pref}(L) = \{ x \mid \exists y \text{ s.t. } xy \in L\}$$
 Show that $\text{Pref}(L)$ is not necessarily decidable.</p>
</blockquote>

<p>But if I choose $L=\Sigma^*$ then I think $\text{Pref}(L)$ is also $\Sigma^*$, thus decidable. Also $L=\emptyset$ gives the same result. And since $L$ must be decidable, I cannot pick the halting problem or the like.</p>

<ol>
<li>How can I find $L$ such the $\text{Pref}(L)$ is not decidable?</li>
<li>Which conditions on $L$ will make $\text{Pref}(L)$ decidable, and which will make it undecidable?</li>
</ol>
 | computability undecidability | 1 |
286 | Proving the security of Nisan-Wigderson pseudo-random number generator | <p>Let <span class="math-container">$\cal{S}=\{S_i\}_{1\leq i\leq n}$</span> be a partial <span class="math-container">$(m,k)$</span>-design and <span class="math-container">$f: \{0,1\}^m \to \{0,1\}$</span> be a Boolean function. The Nisan-Wigderson generator <span class="math-container">$G_f: \{0,1\}^l \to \{0,1\}^n$</span> is defined as follows:</p>
<p><span class="math-container">$$G_f(x) = (f(x|_{S_1}) , \ldots, f(x|_{S_n}) )$$</span></p>
<p>To compute the <span class="math-container">$i$</span>th bit of <span class="math-container">$G_f$</span> we take the bits of <span class="math-container">$x$</span> with indexes in <span class="math-container">$S_i$</span> and then apply <span class="math-container">$f$</span> to them.</p>
<blockquote>
<p>Assume that <span class="math-container">$f$</span> is <span class="math-container">$\frac{1}{n^c}$</span>-hard for circuits of size <span class="math-container">$n^c$</span> where <span class="math-container">$c$</span> is a constant.
How can we prove that <span class="math-container">$G_f$</span> is a <span class="math-container">$(\frac{n^c}{2}, \frac{2}{n^c})$</span>-secure pseudo-random number generator?</p>
</blockquote>
<h3>Definitions:</h3>
<p>A partial <span class="math-container">$(m,k)$</span>-design is a collection of subsets <span class="math-container">$S_1, \ldots, S_n \subseteq [l] = \{1, \ldots, l\}$</span> such that</p>
<ul>
<li>for all <span class="math-container">$i$</span>: <span class="math-container">$|S_i|=m$</span>, and</li>
<li>for all <span class="math-container">$i \neq j$</span>: <span class="math-container">$|S_i \cap S_j| \leq k$</span>.</li>
</ul>
<p>A function <span class="math-container">$f$</span> is <span class="math-container">$\epsilon$</span>-hard for circuits of size <span class="math-container">$s$</span> iff no circuit of size <span class="math-container">$s$</span> can predict <span class="math-container">$f$</span> with probability <span class="math-container">$\epsilon$</span> better than a coin toss.</p>
<p>A function <span class="math-container">$G:\{0,1\}^l \to \{0,1\}^n$</span> is an <span class="math-container">$(s, \epsilon)$</span>-secure pseudo-random number generator iff no circuit of size <span class="math-container">$s$</span> can distinguish between a random number and a number generated by <span class="math-container">$G_f$</span> with probability better than <span class="math-container">$\epsilon$</span>.</p>
<p>We use <span class="math-container">$x|_A$</span> for the string composed of <span class="math-container">$x$</span>'s bits with indexes in <span class="math-container">$A$</span>.</p>
 | cryptography pseudo random generators | 1 |
288 | How is the loop invariant obtained in this square root bound finding algorithm? | <p><em>Originally on <a href="https://math.stackexchange.com/questions/74453/how-is-the-loop-invarient-obtained-in-this-square-root-bound-finding-algorithm">math.SE</a> but unanswered there.</em></p>

<p>Consider the following algorithm.</p>

<pre><code>u := 0                  // u*u <= n holds here and throughout
v := n+1;               // n < v*v holds here and throughout
while ( (u + 1) is not equal to v) do
    x := (u + v) / 2;   // midpoint, integer division
    if ( x * x <= n)
        u := x;         // x is still a valid lower end
    else
        v := x;         // x overshoots: it becomes the new upper end
    end_if
end_while
</code></pre>

<p>where u, v, and n are integers and the division operation is integer division. </p>

<ul>
<li>Explain what is computed by the algorithm. </li>
<li>Using your answer to part I as the post-condition for the algorithm, establish a loop invariant and show that 
the algorithm terminates and is correct.</li>
</ul>

<p>In class, the post-condition was found to be $0 \leq u^2 \leq n < (u + 1)^2$ and the 
Invariant is $0 \leq u^2 \leq n < v^2, u + 1 \leq v$. I don't really understand how the post-condition and invariant were obtained. I figured the post-condition was $u + 1 = v$... which is clearly not the case. So I am wondering how the post-condition and invariant were obtained. I'm also wondering how the pre-condition can be obtained by using the post-condition.</p>
 | algorithms loop invariants correctness proof | 0 |
289 | Universality of the Toffoli gate | <p>Regarding the quantum <a href="https://en.wikipedia.org/wiki/Toffoli_gate">Toffoli gate</a>:</p>

<ol>
<li>is it <em>classicaly</em> universal, and if so, why?</li>
<li>is it <em>quantumly</em> universal, and why?</li>
</ol>
 | quantum computing circuits turing completeness | 1 |
290 | Understanding $\text{handle}$ in parsing problem | <p><em>Originally https://math.stackexchange.com/questions/22614/help-understand-texthandle-in-parsing-problem but unanswered there</em></p>

<p>The BNF is defined as followed:</p>

<pre><code>S -> aAb | bBA 
A -> ab | aAB
B -> bB | b
</code></pre>

<p>The sentence is:</p>

<pre><code>aaAbBb
</code></pre>

<p>And this is the parse tree:
<img src="https://i.stack.imgur.com/gpdeo.png" alt="enter image description here"></p>

<p><strong>Phrases:</strong> aaAbBb, aAbB, bB<br>
<strong>Simple Phrases:</strong> bB<br>
<strong>Handle:</strong> ? </p>

<p>From the book, <code>handle</code> is defined as follows:
<code>B</code> is the handle of the right sentential form $y = aBw$ if and only if:<br>
$S \to_{rm}^{*} aAw \to_{rm} aBw$</p>

<p>So in my case, what's the handle? Any idea? </p>
 | formal languages compilers parsers | 0 |
294 | Strategies for becoming unstuck in understanding TCS | <p>I am a graduate student taking a course in theory of computation and I have serious trouble producing content once I'm asked to. I'm able to follow the textbook (Introduction to the Theory of Computation by Michael Sipser) and lectures; however when asked to prove something or come up with a formal description of a specific TM, I just choke. </p>

<p>What can I do in such situations? I guess my issue is with fully comprehending abstract concepts to the point I can actually use them. Is there a structured way to approach a new, abstract concept and eventually build intuition?</p>
 | computability education intuition | 0 |
298 | Graph searching: Breadth-first vs. depth-first | <p>When searching graphs, there are two easy algorithms: <strong>breadth-first</strong> and <strong>depth-first</strong> (usually done by adding all adjacent graph nodes to a queue (breadth-first) or stack (depth-first)).</p>
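<p>For concreteness, a minimal Python sketch (my own) of the shared skeleton; swapping <code>popleft</code> for <code>pop</code> turns the queue into a stack (whether that yields exactly depth-first order is a subtlety of its own):</p>

<pre><code>from collections import deque

# Generic graph search: FIFO discipline gives breadth-first order,
# LIFO discipline gives a depth-first-like order.
def search(graph, start, use_stack=False):
    frontier, seen, order = deque([start]), {start}, []
    while frontier:
        u = frontier.pop() if use_stack else frontier.popleft()
        order.append(u)
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return order

g = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(search(g, 1))                   # [1, 2, 3, 4] (breadth-first)
print(search(g, 1, use_stack=True))   # [1, 3, 4, 2] (depth-first-like)
</code></pre>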

<p>Now, are there any advantages of one over another?</p>

<p>The ones I could think of:</p>

<ul>
<li>If you expect your data to be pretty far down inside the graph, <em>depth-first</em> might find it earlier, as you are going down into the deeper parts of the graph very fast.</li>
<li>Conversely, if you expect your data to be pretty far up in the graph, <em>breadth-first</em> might give the result earlier.</li>
</ul>

<p>Is there anything I have missed or does it mostly come down to personal preference?</p>
 | algorithms graphs search algorithms graph traversal | 0 |
302 | ML function of type 'a -> 'b | <p>Our professor asked us to think of a function in OCaml that has the type</p>

<pre><code>'a -> 'b
</code></pre>

<p>i.e. a function of one argument that could be anything, and that can return a different anything.</p>

<p>I thought of using <code>raise</code> in a function that ignores its argument:</p>

<pre><code>let f x = raise Exit
</code></pre>

<p>But the professor said there was a solution that doesn't require any function in the standard library. I'm confused: how can you make a <code>'b</code> if you don't have one in the first place?</p>

<p><sub> I'm asking here rather than on Stack Overflow because I want to understand what's going on, I don't want to just see a program with no explanation. </sub></p>
 | programming languages typing functional programming | 1 |
307 | Show that { xy ∣ |x| = |y|, x ≠ y } is context-free | <p>I remember coming across the following question about a language that supposedly is context-free, but I was unable to find a proof of the fact. Have I perhaps misremembered the question?</p>

<p>Anyway, here's the question:</p>

<blockquote>
 <p>Show that the language $L = \{xy \mid |x| = |y|, x\neq y\}$ is context free.</p>
</blockquote>
 | formal languages context free | 1 |
310 | Decision problem such that any algorithm admits an exponentially faster algorithm | <p>In Hromkovič's <a href="http://www.ite.ethz.ch/publications/buch/index_EN">Algorithmics for Hard Problems</a> (2nd edition) there is this theorem (2.3.3.3, page 117):</p>

<blockquote>
 <p>There is a (decidable) decision problem $P$ such that for every algorithm $A$ that solves $P$ there is another algorithm $A'$ that also solves $P$ and additionally fulfills<br>
 $\qquad \forall^\infty n \in \mathbb{N}. \mathrm{Time}_{A'}(n) = \log_2 \mathrm{Time}_A(n)$</p>
</blockquote>

<p>$\mathrm{Time}_A(n)$ is the worst-case runtime of $A$ on inputs of size $n$ and $\forall^\infty$ means "for all but finitely many".</p>

<p>A proof is not given and we have no idea how to go about this; it is quite counter-intuitive, actually. How can the theorem be proven?</p>
 | complexity theory | 1 |
311 | Deriving the regular expression for C-style /**/ comments | <p>I'm working on a parser for a C-style language, and for that parser I need the regular expression that matches C-style /**/ comments. Now, I've found this expression on the web:</p>

<pre><code>/\*([^\*]*\*+[^\*/])*([^\*]*\*+|[^\*]*\*/
</code></pre>

<p>However, as you can see, this is a rather messy expression, and I have no idea whether it actually matches exactly what I want it to match.</p>

<p>Is there a different way of (rigorously) defining regular expressions that are easy to check by hand that they are really correct, and are then convertible ('compilable') to the above regular expression?</p>
 | compilers parsers regular languages | 1 |
315 | Are all context-free and regular languages efficiently decidable? | <p>I came across this figure which shows that context-free and regular languages are (proper) subsets of efficient problems (supposedly $\mathrm{P}$). I perfectly understand that efficient problems are a subset of all decidable problems because we can solve them but it could take a very long time. </p>

<p>Why are <em>all</em> context-free and regular languages efficiently decidable? Does it mean that deciding them will not take a long time, and that we know this without more context?</p>

<p><img src="https://i.stack.imgur.com/xdEBQ.jpg" alt="enter image description here"></p>
 | formal languages regular languages context free efficiency | 0 |
318 | Are all system calls blocking? | <p>I was reading <a href="http://www.eecg.toronto.edu/~livio/papers/flexsc-osdi10.pdf">an article</a> that describes the switch between user-space and kernel-space that happens upon a system call. The article says </p>

<blockquote>
 <p>An application expects the completion of the system call before resuming user-mode execution.</p>
</blockquote>

<p>Now, until now I was assuming that some system calls are <code>blocking</code>, whereas others are <code>non-blocking</code>. With the comment above, I am now confused. Does this mean that all system calls are blocking or did I misunderstand a concept?</p>
 | operating systems os kernel | 1 |
323 | Which instruction yields atomicity in this expression that makes the result 2? | <p>I am reading about atomicity and came across the following scenario</p>

<pre><code>int x = y = z = 0;

Thread 1      Thread 2
---------     --------
x = y + z     y = 1
              z = 2
</code></pre>

<p>Which gives the following sets of output</p>

<p>$$\begin{array}{ccc}1&2&3\\
T1 : x = y + z&T2 : y = 1&T2 : y = 1\\T2 : y = 1&T2 : z = 2&T1 : x = y + z\\T2 : z = 2&T1 : x = y + z&T2 : z = 2\end{array}$$</p>

<p>Translating the $x=y+z$ expression to machine code gives</p>

<pre><code>load  r1, y
load  r2, z
add   r3, r1, r2
store r3, x
</code></pre>

<p>However, according to some notes I read, going down the path of</p>

<pre><code>T1 : load r1, y
T2 : y = 1
     z = 2
T1 : load r2, z
     add r3, r1, r2
     store r3, x
</code></pre>

<p>I cannot seem to understand how the author came to the result that $x=2$. </p>

<p>To me, based on the previous machine instructions the result should be 3, which I guess leads me to realize I am supposed to hit <em>eureka</em> (or a simple misread) and realize where the atomicity occurs. Could you explain the atomicity in this fairly simple statement that leads to the correct result?</p>
 | operating systems programming languages concurrency | 1 |
326 | Distinguishing between uppercase and lowercase letters in the "move-to-front" method | <p>Is it not necessary to encode uppercase and lowercase letters as distinct symbols when encoding a message with the <a href="http://en.wikipedia.org/wiki/Move-to-front_transform" rel="nofollow">move-to-front transform</a>? From an old computer science course exam, the problem was to encode <code>Matt_ate_the_mat</code> starting with an empty list.</p>

<p>Using the author's solution methodology of not taking into account <code>M</code> versus <code>m</code>, one arrives at</p>

<p>$C(1)C^∗(M)\\
C(2)C^∗(a)\\
C(3)C^∗(t)\\
C(1)\\
C(4)C^∗(\_)\\
C(3)\\
C(3)\\
C(5)C^∗(e)\\
C(4)\\
C(3)\\
C(6)C^∗(h)\\
C(4)\\
C(4)\\
C(6)\\
C(6)\\
C(6)$</p>

<p>Seeing that move-to-front works best with items that are repeated, this works to one's advantage as long as the difference between <code>M</code> and <code>m</code> in the original message is not important, correct?</p>

<p>Though wouldn't taking <code>m</code> into account change the last encodings (to $C(7)C^*(m)$ for the <code>m</code>), or was this done for the sake of brevity within the exam?</p>
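
<p>For reference, here is a rough Python sketch of the encoder as I understand it (an assumption: following the exam's convention, a symbol not yet in the list is emitted as $C(\text{position})C^*(\text{symbol})$, appended, and then moved to the front; codes are 1-based). It reproduces the author's output on the case-folded message and lets one compare the case-sensitive variant:</p>

<pre><code>def mtf_encode(message):
    """Move-to-front with a dictionary that grows on demand (1-based codes)."""
    table = []      # starts empty, as in the exam problem
    out = []
    for ch in message:
        if ch in table:
            i = table.index(ch)
            out.append("C(%d)" % (i + 1))
            table.insert(0, table.pop(i))           # move to front
        else:
            table.append(ch)                        # new symbol goes at the end
            out.append("C(%d)C*(%s)" % (len(table), ch))
            table.insert(0, table.pop())            # then moves to the front
    return " ".join(out)

print(mtf_encode("Matt_ate_the_mat".lower()))  # reproduces the exam's output
print(mtf_encode("Matt_ate_the_mat"))          # case-sensitive: the final m becomes C(7)C*(m)
</code></pre>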
 | combinatorics data compression | 1 |
329 | Do you get DFS if you change the queue to a stack in a BFS implementation? | <p>Here is the standard pseudocode for breadth first search:</p>

<pre><code>{ seen(x) is false for all x at this point }
push(q, x0)
seen(x0) := true
while (!empty(q))
    x := pop(q)
    visit(x)
    for each y reachable from x by one edge
        if not seen(y)
            push(q, y)
            seen(y) := true
</code></pre>

<p>Here <code>push</code> and <code>pop</code> are assumed to be queue operations. But what if they are stack operations? Does the resulting algorithm visit vertices in depth-first order?</p>

<hr/>

<p>If you voted for the comment "this is trivial", I'd ask you to explain why it is trivial. I find the problem quite tricky.</p>
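
<p>To make experimenting easier, here is the pseudocode transcribed into Python (adjacency-list graphs assumed), together with a textbook recursive DFS to compare visit orders against. This doesn't settle the question by itself, but it makes it easy to hunt for counterexamples:</p>

<pre><code>def traverse_with_stack(graph, root):
    """The pseudocode above, with the queue replaced by a stack."""
    seen = {root}
    stack = [root]
    order = []
    while stack:
        x = stack.pop()
        order.append(x)              # "visit"
        for y in graph[x]:
            if y not in seen:
                seen.add(y)          # marked when pushed, as in the pseudocode
                stack.append(y)
    return order

def recursive_dfs(graph, root, seen=None, order=None):
    """Textbook recursive DFS, for comparison."""
    if seen is None:
        seen, order = set(), []
    seen.add(root)
    order.append(root)
    for y in graph[root]:
        if y not in seen:
            recursive_dfs(graph, y, seen, order)
    return order

G = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(traverse_with_stack(G, 0))   # [0, 2, 3, 1]
print(recursive_dfs(G, 0))         # [0, 1, 3, 2]
</code></pre>

<p>On this small example the two visit orders already differ, which is part of what makes the question tricky.</p>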
 | algorithms graphs | 1 |
332 | In-place algorithm for interleaving an array | <p>You are given an array of $2n$ elements </p>

<p>$$a_1, a_2, \dots, a_n, b_1, b_2, \dots b_n$$</p>

<p>The task is to interleave the array, using an in-place algorithm such that the resulting array looks like</p>

<p>$$b_1, a_1, b_2, a_2, \dots , b_n, a_n$$</p>

<p>If the in-place requirement wasn't there, we could easily create a new array and copy elements giving an $\mathcal{O}(n)$ time algorithm.</p>

<p>With the in-place requirement, a divide-and-conquer algorithm runs in $\Theta(n \log n)$ time.</p>

<p>So the question is:</p>

<blockquote>
 <p>Is there an $\mathcal{O}(n)$ time algorithm, which is also in-place?</p>
</blockquote>

<p>(Note: You can assume the uniform-cost word-RAM model, so in-place translates to an $\mathcal{O}(1)$ space restriction.)</p>
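
<p>For concreteness, here is my own Python sketch of the $\Theta(n \log n)$ divide-and-conquer mentioned above: rotate the two middle blocks past each other with the three-reversal trick, then recurse on the halves. (It is in place up to the $\mathcal{O}(\log n)$ recursion stack.)</p>

<pre><code>def _reverse(A, lo, hi):         # reverse A[lo:hi] in place
    hi -= 1
    while lo < hi:
        A[lo], A[hi] = A[hi], A[lo]
        lo += 1
        hi -= 1

def _rotate(A, lo, mid, hi):     # rotate A[lo:hi] so that A[mid] comes first
    _reverse(A, lo, mid)
    _reverse(A, mid, hi)
    _reverse(A, lo, hi)

def interleave(A, lo=0, hi=None):
    """A[lo:hi] holds an a-block then a b-block of equal size n;
    rearrange in place to b1, a1, b2, a2, ..."""
    if hi is None:
        hi = len(A)
    n = (hi - lo) // 2
    if n == 0:
        return
    if n == 1:
        A[lo], A[lo + 1] = A[lo + 1], A[lo]   # [a1, b1] -> [b1, a1]
        return
    h = n // 2
    # blocks: a1..ah | a_{h+1}..a_n | b1..bh | b_{h+1}..b_n;
    # rotating the middle section brings b1..bh next to a1..ah
    _rotate(A, lo + h, lo + n, lo + n + h)
    interleave(A, lo, lo + 2 * h)    # now holds a1..ah b1..bh
    interleave(A, lo + 2 * h, hi)    # now holds a_{h+1}..a_n b_{h+1}..b_n

A = ['a1', 'a2', 'a3', 'b1', 'b2', 'b3']
interleave(A)
print(A)   # ['b1', 'a1', 'b2', 'a2', 'b3', 'a3']
</code></pre>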
 | algorithms arrays permutations in place | 1 |
335 | What is the height of an empty BST when using it in context for balancing? | <p>Suppose the datatype for a BST is defined as follows (in SML)</p>
<pre><code>datatype 'a bst_Tree =
 Empty
 | Node of (int * 'a) * 'a bst_Tree * 'a bst_Tree;
</code></pre>
<p>So there are two cases: either the BST is <code>Empty</code>, or it has a (key, value) pair as well as two children.</p>
<p>Now, for the case of an AVL tree, the condition is</p>
<blockquote>
<p>In an AVL tree, the heights of the two child subtrees of any node differ by at most one<br />
<sub>- <a href="http://en.wikipedia.org/wiki/AVL_tree" rel="nofollow noreferrer">AVL tree Wikipedia</a></sub></p>
</blockquote>
<p>I want to be able to create a height function to check whether the tree is balanced. My current setup is as follows</p>
<pre><code>fun height (Empty) = ~1
 | height (Node(v, Empty, Empty)) = 0 (* Redundant matching because of third case *)
 | height (Node(v, L, R)) = 1 + Int.max(height(L),height(R))
</code></pre>
<p>I tried to separate the Tree into three cases</p>
<ol>
<li>An empty Tree</li>
<li>A Tree with a root node</li>
<li>A populated tree</li>
</ol>
<p>The reason for this is that there does not seem to be a canonical source on what the value of the height of an <code>Empty</code> Tree is, as opposed to one that has only a root. For the purposes of my balance function it did the job, but I would rather try to understand why there isn't a canonical answer for the height of an <code>Empty</code> Tree.</p>
<p>There is a canonical answer, in a manner of speaking, on <a href="http://en.wikipedia.org/wiki/Tree_height#Terminology" rel="nofollow noreferrer">Wikipedia</a>, but while initially doing research on this on Stack Overflow I came across many comments stating this to be wrong/incorrect/unconventional:</p>
<blockquote>
<p>Conventionally, the value −1 corresponds to a subtree with no nodes, whereas zero corresponds to a subtree with one node.)</p>
</blockquote>
<p>Here is the question from which my uncertainty arose:</p>
<p><a href="https://stackoverflow.com/questions/2209777/what-is-the-definition-for-the-height-of-a-tree">What is the definition for the height of a tree?</a></p>
<blockquote>
<p>I think you should take a look at the <a href="http://www.itl.nist.gov/div897/sqg/dads/" rel="nofollow noreferrer">Dictionary of Algorithms and Data Structures</a> at the NIST website. Their definition of height says a single node is height 0.</p>
<p>The <a href="http://www.itl.nist.gov/div897/sqg/dads/HTML/tree.html" rel="nofollow noreferrer">definition of a valid tree</a> does include an empty structure. The site doesn't mention the height of such a tree, but based on the definition of the height, it should also be 0.</p>
</blockquote>
 | algorithms data structures terminology | 1 |
341 | ML functions from polymorphic lists to polymorphic lists | <p>I'm learning programming in ML (OCaml), and earlier I asked about <a href="https://cs.stackexchange.com/questions/302/ml-function-of-type-a-b">ML functions of type <code>'a -> 'b</code></a>. Now I've been experimenting a bit with functions of type <code>'a list -> 'b list</code>. There are some obvious simple examples:</p>

<pre><code>let rec loop l = loop l
let return_empty l = []
let rec loop_if_not_empty = function [] -> []
 | l -> loop_if_not_empty l
</code></pre>

<p>What I can't figure out is how to make a function that does something other than return the empty list or loop (without using any library function). Can this be done? Is there a way to return non-empty lists?</p>

<p>Edit: Yes, if I have a function of type <code>'a -> 'b</code>, then I can make another one, or a function of type <code>'a list -> 'b list</code>, but what I'm wondering here is how to make the first one.</p>
 | programming languages typing functional programming | 1 |
342 | Not all Red-Black trees are balanced? | <p>Intuitively, "balanced trees" should be trees where left and right sub-trees at each node must have "approximately the same" number of nodes.</p>
<p>Of course, when we talk about red-black trees* (see definition at the end) being balanced, we actually mean that they are <em>height</em>-balanced and in that sense, they are balanced.</p>
<p>Suppose we try to formalize the above intuition as follows:</p>
<blockquote>
<p><strong>Definition:</strong> A Binary Tree is called <span class="math-container">$\mu$</span>-balanced, with <span class="math-container">$0 \le \mu \leq \frac{1}{2}$</span>, if for every node <span class="math-container">$N$</span>, the inequality</p>
<p><span class="math-container">$$ \mu \le \frac{|N_L| + 1}{|N| + 1} \le 1 - \mu$$</span></p>
<p>holds and for every <span class="math-container">$\mu' \gt \mu$</span>, there is some node for which the above statement fails. <span class="math-container">$|N_L|$</span> is the number of nodes in the left sub-tree of <span class="math-container">$N$</span> and <span class="math-container">$|N|$</span> is the number of nodes under the tree with <span class="math-container">$N$</span> as root (including the root).</p>
</blockquote>
<p>I believe these are called <em>weight-balanced</em> trees in some of the literature on this topic.</p>
<p>One can show that if a binary tree with <span class="math-container">$n$</span> nodes is <span class="math-container">$\mu$</span>-balanced (for a constant <span class="math-container">$\mu \gt 0$</span>), then the height of the tree is <span class="math-container">$\mathcal{O}(\log n)$</span>, thus maintaining the nice search properties.</p>
<p>So the question is:</p>
<blockquote>
<p>Is there some <span class="math-container">$\mu \gt 0$</span> such that every big enough red-black tree is <span class="math-container">$\mu$</span>-balanced?</p>
</blockquote>
<hr />
<p>The definition of Red-Black trees we use (from Introduction to Algorithms by Cormen et al):</p>
<p>A binary search tree, where each node is coloured either red or black and</p>
<ul>
<li>The root is black</li>
<li>All NULL nodes are black</li>
<li>If a node is red, then both its children are black.</li>
<li>For each node, all paths from that node to descendant NULL nodes have the same number of black nodes.</li>
</ul>
<p>Note: we don't count the NULL nodes in the definition of <span class="math-container">$\mu$</span>-balanced above. (Though I believe it does not matter if we do).</p>
 | data structures binary trees search trees | 1 |
349 | Why is encrypting with the same one-time-pad not good? | <p>To encrypt a message $m_1$ with a one-time-pad key $k$ you do
$Enc(m_1,k) = m_1 \oplus k$. </p>

<p>If you use the same $k$ to encrypt a different message $m_2$ you get
$Enc(m_2,k) = m_2 \oplus k$, and if you perform Xor of the two ciphertext you get
$$( m_1 \oplus k) \oplus ( m_2 \oplus k) = m_1 \oplus m_2$$</p>

<p>So, OK, there is some information leakage because you learn $m_1 \oplus m_2$, but why is it not secure? I have no way to learn (say) $m_1$ unless I know $m_2$. So why is it wrong to use $k$ twice?</p>
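
<p>To make the leak concrete, here is a tiny Python sketch (the messages and pad are invented for illustration): XOR-ing the two ciphertexts cancels $k$, and a correct guess ("crib") for a fragment of $m_1$ immediately reveals the corresponding fragment of $m_2$.</p>

<pre><code>m1 = b"attack at dawn"
m2 = b"retreat to bas"     # same length as m1
k  = bytes([0x3a, 0x91, 0x5c, 0x07, 0xee, 0x12, 0x60,
            0x8b, 0x44, 0xd9, 0x21, 0x7f, 0x03, 0xa5])   # pad, reused: the mistake

c1 = bytes(a ^ b for a, b in zip(m1, k))
c2 = bytes(a ^ b for a, b in zip(m2, k))

leak = bytes(a ^ b for a, b in zip(c1, c2))   # equals m1 XOR m2; k has cancelled
crib = b"attack"                              # attacker guesses a word of m1
print(bytes(a ^ b for a, b in zip(leak, crib)))   # prints b'retrea'
</code></pre>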
 | cryptography information theory encryption | 1 |
350 | What is an intuitive way to explain and understand De Morgan's Law? | <p>De Morgan's Law is often introduced in an introductory mathematics for computer science course, and I often see it as a way to turn statements from AND to OR by negating terms.</p>

<p>Is there a more intuitive explanation for why this works rather than just remembering truth tables? To me this is like black magic; what's a better way to explain it so that it makes sense to a less mathematically inclined individual? </p>
 | logic discrete mathematics didactics | 1 |
356 | Why hasn't there been an encryption algorithm that is based on a known NP-hard problem? | <p>Most of today's encryption, such as RSA, relies on integer factorization, which is not believed to be an NP-hard problem, but which belongs to BQP, making it vulnerable to quantum computers. I wonder why there has not been an encryption algorithm based on a known NP-hard problem. It sounds (at least in theory) like such a problem would make for a better encryption algorithm than one that is not proven to be NP-hard.</p>
 | complexity theory np hard encryption cryptography | 1 |
358 | How to verify number with Bob without Eve knowing? | <p>You need to check that your friend, Bob, has your correct phone number, but you cannot ask him directly. You must write the question on a card and give it to Eve, who will take the card to Bob and return the answer to you. What must you write on the card, besides the question, to ensure Bob can encode the message so that Eve cannot read your phone number?</p>

<p><em>Note:</em> This question is on a list of "google interview questions". As a result, there are tons of versions of this question on the web, and many of them don't have clear, or even correct, answers. </p>

<p><em>Note 2:</em> The snarky answer to this question is that Bob should write "call me". Yes, that's very clever, 'outside the box' and everything, but it doesn't use any techniques from that field of CS where we call our hero "Bob" and his eavesdropping adversary "Eve". </p>

<p><strong>Update:</strong> <br>
Bonus points for an algorithm that you and Bob could both reasonably complete by hand.</p>

<p><strong>Update 2:</strong> <br>
Note that Bob doesn't have to send you any arbitrary message, but only confirm that he has your correct phone number without Eve being able to decode it, which may or may not lead to simpler solutions.</p>
 | algorithms cryptography | 1 |
366 | What goes wrong with sums of Landau terms? | <p>I wrote</p>

<p>$\qquad \displaystyle \sum\limits_{i=1}^n \frac{1}{i} = \sum\limits_{i=1}^n \cal{O}(1) = \cal{O}(n)$</p>

<p>but my friend says this is wrong. From the TCS cheat sheet I know that the sum is also called $H_n$, which has logarithmic growth in $n$. So my bound is not very sharp, but it is sufficient for the analysis I needed it for.</p>

<p>What did I do wrong?</p>

<p><strong>Edit</strong>:
My friend says that with the same reasoning, we can prove that</p>

<p>$\qquad \displaystyle \sum\limits_{i=1}^n i = \sum\limits_{i=1}^n \cal{O}(1) = \cal{O}(n)$</p>

<p>Now this is obviously wrong! What is going on here?</p>
 | asymptotics landau notation | 1 |