Dataset columns: id (int64, range 1–141k), title (string, lengths 15–150), body (string, lengths 43–35.6k), tags (string, lengths 1–118), label (int64, values 0–1)
367
How can it be decidable whether $\pi$ has some sequence of digits?
<p>We were given the following exercise.</p>&#xA;&#xA;<blockquote>&#xA; <p>Let</p>&#xA; &#xA; <p>$\qquad \displaystyle f(n) = \begin{cases} 1 &amp; 0^n \text{ occurs in the decimal representation of } \pi \\ 0 &amp; \text{else}\end{cases}$</p>&#xA; &#xA; <p>Prove that $f$ is computable.</p>&#xA;</blockquote>&#xA;&#xA;<p>How is this possible? As far as I know, we do not know whether $\pi$ contains every sequence of digits (nor which ones it contains), and an algorithm can certainly not decide that some sequence does <em>not</em> occur. Therefore I think $f$ is not computable, because the underlying problem is only semi-decidable.</p>&#xA;
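The standard resolution of this exercise is non-constructive: an occurrence of $0^n$ contains $0^{n-1}$, so the set of $n$ with $f(n)=1$ is downward closed, and $f$ must be either constantly $1$ or a threshold function. A Python sketch of that candidate family (function names are mine, for illustration only):

```python
# Either 0^n occurs in pi for every n (then f is constantly 1), or there is
# a largest N such that 0^N occurs (then f(n) = 1 iff n <= N). Each candidate
# below is trivially computable; exactly one of them equals f -- we just do
# not know which one, and the proof does not need to tell us.

def f_all_ones(n: int) -> int:
    """Candidate for f if every run 0^n occurs in pi."""
    return 1

def make_f_threshold(N: int):
    """Candidate for f if the longest run of zeros in pi has length N."""
    def f(n: int) -> int:
        return 1 if n <= N else 0
    return f

# A countable family of computable functions; f is guaranteed to be one of
# them, hence f is computable -- non-constructively.
candidates = [f_all_ones] + [make_f_threshold(N) for N in range(5)]
```

This is exactly why the semi-decidability intuition in the question fails: computability of $f$ only requires that *some* program computes it, not that we can exhibit which one.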
computability undecidability
1
368
Counting binary trees
<p>(I'm a student with some mathematical background and I'd like to know how to count the number of a specific kind of binary tree.)</p>&#xA;&#xA;<p>Looking at the Wikipedia page for <a href="http://en.wikipedia.org/wiki/Binary_tree">Binary Trees</a>, I've noticed the assertion that the number of rooted binary trees of size $n$ is the <a href="http://en.wikipedia.org/wiki/Catalan_number">Catalan number</a>:&#xA;$$C_n = \dfrac{1}{n+1}{2n \choose n}$$</p>&#xA;&#xA;<p>But how could I come up with such a result by myself? Is there a method to find it?</p>&#xA;&#xA;<p>Now, what if the order of sub-trees (which is left, which is right) is not considered? For example, from my point of view, I consider that these two trees are the same:</p>&#xA;&#xA;<pre><code> /\ /\&#xA; /\ /\&#xA;</code></pre>&#xA;&#xA;<p>Would it be possible to apply a similar method to count how many of these objects have exactly $n$ nodes?</p>&#xA;
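The standard route to the formula is Segner's recurrence: a tree with $n$ nodes is a root plus a left subtree with $k$ nodes and a right subtree with $n-1-k$ nodes, summed over $k$. A Python sketch checking that recurrence against the closed form (function names are mine):

```python
from math import comb
from functools import lru_cache

def catalan(n: int) -> int:
    """Closed form: C_n = (1/(n+1)) * binom(2n, n)."""
    return comb(2 * n, n) // (n + 1)

@lru_cache(maxsize=None)
def count_trees(n: int) -> int:
    """Count ordered binary trees with n nodes via Segner's recurrence:
    choose k nodes for the left subtree and n-1-k for the right."""
    if n == 0:
        return 1  # the empty tree
    return sum(count_trees(k) * count_trees(n - 1 - k) for k in range(n))

# The recurrence reproduces the Catalan numbers.
assert all(count_trees(n) == catalan(n) for n in range(10))
```

The recurrence is where a generating-function derivation of the closed form usually starts; the unordered variant asked about at the end satisfies a different recurrence and is not counted by $C_n$.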
combinatorics binary trees discrete mathematics
1
374
Languages accepted by modified versions of finite automata
<p>A deterministic finite automaton (DFA) is a state machine model capable of accepting all and only regular languages. DFAs can be (and usually are) defined in such a way that each state must provide some transition for all elements of the input alphabet; in other words, the transition function $\delta : Q \times \Sigma \rightarrow Q$ should be a (total) function.</p>&#xA;&#xA;<p>Imagine what we will call a doubly deterministic finite automaton (DDFA). It is defined similarly to a DFA, with two exceptions: first, instead of the transition leading from one state to one other state for every possible input symbol, it must lead to two distinct states; second, in order to accept a string, all potential paths must satisfy either one or the other of the following conditions:</p>&#xA;&#xA;<ol>&#xA;<li>All potential paths through the DDFA lead to an accepting state (we will call this a type-1 DDFA).</li>&#xA;<li>All potential paths through the DDFA lead to the same accepting state (we will call this a type-2 DDFA).</li>&#xA;</ol>&#xA;&#xA;<p>Now for my question:</p>&#xA;&#xA;<blockquote>&#xA; <p>What languages do type-1 and type-2 DDFAs accept? Specifically, is it the case that $L(DFA) \subsetneq L(DDFA)$, $L(DDFA) = L(DFA)$, or $L(DDFA) \subsetneq L(DFA)$? In the case that $L(DDFA) \neq L(DFA)$, is there an easy description of $L(DDFA)$? </p>&#xA;</blockquote>&#xA;&#xA;<p>Proofs (or at least moderately fleshed-out sketches) are appreciated, if they aren't too complicated.</p>&#xA;
formal languages automata finite automata
1
376
Is this language defined using twin primes regular?
<p>Let</p>&#xA;&#xA;<p>$\qquad L = \{a^n \mid \exists_{p \geq n}\ p\,,\ p+2 \text{ are prime}\}.$</p>&#xA;&#xA;<p>Is $L$ regular?</p>&#xA;&#xA;<p>This question looked suspicious at first glance, and I've realized that it is connected with the <a href="https://en.wikipedia.org/wiki/Twin_prime">twin prime conjecture</a>. My problem is that the conjecture has not been resolved yet, so I am not sure how I can proceed with deciding whether the language is regular. </p>&#xA;
formal languages automata regular languages finite automata
1
386
Path to formal methods
<p>It is not uncommon to see students starting their PhDs with only a limited background in mathematics and the formal aspects of computer science. Obviously it will be very difficult for such students to become theoretical computer scientists, but it would be good if they could become savvy with using formal methods and reading papers that contain formal methods.</p>&#xA;&#xA;<blockquote>&#xA; <p>What is a good short-term path that starting PhD students could follow to gain the exposure required to get them reading papers involving formal methods and eventually writing papers that use such formal methods?</p>&#xA;</blockquote>&#xA;&#xA;<p>In terms of context, I'm thinking more in terms of Theory B and formal verification as the kinds of things that they should learn, but also classical TCS topics such as automata theory.</p>&#xA;
formal methods education
0
390
Proving closure under reversal of languages accepted by min-heap automata
<p><em>This is a follow-up question of <a href="https://cs.stackexchange.com/q/110/98">this one</a>.</em></p>&#xA;&#xA;<p>In a previous question about <a href="https://cs.stackexchange.com/q/110/69">exotic state machines</a>, Alex ten Brink and Raphael addressed the computational capabilities of a peculiar kind of state machine: min-heap automata. They were able to show that the set of languages accepted by such machines ($HAL$) is neither a subset nor a superset of the set of context-free languages. Given the successful resolution of and apparent interest in that question, I proceed to ask several follow-up questions.</p>&#xA;&#xA;<p>It is known that the regular languages are closed under a variety of operations (we may limit ourselves to basic operations such as union, intersection, complement, difference, concatenation, Kleene star, and reversal), whereas the context-free languages have different closure properties (these are closed under union, concatenation, Kleene star, and reversal).</p>&#xA;&#xA;<blockquote>&#xA; <p>Is HAL closed under reversal?</p>&#xA;</blockquote>&#xA;
formal languages automata closure properties
0
393
Proving closure under complementation of languages accepted by min-heap automata
<p><em>This is a follow-up question of <a href="https://cs.stackexchange.com/q/110/98">this one</a></em>.</p>&#xA;&#xA;<p>In a previous question about <a href="https://cs.stackexchange.com/q/110/69">exotic state machines</a>, Alex ten Brink and Raphael addressed the computational capabilities of a peculiar kind of state machine: min-heap automata. They were able to show that the set of languages accepted by such machines ($HAL$) is neither a subset nor a superset of the set of context-free languages. Given the successful resolution of and apparent interest in that question, I proceed to ask several follow-up questions.</p>&#xA;&#xA;<p>It is known that the regular languages are closed under a variety of operations (we may limit ourselves to basic operations such as union, intersection, complement, difference, concatenation, Kleene star, and reversal), whereas the context-free languages have different closure properties (these are closed under union, concatenation, Kleene star, and reversal).</p>&#xA;&#xA;<blockquote>&#xA; <p>Is HAL closed under complementation?</p>&#xA;</blockquote>&#xA;
formal languages automata closure properties
1
394
Computational power of deterministic versus nondeterministic min-heap automata
<p><em>This is a follow-up question of <a href="https://cs.stackexchange.com/q/110/98">this one</a>.</em></p>&#xA;&#xA;<p>In a previous question about <a href="https://cs.stackexchange.com/q/110/69">exotic state machines</a>, Alex ten Brink and Raphael addressed the computational capabilities of a peculiar kind of state machine: min-heap automata. They were able to show that the set of languages accepted by such machines ($HAL$) is neither a subset nor a superset of the set of context-free languages. Given the successful resolution of and apparent interest in that question, I proceed to ask several follow-up questions.</p>&#xA;&#xA;<p>It is known that deterministic and nondeterministic finite automata have equivalent computational capabilities, as do deterministic and nondeterministic Turing machines. However, the computational capabilities of deterministic push-down automata are less than those of nondeterministic push-down automata.</p>&#xA;&#xA;<blockquote>&#xA; <p>Are the computational capabilities of deterministic min-heap automata less than, or are they equal to, those of nondeterministic min-heap automata?</p>&#xA;</blockquote>&#xA;
formal languages automata nondeterminism
1
396
A DFA for recognizing comments
<p>The following DFA is a lexical analyzer which is supposed to recognize comments. The lexical analyzer ignores the comment and goes back to state one. I'm told that there's something wrong with it but I can't figure it out. What's the problem?</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/EeIdO.png" alt="enter image description here"></p>&#xA;&#xA;<p>FWIW, those tiny signs are stars, which are necessary for a C-style comment: "/* comment */"<br>&#xA; The loop in state three is "except *"</p>&#xA;
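For comparison, here is a sketch of a correct four-state recognizer for C-style comments (state numbering is mine and need not match the figure). The delicate case in such automata is a run of several `*` characters before the closing `/`: after seeing a `*` inside the comment, another `*` must keep the automaton in the "seen star" state, not send it back.

```python
# States: 0 = start, 1 = seen '/', 2 = inside comment,
# 3 = seen '*' inside comment, 4 = accept; -1 = dead state.

def is_comment(s: str) -> bool:
    state = 0
    for ch in s:
        if state == 0:
            state = 1 if ch == '/' else -1
        elif state == 1:
            state = 2 if ch == '*' else -1
        elif state == 2:
            state = 3 if ch == '*' else 2
        elif state == 3:
            if ch == '/':
                state = 4
            elif ch == '*':
                state = 3   # stay: a later '*' may still close the comment
            else:
                state = 2
        else:               # state 4 (or dead): any further input rejects
            state = -1
        if state == -1:
            return False
    return state == 4
```

Testing strings such as `"/***/"` against a candidate automaton is a quick way to expose the star-run bug.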
formal languages automata finite automata compilers
1
401
Is a function looking for subsequences of digits of $\pi$ computable?
<p><a href="https://cs.stackexchange.com/questions/367/how-can-it-be-decidable-whether-pi-has-some-sequence-of-digits">How can it be decidable whether $\pi$ has some sequence of digits?</a> inspired me to ask whether the following innocent-looking variation is computable:</p>&#xA;&#xA;<p>$$f(n) = \begin{cases}&#xA; 1 &amp; \text{if \(\bar n\) occurs in the decimal representation of \(\pi\)} \\&#xA; 0 &amp; \text{otherwise} \\&#xA;\end{cases}$$</p>&#xA;&#xA;<p>where $\bar n$ is the decimal representation of $n$ with no leading zeroes.</p>&#xA;&#xA;<p>If the decimal expansion of $\pi$ contains all finite digit sequences (let's call this a <a href="http://fr.wikipedia.org/wiki/Nombre_univers" rel="nofollow noreferrer">universal number</a> (in base 10)), then $f$ is the constant $1$. But this is an open mathematical question. If $\pi$ is not universal, does this mean that $f$ is uncomputable?</p>&#xA;
computability real numbers
0
407
Measuring the difficulty of SAT instances
<p>Given an instance of SAT, I would like to be able to estimate how difficult it will be to solve the instance.</p>&#xA;<p>One way is to run existing solvers, but that kind of defeats the purpose of estimating difficulty. A second way might be looking at the ratio of clauses to variables, as is done for phase transitions in random-SAT, but I am sure better methods exist.</p>&#xA;<p>Given an instance of SAT, are there some fast heuristics to measure the difficulty? The only condition is that these heuristics be faster than actually running existing SAT solvers on the instance.</p>&#xA;<hr />&#xA;<h3>Related question</h3>&#xA;<p><a href="https://cstheory.stackexchange.com/q/4375/1037">Which SAT problems are easy?</a> on cstheory.SE. That question asks about tractable sets of instances. This is a similar question, but not exactly the same. I am really interested in a heuristic that, given a single instance, makes some sort of semi-intelligent guess as to whether the instance will be a hard one to solve.</p>&#xA;
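As a baseline, the clause-to-variable ratio mentioned above is cheap to compute. A sketch assuming DIMACS CNF input (the function name is mine); for random 3-SAT, instances near the phase-transition ratio of roughly 4.27 tend to be the hardest:

```python
def clause_variable_ratio(dimacs: str) -> float:
    """Crude hardness proxy: the ratio m/n of clauses to variables,
    read from a DIMACS CNF string (clauses are 0-terminated)."""
    clauses = 0
    variables = set()
    for line in dimacs.splitlines():
        line = line.strip()
        if not line or line[0] in 'cp':   # skip comments and the 'p' header
            continue
        lits = [int(t) for t in line.split() if t != '0']
        if lits:
            clauses += 1
            variables.update(abs(l) for l in lits)
    return clauses / len(variables)
```

This runs in time linear in the input size, so it trivially satisfies the "faster than a solver" condition; it is of course only informative for instances that resemble random CNF.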
complexity theory satisfiability heuristics
1
411
How to find a superstar in linear time?
<p>Consider directed graphs. We call a node $v$ <em>superstar</em> if and only if no other node can be reached from it, but all other nodes have an edge to $v$. Formally:</p>&#xA;&#xA;<p>$\qquad \displaystyle v \text{ superstar } :\Longleftrightarrow \mathrm{outdeg}(v) = 0 \land \mathrm{indeg}(v) = n-1$</p>&#xA;&#xA;<p>with $n$ the number of nodes in the graph. For example, in the below graph, the unfilled node is a superstar (and the other nodes are not).</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/MIGky.png" alt="A Superstar"><br>&#xA;<sup>[<a href="https://github.com/akerbos/sesketches/blob/gh-pages/src/cs_411.dot" rel="noreferrer">source</a>]</sup></p>&#xA;&#xA;<p>How can you identify all superstars in a directed graph in $\mathcal{O}(n)$ time? A suitable graph representation can be chosen from the <a href="https://en.wikipedia.org/wiki/Graph_%28abstract_data_type%29#Representations" rel="noreferrer">usual candidates</a>; please refrain from using representations that move the problem's complexity to preprocessing.</p>&#xA;&#xA;<p>No assumptions regarding density can be made. We don't assume the graph contains a superstar; if there is none, the algorithm should recognize it.</p>&#xA;&#xA;<p><em>Notation</em>: $\mathrm{outdeg}$ is a node's number of outgoing edges, $\mathrm{indeg}$ similar for incoming edges.</p>&#xA;
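Note that at most one superstar can exist: two of them would each need an edge to the other, contradicting out-degree $0$. A sketch of the standard elimination approach (the "celebrity problem"), assuming an adjacency matrix with $O(1)$ edge queries; the function name is mine:

```python
def find_superstar(adj):
    """adj[u][v] is True iff there is an edge u -> v. Returns the index of
    the superstar, or None if there is none. Uses O(n) matrix queries."""
    n = len(adj)
    if n == 0:
        return None
    # Elimination: each comparison rules out one of the two nodes.
    candidate = 0
    for u in range(1, n):
        if adj[candidate][u]:
            # candidate has an outgoing edge, so it cannot be a superstar.
            candidate = u
        # else: u has no incoming edge from candidate, so u cannot be one.
    # Verify the single surviving candidate in another O(n) pass.
    for u in range(n):
        if u == candidate:
            continue
        if adj[candidate][u] or not adj[u][candidate]:
            return None
    return candidate
```

The first pass makes $n-1$ queries and the verification at most $2(n-1)$, so the total is $O(n)$ even though the graph may have $\Theta(n^2)$ edges.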
algorithms graphs
1
416
Approximation of minimum bandwidth on binary trees
<p>The minimum bandwidth problem is to find an ordering of graph nodes on the integer line that minimizes the largest distance between any two adjacent nodes. </p>&#xA;&#xA;<p>The decision problem is NP-complete even for binary trees. <a href="http://www.jstor.org/stable/10.2307/2100947">Complexity Results for Bandwidth Minimization. Garey, Graham, Johnson and Knuth, SIAM J. Appl. Math., Vol. 34, No.3, 1978</a>.</p>&#xA;&#xA;<p>What is the best known efficient approximability result for computing minimum bandwidth on binary trees? What is the best known conditional hardness of approximation result? </p>&#xA;
complexity theory np complete reference request approximation
1
419
Is there any concrete relation between Gödel's incompleteness theorem, the halting problem and universal Turing machines?
<p>I've always thought vaguely that the answer to the above question was affirmative along the following lines. Gödel's incompleteness theorem and the undecidability of the halting problem are both negative results about decidability, established by diagonal arguments (and in the 1930s), so they must somehow be two ways to view the same matters. And I thought that Turing used a universal Turing machine to show that the halting problem is unsolvable. (See also <a href="https://math.stackexchange.com/questions/108964/halting-problem-and-universality">this math.SE</a> question.)</p>&#xA;&#xA;<p>But now that (teaching a course in computability) I look closer into these matters, I am rather bewildered by what I find. So I would like some help with straightening out my thoughts. I realise that on one hand Gödel's diagonal argument is very subtle: it needs a lot of work to construct an arithmetic statement that can be interpreted as saying something about its own derivability. On the other hand the proof of the undecidability of the halting problem I found <a href="http://en.wikipedia.org/wiki/Halting_problem#Sketch_of_proof" rel="noreferrer">here</a> is extremely simple, and doesn't even explicitly mention Turing machines, let alone the existence of universal Turing machines.</p>&#xA;&#xA;<p>A practical question about universal Turing machines is whether it is of any importance that the alphabet of a universal Turing machine be the same as that of the Turing machines that it simulates. I thought that would be necessary in order to concoct a proper diagonal argument (having the machine simulate itself), but I haven't found any attention to this question in the bewildering collection of descriptions of universal machines that I found on the net. 
If not for the halting problem, are universal Turing machines useful in any diagonal argument?</p>&#xA;&#xA;<p>Finally I am confused by <a href="http://en.wikipedia.org/wiki/Halting_problem#Relationship_with_G.C3.B6del.27s_incompleteness_theorem" rel="noreferrer">this further section</a> of the same WP article, which says that a weaker form of Gödel's incompleteness follows from the halting problem: "a complete, consistent and sound axiomatisation of all statements about natural numbers is unachievable" where "sound" is supposed to be the weakening. I know a theory is consistent if one cannot derive a contradiction, and a complete theory about natural numbers would seem to mean that all true statements about natural numbers can be derived in it; I know Gödel says such a theory does not exist, but I fail to see how such a hypothetical beast could possibly fail to be sound, i.e., also derive statements which are false for the natural numbers: the negation of such a statement would be true, and therefore by completeness also derivable, which would contradict consistency.</p>&#xA;&#xA;<p>I would appreciate any clarification on one of these points.</p>&#xA;
computability logic halting problem incompleteness
1
421
AVL trees are not weight-balanced?
<p>In a previous <a href="https://cs.stackexchange.com/questions/342/not-all-red-black-trees-are-balanced">question</a> there was a definition of weight balanced trees and a question regarding red-black trees. </p>&#xA;&#xA;<p>This question is to ask the same question, but for <a href="http://en.wikipedia.org/wiki/AVL_tree" rel="nofollow noreferrer">AVL trees</a>. </p>&#xA;&#xA;<p>The question is, given the definition of $\mu$-balanced trees as in the other question,</p>&#xA;&#xA;<blockquote>&#xA; <p>Is there some $\mu \gt 0$ such that all big enough AVL trees are $\mu$-balanced?</p>&#xA;</blockquote>&#xA;&#xA;<p>I presume there is only one definition of AVL trees and there is no ambiguity.</p>&#xA;
data structures binary trees search trees balanced search trees avl trees
1
423
How hard is counting the number of simple paths between two nodes in a directed graph?
<p>There is an easy polynomial algorithm to decide whether there is a path between two nodes in a directed graph (just do a routine graph traversal with, say, depth-first search).</p>&#xA;&#xA;<p>However it seems that, surprisingly, the problem gets much harder if instead of testing for existence we want to <em>count</em> the number of paths.</p>&#xA;&#xA;<p>If we allow paths to reuse vertices then there is a dynamic programming solution to find the number of paths from <em>s</em> to <em>t</em> with <em>n</em> edges. <strong>However, if we only allow simple paths, that don't reuse vertices, the only solution I can think of is brute force enumeration of the paths</strong>, something that has exponential time complexity.</p>&#xA;&#xA;<p>So I ask,</p>&#xA;&#xA;<ul>&#xA;<li>Is counting the number of simple paths between two vertices hard?</li>&#xA;<li>If so, is it kind of NP-complete? (I say kind of because it is technically not a decision problem...)</li>&#xA;<li>Are there other problems in P that have hard counting versions like that too?</li>&#xA;</ul>&#xA;
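The brute-force enumeration mentioned in bold can be written as a short DFS; a sketch (names are mine). For the complexity question: counting simple $s$-$t$ paths is known to be #P-complete, the counting analogue of NP-completeness, so the exponential behavior of this approach is not expected to be avoidable in general.

```python
def count_simple_paths(graph, s, t) -> int:
    """Count simple (vertex-disjoint) paths from s to t by exhaustive DFS.
    graph: dict mapping each node to an iterable of its successors.
    Worst-case exponential time in the number of vertices."""
    def dfs(u, visited):
        if u == t:
            return 1
        total = 0
        for v in graph.get(u, ()):
            if v not in visited:
                total += dfs(v, visited | {v})
        return total
    return dfs(s, frozenset([s]))
```

Passing an extended copy of `visited` down each branch (rather than mutating one shared set) keeps the enumeration obviously correct at the cost of some allocation.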
algorithms complexity theory graphs
1
427
An indexing function for graphs
<p>Definition from wikipedia:</p>&#xA;&#xA;<blockquote>&#xA; <p>A graph is an ordered pair $G = (V, E)$ comprising a set $V$ of nodes together with a set $E$ of edges, which are two-element subsets of $V$.</p>&#xA;</blockquote>&#xA;&#xA;<p>The set of all finite graphs (modulo isomorphism: we don't want nodes to have identities) is countable and could be enumerated. But what would be an <em>efficient</em> (low-complexity, from a programming point of view) injection from graphs to $\mathbb{N}$?</p>&#xA;&#xA;<p><em><strong>Edit:</strong> Gilles' comment indicates that it is not known whether such a function is feasible in polynomial time. An example of an exponential-complexity function would be good enough; we can surely do better than brute enumeration?</em></p>&#xA;
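One exponential-complexity injection of the kind the edit asks for: read the adjacency matrix as an $n^2$-bit number and take the minimum over all $n!$ relabelings, which is invariant under isomorphism. A sketch (the function name and size-prefix encoding are mine):

```python
from itertools import permutations

def canonical_code(n, edges):
    """Isomorphism-invariant injection from undirected n-node graphs into
    the naturals. Isomorphic graphs share the same orbit of adjacency
    matrices, hence the same minimum; non-isomorphic graphs on n nodes
    have disjoint orbits, hence different minima. Runs in O(n! * n^2)."""
    edge_set = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    best = min(
        sum(((perm[i], perm[j]) in edge_set) << (i * n + j)
            for i in range(n) for j in range(n))
        for perm in permutations(range(n))
    )
    # Prefix with n so graphs of different sizes never collide.
    return n * (1 << (n * n)) + best
```

This beats brute enumeration of all graphs (it never materializes other graphs at all), but it is still factorial time; doing substantially better is essentially the graph canonization problem, which is why a polynomial-time bound is open.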
algorithms graphs
0
430
Easy to state open problems in computability theory
<p>I was searching for interesting and easy-to-state open problems in computability (understandable by undergraduate students taking their first course in computability) to give examples of open problems (and obviously I want the students to be able to understand the problem without needing too many new definitions, and I want it to be interesting to them). </p>&#xA;&#xA;<p>I found <a href="http://math.berkeley.edu/~slaman/qrt/qrt.pdf">this list</a>, but the problems in it seem too complicated for undergraduates and would require spending considerable time on definitions before stating the problem. The only problem I have found so far is </p>&#xA;&#xA;<blockquote>&#xA; <p>Is the Diophantine problem over the rational numbers decidable?</p>&#xA;</blockquote>&#xA;&#xA;<p>Do you know any other interesting and easy-to-state open problems in computability theory?</p>&#xA;
computability
1
431
Break an authentication protocol based on a pre-shared symmetric key
<p>Consider the following protocol, meant to authenticate $A$ (Alice) to $B$ (Bob) and vice versa.</p>&#xA;&#xA;<p>$$ \begin{align*}&#xD;&#xA; A \to B: &amp;\quad \text{“I&#39;m Alice”}, R_A \\&#xD;&#xA; B \to A: &amp;\quad E(R_A, K) \\&#xD;&#xA; A \to B: &amp;\quad E(\langle R_A+1, P_A\rangle, K) \\&#xD;&#xA;\end{align*} $$</p>&#xA;&#xA;<ul>&#xA;<li>$R$ is a random nonce.</li>&#xA;<li>$K$ is a pre-shared symmetric key.</li>&#xA;<li>$P$ is some payload.</li>&#xA;<li>$E(m, K)$ means $m$ encrypted with $K$.</li>&#xA;<li>$\langle m_1, m_2\rangle$ means $m_1$ assembled with $m_2$ in a way that can be decoded unambiguously.</li>&#xA;<li>We assume that the cryptographic algorithms are secure and implemented correctly.</li>&#xA;</ul>&#xA;&#xA;<p>An attacker (Trudy) wants to convince Bob to accept her payload $P_T$ as coming from Alice (in lieu of $P_A$). Can Trudy thus impersonate Alice? How?</p>&#xA;&#xA;<p><sub>&#xA;This is slightly modified from exercise 9.6 in <a href="http://rads.stackoverflow.com/amzn/click/0471738484"><em>Information Security: Principles and Practice</em></a> by <a href="http://www.cs.sjsu.edu/~stamp/">Mark Stamp</a>. In the book version, there is no $P_A$, the last message is just $E(R_A+1,K)$, and the requirement is for Trudy to “convince Bob that she is Alice”. Mark Stamp asks us to find two attacks, and the two I found allow Trudy to forge $E(R+1,K)$ but not $E(\langle R, P_T\rangle, K)$.&#xA;</sub></p>&#xA;
cryptography protocols authentication
1
433
Is it possible to derive a string in this rewriting system?
<p>A rewriting system is a set of rules of the form $A \leftrightarrow B$. &#xA;If we apply such a rule to a string $w$, we replace any substring $A$ in $w$ with the substring $B$, and vice versa.</p>&#xA;&#xA;<p>Given the starting string $AAABB$, can we derive $BAAB$ in the system with the following rules:</p>&#xA;&#xA;<ul>&#xA;<li>$A \leftrightarrow BA$</li>&#xA;<li>$BABA \leftrightarrow AABB$</li>&#xA;<li>$AAA \leftrightarrow AB$</li>&#xA;<li>$BA \leftrightarrow AB$</li>&#xA;</ul>&#xA;&#xA;<p>Is there a general algorithm for that?</p>&#xA;
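A bounded breadth-first search can settle derivability whenever some derivation stays within a length limit; a sketch (the names and the bound are mine). In full generality no algorithm exists: with bidirectional rules this is the word problem for Thue systems, which is undecidable. For a concrete system one can also look for invariants; here, every listed rule preserves the parity of the number of $A$'s.

```python
from collections import deque

RULES = [("A", "BA"), ("BABA", "AABB"), ("AAA", "AB"), ("BA", "AB")]

def neighbors(w):
    """All strings obtained from w by one rule application, either direction."""
    for a, b in RULES:
        for lhs, rhs in ((a, b), (b, a)):
            start = 0
            while (i := w.find(lhs, start)) != -1:
                yield w[:i] + rhs + w[i + len(lhs):]
                start = i + 1

def reachable(src, dst, max_len=12):
    """Bounded BFS: True proves derivability; False only shows that no
    derivation stays within max_len (derivations via longer intermediate
    strings are not explored)."""
    seen, queue = {src}, deque([src])
    while queue:
        w = queue.popleft()
        if w == dst:
            return True
        for v in neighbors(w):
            if len(v) <= max_len and v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```

For the concrete instance, the parity invariant already decides it: $AAABB$ has three $A$'s and $BAAB$ has two, so no derivation can connect them, and the bounded search agrees.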
computability term rewriting
1
435
Fine-grained security models for XML data
<p>Access control models are typically very coarse-grained, allowing one access to a resource (possibly with some combination of read/write/execute permission) or excluding such access. Some models of database security allow access to be granted on a per-row basis (though I don't have a reference handy).</p>&#xA;&#xA;<p>Have fine-grained access control models been designed to limit access to <em>parts of</em> an XML document? What could/should a model look like? Has any work been done in this area? Are other security models applicable in this context?</p>&#xA;&#xA;<p>For example, one could imagine that the model prevents access to a particular subtree. The subtree could either be removed or encrypted.</p>&#xA;&#xA;<p><em>Note that this need not explicitly apply to XML. Any models devised for semi-structured data are also interesting.</em></p>&#xA;
security access control structured data
0
438
Is interaction more powerful than algorithms?
<p>I've heard the motto <a href="http://www.cs.brown.edu/people/pw/papers/ficacm.ps"><strong>interaction is more powerful than algorithms</strong></a> from <a href="http://www.cs.brown.edu/~pw/">Peter Wegner</a>. The basis of the idea is that a (classical) Turing Machine cannot handle interaction, that is, communication (input/output) with the outside world/environment.</p>&#xA;&#xA;<blockquote>&#xA; <p>How can this be so? How can something be more powerful than a Turing Machine? What is the essence of this story? Why is it not more well known?</p>&#xA;</blockquote>&#xA;
computability computation models
1
439
Which combinations of pre-, post- and in-order sequentialisation are unique?
<p>We know post-order,</p>&#xA;&#xA;<pre><code>post L(x) =&gt; [x]&#xA;post N(x,l,r) =&gt; (post l) ++ (post r) ++ [x]&#xA;</code></pre>&#xA;&#xA;<p>and pre-order</p>&#xA;&#xA;<pre><code>pre L(x) =&gt; [x]&#xA;pre N(x,l,r) =&gt; [x] ++ (pre l) ++ (pre r)&#xA;</code></pre>&#xA;&#xA;<p>and in-order traversal resp. sequentialisation.</p>&#xA;&#xA;<pre><code>in L(x) =&gt; [x]&#xA;in N(x,l,r) =&gt; (in l) ++ [x] ++ (in r)&#xA;</code></pre>&#xA;&#xA;<p>One can easily see that none of them describes a given tree uniquely, even if we assume pairwise distinct keys/labels.</p>&#xA;&#xA;<p>Which combinations of the three can be used to that end and which cannot?</p>&#xA;&#xA;<p>Positive answers should include an (efficient) algorithm to reconstruct the tree and a proof (idea) of why it is correct. Negative answers should provide counterexamples, i.e. different trees that have the same representation. </p>&#xA;
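For the positive direction, the classic reconstruction from pre-order plus in-order (with pairwise distinct labels) goes: the first pre-order element is the root, and its position in the in-order sequence splits both sequences into the left and right subtrees. A Python sketch (names are mine; it returns nested tuples rather than the `L`/`N` constructors above, and also handles nodes with a missing child):

```python
def rebuild(preorder, inorder):
    """Reconstruct a binary tree with pairwise distinct labels from its
    pre-order and in-order sequences, as nested (label, left, right)
    tuples, with None for an absent subtree."""
    if not preorder:
        return None
    root = preorder[0]
    k = inorder.index(root)               # left subtree has k nodes
    left = rebuild(preorder[1:1 + k], inorder[:k])
    right = rebuild(preorder[1 + k:], inorder[k + 1:])
    return (root, left, right)
```

Correctness idea: the root is uniquely determined by the pre-order sequence, distinctness makes its in-order position unique, and both subproblems are strictly smaller, so induction applies.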
algorithms binary trees
1
443
termination of two concurrent threads with shared variables
<p>We're in a shared memory concurrency model where all reads and writes to integer variables are atomic. </p>&#xA;&#xA;<ul>&#xA;<li><code>do:</code> $S_1$ <code>in parallel with:</code> $S_2$&#160;  means to execute $S_1$ and $S_2$ in separate threads, concurrently.</li>&#xA;<li><code>atomically(</code>$E$<code>)</code>   means to evaluate $E$ atomically, i.e. all other threads are stopped during the execution of $E$.</li>&#xA;</ul>&#xA;&#xA;<p>Consider the following program:</p>&#xA;&#xA;<pre><code>x = 0; y = 4&#xA;do: # thread T1&#xA; while x != y:&#xA; x = x + 1; y = y - 1&#xA;in parallel with: # thread T2&#xA; while not atomically (x == y): pass&#xA; x = 0; y = 2&#xA;</code></pre>&#xA;&#xA;<p>Does the program always terminate? When it does terminate, what are the possible values for <code>x</code> and <code>y</code>?</p>&#xA;&#xA;<p><sub> Acknowledgement: this is a light rephrasing of exercise 2.19 in <a href="http://www.cs.arizona.edu/~greg/mpdbook/" rel="nofollow"><em>Foundations of Multithreaded, Parallel, and Distributed Programming</em></a> by Gregory R. Andrews. </sub> </p>&#xA;
concurrency shared memory imperative programming
1
444
CCS process for a drink dispenser with two different prices
<p>A drink dispenser requires the user to insert a coin ($\bar c$), then press one of three buttons: $\bar d_{\text{tea}}$ requests a cup of tea $e_{\text{tea}}$, ditto for coffee, and $\bar r$ requests a refund (i.e. the machine gives back the coin: $\bar b$). This dispenser can be modeled by the following <a href="http://en.wikipedia.org/wiki/Calculus_of_communicating_systems" rel="nofollow">CCS</a> process:</p>&#xA;&#xA;<p>$$ M \stackrel{\mathrm{def}}= c.(d_{\text{tea}}.\bar e_{\text{tea}}.M + d_{\text{coffee}}.\bar e_{\text{coffee}}.M + r.\bar b.M)$$</p>&#xA;&#xA;<p>A civil war raises the price of coffee to two coins, while the price of tea remains one coin. We want a modified machine that delivers coffee only after two coins, and acquiesces to a refund after either one or two coins. How can we model the modified machine with a CCS process?</p>&#xA;
logic concurrency modelling process algebras ccs
1
450
Ray Tracing versus object-based rendering?
<p>Intro graphics courses usually have a project that asks you to build a ray tracer to render a scene. Many graphics students entering grad school say that they want to work on ray tracing. And yet it seems that ray tracing is a dead field in venues like SIGGRAPH etc. </p>&#xA;&#xA;<p>Is ray tracing really the <em>best</em> way to render a scene accurately with all desired illumination etc., and is it just the slow (read: non-interactive) performance of ray tracers that makes them uninteresting, or is there something else?</p>&#xA;
graphics
1
451
Why are there so many programming languages?
<p>I'm pretty fluent in C/C++, and can make my way around the various scripting languages (awk/sed/perl). I've started using Python a lot more because it combines some of the nifty aspects of C++ with the scripting capabilities of awk/sed/perl.</p>&#xA;&#xA;<p>But why are there so many different programming languages? I'm guessing all these languages can do the same things, so why not just stick to one language and use that for programming computers? In particular, is there any reason I <em>should</em> know a functional language as a computer programmer? </p>&#xA;&#xA;<p>Some related reading: </p>&#xA;&#xA;<ul>&#xA;<li><a href="http://www.infoworld.com/d/application-development/why-new-programing-languages-succeed-or-fail-188648">Why new programming languages succeed -- or fail?</a> </li>&#xA;<li><a href="http://tagide.com/blog/2012/03/research-in-programming-languages/">Is there still research to be done in programming languages?</a> </li>&#xA;</ul>&#xA;
programming languages
1
454
List of intro TCS books for those who don't know much about TCS
<p>If you had to recommend books for someone who wants to learn about TCS at the introductory level (automata theory, algorithmics, complexity theory, etc.), what book(s) would you recommend to someone who is interested but has not had any exposure to the field?</p>&#xA;
education
0
455
Why do neural networks seem to perform better with restrictions placed on their topology?
<p>Fully connected (at least layer to layer with more than 2 hidden layers) backprop networks are universal learners. Unfortunately, they are often slow to learn and tend to over-fit or have awkward generalizations. </p>&#xA;&#xA;<p>From fooling around with these networks, I have observed that pruning some of the edges (so that their weight is zero and impossible to change) tends to make the networks learn faster and generalize better. Is there a reason for this? Is it only because of a decrease in the dimensionality of the weights search space, or is there a more subtle reason?</p>&#xA;&#xA;<p>Also, is the better generalization an artifact of the 'natural' problems I am looking at?</p>&#xA;
machine learning network topology neural networks
0
465
Similarities and differences in major process algebras
<p>To my knowledge, there are three major process algebras that have inspired a vast range of research into formal models of concurrency. These are:</p>&#xA;&#xA;<ul>&#xA;<li>CCS and $\pi$-calculus, both by Robin Milner</li>&#xA;<li>CSP by Tony Hoare and</li>&#xA;<li>ACP by Jan Bergstra and Jan Willem Klop</li>&#xA;</ul>&#xA;&#xA;<p>All three seem to have quite an active following to this day, and vast amounts of research have been done on them.</p>&#xA;&#xA;<blockquote>&#xA; <p>What are the key similarities and differences of these approaches? &#xA; Why has research in process algebra diverged instead of converged, in the sense that there is no one universal model to unify the field? </p>&#xA;</blockquote>&#xA;
logic concurrency process algebras
0
470
How is the key in a private key encryption protocol exchanged?
<p>Windows NT used a point-to-point protocol where a client can communicate "securely" with a server by using a stream cipher to encrypt an array of messages with some key $k$. The server also encrypts its response with the same key $k$. But how is it aware of this key?</p>&#xA;&#xA;<p>More generally: if Alice and Bob use some encryption/decryption algorithm that operates on the same private key $k$, what is a secure way of exchanging this key? (without using a different key, of course) </p>&#xA;&#xA;<p>This is something that I've always asked myself while studying private key cryptography.</p>&#xA;
cryptography encryption
1
473
Decision problems vs "real" problems that aren't yes-or-no
<p>I read in many places that some problems are difficult to approximate (it is <a href="https://en.wikipedia.org/wiki/Hardness_of_approximation" rel="noreferrer"><strong>NP-hard</strong> to approximate</a> them). But approximation is not a decision problem: the answer is a real number and not Yes or No. Also for each desired approximation factor, there are many answers that are correct and many that are wrong, and this changes with the desired approximation factor!</p>&#xA;&#xA;<p>So how can one say that this problem is NP-hard?</p>&#xA;&#xA;<p><em>(inspired by the second bullet in <a href="https://cs.stackexchange.com/q/423/157">How hard is counting the number of simple paths between two nodes in a directed graph?</a>)</em></p>&#xA;
complexity theory time complexity np hard approximation
1
477
For what kind of data are hash table operations O(1)?
<p>From the answers to <a href="https://cs.stackexchange.com/questions/249/when-is-hash-table-lookup-o1">(When) is hash table lookup O(1)?</a>, I gather that hash tables have $O(1)$ worst-case behavior, at least amortized, when the data satisfies certain statistical conditions, and there are techniques to help make these conditions broad.</p>&#xA;&#xA;<p>However, from a programmer's perspective, I don't know in advance what my data will be: it often comes from some external source. And I rarely have all the data at once: often insertions and deletions happen at a rate that's not far below the rate of lookups, so preprocessing the data to fine-tune the hash function is out.</p>&#xA;&#xA;<p>So, taking a step back: given some knowledge about the data source, how can I determine whether a hash table has a chance of having $O(1)$ operations, and possibly which techniques to use on my hash function?</p>&#xA;
data structures runtime analysis hash tables dictionaries
1
481
Break an authentication protocol based on a pre-shared symmetric key, with message numbers
<p>Consider the following protocol, meant to authenticate $A$ (Alice) to $B$ (Bob) and vice versa.</p>&#xA;&#xA;<p>$$ \begin{align*}&#xA; A \to B: &amp;\quad \text{“I'm Alice”}, R_A \\&#xA; B \to A: &amp;\quad E(\langle 1, R_A\rangle, K) \\&#xA; A \to B: &amp;\quad E(\langle 2, R_A+1, P_A\rangle, K) \\&#xA;\end{align*} $$</p>&#xA;&#xA;<ul>&#xA;<li>$R$ is a random nonce.</li>&#xA;<li>$K$ is a pre-shared symmetric key.</li>&#xA;<li>$P$ is some payload.</li>&#xA;<li>$E(m, K)$ means $m$ encrypted with $K$.</li>&#xA;<li>$\langle m_1, \ldots, m_n\rangle$ means an assemblage of the $m_i$'s that can be decoded unambiguously ($n$ is encoded unambiguously as well).</li>&#xA;<li>We assume that the cryptographic algorithms are secure and implemented correctly.</li>&#xA;</ul>&#xA;&#xA;<p>An attacker (Trudy) wants to convince Bob to accept her payload $P_T$ as coming from Alice (in lieu of $P_A$). Can Trudy thus impersonate Alice? How?</p>&#xA;&#xA;<p><sub>&#xA;This is a follow-up to <a href="https://cs.stackexchange.com/questions/431/break-an-authentication-protocol-based-on-a-pre-shared-symmetric-key">Break an authentication protocol based on a pre-shared symmetric key</a>.&#xA;</sub></p>&#xA;
cryptography protocols authentication
1
492
Saving on array initialization
<p>I recently read that it is possible to have arrays which need not be initialized; that is, you can start using the array as if it had been initialized with the default value, without having to spend any time setting each member to that value. (Sorry, I don't remember where I read this.)</p>&#xA;&#xA;<p>As an example of why this can be surprising:</p>&#xA;&#xA;<p>Say you are trying to model a <em>worst</em> case $\mathcal{O}(1)$ hashtable (for each of insert/delete/search) of integers in the range $[1, n^2]$.</p>&#xA;&#xA;<p>You can allocate an array of size $n^2$ bits and use individual bits to represent the existence of an integer in the hashtable. Note: allocating memory is considered $\mathcal{O}(1)$ time.</p>&#xA;&#xA;<p>Now, if you did not have to initialize this array at all, any sequence of say $n$ operations on this hashtable is now worst case $\mathcal{O}(n)$.</p>&#xA;&#xA;<p>So in effect, you have a "perfect" hash implementation, which for a sequence of $n$ operations uses $\Theta(n^2)$ space, but runs in $\mathcal{O}(n)$ time!</p>&#xA;&#xA;<p>Normally one would expect your runtime to be at least as bad as your space usage!</p>&#xA;&#xA;<p>Note: The example above might be used for an implementation of a sparse set or sparse matrix, so it is not only of theoretical interest, I suppose.</p>&#xA;&#xA;<p>So the question is:</p>&#xA;&#xA;<blockquote>&#xA; <p>How is it possible to have an array-like data structure which allows us to skip the initialization step?</p>&#xA;</blockquote>&#xA;
data structures arrays
1
494
Clustering of Songs (The Joe Walsh Problem)
<p>The Eagles are a rock supergroup from the 70s and 80s, responsible for such classics as <em>Hotel California</em>. They have two quite distinctive sounds, one where guitarist Joe Walsh is present (for example, in <em>Life in the Fast Lane</em>) and one where he is absent. The latter songs have a markedly more sombre/boring feel.</p>&#xA;&#xA;<p>I'm curious to understand the degree to which an (unsupervised) learning algorithm would be able to detect the difference between the two sounds. One could imagine that it would be easy to tell the difference between speed metal and classical music, but what about sounds by the same band.</p>&#xA;&#xA;<blockquote>&#xA; <p>How would I set up such an experiment? Assume that I already have the relevant audio files in some standard format.</p>&#xA;</blockquote>&#xA;&#xA;<p>Note that this should also apply to other rock groups, such as AC/DC who had a change of lead singer in 1980, and possibly even to other genres, possibly even more modern music.</p>&#xA;
machine learning modelling
0
495
Are generational garbage collectors inherently cache-friendly?
<p>A typical <a href="http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Generational_GC_.28ephemeral_GC.29">generational garbage collector</a> keeps recently allocated data in a separate memory region. In typical programs, a lot of data is short-lived, so collecting young garbage (a minor GC cycle) frequently and collecting old garbage infrequently is a good compromise between memory overhead and time spent doing GC.</p>&#xA;&#xA;<p>Intuitively, the benefit of a generational garbage collector compared with a single-region collector should increase as the latency ratio of main memory relative to cache increases, because the data in the young region is accessed often and kept all in one place. Do experimental results corroborate this intuition?</p>&#xA;
programming languages computer architecture cpu cache garbage collection
1
524
Does there exist a priority queue with $O(1)$ extracts?
<p>There are a great many data structures that implement the priority-queue interface:</p>&#xA;&#xA;<ul>&#xA;<li>Insert: insert an element into the structure</li>&#xA;<li>Get-Min: return the smallest element in the structure</li>&#xA;<li>Extract-Min: remove the smallest element in the structure</li>&#xA;</ul>&#xA;&#xA;<p>Common data structures implementing this interface are (min)<a href="http://en.wikipedia.org/wiki/Heap_%28data_structure%29">heaps</a>.</p>&#xA;&#xA;<p>Usually, the (amortized) running times of these operations are:</p>&#xA;&#xA;<ul>&#xA;<li>Insert: $\mathcal{O}(1)$ (sometimes $\mathcal{O}(\log n)$)</li>&#xA;<li>Get-Min: $\mathcal{O}(1)$</li>&#xA;<li>Extract-Min: $\mathcal{O}(\log n)$</li>&#xA;</ul>&#xA;&#xA;<p>The <a href="http://en.wikipedia.org/wiki/Fibonacci_heap">Fibonacci heap</a> achieves these running times for example. Now, my question is the following:</p>&#xA;&#xA;<blockquote>&#xA; <p>Is there a data structure with the following (amortized) running times?</p>&#xA;</blockquote>&#xA;&#xA;<ul>&#xA;<li>Insert: $\mathcal{O}(\log n)$</li>&#xA;<li>Get-Min: $\mathcal{O}(1)$</li>&#xA;<li>Extract-Min: $\mathcal{O}(1)$</li>&#xA;</ul>&#xA;&#xA;<p>If we can construct such a structure in $\mathcal{O}(n)$ time given sorted input, then we can for instance find line intersections on pre-sorted inputs with $o\left(\frac{n}{\log n}\right)$ intersections strictly faster than if we use the 'usual' priority queues.</p>&#xA;
data structures amortized analysis priority queues
1
525
What is coinduction?
<p>I've heard of (structural) induction. It allows you to build up finite structures from smaller ones and gives you proof principles for reasoning about such structures. The idea is clear enough.</p>&#xA;&#xA;<blockquote>&#xA; <p>But what about coinduction? How does it work? How can one say anything conclusive about an infinite structure?</p>&#xA;</blockquote>&#xA;&#xA;<p>There are (at least) two angles to address, namely, coinduction as a way of defining things and as a proof technique. </p>&#xA;&#xA;<blockquote>&#xA; <p>Regarding coinduction as a proof technique, what is the relationship between coinduction and bisimulation?</p>&#xA;</blockquote>&#xA;
terminology logic proof techniques formal methods coinduction
1
539
Visual Programming languages
<p>Most of us learned programming using "textual" programming languages like Basic, C/C++, and Java. I believe it is more natural and efficient for humans to think visually. Visual programming allows developers to write programs by manipulating graphical elements. I guess using visual programming should improve the quality of code and reduce programming bugs. I'm aware of a few visual languages such as <a href="http://appinventoredu.mit.edu/">App Inventor</a>, <a href="http://scratch.mit.edu/">Scratch</a>, and <a href="http://www.ni.com/labview/">LabView</a>. </p>&#xA;&#xA;<p>Why are there no mainstream, general-purpose visual languages for developers? What are the advantages and disadvantages of visual programming?</p>&#xA;
programming languages
1
540
Notions of efficient computation
<p>A polynomial-time Turing machine algorithm is considered efficient if its run-time, in the worst case, is bounded by a polynomial function in the input size. I'm aware of the strong Church-Turing thesis:</p>&#xA;&#xA;<blockquote>&#xA; <p>Any reasonable model of computation can be efficiently simulated on Turing machines</p>&#xA;</blockquote>&#xA;&#xA;<p>However, I'm not aware of a solid theory for analyzing the computational complexity of algorithms in the $\lambda$-calculus.</p>&#xA;&#xA;<p>Do we have a notion of computational efficiency for every known model of computation? Are there any models that are only useful for computability questions but useless for computational complexity questions?</p>&#xA;
complexity theory efficiency computation models
1
541
When are two simulations not a bisimulation?
<p>Given a <a href="http://en.wikipedia.org/wiki/State_transition_system">labelled transition system</a> $(S,\Lambda,\to)$, where $S$ is a set of states, $\Lambda$ is a set of labels, and $\to\subseteq S\times\Lambda\times S$ is a ternary relation. As usual, write $p \stackrel\alpha\rightarrow q$ for $(p,\alpha,q)\in\to$. The labelled transition $p\stackrel\alpha\to q$ denotes that the system in state $p$ changes state to $q$ with label $\alpha$, meaning that $\alpha$ is some observable action that causes the state change.</p>&#xA;&#xA;<p>Now a relation $R \subseteq S \times S$ is a called a <em>simulation</em> iff&#xA;$$ \forall (p,q)\in R, &#xD;&#xA; \text{ if } p \stackrel\alpha\rightarrow p&#39;&#xD;&#xA; \text{ then } \exists q&#39;, \;&#xD;&#xA; q \stackrel\alpha\rightarrow q&#39; \text{ and } (p&#39;,q&#39;)\in R.&#xD;&#xA;$$</p>&#xA;&#xA;<p>One LTS is said to <em>simulate</em> another if there exists a simulation relation between them. </p>&#xA;&#xA;<p>Similarly, a relation $R \subseteq S \times S$ is a <em>bisimulation</em> iff $\forall (p,q)\in R,$ &#xA;$$ &#xD;&#xA;\begin{array}{l}&#xD;&#xA; \text{ if } p \stackrel\alpha\rightarrow p&#39;&#xD;&#xA; \text{ then } \exists q&#39;, \;&#xD;&#xA; q \stackrel\alpha\rightarrow q&#39; \text{ and } (p&#39;,q&#39;)\in R&#xD;&#xA;\text{ and } \\&#xD;&#xA;\text{ if } q \stackrel\alpha\rightarrow q&#39;&#xD;&#xA; \text{ then } \exists p&#39;, \;&#xD;&#xA; p \stackrel\alpha\rightarrow p&#39; \text{ and } (p&#39;,q&#39;)\in R.&#xD;&#xA;\end{array}&#xD;&#xA;$$</p>&#xA;&#xA;<p>Two LTSs are said to be bisimilar iff there exists a bisimulation between their state spaces.</p>&#xA;&#xA;<p>Clearly these two notions are quite related, but they are not the same.</p>&#xA;&#xA;<blockquote>&#xA; <p>Under what conditions is it the case that an LTS simulates another and vice versa, but that the two LTSs are not bisimilar?</p>&#xA;</blockquote>&#xA;
programming languages formal methods semantics process algebras
1
553
How Do Common Pathfinding Algorithms Compare To Human Process
<p>This might border on computational cognitive science, but I am curious as to how the process followed by common pathfinding algorithms (such as <a href="http://en.wikipedia.org/wiki/A_star_search_algorithm">A*</a>) compares to the process humans use in different pathfinding situations (given the same information). Are these processes similar?</p>&#xA;
algorithms graphs artificial intelligence
1
555
Given a string and a CFG, what characters can follow the string (in the sentential forms of the CFG)?
<p>Let $\Sigma$ be the set of terminal symbols and $N$ the set of non-terminal symbols of some context-free grammar $G$.</p>&#xA;&#xA;<p>Say I have a string $a \in (\Sigma \cup N)^+$ such that $x a y \in \mathcal{S}(G)$ where $x,y\in (\Sigma \cup N)^*$ and $\mathcal{S}(G)$ are the sentential forms of $G$.</p>&#xA;&#xA;<p>Given $G$, I'd like to determine a set $C = \{ b \mid wabz \in \mathcal{S}(G), b \in \Sigma \cup N \}$. </p>&#xA;&#xA;<p>To clarify, in this case, $w, x, y, z, a, b$ are strings of terminals and non-terminals and $b$ is of length one.</p>&#xA;&#xA;<p>I can see how to do this if $a$ is also of length one; each $b$ is a member of the follow set of $a$ (including non-terminals).</p>&#xA;&#xA;<p>However, I am curious if it's possible for a sequence of characters. For my application, the string $a$ is not much longer than the right hand side of the productions in $G$.</p>&#xA;&#xA;<p>The distinction between terminals and non-terminals is somewhat moot in my application because I am using a generative grammar; and I believe that this won't lead to much trouble since $b$ is of length one.</p>&#xA;
algorithms context free formal grammars compilers
1
559
Creating a Self Ordering Binary Tree
<p>I have an assignment where I need to make use of a binary search tree and alter it to order itself such that items that are accessed the most (have a higher priority) are at the top of the tree, the root being the most accessed node.</p>&#xA;&#xA;<p>The professor gave me the BST and node struct to work with, but trying to get my head around the algorithm to update the tree as things are being inserted is confusing me.</p>&#xA;&#xA;<p>I know that as the insert is happening, it checks whether the value being inserted is less than or greater than the current node's data, then recursively goes in the correct direction until it finds a null pointer and inserts itself there. After it is inserted, it increases the priority by 1.</p>&#xA;&#xA;<pre><code>template &lt;class Type&gt;&#xA;void BinarySearchTree&lt;Type&gt; :: insert( const Type &amp; x, BinaryNode&lt;Type&gt; * &amp; t )&#xA;{&#xA; if( t == NULL )&#xA; t = new BinaryNode&lt;Type&gt;( x, NULL, NULL );&#xA; else if( x &lt; t-&gt;element )&#xA; insert( x, t-&gt;left );&#xA; else if( t-&gt;element &lt; x )&#xA; insert( x, t-&gt;right );&#xA; else&#xA; t-&gt;priority++; // Duplicate; do nothing for right now&#xA;}&#xA;</code></pre>&#xA;&#xA;<p>Now I need to figure out, when the node is equal, how to re-order the tree so that the current node (which is equal to an already existing node) finds the existing node, increases that node's priority, then shifts it up if the root has a lower priority.</p>&#xA;&#xA;<p>I think I have the idea down that the AVL logic would work, and when a shift would take place, it would be a single rotation right or a single rotation left.</p>&#xA;&#xA;<p>Here's where I'm confused: I don't really know where to start with creating an algorithm to solve the problem. 
While the AVL algorithm works by keeping track of the balance of a tree and then rotating nodes left or right accordingly, this tree doesn't need to worry about being balanced, only that no node have children with a higher priority than its own.</p>&#xA;
algorithms data structures binary trees search trees
1
561
Why has research on genetic algorithms slowed?
<p>While discussing some intro level topics today, including the use of genetic algorithms, I was told that research has really slowed in this field. The reason given was that most people are focusing on machine learning and data mining. <br>&#xA;<strong>Update:</strong> Is this accurate? And if so, what advantages does ML/DM have when compared with GA?</p>&#xA;
machine learning data mining evolutionary computing history
1
569
BigO, Running Time, Invariants - Learning Resources
<p>What are some good online resources that will help me better understand BigO notation, running time &amp; invariants?</p>&#xA;&#xA;<p>I'm looking for lectures, interactive examples if possible. </p>&#xA;
algorithms landau notation education runtime analysis
0
570
Generating uniformly distributed random numbers using a coin
<p>You have one coin. You may flip it as many times as you want. </p>&#xA;&#xA;<p>You want to generate a random number $r$ such that $a \leq r &lt; b$ where $r,a,b\in \mathbb{Z}^+$. </p>&#xA;&#xA;<p>Distribution of the numbers should be uniform. </p>&#xA;&#xA;<p>It is easy if $b -a = 2^n$:</p>&#xA;&#xA;<pre><code>r = a + binary2dec(flip n times write 0 for heads and 1 for tails) &#xA;</code></pre>&#xA;&#xA;<p>What if $b-a \neq 2^n$?</p>&#xA;
algorithms probability theory randomness random number generator
0
576
Expressing an arbitrary permutation as a sequence of (insert, move, delete) operations
<p>Suppose I have two strings. Call them $A$ and $B$. Neither string has any repeated characters.</p>&#xA;&#xA;<p>How can I find the shortest sequence of insert, move, and delete operations that turns $A$ into $B$, where:</p>&#xA;&#xA;<ul>&#xA;<li><code>insert(char, offset)</code> inserts <code>char</code> at the given <code>offset</code> in the string</li>&#xA;<li><code>move(from_offset, to_offset)</code> moves the character currently at offset <code>from_offset</code> to a new position so that it has offset <code>to_offset</code></li>&#xA;<li><code>delete(offset)</code> deletes the character at <code>offset</code></li>&#xA;</ul>&#xA;&#xA;<p>Example application: You do a database query and show the results on your website. Later, you rerun the database query and discover that the results have changed. You want to change what is on the page to match what is currently in the database using the minimum number of DOM operations. There are two reasons why you'd want the shortest sequence of operations. First, efficiency. When only a few records change, you want to make sure that you do $\mathcal{O}(1)$ rather than $\mathcal{O}(n)$ DOM operations, since they are expensive. Second, correctness. If an item moved from one position to another, you want to move the associated DOM nodes in a single operation, without destroying and recreating them. Otherwise you will lose focus state, the content of <code>&lt;input&gt;</code> elements, and so forth.</p>&#xA;
algorithms combinatorics string metrics
0
578
What is the complexity of these tree-based algorithms?
<p>Suppose we have a balanced binary tree, which represents a recursive partitioning of a set of $N$ points into nested subsets. Each node of the tree represents a subset, with the following properties: subsets represented by two children nodes of the same parent are disjoint, and their union is equal to the subset represented by the parent. The root represents the full set of points, and each leaf represents a single distinct point. So there are $\log N$ levels to the tree, and each level of the tree represents a partitioning of the points into increasingly fine levels of granularity.</p>&#xA;&#xA;<p>Now suppose we have two algorithms, each of which operates on all of the subsets of the tree. The first does $O(D^2)$ operations at each node, where $D$ is the size of the subset represented by the node. The second does $O(D \log D)$ operations at each node. What is the worst case runtime of these two algorithms?</p>&#xA;&#xA;<p>We can easily bound the first algorithm as $O(N^2 \log N)$, because it does $O(N^2)$ work at each of $\log N$ levels of the tree. Similarly, we can bound the second algorithm as $O(N \log ^2 N)$, by similar reasoning.</p>&#xA;&#xA;<p>The question is, are these bounds tight, or can we do better? How do we prove it?</p>&#xA;
algorithms time complexity binary trees
1
580
What combination of data structures efficiently stores discrete Bayesian networks?
<p>I understand the theory behind Bayesian networks, and am wondering what it takes to build one in practice. Let's say for this example, that I have a Bayesian (directed) network of 100 discrete random variables; each variable can take one of up to 10 values.</p>&#xA;&#xA;<p>Do I store all the nodes in a DAG, and for each node store its Conditional Probability Table (CPT)? Are there other data structures I should make use of to ensure efficient computation of values when some CPTs change (apart from those used by a DAG)?</p>&#xA;
data structures machine learning
1
581
Algorithmic intuition for logarithmic complexity
<p>I believe I have a reasonable grasp of complexities like <span class="math-container">$\mathcal{O}(1)$</span>, <span class="math-container">$\Theta(n)$</span> and <span class="math-container">$\Theta(n^2)$</span>.</p>&#xA;&#xA;<p>In terms of a list, <span class="math-container">$\mathcal{O}(1)$</span> is a constant lookup, so it's just getting the head of the list.&#xA;<span class="math-container">$\Theta(n)$</span> is where I'd walk the entire list once, and <span class="math-container">$\Theta(n^2)$</span> is walking the list once for each element in the list.</p>&#xA;&#xA;<p>Is there a similarly intuitive way to grasp <span class="math-container">$\Theta(\log n)$</span> other than just knowing it lies somewhere between <span class="math-container">$\mathcal{O}(1)$</span> and <span class="math-container">$\Theta(n)$</span>?</p>&#xA;
algorithms complexity theory time complexity intuition
1
586
Could quantum computing eventually be used to make modern day hashing trivial to break?
<p>Simply put, if one were to build a quantum computing device with the power of, say, 20 qubits, could such a computer be used to make any kind of modern hashing algorithm useless?</p>&#xA;&#xA;<p>Would it even be possible to harness the power of quantum computing in a traditional computing application?</p>&#xA;
cryptography quantum computing hash
1
588
Can every linear grammar be converted to Greibach form?
<p>Can every <a href="http://en.wikipedia.org/wiki/Linear_grammar" rel="nofollow">linear grammar</a> be converted to a linear <a href="http://en.wikipedia.org/wiki/Greibach_normal_form" rel="nofollow">Greibach normal form</a>, a form in which all productions look like $A \rightarrow ax$ where $a \in T$ and $x \in V \cup \{\lambda\}$?</p>&#xA;&#xA;<p>($T$ is the set of terminals, $V$ is the set of non-terminals, $\lambda$ is the empty sequence.)</p>&#xA;
formal languages formal grammars
1
594
CPU frequency per year
<p>I know that since ~2004, Moore's law stopped working for CPU clock speed.&#xA;I'm looking for a graph showing this, but am unable to find it: most charts out there show the transistor count or the capacity per year.</p>&#xA;&#xA;<p>Where can I find some data showing the CPU frequency of computers (anything is fine, personal computers, servers, laptops, ...) from the last few decades to today?<br>&#xA;Raw data that I can plot myself would be fine as well (hum, probably even better).</p>&#xA;
computer architecture empirical research data sets
1
602
Measuring one way network latency
<p>This is a puzzle about measuring network latency that I created. I believe the solution is that it's impossible, but friends disagree. I'm looking for convincing explanations either way. (Though it is posed as a puzzle I think it fits on this web site because of its applicability to the design and experience of communication protocols such as in online games, not to mention NTP.)</p>&#xA;&#xA;<p>Suppose two robots are in two rooms, connected by a network with differing one-way latencies as shown in the graphic below. When robot A sends a message to robot B it takes 3 seconds for it to arrive, but when robot B sends a message to robot A it takes 1 second to arrive. The latencies never vary.</p>&#xA;&#xA;<p>The robots are identical and do not have a shared clock, though they can measure the passage of time (e.g. they have stop watches). They do not know which of them is robot A (whose messages are delayed 3s) and which is robot B (whose messages are delayed by 1s).</p>&#xA;&#xA;<p>A protocol to discover the round trip time is:</p>&#xA;&#xA;<pre><code>whenReceive(TICK).then(send TOCK)&#xA;&#xA;// Wait for the other robot to wake up&#xA;send READY&#xA;await READY&#xA;send READY&#xA;&#xA;// Measure RTT&#xA;t0 = startStopWatch()&#xA;send TICK&#xA;await TOCK&#xA;t1 = stopStopWatch()&#xA;rtt = t1 - t0 //ends up equalling 4 seconds&#xA;</code></pre>&#xA;&#xA;<p>Is there a protocol to determine the one way trip delays? Can the robots discover which of them has the longer message sending delay?</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/uYIGW.png" alt="Two robots one asymmetric network"></p>&#xA;
computer networks protocols distributed systems
1
614
Late and Early Bisimulation
<p>This is a follow up to my earlier questions on <a href="https://cs.stackexchange.com/q/525/31">coinduction</a> and <a href="https://cs.stackexchange.com/q/541/31">bisimulation</a>.</p>&#xA;&#xA;<p>A relation $R \subseteq S \times S$ on the states of an LTS is a <em>bisimulation</em> iff $\forall (p,q)\in R,$ &#xA;$$ &#xA;\begin{array}{l}&#xA; \text{ if } p \stackrel\alpha\rightarrow p'&#xA; \text{ then } \exists q', \;&#xA; q \stackrel\alpha\rightarrow q' \text{ and } (p',q')\in R&#xA;\text{ and } \\&#xA;\text{ if } q \stackrel\alpha\rightarrow q'&#xA; \text{ then } \exists p', \;&#xA; p \stackrel\alpha\rightarrow p' \text{ and } (p',q')\in R.&#xA;\end{array}&#xA;$$</p>&#xA;&#xA;<p>This is a very powerful and very natural notion, after you come to appreciate it. But it's not the only notion of bisimulation. In special circumstances, such as in the context of the <a href="http://en.wikipedia.org/wiki/%CE%A0-calculus" rel="nofollow noreferrer">$\pi$-calculus</a>, other notions such as open, branching, weak, barbed, late and early bisimulation exist, though I do not fully appreciate the differences. But for this question, I want to limit focus just two notions.</p>&#xA;&#xA;<blockquote>&#xA; <p>What are <em>late</em> and <em>early</em> bisimulation and why would I use one of these notions instead of standard bisimulation?</p>&#xA;</blockquote>&#xA;
semantics formal methods process algebras pi calculus
1
615
Improve worst case time of depth first search on Euler graphs
<p>How can I improve the worst case scenario for a depth first search on an Euler graph, starting at some point and ending at that same point?</p>&#xA;&#xA;<p>I need to do the whole search, but it is not fast enough for large amounts of data. I have tried <a href="https://en.wikipedia.org/wiki/Bidirectional_search" rel="nofollow">bidirectional search</a>, but I cannot keep the result numerically ordered. Therefore I wonder if there is any other good method to improve the worst case scenario for the depth first search.</p>&#xA;
algorithms graphs graph traversal eulerian paths
0
616
How can I decide manually whether two CTL formulae are equivalent?
<p>Assume I have two formulae $\Phi$ and $\Psi$ (over the same set of atomic propositions $AP$) in <a href="http://en.wikipedia.org/wiki/Computation_tree_logic" rel="nofollow">CTL</a>. We have that $\Phi \equiv \Psi$ iff $Sat_{TS}(\Phi) = Sat_{TS}(\Psi)$ for all transition systems $TS$ over $AP$.</p>&#xA;&#xA;<p>Given that there are infinitely many transition systems, it's impossible to check them all. I thought about using PNF (Positive Normal Form, allowing negation only next to literals) because judging from its name it should give me the same formula for $\Phi$ as for $\Psi$ iff they are equivalent, but I'm not convinced this works in all cases (you could say, I'm not convinced PNF is actually a normal form).</p>&#xA;&#xA;<p>For example, take $\forall \mathrm{O} \forall \lozenge \Phi_0 \stackrel{?}{\equiv} \forall \lozenge \forall \mathrm{O} \Phi_0$ (where $\mathrm{O}$ is the <code>next</code> operator and $\lozenge$ is the <code>eventually</code> operator). I'm looking for a way do do this by hand.</p>&#xA;
logic model checking computation tree logic
0
618
How can solutions of a Diophantine equation be expressed as a language?
<p>I was given the question </p>&#xA;&#xA;<blockquote>&#xA; <p>Where does the following language fit in the Chomsky hierarchy?</p>&#xA; &#xA; <p>Nonnegative solutions $(x,y)$ to the Diophantine equation $3x-y=1$.</p>&#xA;</blockquote>&#xA;&#xA;<p>I understand languages like $L = \{ 0^n1^n \mid n \ge 1\}$, but this language confuses me. What do the words in the language look like? How could I represent it using a grammar or regular expression?</p>&#xA;
formal languages computability
0
619
How can I prove this language is not context-free?
<p>I have the following language</p>&#xA;&#xA;<p>$\qquad \{0^i 1^j 2^k \mid 0 \leq i \leq j \leq k\}$</p>&#xA;&#xA;<p>I am trying to determine which Chomsky language class it fits into. I can see how it could be made using a context-sensitive grammar, so I know it is at least context-sensitive. It seems like it wouldn't be possible to make with a context-free grammar, but I'm having a problem proving that.</p>&#xA;&#xA;<p>It seems to pass the fork-pumping lemma, because if $uvwxy$ is all placed in the third part of any word (the section with all of the $2$s), it could pump the $v$ and $x$ as many times as you want and it would stay in the language. If I'm wrong, could you tell me why? If I'm right, I still think this language is not context-free, so how could I prove that?</p>&#xA;
formal languages context free formal grammars pumping lemma
0
625
Find shortest paths in a weighed unipathic graph
<p>A directed graph is said to be <em>unipathic</em> if for any two vertices $u$ and $v$ in&#xA;the graph $G=(V,E)$, there is at most one simple path from $u$ to $v$. </p>&#xA;&#xA;<p>Suppose I am given a unipathic graph $G$ such that each edge has a positive or negative weight, but contains no negative weight cycles.</p>&#xA;&#xA;<p>From this I want to find an $O(|V|)$ algorithm that finds all the shortest paths to all nodes from a source node $s$.</p>&#xA;&#xA;<p>I am not sure how I would go about approaching this problem. I am trying to see how I could use the fact that it contains no negative weight cycles and, of course, at most one simple path between any nodes $u$ and $v$.</p>&#xA;
algorithms graphs
1
627
Decidablity of Languages of Grammars and Automata
<p><em>Note this is a question related to study in a CS course at a university, it is NOT homework and can be found <a href="http://www.cs.ucf.edu/%7Edmarino/ucf/transparency/cot4210/exam/" rel="noreferrer">here</a> under Fall 2011 exam2.</em></p>&#xA;<p>Here are the two questions I'm looking at from a past exam. They seem to be related, the first:</p>&#xA;<blockquote>&#xA;<p>Let</p>&#xA;<p><span class="math-container">$\qquad \mathrm{FINITE}_{\mathrm{CFG}} = \{ &lt; \! G \! &gt; \mid G \text{ is a Context Free Grammar with } |\mathcal{L}(G)|&lt;\infty \} $</span></p>&#xA;<p>Prove that <span class="math-container">$\mathrm{FINITE}_{\mathrm{CFG}}$</span> is a decidable language.</p>&#xA;</blockquote>&#xA;<p>and...</p>&#xA;<blockquote>&#xA;<p>Let</p>&#xA;<p><span class="math-container">$\qquad \mathrm{FINITE}_{\mathrm{TM}} = \{ &lt; \! M\!&gt; \mid M \text{ is a Turing Machine with } |\mathcal{L}(M)|&lt;\infty \}$</span></p>&#xA;<p>Prove that <span class="math-container">$\mathrm{FINITE}_{\mathrm{TM}}$</span> is an undecidable language.</p>&#xA;</blockquote>&#xA;<p>I am a bit lost on how to tackle these problems, but I have a few insights which I think may be in the right direction. The first thing is that I am aware of is that the language <span class="math-container">$A_{\mathrm{REX}}$</span>, where</p>&#xA;<blockquote>&#xA;<p><span class="math-container">$\qquad A_{\mathrm{REX}} = \{ &lt;\! R, w \!&gt; \mid R \text{ is a regular expression with } w \in\mathcal{L}(R)\}$</span></p>&#xA;</blockquote>&#xA;<p>is a decidable language (proof is in Michael Sipser's <i>Theory of Computation</i>, pg. 168). The same source also proves that a Context Free Grammar can be converted to a regular expression, and vice versa. Thus <span class="math-container">$A_{\mathrm{CFG}}$</span>, must also be decidable as it can be converted to a regular expression. 
This, and the fact that <span class="math-container">$A_{\mathrm{TM}}$</span> is <b>un</b>-decidable, seems to be related to this problem.</p>&#xA;<p>The only thing I can think of is passing G to Turing machines for <span class="math-container">$A_{\mathrm{REX}}$</span> (after converting G to a regular expression) and <span class="math-container">$A_{\mathrm{TM}}$</span>. Then accepting if G does and rejecting if G doesn't. As <span class="math-container">$A_{\mathrm{TM}}$</span> is undecidable, this will never happen. Somehow I feel like I'm making a mistake here, but I'm not sure of what it is. Could someone please lend me a hand here?</p>&#xA;
formal languages computability context free regular languages turing machines
1
634
What is beta equivalence?
<p>In the lecture notes I am currently reading on the lambda calculus, beta equivalence is defined as follows:</p>&#xA;&#xA;<blockquote>&#xA; <p>The $\beta$-equivalence $\equiv_\beta$ is the smallest equivalence that contains $\rightarrow_\beta$.</p>&#xA;</blockquote>&#xA;&#xA;<p>I have no idea what that means. Can someone explain it in simpler terms? Maybe with an example?</p>&#xA;&#xA;<p>I need it for a lemma following from the Church-Rosser theorem, which says</p>&#xA;&#xA;<blockquote>&#xA; <p>If $M \equiv_\beta N$ then there is an $L$ with $M \twoheadrightarrow_\beta L$ and $N \twoheadrightarrow_\beta L$.</p>&#xA;</blockquote>&#xA;
logic terminology lambda calculus type theory
1
636
A Question relating to a Turing Machine with a useless state
<p>OK, so here is a question from a past test in my Theory of Computation class:</p>&#xA;&#xA;<blockquote>&#xA; <p>A useless state in a TM is one that is never entered on any input string. Let $$\mathrm{USELESS}_{\mathrm{TM}} = \{\langle M, q \rangle \mid q \text{ is a useless state in }M\}.$$&#xA; Prove that $\mathrm{USELESS}_{\mathrm{TM}}$ is undecidable. </p>&#xA;</blockquote>&#xA;&#xA;<p>I think I have an answer, but I'm not sure if it is correct. Will include it in the answer section.</p>&#xA;
computability undecidability formal methods turing machines
1
640
Language of the values of an affine function
<p>Write $\bar n$ for the decimal expansion of $n$ (with no leading <code>0</code>). Let $a$ and $b$ be integers, with $a &gt; 0$. Consider the language of the decimal expansions of the multiples of $a$ plus a constant:</p>&#xA;&#xA;<p>$$M = \{ \overline{a\,x+b} \mid x\in\mathbb{N} \}$$</p>&#xA;&#xA;<p>Is $M$ regular? context-free?</p>&#xA;&#xA;<p>(Contrast with <a href="https://cs.stackexchange.com/questions/641/language-of-the-graph-of-an-affine-function">Language of the graph of an affine function</a>)</p>&#xA;&#xA;<p><sub> I think this would make a good homework question, so answers that start with a hint or two and explain not just how to solve the question but also how to decide what techniques to use would be appreciated. </sub></p>&#xA;
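<p>To get a feel for the language, here is a small Python sketch of mine (an assumption on my part: $\bar n$ is only defined for non-negative values, so negative values of $a\,x+b$ are skipped) that lists the first few words of $M$:</p>

```python
def members(a, b, count):
    """First `count` words of M = { decimal(a*x + b) : x in N },
    skipping negative values (whose decimal expansion is left undefined here).
    str() of a non-negative int never has a leading zero, matching the bar
    notation above."""
    out, x = [], 0
    while len(out) < count:
        v = a * x + b
        if v >= 0:
            out.append(str(v))
        x += 1
    return out
```

<p>For instance, <code>members(3, 1, 4)</code> yields the words <code>"1"</code>, <code>"4"</code>, <code>"7"</code>, <code>"10"</code>.</p>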
formal languages context free regular languages integers
1
641
Language of the graph of an affine function
<p>Write $\bar n$ for the decimal expansion of $n$ (with no leading <code>0</code>). Let <code>:</code> be a symbol distinct from any digit. Let $a$ and $b$ be integers, with $a &gt; 0$. Consider the language of solutions of the Diophantine equation $y=ax+b$:</p>&#xA;&#xA;<p>$$L = \{ \bar{x} \mathtt: \bar{y} \mid y = a\,x + b \}$$</p>&#xA;&#xA;<p>Is $L$ regular? context-free?</p>&#xA;&#xA;<p>(Contrast with <a href="https://cs.stackexchange.com/questions/640/language-of-the-multiples-of-an-integer">Language of the values of an affine function</a>)</p>&#xA;&#xA;<p><sub>(Follows on <a href="https://cs.stackexchange.com/questions/618/how-can-solutions-of-a-diophantine-equation-be-expressed-as-a-language">How can solutions of a Diophantine equation be expressed as a language?</a>)</sub></p>&#xA;&#xA;<p><sub> I think this would make a good homework question, so answers that start with a hint or two and explain not just how to solve the question but also how to decide what techniques to use would be appreciated. </sub></p>&#xA;
formal languages regular languages context free integers
1
645
Deciding on Sub-Problems for Dynamic Programming
<p>I have used the technique of dynamic programming multiple times, but when a friend recently asked me how I go about defining my sub-problems, I realized that I had no way of providing an objective, formal answer. How do you formally define a sub-problem for a problem that you would solve using dynamic programming?</p>&#xA;
algorithms dynamic programming
1
653
Is there a difference between $\lambda xy.xy$ and $\lambda x.\lambda y.xy$?
<p>I am currently learning the lambda calculus and was wondering about the following two different ways of writing a lambda term. </p>&#xA;&#xA;<ol>&#xA;<li>$\lambda xy.xy$ </li>&#xA;<li>$\lambda x.\lambda y.xy$</li>&#xA;</ol>&#xA;&#xA;<p>Is there any difference in meaning or in the way you apply beta reduction, or are these just two ways to express the same thing?</p>&#xA;&#xA;<p>Especially this definition of pair creation made me wonder:</p>&#xA;&#xA;<blockquote>&#xA; <p><strong>pair</strong> = $\lambda xy.\lambda p.pxy$</p>&#xA;</blockquote>&#xA;
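<p>To see concretely how the curried reading behaves like the multi-argument shorthand, here is a quick transliteration into Python (my own sketch, not from any lecture notes) of $\lambda x.\lambda y.xy$ and of the <strong>pair</strong> combinator:</p>

```python
# \x.\y.x y  --  one argument at a time (the curried reading of \xy.xy)
apply_ = lambda x: lambda y: x(y)

# pair = \x.\y.\p.p x y, with the usual Church projections
pair = lambda x: lambda y: lambda p: p(x)(y)
fst = lambda q: q(lambda x: lambda y: x)
snd = lambda q: q(lambda x: lambda y: y)

p = pair(1)(2)  # the two "arguments" of \xy are supplied one by one
```

<p>Beta reduction corresponds to function application here: <code>fst(p)</code> reduces to <code>1</code> and <code>snd(p)</code> to <code>2</code>, regardless of whether one reads the binder as $\lambda xy$ or as $\lambda x.\lambda y$.</p>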
logic lambda calculus
1
654
Logarithmic vs double logarithmic time complexity
<p>In real-world applications, is there a concrete benefit to using $\mathcal{O}(\log(\log(n)))$ instead of $\mathcal{O}(\log(n))$ algorithms?</p>&#xA;&#xA;<p>This is the case when one uses, for instance, van Emde Boas trees instead of more conventional binary search tree implementations. &#xA;But, for example, if we take $n &lt; 10^6$, then in the best case the double logarithmic algorithm outperforms the logarithmic one by (approximately) a factor of $5$. And in general the implementation is trickier and more complex. </p>&#xA;&#xA;<p>Given that I personally prefer BSTs over vEB trees, what do you think?</p>&#xA;&#xA;<p><em>One could easily demonstrate that:</em></p>&#xA;&#xA;<p>$\qquad \displaystyle \forall n &lt; 10^6.\ \frac{\log n}{\log(\log(n))} &lt; 5.26146$</p>&#xA;
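<p>A quick numeric sketch of mine (note the ratio only makes sense once $\log(\log(n)) &gt; 0$, i.e. $n &gt; e^e \approx 15.2$, beyond which it is increasing) confirming that the speed-up factor stays around $5$ for $n &lt; 10^6$:</p>

```python
import math

def ratio(n):
    """log(n) / log(log(n)): the factor by which O(log log n) beats O(log n)."""
    return math.log(n) / math.log(math.log(n))

# Writing L = log n, the function L / log L is minimised at L = e (n = e^e ~ 15.2)
# and increases after that, so on [16, 10^6) the largest value sits at the
# right endpoint.
worst = max(ratio(n) for n in (16, 10**2, 10**3, 10**4, 10**5, 10**6 - 1))
```

<p>The value of <code>worst</code> comes out around $5.26$, so even at $n$ close to $10^6$ the double logarithmic algorithm only wins by a small constant factor.</p>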
algorithms complexity theory binary trees algorithm analysis search trees
1
663
What are the uses of Markov Chains in CS?
<p>We all know that Markov Chains can be used for generating real-looking text (or real-sounding music). I've also heard that Markov Chains have some applications in image processing; is that true? What are some other uses of MCs in CS?</p>&#xA;
probability theory markov chains
0
666
Is there a 'string stack' data structure that supports these string operations?
<p>I'm looking for a data structure that stores a set of strings over a character set $\Sigma$, capable of performing the following operations. We denote $\mathcal{D}(S)$ as the data structure storing the set of strings $S$.</p>&#xA;&#xA;<ul>&#xA;<li><code>Add-Prefix-Set</code> on $\mathcal{D}(S)$: given some set $T$ of (possibly empty) strings, whose size is bounded by a constant and whose string lengths are bounded by a constant, return $\mathcal{D}( \{ t s\ |\ t \in T, s \in S\} )$. Both these bounding constants are global: they are the same for all inputs $T$.</li>&#xA;<li><code>Get-Prefixes</code> on $\mathcal{D}(S)$: return $\{ a \ | \ as \in S, a \in \Sigma \}$. Note that I don't really mind what structure is used for this set, as long as I can enumerate its contents in $O(|\Sigma|)$ time.</li>&#xA;<li><code>Remove-Prefixes</code> on $\mathcal{D}(S)$: return $\mathcal{D}( \{ s \ | \ as \in S, a \in \Sigma \} )$.</li>&#xA;<li><code>Merge</code>: given $\mathcal{D}(S)$ and $\mathcal{D}(T)$, return $\mathcal{D}(S \cup T)$.</li>&#xA;</ul>&#xA;&#xA;<p>Now, I'd really like to do all these operations in $O(1)$ time, but I'm fine with a structure that does all these operations in $o(n)$ time, where $n$ is the length of the longest string in the structure. In the case of the merge, I'd like a $o(n_1+n_2)$ running time, where $n_1$ is $n$ for the first and $n_2$ the $n$ for the second structure.</p>&#xA;&#xA;<p>An additional requirement is that the structure is immutable, or at least that the above operations return 'new' structures such that pointers to the old ones still function as before.</p>&#xA;&#xA;<p>A note about amortization: that is fine, but you have to watch out for persistence. 
As I re-use old structures all the time, I'll be in trouble if I hit a worst case with some particular set of operations on the same structure (so ignoring the new structures it creates).</p>&#xA;&#xA;<p>I'd like to use such a structure in a parsing algorithm I'm working on; the above structure would hold the lookahead I need for the algorithm.</p>&#xA;&#xA;<p>I've already considered using a <a href="http://en.wikipedia.org/wiki/Trie">trie</a>, but the main problem is that I don't know how to merge tries efficiently. If the set of strings for <code>Add-Prefix-Set</code> consists of only single-character strings, then you could store these sets in a stack, which would give you $O(1)$ running times for the first three operations. However, this approach doesn't work for merging either.</p>&#xA;&#xA;<p>Finally, note that I'm not interested in factors $|\Sigma|$: this is constant for all I care.</p>&#xA;
data structures time complexity strings stacks
0
669
Are Turing machines more powerful than pushdown automata?
<p>While reading some automata books, I came to the conclusion that Turing machines appear to be more powerful than pushdown automata. Since the tape of a Turing machine can always be made to behave like a stack, it would seem that we can actually claim that TMs are more powerful. </p>&#xA;&#xA;<p>Is this true?</p>&#xA;
formal languages computability automata turing machines pushdown automata
1
674
Non-trivial tractable properties of triples
<p>Many intractable $NP$-complete problems can be modeled as deciding whether a set of triples $F=\{t_1, t_2, \ldots, t_n\}$, where each triple $t_i$ is a subset of three elements over a base set $U=\{a_1, a_2, \ldots, a_k\}$, satisfies some non-trivial property. For example, 3-edge coloring of cubic graphs can be modeled as the problem of deciding whether a set of triples satisfies the property that the elements in each triple have different colors. </p>&#xA;&#xA;<p>I'm looking for examples of non-trivial tractable properties ($P_2$) of sets of triples (ones which have polynomial-time algorithms), given that the sets of triples already satisfy some other non-trivial property $P_1$. A non-trivial property means that there is an infinite number of sets of triples that satisfy the property and an infinite number of sets of triples that do not. Are all non-trivial properties $P_2$ of sets of triples intractable?</p>&#xA;&#xA;<p>Also, I'd appreciate a survey on the subject.</p>&#xA;&#xA;<p><strong>EDIT:</strong> Based on Ben's answer, I added the requirement that $F$ already satisfies some non-trivial property $P_1$, and we are asking whether it satisfies another non-trivial property $P_2$. For instance, in the 3-edge coloring example, the family of triples $F$ must represent the edges incident on the nodes of a cubic graph.</p>&#xA;
complexity theory
1
680
How many edges can a unipathic graph have?
<p>A unipathic graph is a directed graph such that there is at most one simple path from any one vertex to any other vertex.</p>&#xA;&#xA;<p>Unipathic graphs can have cycles. For example, a doubly linked list (not a circular one!) is a unipathic graph; if the list has $n$ elements, the graph has $n-1$ cycles of length 2, for a total of $2(n-1)$.</p>&#xA;&#xA;<p>What is the maximum number of edges in a unipathic graph with $n$ vertices? An asymptotic bound would do (e.g. $O(n)$ or $\Theta(n^2)$).</p>&#xA;&#xA;<p><sub>Inspired by <a href="https://cs.stackexchange.com/questions/625/find-shortest-paths-in-a-weighed-unipathic-graph">Find shortest paths in a weighed unipathic graph</a>; in <a href="https://cs.stackexchange.com/questions/625/find-shortest-paths-in-a-weighed-unipathic-graph/679#679">my proof</a>, I initially wanted to claim that the number of edges was $O(n)$ but then realized that bounding the number of cycles was sufficient.</sub></p>&#xA;
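<p>For small instances, both the edge count and the unipathic property of the doubly linked list example can be checked by brute force; a Python sketch of mine (exponential time, tiny graphs only):</p>

```python
def count_simple_paths(adj, u, v):
    """Number of simple (vertex-repetition-free) paths from u to v, by brute force."""
    def dfs(node, seen):
        if node == v:
            return 1  # a simple path cannot continue past its endpoint v
        return sum(dfs(nxt, seen | {nxt})
                   for nxt in adj.get(node, []) if nxt not in seen)
    return dfs(u, {u})

def linked_list_graph(n):
    """Doubly linked list on vertices 0..n-1: edges i -> i+1 and i+1 -> i."""
    adj = {i: [] for i in range(n)}
    for i in range(n - 1):
        adj[i].append(i + 1)
        adj[i + 1].append(i)
    return adj

n = 5
adj = linked_list_graph(n)
edges = sum(len(out) for out in adj.values())  # 2(n-1)
unipathic = all(count_simple_paths(adj, u, v) <= 1
                for u in range(n) for v in range(n) if u != v)
```

<p>Here <code>edges</code> is $2(n-1)$ and <code>unipathic</code> is true, matching the claim about the doubly linked list.</p>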
graphs combinatorics
1
684
Turing Completeness + Dataflow Unification = Arbitrarily invertible (pure, nonrecursive) functions?
<p>Assume we are working in a Turing-complete, referentially-transparent, higher-order language that supports arbitrary dataflow unification. Shouldn't it then be possible to construct the following function (using Haskell-like syntax, because that's what I'm most familiar with)?</p>&#xA;&#xA;<pre><code>-- Takes an arbitrary pure function and constructs its inverse. &#xA;-- If the passed-in function is recursive, the result is not guaranteed to terminate&#xA;invert :: (a -&gt; b) -&gt; b -&gt; a&#xA;invert f r = declare a in let r =|= f a in a&#xA;</code></pre>&#xA;&#xA;<p>(if <code>=|=</code> is the dataflow unification operator).</p>&#xA;&#xA;<p>Is this indeed possible in such a language? If so, why haven't people leapt at this before? If not, where did my reasoning go wrong?</p>&#xA;
programming languages
1
689
How to convert an NFA with overlapping cycles into a regular expression?
<p>If I understand correctly, NFAs have the same expressive power as regular expressions. Often, reading off an equivalent regular expression from an NFA is easy: you translate cycles to stars, junctions to alternatives, and so on. But what to do in this case: </p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/yCGnv.png" alt="enter image description here"><br>&#xA;<sup>[<a href="https://github.com/akerbos/sesketches/blob/gh-pages/src/cs_689.tikz" rel="nofollow noreferrer">source</a>]</sup></p>&#xA;&#xA;<p>The overlapping cycles make it hard to see what this automaton accepts (in terms of regular expressions). Is there a trick?</p>&#xA;
algorithms formal languages finite automata regular expressions
1
699
How does a two-way pushdown automaton work?
<p>Note that by "two-way pushdown automaton", I mean a pushdown automaton that can move its reading head both ways on the input tape.</p>&#xA;&#xA;<p>I recently came across the question of determining the computational power of two-way PDAs within the Chomsky hierarchy. I don't entirely understand two-way PDAs, but I can see how, with the ability to read in both directions on the input, one could handle languages of the form $L=\{0^n 1^n 2^n\}$. I can't say that for sure, but it seems that this would make them powerful enough to handle at least some context-sensitive languages. </p>&#xA;&#xA;<p>This is all a guess because I don't know exactly how they work. Can someone explain the process of how a two-way PDA operates, maybe even on my example?</p>&#xA;&#xA;<p>UPDATE: </p>&#xA;&#xA;<blockquote>&#xA; <p>The model is a generalization of a pushdown automaton in that two-way motion is allowed on the input tape, which is assumed to have endmarkers. </p>&#xA;</blockquote>&#xA;
formal languages computability automata pushdown automata
1
701
Decidable restrictions of the Post Correspondence Problem
<p>The <a href="http://en.wikipedia.org/wiki/Post_correspondence_problem" rel="noreferrer">Post Correspondence Problem</a> (PCP) is undecidable.</p>&#xA;&#xA;<p>The <em>bounded version of the PCP</em> is $\mathrm{NP}$-complete and the <em>marked version of the PCP</em> (the words of one of the two lists are required to differ in the first letter) is in $\mathrm{PSPACE}$ [1].</p>&#xA;&#xA;<ol>&#xA;<li>Are these restricted versions used to prove some complexity results of other problems (through reduction)?</li>&#xA;<li>Are there other restricted versions of the PCP that make it decidable (and in particular $\mathrm{PSPACE}$-complete)?</li>&#xA;</ol>&#xA;&#xA;<p>[1] "<a href="http://dx.doi.org/10.1016/S0304-3975%2899%2900163-2" rel="noreferrer">Marked PCP is decidable</a>" by V. Halava, M. Hirvensalo, R. De Wolf (1999)</p>&#xA;
complexity theory computability reference request
1
704
How does a wifi password encrypt data using WEP and WPA?
<p>How does the password that we enter (to connect to a wireless network) encrypt the data on the wireless network?</p>&#xA;&#xA;<p>From my reading, I am not sure whether the password we enter is the same as the passphrase. If it is, then how can the passphrase generate the four WEP keys?</p>&#xA;&#xA;<p>I understand how the four keys work in WEP and how they encrypt the data. Also, I know how WPA's keys encrypt the data, but the one thing I still need to know is: </p>&#xA;&#xA;<blockquote>&#xA; <p>What is the benefit of the password that we enter to get access to the network, and how does this password help in encrypting the data?</p>&#xA;</blockquote>&#xA;
cryptography computer networks encryption security
0
706
Finding exact corner solutions to linear programming using interior point methods
<p>The simplex algorithm walks greedily on the corners of a polytope to find the optimal solution to the linear programming problem. As a result, the answer is always a corner of the polytope. Interior point methods walk the inside of the polytope. As a result, when a whole face of the polytope is optimal (when the objective function is exactly parallel to that face), we can get a solution in the middle of this face.</p>&#xA;&#xA;<p>Suppose that we want to find a corner of the polytope instead. For example, if we want to do maximum matching by reducing it to linear programming, we don't want to get an answer consisting of "the matching contains 0.34 of the edge XY and 0.89 of the edge AB and ...". We want to get an answer with 0's and 1's (which simplex would give us, since all corners consist of 0's and 1's). Is there a way to do this with an interior point method that is guaranteed to find exact corner solutions in polynomial time? (For example, perhaps we can modify the objective function to favor corners.)</p>&#xA;
algorithms optimization linear programming
1
726
Error-correcting rate is misleading
<p>In coding theory, 'how good a code is' means how many channel errors can be corrected, or, better put, the maximal noise level that the code can deal with.</p>&#xA;&#xA;<p>In order to get better codes, the codes are designed using a large alphabet (rather than a binary one). The code is then good if it can deal with a large rate of erroneous "symbols".</p>&#xA;&#xA;<p><strong>Why isn't this considered cheating?</strong> I mean, shouldn't we only care about what happens when we "translate" each symbol into a binary string? The "rate of bit error" is different from the rate of "symbol error". For instance, the rate of bit error cannot go above 1/2, while (if I understand this correctly), with a large enough alphabet, the symbol error rate can go up to $1-\epsilon$. Is this because we <em>artificially</em> restrict the channel to change only "symbols" rather than bits, or is it because the code is actually better?</p>&#xA;
information theory coding theory
0
737
Is Directed Graph a Graph?
<p>I came across an issue with the definition of a (directed) graph in Sipser's <i>Introduction to the Theory of Computation</i>, 2nd ed.</p>&#xA;&#xA;<p>On p. 10: "An <strong>undirected graph</strong>, or simply a <strong>graph</strong>, is a set of points with lines connecting some of the points. The points are called nodes or vertices, and the lines are called edges, ..."</p>&#xA;&#xA;<p>On the same page,</p>&#xA;&#xA;<blockquote>&#xA; <p><strong>No more than one edge is allowed between any two nodes</strong>.</p>&#xA;</blockquote>&#xA;&#xA;<p>On p. 12,</p>&#xA;&#xA;<blockquote>&#xA; <p>If it has arrows instead of lines, the graph is a <strong>directed graph</strong>,...</p>&#xA;</blockquote>&#xA;&#xA;<p>In Figure 0.16 on p. 12, there is an example of a directed graph with an arrow from node 1 to node 2 and an arrow from node 2 to node 1.</p>&#xA;&#xA;<p>So, we have two arrows in opposite directions between two nodes.</p>&#xA;&#xA;<p>I understand all of these basics.</p>&#xA;&#xA;<p>My question is,</p>&#xA;&#xA;<blockquote>&#xA; <p>Is a directed graph a graph?</p>&#xA;</blockquote>&#xA;
graphs
1
740
Research on evaluating the performance of cache-obliviousness in practice
<p><a href="http://en.wikipedia.org/wiki/Cache-oblivious_algorithm">Cache-oblivious algorithms and data structures</a> are a rather new thing, introduced by Frigo et al. in <a href="http://userweb.cs.utexas.edu/~pingali/CS395T/2009fa/papers/coAlgorithms.pdf">Cache-oblivious algorithms, 1999</a>. Prokop's <a href="http://www.google.fi/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;ved=0CCYQFjAA&amp;url=http%3A%2F%2Fsupertech.csail.mit.edu%2Fpapers%2FProkop99.pdf&amp;ei=Dc1tT-aLI8bm4QSC4YjAAg&amp;usg=AFQjCNHWhtzqOQqUonQWHduna8_nbQYx2g&amp;sig2=Nf_YDGY3NZLj7q0FY6TZgw">thesis</a> from the same year introduces the early ideas as well.</p>&#xA;&#xA;<p>The paper by Frigo et al. present some experimental results showing the potential of the theory and of the cache-oblivious algorithms and data structures. Many cache-oblivious data structures are based on static search trees. Methods of storing and navigating these trees have been developed quite a bit, perhaps most notably by Bender et al. and also by Brodal et al. Demaine gives a nice <a href="http://www.cs.uwaterloo.ca/~imunro/cs840/DemaineCache.pdf">overview</a>.</p>&#xA;&#xA;<p>The experimental work of investigating the cache behaviour in practice was done at least by Ladner et al. in <a href="http://www.cs.amherst.edu/~ccm/cs34/papers/ladnerbst.pdf">A Comparison of Cache Aware and Cache Oblivious Static Search Trees Using Program Instrumentation, 2002</a>. Ladner et al. benchmarked the cache behaviour of algorithms solving the binary search problem, using the classic algorithm, cache-oblivious algorithm and cache-aware algorithm. Each algorithm was benchmarked with both implicit and explicit navigation methods. 
In addition to this, the thesis by <a href="http://www.diku.dk/forskning/performance-engineering/frederik/thesis.pdf">Rønn, 2003</a> analyzed the same algorithms to quite high detail and also performed even more thorough testing of the same algorithms as Ladner et al.</p>&#xA;&#xA;<p><strong>My question is</strong></p>&#xA;&#xA;<blockquote>&#xA; <p>Has there been any newer research on <em>benchmarking</em> the cache behaviour of cache-oblivious algorithms in <em>practice</em> since? I'm especially interested in the performance of the static search trees, but I would also be happy with any other cache-oblivious algorithms and data structures.</p>&#xA;</blockquote>&#xA;
algorithms data structures computer architecture reference request cpu cache
1
757
Does every large enough string have repeats?
<p>Let $\Sigma$ be some finite set of characters of fixed size. Let $\alpha$ be some string over $\Sigma$. We say that a nonempty substring $\beta$ of $\alpha$ is a <em>repeat</em> if $\beta = \gamma \gamma$ for some string $\gamma$.</p>&#xA;&#xA;<p>Now, my question is whether the following holds:</p>&#xA;&#xA;<blockquote>&#xA; <p>For every $\Sigma$, there exists some $n \in \mathbb{N}$ such that for every string $\alpha$ over $\Sigma$ of length at least $n$, $\alpha$ contains at least one repeat.</p>&#xA;</blockquote>&#xA;&#xA;<p>I've checked this over the binary alphabet, and it is quite easy for that case, but an alphabet of size 3 is already quite a bit harder to check, and I'd like a proof for arbitrarily large alphabets.</p>&#xA;&#xA;<p>If the above conjecture is true, then I can (almost) remove the demand for inserting empty strings <a href="https://cs.stackexchange.com/questions/666/is-there-a-string-stack-data-structure-that-supports-these-string-operations">in my other question</a>.</p>&#xA;
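<p>For exploring small cases mechanically, here is a brute-force Python check of mine for the repeat (i.e. square $\gamma\gamma$) property:</p>

```python
from itertools import product

def has_repeat(alpha):
    """True iff alpha contains a nonempty substring of the form gamma + gamma."""
    n = len(alpha)
    return any(alpha[i:i + L] == alpha[i + L:i + 2 * L]
               for i in range(n)
               for L in range(1, (n - i) // 2 + 1))

# Over the binary alphabet, every string of length 4 already has a repeat
# (and hence so does every longer string), so n = 4 witnesses the binary case.
binary_ok = all(has_repeat("".join(w)) for w in product("ab", repeat=4))
```

<p>This also makes it easy to search for long repeat-free strings over three letters by hand.</p>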
combinatorics strings word combinatorics
1
764
Are two elements always in a relation within a partially ordered set?
<p>In a partially ordered set, am I always able to order two arbitrary elements out of the set? Or is it possible that two elements within the set have no order relation to each other?</p>&#xA;&#xA;<p>For example if there are three elements $\{a, b, c\}$ and $a \leq b$ and $a \leq c$, does either $b \leq c$ or $c \leq b$ have to hold?</p>&#xA;&#xA;<p>I need this to understand the fixed point theory for semantics of programming languages (denotation of while loops).</p>&#xA;
terminology discrete mathematics order theory
1
772
Error in the use of asymptotic notation
<p>I'm trying to understand what is wrong with the following attempted proof that the recurrence below is $O(n)$:</p>&#xA;&#xA;<p>$$&#xA;T(n) = 2\,T\!\left(\left\lfloor\frac{n}{2}\right\rfloor\right)+n&#xA;$$&#xA;$$&#xA;T(n) \leq 2\left(c\left\lfloor\frac{n}{2}\right\rfloor\right)+n \leq cn+n = n(c+1) = O(n)&#xA;$$</p>&#xA;&#xA;<p>The documentation says it's wrong because of the inductive hypothesis that&#xA;$$&#xA;T(n) \leq cn&#xA;$$&#xA;What am I missing?</p>&#xA;
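<p>One way to see that something must be wrong with the conclusion (independently of where the proof slips) is to evaluate the recurrence numerically; a small Python sketch of mine, assuming the base case $T(1)=1$ (any constant base case gives the same growth):</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1  # assumed base case
    return 2 * T(n // 2) + n

# If T(n) were O(n), the ratios T(n)/n would be bounded by a constant;
# instead they keep growing by one with every doubling of n.
ratios = [T(2**k) / 2**k for k in range(1, 6)]
```

<p>The ratio grows like $\log n$, which is consistent with the true solution $\Theta(n \log n)$ rather than $O(n)$.</p>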
algorithms landau notation asymptotics recurrence relation
1
783
Reduction from 3-Partition problem to Balanced Partition problem
<p>The 3-Partition problem asks whether a set of $3n$ integers can be partitioned into $n$ sets of three integers such that each set sums up to some given integer $B$. The Balanced Partition problem asks whether $2n$ integers can be partitioned into two equal-cardinality sets such that both sets have the same sum. Both problems are known to be NP-complete. However, 3-Partition is strongly NP-complete. I haven't seen any reduction in the literature from 3-Partition to Balanced Partition.</p>&#xA;&#xA;<p>I'm looking for a (simple) reduction from the 3-Partition problem to the Balanced Partition problem.</p>&#xA;
complexity theory reductions np complete
0
795
Limitations of Stack Inspection
<p><em>This is a follow-up to <a href="https://cs.stackexchange.com/q/796/31">How does Stack Inspection work?</a> that explores the notion in more detail.</em></p>&#xA;&#xA;<p><a href="http://www.securingjava.com/chapter-three/chapter-three-6.html" rel="nofollow noreferrer">Stack inspection</a> is a mechanism for ensuring security in the context of the JVM and CLR virtual machines, where externally downloaded code modules of different levels of trust may be running together. System libraries need some way of distinguishing between calls originating in untrusted code and calls originating from the trusted application itself. This is done by associating with code the principal corresponding to its origin. Access permissions are then recorded on the stack, and whenever a call to a sensitive system method is made, the stack is traversed to see whether the appropriate permissions for the principal making the call are present on the stack.</p>&#xA;&#xA;<blockquote>&#xA; <p>What are the limitations of stack inspection? What mechanisms have been proposed to replace it? Have any significant changes been made to the model since it was introduced in the late 90s?</p>&#xA;</blockquote>&#xA;
security stack inspection
1
796
How does Stack Inspection work?
<p><em>This is a precursor to my other, more advanced <a href="https://cs.stackexchange.com/q/795/31">question</a> about Stack Inspection.</em></p>&#xA;&#xA;<p>Stack Inspection is a security mechanism introduced in the JVM to deal with running code originating from locations having different levels of trust. This question aims at finding a simple description of its functionality. So:</p>&#xA;&#xA;<blockquote>&#xA; <p>How does stack inspection work?</p>&#xA;</blockquote>&#xA;
terminology security stack inspection
1
802
Are the Before and After sets for context-free grammars always context-free?
<p>Let $G$ be a context-free grammar. A string of terminals and nonterminals of $G$ is said to be a <em>sentential form</em> of $G$ if you can obtain it by applying productions of $G$ zero or more times to the start symbol $S$ of $G$. Let $\operatorname{SF}(G)$ be the set of sentential forms of $G$.</p>&#xA;&#xA;<p>Let $\alpha \in \operatorname{SF}(G)$ and let $\beta$ be a substring of $\alpha$; we call $\beta$ a <em>fragment</em> of $\operatorname{SF}(G)$. Now let </p>&#xA;&#xA;<p>$\operatorname{Before}(\beta) = \{ \gamma \ |\ \exists \delta . \gamma \beta \delta \in \operatorname{SF}(G) \}$ </p>&#xA;&#xA;<p>and </p>&#xA;&#xA;<p>$\operatorname{After}(\beta) = \{ \delta \ |\ \exists \gamma . \gamma \beta \delta \in \operatorname{SF}(G) \}$.</p>&#xA;&#xA;<blockquote>&#xA; <p>Are $\operatorname{Before}(\beta)$ and $\operatorname{After}(\beta)$ context-free languages? What if $G$ is unambiguous? If $G$ is unambiguous, are $\operatorname{Before}(\beta)$ and $\operatorname{After}(\beta)$ also describable by an unambiguous context-free language?</p>&#xA;</blockquote>&#xA;&#xA;<p>This is a follow-up to <a href="https://cs.stackexchange.com/questions/666/is-there-a-string-stack-data-structure-that-supports-these-string-operations">my earlier question</a>, after <a href="https://cs.stackexchange.com/questions/757/does-every-large-enough-string-have-repeats">an earlier attempt</a> to make my question easier to answer failed. A negative answer will make the encompassing question I'm working on very hard to answer.</p>&#xA;
formal languages context free formal grammars closure properties
1
805
Proving a binary tree has at most $\lceil n/2 \rceil$ leaves
<p>I'm trying to prove that a <a href="http://en.wikipedia.org/wiki/Binary_tree" rel="nofollow noreferrer">binary tree</a> with $n$ nodes has at most $\left\lceil \frac{n}{2} \right\rceil$ leaves. How would I go about doing this with induction?</p>&#xA;&#xA;<p><em>For people who were following in the original question about heaps, it has been moved <a href="https://cs.stackexchange.com/questions/841/proving-a-binary-heap-has-lceil-n-2-rceil-leaves">here</a>.</em></p>&#xA;
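<p>Before (or after) writing the induction, it can be reassuring to check the claim exhaustively for small $n$; a brute-force Python sketch of mine that enumerates all binary tree shapes:</p>

```python
from math import ceil

def trees(n):
    """All binary-tree shapes with n nodes, as nested (left, right) tuples; None = empty."""
    if n == 0:
        return [None]
    return [(l, r)
            for k in range(n)          # k nodes go into the left subtree
            for l in trees(k)
            for r in trees(n - 1 - k)]

def leaves(t):
    if t is None:
        return 0
    l, r = t
    return 1 if l is None and r is None else leaves(l) + leaves(r)

# Maximum number of leaves over all trees with n nodes, versus ceil(n/2).
ok = all(max(leaves(t) for t in trees(n)) == ceil(n / 2) for n in range(1, 8))
```

<p>For these small $n$ the bound $\lceil n/2 \rceil$ is not only an upper bound but is actually attained, which suggests the induction should track when equality can hold.</p>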
data structures binary trees combinatorics graphs proof techniques
1
808
NP completeness proof of a spanning tree problem
<p>I am looking for some hints on a question asked by my instructor.</p>&#xA;&#xA;<p>So I just figured out this decision problem is $\sf{NP\text{-}complete}$:</p>&#xA;&#xA;<p>In a graph $G$, is there a spanning tree of $G$ that contains exactly the set $S=\{x_1, x_2,\ldots, x_n\}$ as its leaves? I figured out we can prove that it is $\sf{NP\text{-}complete}$ by reducing Hamiltonian Path to this decision problem.</p>&#xA;&#xA;<p>But my instructor also asked us in class:</p>&#xA;&#xA;<blockquote>&#xA; <p>would it also be $\sf{NP\text{-}complete}$ if instead of "exactly the set $S$", we require </p>&#xA; &#xA; <p>"include the whole set $S$ and possibly other leaves" or &#xA; "a subset of $S$"?</p>&#xA;</blockquote>&#xA;&#xA;<p>I think "a subset of $S$" would be $\sf{NP\text{-}complete}$, but I just can't prove it; I don't know what problem I could reduce to it. As for "include the whole set $S$...", I think it can be solved in polynomial time.</p>&#xA;
complexity theory graphs np complete
1
809
Logic gates from everyday materials
<p>Logic gates are abstract devices which can be implemented with electromagnetic relays, vacuum tubes, or transistors. These implementations have been successful in computing in part because of various properties of chainability, durability, and size beyond their basic binary stability. They also work well because electricity is an energy source which can rather easily be shipped around.</p>&#xA;&#xA;<p>I've seen adders built out of <a href="http://blog.makezine.com/archive/2007/06/binary-marble-adding-mach.html" rel="noreferrer">wood, marbles, and gravity</a>. I've seen <a href="http://www.technologyreview.com/biomedicine/21784/" rel="noreferrer">"lab on a chip" capillary-action-driven prototypes</a>. I've seen all kinds of specialty mechanical calculators (<a href="http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=video&amp;cd=1&amp;ved=0CDsQtwIwAA&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DHYsOi6L_Pw4&amp;ctbm=vid&amp;ei=mrlxT-uoOIq-0QHIqpzHAQ&amp;usg=AFQjCNECIg5HDIV-9uL3GnwU_aSXriVDGA" rel="noreferrer">Curta</a>, slide rule). I've seen <a href="http://www.youtube.com/watch?v=SudixyugiX4" rel="noreferrer">domino trails</a> as single-use logic gates.</p>&#xA;&#xA;<p>I'm interested in other illustrative computing devices that aren't <em>necessarily</em> convenient, durable, or fast, but which exploit properties of everyday materials to perform computation and which are directly visible. The domino trails are close, but are a little too complicated to reset.</p>&#xA;&#xA;<p>Magneto-mechanical arrangements? Water in pipes/troughs? More general marble contraptions?</p>&#xA;&#xA;<p>PS. Here's a new one. <a href="http://www.liorelazary.com/index.php?option=com_content&amp;view=article&amp;id=46%3amechanical-cpu-clock&amp;catid=10%3aclocks&amp;Itemid=15">Mechanical CPU Clock</a></p>&#xA;
computer architecture didactics
1
811
Proving that directed graph diagnosis is NP-hard
<p>I have a homework assignment that I've been bashing my head against for some time, and I'd appreciate any hints. It is about choosing a known problem, the NP-completeness of which is proven, and constructing a reduction from that problem to the following problem I'll call DGD (directed graph diagnosis).</p>&#xA;<h3>Problem</h3>&#xA;<blockquote>&#xA;<p>An instance of DGD <span class="math-container">$(V,E,k)$</span> consists of vertices <span class="math-container">$V = I \overset{.}{\cup} O \overset{.}{\cup} B$</span>, directed edges <span class="math-container">$E$</span> and a positive integer <span class="math-container">$k$</span>. There are three types of vertices: vertices with only incoming edges <span class="math-container">$I$</span>, vertices with only outgoing edges <span class="math-container">$O$</span> and vertices with both incoming and outgoing edges <span class="math-container">$B$</span>. Let furthermore <span class="math-container">$D=O\times I$</span>.</p>&#xA;<p>Now, the problem is whether we can cover all nodes with at most <span class="math-container">$k$</span> elements of <span class="math-container">$D$</span>, i.e.</p>&#xA;<p><span class="math-container">$\qquad \displaystyle \exists\,S\subseteq D, |S|\leq k.\ \forall\, v\in V.\ \exists\,(v_1,v_2) \in S.\ v_1 \to^* v \to^* v_2 $</span></p>&#xA;<p>where <span class="math-container">$a\to^* b$</span> means that there is a directed path from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>.</p>&#xA;</blockquote>&#xA;<hr />&#xA;<p>I think that the Dominating Set problem is the one I should be reducing from, because this too is concerned with covering a subset of nodes with another subset. 
I tried creating a DGD instance by first creating two nodes for each element of the dominating set, copying all edges, and then setting the <span class="math-container">$k$</span> of the DGD instance equal to that of the DS instance.</p>&#xA;<p>Suppose a simple DS-instance with nodes <span class="math-container">$1$</span>, <span class="math-container">$2$</span> and <span class="math-container">$3$</span> and edges <span class="math-container">$(1,2)$</span> and <span class="math-container">$(1,3)$</span>. This is a yes-instance with <span class="math-container">$k = 1$</span>; the dominating set in this case consists of only node <span class="math-container">$1$</span>. Reducing with the method just described, this would lead to a DGD instance with two paths <span class="math-container">$(1 \to 2 \to 1')$</span> and <span class="math-container">$(1 \to 3 \to 1')$</span>; to cover all nodes, just one pair <span class="math-container">$(1, 1')$</span> would be sufficient. This would have worked perfectly, were it not for the fact that the dominating set of the DS-instance cannot, of course, be determined in polynomial time, which is a requirement here.</p>&#xA;<p>I have found that there are many good-looking ways to transform the edges and vertices when reducing, but my problem is somehow expressing DGD's <span class="math-container">$k$</span> in terms of DS's <span class="math-container">$k$</span>. Dominating Set seemed a fitting problem to reduce from, but because of this I think that maybe I should try to reduce from a problem that has no such <span class="math-container">$k$</span>?</p>&#xA;
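To experiment with candidate reductions, a small verifier for a DGD cover may help. This is my own sketch, assuming the path-cover semantics stated in the problem; in the test, vertex 4 plays the role of $1'$ from the example above.

```python
def _reachable(adj, start):
    # Iterative depth-first search; returns every vertex reachable from `start`.
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def dgd_cover_ok(vertices, edges, pairs):
    """True iff every vertex lies on some path v1 ->* v ->* v2
    for at least one pair (v1, v2) in `pairs`."""
    adj, radj = {}, {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)   # forward adjacency
        radj.setdefault(v, []).append(u)  # reverse adjacency
    covered = set()
    for v1, v2 in pairs:
        fwd = _reachable(adj, v1)    # vertices reachable from v1
        bwd = _reachable(radj, v2)   # vertices that can reach v2
        covered |= fwd & bwd         # vertices on some v1 ->* v ->* v2 path
    return covered == set(vertices)
```

Wrapped in a loop over all $\binom{|D|}{\le k}$ pair sets, this gives a (hopelessly exponential, but handy for sanity checks) decision procedure on toy instances.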
complexity theory np hard graphs
1
813
Are there other ways to describe formal languages other than grammars?
<p>I'm looking for mathematical theories that deal with describing formal languages (sets of strings) in general, and not just grammar hierarchies.</p>&#xA;
formal languages formal grammars
1
818
How To Best Learn About Algorithms In Depth
<p>I have been reading this site with a great deal of interest, but I find a lot of it goes over my head. This has made me wish to learn a lot more about algorithms and CS in general. As far as I can tell from my research, there are 2 main ways of doing this. </p>&#xA;&#xA;<ol>&#xA;<li><p>I can buy a nice thick heavy book and work my way through it slowly but surely.</p></li>&#xA;<li><p>I can "learn by doing": buy a nice book, but instead of reading it cover to cover, move to the parts that interest me and work on implementing and applying the algorithms I like.</p></li>&#xA;<li><p>?</p></li>&#xA;</ol>&#xA;&#xA;<p>My question is, which of the above did you use, and would you recommend the same approach to someone else?</p>&#xA;
algorithms education
1
820
Learning Automated Theorem Proving
<p><sup><em>I am learning <a href="http://en.wikipedia.org/wiki/Automated_theorem_proving" rel="noreferrer">Automated Theorem Proving</a> / <a href="http://en.wikipedia.org/wiki/Satisfiability_Modulo_Theories" rel="noreferrer">SMT solvers</a> / <a href="http://en.wikipedia.org/wiki/Proof_assistant" rel="noreferrer">Proof Assistants</a> by myself and post a series of questions about the process, starting here.</em></sup> </p>&#xA;&#xA;<p><sup><em>Note that these topics are not easily digested without a background in (mathematical) logic. If you have problems with basic terms, please read up on those, for instance <a href="http://www.cs.bham.ac.uk/research/projects/lics/" rel="noreferrer">Logics in Computer Science</a> by M. Huth and M. Ryan (in particular chapters one, two and four) or <a href="http://gtps.math.cmu.edu/tttp.html" rel="noreferrer">An Introduction to Mathematical Logic and Type Theory</a> by P. Andrews.</em><br>&#xA;<em>For a short introduction into higher order logic (HOL) see <a href="http://www.lix.polytechnique.fr/Labo/Dale.Miller/papers/AIencyclopedia/" rel="noreferrer">here</a>.</em></sup></p>&#xA;&#xA;<p>I looked at <a href="http://coq.inria.fr/" rel="noreferrer">Coq</a> and read the first chapter of the introduction to <a href="http://www.cl.cam.ac.uk/research/hvg/isabelle/" rel="noreferrer">Isabelle</a>, amongst others; see also <a href="https://cs.stackexchange.com/q/868/268">Types of Automated Theorem Provers</a>.</p>&#xA;&#xA;<p>I have known Prolog for a few decades and am now learning F#, so ML, O'Caml and LISP are a bonus. 
Haskell is a different beast.</p>&#xA;&#xA;<p>I have the following books</p>&#xA;&#xA;<p><a href="http://books.google.com/books/about/Handbook_of_automated_reasoning.html?id=X3z8ujBRgmEC" rel="noreferrer">"Handbook of Automated Reasoning"</a> edited by Alan Robinson and Andrei Vornkov</p>&#xA;&#xA;<p><a href="http://www.cl.cam.ac.uk/~jrh13/atp/" rel="noreferrer">"Handbook of Practical Logic and Automated Reasoning"</a> by John Harrison</p>&#xA;&#xA;<p><a href="http://www4.in.tum.de/~nipkow/TRaAT/" rel="noreferrer">"Term Rewriting and All That"</a> by Franz Baader and Tobias Nipkow</p>&#xA;&#xA;<ol>&#xA;<li><p>What are the differences between Coq and Isabelle?</p></li>&#xA;<li><p>Should I learn either Isabelle or Coq, or both?</p></li>&#xA;<li><p>Is there an advantage to learning either Isabelle or Coq first?</p></li>&#xA;</ol>&#xA;&#xA;<p><sup><em>Find the series' next question <a href="https://cs.stackexchange.com/questions/868/types-of-automated-theorem-provers">here</a>.</em></sup></p>&#xA;
logic proof assistants automated theorem proving coq
1
824
Sorting functions by asymptotic growth
<p>Assume I have a list of functions, for example </p>&#xA;&#xA;<p>$\qquad n^{\log \log(n)}, 2^n, n!, n^3, n \ln n, \dots$</p>&#xA;&#xA;<p>How do I sort them asymptotically, i.e. according to the relation defined by</p>&#xA;&#xA;<p>$\qquad f \leq_O g \iff f \in O(g)$,</p>&#xA;&#xA;<p>assuming they are indeed pairwise comparable (see also <a href="https://cs.stackexchange.com/questions/1780/are-the-functions-always-asymptotically-comparable">here</a>)? Using the definition of $O$ seems awkward, and it is often hard to prove the existence of suitable constants $c$ and $n_0$.</p>&#xA;&#xA;<p>This is about measures of complexity, so we're interested in asymptotic behavior as $n \to +\infty$, and we assume that all the functions take only non-negative values ($\forall n, f(n) \ge 0$).</p>&#xA;
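A purely numerical sanity check can suggest an ordering before one proves it: evaluate $f(n)/g(n)$ at growing $n$ and watch whether the ratio tends to $0$ (suggesting $f \in o(g)$) or diverges. This sketch is my own and is evidence only, never a proof; in particular, slowly growing factors like $\log \log n$ can mislead at small $n$.

```python
import math

def growth_ratios(f, g, ns=(10, 100, 1_000, 10_000)):
    """Evaluate f(n)/g(n) at growing n. Heuristic only: a ratio
    tending to 0 suggests f in o(g), but proves nothing."""
    return [f(n) / g(n) for n in ns]

# Example: n ln n against n^3 -- the ratios shrink toward 0,
# consistent with n ln n being in o(n^3).
ratios = growth_ratios(lambda n: n * math.log(n), lambda n: n ** 3)
```

For an actual proof, comparing $\log f(n)$ with $\log g(n)$, or computing $\lim_{n\to\infty} f(n)/g(n)$ (e.g. via L'Hôpital), is usually easier than hunting for the constants $c$ and $n_0$ directly.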
asymptotics landau notation reference question
1
835
Is the number of coin tosses of a probabilistic Turing machine a Blum complexity measure?
<p>I <a href="http://blog.computationalcomplexity.org/2004/04/blum-complexity-measures.html" rel="nofollow">read</a> that the number of coin tosses of a probabilistic Turing machine (PTM) is not a <a href="http://en.wikipedia.org/wiki/Blum_axioms" rel="nofollow">Blum complexity measure</a>. Why?</p>&#xA;&#xA;<p>Clarification:</p>&#xA;&#xA;<p>Note that since the execution of the machine is not deterministic, one should be careful about defining the number of coin tosses for a PTM $M$ on input $x$ in a way similar to the time complexity for NTMs and PTMs. One way is to define it as the maximum number of coin tosses over possible executions of $M$ on $x$.</p>&#xA;&#xA;<p>We need the definition to satisfy the axiom about decidability of $m(M,x)=k$. We can define it as follows:</p>&#xA;&#xA;<p>$$&#xA;m(M,x) =&#xA;\begin{cases}&#xA;k &amp; \text{all executions of $M$ on $x$ halt, $k=\max$ \#coin tosses} \\&#xA;\infty &amp; \text{otherwise}&#xA;\end{cases}&#xA;$$</p>&#xA;&#xA;<p>The number of random bits that an algorithm uses is a complexity measure that appears in papers, e.g. "algorithm $A$ uses only $\lg n$ random bits, whereas algorithm $B$ uses $n$ random bits".</p>&#xA;
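For reference, the two Blum axioms the question alludes to can be stated as follows; this is the standard formulation, transcribed into the question's notation ($m$ the candidate measure, $M$ a machine, $x$ an input):

```latex
% Blum axioms for a complexity measure m:
\begin{align*}
\text{(B1)}\quad & m(M,x) \text{ is defined} \iff M \text{ halts on } x\\
\text{(B2)}\quad & \text{the predicate } m(M,x) = k \text{ is decidable in } (M,x,k)
\end{align*}
```

The subtlety the question is circling is that a PTM may halt on every coin-toss sequence without there being a computable bound $k$, which is exactly where a candidate measure can run afoul of (B2).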
computability complexity theory randomness probabilistic algorithms
0