id | title | body | tags | label
---|---|---|---|---|
1,271 | Why is Relativization a barrier? | <p>When I was explaining the Baker-Gill-Solovay proof that there exists an oracle relative to which $\mathsf{P} = \mathsf{NP}$, and an oracle relative to which $\mathsf{P} \neq \mathsf{NP}$, to a friend, a question came up as to why such techniques are ill-suited for resolving the $\mathsf{P}$ vs. $\mathsf{NP}$ question, and I couldn't give a satisfactory answer.</p>

<p>To put it more concretely, if I have an approach to prove $\mathsf{P} \neq \mathsf{NP}$ and if I could construct oracles to make a situation like above happen, why does it make my method invalid? </p>

<p>Any exposition/thoughts on this topic?</p>
 | complexity theory proof techniques p vs np relativization | 1 |
1,274 | How to score a given arrangement of windows on a screen to produce good layouts | <p>(this is related to my other question, see <a href="https://cs.stackexchange.com/questions/1217/how-to-devise-an-algorithm-to-arrange-resizable-windows-on-the-screen-to-cover">here</a>)</p>

<p>I would like to write a function that scores a given arrangement of windows on a screen.</p>

<p>The purpose of this function is to determine whether a particular layout is good; by going over all possible layouts, I can then find the one with the highest score.</p>

<p>Here are some characteristics that I think make a good layout:</p>

<ol>
<li>maximizing amount of space used by windows (or in other words, the free space on the screen should be minimized)</li>
<li>windows are (more or less) evenly sized</li>
</ol>

<p>Bonus: assigning each window a priority and giving a higher score for layouts where windows with a higher priority take more space.</p>

<p>Here's an example: Suppose our screen is 11x11 and we want to put two windows on it. Window A's initial size is 1x1 and window B is 2x1.</p>

<p>When we resize windows, we preserve their aspect ratio. So here are two possible layouts:</p>

<p><img src="https://i.stack.imgur.com/zG3bg.jpg" alt="enter image description here"></p>

<p>The function should give the one on the right a higher score.</p>

<p>Another nice thing to have is the option to 'dock' a window to one or more sides of the screen. Suppose we want to dock A to the bottom-left of the screen; the scoring function should then prefer this layout over the one above on the right:</p>

<p><img src="https://i.stack.imgur.com/Ol6Vw.jpg" alt="enter image description here"></p>
 | computational geometry user interface modelling | 0 |
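The two criteria in the question above can be combined into a scoring function. A minimal sketch follows; the weights `w_cover` and `w_even` and the linear combination are illustrative assumptions of mine, not part of the question:

```python
def score_layout(screen_area, window_areas, w_cover=1.0, w_even=1.0):
    """Score a layout in [0, w_cover + w_even]; higher is better.

    Criterion 1 (coverage): fraction of the screen used by windows.
    Criterion 2 (evenness): 1.0 when all windows have equal area,
    shrinking as the areas spread out around their mean.
    """
    used = sum(window_areas)
    coverage = used / screen_area              # 1.0 = no free space left
    mean = used / len(window_areas)
    # Normalised total deviation from the mean window area.
    spread = sum(abs(a - mean) for a in window_areas) / used
    evenness = 1.0 - spread
    return w_cover * coverage + w_even * evenness
```

On the 11x11 example, two equal 60-unit windows score higher than a 110/10 split with the same coverage, matching criterion 2. The priority bonus could be added as a third weighted term.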
1,280 | What are examples of inconsistency and incompleteness in Unix/C? | <p>In Richard Gabriel's famous essay <a href="http://dreamsongs.com/RiseOfWorseIsBetter.html">The Rise of Worse is Better</a>, he contrasts caricatured versions of the MIT/Stanford (Lisp) and New Jersey (C/Unix) design philosophies along the axes of simplicity, correctness, consistency, and completeness. He gives the example of the "PC loser-ing problem" (<a href="http://blog.reverberate.org/2011/04/18/eintr-and-pc-loser-ing-the-worse-is-better-case-study/">discussed elsewhere by Josh Haberman</a>) to argue that Unix prioritizes simplicity of implementation over simplicity of interface.</p>

<p>One other example I've come up with is the different approaches to numbers. Lisp can represent arbitrarily large numbers (up to the size of memory), while C limits numbers to a fixed number of bits (typically 32-64). I think this illustrates the correctness axis.</p>

<p>What are some examples for consistency and completeness? Here are all of Gabriel's descriptions (which he admits are caricatures):</p>

<p><strong>The MIT/Stanford approach</strong></p>

<ul>
<li>Simplicity -- the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.</li>
<li>Correctness -- the design must be correct in all observable aspects. Incorrectness is simply not allowed.</li>
<li>Consistency -- the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.</li>
<li>Completeness -- the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.</li>
</ul>

<p><strong>The New Jersey Approach</strong></p>

<ul>
<li>Simplicity -- the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.</li>
<li>Correctness -- the design must be correct in all observable aspects. It is slightly better to be simple than correct.</li>
<li>Consistency -- the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.</li>
<li>Completeness -- the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.</li>
</ul>

<p>Please note I am not asking whether Gabriel is right (which is a question not appropriate for StackExchange) but for examples of what he might have been referring to.</p>
 | programming languages operating systems | 1 |
1,281 | Cross Compiler's T diagram | <p>I'm studying bootstrapping from the Red Dragon Book (Compilers) and found the T diagram for cross compilers pretty confusing. I can't understand what is meant by "run compiler1 through compiler2". Can anyone provide a better explanation, an analogy, or an example relating it to a real-world compiler?</p>

<p>Some notation first. By
$LSN=$ <img src="https://i.stack.imgur.com/7G7ga.png" alt="enter image description here">
 I mean a compiler for language $L$
written in language $S$ that produces output language/machine code $N$. 
This is a <a href="http://en.wikipedia.org/wiki/Tombstone_diagram" rel="nofollow noreferrer"><em>tombstone</em> or <em>T-diagrams</em></a>.</p>

<blockquote>
 <p><strong>Compiling a Compiler</strong></p>
 
 <ol>
 <li><p>Suppose we have a cross-compiler for a new language L 
 in implementation language S generating code for machine N.</p>
 
 <p>$LSN=$<br>
 <img src="https://i.stack.imgur.com/xsc0T.png" alt="T-diagram for LSN"></p></li>
 <li><p>Suppose we also have an existing S compiler running on machine M 
 generating code for machine M:</p>
 
 <p>$SMM=$<br>
 <img src="https://i.stack.imgur.com/UBlkh.png" alt="T-diagram for SMM"></p></li>
 <li><p>Run LSN through SMM to produce LMN</p></li>
 </ol>
 
 <p><strong>Compiler Construction</strong></p>
 
 <p>$LMN = LSN + SMM$<br>
 <img src="https://i.stack.imgur.com/yFrsZ.png" alt="T-diagram for LMN = LSN + SMM"></p>
</blockquote>
 | compilers terminology | 0 |
1,287 | Find subsequence of maximal length simultaneously satisfying two ordering constraints | <p>We are given a set $F=\{f_1, f_2, f_3, …, f_N\}$ of $N$ fruits. Each fruit has a price $P_i$ and a vitamin content $V_i$; we associate fruit $f_i$ with the ordered pair $(P_i, V_i)$. Now we have to arrange these fruits in such a way that the sorted list contains prices in ascending order and vitamin contents in descending order.</p>

<p><strong>Example 1</strong>: $N = 4$ and $F = \{(2, 8), (5, 11), (7, 9), (10, 2)\}$.</p>

<p>If we arrange the list such that all price are in ascending order and vitamin contents in descending order, then the valid lists are the following:</p>

<ul>
<li>$[(2, 8)]$</li>
<li>$[(5, 11)]$</li>
<li>$[(7, 9)]$</li>
<li>$[(10, 2)]$</li>
<li>$[(2, 8), (10, 2)]$</li>
<li>$[(5, 11), (7, 9)]$</li>
<li>$[(5, 11), (10, 2)]$</li>
<li>$[(7, 9), (10, 2)]$</li>
<li>$[(5, 11), (7, 9), (10, 2)]$</li>
</ul>

<p>From the above lists, I want to choose one of maximal size. If more than one list has maximal size, we should choose among them the one whose sum of prices is least. The list which should be chosen in the above example is $[(5, 11), (7, 9), (10, 2)]$.</p>

<p><strong>Example 2</strong>: $N = 10$ and $$F = \{(99,10),(12,23),(34,4),(10,5),(87,11),(19,10), \\(90,18), (43,90),(13,100),(78,65)\}$$</p>

<p>The answer to this example instance is $[(13,100),(43,90),(78,65),(87,11),(99,10)]$.</p>

<p>Until now, this is what I have been doing:</p>

<ol>
<li>Sort the original list in ascending order of price;</li>
<li>Find all subsequences of the sorted list;</li>
<li>Check whether the subsequence is valid, and compare all valid subsequences.</li>
</ol>

<p>However, this takes exponential time; how can I solve this problem more efficiently?</p>
 | algorithms arrays constraint programming subsequences | 1 |
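The arrangement problem above is a longest-chain problem: sort by ascending price, then run the usual $O(n^2)$ longest-decreasing-subsequence DP on the vitamin values, breaking length ties by least total price. A sketch (the function name is mine):

```python
def best_arrangement(fruits):
    """Longest list with prices ascending and vitamins descending;
    ties broken by least total price.  O(n^2) dynamic programming."""
    fruits = sorted(fruits)                    # ascending price
    n = len(fruits)
    # best[i] = (length, total_price, predecessor) of the best valid
    # list ending at fruit i
    best = [(1, fruits[i][0], None) for i in range(n)]
    for i in range(n):
        for j in range(i):
            if fruits[j][0] < fruits[i][0] and fruits[j][1] > fruits[i][1]:
                length = best[j][0] + 1
                cost = best[j][1] + fruits[i][0]
                # prefer longer chains, then cheaper ones
                if (length, -cost) > (best[i][0], -best[i][1]):
                    best[i] = (length, cost, j)
    # pick the overall best endpoint, then walk the predecessors back
    end = max(range(n), key=lambda i: (best[i][0], -best[i][1]))
    out = []
    while end is not None:
        out.append(fruits[end])
        end = best[end][2]
    return out[::-1]
```

On example 2 this prefers the chain ending $(87,11),(99,10)$ (price sum 320) over the equally long one through $(90,18)$ (price sum 323).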
1,288 | Security Lattice Construction | <p>I am having a problem trying to solve a question on a past paper asking to design a security lattice. Here is the question:</p>

<blockquote>
 <p>The AB model (Almost Biba) is a model for expressing integrity policies rather
 than confidentiality. It has the same setup as Bell-LaPadula, except that $L$ is now a set of
 integrity levels which express the degree of confidence we have in the integrity of
 subjects and objects. Subjects and data at higher integrity levels are considered
 to be more accurate or safe. The set of subjects and objects may also be different,
 for example, programs are naturally considered as subjects.</p>
 
 <p>Often, the set $L$ is actually a lattice of levels, with two operations: least
 upper bound $l_1 \vee l_2$ and greatest lower bound $l_1 \wedge l_2$, where $l_1, l_2 \in L$.</p>
 
 <p>i. Design an example integrity lattice for AB, by combining two degrees of
 data integrity <strong>dirty</strong> and <strong>clean</strong> and two means by which a piece of input
 may be received, <strong>website</strong> (external user input from a web site form) and
 <strong>dataentry</strong> (internal user input by trusted staff).</p>
</blockquote>

<p>I have been looking for an explanation of how to build lattices, but can't seem to find one on the internet or in textbooks. Can anyone point me in the right direction?</p>
 | security lattices integrity | 1 |
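One standard construction for exam questions like the one above is the product of two two-element chains, with lub/glb computed componentwise. The orderings below are my assumptions (dirty &lt; clean, and website &lt; dataentry since internal trusted staff is described as more trustworthy than external web input):

```python
INTEGRITY = ["dirty", "clean"]       # index = level; higher index = higher integrity
SOURCE = ["website", "dataentry"]    # assumed: dataentry more trusted than website

def lub(l1, l2):
    """Least upper bound of two lattice points, taken componentwise."""
    return (INTEGRITY[max(INTEGRITY.index(l1[0]), INTEGRITY.index(l2[0]))],
            SOURCE[max(SOURCE.index(l1[1]), SOURCE.index(l2[1]))])

def glb(l1, l2):
    """Greatest lower bound of two lattice points, taken componentwise."""
    return (INTEGRITY[min(INTEGRITY.index(l1[0]), INTEGRITY.index(l2[0]))],
            SOURCE[min(SOURCE.index(l1[1]), SOURCE.index(l2[1]))])
```

The resulting lattice has four points, with ("dirty", "website") at the bottom and ("clean", "dataentry") at the top; incomparable pairs such as ("dirty", "dataentry") and ("clean", "website") meet and join componentwise.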
1,290 | How to output all longest decreasing sequences | <p>Suppose I have an array of integers having length $N$. How can I output all longest decreasing subsequences? (A subsequence consists of elements of the array that do not have to be consecutive; for example, $(3,2,1)$ is a decreasing subsequence of $(7,3,5,2,0,1)$.) I know how to calculate the length of the longest decreasing subsequence, but don't know how to report all of them.</p>

<p>Pseudocode will be helpful.</p>
 | algorithms arrays subsequences | 1 |
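The question above asks for pseudocode; here is a runnable sketch of one standard approach: compute the usual $O(n^2)$ length DP, then backtrack through every predecessor consistent with the DP values. (Note the output can be exponentially large, since an array can have exponentially many longest decreasing subsequences.)

```python
def all_longest_decreasing(a):
    """Return every longest strictly decreasing subsequence of a."""
    n = len(a)
    if n == 0:
        return []
    # best[i] = length of the longest decreasing subsequence ending at i
    best = [1] * n
    for i in range(n):
        for j in range(i):
            if a[j] > a[i]:
                best[i] = max(best[i], best[j] + 1)
    target = max(best)

    results = []
    def extend(i, acc):
        # acc is the tail of a partial subsequence that ends just after i
        if best[i] == 1:
            results.append([a[i]] + acc)
            return
        for j in range(i):
            if a[j] > a[i] and best[j] == best[i] - 1:
                extend(j, [a[i]] + acc)

    for i in range(n):
        if best[i] == target:
            extend(i, [])
    return results
```

On the example $(7,3,5,2,0,1)$ this outputs the four longest decreasing subsequences $[7,3,2,0]$, $[7,5,2,0]$, $[7,3,2,1]$, $[7,5,2,1]$.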
1,292 | What is required for universal analogue computation? | <p>What operations need to be performed in order to do any arbitrary <a href="http://en.wikipedia.org/wiki/Analog_computer">analogue computation</a>? Would addition, subtraction, multiplication and division be sufficient?</p>

<p>Also, does anyone know exactly what problems are tractable using analogue computation, but not with digital?</p>
 | computability computation models turing completeness | 1 |
1,296 | Solve a recurrence using the master theorem | <p>This is the recurrence for which I'm trying to find an asymptotic closed form using the <a href="http://en.wikipedia.org/wiki/Master_theorem" rel="nofollow">master theorem</a>:
$$T(n)=9T(n/27)+(n \cdot \lg(n))^{1/2}$$</p>

<p>I started with $a=9, b=27$ and $f(n)=(n\cdot \lg n)^{1/2}$. To use the master theorem I compare $f(n)$ with $n^{\log_b(a)} = n^{\log_{27}(9)} = n^{2/3}$, but I don't understand how to handle the $(n\cdot \lg n)^{1/2}$.</p>

<p>I think that the $(n\cdot \lg n)^{1/2}$ is bigger than $n^{2/3}$, but I'm sure I'm missing something here.</p>

<p>I think it fits to the third case of the master theorem.</p>
 | algorithm analysis asymptotics recurrence relation master theorem | 1 |
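One sanity check that may help with the question above (this is the standard polynomial comparison, not a full solution): since $(\lg n)^{1/2}$ grows more slowly than any positive power of $n$,

$$f(n) = (n \cdot \lg n)^{1/2} = n^{1/2}(\lg n)^{1/2} = O\!\left(n^{2/3-\epsilon}\right) \quad \text{for any } \epsilon \in (0, 1/6),$$

so $f(n)$ is in fact polynomially <em>smaller</em> than $n^{\log_{27} 9} = n^{2/3}$. That places the recurrence in case 1 (not case 3) of the master theorem, giving $T(n) = \Theta(n^{2/3})$.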
1,299 | NP-completeness of a spanning tree problem | <p>I was reviewing some NP-complete problems on this site, and I came across an interesting problem from</p>

<p><a href="https://cs.stackexchange.com/questions/808/np-completeness-proof-of-a-spanning-tree-problem">NP completeness proof of a spanning tree problem</a></p>

<p>In this problem, I am interested in the original problem, in which the leaf set is precisely $S$. The author said that this can be proved by a reduction from Hamiltonian Path. However, I still cannot figure it out. Could anybody help me with the details?</p>
 | complexity theory np complete graphs spanning trees | 1 |
1,300 | Survey of informed search algorithms? | <p>I'm looking for a list of informed search algorithms, also known as heuristic search algorithms. </p>

<p>I'm aware of: </p>

<ol>
<li><p><a href="http://en.wikipedia.org/wiki/Best-first_search" rel="nofollow">best-first search</a></p>

<ul>
<li>Greedy best-first search</li>
<li><a href="http://en.wikipedia.org/wiki/A%2a_search_algorithm" rel="nofollow">A* search</a></li>
</ul></li>
</ol>

<p>Are there more best-first algorithms, or other informed searches that are not best-first?</p>
 | algorithms reference request artificial intelligence search algorithms | 0 |
1,301 | Find the minimum number of 1's to add so the matrix consists of one connected region of 1's | <p>Let $M$ be a $(0, 1)$ matrix. We say two entries are neighbors if they are adjacent horizontally or vertically and both entries are $1$'s. We want to find the minimum number of $1$'s to add so that every $1$ can reach every other $1$ through a sequence of neighbors.</p>

<p>Example:</p>

<pre><code>100
000
001
</code></pre>

<p>Here we need 3 $1$'s:</p>

<pre><code>100
100
111
</code></pre>

<p>How can we efficiently find the minimum number of $1$'s to add, and where?</p>
 | algorithms graphs matrices | 1 |
1,315 | What is a formula for the number of strings with no repeats? | <p>I want to count the number of strings $s$ over a finite alphabet $A$ that contain no repeats, and by that I mean: for any substring $t$ of $s$, $1< |t| < |s|$, there is no disjoint copy of $t$ in $s$. For example, let $A=\{a,b\}$. Then $aaa$ <em>is</em> one of the strings I want to count, since for the substring $aa$, there are no disjoint copies. However, $abab$ contains such a repeat.</p>

<p>If someone's already figured out a useful formula, please link. Otherwise, I will refer back to this post in any article I write, if I use someone's answer.</p>

<p>Here is another example. Let's try to construct a long string over $\{a,b\}$, that contains no repeats:</p>

<p>aaa (can't be a) <br>
 aaab (a or b) <br>
 aaabbb (can't be b) <br>
 aaabbba (can't be b or a) <br>
 aaaba (can't be a or b) <br></p>

<p>If we built a tree, we could count the number of nodes, but I want a formula.</p>

<p><strong>Edit:</strong>
Well, it's not as daunting as I first thought if we convert this to a bin-choosing problem. The set of strings of length $k$ with at least one repeat is the union of all permutations of the Cartesian product
$A \times A \times \cdots \times A$ ($k-4$ times) $\times R \times R$, where $R$ is the required repeat. I don't know if that's helpful, but it sounded pro :) Anyway, let there be $|A|$ bins, choose any two (even if the same one) to be the repeat, then choose $k-4$ more and multiply (the first 4 are already chosen, see?). Now I just need to find that formula from discrete math.</p>
 | formal languages combinatorics strings word combinatorics | 0 |
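A brute-force checker can at least produce the counts the question above is after, for small $k$; the formula itself is left open, and this sketch merely enumerates against the stated definition (a repeat is a substring $t$, $1 < |t| < |s|$, with two non-overlapping occurrences):

```python
from itertools import product

def has_repeat(s):
    """True iff some substring t, 1 < len(t) < len(s), has two
    disjoint (non-overlapping) occurrences in s."""
    n = len(s)
    for length in range(2, n):              # 1 < |t| < |s|
        for i in range(n - length + 1):
            t = s[i:i + length]
            # look for a copy starting at or after i + length
            if s.find(t, i + length) != -1:
                return True
    return False

def count_repeat_free(alphabet, k):
    """Count the length-k strings over alphabet with no such repeat."""
    return sum(1 for w in product(alphabet, repeat=k)
               if not has_repeat("".join(w)))
```

This agrees with the examples: `aaa` has no repeat (its two occurrences of `aa` overlap) while `abab` does.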
1,319 | Solving problems related to Marginal Contribution Nets | <p>So, I encoutered this problem in examination:</p>

<blockquote>
 <p>Consider the following marginal contribution net:</p>
 
 <p>$\{a \wedge b\} \to 5$</p>
 
 <p>$\{b\} \to 2$</p>
 
 <p>$\{c\} \to 4$</p>
 
 <p>$\{b \wedge \neg c\} \to −2$</p>
 
 <p>Let $v$ be the characteristic function defined by these rules. Give the values of the 
 following:</p>
 
 <p>i) $v(\emptyset)$</p>
 
 <p>ii) $v(\{a\})$</p>
 
 <p>iii) $v(\{b\})$</p>
 
 <p>iv) $v(\{a, b\})$</p>
 
 <p>v) $v(\{a, b, c\})$</p>
</blockquote>

<p>My answer is below, but I am not sure.</p>

<blockquote>
 <p>i) $v(\emptyset) = -2$</p>
 
 <p>ii) $v(\{a\}) = 0 - 2$</p>
 
 <p>iii) $v(\{b\}) = 2 - 2$</p>
 
 <p>iv) $v(\{a, b\}) = 5 + 2 - 2$</p>
 
 <p>v) $v(\{a, b, c\}) = 5 + 4 + 2 - 2$</p>
</blockquote>

<p>If anybody knows how to solve this kind of problem, could you confirm?</p>

<p>Exact same problem is shown on page 4 of following paper: 
<a href="http://research.microsoft.com/pubs/73752/ieong05mcnet.pdf">Marginal Contribution Nets: A Compact Representation Scheme for Coalitional Games (by Samuel Ieong and Yoav Shoham)</a></p>
 | artificial intelligence | 1 |
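A small evaluator for the rules above may help check the answer, assuming the rule semantics of the linked Ieong &amp; Shoham paper: a rule applies to coalition $C$ iff every positive literal is in $C$ and no negated literal is, and $v(C)$ sums the values of all applicable rules.

```python
# Each rule: (positive literals, negated literals, value).
RULES = [
    ({"a", "b"}, set(), 5),    # {a ∧ b}   → 5
    ({"b"}, set(), 2),         # {b}       → 2
    ({"c"}, set(), 4),         # {c}       → 4
    ({"b"}, {"c"}, -2),        # {b ∧ ¬c}  → -2
]

def v(coalition):
    """Characteristic-function value of a coalition under RULES."""
    c = set(coalition)
    return sum(val for pos, neg, val in RULES
               if pos <= c and not (neg & c))
```

Under these semantics, note that the rule $\{b \wedge \neg c\}$ only fires when $b$ <em>is</em> present and $c$ is not, so for instance $v(\emptyset) = 0$ rather than $-2$.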
1,326 | How to prove regular languages are closed under left quotient? | <p>$L$ is a regular language over the alphabet $\Sigma = \{a,b\}$. The left quotient of $L$ with respect to $w \in \Sigma^*$ is the language 
$$w^{-1} L := \{v \mid wv \in L\}$$</p>

<p>How can I prove that $w^{-1}L$ is regular?</p>
 | formal languages regular languages closure properties | 1 |
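The construction usually used to answer the question above: if a DFA recognises $L$, keep the same transitions and accepting states but move the start state to $\delta^*(q_0, w)$; the resulting DFA recognises $w^{-1}L$, so the quotient is regular. A sketch in code (the example DFA and language are mine, purely for illustration):

```python
def run(delta, q, s):
    """Run the DFA transition table from state q over string s."""
    for ch in s:
        q = delta[(q, ch)]
    return q

def left_quotient_start(delta, q0, w):
    """Start state of the DFA for the left quotient w^{-1}L."""
    return run(delta, q0, w)

# Illustrative DFA: L = strings over {a, b} containing "ab".
# States 0, 1, 2; state 2 is accepting and absorbing.
delta = {(0, "a"): 1, (0, "b"): 0,
         (1, "a"): 1, (1, "b"): 2,
         (2, "a"): 2, (2, "b"): 2}

q1 = left_quotient_start(delta, 0, "a")   # quotient by w = "a"
# now: v is in a^{-1}L  iff  run(delta, q1, v) == 2
```

For example "b" is in $a^{-1}L$ because "ab" is in $L$, which the quotient DFA confirms by accepting "b" from its new start state.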
1,329 | Shortest Path on an Undirected Graph? | <p>So I thought this (though somewhat basic) question belonged here:</p>

<p>Say I have a graph of size 100 nodes arrayed in a 10x10 pattern (think chessboard). The graph is undirected, and unweighted. Moving through the graph involves moving three spaces forward and one space to either right or left (similar to how a chess knight moves across a board).</p>

<p>Given a fixed beginning node, how would one find the shortest path to any other node on the board?</p>

<p>I imagined that there would only be an edge between nodes that are viable moves. So, given this information, I would want to find the shortest path from a starting node to an ending node.</p>

<p>My initial thought was that each edge is weighted with weight 1. However, the graph is undirected, so Dijkstra's would not be an ideal fit. Therefore, I decided to do it using an altered form of a depth-first search.</p>

<p>However, I couldn't for the life of me visualize how to get the shortest path using the search.</p>

<p>Another thing I tried was putting the graph in tree form with the starting node as the root, and then selecting the shallowest (lowest row number) result that gave me the desired end node... this worked, but was incredibly inefficient, and thus would not work for a larger graph.</p>

<p>Does anyone have any ideas that might point me in the right direction on this one?</p>

<p>Thank you very much.</p>

<p>(I tried to put in a visualization of the graph, but was unable to due to my low reputation)</p>
 | algorithms graphs search algorithms shortest path | 1 |
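For the unweighted graph in the question above, breadth-first search (rather than DFS, and with no need for Dijkstra) already yields shortest paths: the first time BFS reaches a node, it has done so along a minimum-length path. A sketch, assuming the (3,1)-leaper moves described:

```python
from collections import deque

MOVES = [(3, 1), (3, -1), (-3, 1), (-3, -1),
         (1, 3), (1, -3), (-1, 3), (-1, -3)]

def shortest_path(start, goal, size=10):
    """BFS on the size x size board; returns the node sequence or None."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            break
        r, c = node
        for dr, dc in MOVES:
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in parent:
                parent[nxt] = node          # first visit = shortest
                queue.append(nxt)
    if goal not in parent:
        return None                         # unreachable
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]
```

Recording each node's BFS parent and walking it back from the goal gives the path itself, not just its length, in $O(V + E)$ time.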
1,331 | How to prove a language is regular? | <p>There are many methods to prove that <a href="https://cs.stackexchange.com/q/1031/157">a language is not regular</a>, but what do I need to do to prove that some language <em>is</em> regular?</p>

<p>For instance, if I am given that $L$ is regular, 
how can I prove that the following $L'$ is regular, too?</p>

<p>$\qquad \displaystyle L' := \{w \in L: uv = w \text{ for } u \in \Sigma^* \setminus L \text{ and } v \in \Sigma^+ \}$</p>

<p>Can I draw a nondeterministic finite automaton to prove this?</p>
 | formal languages regular languages automata proof techniques reference question | 1 |
1,332 | Modified Dijkstra's algorithm | <p>So, I'm trying to conceptualize something:</p>

<p>Say we have a weighted graph of size N. A and B are nodes on the graph. You want to find the shortest path from A to B, given a few caveats:</p>

<ol>
<li><p>movements on the graph are regulated by a circular cycle of length 48, in such a manner that:</p>

<blockquote>
<pre><code>cycle {
   0 <= L <= 24   movement IS possible
  25 <= L <= 48   movement IS NOT possible
}
</code></pre>
</blockquote>

<p>For simplicity's sake, we will call this cycle 'time'.</p></li>
<li><p>The distance between nodes A and B is equal to:</p>

<blockquote>
 <p>shortest_distance(A to B) - 1 OR shortest_distance(A to B) + 1</p>
</blockquote>

<p>Depending on their orientation</p></li>
<li><p>the weight of the edges represents the 'time' it takes to travel between nodes.</p></li>
</ol>

<p>I'd like to create an algorithm that will give me the shortest path with these constraints in mind, assuming one is leaving from node A at time(cycle) = 12, traveling towards node B. The shortest path would be defined as the path which takes the least 'time'.</p>

<p>Step one would obviously be to take into account the orientation affecting the shortest distance (i.e. which way they are oriented, as above), which would be a simple addition or subtraction to the result of Dijkstra's algorithm.</p>

<p>What I'm having trouble figuring out is how to account for the cycle in the algorithm... could it be as simple as just an if statement checking to see if the current cycle time is within the constraints that allow movement?</p>

<p>Would my idea be viable? If not, does anyone have any suggestions for different ways to look at this problem?</p>

<p>I know this question seems really basic, but I just can't wrap my head around it.</p>
 | algorithms graphs shortest path | 1 |
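The "if statement" intuition in the question above is close to one standard technique: time-dependent Dijkstra. A heavily hedged sketch follows, under assumptions the question leaves open (waiting at a node is free, and an edge may only be <em>started</em> while the cycle position is in $[0, 24]$). Because the earliest feasible departure time is nondecreasing in the arrival time, Dijkstra on arrival times remains correct:

```python
import heapq

def earliest_departure(t):
    """Earliest time >= t at which movement is allowed (cycle length 48)."""
    pos = t % 48
    return t if pos <= 24 else t + (48 - pos)

def quickest_arrival(graph, source, target, t0):
    """graph: {node: [(neighbour, travel_time), ...]}.
    Returns the earliest arrival time at target when leaving at t0."""
    dist = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heap[0][0], heap[0][1]
        heapq.heappop(heap)
        if u == target:
            return t
        if t > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v, w in graph[u]:
            arrive = earliest_departure(t) + w
            if arrive < dist.get(v, float("inf")):
                dist[v] = arrive
                heapq.heappush(heap, (arrive, v))
    return None
```

For example, leaving A at time 12 on a single edge of weight 30 arrives at 42; leaving at time 30 first waits until the next window opens at 48 and arrives at 78.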
1,334 | Randomized Selection | <p>The randomized selection algorithm is the following:</p>

<p>Input: An array $A$ of $n$ (distinct, for simplicity) numbers and a number $k\in [n]$</p>

<p>Output: The "rank $k$ element" of $A$ (i.e., the one in position $k$ if $A$ were sorted)</p>

<p>Method:</p>

<ul>
<li>If there is one element in $A$, return it</li>
<li>Select an element $p$ (the "pivot") uniformly at random</li>
<li>Compute the sets $L = \{a\in A : a < p\}$ and $R = \{a\in A : a > p\}$</li>
<li>If $|L| \ge k$, return the rank $k$ element of $L$.</li>
<li>Otherwise, return the rank $k - |L|$ element of $R$</li>
</ul>

<p>I was asked the following question:</p>

<blockquote>
 <p>Suppose that $k=n/2$, so you are looking for the median, and let $\alpha\in (1/2,1)$
 be a constant. What is the probability that, at the first recursive call, the 
 set containing the median has size at most $\alpha n$?</p>
</blockquote>

<p>I was told that the answer is $2\alpha - 1$, with the justification "The pivot selected should lie between $1−\alpha$ and $\alpha$ times the original array"</p>

<p>Why? As $\alpha \in (0.5, 1)$, whatever element is chosen as pivot is either larger or smaller than more than half the original elements. The median always lies in the larger subarray, because the elements in the partitioned subarray are always less than the pivot. </p>

<p>If the pivot lies in the first half of the original array (less than half of them), the median will surely be in the second larger half, because once the median is found, it must be in the middle position of the array, and everything before the pivot is smaller as stated above. </p>

<p>If the pivot lies in the second half of the original array (more than half of the elements), the median will surely be in the first, larger half; for the same reason, everything before the pivot is considered smaller.</p>

<p>Example:</p>

<p>3 4 5 8 7 9 2 1 6 10</p>

<p>The median is 5.</p>

<p>Suppose the chosen pivot is 2. So after the first iteration, it becomes:</p>

<p>1 2 ....bigger part....</p>

<p>Only <code>1</code> and <code>2</code> are swapped after the first iteration. Number 5 (the median) is still in the first, greater half (according to the pivot 2). The point is, the median always lies in the greater half; how can it have a chance to be in a smaller subarray?</p>
 | algorithms algorithm analysis probability theory randomized algorithms | 1 |
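A worked version of the claimed justification in the question above: let $r$ be the rank of the pivot in $A$. If $r \le n/2$ the median survives in $R$, which has size $n - r$; requiring $n - r \le \alpha n$ gives $r \ge (1-\alpha)n$. If $r > n/2$ the median survives in $L$, which has size $r - 1$; requiring $r - 1 \le \alpha n$ gives $r \le \alpha n + 1$. Since the pivot is chosen uniformly at random, ignoring off-by-one terms,

$$\Pr[\text{surviving set has size} \le \alpha n] = \Pr[(1-\alpha)n \le r \le \alpha n] = \frac{\alpha n - (1-\alpha)n}{n} = 2\alpha - 1.$$

So the median does always land in the larger of the two parts, but the larger part has size anywhere between roughly $n/2$ and $n$, and it is at most $\alpha n$ exactly when the pivot's rank falls within $(\alpha - \tfrac12)n$ of the middle.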
1,335 | What is a good reference to learn about state transition systems? | <p>I am studying different approaches for the definition of computation with continuous dynamical systems. I have been trying to find a nice introduction to the theory of <a href="http://en.wikipedia.org/wiki/State_transition_system">"State transition systems"</a> but failed to do so.</p>

<p>Does anybody know a modern introduction to the topic? 
Of particular interest would be something dealing with computability.</p>
 | computability automata reference request computation models | 1 |
1,336 | Extending the implementation of a Queue using a circular array | <p>I'm doing some exam (Java-based algorithmics) revision and have been given the question:</p>

<blockquote>
 <p>Describe how you might extend your implementation [of a queue using a circular array] to support the expansion of the Queue to allow it to store more data items.</p>
</blockquote>

<p>The Queue started off implemented as an array with a fixed maximum size. I've got two current answers to this, but I'm not sure either are correct:</p>

<ol>
<li><p>Implement the Queue using the Java Vector class as the underlying array structure. The Vector class is similar to arrays, but a Vector can be resized at any time whereas an array's size is fixed when the array is created.</p></li>
<li><p>Copy all entries into a larger array.</p></li>
</ol>

<p>Is there anything obvious I'm missing?</p>
 | algorithms data structures | 1 |
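Option 2 above can be sketched as follows (Python is used for brevity; the same logic ports directly to a Java array implementation): when the circular array fills, copy the entries in logical order, front to rear with wrap-around, into an array of twice the size, and reset the front to index 0. This gives amortized $O(1)$ enqueues, which is essentially what the Vector-based option 1 does internally as well.

```python
class CircularQueue:
    def __init__(self, capacity=4):
        self.data = [None] * capacity
        self.front = 0
        self.size = 0

    def enqueue(self, item):
        if self.size == len(self.data):
            self._grow()
        self.data[(self.front + self.size) % len(self.data)] = item
        self.size += 1

    def dequeue(self):
        assert self.size > 0, "queue empty"
        item = self.data[self.front]
        self.front = (self.front + 1) % len(self.data)
        self.size -= 1
        return item

    def _grow(self):
        # "Unwrap" the circular contents: element i of the new array is
        # the i-th element from the front of the old one.
        new = [None] * (2 * len(self.data))
        for i in range(self.size):
            new[i] = self.data[(self.front + i) % len(self.data)]
        self.data = new
        self.front = 0
```

The unwrapping step is the part exam answers usually need to mention: a plain `arraycopy` of the raw storage would scramble a queue whose contents currently wrap around the end of the array.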
1,339 | $\log^*(n)$ runtime analysis | <p>So I know that $\log^*$ means the iterated logarithm: $\log^*(n)$ counts how many times $\log$ must be applied to $n$ until the result is at most $1$.</p>

<p>I'm trying to solve the following:</p>

<p>is </p>

<blockquote>
 <p>$\log^*(2^{2^n})$</p>
</blockquote>

<p>little $o$, little $\omega$, or $\Theta$ of</p>

<blockquote>
 <p>${\log^*(n)}^2$</p>
</blockquote>

<p>In terms of the interior functions, $\log^*(2^{2^n})$ is much bigger than $\log^*(n)$, but squaring the $\log^*(n)$ is throwing me off. </p>

<p>I know that $\log(n)^2$ is $O(n)$, but I don't think that property holds for the iterated logarithm.</p>

<p>I tried applying the master method, but I'm having trouble with the properties of a $\log^*(n)$ function. I tried setting n to be max (i.e. $n = 5$), but this didn't really simplify the problem.</p>

<p>Does anyone have any tips as to how I should approach this?</p>
 | asymptotics landau notation mathematical analysis | 1 |
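One observation that may simplify the comparison in the question above: applying $\log$ twice to $2^{2^n}$ yields $n$, so

$$\log^*\!\left(2^{2^n}\right) = 2 + \log^*(n) = \Theta(\log^* n),$$

and therefore

$$\frac{\log^*\!\left(2^{2^n}\right)}{\left(\log^*(n)\right)^2} = \frac{2 + \log^*(n)}{\left(\log^*(n)\right)^2} \longrightarrow 0,$$

i.e. $\log^*(2^{2^n}) = o\!\left((\log^*(n))^2\right)$, treating $\log^* n \to \infty$ formally even though it grows extremely slowly in practice.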
1,346 | Sharp concentration for selection via random partitioning? | <p>The usual simple algorithm for finding the median element in an array $A$ of $n$ numbers is:</p>

<ul>
<li>Sample $n^{3/4}$ elements from $A$ with replacement into $B$</li>
<li>Sort $B$ and find the rank $|B|/2 \pm \sqrt{n}$ elements $l$ and $r$ of $B$</li>
<li>Check that $l$ and $r$ are on opposite sides of the median of $A$ and that there are at most $C\sqrt{n}$ elements in $A$ between $l$ and $r$ for some appropriate constant $C > 0$. Fail if this doesn't happen.</li>
<li>Otherwise, find the median by sorting the elements of $A$ between $l$ and $r$</li>
</ul>

<p>It's not hard to see that this runs in linear time and that it succeeds with high probability. (All the bad events are large deviations away from the expectation of a binomial.)</p>

<p>An alternate algorithm for the same problem, which is more natural to teach to students who have seen quick sort is the one described here: <a href="https://cs.stackexchange.com/questions/1334/randomized-selection/1343">Randomized Selection</a></p>

<p>It is also easy to see that this one has linear expected running time: say that a "round" is a sequence of recursive calls that ends when one gives a 1/4-3/4 split, and then observe that the expected length of a round is at most 2. (In the first draw of a round, the probability of getting a good split is 1/2, and thereafter it only increases as the algorithm is described, so the round length is dominated by a geometric random variable.)</p>

<p>So now the question: </p>

<blockquote>
 <p>Is it possible to show that randomized selection runs in linear time with high probability?</p>
</blockquote>

<p>We have $O(\log n)$ rounds, and each round has length at least $k$ with probability at most $2^{-k+1}$, so a union bound gives that the running time is $O(n\log\log n)$ with probability $1-1/O(\log n)$.</p>

<p>This is kind of unsatisfying, but is it actually the truth?</p>
 | algorithms algorithm analysis randomized algorithms | 1 |
1,347 | Complexity of 3SAT variants | <p>This question is motivated by my <a href="https://cs.stackexchange.com/a/1328/96">answer</a> to another question in which I stated the fact that both the Betweenness and Non-Betweenness problems are $NP$-complete. In the former problem there is a total order such that the betweenness constraint of each triple is enforced, while in the latter problem there is a total order such that the betweenness constraint of each triple is violated.</p>

<p>What is the complexity of the following 3SAT variants?:</p>

<p>$\text{3SAT}_1=\{\phi : \phi$ has an assignment that makes every clause false$\}$</p>

<p>$\text{3SAT}_2=\{\phi : \phi$ has an assignment such that exactly half of the clauses are true and the other half false$\}$</p>
 | complexity theory satisfiability | 1 |
1,353 | Optimizing a strictly monotone function | <p>I am looking for algorithms to maximize a strictly monotone function $f$ subject to the constraint $f(x) < y$.</p>

<p>$f : [a,b] \longrightarrow [c,d]
\qquad \text{where } [a,b] \subset {\mathbb N}, [c,d] \subset {\mathbb N}$<br>
and I want to find $\max \{x \in [a,b] : f(x) < y\}$.</p>

<p>My first idea was to use a variant of binary search: pick a point $x$ in $[a,b]$ at random; if $f(x) \geq y$ then we eliminate $[x, b]$, and if $f(x) < y$ we eliminate $[a, x)$, keeping $x$ as a candidate. We repeat this procedure until the solution is found.</p>

<p>Do you have any other ideas to maximize the function $f$ ?</p>
 | algorithms optimization | 1 |
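Since $f$ is strictly increasing on integers, the randomized bisection in the question above can be made deterministic by always picking the midpoint, guaranteeing $O(\log(b-a))$ evaluations of $f$. A sketch (names are mine):

```python
def argmax_below(f, a, b, y):
    """Largest integer x in [a, b] with f(x) < y, or None if none exists.
    Assumes f is strictly increasing on the integers of [a, b]."""
    if not f(a) < y:
        return None
    lo, hi = a, b                 # invariant: f(lo) < y
    while lo < hi:
        mid = (lo + hi + 1) // 2  # upper midpoint avoids an infinite loop
        if f(mid) < y:
            lo = mid              # mid is a valid candidate; keep it
        else:
            hi = mid - 1          # everything from mid up is too large
    return lo
```

The random-point variant in the question also terminates, but its expected number of probes is worse than the midpoint rule's worst case.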
1,354 | Quicksort vs. insertion sort on linked list: performance | <p>I have written a program to sort Linked Lists and I noticed that my insertion sort works much better than my quicksort algorithm. 
Does anyone have any idea why this is?
Insertion sort has a complexity of $\Theta(n^2)$ and quicksort an expected $O(n\log n)$, so quicksort should be faster. I tried random inputs of various sizes and observed the contrary. Strange...</p>

<p>Here the code in Java:</p>



<pre><code>public static LinkedList qSort(LinkedList list) {

    int size = list.getSize();
    if (size <= 1)
        return list;

    // Create new lists: x for elements smaller or equal, y for greater
    LinkedList x = new LinkedList();
    LinkedList y = new LinkedList();

    Node pivot = getPivot(list);
    // System.out.println("Pivot: " + pivot.value);

    // We start from the head
    Node currentNode = list.head;

    for (int i = 0; i <= size - 1; i++) {
        // Skip the pivot itself
        if (currentNode != pivot) {
            if (currentNode.value <= pivot.value) {
                // Values smaller than or equal to the pivot go in x
                x.addNode(currentNode.value);
                // System.out.print("Elements in x:");
                // x.printList();
            } else {
                // Values greater than the pivot go in y
                y.addNode(currentNode.value);
                // System.out.print("Elements in y:");
                // y.printList();
            }
        }
        // Set the pointer to the next node
        currentNode = currentNode.next;
    }

    // Recursive calls and concatenation of the lists and pivot
    return concatenateList(qSort(x), pivot, qSort(y));
}
</code></pre>
 | algorithms algorithm analysis sorting lists | 1 |
1,367 | Quicksort explained to kids | <p>Last year, I was reading a fantastic <a href="http://arxiv.org/abs/quant-ph/0510032">paper, “Kindergarten Quantum Mechanics”</a>. It was not an easy paper.</p>

<p>Now, I wonder how to explain quicksort in the simplest words possible. How can I prove (or at least handwave) that the average complexity is $O(n \log n)$, and what the best and the worst cases are, to a kindergarten class? Or at least in primary school?</p>
 | algorithms education algorithm analysis didactics sorting | 1 |
1,370 | What is co-something? | <p>What does the notation <code>co-</code> mean when prefixing <code>co-NP</code>, <code>co-RE</code> (recursively enumerable), or <code>co-CE</code> (computably enumerable) ?</p>
 | complexity theory computability terminology | 1 |
1,371 | Scott-continuous functions: an alternative definition | <p>I'm really struggling with this property:</p>
<blockquote>
<p>Let <span class="math-container">$X,Y$</span> be <a href="http://en.wikipedia.org/wiki/Coherent_space" rel="noreferrer">coherence spaces</a> and <span class="math-container">$f: Cl(X) \rightarrow Cl(Y)$</span> be a monotone function. <span class="math-container">$f$</span> is continuous if and only if <span class="math-container">$f(\bigcup_{x\in D} x)=\bigcup_{x \in D}f(x)$</span>, for all <span class="math-container">$D \subseteq Cl(X)$</span> such that <span class="math-container">$D$</span> is a directed set.</p>
<p><strong>Directed set</strong> is defined thus: a subset <span class="math-container">$D$</span> of a poset is a directed set iff <span class="math-container">$ \forall x, x' \in D$</span> <span class="math-container">$ \exists z \in D $</span> such that <span class="math-container">$ x \subseteq z$</span> and <span class="math-container">$x' \subseteq z$</span>.<br />
<span class="math-container">$Cl(X)$</span> stands for the set of cliques of <span class="math-container">$X$</span>: <span class="math-container">$\{x \subseteq |X| \mid a,b \in x \Rightarrow a$</span> coherent <span class="math-container">$b \}$</span>.</p>
</blockquote>
<p>Many books give that as the definition of <strong><a href="http://en.wikipedia.org/wiki/Scott_continuity" rel="noreferrer">Scott-continuous</a> functions</strong>, but unfortunately not my teacher. He gave us this definition of continuous:</p>
<blockquote>
<p><span class="math-container">$f : Cl(X) \rightarrow Cl(Y)$</span> is continuous iff it is monotone and <span class="math-container">$\forall x \in Cl(X), \forall b \in f(x), \exists x_0 \subseteq_{fin} x, b \in f(x_0)$</span>,<br />
where <strong>monotone</strong> is defined as:
<span class="math-container">$f$</span> is monotone iff <span class="math-container">$a \subseteq b \Rightarrow f(a) \subseteq f(b)$</span></p>
</blockquote>
<p>This is the proposed proof I have, but I can't understand the last equation.</p>
<p><strong>Proof of <span class="math-container">$f$</span> continuous implies <span class="math-container">$f(\bigcup D)=\bigcup f(D)$</span></strong>:<br />
Let <span class="math-container">$b \in f(\bigcup D)$</span>. By the definition of continuity, <span class="math-container">$\exists x_0 \subseteq_{fin} x \mid b \in f(x_0)$</span>. Note that <span class="math-container">$x_0$</span> is the union of <span class="math-container">$\{ x_i \mid x_i \in D\}$</span>.<br />
If <span class="math-container">$D$</span> is directed then: <span class="math-container">$\exists z \in D \mid x_i \subseteq z$</span>, hence <span class="math-container">$x_0 \subseteq z$</span>. By the definition of monotonicity, <span class="math-container">$f(x_0)\subseteq f(z)$</span> so <span class="math-container">$b \in f(z)$</span> <em><strong>(???)</strong></em> <span class="math-container">$\subseteq \bigcup f(D)$</span>. And even if that is true, we should show that <span class="math-container">$\bigcup f(D) = f(\bigcup D)$</span>, not just <span class="math-container">$\subseteq$</span>.</p>
<p>The proof of the other implication is even worse so I can't write it here... Can you explain to me how the proof can work?</p>
 | terminology programming languages semantics | 0 |
1,375 | C++ Strings vs. Character Arrays | <p>Why do you think it is that most C++ instructors teaching college-level computer science discourage or even forbid using strings for text, instead requiring students to use character arrays?</p>

<p>I am assuming this methodology is somehow intended to teach good programming habits, but in my experience I don't see anything wrong with just using strings, and they are significantly easier to use and learn.</p>
 | education arrays strings | 0 |
1,377 | Lower bound for finding kth smallest element using adversary arguments | <p>In many texts a lower bound for finding $k$th smallest element is derived making use of arguments using medians. How can I find one using an adversary argument?</p>

<p><a href="http://en.wikipedia.org/wiki/Selection_algorithm">Wikipedia</a> says that tournament algorithm runs in $O(n+k\log n)$, and $n - k + \sum_{j = n+2-k}^{n} \lceil{\operatorname{lg}\, j}\rceil$ is <a href="http://en.wikipedia.org/wiki/Selection_algorithm#Lower_bounds">given</a> as lower bound.</p>
 | algorithms algorithm analysis | 1 |
1,382 | Are universal types a sub-type, or special case, of existential types? | <p>I would like to know whether a universally-quantified type $T_a$: $$T_a = \forall X: \left\{ a\in X,f:X→\{T, F\} \right\}$$ is a sub-type, or special case, of an existentially-quantified type $T_e$ with the same signature: $$T_e = \exists X: \left\{ a\in X,f:X→\{T, F\} \right\}$$</p>

<p>I'd say "yes": If something is true "for all X" ($\forall X$), then it must also be true "for some X" ($\exists X$). That is, a statement with '$\forall$' is simply a more restricted version of the same statement with '$\exists$': $$∀X, P(X) \overset?\implies ∃X, P(X).$$</p>

<p>Am I wrong somewhere?</p>

<blockquote>
 <p><strong>Background: Why am I asking this?</strong></p>
 
 <p>I am studying existential types in order to understand why and how <a href="http://theory.stanford.edu/~jcm/papers/mitch-plotkin-88.pdf">"Abstract [Data] Types Have Existential Type"</a>. I cannot get a good grasp of this concept from theory alone; I need concrete examples, too.</p>
 
 <p>Unfortunately, good code examples are hard to find because most programming languages have only limited support for existential types. (For instance, <a href="http://www.haskell.org/haskellwiki/Existential_type">Haskell's <code>forall</code></a>, or <a href="http://docs.oracle.com/javase/tutorial/extra/generics/wildcards.html">Java's <code>?</code> wildcards</a>.) On the other hand, universally-quantified types are supported by many recent languages via "generics".</p>
 
 <p>What's worse, <em>generics seems to easily get mixed up with existential types</em>, too, making it even harder to tell apart existential from universal types. <em>I'm curious why this mix-up occurs so easily.</em> An answer to this question might explain it: If universal types are indeed only a special case of existential types, then it's no wonder that generic types, e.g. Java's <code>List<T></code>, can be interpreted either way.</p>
</blockquote>
 | logic type theory typing | 0 |
1,388 | Complexity of an optimisation problem in 3D | <p>I have a collection $P \subseteq \mathbb{R}^3$ of $N$ particles and there is a function $f : P^2 \to \mathbb{R}$. I want to find which configuration of the system minimizes the value of $f$. </p>

<p>Can this problem (or similar ones) be reduced to TSP? Could you point me to literature on the topic?</p>

<p>In my application, $f$ is the <a href="https://en.wikipedia.org/wiki/Van_der_Waals_force" rel="nofollow">atomic van der Waals force</a>, which for each pair of atoms is attractive or repulsive depending on some predefined thresholds.</p>

<p>In addition, it would be great to have a list of concrete examples of problems that can be reduced to TSP.</p>
 | complexity theory optimization search problem | 0 |
1,392 | Algorithm to chase a moving target | <p>Suppose that we have a black-box $f$ which we can query and reset. When we reset $f$, the state $f_S$ of $f$ is set to an element chosen uniformly at random from the set $$\{0, 1, ..., n - 1\}$$ where $n$ is fixed and known for a given $f$. To query $f$, an element $x$ (the guess) from $$\{0, 1, ..., n - 1\}$$ is provided, and the value returned is $(f_S - x) \mod n$. Additionally, the state $f_S$ of $f$ is set to a value $f_S' = f_S \pm k$, where $k$ is selected uniformly at random from $$\{0, 1, 2, ..., \lfloor n/2 \rfloor - ((f_S - x) \mod n)\} $$</p>

<p>By making uniformly random guesses with each query, one would expect to have to make $n$ guesses before getting $f_S = x$, with variance $n^2 - n$ (stated without proof).</p>

<p>Can an algorithm be designed to do better (i.e., make fewer guesses, possibly with less variance in the number of guesses)? How much better could it do (i.e., what's an optimal algorithm, and what is its performance)?</p>

<p>An efficient solution to this problem could have important cost-saving implications for shooting at a rabbit (confined to hopping on a circular track) in a dark room.</p>
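<p>For experimenting with strategies, the black box above is easy to simulate. The sketch below uses made-up names; the question leaves the case $\lfloor n/2 \rfloor < (f_S - x) \bmod n$ unspecified, so the drift bound is clipped at zero here (an assumption), and the drifted state is taken modulo $n$ since the track is circular:</p>

```python
import random

class BlackBox:
    """Simulation of the black box described above (names are made up)."""
    def __init__(self, n):
        self.n = n
        self.reset()

    def reset(self):
        # state is uniform over {0, ..., n-1}
        self.state = random.randrange(self.n)

    def query(self, x):
        miss = (self.state - x) % self.n
        # drift bound floor(n/2) - miss, clipped at 0 when negative --
        # an assumption, since the question leaves that case open
        bound = max(0, self.n // 2 - miss)
        k = random.randrange(bound + 1)
        self.state = (self.state + random.choice([-1, 1]) * k) % self.n
        return miss

def random_guesser(box, max_queries=10**6):
    """Baseline strategy: uniformly random guesses until a query returns 0."""
    for queries in range(1, max_queries + 1):
        if box.query(random.randrange(box.n)) == 0:
            return queries
    return None
```

<p>Averaging <code>random_guesser</code> over many resets reproduces the expected $n$ guesses, and any candidate algorithm can be plugged in against the same box for comparison.</p>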
 | algorithms probability theory randomized algorithms | 1 |
1,393 | Rectangle Coverage by Sweep Line | <p>I was given an exercise that, unfortunately, I did not manage to solve by myself.</p>

<blockquote>
 <p>There is a set of rectangles $R_{1}..R_{n}$ and a rectangle $R_{0}$. Using plane sweeping algorithm determine if $R_{0}$ is completely covered by the set of $R_{1}..R_{n}$.</p>
</blockquote>

<p>For more details about the principle of sweep line algorithms see <a href="http://en.wikipedia.org/wiki/Sweep_line_algorithm" rel="nofollow">here</a>.</p>

<p>Let's start from the beginning. Initially we know the sweep line algorithm as the algorithm for finding <a href="http://en.wikipedia.org/wiki/Line_segment_intersection" rel="nofollow">line segment intersections</a>, which requires two data structures:</p>

<ul>
<li>a set $Q$ of event points (it stores endpoints of segments and intersections points)</li>
<li>a status $T$ (dynamic structure for the set of segments the sweep line intersecting)</li>
</ul>

<p><strong>The General Idea:</strong> assume that the sweep line $l$ is a vertical line that approaches the set of rectangles from the left. Sort all $x$ coordinates of the rectangles and store them in $Q$ in increasing order - this should take $O(n\log n)$. Starting from the first event point, for every point determine the set of rectangles that intersect at the given $x$ coordinate, identify continuous segments of intersecting rectangles and check whether they cover $R_{0}$ completely at the current $x$ coordinate. With $T$ as a binary tree this will take $O(\log n)$. If any part of $R_{0}$ remains uncovered, then $R_{0}$ is not completely covered.</p>

<p><strong>Details:</strong> The idea of the segment intersection algorithm was that only adjacent segments intersect. Based on this fact we built the status $T$ and maintained it throughout the algorithm. I tried to find a similar idea in this case, so far with no success; the only thing I can say is that two rectangles intersect if their corresponding $x$ and $y$ intervals overlap. </p>

<p>The problem is how to build and maintain $T$, and what the complexity of building and maintaining $T$ is. I assume that <a href="http://en.wikipedia.org/wiki/R_Trees" rel="nofollow">R trees</a> can be very useful in this case, but as I found, it is very difficult to determine the minimum bounding rectangle using R trees. </p>

<p>Do you have any idea about how to solve this problem, and particularly how to build $T$?</p>
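<p>The per-event primitive — "does the union of the active rectangles' $y$-intervals cover $R_{0}$'s $y$-interval?" — can be sketched as follows. This naive version re-sorts the active intervals at each event ($O(k \log k)$ rather than the $O(\log n)$ achievable with a segment tree over the $y$-coordinates), but it makes the coverage test itself concrete:</p>

```python
def covers(target, intervals):
    """Return True iff the union of `intervals` covers the closed
    interval `target` = (lo, hi)."""
    lo, hi = target
    reach = lo
    for a, b in sorted(intervals):
        if a > reach:           # a gap before the next interval begins
            return False
        reach = max(reach, b)
        if reach >= hi:
            return True
    return reach >= hi
```

<p>During the sweep, <code>intervals</code> would be the $y$-extents of the rectangles in the status $T$ at the current $x$; if the call ever returns <code>False</code> between $R_{0}$'s left and right edges, $R_{0}$ is not completely covered.</p>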
 | algorithms computational geometry | 1 |
1,394 | How to represent the interests of a Facebook user | <p>I'm trying to figure out a way I could represent a Facebook user as a vector. I decided to go with stacking the different attributes/parameters of the user into one big vector (i.e. age is a vector of size 100, where 100 is the maximum age you can have; if you are, let's say, 50, the first 50 values of the vector would be 1, just like a thermometer).</p>

<p>Now I want to represent the Facebook interests as a vector too, and I just can't figure out a way. They are a collection of words and the space that represents all the words is huge, I can't go for a model like a bag of words or something similar. How should I proceed? I'm still new to this, any reference would be highly appreciated.</p>
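<p>To make the setup concrete, here is a sketch of the thermometer encoding described above, together with one <em>possible</em> (assumed, not from the question) way to map an open-ended set of interest words to a fixed-size vector, the "hashing trick":</p>

```python
def thermometer(age, max_age=100):
    """Thermometer encoding: a vector of length max_age whose first
    `age` entries are 1 and the rest 0."""
    return [1] * min(age, max_age) + [0] * max(0, max_age - age)

def hashed_interests(interests, dim=256):
    """Hashing trick: bucket each word into one of `dim` slots.
    Collisions are possible; dim trades vector size against collision rate."""
    v = [0] * dim
    for word in interests:
        v[hash(word) % dim] += 1
    return v
```

<p>The user vector is then the concatenation of the attribute vectors, e.g. <code>thermometer(50) + hashed_interests(["hiking", "jazz"])</code>.</p>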
 | machine learning modelling social networks knowledge representation | 1 |
1,398 | Maintaining search indices with binary trees | <p>There are some documents to be indexed, that means I need to read the docs and extract the words and index them by storing at which document they appear and at which position.</p>

<p>For each word initially I am creating a separate file. Consider 2 documents:</p>

<ul>
<li>document 1: “The Problem of Programming Communication with”</li>
<li>document 2: “Programming of Arithmetic Operations”</li>
</ul>

<p>Here, there are 10 words, 8 unique. So I create 8 files (<code>the</code>, <code>problem</code>, <code>of</code>, <code>programming</code>, <code>communications</code>, <code>with</code>, <code>arithmetic</code>, <code>operations</code>).</p>

<p>In each file, I will store at which document they appear and at what position. The actual structure I am implementing has lot more information but this basic structure will serve the purpose.</p>

<pre><code>file name file content
the 1 1
problem 1 2
of 1 3 2 2
programming 1 4 2 1
communications 1 5
with 1 6
arithmetic 2 3
operations 2 4
</code></pre>

<p>Meaning: the word <code>of</code> is located at document 1, position 3 and at document 2, position 2.</p>

<p>After the initial index is done I will concatenate all the files into a single index file and in another file I store the offset where a particular word will be found.</p>

<p>index file: <code>1 1 1 2 1 3 2 2 1 4 2 1 1 5 1 6 2 3 2 4</code><br>
offset file: <code>the 1 problem 3 of 5 programming 9 communications 13 with 15 arithmetic 17 operations 19</code></p>

<p>So if I need the index information for <code>communications</code>, I will go to position 13 of the file and read up to position 15 excluded, in other words the offset of the next word.</p>

<p>This is all fine for static indexing. But if I change a single index entry, the whole file will need to be rewritten. Can I use a binary tree as the index file's structure, so that I can dynamically change the file content and update the offsets somehow? </p>
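<p>For reference, the static construction described above is only a few lines; the sketch below (hypothetical names, words lowercased) rebuilds the flat index and the 1-based offset table from the two example documents:</p>

```python
def build_index(documents):
    """Collect (document, position) postings per word (both 1-based),
    then flatten them into one index list plus an offset table."""
    postings = {}
    for doc_id, text in enumerate(documents, start=1):
        for pos, word in enumerate(text.lower().split(), start=1):
            postings.setdefault(word, []).append((doc_id, pos))
    index, offsets = [], {}
    for word, entries in postings.items():
        offsets[word] = len(index) + 1     # 1-based offset into the index
        for doc_id, pos in entries:
            index.extend([doc_id, pos])
    return index, offsets

docs = ["The Problem of Programming Communication with",
        "Programming of Arithmetic Operations"]
index, offsets = build_index(docs)
```

<p>This reproduces the index file <code>1 1 1 2 1 3 2 2 1 4 2 1 1 5 1 6 2 3 2 4</code> above; looking up a word means slicing the index between its offset and the next word's offset — which is exactly the part that breaks once a single entry changes and everything after it shifts.</p>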
 | data structures binary trees data mining | 0 |
1,399 | Distribute objects in a cube so that they have maximum distance between each other | <p>I'm trying to use a color camera to track multiple objects in space. Each object will have a different color and in order to be able to distinguish well between each objects I'm trying to make sure that each color assigned to an object is as different from any color on any other object as possible.</p>

<p>In RGB space, we have three planes, all with values between 0 and 255. In this cube $(0,0,0) / (255,255,255)$, I would like to distribute the $n$ colors so that there is as much distance between themselves and others as possible. An additional restriction is that $(0, 0, 0)$ and $(255, 255, 255)$ (or as close to them as possible) should be included in the $n$ colors, because I want to make sure that none of my $(n-2)$ objects takes either color because the background will probably be one of these colors.</p>

<p>Probably, $n$ (including black and while) will not be more than around 14.</p>

<p>Thanks in advance for any pointers on how to get these colors. </p>
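<p>One simple heuristic (an approximation — maximizing the pairwise minimum distance exactly is hard) is greedy farthest-point sampling over a coarse grid of the RGB cube, seeded with black and white so they are always among the chosen colors:</p>

```python
from itertools import product

def pick_colors(n, step=51):
    """Greedily add the grid colour that maximizes the minimum squared
    Euclidean distance to the colours already chosen."""
    chosen = [(0, 0, 0), (255, 255, 255)]
    candidates = list(product(range(0, 256, step), repeat=3))
    def min_dist2(c):
        return min((c[0]-p[0])**2 + (c[1]-p[1])**2 + (c[2]-p[2])**2
                   for p in chosen)
    while len(chosen) < n:
        chosen.append(max(candidates, key=min_dist2))
    return chosen
```

<p>With <code>step=51</code> the grid has $6^3 = 216$ candidates, plenty for $n \approx 14$; a local refinement pass could then nudge the picks off the grid.</p>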
 | algorithms optimization computational geometry | 1 |
1,407 | How to interpret "Windows - Virtual Memory minimum too low" from a CS student point of view? | <p>On my old PC with 256 MB of RAM, I get this message. (I guess it is quite common.)</p>

<blockquote>
 <p><strong>Windows - Virtual Memory minimum too low</strong><br>
 Your system is low on virtual memory. Windows is increasing the size of your virtual memory paging file. During this process, memory requests for some applications may be denied. ...</p>
</blockquote>

<p>Please explain, from a CS student's point of view:</p>

<ol>
<li>"Windows is increasing the size of your virtual memory paging file." and</li>
<li>"during this process...". what is this process called?</li>
</ol>

<p>Thanks, I am currently studying virtual memory management in OS.</p>
 | operating systems virtual memory paging | 1 |
1,413 | Why are blocking artifacts serious when there is fast motion in MPEG? | <p>Why are blocking artifacts serious when there is fast motion in MPEG?</p>

<p>Here is the guess I made:</p>

<p>In MPEG, each block in a frame being encoded is matched with a block in the reference frame.
If the difference between the two blocks is small, only the difference is encoded using the DCT. Is the reason blocking artifacts are serious that the difference between the two blocks is too large and the DCT cuts the AC components?</p>
 | information theory data compression video | 1 |
1,414 | Proving a specific language is regular | <p>In my computability class we were given a practice final to go over and I'm really struggling with one of the questions on it.</p>
<blockquote>
<p>Prove the following statement:</p>
<p>If <span class="math-container">$L_1$</span> is a regular language, then so is</p>
<p><span class="math-container">$L_2 = \{ uv |$</span> <span class="math-container">$u$</span> is in <span class="math-container">$L_1$</span> or <span class="math-container">$v$</span> is in <span class="math-container">$L_1 \}$</span>.</p>
</blockquote>
<p>You can't use the pumping lemma for regular languages (I think), so how would you go about this? I'm inclined to believe that it's false because if <span class="math-container">$u$</span> is in <span class="math-container">$L_1$</span>, what if <span class="math-container">$v$</span> is non-regular? Then it would be impossible to write a regular expression for it. The question is out of 5 marks though and that doesn't seem like enough of an answer for it.</p>
 | formal languages regular languages | 1 |
1,415 | P-Completeness and Parallel Computation | <p>I was recently reading about algorithms for checking bisimilarity and read that the problem is <a href="http://en.wikipedia.org/wiki/P-complete">P-complete</a>. Furthermore, a consequence of this is that this problem, or any P-complete problem, is unlikely to have an efficient parallel algorithms.</p>

<blockquote>
 <p>What is the intuition behind this last statement?</p>
</blockquote>
 | complexity theory parallel computing | 1 |
1,418 | When to use recursion? | <p>When are some (relatively) basic (think first year college level CS student) instances when one would use recursion instead of just a loop? </p>
 | algorithms recursion | 1 |
1,424 | Overflow safe summation | <p>Suppose I am given $n$ fixed width integers (i.e. they fit in a register of width $w$), $a_1, a_2, \dots a_n$ such that their sum $a_1 + a_2 + \dots + a_n = S$ also fits in a register of width $w$.</p>

<p>It seems to me that we can always permute the numbers to $b_1, b_2, \dots b_n$ such that each prefix sum $S_i = b_1 + b_2 + \dots + b_i$ also fits in a register of width $w$.</p>

<p>Basically, the motivation is to compute the sum $S = S_n$ on fixed width register machines without having to worry about integer overflows at any intermediate stage.</p>

<p>Is there a fast (preferably linear time) algorithm to find such a permutation (assuming the $a_i$ are given as an input array)? (or say if such a permutation does not exist).</p>
 | algorithms arrays integers numerical analysis | 1 |
1,426 | Detecting overflow in summation | <p>Suppose I am given an array of $n$ fixed width integers (i.e. they fit in a register of width $w$), $a_1, a_2, \dots a_n$. I want to compute the sum $S = a_1 + \ldots + a_n$ on a machine with 2's complement arithmetic, which performs additions modulo $2^w$ with wraparound semantics. That's easy — but the sum may overflow the register size, and if it does, the result will be wrong.</p>

<p>If the sum doesn't overflow, I want to compute it, and to verify that there is no overflow, as fast as possible. If the sum overflows, I only want to know that it does, I don't care about any value.</p>

<p>Naively adding numbers in order doesn't work, because a partial sum may overflow. For example, with 8-bit registers, $(120, 120, -115)$ is valid and has a sum of $125$, even though the partial sum $120+120$ overflows the register range $[-128,127]$.</p>

<p>Obviously I could use a bigger register as an accumulator, but let's assume the interesting case where I'm already using the biggest possible register size.</p>

<p>There is a well-known technique to <a href="https://cs.stackexchange.com/a/1425">add numbers with the opposite sign as the current partial sum</a>. This technique avoids overflows at every step, at the cost of not being cache-friendly and not taking much advantage of branch prediction and speculative execution.</p>

<p>Is there a faster technique that perhaps takes advantage of the permission to overflow partial sums, and is faster on a typical machine with an overflow flag, a cache, a branch predictor and speculative execution and loads?</p>

<p>(This is a follow-up to <a href="https://cs.stackexchange.com/questions/1424/overflow-safe-summation">Overflow safe summation</a>)</p>
 | algorithms arrays integers numerical analysis | 1 |
1,427 | Sub language is not Turing-recognizable, or could it be? | <p>Let A and B be languages with A ⊆ B, and B is Turing-recognizable. Can A be not Turing-recognizable? If so, is there any example?</p>
 | computability | 1 |
1,434 | Lambda Calculus beta reduction | <p>I am trying to learn Lambda calculus from <a href="http://www.cse.chalmers.se/research/group/logic/TypesSS05/Extra/geuvers.pdf" rel="noreferrer">here</a></p>

<p>and while trying to solve some problems, I got stuck. I was trying to solve the following problem (page 14, excercise 2.6 part (i):</p>

<p>Simplify $M \equiv (\lambda xyz.zyx) aa (\lambda pq. q)$.</p>

<p>My evaluation using the beta rule reduces it to $(\lambda z. z)$ as follows:
First I replace occurrences of $x$ in $M$ by $aa (\lambda pq. q)$, and then, since there are no occurrences of $y$ in the resulting $\lambda$ term, the expression simply evaluates to $(\lambda z. z)$.</p>

<p>Is my reasoning correct? (Since there were no solutions to these notes, I want to ensure my understanding is correct.) Any corrections will be much appreciated! Thanks in advance.</p>
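<p>(As a sanity check — not part of the notes — such reductions can be tested mechanically by encoding the terms as Python closures, with the free variable $a$ modelled as a string. Under the standard left-associative reading $(((\lambda xyz.zyx)\,a)\,a)\,(\lambda pq.q)$, the check below yields the free variable <code>"a"</code>, which is worth comparing against the hand computation above.)</p>

```python
# (λx.λy.λz. z y x) applied to a, a, and (λp.λq. q);
# the free variable a is modelled as the string "a"
M = lambda x: lambda y: lambda z: z(y)(x)
result = M("a")("a")(lambda p: lambda q: q)   # β-reduces to "a"
```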
 | logic lambda calculus | 0 |
1,440 | What is the name of this logistic variant of TSP? | <p>I have a logistic problem that can be seen as a variant of $\text{TSP}$. It is so natural, I'm sure it has been studied in Operations research or something similar. Here's one way of looking at the problem.</p>

<p>I have $P$ warehouses on the Cartesian plane. There's a path from each warehouse to every other warehouse, and the distance metric used is the Euclidean distance. In addition, there are $n$ different items. Each item $1 \leq i \leq n$ can be present in any number of warehouses. We have a collector and we are given a starting point $s$ for it, say the origin $(0,0)$. The collector is given an order, i.e. a list of items. Here, we can assume that the list only contains distinct items and only one of each. We must determine the shortest tour starting at $s$ visiting some number of warehouses so that we pick up every item on the order.</p>

<p>Here's a visualization of a randomly generated instance with $P = 35$. Warehouses are represented with circles. Red ones contain item $1$, blue ones item $2$ and green ones item $3$. Given some starting point $s$ and the order ($1,2,3$), we must pick one red, one blue and one green warehouse so the order can be completed. By accident, there are no multi-colored warehouses in this example so they all contain exactly one item. This particular instance is a case of <a href="http://en.wikipedia.org/wiki/Set_TSP_problem" rel="nofollow noreferrer">set-TSP</a>.</p>

<p><img src="https://i.stack.imgur.com/5kKsj.png" alt="An instance of the problem."></p>

<p>I can show that the problem is indeed $\mathcal{NP}$-hard. Consider an instance where each item $i$ is located in a different warehouse $P_i$. The order is such that it contains every item. Now we must visit every warehouse $P_i$ and find the shortest tour doing so. This is equivalent to solving an instance of $\text{TSP}$.</p>

<p>Being so obviously useful at least in the context of logistic, routing and planning, I'm sure this has been studied before. I have two questions:</p>

<ol>
<li>What is the name of the problem?</li>
<li>How well can one hope to approximate the problem (assuming $\mathcal{P} \neq \mathcal{NP}$)? </li>
</ol>

<p>I'm quite happy with the name and/or reference(s) to the problem. Maybe the answer to the second point follows easily or I can find out that myself.</p>
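<p>For cross-checking heuristics on tiny instances, an exhaustive baseline is straightforward: try every choice of one warehouse per ordered item and every visiting sequence. (Names are made up; I assume an open route that starts at $s$ and does not return — the problem statement leaves this open. The cost is exponential, in line with the $\mathcal{NP}$-hardness argument above.)</p>

```python
from itertools import permutations
from math import dist, inf

def best_route(start, warehouses, order):
    """warehouses: dict mapping a point (x, y) to the set of items
    stocked there; order: list of required items. Returns the length
    and stop sequence of the cheapest open route from `start`."""
    options = [[p for p, items in warehouses.items() if i in items]
               for i in order]
    def choices(idx, chosen):
        if idx == len(options):
            yield set(chosen)              # shared warehouses collapse
            return
        for p in options[idx]:
            yield from choices(idx + 1, chosen + [p])
    best, best_seq = inf, None
    for stops in choices(0, []):
        for seq in permutations(stops):
            length = sum(dist(a, b) for a, b in zip((start,) + seq, seq))
            if length < best:
                best, best_seq = length, seq
    return best, best_seq
```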
 | algorithms optimization reference request approximation | 1 |
1,444 | How many possible ways are there? | <p>Suppose I have the following data set of 11 scores:</p>

<pre><code>p=[2, 5, 1 ,2 ,4 ,1 ,6, 5, 2, 2, 1]
</code></pre>

<p>I want to select scores 6, 5, 5, 4, 2, 2 from the data set. How many ways are there?</p>

<p>For the above example answer is: 6 ways</p>

<pre><code>{p[1], p[2], p[4], p[5], p[7], p[8]}
{p[10], p[2], p[4], p[5], p[7], p[8]}
{p[1], p[2], p[10], p[5], p[7], p[8]}
{p[9], p[2], p[4], p[5], p[7], p[8]}
{p[1], p[2], p[9], p[5], p[7], p[8]}
{p[10], p[2], p[9], p[5], p[7], p[8]}
</code></pre>

<p>How can I count the ways in general?</p>
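<p>Since positions holding the same score are interchangeable, the count is a product of binomial coefficients: for each distinct score, choose which of its available occurrences to use. A sketch:</p>

```python
from math import comb
from collections import Counter

def count_selections(data, wanted):
    """Ways to pick the multiset `wanted` from the positions of `data`:
    product over distinct scores of C(available, needed)."""
    have, need = Counter(data), Counter(wanted)
    ways = 1
    for score, k in need.items():
        ways *= comb(have[score], k)   # comb(n, k) is 0 when k > n
    return ways
```

<p>For the example, only the score 2 offers a real choice — two of its four occurrences — giving $\binom{1}{1}\binom{2}{2}\binom{1}{1}\binom{4}{2} = 6$ ways, matching the list above.</p>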
 | combinatorics | 1 |
1,447 | What is most efficient for GCD? | <p>I know that Euclid’s algorithm is the best algorithm for getting the GCD (greatest common divisor) of a list of positive integers.
But in practice you can code this algorithm in various ways. (In my case, I decided to use Java, but C/C++ may be another option.)</p>

<p>I need to use the most efficient code possible in my program.</p>

<p>In recursive mode, you can write:</p>

<pre><code>static long gcd (long a, long b){
 a = Math.abs(a); b = Math.abs(b);
 return (b==0) ? a : gcd(b, a%b);
 }
</code></pre>

<p>And in iterative mode, it looks like this:</p>

<pre><code>static long gcd (long a, long b) {
 long r, i;
 while(b!=0){
 r = a % b;
 a = b;
 b = r;
 }
 return a;
}
</code></pre>

<hr>

<p>There is also this compact variant — despite the name "binary GCD" often attached to it, it is really Euclid's algorithm with an XOR swap, and note that modifying <code>a</code> and <code>b</code> several times within one expression like this is undefined behavior in C:</p>

<pre><code>int gcd (int a, int b)
{
 while(b) b ^= a ^= b ^= a %= b;
 return a;
}
</code></pre>
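<p>For comparison, here is a sketch of the genuine binary GCD (Stein's algorithm), which replaces division with shifts, subtraction and parity tests — the source of its practical speed advantage on hardware where division is slow. (Python for brevity; the structure ports directly to Java or C.)</p>

```python
def binary_gcd(a, b):
    """Stein's algorithm: gcd via shifts and subtraction only."""
    a, b = abs(a), abs(b)
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:        # factor out common powers of two
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:              # make a odd
        a >>= 1
    while b:
        while b & 1 == 0:          # strip factors of two from b
            b >>= 1
        if a > b:
            a, b = b, a            # keep a <= b, both odd
        b -= a                     # difference of two odd numbers is even
    return a << shift
```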
 | algorithms recursion arithmetic | 1 |
1,450 | Prime number CFG and Pumping Lemma | <p>So I have a problem that I'm looking over for an exam that is coming up in my Theory of Computation class. I've had a lot of problems with the <em>pumping lemma</em>, so I was wondering if I might be able to get a comment on what I believe is a valid proof to this problem. From what I have seen online and in our review I don't think this is the customary answer to this problem so I want to know if I am applying the concepts behind the pumping lemma successfully. The problem is <em>not</em> a homework problem and can be found on my professor's previous exams <a href="http://www.cs.ucf.edu/%7Edmarino/ucf/transparency/cot4210/exam/" rel="nofollow noreferrer">here</a> under the fourth problem of his exam given in Fall of 2011, which is...</p>
<blockquote>
<p>Let <span class="math-container">$L = \{0^p \mid \text{\(p\) is a prime number}\}$</span>. Prove that <span class="math-container">$L$</span> is not context-free using the pumping lemma for context-free languages.</p>
</blockquote>
<p>So here is my proof:</p>
<blockquote>
<p>Assume that the pumping length is <span class="math-container">$m$</span>, where <span class="math-container">$m+1$</span> is a prime number. I shall also assume that there is a string <span class="math-container">$uvxyz = 0^{(m/2)}00^{m/2} \in L$</span>. There are two possible positions that do not violate conditions 2 and 3 of the pumping lemma for context languages, being <span class="math-container">$|vy| > 0$</span> and <span class="math-container">$|vxy| \leq m$</span>. These are:</p>
<ol>
<li><p><span class="math-container">$u = 0^{(m/2)}, v = 0, x = 0^{m/2}$</span>, pumping by one results in <span class="math-container">$0^{m/2}000^{m/2}$</span>. Since m/2 + m/2 is m, which is one less than the prime number m+1, it is an even number. m+2 is also an even number and since <span class="math-container">$|0^{m/2}000^{m/2}| = m + 2$</span>, this number of zeroes is also even and thus cannot be prime, resulting in a contradiction.</p>
</li>
<li><p>The other placement is to place the string on the symmetric opposite or <span class="math-container">$x = 0^{m/2}, y = 0, z = 0^{m/2}$</span>. This results in the same contradiction as in case 1.</p>
</li>
</ol>
</blockquote>
<p>The string cannot be placed in the center such that <span class="math-container">$v = 0^{m/2}, x = 0, y = 0^{m/2}$</span> as this would violate condition three or <span class="math-container">$|vxy| \leq m$</span>, since <span class="math-container">$|vxy| = m + 1 > m$</span>.</p>
<p>So my question is essentially, is this a valid proof and if not what is wrong with it?</p>
 | formal languages proof techniques context free pumping lemma | 0 |
1,454 | Can the Bell-LaPadula model emulate the Chinese Wall model? | <p>I have been reading on security policies and the question wether <a href="https://en.wikipedia.org/wiki/Bell-LaPadula_model" rel="nofollow">Bell-LaPadula</a> can be used to implement <a href="https://en.wikipedia.org/wiki/Chinese_wall" rel="nofollow">Chinese Wall</a>. Does anyone know more about it?</p>
 | information theory security access control | 0 |
1,455 | How to use adversary arguments for selection and insertion sort? | <p>I was asked to find the adversary arguments necessary for finding the lower bounds for selection and insertion sort. I could not find a reference to it anywhere.</p>

<p>I have some doubts regarding this. I understand that adversary arguments are usually used for finding lower bounds for certain "problems" rather than "algorithms".</p>

<p>I understand the merging problem. But how could I write one for selection and insertion sort?</p>
 | algorithms algorithm analysis proof techniques lower bounds | 1 |
1,458 | Encoding the sequence 0110 and determining parity, data bit and value | <p>I've been struggling with several Hamming code/error detection questions because the logic behind it doesn't seem to make sense.</p>
<p>eg.1</p>
<p><img src="https://i.stack.imgur.com/v9F4w.png" alt="enter image description here" /></p>
<p>eg.2</p>
<p><img src="https://i.stack.imgur.com/tGxIZ.png" alt="enter image description here" /></p>
<p>I don't really understand the above two examples and the calculations taking place.</p>
<p>How were the conclusions reached concerning the categories of <strong>bin position (dec/bin), parity/data bit and value</strong> in <strong>e.g 1</strong>?</p>
<p>Secondly I don't understand the process taking place in <strong>e.g. 2</strong> at all. Does it follow that:</p>
<blockquote>
<p>General rule: For any code C,</p>
<ul>
<li>errors of less than <span class="math-container">$d(C)$</span> bits can be detected,</li>
<li>errors of less than <span class="math-container">$d(C)/2$</span> bits can be corrected.</li>
</ul>
<p>Definition: A code C with <span class="math-container">$d(C) \geq 3$</span> is called error-correcting.</p>
</blockquote>
<p>If this is correct then how would I put it into practice. Would really appreciate some assistance!</p>
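<p>Since the embedded images are not reproduced here, the conventions below (even parity, parity bits at positions 1, 2 and 4) are assumptions, but they match the usual Hamming(7,4) setup of such examples. Encoding 4 data bits and correcting a single flipped bit then looks like this — the key fact being that the recomputed parity checks, read as a binary number, point directly at the flipped position:</p>

```python
def hamming74_encode(d):
    """Encode data bits [d1, d2, d3, d4] into a 7-bit codeword with
    even-parity bits at positions 1, 2 and 4 (1-based)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(code):
    """Recompute the parity checks; their weighted sum is the 1-based
    position of a single flipped bit, or 0 if the word is consistent."""
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2, 3, 6, 7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4, 5, 6, 7
    syndrome = s1 * 1 + s2 * 2 + s4 * 4
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the offending bit
    return c, syndrome
```

<p>Here $d(C) = 3$, so by the general rule above single-bit errors are corrected and double-bit errors are detected (but miscorrected if one tries to fix them).</p>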
 | coding theory | 1 |
1,460 | Magic Square Check for NxN Matrix - with Minimum Complexity? | <p>Is there any algorithm that works better than $\Theta(n^2)$ to verify whether a square matrix is a magic one (i.e., the sums of all the rows, columns and both diagonals are equal to each other)? 
I did see someone mention an $O(n)$ time on a website a few days ago but could not figure out how.</p>
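<p>For reference, here is the straightforward $\Theta(n^2)$ check of exactly the sum property stated above (distinctness of entries, sometimes also required of magic squares, is not checked). Note that a deterministic exact verifier must read all $n^2$ entries anyway, so $\Theta(n^2)$ is asymptotically optimal, and a true $O(n)$ bound could only hold under extra assumptions such as randomized spot-checking:</p>

```python
def is_magic(m):
    """True iff all rows, columns and both diagonals of the square
    matrix m share one common sum."""
    n = len(m)
    target = sum(m[0])
    rows = all(sum(row) == target for row in m)
    cols = all(sum(m[i][j] for i in range(n)) == target for j in range(n))
    diag = sum(m[i][i] for i in range(n)) == target
    anti = sum(m[i][n - 1 - i] for i in range(n)) == target
    return rows and cols and diag and anti
```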
 | algorithms algorithm analysis | 0 |
1,466 | Circle Intersection with Sweep Line Algorithm | <p>Unfortunately I am still not so strong in understanding the <a href="http://en.wikipedia.org/wiki/Sweep_line_algorithm">Sweep Line Algorithm</a>. I have already read all the papers and textbooks on the topic that I could find, however understanding is still far away. Just in order to make it clearer I try to solve as many exercises as I can. But really interesting and important tasks are still a challenge for me.</p>

<p>The following exercise I found in lecture notes of <a href="http://theory.cs.uiuc.edu/~jeffe/teaching/algorithms/notes/xo-sweepline.pdf">Line Segment Intersection</a> by omnipotent Jeff Erickson.</p>

<blockquote>
 <p><strong>Exercise 2.</strong> Describe and analyze a sweepline algorithm to determine, given $n$ circles in the plane, whether any two intersect, in $O(n \log n)$ time. Each circle is specified by its center and its radius, so the input consists of three arrays $X[1.. n], Y [1.. n]$, and $R[1.. n]$. Be careful to correctly implement the low-level primitives.</p>
</blockquote>

<p>Let's try to make a complex thing easier. What do we know about the intersection of circles, and what analogue can be drawn with the intersection of lines? Two lines might intersect if they are adjacent; which property should two circles have in order to intersect? Let $d$ be the distance between the centers of the circles, and $r_{0}$ and $r_{1}$ the radii of the circles. Consider a few cases:</p>

<ul>
<li><p>Case 1: If $d > r_{0} + r_{1}$ then there are no solutions, the circles are separate.</p></li>
<li><p>Case 2: If $d < |r_{0} - r_{1}|$ then there are no solutions because one circle is contained within the other.</p></li>
<li><p>Case 3: If $d = 0$ and $r_{0} = r_{1}$ then the circles are coincident and there are an infinite number of solutions.</p></li>
</ul>

<p>So it looks like the conditions for intersection are ready; of course, they may be wrong. Please correct me if so.</p>
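<p>The three cases collapse into one comparison; a quick sketch (here tangency counts as intersecting, and for exact low-level primitives one would compare squared quantities instead of taking a square root):</p>

```python
import math

def circles_intersect(c0, c1):
    """Decide whether two circles share at least one point.
    Each circle is (x, y, r). Mirrors the cases above: the circles
    intersect iff |r0 - r1| <= d <= r0 + r1."""
    x0, y0, r0 = c0
    x1, y1, r1 = c1
    d = math.hypot(x1 - x0, y1 - y0)
    return abs(r0 - r1) <= d <= r0 + r1

print(circles_intersect((0, 0, 2), (3, 0, 2)))  # -> True  (overlap)
print(circles_intersect((0, 0, 1), (5, 0, 1)))  # -> False (separate, case 1)
print(circles_intersect((0, 0, 5), (1, 0, 1)))  # -> False (contained, case 2)
```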

<p><strong>Algorithm.</strong> Now we need to find something in common between two intersecting circles. By analogy with line intersection, we need insert and delete conditions for the event queue. Let's say the event points are the x-coordinates of the first and last points that the vertical sweep line touches. At the first point we insert the circle into the <em>status</em>
 and check for intersection (using the 3 cases mentioned above) with the nearest circles; at the last point we delete the circle from the <em>status</em>.</p>

<p>It looks like is enough for sweep line algorithm. If there is something wrong, or may be there is something what should be done different, feel free to share your thoughts with us.</p>

<p><strong>Addendum</strong>:</p>

<p>I insert a circle when the vertical sweep line touches it for the first time, and remove it from the status when the sweep line touches it for the last time. The check for intersection should be done against the nearest previous circle: if we add a circle to the <em>status</em> while another circle, added earlier, is still there, then that previous circle has not yet been "closed", so there might be an intersection.</p>
 | algorithms computational geometry | 1 |
1,467 | Words that have the same right- and left-associative product | <p>I have started to study nondeterministic automata using the book of <a href="https://en.wikipedia.org/wiki/Introduction_to_Automata_Theory,_Languages,_and_Computation" rel="nofollow">Hopcroft and Ullman</a>. I'm stuck on a problem that I found very interesting:</p>

<blockquote>
 <p>Give a non deterministic finite automaton accepting all the strings that
 have the same value when evaluated left to right as right to left by
 multiplying according to the following table:</p>
 
 <p>$\qquad \displaystyle\begin{array}{c|ccc} 
 \times & a & b & c \\
 \hline 
 a & a & a & c \\
 b & c & a & b \\
 c & b & c &a
 \end{array}$</p>
</blockquote>

<p>So if we have the string $abc$,<br>
the product from left to right is $(a \times b) \times c=a \times c=c$ and<br>
the product from right to left is $a \times (b \times c)=a \times b=a$</p>

<p>So $abc$ should not be acceptable to the automaton. To me it's obvious that any string $aa^*$ or $bb^*$ or $cc^*$ is an acceptable string (their right and left evaluations work on the same partial strings). It is easy to give an NFA that describes the left-to-right evaluation, but the problem is that if the machine tries to compute the <em>right-to-left</em> evaluation, I think it needs to know the length of the string (so unbounded memory would be necessary).</p>

<p>So how can a nondeterministic automaton evaluate from right to left in order to compare with the left-to-right evaluation?</p>
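<p>Both evaluation orders are easy to sanity-check with a short script (the table encoding and helper names are mine):</p>

```python
from functools import reduce

# The multiplication table from the exercise: MUL[(row, column)].
MUL = {('a', 'a'): 'a', ('a', 'b'): 'a', ('a', 'c'): 'c',
       ('b', 'a'): 'c', ('b', 'b'): 'a', ('b', 'c'): 'b',
       ('c', 'a'): 'b', ('c', 'b'): 'c', ('c', 'c'): 'a'}

def left_to_right(w):
    # ((w1 x w2) x w3) x ...
    return reduce(lambda x, y: MUL[(x, y)], w)

def right_to_left(w):
    # ... x (w_{n-2} x (w_{n-1} x w_n))
    return reduce(lambda acc, x: MUL[(x, acc)], reversed(w))

print(left_to_right("abc"), right_to_left("abc"))  # -> c a
```

<p>This confirms the worked example: $abc$ evaluates to $c$ one way and $a$ the other, so it is rejected, while any $w \in aa^*$ trivially agrees with itself.</p>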
 | formal languages automata regular languages finite automata nondeterminism | 1 |
1,469 | Null Characters and Splitting the String in the Pumping Lemma | <p>So I'm really struggling with the pumping lemma. I think most of my problems come from not understanding how you can and can't split the string in a pumping lemma question. Here is an example: take the problem of proving that $L = \{w | w$ contains more $0$'s than $1$'s over the language $\{0,1\} \}$ is not regular via the pumping lemma.</p>

<p>So I choose the string $01^{p}0^{p}$. Since this is a regular language pumping lemma problem I know that: </p>

<ol>
<li>for each $i \geq 0$, $xy^{i}z \in A$,</li>
<li>$|y| > 0$, and</li>
<li>$|xy| \leq p$</li>
</ol>

<p>I am a little uncertain about other possibilities though, such as whether $x$ or $z$ can be null (obviously $y$ can't, by condition 2). I assume that this isn't possible, since I don't think preceding or trailing whitespace is considered part of the string, but I'm not sure. <strong>Is it possible for $x$ or $z$ to be null?</strong></p>
 | formal languages regular languages proof techniques pumping lemma | 1 |
1,471 | Looking for a ranking algorithm that favors newer entries | <p>I'm working on a ranking system that will rank entries based on votes that have been cast over a period of time. I'm looking for an algorithm that will calculate a score that is somewhat like an average, except that it favors newer scores over older ones. I was thinking of something along the lines of:</p>

<p>$$\frac{\mathrm{score}_1 +\ 2\cdot \mathrm{score}_2\ +\ \dots +\ n\cdot \mathrm{score}_n}{1 + 2 + \dots + n}$$</p>

<p>I was wondering if there were other algorithms which are usually used for situations like this and if so, could you please explain them?</p>
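<p>A minimal sketch of the linearly weighted average above (the function name is my own):</p>

```python
def weighted_score(scores):
    """Linearly weighted average: the i-th score (1-based, oldest first)
    gets weight i, so newer votes count more -- exactly the formula above."""
    weights = range(1, len(scores) + 1)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

print(weighted_score([1, 1, 5]))  # -> (1*1 + 2*1 + 3*5) / (1+2+3) = 3.0
```

<p>A common alternative with the same "favor the new" flavor is an exponential moving average, $s \leftarrow \alpha x + (1-\alpha)s$, which never needs to store the full vote history.</p>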
 | algorithms data mining | 1 |
1,477 | Dealing with intractability: NP-complete problems | <p>Assume that I am a programmer and I have an NP-complete problem that I need to solve. What methods are available to deal with NPC problems? Is there a survey or something similar on this topic?</p>
 | algorithms reference request np complete efficiency reference question | 1 |
1,478 | Algorithms for two and three dimensional Knapsack | <p>I know that the 2D and 3D Knapsack problems are NPC, but is there any way to solve them in reasonable time if the instances are not very complicated? Would dynamic programming work?</p>

<p>By 2D (3D) Knapsack I mean that I have a square (cube) and a list of objects; all measurements are in centimeters and are at most 20 m.</p>
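<p>As a data point on the dynamic programming question: DP does work for one <em>common simplification</em> of the 2D case, namely maximizing packed value with guillotine (edge-to-edge) cuts and unlimited copies of each object. This is only an illustrative sketch under those assumptions (the function name and item format are my own), not a solver for general free-placement 0/1 packing, which remains hard:</p>

```python
def guillotine_2d(W, H, items):
    """Max total value packable into a W x H rectangle using guillotine
    cuts, with unlimited copies of each item (w, h, value).
    dp[w][h] = best value for a w x h sub-rectangle; O(W*H*(W+H+n))."""
    dp = [[0] * (H + 1) for _ in range(W + 1)]
    for w in range(1, W + 1):
        for h in range(1, H + 1):
            best = 0
            for iw, ih, v in items:            # place one item, waste the rest
                if iw <= w and ih <= h:
                    best = max(best, v)
            for cut in range(1, w // 2 + 1):   # vertical guillotine cut
                best = max(best, dp[cut][h] + dp[w - cut][h])
            for cut in range(1, h // 2 + 1):   # horizontal guillotine cut
                best = max(best, dp[w][cut] + dp[w][h - cut])
            dp[w][h] = best
    return dp[W][H]

print(guillotine_2d(4, 4, [(2, 2, 3), (4, 1, 4)]))  # -> 16 (four 4x1 strips)
```

<p>With centimeter coordinates up to 20 m the table has at most $2000 \times 2000$ cells, which is still feasible; the 3D analogue adds a third dimension and cut direction to the same recurrence, at a much larger cost.</p>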
 | algorithms complexity theory np complete computational geometry knapsack problems | 0 |
1,485 | Complexity of finding the largest $m$ numbers in an array of size $n$ | <p>What follows is my algorithm for doing this in what I believe to be $O(n)$ time, and my proof for that. My professor disagrees that it runs in $O(n)$ and instead thinks that it runs in $\Omega(n^2)$ time. Any comments regarding the proof itself, or the style (i.e. my ideas may be clear but the presentation not), would be appreciated.</p>

<p>The original question:</p>

<blockquote>
 <p>Given $n$ numbers, find the largest $m \leq n$ among them in time $o(n \log n)$. You may not assume anything else about $m$.</p>
</blockquote>

<p>My answer:</p>

<ol>
<li>Sort the first $m$ elements of the array. This takes $O(1)$ time, as this is totally dependent on $m$, not $n$.</li>
<li>Store them in a linked list (maintaining the sorted order). This also takes $O(1)$ time, for the same reason as above.</li>
<li>For every other element in the array, test if it is greater than the least element of the linked list. This takes $O(n)$ time as $n$ comparisons must be done.</li>
<li>If the number is in fact greater, then delete the first element of the linked list (the lowest one) and insert the new number in the location that would keep the list in sorted order. This takes $O(1)$ time because it is bounded by a constant ($m$) above as the list does not grow.</li>
<li>Therefore, the total complexity for the algorithm is $O(n)$.</li>
</ol>

<p>I am aware that using a red-black tree instead of a linked list is more efficient in constant terms (as the constant upper bound is $O(m\cdot \log_2(m))$ as opposed to $m$), and the problem of keeping a pointer to the lowest element of the tree (to facilitate the comparisons) is eminently doable; it just didn't occur to me at the time.</p>

<p>What is my proof missing? Is there a more standard way of presenting it (even if it is incorrect)?</p>
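<p>For what it's worth, the algorithm above can be sketched with a binary min-heap standing in for the sorted linked list — the same idea: one comparison against the current minimum of the $m$ kept elements per array element, with extra work only on replacement:</p>

```python
import heapq

def largest_m(arr, m):
    """Keep the m largest elements seen so far in a min-heap.
    Each of the remaining n - m elements costs one comparison against
    the heap's minimum, plus O(log m) only when it displaces it."""
    heap = list(arr[:m])
    heapq.heapify(heap)                 # build initial structure of size m
    for x in arr[m:]:
        if x > heap[0]:                 # compare with the least of the m kept
            heapq.heapreplace(heap, x)  # pop min, push x: O(log m)
    return sorted(heap, reverse=True)

print(largest_m([5, 1, 9, 3, 7, 8, 2], 3))  # -> [9, 8, 7]
```

<p>This makes the cost structure explicit: $n - m$ comparisons always happen, and the per-replacement cost depends only on $m$.</p>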
 | algorithms time complexity runtime analysis | 1 |
1,490 | Monitoring files in preservation archives | <p>What are efficient and accurate techniques for monitoring the recoverability and integrity of files in very large preservation archives?</p>

<p>In very large archives, the time taken to recompute checksums periodically (scrubbing) is substantial, perhaps taking more than all the available time depending on the read bandwidth available! Also, each access to a preserved file increases the risk of damage due to hardware or software failure. Tapes are most stable in a cold, dark place far from exposure to the hazards of data centers. Disks are most at risk when the read/write head is flying close to the medium. All approaches are probabilistic, so which are most efficient and accurate?</p>

<p>To give the problem specificity, let's assume a fixed probability of local single-bit errors for each medium (one probability for tape, another for disk, SSD, etc) during a standard time period, and ignore all other types of errors (loss of an entire volume, for instance). We can also assume a fixed read bandwidth for each medium.</p>
 | filesystems integrity digital preservation | 0 |
1,493 | Requirements for emulation | <p>What are the complete specifications that must be documented in order to ensure the correct execution of a particular program written in Java? For instance, if one were archiving a program for long-term preservation, and no testing or porting would be done.</p>

<p>I need to be able to compile and execute the Java program, so preserving only the byte code or capturing the whole thing as a VMware image is excluded. The JVM could be saved as a VMware image though, and compiled libraries that are linked to the compiled code are OK, too. However, if there are dependencies on the OS, the architecture of the machine executing the JVM, the networking environment, external libraries, the Java version used, etc., these must all be listed. Some tech leaders in digital preservation claim that any program written in Java will be executable "forever". How can this be done?</p>
 | operating systems computer architecture digital preservation | 1 |
1,494 | Find the longest path from root to leaf in a tree | <p>I have a <a href="https://www.iis.se/docs/DNS-bok-sid-14.jpg" rel="noreferrer">tree</a> (in the graph theory sense), such as the following example:</p>

<p><img src="https://i.stack.imgur.com/sK90D.jpg" alt="enter image description here"></p>

<p>This is a directed tree with one starting node (the root) and many ending nodes (the leaves). Each edge has a length assigned to it.</p>

<p>My question is, how do I find the longest path starting at the root and ending at any of the leaves? The brute-force approach is to check all the root-to-leaf paths, taking the one with maximal length, but I would prefer a more efficient algorithm if there is one.</p>
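<p>For the record, on a tree "checking all root-to-leaf paths" collapses into a single DFS that examines each edge exactly once, so it is already linear in the number of nodes. A recursive sketch, assuming the tree is given as an adjacency dict (my own input format; for very deep trees an explicit stack avoids recursion limits):</p>

```python
def longest_root_leaf(tree, root):
    """tree: dict mapping node -> list of (child, edge_length).
    Returns the maximum total length over all root-to-leaf paths.
    Each edge is visited once, so this is O(|V|) on a tree."""
    children = tree.get(root, [])
    if not children:                    # a leaf: empty path below it
        return 0
    return max(length + longest_root_leaf(tree, child)
               for child, length in children)

tree = {'r': [('a', 2), ('b', 1)],
        'a': [('c', 4)],
        'b': [('d', 10), ('e', 3)]}
print(longest_root_leaf(tree, 'r'))  # -> 11 (path r -> b -> d)
```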
 | algorithms graphs | 1 |
1,495 | What is the most efficient way to compute factorials modulo a prime? | <p>Do you know any algorithm that calculates factorials modulo a prime efficiently?</p>

<p>For example, I want to program:</p>

<pre><code>for(i=0; i<5; i++)
 sum += factorial(p-i) % p;
</code></pre>

<p>But <code>p</code> is a big number (a prime, $p \leq 10^{8}$), too big for applying the factorial directly.</p>

<p>In Python, this task is really easy to write, but I really want to know how to optimize it.</p>
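<p>The usual first step is to reduce modulo $p$ after <em>every</em> multiplication, so intermediate values stay below $p^2$ instead of growing to a huge exact factorial; and since $p$ is prime, Wilson's theorem ($(p-1)! \equiv -1 \pmod p$) gives $(p-i)! \bmod p$ for small $i$ by dividing out a few factors with modular inverses, without the full product. A sketch of the first part:</p>

```python
def factorial_mod(n, p):
    """n! mod p, reducing after every multiplication so intermediate
    values never exceed p^2 (unlike the naive factorial(n) % p)."""
    result = 1
    for k in range(2, n + 1):
        result = (result * k) % p
    return result

# By Wilson's theorem, (p-1)! mod p = p - 1 for prime p:
print(factorial_mod(6, 7))  # -> 6
print(factorial_mod(5, 7))  # -> 1   (5! = 120, 120 mod 7 = 1)
```

<p>The loop is still $O(n)$ per call, so for $n$ near $p \approx 10^8$ the Wilson-based shortcut for $(p-i)!$ is what makes the computation fast.</p>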
 | algorithms efficiency integers | 1 |
1,498 | Is it better to store the magnitude of an arbitrary-precision number in BigEndian or LittleEndian order in an integer array? | <p>I'm implementing a class which provides arbitrary-precision arithmetic (also called "bignum", "BigInteger", etc.).</p>

<p>My questions is about a practical implementation detail:</p>

<p>I'm wondering if there is a significant difference in implementation and computational complexity between an implementation which stores the magnitude in an integer array in BigEndian order vs. LittleEndian order.</p>

<p>My data structure is basically:</p>

<pre><code>class BigInt
 val signum: Int
 val magnitude: Array[Int] // two-complement (unsigned)
</code></pre>

<p>Supported operations are for instance:</p>

<ul>
<li>+, -, * (Long multiplication, Karatsuba, Cook3, Schönhage-Strassen), /, squaring</li>
<li>Conversion to other number types</li>
<li>Comparison, equality, representation as a String</li>
</ul>

<p><em>The implementation is immutable, so every operation will return a new value and will not change any existing one.</em></p>

<p>Feel free to ask for clarifications!</p>
 | data structures arrays | 0 |
1,500 | Complexity of checking whether linear equations have a positive solution | <p>Consider a system of linear equations $Ax=0$, where $A$ is a $n\times n$ matrix with rational entries. Assume that the rank of $A$ is $<n$. What is the complexiy to check
whether it has a solution $x$ such that all entries of $x$ are stricly greater than 0 (namely, $x$ is a positive vector)? Of course, one can use Gauss elimination, but this seems not to be optimal.</p>
 | algorithms complexity theory linear algebra | 0 |
1,502 | Predecessor query where the insertion order is known | <p>Assume I want to insert elements $1$ to $n$ into a data structure exactly once, and perform predecessor queries while inserting these elements (so <code>insert(x)</code> and <code>pred(x)</code> always come in pairs). The predecessor of $x$ is the largest number in the data structure that is smaller than $x$.</p>

<p>The data structure is created by preprocessing the list of insertions. </p>

<p>When I start to insert elements, an adversary decides to delete some of the elements I have inserted, by adding any number of deletion operations between my insertions. </p>

<p>A query input to the data structure is a sequence of insertions and deletions, which is the insertion sequence with deletions inserted. 
The output of the query is the result of the $n$ predecessor queries executed when the elements are inserted. </p>

<p>Can one design a data structure so the query takes $O(n)$?</p>

<p>Here is an example.</p>

<pre><code>Insertions = [1,3,5,4,2]
DS = makeDataStructure(Insertions)//Runs in polynomial time
//add some deletions into insertions
Operations = [1,3,-3,5,-1,4,-5,-4,2,-2]
DS.query(Operations)//this runs in O(n) time
</code></pre>

<p>Assume that <code>-i</code> means "delete $i$", and that <code>pred(x)</code> = 0 if there is nothing before $x$. The result would be:</p>

<pre><code>[pred(1)=0, pred(3)=1, pred(5)=1, pred(4)=0, pred(2)=0]
</code></pre>

<p>For example, the third entry in the result is <code>pred(5)=1</code> instead of 3, because 3 has been deleted by the time 5 is inserted.</p>
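<p>To pin down the semantics, here is my own naive baseline that reproduces the example's expected output. It costs $O(n)$ per operation for the sorted-list edits, so it is nowhere near the $O(n)$ total query time being asked about — the question is whether preprocessing the known insertion order can get there:</p>

```python
import bisect

def simulate(operations):
    """Naive baseline: maintain a sorted list of live elements.
    On insert i (positive), report pred(i), then insert; on -i, delete i."""
    live, out = [], []
    for op in operations:
        if op > 0:
            pos = bisect.bisect_left(live, op)
            out.append(live[pos - 1] if pos else 0)  # 0 = "nothing before"
            bisect.insort(live, op)
        else:
            live.remove(-op)                         # O(n) deletion
    return out

print(simulate([1, 3, -3, 5, -1, 4, -5, -4, 2, -2]))
# -> [0, 1, 1, 0, 0]
```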
 | data structures runtime analysis | 0 |
1,504 | Efficiently computing or approximating the VC-dimension of a neural network | <p>My goal is to solve the following problem, which I have described by its input and output:</p>
<p><strong>Input:</strong></p>
<p>A directed acyclic graph <span class="math-container">$G$</span> with <span class="math-container">$m$</span> nodes, <span class="math-container">$n$</span> sources, and <span class="math-container">$1$</span> sink (<span class="math-container">$m > n \geq 1$</span>).</p>
<p><strong>Output:</strong></p>
<p>The <a href="https://en.wikipedia.org/wiki/Vc_dimension" rel="noreferrer">VC-dimension</a> (or an approximation of it) for the neural network with topology <span class="math-container">$G$</span>.</p>
<p><strong>More specifics</strong>:</p>
<ul>
<li>Each node in <span class="math-container">$G$</span> is a sigmoid neuron. The topology is fixed, but the weights on the edges can be varied by the learning algorithm.</li>
<li>The learning algorithm is fixed (say backpropagation).</li>
<li>The <span class="math-container">$n$</span> source nodes are the input neurons and can only take strings from <span class="math-container">$\{-1,1\}^n$</span> as input.</li>
<li>The sink node is the output unit. It outputs a real value from <span class="math-container">$[-1,1]$</span> that we round up to <span class="math-container">$1$</span> or down to <span class="math-container">$-1$</span> if it is more than a certain fixed threshold <span class="math-container">$\delta$</span> away from <span class="math-container">$0$</span>.</li>
</ul>
<p>The naive approach is simply to try to break more and more points, by attempting to train the network on them. However, this sort of simulation approach is not efficient.</p>
<hr />
<h3>Question</h3>
<p>Is there an efficient way (i.e. in <span class="math-container">$\mathsf{P}$</span> when changed to the decision-problem: is VC-dimension less than input parameter <span class="math-container">$k$</span>?) to compute this function? If not, are there hardness results?</p>
<p>Is there a works-well-in-practice way to compute or approximate this function? If it is an approximation, are there any guarantees on its accuracy?</p>
<h3>Notes</h3>
<p>I asked a <a href="https://stats.stackexchange.com/q/25952/4872">similar question</a> on stats.SE but it generated no interest.</p>
 | algorithms complexity theory machine learning neural networks vc dimension | 0 |
1,507 | Runtime of the optimal greedy $2$-approximation algorithm for the $k$-clustering problem | <p>We are given a set 2-dimensional points $|P| = n$ and an integer $k$. We must find a collection of $k$ circles that enclose all the $n$ points such that the radius of the largest circle is as small as possible. In other words, we must find a set $C = \{ c_1,c_2,\ldots,c_k\}$ of $k$ center points such that the cost function $\text{cost}(C) = \max_i \min_j D(p_i, c_j)$ is minimized. Here, $D$ denotes the Euclidean distance between an input point $p_i$ and a center point $c_j$. Each point assigns itself to the closest cluster center grouping the vertices into $k$ different clusters.</p>

<p>The problem is known as the (discrete) $k$-clustering problem and it is $\text{NP}$-hard. It can be shown with a reduction from the $\text{NP}$-complete dominating set problem that if there exists a $\rho$-approximation algorithm for the problem with $\rho < 2$ then $\text{P} = \text{NP}$. </p>

<p>The optimal $2$-approximation algorithm is very simple and intuitive. One first picks a point $p \in P$ arbitrarily and puts it in the set $C$ of cluster centers. Then one picks the next cluster center so that it is as far away as possible from all previously chosen cluster centers: while $|C| < k$, we repeatedly find a point $j \in P$ for which the distance $D(j,C)$ is maximized and add it to $C$. Once $|C| = k$ we are done.</p>

<p>It is not hard to see that the optimal greedy algorithm runs in $O(nk)$ time. This raises a question: can we achieve $o(nk)$ time? How much better can we do?</p>
 | algorithms computational geometry | 1 |
1,509 | Direct reduction from $st\text{-}non\text{-}connectivity$ to $st\text{-}connectivity$ | <p>We know that <span class="math-container">$st\text{-}non\text{-}connectivity$</span> is in <a href="http://en.wikipedia.org/wiki/NL_%28complexity%29" rel="noreferrer"><span class="math-container">$\mathsf{NL}$</span></a> by <a href="https://en.wikipedia.org/wiki/Immerman%E2%80%93Szelepcs%C3%A9nyi_theorem" rel="noreferrer">Immerman–Szelepcsényi theorem</a> theorem and since <span class="math-container">$st\text{-}connectivity$</span> is <span class="math-container">$\mathsf{NL\text{-}hard}$</span> therefore <span class="math-container">$st\text{-}non\text{-}connectivity$</span> is many-one log-space reducible to <span class="math-container">$st\text{-}connectivity$</span>. But is there a direct/combinatorial reduction that doesn't go through the configuration graph of the Turing machines in <span class="math-container">$\mathsf{NL}$</span>?</p>
<blockquote>
<p><a href="http://en.wikipedia.org/wiki/St-connectivity" rel="noreferrer"><span class="math-container">$\mathsf{stConnectivity}$</span></a> (a.k.a. <span class="math-container">$stPATH$</span>):</p>
<p>Given directed graph <span class="math-container">$G$</span> and vertices <span class="math-container">$s$</span> and <span class="math-container">$t$</span>,</p>
<p>Is there a directed path from vertex <span class="math-container">$s$</span> to vertex <span class="math-container">$t$</span>?</p>
</blockquote>
<hr />
<h3>Clarifications:</h3>
<p>You can assume a graph is given by its adjacency matrix (however this is not essential since standard representations of graphs are log-space convertible to each other.)</p>
<p>It is possible to unpack the proof of <span class="math-container">$\mathsf{NL\text{-}hard}$</span>ness of <span class="math-container">$st\text{-}connectivity$</span> and move it into the proof so that the proof does not use that theorem as a lemma. However this is still the same construction essentially. What I am looking for is <em>not</em> this, I want a conceptually direct reduction. Let me give an analogy with the <span class="math-container">$\mathsf{NP}$</span> case. We can reduce various <span class="math-container">$\mathsf{NP\text{-}complete}$</span> problems to each other by using the fact that they are in <span class="math-container">$\mathsf{NP}$</span> and therefore reduce to <span class="math-container">$SAT$</span>, and <span class="math-container">$SAT$</span> reduces to the other problem. And we can unpack and combine these two reductions to get a direct reduction. However it is often possible to give a conceptually much simpler reduction that doesn't go through this intermediate step (you can remove mentioning it, but it is still there conceptually). For example, to reduce <span class="math-container">$HamPath$</span> or <span class="math-container">$VertexCover$</span> or <span class="math-container">$3\text{-}Coloring$</span> to <span class="math-container">$SAT$</span> we don't say <span class="math-container">$HamPath$</span> is in <span class="math-container">$\mathsf{NP}$</span> and therefore reduces to <span class="math-container">$SAT$</span> since <span class="math-container">$SAT$</span> is <span class="math-container">$\mathsf{NP\text{-}hard}$</span>. We can give a simple intuitive formula that is satisfiable iff the graph has a Hamiltonian path.
As another example, we have reductions from other problems in <span class="math-container">$\mathsf{NL}$</span> to <span class="math-container">$st\text{-}Connectivity$</span> which do not rely on the <span class="math-container">$\mathsf{NL\text{-}complete}$</span>ness of <span class="math-container">$st\text{-}Connectivity$</span>, e.g. <span class="math-container">$Cycle$</span>, <span class="math-container">$StronglyConnected$</span>, etc.; they involve modifications of the input graph (and do not refer to any Turing machine that solves them).</p>
<p>I still don't see any reason why this cannot be done for this one.
I am looking for a reduction of this kind.</p>
<p>It might be the case that this is not possible and any reduction would conceptually go through the <span class="math-container">$\mathsf{NL\text{-}hard}$</span>ness result. However I don't see why that should be the case, why the situation would be different from the <span class="math-container">$\mathsf{NP}$</span> case.
Obviously, to give a negative answer to my question we would need to be more formal about when a proof <em>conceptually</em> includes another proof (which is a proof-theory question that, AFAIK, is not settled in a satisfactory way). However, note that for a positive answer one does not need such a formal definition, and I am hoping that is the case. (I will think about how to formalize what I am asking in a faithful way when I find more free time. Essentially I want a reduction that would work even if we didn't know that the problem is complete for <span class="math-container">$\mathsf{NL}$</span>.)</p>
<p>Using the proof of Immerman–Szelepcsényi theorem is fine, using <span class="math-container">$\mathsf{NL\text{-}complete}$</span>ness of <span class="math-container">$stPATH$</span> and configuration graph of an <span class="math-container">$\mathsf{NL}$</span> machine is what I want to avoid.</p>
 | complexity theory reductions space complexity | 0 |
1,511 | "Dense" regular expressions generate $\Sigma^*$? | <p>Here's a conjecture for regular expressions:</p>

<blockquote>
 <p>For regular expression $R$, let the length $|R|$ be the number of symbols in it,
 ignoring parentheses and operators. E.g. $|0 \cup 1| = |(0 \cup 1)^*| = 2$</p>
 
 <p><strong>Conjecture:</strong> If $|R| > 1$ and $L(R)$ contains every string of length $|R|$ or less, then $L(R) = \Sigma^*$.</p>
</blockquote>

<p>That is, if $L(R)$ is 'dense' up to $R$'s length, then $R$ actually generates everything.</p>

<p>Some things that may be relevant:</p>

<ol>
<li>Only a small part of $R$ is needed to generate all strings. For example in binary, $R = (0 \cup 1)^* \cup S$ will work for any $S$.</li>
<li>There needs to be a Kleene star in $R$ at some point. If there isn't, it will miss some string of size less than $|R|$. </li>
</ol>

<p>It would be nice to see a proof or counterexample. Is there some case where it's obviously wrong that I missed? Has anyone seen this (or something similar) before? </p>
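<p>A brute-force sanity check of the conjecture's <em>premise</em> is easy with Python's <code>re</code> (where <code>|</code> plays the role of $\cup$; note <code>re</code> accepts features beyond regular languages, and this only tests density up to a bound — checking the conclusion $L(R)=\Sigma^*$ would need an automaton equivalence test):</p>

```python
import re
from itertools import product

def dense_up_to(pattern, length, alphabet="01"):
    """Does L(pattern) contain every string over `alphabet` of length
    <= `length`?  (Includes the empty string at n = 0.)"""
    return all(re.fullmatch(pattern, ''.join(w)) is not None
               for n in range(length + 1)
               for w in product(alphabet, repeat=n))

# |R| = 2 for (0|1)*, and it matches everything, so it is dense at 2:
print(dense_up_to(r"(0|1)*", 2))  # -> True
# 0* misses "1", so it is not even dense at length 1:
print(dense_up_to(r"0*", 1))      # -> False
```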
 | formal languages regular languages regular expressions | 1 |
1,514 | Is it possible to create a "Time Capsule" using encryption? | <p>I want to create a digital time capsule which will remain unreadable for some period of time and then become readable. I do not want to rely on any outside service to, for instance, keep the key secret and then reveal it at the required time. Is this possible? If not, is some kind of proof possible that it is not?</p>

<p>One strategy would be based on projections of future computing capabilities, but that is unreliable and makes assumptions about how many resources would be applied to the task.</p>
 | cryptography encryption digital preservation | 1 |
1,516 | Simple paths with halt in between in directed graphs | <p>I have two problems related to paths in a directed graph. Let $G=(V,E)$ be a directed graph with source $s \in V$ and target $t \in V$. Let $v \in V \setminus \{s,t\}$ be another vertex in $G$. </p>

<ol>
<li><p>Find a simple directed path¹ from $s$ to $t$ through $v$. </p></li>
<li><p>Find a simple directed path from $s$ to $t$ that goes through two fixed edges in $G$.</p></li>
</ol>

<p>I do not know if there are polynomial time algorithms for them. Does anyone have solutions or references for them?</p>

<hr>

<ol>
<li>A simple directed path does not allow any vertex to appear more than once. </li>
</ol>
 | algorithms graphs | 0 |
1,517 | If A is mapping reducible to B then the complement of A is mapping reducible to the complement of B | <p>I'm studying for my final in theory of computation, and I'm struggling with the proper way of answering whether this statement is true or false.</p>

<p>By the <a href="https://en.wikipedia.org/wiki/Mapping_reducibility" rel="noreferrer">definition</a> of $\leq_m$ we can construct the following statement, </p>

<p>$(w \in A \iff f(w) \in B) \rightarrow (w \notin A \iff f(w) \notin B)$</p>

<p>This is where I'm stuck. I want to say that since we have such a computable function $f$, it'll only give us the mapping from $A$ to $B$ if there is one, and otherwise it won't.</p>

<p>I don't know how to phrase this correctly, or if I'm even on the right track.</p>
 | complexity theory computability reductions | 1 |
1,521 | How to approach Dynamic graph related problems | <p>I asked this <a href="https://stackoverflow.com/questions/10326446/how-to-approach-dynamic-graph-related-problems">question</a> at generic stackoverflow and I was directed here.</p>

<p>It would be great if someone could explain how to approach partially or fully dynamic graph problems in general.</p>

<p>For example:</p>

<ul>
<li>Find the shortest path between two vertices $(u,v)$ in an undirected weighted graph for $n$ instances, when an edge is removed at each instance.</li>
<li>Find the number of connected components in an undirected graph for $n$ instances, when an edge is removed at each instance, etc.</li>
</ul>

<p>I recently encountered this genre of problems in a programming contest. I searched the web and found a lot of research papers concerning dynamic graphs [1,2]. I read a couple of them and couldn't find anything straightforward (clustering, sparsification, etc.). Sorry for being vague.</p>

<p>I would really appreciate it if someone could provide pointers to help understand these concepts better.</p>

<hr>

<ol>
<li><a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.8372" rel="nofollow noreferrer"><em>Dynamic Graph Algorithms</em></a> by D. Eppstein , Z. Galil , G. F. Italiano (1999)</li>
<li><a href="http://www.lix.polytechnique.fr/~liberti/sppsurvey.pdf" rel="nofollow noreferrer"><em>Shortest paths on dynamic graphs</em></a> by G. Nannicini, L. Liberti (2008)</li>
</ol>
 | algorithms data structures graphs | 0 |
1,525 | Chomsky normal form and regular languages | <p>I'd love your help with the following question:</p>

<blockquote>
 <p>Let $G$ be a context-free grammar in <strong>Chomsky normal form</strong> with $k$
 variables.</p>
 
 <p>Is the language $B = \{ w \in L(G) : |w| >2^k \}$ regular ?</p>
</blockquote>

<p>What is it about the amount of variables and the Chomsky normal form that is supposed to help me solve this question? I tried to look it up on the web, but besides information about the special form itself, I didn't find an answer to my question.</p>

<p>The answer for the question is that $B$ might be regular.</p>
 | formal languages regular languages context free formal grammars | 1 |
1,526 | All NP problems reduce to NP-complete problems: so how can NP problems not be NP-complete? | <p>My book states this</p>

<blockquote>
 <ul>
 <li>If a decision problem B is in P and
 A reduces to B,
 then decision problem A is in P.</li>
 <li>A decision problem B is NP-complete if
 B is in NP and
 for every problem in A in NP, A reduces to B.</li>
 <li>A decision problem C is NP-complete if
 C is in NP and
 for some NP-complete problem B, B reduces to C.</li>
 </ul>
</blockquote>

<p>So my questions are</p>

<blockquote>
 <ol>
 <li>If B or C is in NP-complete, and all problems in NP reduce to an NP-complete problem, using the first rule, how can any NP problem not be NP complete?</li>
 <li>If A reduces to B, does B reduce to A?</li>
 </ol>
</blockquote>
 | complexity theory np complete decision problem | 0 |
1,527 | Finding the flaw in a reduction from Hamiltonian cycle to Hamiltonian cycle on bipartitie graphs | <p>I'm trying to solve a problem for class that is stated like so:</p>

<blockquote>
 <p>A bipartite graph is an undirected graph in which every cycle has even
 length. We attempt to show that the Hamiltonian cycle (a cycle that
 passes through each node exactly once) problem polynomially reduces to
 the Hamiltonian cycle problem in bipartite graphs. We need a function
 $T: \{\text{graphs}\} \to \{\text{bipartite graphs}\}$ such that $T$ can be computed in
 polynomial time and for any graph $G$, $G$ has Hamiltonian cycle iff $T(G)$
 has a Hamiltonian cycle. Let $T(G)$ be the bipartite graph obtained by
 inserting a new vertex on every edge. What is wrong with this
 transformation?</p>
</blockquote>

<p>I think the problem with the transformation is that for $T(G)$ you need to insert an edge between each pair of vertices and not just insert a new vertex on every edge. I'm actually a bit stumped by this one. Any advice would be much appreciated! </p>
 | complexity theory np complete reductions | 0 |
1,531 | Is Logical Min-Cut NP-Complete? | <h3>Logical Min Cut (LMC) problem definition</h3>

<p>Suppose that $G = (V, E)$ is an unweighted digraph, $s$ and $t$ are two vertices of $V$, and $t$ is reachable from $s$. The LMC Problem studies how we can make $t$ unreachable from $s$ by the removal of some edges of $G$ following the following constraints:</p>

<ol>
<li>The number of the removed edges must be minimal.</li>
<li>We cannot remove every exit edge of any vertex of $G$ (i.e., no vertex with outgoing edges can have all its outgoing edges removed).</li>
</ol>

<p>This second constraint is called logical removal. So we look for a <em>logical, minimal removal</em> of some edges of $G$ such that $t$ would be unreachable from $s$.</p>

<h3>Solution attempts</h3>

<p>If we ignore the logical removal constraint of LMC problem, it will be the min-cut problem in the unweighted digraph $G$, so it will be solvable polynomially (max-flow min-cut theorem).</p>

<p>If we ignore the minimal removal constraint of the LMC problem, it will be again solvable polynomially in a DAG: find a vertex $k$ such that $k$ is reachable from $s$ and $t$ is not reachable from $k$. Then consider a path $p$ which is an arbitrary path from $s$ to $k$. Now consider the path $p$ as a subgraph of $G$: the answer will be every exit edge of the subgraph $p$. It is obvious that the vertex $k$ can be found by DFS in $G$ in polynomial time. Unfortunately this algorithm <a href="https://cs.stackexchange.com/questions/1531/is-logical-min-cut-np-complete#comment13693_1531">doesn't work in general</a> for an arbitrary directed graph.</p>
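<p>For concreteness, the path-based construction just described (which ignores minimality) can be sketched as follows; the adjacency-dict representation and one reading of "exit edges" are my own, namely that edges leaving $k$ itself need not be cut, since nothing reachable from $k$ reaches $t$:</p>

```python
from collections import deque

def bfs_order(adj, src):
    """Vertices reachable from src (src included), in BFS order."""
    seen, order, q = {src}, [src], deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                order.append(v)
                q.append(v)
    return order

def logical_cut_no_minimality(adj, s, t):
    """Make t unreachable from s without emptying any vertex's out-edge set.
    Assumes a vertex k with the required property exists."""
    # k: reachable from s, but t is not reachable from k.
    k = next(v for v in bfs_order(adj, s) if t not in bfs_order(adj, v))
    # BFS parent pointers give an arbitrary path p from s to k.
    parent, q = {s: None}, deque([s])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u
                q.append(v)
    path, v = [], k
    while v is not None:
        path.append(v)
        v = parent[v]
    path.reverse()
    on_path = set(path)
    # Cut every edge that leaves the path subgraph, except edges out of k:
    # everything reachable from k misses t anyway.
    return [(u, v) for u in path[:-1] for v in adj.get(u, []) if v not in on_path]
```

<p>After the removal, every path vertex other than $k$ still keeps its path edge and $k$'s out-edges are untouched, so the removal is logical.</p>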

<p>I tried to solve the LMC problem with a dynamic programming technique, but the number of required states became exponential. Moreover, I tried to reduce some NP-complete problems such as 3-SAT, Max-2-SAT, Max-Cut, and Clique to the LMC problem, but I didn't manage to find a reduction.</p>

<p>I personally think that the LMC problem is NP-Complete even if $G$ is a binary DAG (i.e., a DAG where no node has out-degree greater than 2).</p>

<h3>Questions</h3>

<ol>
<li>Is the LMC problem NP-Complete in an arbitrary digraph $G$? (main question)</li>
<li>Is the LMC problem NP-Complete in an arbitrary DAG $G$?</li>
<li>Is the LMC problem NP-Complete in an arbitrary binary DAG $G$?</li>
</ol>
 | complexity theory graphs np complete | 1 |
1,536 | Closure against the operator $A(L)=\{ww^Rw \mid w \in L \wedge |w| \lt 2007\}$ | <p>I would like your help with the following question:</p>

<blockquote>
 <p>Let $L$ be a language, and operator $A(L)=\{\,ww^Rw \mid w \in L\ \wedge\ |w| \lt 2007\,\}$ where $x^R$ is the reversed string of $x$. Which of the
 following statements are correct?</p>
 
 <ol>
 <li>If $L$ is regular so $A(L)$ is regular.</li>
 <li>If $L$ is a CFL which is not regular then $A(L)$ is CFL which is not regular.</li>
 <li>If $L$ is a CFL which is not regular, then $A(L)$ is a CFL which may or may not be regular.</li>
 <li>If $L$ is not a CFL then $A(L)$ is not CFL.</li>
 </ol>
</blockquote>

<p>How does the fact that $|w|\lt 2007$ help me with the decision?
For (2) I can choose $0^n1^n$ and I get $0^n1^{2n}0^{2n}1^n$, which is not regular, but for (3) and (4) I can't find examples to refute them. The answer is 3, but I can't understand why, since $A(L)= ww^R \circ w$ but $ww^R$ is not regular.</p>
 | formal languages regular languages context free closure properties | 1 |
1,540 | Finding a worst case of heap sort | <p>I'm working on problem H in the <a href="http://neerc.ifmo.ru/past/2004/problems/problems.pdf" rel="noreferrer">ACM ICPC 2004–2005 Northeastern European contest</a>.</p>

<p>The problem is basically to find the worst case that produces a maximal number of exchanges in the algorithm (sift down) to build the heap.</p>

<ul>
<li>Input: Input file contains $n$ ($1 \le n \le 50{,}000$).</li>
<li>Output: Output the array containing $n$ different integer numbers from $1$ to $n$, such that it is a heap, and when converting it to a sorted array, the total number of exchanges in sifting operations is maximal possible.</li>
</ul>

<p>Sample input: <code>6</code><br>
Corresponding output: <code>6 5 3 2 4 1</code></p>

<p>And the basic outputs for the smallest inputs:</p>

<pre><code>[2, 1] 
[3, 2, 1] 
[4, 3, 1, 2] 
[5, 4, 3, 2, 1] 
[6, 5, 3, 4, 1, 2]
</code></pre>
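<p>To experiment with candidate arrays, one can simulate the extraction phase of heapsort and count the exchanges made while sifting down (whether the judge also counts the root/last swap is an assumption I am not sure about, so that swap is kept separate here):</p>

```python
def sift_down_exchanges(heap):
    """Heapsort an array that is already a max-heap and return
    (number of exchanges made inside sift-down, the sorted array)."""
    a = list(heap)
    swaps = 0

    def sift(i, size):
        nonlocal swaps
        while True:
            left, right, biggest = 2 * i + 1, 2 * i + 2, i
            if left < size and a[left] > a[biggest]:
                biggest = left
            if right < size and a[right] > a[biggest]:
                biggest = right
            if biggest == i:
                return
            a[i], a[biggest] = a[biggest], a[i]
            swaps += 1
            i = biggest

    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]   # move current maximum into place
        sift(0, end)
    return swaps, a
```

<p>On the sample answer <code>6 5 3 2 4 1</code> this counts 6 sift-down exchanges and produces the sorted array, which makes it easy to compare candidate heaps by brute force for small n.</p>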
 | algorithms data structures algorithm analysis sorting | 0 |
1,542 | Approximation algorithm for TSP variant, fixed start and end anywhere but starting point + multiple visits at each vertex ALLOWED | <p>NOTE: Due to the fact that the trip does not end at the same place it started and also the fact that every point can be visited more than once as long as I still visit all of them, this is not really a TSP variant, but I put it due to lack of a better definition of the problem.</p>

<p>This problem was originally posted on StackOverflow, but I was told that this would be a better place. I got one pointer, which converted the problem from non-metric to a metric one.</p>

<p>So..</p>

<p>Suppose I am going on a hiking trip with n points of interest. These points are all connected by hiking trails. I have a map showing all trails with their distances, giving me a directed graph.</p>

<p>My problem is how to approximate a tour that starts at a point A and visits all n points of interest, ending the tour anywhere but the point where I started; I want the tour to be as short as possible.</p>

<p>Due to the nature of hiking, I figured this would sadly not be a symmetric problem (or can I convert my asymmetric graph to a symmetric one?), since going from high to low altitude is obviously easier than the other way around.</p>

<p>Since there are no restrictions regarding how many times I visit each point, as long as I visit all of them, it does not matter if the shortest path from a to d goes through b and c. Is this enough to say that triangle inequality holds and thus I have a metric problem?</p>

<p>I believe my problem is easier than TSP, so those algorithms do not fit this problem. I thought about using a minimum spanning tree, but I have a hard time applying it to this problem, which under the circumstances, should be a metric asymmetric directed graph?</p>

<p>What I really want are some pointers as to how I can come up with an approximation algorithm that will find a near-optimal tour through all n points.</p>
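<p>One common starting point (a suggestion only; no approximation guarantee is claimed for this exact variant): since revisiting points is allowed, take the metric closure of the digraph with Floyd-Warshall, i.e. replace every pairwise distance by the shortest-path distance. The closure satisfies the (directed) triangle inequality even when the original weights do not, and any tour heuristic, such as nearest neighbor, can then be run on it:</p>

```python
INF = float('inf')

def metric_closure(w):
    """All-pairs shortest-path distances (Floyd-Warshall) of a weight matrix."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def nearest_neighbor_walk(w, start):
    """Greedy open walk from `start` visiting all points, measured in the
    metric closure (so moving between points may implicitly revisit others)."""
    d = metric_closure(w)
    unvisited = set(range(len(w))) - {start}
    order, cost, cur = [start], 0, start
    while unvisited:
        nxt = min(unvisited, key=lambda v: d[cur][v])
        cost += d[cur][nxt]
        order.append(nxt)
        unvisited.remove(nxt)
        cur = nxt
    return order, cost
```

<p>The walk always ends at a point other than the start, and recovering the actual trail just means expanding each closure step back into its underlying shortest path.</p>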
 | algorithms complexity theory graphs approximation | 1 |
1,547 | Closure against right quotient with a fixed language | <p>I'd really love your help with the following:</p>

<p>For <em>any</em> fixed $L_2$ I need to decide whether there is closure under the following operators:</p>

<ol>
<li><p>$A_r(L)=\{x \mid \exists y \in L_2 : xy \in L\}$</p></li>
<li><p>$A_l(L)=\{x \mid \exists y \in L : xy \in L_2\}$.</p></li>
</ol>

<p>The relevant options are:</p>

<ol>
<li><p>Regular languages are closed under $A_l$ resp. $A_r$, for any language $L_2$ </p></li>
<li><p>For some languages $L_2$, regular languages are closed under $A_l$ resp. $A_r$, and for some languages $L_2$, regular languages are not closed under $A_l$ resp. $A_r$.</p></li>
</ol>

<p>I believed that the answer for (1) should be (2), because given a word $w \in L$ with $w=xy$, I can build an automaton that guesses where $x$ ends and $y$ begins, but then it needs to verify that $y$ belongs to $L_2$, and if $L_2$ is not regular, how would it do that?<br>
The answer, however, is (1).</p>

<p>What should I do in order to analyze those operators correctly and to determine if the regular languages are closed under them or not?</p>
 | formal languages regular languages closure properties | 1 |
1,548 | Techniques/tools for constructing hard instances of a puzzle game | <blockquote>
 <p>Are there techniques and/or software tools that can be used to
 construct hard instances of a simple puzzle game (or a simple planning
 problem)?</p>
</blockquote>

<p>By "hard" I mean that any solution of the instance is "long" with respect to the input size.</p>

<p>What I have in mind:</p>

<ul>
<li>model the puzzle game using a <a href="http://en.wikipedia.org/wiki/Constraint_programming" rel="nofollow">constraint programming</a> language (or even <a href="http://en.wikipedia.org/wiki/STRIPS" rel="nofollow">STRIPS</a>);</li>
<li>the tool starts with assigning some random values to the model parameters to construct an instance;</li>
<li>solve the instance and if solutions are "easy" (shorter than a fixed length) or no solution is found in a specified amount of time, try to adjust it using some heuristics (or other techniques such as GA or simulated annealing).</li>
</ul>
 | algorithms | 0 |
1,549 | Operations under which the class of undecidable languages isn't closed | <p>Do there exist undecidable languages such that their union/intersection/concatenated language is decidable? What is the physical interpretation of such example because in general, undecidable languages are not closed under these operations?</p>

<p>What can we say about the Kleene closure? Do we have examples for it too? I.e., can the Kleene closure of an undecidable language be decidable?</p>

<p>Also, can we generalize such undecidable classes?</p>
 | formal languages undecidability closure properties | 0 |
1,556 | Is $A=\{ w \in \{a,b,c\}^* \mid \#_a(w)+ 2\#_b(w) = 3\#_c(w)\}$ a CFG? | <p>I wonder whether the following language is a context free language:
$$A = \{w \in \{a,b,c\}^* \mid \#_a(w) + 2\#_b(w) = 3\#_c(w)\}$$
where $\#_x(w)$ is the number of occurrences of $x$ in $w$.
I can't find any word that would be useful for refuting it via the pumping lemma; on the other hand, I haven't been able to find a context-free grammar generating it. It looks like it has to remember more than one PDA can handle.</p>

<p>What do you say?</p>
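<p>One observation that may help: membership in $A$ depends only on the single running total $\#_a(w) + 2\#_b(w) - 3\#_c(w)$, which is counter-like rather than genuinely stack-like (a PDA can simulate such a counter, tracking its sign with distinct stack symbols). A membership check makes the invariant explicit; this is only an illustration, not a grammar:</p>

```python
def in_A(w):
    """True iff #a(w) + 2*#b(w) == 3*#c(w), tracked as one running counter."""
    counter = 0
    for ch in w:
        counter += {'a': 1, 'b': 2, 'c': -3}[ch]
    return counter == 0
```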
 | formal languages context free | 1 |
1,560 | Can a program language be malleable enough to allow programs to extend language semantics | <p>With reference to features in languages like ruby (and javascript), which allow a programmer to extend/override classes any time after defining it (including classes like String), is it theoretically feasible to design a language which can allow programs to later on extend its semantics.</p>

<p>ex: Ruby does not allow multiple inheritance, yet can I extend/override the default language behaviour to allow an implementation of multiple inheritance. </p>

<p>Are there any other languages which allow this? Is this actually a subject of concern for language designers? Looking at the choice of using ruby for building rails framework for web application development, such languages may be very powerful to allow designing frameworks(or DSLs) for wide variety of applications.</p>
 | programming languages semantics | 1 |
1,562 | Turing reducibility implies mapping reducibility | <p>The question is whether the following statement is true or false:</p>

<p>$A \leq_T B \implies A \leq_m B$</p>

<p>I know that if $A \leq_T B$ then there is an oracle which can decide A relative to B. I know that this is not enough to say that there is a computable function from A to B that can satisfy the reduction.</p>

<p>I don't know how to word this in the proper way or if what I'm saying is enough to say that the statement is false. How would I go about showing this?</p>

<p>EDIT: This is not a homework problem per se; I'm reviewing for a test.
Here $\leq_T$ is <a href="http://en.wikipedia.org/wiki/Turing_reduction" rel="nofollow">Turing reducibility</a> and $\leq_m$ is <a href="http://en.wikipedia.org/wiki/Mapping_reduction" rel="nofollow">mapping reducibility</a>.</p>
 | computability reductions turing machines | 1 |
1,567 | Running time - Linked Lists Polynomial | <p>I have developed two algorithms and now they are asking me to find their running time.
The problem is to develop a singly linked list version for manipulating polynomials. The two main operations are <em>addition</em> and <em>multiplication</em>.</p>

<p>In general for lists the running for these two operations are ($x,y$ are the lists lengths):</p>

<ul>
<li>Addition: Time $O(x+y)$, space $O(x+y)$</li>
<li>Multiplication: Time $O(xy \log(xy))$, space $O(xy)$</li>
</ul>

<p>Can someone help me to find the running times of my algorithms?
I think for the first algorithm it is like stated above $O(x+y)$, for the second one I have two nested loops and two lists so it should be $O(xy)$, but why the $O(xy \log(xy))$ above?</p>

<p>These are the algorithms I developed (in Pseudocode):</p>

<pre><code> PolynomialAdd(Poly1, Poly2):
 Degree := MaxDegree(Poly1.head, Poly2.head);
 while (Degree >=0) do:
 Node1 := Poly1.head;
 while (Node1 IS NOT NIL) do:
 if(Node1.Deg = Degree) then break;
 else Node1 = Node1.next;
 Node2 := Poly2.head;
 while (Node2 IS NOT NIL) do:
 if(Node2.Deg = Degree) then break;
 else Node2 = Node2.next;
 if (Node1 IS NOT NIL AND Node2 IS NOT NIL) then
 PolyResult.insertTerm( Node1.Coeff + Node2.Coeff, Node1.Deg);
 else if (Node1 IS NOT NIL) then
 PolyResult.insertTerm(Node1.Coeff, Node1.Deg);
 else if (Node2 IS NOT NIL) then
 PolyResult.insertTerm(Node2.Coeff, Node2.Deg);
 Degree := Degree – 1;
 return PolyResult; 

 PolynomialMul(Poly1, Poly2): 
 Node1 := Poly1.head;
 while (Node1 IS NOT NIL) do:
 Node2 = Poly2.head;
 while (Node2 IS NOT NIL) do:
 PolyResult.insertTerm(Node1.Coeff * Node2.Coeff, 
 Node1.Deg + Node2.Deg);
 Node2 = Node2.next; 
 Node1 = Node1.next;
 return PolyResult;
</code></pre>

<p><code>InsertTerm</code> inserts the term in the correct place depending on the degree of the term. </p>

<pre><code> InsertTerm(Coeff, Deg):
 NewNode.Coeff := Coeff;
 NewNode.Deg := Deg;
 if List.head = NIL then
 List.head := NewNode;
 else if NewNode.Deg > List.head.Deg then
 NewNode.next := List.head;
 List.head := NewNode;
 else if NewNode.Deg = List.head.Deg then 
 AddCoeff(NewNode, List.head);
 else
 Go through the List till find the same Degree and summing up the coefficient OR
 adding a new Term in the right position if Degree not present;
</code></pre>
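<p>For contrast with the $O(xy \log(xy))$ bound, here is a sketch (not using the assignment's linked-list representation) that accumulates product terms in a hash map, so each of the $xy$ term products costs expected $O(1)$ instead of an ordered insertion; only the final sort over the distinct degrees adds more:</p>

```python
from collections import defaultdict

def poly_mul(p, q):
    """p, q: lists of (coeff, deg) pairs. Returns the product, sorted by
    descending degree, with zero coefficients dropped."""
    acc = defaultdict(int)
    for c1, d1 in p:
        for c2, d2 in q:
            acc[d1 + d2] += c1 * c2       # O(1) expected per term product
    return [(c, d) for d, c in sorted(acc.items(), reverse=True) if c != 0]
```

<p>The $\log$ factor in the quoted bound comes precisely from keeping the result ordered while inserting each of the $xy$ partial terms; deferring the ordering to a single final sort removes it in expectation.</p>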
 | algorithms algorithm analysis runtime analysis | 1 |
1,576 | Master theorem and constants independent of $n$ | <p>I applied the Master theorem to a recurrence for a running time I encountered (this is a simplified version):</p>

<p>$$T(n)=4T(n/2)+O(r)$$</p>

<p>$r$ is independent of $n$. Case 1 of the Master theorem applies and tells us that $T(n)=O(n^2)$.</p>

<p>However, this hides a constant dependent on $r$ in the big-oh notation: our recurrence has depth $O(\log_2 n)$ so at the final level we have $O(4^{\log_2 n})=O(n^2)$ subproblems, each of which takes $O(r)$ time to be handled. This means the actual running time is $O(n^2 r)$ (or worse: this analysis only talks about the lowest level).</p>

<p>This is my actual recursion:</p>

<p>$$T(n)=r^2T(n/r)+O(nr^2)$$</p>

<p>Is there a method similar to the Master theorem for these kinds of recursions?</p>
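<p>A recursion-tree unrolling of the actual recurrence makes the $r$-dependence explicit (a sketch, assuming $T(1)=O(1)$, $r \ge 2$, and that $n$ is a power of $r$):</p>

```latex
% Level i has r^{2i} subproblems of size n/r^i, each costing O((n/r^i) r^2),
% so the total cost of level i is r^{2i} * (n/r^i) * r^2 = n * r^{i+2}.
\begin{align*}
T(n) &= \sum_{i=0}^{\log_r n} O\!\left(n\, r^{\,i+2}\right)
      = n\, r^{2} \cdot O\!\left(r^{\log_r n}\right)
      = n\, r^{2} \cdot O(n)
      = O\!\left(n^{2} r^{2}\right).
\end{align*}
```

<p>The geometric sum is dominated by its last term, so, as with the simplified recurrence, the Master theorem's $O(n^2)$-style answer hides a factor polynomial in $r$.</p>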
 | algorithm analysis asymptotics recurrence relation mathematical analysis master theorem | 0 |
1,577 | How do I test if a polygon is monotone with respect to a line? | <p>It's well known that <a href="http://en.wikipedia.org/wiki/Monotone_polygon" rel="nofollow noreferrer">monotone polygons</a> play a crucial role in <a href="http://en.wikipedia.org/wiki/Polygon_triangulation" rel="nofollow noreferrer">polygon triangulation</a>. </p>

<blockquote>
 <p><strong>Definition:</strong> A polygon $P$ in the plane is called monotone with respect to a straight line $L$, if every line orthogonal to $L$ intersects $P$ at most twice.</p>
</blockquote>

<p>Given a line $L$ and a polygon $P$, is there an efficient algorithm to determine if a polygon $P$ is monotone with respect to $L$?</p>
 | algorithms computational geometry | 1 |
1,579 | Is using a more informed heuristic guaranteed to expand fewer nodes of the search space? | <p>I'm reading through the <a href="http://www.cs.rmit.edu.au/AI-Search/Courseware/Slides1/" rel="noreferrer">RMIT course notes on state space search</a>.
Consider a state space $S$, a set of nodes in which we look for an element having a certain property.
A <a href="http://www.cs.rmit.edu.au/AI-Search/Courseware/Slides1/07ImprovedMethods/07bHeurFunctions/" rel="noreferrer">heuristic function</a> $h:S\to\mathbb{R}$ measures how promising a node is.</p>

<p>$h_2$ is said to <em>dominate</em> (or to be more informed than) $h_1$ if $h_2(n) \ge h_1(n)$ for every node $n$. How does this definition imply that using $h_2$ will lead to expanding fewer nodes? Not only fewer, but a subset of the nodes expanded using $h_1$.</p>

<p>In Luger '02 I found the explanation:</p>

<blockquote>
 <p>This can be verified by assuming the opposite (that there is at least one state expanded by $h_2$ and not by $h_1$). But since $h_2$ is more informed than $h_1$, for all $n$, $h_1(n) \le h_2(n)$, and both are bounded above by $h^*$, our assumption is contradictory. </p>
</blockquote>

<p>But I didn't quite get it.</p>
 | artificial intelligence heuristics search problem | 0 |
1,580 | Distributed vs parallel computing | <p>I often hear people talking about <em>parallel</em> computing and <em>distributed</em> computing, but I'm under the impression that there is no clear boundary between the 2, and people tend to confuse that pretty easily, while I believe it is very different:</p>

<ul>
<li><em>Parallel</em> computing is more tightly coupled to multi-threading, or how to make full use of a single CPU.</li>
<li><em>Distributed</em> computing refers to the notion of divide and conquer, executing sub-tasks on different machines and then merging the results.</li>
</ul>

<p>However, since we stepped into the <em>Big Data</em> era, it seems the distinction is indeed melting, and most systems today use a combination of parallel and distributed computing.</p>

<p>An example I use in my day-to-day job is Hadoop with the Map/Reduce paradigm, a clearly distributed system with workers executing tasks on different machines, but also taking full advantage of each machine with some parallel computing.</p>

<p>I would like some advice on how exactly to make the distinction in today's world, and on whether we can still talk about parallel computing or the clear distinction is gone. To me it seems distributed computing has grown a lot over the past years, while parallel computing seems to stagnate, which could probably explain why I hear much more talk about distributing computations than about parallelizing them.</p>
 | terminology distributed systems parallel computing | 1 |
1,586 | Can we show a language is not computably enumerable by showing there is no verifier for it? | <p>One of the definitions of a computably enumerable (c.e., equivalent to recursively enumerable, equivalent to semidecidable) set is the following:</p>

<blockquote>
 <p>$A \subseteq \Sigma^*$ is c.e. iff there is a decidable language $V\subseteq \Sigma^*$ (called verifier) s.t. 
 for all $x\in \Sigma^*$, </p>
 
 <p>$x\in A$ iff there exists a $y\in\Sigma^*$ s.t. $\langle x, y \rangle \in V$.</p>
</blockquote>

<p>So one way to show that a language is not c.e. is to show that there is no decidable verifier $V$ for it. Is this method useful to show that languages are not c.e. in practice?</p>
 | computability proof techniques undecidability | 0 |
1,589 | Building ideal skip lists | <p>I'm trying to find the best algorithm for converting an “ordinary” linked list into an “ideal" skip list. </p>

<p>The definition of an “ideal skip list” is that in the first level we'll have all the elements, half of them in the next level, a quarter of them in the level after that, and so on.</p>

<p>I'm thinking about an $\mathcal{O}(n)$ run-time algorithm that throws a coin for each node in the original linked list to decide whether that node should be promoted, creating a duplicate of the node at the higher level. This algorithm should work in $\mathcal{O}(n)$; is there any better algorithm? </p>
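<p>For what it's worth, a coin flip per node gives an ordinary randomized skip list rather than the "ideal" shape defined above. A deterministic alternative that does match the definition (a sketch over the node values only; linking the duplicate nodes between levels is omitted) is to promote every second element of each level, which still costs $n + n/2 + n/4 + \dots = \mathcal{O}(n)$ in total:</p>

```python
def build_ideal_levels(values):
    """Levels of an ideal skip list for a sorted sequence: level 0 holds
    every element, and each higher level keeps every second element of
    the level below, until a level has a single element."""
    levels = [list(values)]
    while len(levels[-1]) > 1:
        levels.append(levels[-1][::2])   # promote every second node
    return levels
```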
 | algorithms data structures randomized algorithms lists | 0 |
1,591 | Finding the maximum bandwidth along a single path in a network | <p>I am trying to search for an algorithm that can tell me which node has the highest download (or upload) capacity given a weighted directed graph, where weights correspond to individual link bandwidths. I have looked at the maximal flow problem and at the Edmond-Karp algorithm. My questions are the following: </p>

<ol>
<li>Edmonds-Karp just tells us how much total throughput we can get at the sink from the source if all available paths may be used. Correct?</li>
<li>Edmonds-Karp does not tell us which single path can give us the maximum flow. Correct?</li>
</ol>
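<p>Note that if the goal is the best <em>single</em> path, i.e. maximizing the bottleneck (minimum-bandwidth) edge from source to sink, that is the widest-path problem rather than max-flow, and a Dijkstra variant solves it directly. A sketch with an adjacency-dict representation of my own choosing:</p>

```python
import heapq

def widest_path(adj, s, t):
    """Maximum bottleneck bandwidth of any single s->t path.
    adj: {u: [(v, bandwidth), ...]}; Dijkstra with min() in place of +."""
    best = {s: float('inf')}         # best known bottleneck into each vertex
    heap = [(-best[s], s)]           # max-heap via negated widths
    while heap:
        neg_width, u = heapq.heappop(heap)
        width = -neg_width
        if u == t:
            return width
        if width < best.get(u, 0):   # stale heap entry
            continue
        for v, bandwidth in adj.get(u, []):
            w = min(width, bandwidth)
            if w > best.get(v, 0):
                best[v] = w
                heapq.heappush(heap, (-w, v))
    return 0                         # t unreachable from s
```

<p>This complements Edmonds-Karp: max-flow aggregates throughput over many paths, while the widest path reports what one path alone can carry.</p>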
 | algorithms graphs network flow | 1 |
1,592 | Why does $A(L)= \{ w_1w_2: |w_1|=|w_2|$ and $w_1, w_2^R \in L \}$ generate a context free language for regular $L$? | <p>How can I prove that the language that the operator $A$ defines for regular language $L$ is a context free language.</p>

<p>$A(L)= \{ w_1w_2: |w_1|=|w_2|$ and $w_1, w_2^R \in L \}$, where $x^R$ is the reversed form of $x$. </p>

<p>I understand that since $L$ is regular, so is $L^R$. Also, on my way to a CFG, I can generate $w_1$ from the grammar of $L$ concatenated with the grammar of $L^R$ for $w_2$. So far I have a CFG, but what guarantees that $|w_1|=|w_2|$? How can I generate a grammar that also enforces that, in addition to the other conditions?</p>
 | formal languages regular languages context free formal grammars | 1 |
1,595 | What is the difference between classical crypto and post-quantum crypto? | <p>Will there be a need to change the definitions of security if we have quantum computers? What cryptographic constructions will break? Do you know a survey or an article that explains what will be needed to change?</p>
 | reference request cryptography quantum computing | 0 |