source_id | question | response | metadata
---|---|---|---|
44,869 | The halting problem states there is no algorithm that will determine if a given program halts. As a consequence, there should be programs about which we cannot tell whether they terminate or not. What are the simplest (smallest) known examples of such programs? | A pretty simple example could be a program testing the Collatz conjecture: $$
f(n) =
\begin{cases}
\text{HALT}, &\text{if $n$ is 1} \\
f(n/2), & \text{if $n$ is even} \\
f(3n+1), & \text{if $n$ is odd}
\end{cases}
$$ It's known to halt for $n$ up to at least $5 \times 2^{60} \approx 5.764 \times 10^{18}$, but in general it's an open problem.
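A minimal Python sketch of such a tester (purely illustrative; whether this loop terminates for every starting value is exactly the open question):

```python
def collatz_halts(n: int) -> bool:
    """Iterate f on n; returns True once the value reaches 1."""
    assert n >= 1
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True  # reached the HALT case

print(collatz_halts(27))  # prints True; that it returns for *every* n is the conjecture
```
| {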
"source": [
"https://cs.stackexchange.com/questions/44869",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/35654/"
]
} |
44,981 | Find the least number of comparisons needed to sort (order) five elements and
devise an algorithm that sorts these elements using this number of comparisons. Solution: There are $5! = 120$ possible outcomes. Therefore a binary tree for the sorting procedure will have at least 7 levels. Indeed, $2^h \ge 120$ implies $h \ge 7$. But 7 comparisons is not enough. The least number
of comparisons needed to sort (order) five elements is 8. Here is my actual question: I did find an algorithm that does it in 8 comparisons, but how can I prove that it can't be done in 7 comparisons? | The solution is wrong. Demuth [1; via 2, sec. 5.3.1] shows that five values can be sorted using only seven comparisons, i.e. that the "information theoretic" lower bound is tight in this instance. The answer is a method tailored to $n=5$, not a general algorithm. It's also not very nice. This is the outline: Sort the first two pairs. Order the pairs w.r.t. their respective larger element. Call the result $[a,b,c,d,e]$; we know $a<b<d$ and $c<d$. Insert $e$ into $[a,b,d]$. Insert $c$ into the result of step 3. The first step clearly takes two comparisons, the second only one. The last two steps take two comparisons each; we insert into a three-element list in both cases (for step 4, note that we know from $c<d$ that $c$ is smaller than the last element of the list at hand) and compare with the middle element first. That makes a total of seven comparisons. Since I don't see how to write "nice" pseudocode of this, see the sketch below for a (hopefully readable) implementation. [1] Ph.D. thesis (Stanford University) by H. B. Demuth (1956); see also Electronic Data Sorting by H. B. Demuth (1985). [2] Sorting and Searching by Donald E. Knuth, The Art of Computer Programming Vol. 3 (2nd ed., 1998).
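One possible Python rendering of the four steps (a reconstruction from the outline above, not the original tested implementation); each comparison is marked, for a worst-case total of seven:

```python
def sort5(v):
    """Sort exactly five values using at most 7 comparisons."""
    a, b, c, d, e = v
    if a > b: a, b = b, a                 # comparison 1: sort first pair
    if c > d: c, d = d, c                 # comparison 2: sort second pair
    if b > d:                             # comparison 3: order pairs by larger element
        a, b, c, d = c, d, a, b
    # invariant: a < b < d and c < d
    if e < b:                             # comparisons 4-5: insert e into [a, b, d]
        s = [e, a, b, d] if e < a else [a, e, b, d]
    else:
        s = [a, b, e, d] if e < d else [a, b, d, e]
    # c < d <= s[3], so c belongs somewhere among the first three slots
    if c < s[1]:                          # comparison 6: compare with the middle element first
        return [c] + s if c < s[0] else [s[0], c, s[1], s[2], s[3]]      # comparison 7
    else:
        return s[:2] + [c] + s[2:] if c < s[2] else s[:3] + [c] + s[3:]  # comparison 7

print(sort5([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5], using exactly 7 comparisons
```
| {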
"source": [
"https://cs.stackexchange.com/questions/44981",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10082/"
]
} |
45,002 | The Complexity Zoo defines $LIN$ to be the class of decision problems solvable by a deterministic Turing machine in linear time. $$LIN \subseteq P$$ Since HORN-SAT is solvable in $O(n)$ (as indicated in Linear-time algorithms for testing the satisfiability of propositional horn formulae (1984) ) New algorithms for deciding whether a (propositional) Horn formula is satisfiable are presented. If the Horn formula $A$ contains $K$ distinct propositional letters and if it is assumed that they are exactly $P_1,…, P_K$, the two algorithms presented in this paper run in time $O(N)$, where $N$ is the total number of occurrences of literals in $A$. I am wondering why we can't conclude that $$LIN = P$$ given that HORN-SAT has also been proven to be $P$-complete under log-space reduction ? I must be missing something. Or is that a well-known fact? (I have yet thoroughly gone through the 1984 paper so I don't quite understand the algorithms for solving HORN-SAT in linear time, and thus I may have misunderstood the implication.) | | {
"source": [
"https://cs.stackexchange.com/questions/45002",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/11573/"
]
} |
45,159 | I'm trying to understand how Slab Allocation works and why it is different or better than ordinary paging. I found this diagram which I believe would be helpful if it had more explanation. Some questions: What do the 3KB and 7KB items represent? Must they be related somehow? Why are they packaged that way? In the caches column, are the caches the grey boxes, or the white/blue boxes inside the grey boxes? Are the grey boxes a package of caches? Are the slabs just the blue boxes or is the whole "Physical Contiguous Pages" a slab? I'd really appreciate some help. THanks! | I can see why you're confused. The diagram is a bit confusing, and may actually be incorrect. First off, let's think about why a kernel needs a memory allocator below the level of pages. This is probably already stuff that you mostly know, but I'll go through it for completeness. Pages are the typical "unit" of memory operations. When a user-space application allocates memory, or memory-maps a file, or something like that, it typically gets a multiple of the machine page size. There are notable some exceptions; Windows uses 64k as the virtual memory allocation unit no matter what the page size of the CPU is. Nonetheless, let's think of it this way. On a modern CPU, as far as user-space code is concerned, it has a flat address space. This is actually an illusion provided by the virtual memory system. The OS provides pages from anywhere in RAM (or possibly not in RAM at all, in the case of swapped memory or memory-mapped files) and maps them into a contiguous virtual address space. The point of all this is that apart from a few special cases for the operating system itself (perhaps DMA buffers, maybe some special data structures set up at boot time, oh and the kernel image itself), the operating system kernel probably never has to manage any block of RAM bigger than a page. This simplifies things enormously, because it means that as far as pages go, every allocation and deallocation is the same size. It also effectively eliminates external fragmentation at the macro level. However, kernels also need to implement some data structures of their own, and for that, they need a different kind of memory allocator. These data structures can usually be thought of as a collection of individual objects (e.g. an object may be a "thread" or a "mutex"). The size of these objects are typically far smaller than a page in size. So, for example, an object which represents the security credentials of a process (think of the user id and group id in POSIX, say) might only be 16 bytes or so, whereas a "process" or "thread" might be up to 1kb in size. Clearly you don't want to use a whole page for these small records, so the idea is to implement an allocator on top of pages. The lower-level allocation system has to satisfy many of the same issues as the page-level allocator: it has to be reasonably fast (including on multicore systems), you want to minimise fragmentation, and so on. But more importantly, it should be tunable and configurable depending on what kind of data structure you're storing. Some data structures are inherently "cache-like". For example, many operating systems maintain a cache of path names to filesystem objects to avoid long chains of directory lookup (called the "name cache" or "namei cache" in Unix-speak). These objects are only needed for performance, not correctness, so you could (in theory) just forget a whole page full of entries if memory is tight and you need to free a page frame quickly. 
Other data structures could be swapped to disk if memory is tight and you don't need them soon. But you don't want to do that with data structures which control swapping or the virtual memory system! Some data structures can be moved around in memory with no penalty (e.g. if nobody refers to them with a pointer), so could "compact" themselves to avoid fragmentation if needed. So the main idea of the slab allocator is that a page should only store data structures of the same "type". This ticks all the boxes: each object in a page is the same size, so there's no external fragmentation. Objects of the same "type" have the same performance requirements and the same semantics. Incidentally, it's a similar story with allocation. For some types of object it's probably okay to wait if there's no memory immediately available to allocate that object. An object which represents an open file might be one example; opening a file is an expensive operation at the best of times, so waiting a little longer won't hurt that much. For other types of object (e.g. an object which represents a real-time event that must happen a certain time from now), you really don't want to wait. So it makes sense for some types of object to over-allocate (say, have a few free pages in reserve) so that requests can be satisfied without waiting. What you're basically doing is allowing each type of object to have its own allocator, which can be configured for the needs of that object. These per-object allocators are confusingly called "caches". You allocate one cache per type of object. (Yes, you'd typically implement a "cache of caches" as well.) Each cache only stores objects of the same type (e.g. only thread structures, or only address space structures). Each cache, in turn, manages "slabs". A slab is a page frame which contains an array of objects of the same type. Slabs may be "full" (all objects in use), "empty" (no objects in use), or "partial" (some objects in use). Partial slabs are probably the most interesting, since the slab allocator maintains a free list for every partial slab. (Full slabs and empty slabs need no free list.) Objects are allocated from partial slabs first (and probably from the "most full" partial slabs first) to try to avoid allocating pages that aren't needed. The nice thing about slab allocation is that all of these allocation policy options (as well as the memory semantics) can be tuned for each kind of object. Some caches might retain a pool of empty slabs and some might not. Some might be able to be swapped to secondary storage and some might not. Linux has has three different kinds of slab allocator, depending on whether or not you need compactness, cache-friendliness, or raw speed. There was a good presentation on this a couple of years ago which explains the tradeoffs well. The Solaris slab allocator (see the paper for details ) has a few more details to squeeze even more performance. For a start, in Solaris, everything is done with slab allocation, including page frame allocation. (This is Solaris' solution for allocating objects that are larger than half a page in size.) It manages smaller objects by nesting slab allocators in slab-allocated space. Some objects in Solaris require complex and expensive construction and destruction (e.g. objects which have a kernel lock), and so they could be "partly free" (i.e. constructed but not allocated). Solaris also optimises free slab allocation by maintaining free lists on a per-CPU basis, ensuring that some operations are completely wait-free. 
To support general-purpose allocation (e.g. for arrays whose size is not known at compile-time), most macrokernel-type operating systems also have caches which represent object sizes rather than object types. FreeBSD, for example, maintains caches for unknown objects whose sizes are powers of 2 bytes, from 4 to 256. What I hope you can see is that slab allocation is a very flexible framework which can be tuned for the needs of different kinds of data. It doesn't compete with paging, but complements it (although in Solaris, page frames are allocated with slabs). I hope this helps. Let me know if anything needs clarification.
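To make the cache/slab/free-list vocabulary concrete, here is a deliberately simplified toy model in Python; it is only a sketch with made-up names, and it ignores everything a real kernel must handle (raw page frames, locking, per-CPU lists, object constructors, freeing):

```python
PAGE_SIZE = 4096  # pretend page-frame size, in bytes

class Slab:
    """One page frame holding an array of equally sized objects."""
    def __init__(self, obj_size):
        self.capacity = PAGE_SIZE // obj_size
        self.free = list(range(self.capacity))   # free list of slot indices
    def full(self):
        return not self.free
    def alloc_slot(self):
        return self.free.pop()

class Cache:
    """Per-object-type allocator: hands out slots from partial slabs first."""
    def __init__(self, name, obj_size):
        self.name, self.obj_size = name, obj_size
        self.partial, self.full_slabs = [], []
    def alloc(self):
        if not self.partial:                     # no partial slab: grab a fresh page frame
            self.partial.append(Slab(self.obj_size))
        slab = self.partial[0]
        slot = slab.alloc_slot()
        if slab.full():                          # promote to the "full" list
            self.full_slabs.append(self.partial.pop(0))
        return (slab, slot)                      # stand-in for a real pointer

# one cache per kernel object type, e.g.
thread_cache = Cache("thread", obj_size=1024)
cred_cache = Cache("cred", obj_size=16)
obj = thread_cache.alloc()
```
| {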
"source": [
"https://cs.stackexchange.com/questions/45159",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/4348/"
]
} |
45,383 | Here I read that: Haskell definitely does not have the most advanced type system (not
even close if you count research languages) but out of all languages
that are actually used in production Haskell is probably at the top. So I am asking two things: which research languages have more powerful type systems than Haskell; what do they improve. I am just a programmer, so I don't know many mathematical objects used in type theory, please provide gentle explanations if you can. | The question is somewhat problematic, since it relies on a subjective definition of "better." Dependently-typed languages such as Agda, Idris, and Coq have a stronger type system than Haskell. This means you can use the types in these languages to prove strictly more properties about your code than in Haskell. That is, there are more incorrect programs that will be caught. However, this comes at a price: type inference, and testing whether any values of a given type exist, are no longer possible. This means for these languages, you need to explicitly annotate your code with types. Essentially this boils down to writing your own correctness proofs for your code. So are these languages "better" than Haskell? They can check advanced proofs of correctness for your code, but they can't automatically prove properties about your code the way Haskell can. Another research language that is "better" than Haskell is LiquidHaskell. This is basically Haskell with refinement types bolted on top, parsed from special comments. Refinement types allow you to refine types with properties. For example, instead of having an Int, you can specify {i : Int | i > 0}, giving the type of all positive integers. Type inference is decidable with refinement types, but you can't prove nearly as many correctness properties with them as you can with dependent types. There are other refinement type systems out there, but I'm not terribly familiar with any of them. | {
"source": [
"https://cs.stackexchange.com/questions/45383",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/851/"
]
} |
45,486 | Taking a look at Julia's webpage , you can see some benchmarks of several languages across several algorithms (timings shown below). How can a language with a compiler originally written in C, outperform C code? Figure: benchmark times relative to C (smaller is better, C performance = 1.0). | There is no necessary relation between the implementation of the compiler and the output of the compiler. You could write a compiler in a language like Python or Ruby, whose most common implementations are very slow, and that compiler could output highly optimized machine code capable of outperforming C. The compiler itself would take a long time to run, because its code is written in a slow language. (To be more precise, written in a language with a slow implementation. Languages aren't really inherently fast or slow, as Raphael points out in a comment. I expand on this idea below.) The compiled program would be as fast as its own implementation allowed—we could write a compiler in Python that generates the same machine code as a Fortran compiler, and our compiled programs would be as fast as Fortran, even though they would take a long time to compile. It's a different story if we're talking about an interpreter. Interpreters have to be running while the program they're interpreting is running, so there is a connection between the language in which the interpreter is implemented and the performance of the interpreted code. It takes some clever runtime optimization to make an interpreted language which runs faster than the language in which the interpreter is implemented, and the final performance can depend on how amenable a piece of code is to this kind of optimization. Many languages, such as Java and C#, use runtimes with a hybrid model which combines some of the benefits of interpreters with some of the benefits of compilers. As a concrete example, let's look more closely at Python. Python has several implementations. The most common is CPython, a bytecode interpreter written in C. There's also PyPy, which is written in a specialized dialect of Python called RPython, and which uses a hybrid compilation model somewhat like the JVM. PyPy is much faster than CPython in most benchmarks; it uses all sorts of amazing tricks to optimize the code at runtime. However, the Python language which PyPy runs is exactly the same Python language that CPython runs, barring a few differences which don't affect performance. Suppose we wrote a compiler in the Python language for Fortran. Our compiler produces the same machine code as GFortran. Now we compile a Fortran program. We can run our compiler on top of CPython, or we can run it on PyPy, since it's written in Python and both of these implementations run the same Python language. What we'll find is that if we run our compiler on CPython, then run it on PyPy, then compile the same Fortran source with GFortran, we'll get exactly the same machine code all three times, so the compiled program will always run at around the same speed. However, the time it takes to produce that compiled program will be different. CPython will most likely take longer than PyPy, and PyPy will most likely take longer than GFortran, even though all of them will output the same machine code at the end. From scanning the Julia website's benchmark table, it looks like none of the languages running on interpreters (Python, R, Matlab/Octave, Javascript) have any benchmarks where they beat C. 
This is generally consistent with what I'd expect to see, although I could imagine code written with Python's highly optimized Numpy library (written in C and Fortran) beating some possible C implementations of similar code. The languages which are equal to or better than C are being compiled (Fortran, Julia ) or using a hybrid model with partial compilation (Java, and probably LuaJIT). PyPy also uses a hybrid model, so it's entirely possible that if we ran the same Python code on PyPy instead of CPython, we'd actually see it beat C on some benchmarks. | {
"source": [
"https://cs.stackexchange.com/questions/45486",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/37700/"
]
} |
45,794 | For example, if the computer has 10111100 stored on one particular byte of RAM, how does the computer know to interpret this byte as an integer, ASCII character, or something else? Is type data stored in an adjacent byte? (I don't think this would be the case as this would result in using twice the amount of space for one byte.) I suspect that perhaps a computer does not even know the type of data, that only the program using it knows. My guess is that because RAM is random-access memory and therefore not read sequentially, a particular program just tells the CPU to fetch the info from a specific address and the program defines how to treat it. This would seem to fit with programming things such as the need for typecasting. Am I on the right track? | Your suspicion is correct. The CPU doesn't care about the semantics of your data. Sometimes, though, it does make a difference. For example, some arithmetic operations produce different results when the arguments are semantically signed or unsigned. In that case you need to tell the CPU which interpretation you intended. It is up to the programmer to make sense of her data. The CPU only obeys orders, blissfully unaware of their meaning or goals.
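A small Python illustration of that point, using the exact byte from the question — the same bit pattern yields different values depending on which interpretation the program asks for:

```python
import struct

raw = bytes([0b10111100])            # the single byte 10111100 from the question

print(struct.unpack("B", raw)[0])    # 188  -- read as an unsigned 8-bit integer
print(struct.unpack("b", raw)[0])    # -68  -- read as a signed (two's complement) integer
print(raw.decode("latin-1"))         # '¼'  -- read as a character code
```
| {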
"source": [
"https://cs.stackexchange.com/questions/45794",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/20498/"
]
} |
46,935 | I would like to ask a few questions about Assembly language. My understanding is that it's very close to machine language, making it faster and more efficient. Since we have different computer architectures that exist, does that mean I have to write different code in Assembly for different architectures? If so, why isn't Assembly, write once - run everywhere type of language? Wouldn't be easier to simply make it universal, so that you write it only once and can run it on virtually any machine with different configurations? (I think that it would be impossible, but I would like to have some concrete, in-depth answers) Some people might say C is the language I'm looking for. I haven't used C before but I think it's still a high-level language, although probably faster than Java, for example. I might be wrong here. | Assembly language is a way to write instructions for the computer's instruction set , in a way that's slightly more understandable to human programmers. Different architectures have different instruction sets: the set of allowed instructions is different on each architecture. Therefore, you can't hope to have a write-once-run-everywhere assembly program. For instance, the set of instructions supported by x86 processors looks very different from the set of instructions supported by ARM processors. If you wrote an assembly program for an x86 processor, it'd have lots of instructions that are not supported on the ARM processor, and vice versa. The core reason to use assembly language is that it allows very low-level control over your program, and to take advantage of all of the instructions of the processor: by customizing the program to take advantage of features that are unique to the particular processor it will run on, sometimes you can speed up the program. The write-once-run-everywhere philosophy is fundamentally at odds with that. | {
"source": [
"https://cs.stackexchange.com/questions/46935",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/39302/"
]
} |
46,947 | I think, maybe some formalism could exist for the task which makes it significantly easier. My problem to solve is that I invented a reentrant algorithm for a task. It is relative simple (its pure logic is around 10 lines in C), but this 10 lines to construct was around 2 days to me. I am 99% sure that it is reentrant (which is not the same as thread-safe!), but the remaining 1% is already enough to disrupt my nights. Of course I could start to do that on a naive way (using a formalized state space, initial conditions, elemental operations and end-conditions for that, etc.), but I think some type of formalism maybe exists which makes this significantly easier and shorter. Proving the non-reentrancy is much easier, simply by showing a state where the end-conditions aren't fulfilled. But of course I constructed the algorithm so that I can't find a such state. I have a strong impression, that it is an algorithmically undecidable problem in the general case (probably it can be reduced to the halting problem), but my single case isn't general. I ask for ideas which make the proof easier. How are similar problems being solved in most cases? For example, a non-trivial condition whose fulfillment would decide the question into any direction, would be already a big help. | | {
"source": [
"https://cs.stackexchange.com/questions/46947",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/12151/"
]
} |
47,041 | Everyone knows that the speed of computers has drastically increased since they were invented, and it looks set to continue. But one thing is puzzling me: if you ran an electrical current through a material today, it would travel at the same speed as if you did it with the same material 50 years ago. With that in mind, how is it that computers have become faster? What main area of processor design is it that has given these incredible speed increases? I thought maybe it could be one or more of the following: Smaller processors (less distance for the current to travel, but it just seems to me like you'd only be able to make marginal gains here). Better materials | if you ran an electrical current through a material today, it would travel at the same speed as if you did it with the same material 50 years ago. With that in mind, how is it that computers have become faster? What main area of processor design is it that has given these incredible speed increases? You get erroneous conclusions because your initial hypothesis is wrong: you think that CPU speed is equivalent to the speed of the electrons in the CPU. In fact, the CPU is some synchronous digital logic. The limit for its speed is that the output of a logical equation shall be stable within one clock period. With the logic implemented with transistors, the limit is mainly linked to the time required to make transistors switch. By reducing their channel size, we are able to make them switch faster. This is the main reason for improvement in max frequency of CPUs for 50 years. Today, we also modify the shape of the transistors to increase their switching speed, but, as far as I know, only Intel, Global Foundries and TSMC are able to create FinFETs today. Yet, there are some other ways to improve the maximum clock speed of a CPU: if you split your logical equation into several smaller ones, you can make each step faster, and have a higher clock speed. You also need more clock periods to perform the same action, but, using pipelining techniques, you can make the rate of instructions per second follow your clock rate. Today, the speed of electrons has become a limit: at 10 GHz, an electric signal can't be propagated over more than 3 cm. This is roughly the size of current processors. To avoid this issue, you may have several independent synchronous domains in your chip, reducing the constraints on signal propagation. But this is only one limiting factor, amongst transistor switching speed, heat dissipation, EMC, and probably others (but I'm not in the silicon foundry industry).
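A quick back-of-the-envelope check of that last figure (using the vacuum speed of light as an upper bound; real on-chip signals are slower still):

```python
c = 3.0e8  # speed of light in m/s, an upper bound for any electrical signal
for f_ghz in (1, 3, 10):
    distance_cm = c / (f_ghz * 1e9) * 100   # distance covered in one clock period
    print(f"{f_ghz:2d} GHz -> at most {distance_cm:.0f} cm per clock period")
# 10 GHz -> at most 3 cm, roughly the size of a die, as stated above
```
| {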
"source": [
"https://cs.stackexchange.com/questions/47041",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/39428/"
]
} |
47,216 | Sorry in advance if this question sounds dumb... As far as I know, building an algorithm using dynamic programming works this way: express the problem as a recurrence relation; implement the recurrence relation either via memoization or via a bottom up approach. As far as I know, I have said everything about dynamic programming. I mean: dynamic programming does not give tools/rules/methods/theorems for expressing recurrence relations, nor for turning them into code. So, what's special about dynamic programming? What does it give you, other than a vague method for approaching a certain kind of problems? | Dynamic programming gives you a way to think about algorithm design. This is often very helpful. Memoization and bottom-up methods give you a rule/method for turning recurrence relations into code. Memoization is a relatively simple idea, but the best ideas often are! Dynamic programming gives you a structured way to think about the running time of your algorithm. The running time is basically determined by two numbers: the number of subproblems you have to solve, and the time it takes to solve each subproblem. This provides a convenient easy way to think about the algorithm design problem. When you have a candidate recurrence relation, you can look at it and very quickly get a sense of what the running time might be (for instance, you can often very quickly tell how many subproblems there will be, which is a lower bound on the running time; if there are exponentially many subproblems you have to solve, then the recurrence probably won't be a good approach). This also helps you rule out candidate subproblem decompositions. For instance, if we have a string $S[1..n]$, defining a subproblem by a prefix $S[1..i]$ or suffix $S[j..n]$ or substring $S[i..j]$ might be reasonable (the number of subproblems is polynomial in $n$), but defining a subproblem by a subsequence of $S$ is not likely to be a good approach (the number of subproblems is exponential in $n$). This lets you prune the "search space" of possible recurrences. Dynamic programming gives you a structured approach to look for candidate recurrence relations. Empirically, this approach is often effective. In particular, there are some heuristics/common patterns you can recognize for common ways to define subproblems, depending on the type of the input. For instance: If the input is a positive integer $n$, one candidate way to define a subproblem is by replacing $n$ with a smaller integer $n'$ (s.t. $0 \le n' \le n$). If the input is a string $S[1..n]$, some candidate ways to define a subproblem include: replace $S[1..n]$ with a prefix $S[1..i]$; replace $S[1..n]$ with a suffix $S[j..n]$; replace $S[1..n]$ with a substring $S[i..j]$. (Here the subproblem is determined by the choice of $i,j$.) If the input is a list , do the same as you'd do for a string. If the input is a tree $T$, one candidate way to define a subproblem is to replace $T$ with any subtree of $T$ (i.e., pick a node $x$ and replace $T$ with the subtree rooted at $x$; the subproblem is determined by the choice of $x$). If the input is a pair $(x,y)$, then recursively look at the type of $x$ and the type of $y$ to identify a way to choose a subproblem for each. In other words, one candidate way to define a subproblem is to replace $(x,y)$ by $(x',y')$ where $x'$ is a subproblem for $x$ and $y'$ is a subproblem for $y$. (You can also consider subproblems of the form $(x,y')$ or $(x',y)$.) And so on. 
This gives you a very useful heuristic: just by looking at the type signature of the method, you can come up with a list of candidate ways to define subproblems. In other words, just by looking at the problem statement -- looking only at the types of the inputs -- you can come up with a handful of candidate ways to define a subproblem. This is often very helpful. It doesn't tell you what the recurrence relation is, but when you have a particular choice for how to define the subproblem, often it's not too hard to work out a corresponding recurrence relation. So, it often turns the design of a dynamic programming algorithm into a structured experience. You write down on scrap paper a list of candidate ways to define subproblems (using the heuristic above). Then, for each candidate, you try to write down a recurrence relation, and evaluate its running time by counting the number of subproblems and the time spent per subproblem. After trying each candidate, you keep the best one that you were able to find. Providing some structure to the algorithm design process is a major help, as otherwise algorithm design can be intimidating (there's such a huge space of possible approaches, without some structure it can be unclear how to even get started).
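As one concrete instance of this recipe, take edit distance (a standard example, sketched here with memoization): the input is a pair of strings, the chosen subproblem is a pair of prefixes, so there are $O(nm)$ subproblems with $O(1)$ work each -- the running time can be read off almost by inspection:

```python
from functools import lru_cache

def edit_distance(s: str, t: str) -> int:
    # Subproblem: d(i, j) = edit distance between the prefixes s[:i] and t[:j].
    # Number of subproblems: (len(s)+1) * (len(t)+1); time per subproblem: O(1).
    @lru_cache(maxsize=None)
    def d(i: int, j: int) -> int:
        if i == 0: return j                 # insert the remaining j characters
        if j == 0: return i                 # delete the remaining i characters
        cost = 0 if s[i - 1] == t[j - 1] else 1
        return min(d(i - 1, j) + 1,         # delete s[i-1]
                   d(i, j - 1) + 1,         # insert t[j-1]
                   d(i - 1, j - 1) + cost)  # match or substitute
    return d(len(s), len(t))

print(edit_distance("kitten", "sitting"))   # 3
```
| {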
"source": [
"https://cs.stackexchange.com/questions/47216",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/39854/"
]
} |
47,222 | I was on Wikipedia's list of unsolved computer science problems and found this:
Is public-key cryptography possible? I thought RSA encryption was a form of public-key cryptography. Why is this a problem? | We don't know for sure that RSA is safe. It could be that RSA can be broken in polynomial time, for example if factoring can be done efficiently. What is open is the existence of a provably secure public-key cryptosystem. We don't know for sure that such a cryptosystem exists at all; for all we know, every cryptosystem could be broken efficiently. A different, unrelated problem with RSA is that it can be broken by quantum computers. This is an unrelated problem since the definition of a secure public-key cryptosystem only requires that the cryptosystem not be breakable by classical (non-quantum) computers. Practically speaking, though, RSA seems secure, and it is used all the time. This is due to the gap between theory and practice. While theoretically we don't know for sure that RSA is secure, practically speaking we have to use some public-key cryptosystem, and RSA is a good choice since people have tried to break it and failed. Generally speaking, a known cryptosystem that people care about is more secure than an obscure one, since it has resisted the attempts of cryptographers. This doesn't constitute a proof that it is secure – it might well not be – but it's the best we can do. | {
"source": [
"https://cs.stackexchange.com/questions/47222",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/39757/"
]
} |
47,474 | I've recently read a really interesting blog entry from Google Research Blog talking about neural network. Basically they use this neural networks for solving various problems like image recognition. They use genetic algorithms to "evolve" the weights of the axons. So basically my idea is the following. If I was supposed to write a program that recognizes numbers I would not know how to start (I could have some vague idea but my point is: It is not trivial, nor easy.) but by using neural network I do not have to. By creating the right context in order for the neural network to evolve, my neural network will "find the correct algorithm". Down below I quoted a really interesting part of the article where they explain how each layer have different role in the process of image recognition. One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer maybe looks for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees. So basically my question is the following: Couldn't we use genetic algorithms + neural networks in order to solve every NP problem? We just create the right evolutionary context and leave "nature" find a solution. Inceptionism: Going Deeper into Neural Networks EDIT: I know we can use Brute-Force or find a not-efficient solution in many cases. That is why I try to highlight Evolving artificial neural networks. As I said in a comment: Given sufficient time and an appropriate mutation rate we could find the optimal solution (Or at least that is what I think). | No. This direction is unlikely to be useful, for two reasons: Most computer scientists believe that P $\ne$ NP. Assuming P $\ne$ NP, this means there does not exist any polynomial-time algorithm to solve any NP-complete problem. If you want your neural network to solve the problem in a reasonable amount of time, then it can't be too large, and thus the neural network will itself be a polynomial-time algorithm. It follows that if P $\ne$ NP, neural networks cannot efficiently solve any NP-complete problem. Neural networks aren't "magic". They are a way of trying to find patterns. For some problems where there are strong enough patterns to be found, and the patterns can be learned from a reasonable number of examples, they might be effective. But they're not magic fairy dust. Just because you can set up a neural network doesn't mean that backpropagation will necessarily find a good way to solve your problem. It might be that there are no patterns to be found, that the patterns can only be discovered with an unfeasible number of examples, or that patterns exist but the neural network training procedure isn't able to find them. Neural networks are just another form of machine learning. We could make the same remarks about SVMs or random forests or linear regression any other form of machine learning. Neural networks aren't some kind of magical silver bullet that solve all machine learning problems. 
They're about as effective as other machine learning methods, or for some kinds of problems, maybe a little bit more effective, but they're not magic. Sometimes I run across people who have heard only a little bit about neural networks, and they walk away thinking that neural networks are the answer to everything -- maybe because they heard that "your brain uses neural networks too", or they saw some very cool application (voice recognition or something). But don't be fooled. Don't believe the hype. Neural networks are a useful technique, but they're not going to enable computers to solve NP-complete problems, or to beat the Turing test, take away all our jobs and replace humans with computers. Not anytime soon, anyway. That's just science fiction. | {
"source": [
"https://cs.stackexchange.com/questions/47474",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/26658/"
]
} |
47,539 | From what I have found, a very large amount of protocols that travel over the internet are "text-based" rather than binary. The protocols in question include, but are not limited to HTTP, SMTP, FTP (I think this one is all text-based?), WHOIS, IRC. In fact, some of these protocols jump through some hoops whenever they want to transmit binary data . Is there a reason behind this? Text-based protocols obviously have a bit of an overhead as they require sending more data to transmit the same amount of information (see example below). What benefits outweigh this? By text-based , I mean most of the characters used in the protocol are between 0x20 (space) and 0x7E ( ~ ), with the occasional "special character" used for very special purposes , such as the newlines, null, ETX, and EOT. This is opposed to transmitting raw, binary data over the connection. For instance, transmitting the integer 123456 as text would involve sending the string 123456 (represented in hex as 31 32 33 34 35 36 ), whereas the 32-bit binary value would be sent as (represented in hex) 0x0001E240 (and as you can see, "contains" the special null character. | When the world was younger, and computers weren't all glorified PCs, word sizes varied (a DEC 2020 we had around here had 36 bit words), format of binary data was a contentious issue (big endian vs little endian, and even weirder orders of bits were reasonably common). There was little consensus on character size/encoding (ASCII, EBCDIC were the main contenders, our DEC had 5/6/7/8 bits/character encodings). ARPAnet (the Internet predecessor) was designed to connect machines of any description. The common denominator was (and still is) text. You could be reasonably certain that 7-bit encoded text wouldn't get mangled by the underlying means to ship data around (until quite recently, sending email in some 8-bit encoding carried a guarantee that the recipient would get mutilated messages, serial lines were normally configured as 7-bit with one bit parity). If you rummage around in e.g. the telnet or FTP protocol descriptions (the first Internet protocols, the network idea then was to connect remotely to a "supercomputer", and shuffle files to and fro), you see that the connection includes negotiating lots of details we take as uniform, Yes, binary would be (a bit) more efficient. But machines and memories (and also networks) have grown enormously, so the bit scrimping of yore is a thing of the past (mostly). And nobody in their right mind will suggest ripping out all existing protocols to replace them with binary ones. Besides, text protocols offer a very useful debugging technique. Today I never install the telnet server (better use the encrypted SSH protocol for remote connections), but have to telnet client handy to "talk" to some errant server to figure out snags. Today you'd probably use netcat or ncat for futzing around... | {
"source": [
"https://cs.stackexchange.com/questions/47539",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/37471/"
]
} |
47,712 | High-order pattern matching is an undecidable problem. That means there is no algorithm that, given an equation a => b , where a and b are open terms on the simply typed lambda calculus, finds a substitution S such that aS => bS , where => stands for "has the same Bn normal form". Yet, humans can solve that problem efficiently. For example, given the following problem: a = (λt . t
(F (λ f x . (f (f (f x)))))
(F (λ f x . (f (f x)))))
b = (λ t . t
(λ f x . (f (f (f (f (f (f x)))))))
(λ f x . (f (f (f (f x)))))) Any human with sufficient knowledge of the lambda calculus will be able to notice that F is the "double" function for Church numerals, quickly coming up with the solution F = (λ a b c . (a b (a b c))) My question is: if that problem is undecidable, how can humans quickly and effortlessly solve it? | Humans can solve some instances of that problem efficiently, but there is no reason to believe that humans can solve all instances efficiently. Showing one instance that a human can solve efficiently does not imply that humans can solve all instances efficiently. Undecidable means "there is no algorithm that can solve all instances and that always terminates". There could still be an algorithm that can solve some instances, even for an undecidable problem. So there is no contradiction. | {
"source": [
"https://cs.stackexchange.com/questions/47712",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/35654/"
]
} |
47,835 | I know that languages which can be defined using regular expressions and those recognisable by DFA/NFA (finite automata) are equivalent. Also, no DFA exists for the language $\{0^n1^n \mid n \ge 0\}$. But still it can be written using regular expressions (for that matter, any non-regular language can be) as $\{\epsilon\} \cup \{01\} \cup \{0011\} \cup \cdots$. But we know that every language that has a regular expression has a DFA that recognises it (a contradiction to my earlier statement).
I know this is a trivial thing, but does the definition of a regular expression include the condition that it should be finite? | If regular expressions were allowed to be infinite, then any language would have been regular. Given the language $L=\{w_1, w_2, \ldots\}$, we can always define the regular expression $R = w_1 + w_2 + \cdots$, which exactly defines $L$. (Example: the regular expression $R_1 = \epsilon+0+1+00+01+10+11+\cdots$ defines $L_1=\{0,1\}^*$.) We know that some languages are not regular, so this shows that infinite regular expressions describe a larger class of languages than finite regular expressions. | {
"source": [
"https://cs.stackexchange.com/questions/47835",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/27566/"
]
} |
48,381 | Considering only the alphabet $\Sigma = \{0,1\}$, the strings which can be given as input to a Turing machine are from the set $\Sigma^{*}$. But does it make sense for the input to be an infinite binary string? For example, if a Turing machine accepts all strings starting with a 0, does an infinite string of zeros also belong to the language accepted by the Turing machine? | There is no problem in running a Turing machine on a tape initialized with an infinite string, although this is not usually considered. We still need the machine to terminate in finite time, though. There are also notions of infinite-time computation, which may be appropriate here. | {
"source": [
"https://cs.stackexchange.com/questions/48381",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/27566/"
]
} |
48,574 | Per recommendation I am reposting this from Stack Overflow. Recently I have been thinking about the following issue. Consider the code for a standard "Hello world!" program: main()
{
printf("Hello World");
} Now almost any change in this code will make it completely useless, in fact almost every change will prevent the code from compiling.
For example: main(5
{
printf("Hello World");
} Now to the actual question.
Is there a programming language where every possible combination of symbols - that is, every expression - makes sense?
I tried thinking about some sort of solution and came up with two: Postfix with a limited number of variables. Essentially all variables are already defined before you write any code and you have to work just with them.
Theoretically you can then perform an arbitrary number of operations by forming a chain of many simple programs, each one of them feeding results to others.
Code could be written as a series of characters in postfix notation; "Postfix" with a stack of variables. Variables are stored on a stack; every operation takes two variables from the top and puts the result in their place.
The program ends when it reaches the last operation or variable. Personally I hate both of these. Not only are they limited, they are inelegant.
They are not even real solutions, more like workarounds, essentially "offshoring" some work to an external process. Does anyone have any other idea how to solve this problem? | Redcode, the assembly language behind Core War, was explicitly written to have very few halting instructions, because the code often gets mangled before it finally gives out, and the more opportunities it has to halt, the less interesting the game is. You see very few such languages in practice because we don't just want a program to run, we want it to run in the way we expect. If you can make a typo and change the way the program runs, it must be acceptably close to the original expected behavior, or the programmers will seethe in frustration. There is some precedent for such things by using natural languages rather than formal languages, but it's not what I would call a large field when you compare it to the use of formal languages. If you're interested in such programming languages, the natural language processing community is where I'd look. Another field you could look at is genetics. There are remarkably few genetic sequences which are simply invalid. Plenty of them aren't very effective at reproduction, but very few are invalid. | {
"source": [
"https://cs.stackexchange.com/questions/48574",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/41391/"
]
} |
49,332 | I'm planning to teach a winter course on a varying number of topics, one of which is going to be compilers. Now, I came across this problem while thinking of assignments to give throughout the quarter, but it has me stumped so I might use it as an example instead. public class DeadCode {
public static void main(String[] args) {
return;
System.out.println("This line won't print.");
}
} In the program above, it's obvious that the print statement will never execute because of the return . Compilers sometimes give warnings or errors about dead code. For example, the above code will not compile in Java. The javac compiler, however, will not detect all instances of dead code in every program. How would I prove that no compiler can do so? | It all comes from undecidability of the halting problem. Suppose we have a "perfect" dead code function, some Turing Machine M, and some input string x, and a procedure that looks something like this: Run M on input x;
print "Finished running input"; If M runs forever, then we delete the print statement, since we will never reach it. If M doesn't run forever, then we need to keep the print statement. Thus, if we have a dead-code remover, it also lets us solve the Halting Problem, so we know there can be no such dead-code remover. The way we get around this is by "conservative approximation." So, in my Turing Machine example above, we can assume that running M on x might finish, so we play it safe and don't remove the print statement. In your example, we know that no matter which functions do or don't halt, that there's no way we will reach that print statement. Usually, this is done by constructing a "control-flow graph". We make simplifying assumptions, such as "the end of a while loop is connected to the beginning and the statement after", even if it runs forever or runs only once and doesn't visit both. Similarly, we assume that an if-statement can reach all of its branches, even if in reality some are never used. These kinds of simplifications allow us to remove "obviously dead code" like the example you give, while remaining decidable. To clarify a few confusions from the comments: Nitpick: for fixed M, this is always decidable. M has to be the input As Raphael says, in my example, we consider the Turing Machine as an input. The idea is that, if we had a perfect DCE algorithm, we would be able to construct the code snippet I give for any Turing Machine , and having a DCE would solve the halting problem. not convinced. return as a blunt statement in a no-branch straight forward execution is not hard to decide. (and my compiler tells me it is capable of figuring this out) For the issue njzk2 raises: you are absolutely right, in this case you can determine that there is no way a statement after the return can be reached. This is because it's simple enough that we can describe its unreachability using control-flow graph constraints (i.e. there are no outgoing edges out of a return statement). But there is no perfect dead code eliminator, which eliminates all unused code. I don't take input-dependent proof for a proof. If there exists such kind of user input that can allow the code to be finite, it's correct for the compiler to assume that following branch is not dead. I can't see what are all these upvotes for, it's both obvious (eg. endless stdin) and wrong. For TomášZato: it's not really an input dependent proof. Rather, interpret it as a "forall". It works as follows: assume we have a perfect DCE algorithm. If you give me an arbitrary Turing Machine M and input x, I can use my DCE algorithm to determine whether M halts, by constructing the code snippet above and seeing if the print-statement is removed. This technique, of leaving a parameter arbitrary to prove a forall-statement, is common in math and logic. I don't fully understand TomášZato's point about code being finite. Surely the code is finite, but a perfect DCE algorithm must apply to all code, which is an infinte set. Likewise, while the code-itself is finite, the potential sets of input are infinte, as is the potential running-time of the code. As for considering the final branch not-dead: it is safe in terms of the "conservative approximation" I talk about, but it's not enough to detect all instances of dead code as the OP asks for. Consider code like this: while (true)
print "Hello"
print "goodbye" Clearly we can remove print "goodbye" without changing the behavior of the program. Thus, it is dead code. But if there's a different function call instead of (true) in the while condition, then we don't know if we can remove it or not, leading to the undecidability. Note that I am not coming up with this on my own. It is a well known result in the theory of compilers. It's discussed in The Tiger Book . (You might be able to see where they talk about in in Google books . | {
"source": [
"https://cs.stackexchange.com/questions/49332",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/41040/"
]
} |
50,052 | A long time ago I read a newspaper article where a professor of some sort said that in the future we will be able to compress data to just two bits (or something like that). This is of course not correct (and it could be that my memory of what he exactly stated is not correct). Understandably it would not be practical to compress any string of 0's and 1's to just two bits because (even if it was technically possible), too many different kinds of strings would end up compressing to the same two bits (since we only have '01' and '10' to choose from). Anyway, this got me thinking about the feasibility of compressing an arbitrary-length string of 0's and 1's according to some scheme. For this kind of string, is there a known relationship between the string length (ratio between 0's and 1's probably does not matter) and maximum compression? In other words, is there a way to determine what is the minimum (smallest possible) length that a string of 0's and 1's can be compressed to? (Here I am interested in the mathematical maximum compression, not what is currently technically possible.) | Kolmogorov complexity is one approach for formalizing this mathematically. Unfortunately, computing the Kolmogorov complexity of a string is an uncomputable problem. See also: Approximating the Kolmogorov complexity. It's possible to get better results if you analyze the source of the string rather than the string itself. In other words, often the source can be modelled as a probabilistic process that randomly chooses a string somehow, according to some distribution. The entropy of that distribution then tells you the mathematically best possible compression (up to some small additive constant). On the impossibility of perfect compression, you might also be interested in the following. No compression algorithm can compress all input messages? Compression functions are only practical because "The bit strings which occur in practice are far from random"? Is there any theoretically proven optimal compression algorithm?
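For the source-based viewpoint above, a small sketch of the relevant quantity: for a memoryless source emitting 1 with probability $p$, the binary entropy $H(p)$ is the best achievable average number of bits per symbol (Shannon's source coding theorem, stated here without proof):

```python
from math import log2

def entropy(p: float) -> float:
    """Bits per symbol needed, on average, for a biased coin with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0                      # perfectly predictable: nothing to store
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.5, 0.9, 0.99):
    print(f"p = {p}: about {entropy(p):.3f} bits per symbol at best")
# p = 0.5 -> 1.000 (incompressible on average); p = 0.99 -> ~0.081
```
| {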
"source": [
"https://cs.stackexchange.com/questions/50052",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/24636/"
]
} |
50,342 | I read somewhere that the most efficient algorithm found can compute the factors in $O(\exp((64/9 \cdot b)^{1/3} \cdot (\log b)^{2/3}))$ time, but the code I wrote is $O(n)$ or possibly $O(n \log n)$ depending on how fast division and modulus are. I'm pretty sure I've misunderstood something somewhere, but I'm not sure where. Here's what I wrote in pseudocode form. function factor(number) -> list
factors = new list
if number < 0
factors.append(-1)
number = -number
i = 2
while i <= number
while number % i == 0
factors.append(i)
number /= i
i++
return factors | You are confusing the number $n$ with the number of bits needed to represent $n$. Here $b$ = the number of bits needed to represent $n$ (so $b \approx \lg n$). This makes a huge difference. An $O(n)$-time algorithm is an $O(2^b)$-time algorithm -- exponential in the number of bits. In comparison, the "efficient" algorithm you found has a running time that is subexponential in $b$. Example: Consider $n = 2,000,000$ (2 million). Then $b=21$ bits are enough to represent the number $n$. So, an algorithm that is $O(2^{b^{1/3}})$ will be much faster than an algorithm that is $O(2^b)$. An $O(n)$ algorithm falls into the latter category, i.e., very slow. See https://en.wikipedia.org/wiki/Integer_factorization
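A tiny numeric illustration of the difference (purely illustrative, only to show the scale): the loop above performs on the order of $n \approx 2^b$ trial divisions in the worst case, so each additional input bit roughly doubles the work:

```python
for b in (20, 30, 40, 50, 60):
    n = 2 ** b
    print(f"b = {b} bits  ->  n = {n:,}  (worst-case trial divisions for the loop above)")
# Every extra 10 bits multiplies the work by about a thousand,
# even though the input length b grew by only 10.
```
| {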
"source": [
"https://cs.stackexchange.com/questions/50342",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/43270/"
]
} |
50,362 | I am aware that this seems a very stupid (or too obvious to state) question. However, I am confused at some point. We can show that P $=$ NP if and only if we can design an algorithm that solves any given instance of problem in NP in polynomial time. However, I do not understand how on earth can we prove that P $\neq$ NP . Please excuse me for the following similitude as it might be so irrelevant, but telling someone to prove if P is not equal to NP appears to me like telling someone to prove that God does not exists. There is a set of problems, those are unable to be solved by a Non-deterministic Finite Automata (NFA) with polynomial number of states regardless of the current technology (I know this is a sloppy definition). In addition, we have a considerably large set of algorithms which make some crucial problems (shortest path, minimum spanning tree, and even sum of integers ( $1 + 2 + \dots + n$ ) polynomial-time problems. My question in short: If I believe that P $=$ NP ,
you would say "then show your algorithm that solves an NP problem
in polynomial time!". Suppose that I believe P $\neq$ NP . Then what would you exactly ask? What would you want me
to show? The answer is clearly "your proof". However, what kind of proof shows that an algorithm cannot exist? (in this case, a polynomial time algorithm for an NP problem) | There are three main ways I'm aware of that could prove that P $\,\neq\,$ NP . Showing that there is some problem that is in NP but not in P . You're probably familiar with the proof that comparison-based sorting need time $\Omega(n\log n)$ to sort a list of $n$ items. One could, in principle, produce a similar proof showing that 3SAT or some other NP -complete problem can't be solved in time $O(n^c)$ for any constant $c$ . Geometric Complexity Theory seeks to use tools from algebraic geometry and group representation theory to prove such lower bounds, by considering the symmetries that problems possess. Circuit Complexity is another. Showing that P and NP have different structural properties. For example, P is closed under complementation. If you could show that NP $\,\neq\,$ co-NP (i.e., that NP is not closed under complementation), then is must be that P $\,\neq\,$ NP . Of course, this is just pushing the problem one level deeper – how would you prove that NP $\,\neq\,$ co-NP ? Another possibility is that we know that NP is exactly the class of problems that can be defined in something called existential second-order logic. If one could show that there's no logic corresponding exactly to P (or if there is a logic but it's different to $\exists\mathrm{SO}$ ), then P and NP must be different. A related (in fact, equivalent) idea is to show that P doesn't have complete problems under reductions defined by first-order logic, since it's known that NP does have complete problems under these reductions. Prove that some problem isn't NP -complete. If P $\,=\,$ NP , then every non-trivial problem in NP is NP -complete under polynomial-time many-one reductions ("non-trivial" here means not $\emptyset$ or $\Sigma^*$ ). So, if you can show that some problem in NP isn't NP -complete, then we must have P $\,\neq\,$ NP . | {
"source": [
"https://cs.stackexchange.com/questions/50362",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/17185/"
]
} |
50,712 | I was reading CLRS and it said: If factoring large integers is easy, then breaking the RSA
cryptosystem is easy. Which makes sense to me because, with the knowledge of $p$ and $q$, it is easy to create the secret key from the public key. Then it gives the converse statement, which I don't quite understand: The converse statement, that if factoring large integers is hard, then
breaking RSA is hard, is unproven. What does the statement above formally mean? If we assume factoring is hard (in some formal way), why does that not imply that breaking the RSA crypto system is hard? Now consider that if we assumed that factoring is hard...and that we discovered that it meant that the RSA cryptosystem is hard to break. What would that formally mean? | The easiest way to think about it is to think of the contrapositive. The statement: if factoring large integers is hard, then breaking RSA is hard is equivalent to the following: if breaking RSA is easy, then factoring large integers is easy This statement has not been proven. What they're saying is, assume we have an algorithm that solves factoring in polynomial time. Then we can use it to construct an algorithm that solves RSA in polynomial time. But, there could be some other way to crack RSA that doesn't involve factoring integers. It's possible that we will find we can crack RSA in a way that doesn't let us factor integers in polynomial time. In short, we know that RSA is at least as easy as factoring. There are two possible outcomes: RSA and factoring are of equivalent difficulty, or RSA is a strictly easier problem than factoring. We don't know which is the case. | {
"source": [
"https://cs.stackexchange.com/questions/50712",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/12623/"
]
} |
50,767 | First of all, I have just started studying computer science by myself and maybe I just need some clarification of what "polynomial time" means regarding the time complexity of an algorithm and references to study it well. As I have understood it, whether integer factorization can be done in polynomial time is still an open question and, as this article in wikipedia ( https://en.wikipedia.org/wiki/Integer_factorization ) puts it, When the numbers are very large, no efficient, non-quantum integer
factorization algorithm is known; an effort by several researchers
concluded in 2009, factoring a 232-digit number (RSA-768), utilizing
hundreds of machines took two years and the researchers estimated that
a 1024-bit RSA modulus would take about a thousand times as long. So, trying to see that for myself, I have written a very naive code in MATLAB checking it with prime numbers up to 15 digits; the reasoning being that if I can check if a number is prime fast, I can easily modify the code to give me the factorization fast. The time it takes the code to check if a number is prime doesn't grow exponentially with the input. function[]=prime(n)
tic
b=0; % assume prime until a divisor is found (also covers n=2 and n=3, where the loop below never runs)
f=floor(sqrt(n));
for i=2:f
if rem(n,i)~=0
b=0;
else
b=1;
disp(i)
break
end
end
if b==0
disp('prime')
else
disp('not prime')
end
toc
end And so I go back to the question in the title. What is wrong with my reasoning? | Since your algorithm is "fast", why did you only try it with a 15-digit number and not with a 232-digit one? There's serious money to be made if you indeed have a "fast" algorithm. Your algorithm takes time (if we count "div" as taking constant time) proportional to $\sqrt{n}$. A $d$-digit number can be as large as $10^d$, so your algorithm takes time proportional to $\sqrt{10^d} \approx 3.16^d$, i.e. exponential in $d$, the number of digits. That is by no means "fast" and grows very quickly as the numbers get larger. It is polynomial with respect to the value of $n$, but not with respect to the size of $n$. This behavior is called pseudopolynomiality . The "fast" prime testing algorithms use much more sophisticated approaches which can not be modified (easily) to also give a factorization. They just report yes/no whether the number is prime. The AKS primality test uses time proportional to $d^6$. | {
"source": [
"https://cs.stackexchange.com/questions/50767",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/43731/"
]
} |
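To make the growth rate in the answer above concrete, here is a rough Python sketch (mine, not the asker's MATLAB): for a $d$-digit number, trial division up to $\sqrt{n}$ performs on the order of $\sqrt{10^d} \approx 3.16^d$ divisions in the worst case, which is exponential in the number of digits.

```python
import math

# Worst case for trial division up to sqrt(n): roughly sqrt(10^d) divisions
# for a d-digit input, i.e. the work multiplies by about 3.16 per extra digit.
for d in (6, 9, 12, 15, 18):
    worst_case_divisions = math.isqrt(10 ** d)
    print(f"up to {d} digits: about {worst_case_divisions:,} divisions")
```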
50,794 | How do we construct a matrix that takes into account whether the first qubit is set? I am trying to construct the controlled-V matrix, but there is no quantum computational paper that describes it so I am looking at CNOT and wondering how the matrix for CNOT determines whether the first bit is set (to apply the NOT part) Could somebody help? | Since your algorithm is "fast", why did you only try it with a 15-digit number and not with a 232-digit one? There's serious money to be made if you indeed have a "fast" algorithm. Your algorithm takes time (if we count "div" as taking constant time) proportional to $\sqrt{n}$. A $d$-digit number can be as large as $10^d$, so your algorithm takes time proportional to $\sqrt{10^d} \approx 3.16^d$, i.e. exponential in $d$, the number of digits. That is by no means "fast" and grows very quickly as the numbers get larger. It is polynomial with respect to the value of $n$, but not with respect to the size of $n$. This behavior is called pseudopolynomiality . The "fast" prime testing algorithms use much more sophisticated approaches which can not be modified (easily) to also give a factorization. They just report yes/no whether the number is prime. The AKS primality test uses time proportional to $d^6$. | {
"source": [
"https://cs.stackexchange.com/questions/50794",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/43452/"
]
} |
50,993 | I am reading a book called Principles of Computer Science (2008), by Carl Reynolds and Paul Tymann (published by Schaum's Outlines). The second chapter introduces algorithms with an example of a sequential search which simply iterates through a list of names and returns TRUE if a given name is found in the list. The author goes on to say (page 17): We say that the "order of growth" of the sequential search algorithm
is n. The notation for this is T(n). We also say that an algorithm whose order of growth is within some constant factor
of T(n) has a theta of NL say. "The sequential search has a theta of
n." The size of the problem is n, the length of the list being
searched. I find this really hard to follow. The book is riddled with errors, so I am not sure if I am missing something or if the there is a typo in the paragraph above. In general English I rarely see any sentence end with "...say". I am very confused. What does T stand for? The book does not explain. Is it for Time or for Theta? If "a theta of NL" means "The sequential search has a theta of n." What does L stand for? 'Linear' or 'length'? I have written to the publishers asking for an explanation. They said they would forward my message to authors. They have not replied. I have also tried looking at other sources but I still get the naggling feeling that I am misunderstanding something - so cannot rest until I have decoded this paragraph. If anyone has a copy of that book, and has understood that paragraph. Then, I'd appreciate if you could let me know if that paragraph is accurate or explain it in other words. Thanks. | The paragraph is wrong. Unfortunately, it looks exactly like the kind of thing that a student who does not understand the material would write as an answer to an exercise. This sort of nonsense has no place in a textbook. Make no sudden movements. Put the book down. Step away from the book. We say that the "order of growth" of the sequential search algorithm is n. The notation for this is $T(n)$. No. $T(n)$ is the notation for a function called $T$, which takes an argument called $n$. That function could be used to mean anything whatsoever. There is something of a tradition of writing recurrence relations for the running time of programs in the form, e.g.,
$$\begin{align*}
T(1)&=k\\
T(n)&=T(n-1)+\log n \quad\text{for }n>1
\end{align*}$$
But $T$ is not an "order of growth", here: it is a specific function defined through a recurrence relation. And you cannot just write "$T(n)=\text{blah}$" and expect people to read your mind and know that the function $T$ denotes the running time of some algorithm. $T$ here stands for time. We also say that an algorithm whose order of growth is within some constant factor of $T(n)$ has a theta of NL say. "The sequential search has a theta of $n$." This has obviously been mangled. I think the authors intended to write something like, We also say that an algorithm whose order of growth is within some constant factor of $T(n)$ has a theta of $\boldsymbol{n}$ and we say, "The sequential search has a theta of $n$." But, please, we do not say "has a theta of $n$," just as, if $h$ is your notation for height, you wouldn't say "John has an $h$ of 180cm." It's just not a correct form of words. We actually say, "The running time of the algorithm is theta $n$ (or theta of $n$)." Note in particular, that $\Theta$ is a tool for talking about mathematical functions, not algorithms. Theta doesn't mean that the running time is something; rather, it's something you can use to talk about the running time. "NL", by the way, denotes the complexity class nondeterministic logspace , which makes no sense at all in the position it appeared in the original quote. | {
"source": [
"https://cs.stackexchange.com/questions/50993",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10587/"
]
} |
51,362 | This may be a ridiculous question, but is it possible to have a problem that actually gets easier as the inputs grow in size? I doubt any practical problems are like this, but maybe we can invent a degenerate problem that has this property. For instance, perhaps it begins to "solve itself" as it gets larger, or behaves in some other bizarre way. | No, it's not possible: at least, not in an asymptotic sense, where you require the problem to keep getting strictly easier, forever, as $n \to \infty$. Let $T(n)$ be the best possible running time for solving such a problem, where $n$ is the size of the input. Note that the running time is a count of the number of instructions executed by the algorithm, so it has to be a non-negative integer. In other words, $T(n) \in \mathbb{N}$ for all $n$. Now if we consider a function $T: \mathbb{N} \to \mathbb{N}$, we see there is no such function that is strictly monotonically decreasing. (Whatever $T(0)$ is, it has to be finite, say $T(0)=c$; but then since $T$ is monotonically strictly decreasing, $T(c) \le 0$ and $T(c+1) \le -1$, which is impossible.) For similar reasons, there is no function that is asymptotically strictly decreasing: we can similarly prove that there's no running time function $T(n)$ where there exists $n_0$ such that for all $n \ge n_0$, $T(n)$ is monotonically strictly decreasing (any such function would have to become eventually negative). So, such a problem cannot exist, for the simple reason that running times have to be non-negative integers. Note that this answer covers only deterministic algorithms (i.e., worst-case running time). It doesn't rule out the possibility of randomized algorithms whose expected running time is strictly monotonically decreasing, forever. I don't know whether it's possible for such an algorithm to exist. I thank Beni Cherniavsky-Paskin for this observation . | {
"source": [
"https://cs.stackexchange.com/questions/51362",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/37945/"
]
} |
51,554 | I've studied this lots, and they say overfitting the actions in machine learning is bad, yet our neurons do become very strong and find the best actions/senses that we go by or avoid, plus can be de-incremented/incremented from bad/good by bad or good triggers, meaning the actions will level and it ends up with the best(right), super strong confident actions. How does this fail? It uses positive and negative sense triggers to de/re-increment the actions say from 44pos. to 22neg. | The best explanation I've heard is this: When you're doing machine learning, you assume you're trying to learn from data that follows some probabilistic distribution. This means that in any data set, because of randomness, there will be some noise : data will randomly vary. When you overfit, you end up learning from your noise, and including it in your model. Then, when the time comes to make predictions from other data, your accuracy goes down: the noise made its way into your model, but it was specific to your training data, so it hurts the accuracy of your model. Your model doesn't generalize: it is too specific to the data set you happened to choose to train. | {
"source": [
"https://cs.stackexchange.com/questions/51554",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/44529/"
]
} |
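A minimal, hypothetical sketch of the point made in the answer above: fit noisy data drawn from a straight line with a low-degree and a high-degree polynomial, then compare errors on fresh data. The high-degree fit chases the training noise and typically does worse on the test set (the exact numbers depend on the random seed).

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 20)
y_train = 2 * x_train + rng.normal(scale=0.2, size=x_train.size)   # line + noise
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test + rng.normal(scale=0.2, size=x_test.size)

for degree in (1, 15):
    coeffs = np.polyfit(x_train, y_train, degree)                  # fit on training data only
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```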
52,080 | I'm currently reading Introduction to Algorithms and came across Johnson's algorithm, which depends on making sure that all edge weights are positive. The algorithm depends on finding a new weight function ($w'$) that is positive for all edges and keeps the correctness of the shortest-path relations. It does so by calculating $h(s)$, $h(d)$ values to be added to the original $w$ value. My question is: why not just find the smallest (most negative) weight $w$ in the graph and add its absolute value to all edges? This would satisfy both conditions and require less calculation. | Adding a weight to every edge adds more weight to long paths than short paths. (Long in the sense of having many edges.) For example, suppose the lowest-cost edge has weight $-2$ and there are two paths from $a$ to $b$ : a single edge of weight $3$ and a path with two edges, each of weight $1$ . The two-edge path has the lowest weight. However, if you add $2$ to every edge, the one-edge path has weight $5$ but the two-edge path now has weight $6$ , so you get the wrong answer. A short check of this example follows below. | {
"source": [
"https://cs.stackexchange.com/questions/52080",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/45140/"
]
} |
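The promised check of the example, as a tiny Python sketch (helper name is mine; the weights are the ones from the answer above):

```python
# One path from a to b uses a single edge of weight 3; the other uses two edges of weight 1 each.
def path_weights(c):
    return {"one edge": 3 + c, "two edges": (1 + c) + (1 + c)}

print(path_weights(0))   # {'one edge': 3, 'two edges': 2} -> the two-edge path is shorter
print(path_weights(2))   # {'one edge': 5, 'two edges': 6} -> adding 2 to every edge flips the order
```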
52,411 | Consider two possibilities for the P vs. NP problem: P=NP and P$\neq$NP. Let Q be one of the known NP-hard problems.
To prove P=NP, we need to design a single polynomial time algorithm A for Q and
prove that A correctly solves Q. To prove P$\neq$NP, we need to show that no polynomial time algorithm solves Q.
In other words,
we have to rule out all polynomial time algorithms. I have heard people say this makes the second task more difficult
(assuming that it is really true). Is there a reason to think that proving P=NP (assuming that P=NP)
would be easier than proving P$\neq$NP (assuming that P$\neq$NP)? | As Raphael explains, this question is ill-posed, since at most one of P=NP and P≠NP should be provable at all. However, a similar question arises in theoretical computer science in several guises, the most conspicuous of which is in the field of approximation algorithms . Given an NP-hard optimization problem (say, maximization), we can ask how well we can approximate it. Proving an upper bound on the possible approximation is akin to P=NP, while proving a lower bound on the possible approximation is akin to P≠NP. The former is much easier than the latter. Indeed, to prove an upper bound all one has to do is to come up with an approximation algorithm and analyze it. In contrast, all known lower bounds are conditional : they are valid only if P≠NP (indeed, if P=NP then every NP-hard optimization problem would become solvable). To prove these lower bounds, we show that if we could approximate the problem too well, then we would obtain a polynomial time algorithm for some NP-hard problem. Usually this is done via the intricate technical machinery of the PCP theorem. This field, known as hardness of approximation , can be approached only be specialists, and is technical more challenging than most approximation algorithms. So in this case at least, P=NP is indeed easier than P≠NP. | {
"source": [
"https://cs.stackexchange.com/questions/52411",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/45495/"
]
} |
52,488 | Disclaimer: I know there are similar sounding questions already here and on Stackoverflow. But they are all about collisions, which is not what I am asking for. My question is: why is collision- less lookup O(1) in the first place? Let's assume I have this hashtable: Hash Content
-------------
ghdjg Data1
hgdzs Data2
eruit Data3
xcnvb Data4
mkwer Data5
rtzww Data6 Now I'm looking for the key k where the hash function h(k) gives h(k) = mkwer . But how does the lookup "know" that the hash mkwer is at position 5? Why doesn't it have to scroll through all keys in O(n) to find it? The hashes can't be some kind of real hardware addresses because I'd lose the ability to move the data around. And as far as I know, the hashtable is not sorted on the hashes (even if it was, the search would also take O(log n) ). How does knowing a hash help find the correct place in the table? | The hash function doesn't return some string such as mkwer . It directly returns the position of the item in the array. If, for example, your hash table has ten entries, the hash function will return an integer in the range 0–9. A minimal sketch of this idea follows below. | {
"source": [
"https://cs.stackexchange.com/questions/52488",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/45592/"
]
} |
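The sketch promised above: a toy hash table (helper names are mine, collisions ignored) in which the hash is reduced to an array index, so a lookup jumps straight to one slot instead of scanning all keys.

```python
TABLE_SIZE = 10
slots = [None] * TABLE_SIZE              # one slot per index 0..9

def index_of(key):
    return hash(key) % TABLE_SIZE        # hash value -> bucket index

def put(key, value):
    slots[index_of(key)] = (key, value)

def get(key):
    entry = slots[index_of(key)]         # O(1): jump directly to the slot
    if entry is not None and entry[0] == key:
        return entry[1]
    return None

put("mkwer", "Data5")
print(index_of("mkwer"), get("mkwer"))
```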
53,031 | Throughout my education in computer science, I feel like I've heard the terms "modulo" and "modulus" used interchangeably. It looks like even Wikipedia claims that "modulo" is "sometimes called 'modulus'" (see the first sentence of the page on 'modulo' ). I've looked into this issue a little and it seems that "modulo" finds singular use in modular arithmetic (e.g. "19 and 64 are congruent modulo 5"). In addition, I've seen the symbol % be referred to as "modulo." Meanwhile, "modulus" appears to have several definitions, including "absolute value" and "constant factor" as well as referring to the "5" in "modulo 5." Is it ever correct to use these terms interchangeably in the context of computer science? Are they simply different types of words that represent the same idea (such as "run" and "runner")? Are there important differences in other disciplines? Bonus: Etymologically, what gave rise to these two terms? | "modulo" is an operator. For instance, we might say "19 and 64 are congruent modulo 5". "modulus" is a noun. It describes the 5 in "modulo 5". We might say "the modulus is 5". No, the two should not be used interchangeably. It would be incorrect to say "19 and 64 are congruent modulus 5". It would also be incorrect to "the modulo is 5". See also https://en.wikipedia.org/wiki/Modular_arithmetic and https://en.wikipedia.org/wiki/Modulo_operation . Both define the word "modulus", and as far as I can see they use it correctly. | {
"source": [
"https://cs.stackexchange.com/questions/53031",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/42854/"
]
} |
53,225 | I have been wondering about this question since I was an undergraduate student.
It is a general question but I will elaborate with examples below. I have seen a lot of algorithms - for example, for maximum flow problems, I know around 3 algorithms which can solve the problem: Ford-Fulkerson, Edmonds-Karp & Dinic, with Dinic having the best complexity. For data structures - for example, heaps - there are binary heaps, binomial heaps & Fibonacci heaps, with Fibonacci heap having the best overall complexity. What keeps me confusing is: are there any reasons why we need to know them all? Why not just learn and get familiar with the best complexity one? I know it is the best if we know them all, I just want to know are there any "more valid" reasons, like some problems / algorithms can only be solved by using A but not B , etc. | There's a textbook waiting to be written at some point, with the working title Data Structures, Algorithms, and Tradeoffs . Almost every algorithm or data structure which you're likely to learn at the undergraduate level has some feature which makes it better for some applications than others. Let's take sorting as an example, since everyone is familiar with the standard sort algorithms. First off, complexity isn't the only concern. In practice, constant factors matter, which is why (say) quick sort tends to be used more than heap sort even though quick sort has terrible worst-case complexity. Secondly, there's always the chance that you find yourself in a situation where you're programming under strange constraints. I once had to do quantile extraction from a modest-sized (1000 or so) collection of samples as fast as possible, but it was on a small microcontroller which had very little spare read-write memory, so that ruled out most $O(n \log n)$ sort algorithms. Shell sort was the best tradeoff, since it was sub-quadratic and didn't require additional memory. In other cases, ideas from an algorithm or data structure might be applicable to a special-purpose problem. Bubble sort seems to be always slower than insertion sort on real hardware, but the idea of performing a bubble pass is sometimes exactly what you need. Consider, for example, some kind of 3D visualisation or video game on a modern video card, where you'd like to draw objects in order from closest-to-the-camera to furthest-from-the-camera for performance reasons, but if you don't get the order exact, the hardware will take care of it. If you're moving around the 3D environment, the relative order of objects won't change very much between frames, so performing one bubble pass every frame might be a reasonable tradeoff. (The Source engine by Valve does this for particle effects.) There's persistence, concurrency, cache locality, scalability onto a cluster/cloud, and a host of other possible reasons why one data structure or algorithm may be more appropriate than another even given the same computational complexity for the operations that you care about. Having said that, that doesn't mean that you should memorise a bunch of algorithms and data structures just in case. Most of the battle is realising that there is a tradeoff to be exploited in the first place, and knowing where to look if you think there might be something appropriate. | {
"source": [
"https://cs.stackexchange.com/questions/53225",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/46483/"
]
} |
53,521 | When do we say that an artificial neural network is a multilayer perceptron? And when do we say that an artificial neural network is multilayer? Is the term perceptron related to the learning rule used to update the weights? Or is it related to the neuron units? | A perceptron is always feedforward , that is, all the arrows are going in the
direction of the output. Neural networks in general might have loops, and if
so, are often called recurrent networks . A recurrent network is much harder
to train than a feedforward network. In addition, it is assumed that in a perceptron, all the arrows are going from
layer $i$ to layer $i+1$, and it is also usual (at least initially) that all
the arcs from layer $i$ to $i+1$ are present. Finally, having multiple layers means more than two layers, that is, you have hidden layers. A perceptron is a network with two layers, one input and one
output. A multilayered network means that you have at least one hidden layer
(we call all the layers between the input and output layers hidden). | {
"source": [
"https://cs.stackexchange.com/questions/53521",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/46825/"
]
} |
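As an illustration of the terminology in the answer above, here is a minimal sketch of a forward pass through a network with one hidden layer (the weight values are made up for illustration; a real network would learn them):

```python
import numpy as np

def forward(x, w_hidden, w_out):
    hidden = np.tanh(w_hidden @ x)       # input layer -> hidden layer
    return np.tanh(w_out @ hidden)       # hidden layer -> output layer

x = np.array([0.5, -1.0])                # two input units
w_hidden = np.array([[0.1, 0.4],         # three hidden units make this a multilayer network
                     [-0.3, 0.2],
                     [0.7, -0.5]])
w_out = np.array([[0.2, -0.1, 0.6]])     # one output unit
print(forward(x, w_hidden, w_out))
```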
54,013 | When considering how multi-thread-friendly our program must be, my team puzzled about whether there's anything that absolutely cannot be done on a single-core CPU. I posited that graphics processing requires massively parallel processing, but they argue that things like DOOM were done on single-core CPUs without GPUs. Is there anything that must be done on a multi-core processor? Assume there is infinite time for both development and running. | If you don't care about the running time, anything you can do on a multi-core machine, you can do on a single-core machine. A multi-core machine is just a way of speeding up some kinds of computations. If you can solve a problem in time $T$ on a multi-core machine with $n$ cores, then you can solve it time $\sim Tn$ (or less look at Amdahl's law ) on a single-core machine. The single-core machine can emulate a multi-core machine using time-slicing / time-sharing . | {
"source": [
"https://cs.stackexchange.com/questions/54013",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/19611/"
]
} |
54,266 | I have the following Python code. def collatz(n):
if n <= 1:
return True
elif (n%2==0):
return collatz(n/2)
else:
return collatz(3*n+1) What is the running-time of this algorithm? Try: If $T(n)$ denotes the running time of the function collatz(n) . Then I think I have
$$\begin{cases}
T(n)=1 \text{ for } n\le 1\\
T(n)=T(n/2) \text{ for } n\text{ even}\\
T(n)=T(3n+1) \text{ for } n\text{ odd}\\
\end{cases}$$ I think $T(n)$ will be $\lg n$ if $n$ is even, but how do I solve the recurrence in general? | This is the Collatz conjecture, which is still an open problem. The conjecture asserts that this sequence eventually stops for every input; since that is unresolved, we do not know how to solve this runtime recurrence, and the recursion may not halt at all. So until the conjecture is proven, the running time is unknown and may be $\infty$. | {
"source": [
"https://cs.stackexchange.com/questions/54266",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/47668/"
]
} |
54,933 | How are computers able to tell the correct time and date every time? Whenever I close the computer (shut it down) all connections and processes inside stop. How is it that when I open the computer again it tells the exact correct time? Does the computer not shut down completely when I shut it down? Are there some processes still running in it? But then how does my laptop tell the correct time when I take out the battery (and thus forcibly stop all processes) and start it again after a few days? | Computers have a "real-time clock" -- a special hardware device (e.g., containing a quartz crystal) on the motherboard that maintains the time. It is always powered, even when you shut your computer off. Also, the motherboard has a small battery that is used to power the clock device even when you disconnect your computer from power. The battery doesn't last forever, but it will last at least a few weeks. This helps the computer keep track of the time even when your computer is shut off. The real-time clock doesn't need much power, so it's not wasting energy. If you take out the clock battery in addition to removing the main battery and disconnecting the power cable then the computer will lose track of time and will ask you to enter the time and date when you restart the computer. To learn more, see Real-time clock and CMOS battery and Why does my motherboard have a battery . Also, on many computers, when you connect your computer to an Internet connection, the OS will go find a time server on the network and query the time server for the current time. The OS can use this to very accurately set your computer's local clock. This uses the Network Time Protocol , also called NTP. | {
"source": [
"https://cs.stackexchange.com/questions/54933",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/48418/"
]
} |
55,462 | Many computer science programs require two or three calculus classes. I'm wondering, how and when is calculus used in computer science? The CS content of a degree in computer science tends to focus on algorithms, operating systems, data structures, artificial intelligence, software engineering, etc. Are there times when Calculus is useful in these or other areas of Computer Science? | I can think of a few courses that would need Calculus, directly . I have used bold face for the usually obligatory disciplines for a Computer Science degree, and italics for the usually optional ones. Computer Graphics /Image Processing, and here you will also need Analytic Geometry and Linear Algebra, heavily ! If you go down this path, you may also want to study some Differential Geometry (which has multivariate Calculus as a minimum prerequisite). But you'll need Calculus here even for very basic things: try searching for "Fourier Transform" or "Wavelets", for example -- these are two very fundamental tools for people working with images. Optimization , non-linear mostly, where multivariate Calculus is the fundamental language used to develop everything. But even linear optimization benefits from Calculus (the derivative of the objective function is absolutely important) Probability/Statistics . These cannot be seriously studied without multivariate Calculus. Machine Learning , which makes heavy use of Statistics (and consequently, multivariate Calculus) Data Science and related subjects, which also use lots of Statistics; Robotics , where you will need to model physical movements of a robot, so you will need to know partial derivatives and gradients. Discrete Math and Combinatorics ( yes! , you may need Calculus for discrete counting!) -- if you get serious enough about generating functions, you'll need to know how to integrate and derivate certain formulas. And that is useful for Analysis of Algorithms (see the book by Sedgewick and Flajolet, "Analysis of Algorithms"). Similarly, Taylor Series and calculus can be useful in solving certain kinds of recurrence relations, which are used in algorithm analysis. Analysis of Algorithms , where you use the notion of limit right from the start (see Landau notation, "little $o$ " -- it's defined using a limit) There may be others -- this is just off the top of my head. And, besides that, one benefits indirectly from a Calculus course by learning how to reason and explain arguments with technical rigor. This is more valuable than students usually think. Finally -- you will need Calculus in order to, well, interact with people from other Exact Sciences and Engineering. And it's not uncommon that a Computer Scientist needs to not only talk but also work together with a Physicist or an Engineer. | {
"source": [
"https://cs.stackexchange.com/questions/55462",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/16466/"
]
} |
55,471 | I have heard the word "hash" being used in different contexts (all within the world of computing) with different meanings. For example, in the book Learn Python the Hard Way, in the chapter on dictionaries it is said "Python calls them "dicts." Other languages call them "hashes."" So, are hashes dictionaries? The other common usage of the word is in relation to encryption. I have also heard (& read) people using the word "hash" as a specific function within high-level programing. So, what exactly is it? Can anyone (with time and who is knowledgeable) kindly explain the nitty-gritties of "hash (or hashes)?" | The Wikipedia article on hash functions is very good,
but I will here give my take. What is a hash? "Hash" is really a broad term
with different formal meanings in different contexts.
There is not a single perfect answer to your question.
I will explain the general underlying concept and
mention some of the most common usages of the term. A "hash" is a function $h$ referred to as hash function that takes as input objects and outputs a string or number.
The input objects are usually members of basic data types like
strings, integers, or
bigger ones composed of other objects like user defined structures.
The output is a typically a number or a string.
The noun "hash" often refers to this output.
The verb "hash" often means "apply a hash function".
The main properties that a hash function should have are: It should be easy to compute and The outputs should be relatively small. Example: Say we want to hash numbers in the range from 0 to 999,999,999 to
number between 0 and 99.
One simple hash function can be $h(x) = x \mod 100$ . Common additional properties: Depending on use case we might want the hash function to
satisfy additional properties.
Here are some common additional properties: Uniformity :
Often we want the hashes of objects to be distinct.
Moreover we may want the hashes to be "spreading-out".
If I want to hash some objects down into 100 buckets
(so the output of my hash function is a number from 0-99),
then I am usually hoping that about 1/100 objects land in bucket 0,
about 1/100 land in bucket 1, and so on. Cryptographic collision resistance :
Sometimes this is taken even farther,
for instance, in cryptography
I may want a hash function such that
it is computationally difficult for an adversary
to find two different inputs that map to the same output. Compression :
I often want to hash arbitrarily-large inputs down into
a constant-size output or fixed number of buckets. Determinism :
I may want a hash function whose output doesn't change between runs,
i.e. the output of the hash function on the same object will always remain the same.
This may seem to conflict with uniformity above, but
one solution is to choose the hash function randomly once,
and not change it between runs. Some applications One common application is in data structures such as a hash table,
which are a way to implement dictionaries.
Here, you allocate some memory, say, 100 "buckets";
then, when asked to store an (key, value) pair in the dictionary,
you hash the key into a number 0-99, and
store the pair in the corresponding bucket in memory.
Then, when you are asked to look up a key,
you hash the key into a number 0-99 with the same hash function and
check that bucket to see if that key is in there.
If so, you return its value. Note that you could also implement dictionaries in other ways,
such as with a binary search tree (if your objects are comparable). Another practical application is checksums,
which are ways to check that two files are the same
(for example, the file was not corrupted from its previous version).
Because hash functions are very unlikely to map two inputs to the same output, you compute and store a hash of the first file,
usually represented as a string.
This hash is very small, maybe only a few dozen ASCII characters.
Then, when you get the second file,
you hash that and check that the output is the same.
If so, almost certainly it is the exact same file byte-for-byte. Another application is in cryptography,
where these hashes should be hard to "invert" --
that is, given the output and the hash function,
it should be computationally hard to figure out the input(s)
that led to that output.
One use of this is for passwords:
Instead of storing the password itself,
you store a cryptographic hash of the password
(maybe with some other ingredients).
Then, when a user enters a password,
you compute its hash and check that it matches the correct hash;
if so, you say the password is correct.
(Now even someone who can look and find out the hash saved on
the server does not have such an easy time pretending to be the user.)
This application can be a case where
the output is just as long or longer than the input,
since the input is so short. | {
"source": [
"https://cs.stackexchange.com/questions/55471",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/47102/"
]
} |
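To illustrate the checksum application mentioned in the answer above, here is a small sketch using Python's standard hashlib (the file names are hypothetical):

```python
import hashlib

def file_digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):   # read the file in blocks
            h.update(chunk)
    return h.hexdigest()                                # short, fixed-size string

# Equal digests mean the two files are almost certainly identical byte-for-byte.
print(file_digest("report_v1.pdf") == file_digest("report_v2.pdf"))
```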
55,489 | Are there any theoretical machines which exceed Turing machines' capability in at least some areas? | The Church–Turing thesis (in one formulation) states that everything that is physically computable can also be computed on a Turing machine. Assuming you believe this thesis, and given that you're interested in functions which such machines could compute (and not in, say, interactive computation), then no hypercomputation is possible. The Church–Turing thesis only concerns itself with what is computable, but not with the efficiency of computation. It is known that Turing machines are not so efficient, though they polynomially simulate classical computers. Quantum computers are believed to be exponentially more efficient than Turing machines. In this sense, you can beat Turing machines (if you could only build a scalable quantum computer). Scott Aaronson probably has more to say about this — I'll let you look this up on your own. | {
"source": [
"https://cs.stackexchange.com/questions/55489",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/41391/"
]
} |
55,646 | From This reference : Strict positivity The strict positivity condition rules out declarations such as data Bad : Set where
bad : (Bad → Bad) → Bad
A B C
-- A is in a negative position, B and C are OK Why is A is negative ? Also Why B is allowed ? I understand why C is allowed. | First a terminological explanation: negative and positive positions come from logic. They are about an assymetry in logical connectives: in $A \Rightarrow B$ the $A$ behaves differently from $B$. A similar thing happens in category theory, where we say contravariant and covariant instead of negative and positive, respectively. In physics they speak of quantities that behave "covariantly" and "contravariantly, too. So this is a very general phenomenon. A programmer might think of them as "input" and "output". Now onto inductive datatypes. Think of an inductive datatype $T$ as a kind of algebraic structure: constructors are the operations which take elements of $T$ as arguments and produce new elements of $T$. This is very similar to ordinary algebra: addition takes two numbers and produces a number. In algebra it is customary that an operation takes a finite number of arguments, and in most cases it takes zero (constant), one (unary) or two (binary) arguments. It is convenient to generalize this for constructors of datatypes. Suppose c is a constructor for a datatype T : if c is a constant we can think of it as a function unit -> T , or equivalently (empty -> T) -> T , if c is unary we can think of it as a function T -> T , or equivalently (unit -> T) -> T , if c is binary we can think of it as a function T -> T -> T ,
or equivalently T * T -> T , or equivalently (bool -> T) -> T , if we wanted a constructor c which takes seven arguments, we could view it as a function (seven -> T) -> T where seven is some previously defined type with seven elements. we can also have a constructor c which takes countably infinitely many arguments, that would be a function (nat -> T) -> T . These examples show that the general form of a constructor should be c : (A -> T) -> T where we call A the arity of c and we think of c as a constructor that takes A -many arguments of type T to produce an element of T . Here is something very important: the arities must be defined before we define T , or else we cannot tell what the constructors are supposed to be doing. If someone tries to have an constructor broken: (T -> T) -> T then the question "how many arguments does broken take?" has no good answer. You might try to answer it with "it takes T -many arguments", but that will not do, because T is not defined yet. We might try to get out of the cunundrum by using fancy fixed-point theory to find a type T and an injective function (T -> T) -> T , and would succeed, but we would also break the induction principle for T along the way. So, it's just a bad idea to try such a thing. For the sake of completeness, let me explain the whole story. We need to generalize the above form of constructors a little bit. Sometimes we have operations or constructors that take parameters . For example, scalar multiplication takes a scalar $\lambda$ and a vector $v$ to produce a vector $\lambda \cdot v$. It is a unary operation on vectors, parameterized by a scalar. We could view scalar multiplication as infinitely many unary operations, one for each scalar, but that's annoying. So, the general form of a constructor c should allow a parameter of some type B : c : B * (A -> T) -> T Indeed, many constructors can be rewritten in this way, but not all, we need one more step, namely we should allow A to depend on B : c : (∑ (x : B), A x -> T) -> T This is the final form of a constructor for an inductive type. It is also precisely what W-types are. The form is so general that we only ever need a single constructor c ! Indeed, if we have two of them d' : (∑ (x : B'), A' x -> T) -> T
d'' : (∑ (x : B''), A'' x -> T) -> T then we can combine them into one d : (∑ (x : B), A x -> T) -> T where B := B' + B''
A(inl x) := A' x
A(inr x) := A'' x By the way, if we curry the general form we see that it is equivalent to c : ∏ (x : B), ((A x -> T) -> T) which is closer to what people actually write down in proof assistants. The proof assistants allow us to write down the constructors in convenient ways, but those are equivalent to the general form above (exercise!). | {
"source": [
"https://cs.stackexchange.com/questions/55646",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/23825/"
]
} |
55,917 | I've been reading about Lambda calculus recently but strangely I can't find an explanation for why it is called "Lambda" or where the expression comes from. Can anyone explain the origins of the term? | An excerpt from History of Lambda-calculus and Combinatory Logic by F. Cardone and J.R. Hindley(2006): By the way, why did Church choose the notation “$\lambda$”? In [Church, 1964, §2] he stated clearly that it came from the notation “$\hat{x}$” used for class-abstraction by Whitehead and Russell, by first modifying “$\hat{x}$” to “$\wedge x$” to distinguish function abstraction from class-abstraction, and then changing “$\wedge$” to “$\lambda$” for ease of printing. This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and “$\lambda$” just happened to be chosen. | {
"source": [
"https://cs.stackexchange.com/questions/55917",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/35691/"
]
} |
56,615 | Char Code
==== ====
E 0000
i 0001
y 0010
l 0011
k 0100
. 0101
space 011
e 10
r 1100
s 1101
n 1110
a 1111 Original text: Eerie eyes seen near lake Encoded: 0000101100000110011100010101101101001111101011111100011001111110100100101 Why is there no need for a separator in the Huffman encoding? | You don't need a separator because Huffman codes are prefix-free codes (also, unhelpfully, known as "prefix codes"). This means that no codeword is a prefix of any other codeword. For example, the codeword for "e" in your example is 10, and you can see that no other codewords begin with the digits 10. This means that you can decode greedily by reading the encoded string from left to right and outputting a character as soon as you've seen a codeword. For example, 0, 00 and 000 don't code anything so you keep reading bits. When you read 0000, that encodes "E" and, because the code is prefix-free, you know there's no other codeword 0000x, so you can now output "E" and start to read the next codeword. Again, 1 doesn't encode anything but 10 encodes "e". No other codewords begins with "10", so you can output "e". And so on. | {
"source": [
"https://cs.stackexchange.com/questions/56615",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/50277/"
]
} |
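To make the greedy decoding in the answer above concrete, here is a small sketch that uses the code table from the question; because the code is prefix-free, the buffer can be emitted as soon as it matches a codeword.

```python
codes = {
    "0000": "E", "0001": "i", "0010": "y", "0011": "l", "0100": "k",
    "0101": ".", "011": " ", "10": "e", "1100": "r", "1101": "s",
    "1110": "n", "1111": "a",
}

def decode(bits):
    out, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in codes:          # cannot be the prefix of any other codeword
            out.append(codes[buffer])
            buffer = ""
    return "".join(out)

print(decode("0000101100"))          # "Eer": E = 0000, e = 10, r = 1100
```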
56,888 | I am currently finishing my MSc in computer science. I am interested in programming languages, especially in type systems. I got interested in research in this field and next semester I will start a PhD on the subject. Now here is the real question: how can I explain what I (want to) do to people with no previous knowledge in either computer science or related fields? The title comes from the facts that I am not even able to explain what I do to my parents, friends and so on. Yeah, I can say "the whole point is to help software developers to write better software" , but I do not think it is really useful: they are not aware of "programming", they have not clue of what it means. It feels like I am saying I am an auto mechanic to someone from the Middle Ages: they simply do not know what I am talking about, let alone how to improve it. Does anyone have good analogies with real-world? Enlightening examples causing "a-ha" moments? Should I actually show a short and simple snippet of code to 60+ year-old with no computer science (nor academic) experience? If so, which language should I use? Did anyone here face similar issues? | If you have a few minutes, most people know how to add and multiply two three-digit numbers on paper. Ask them to do that, (or to admit that they could, if they had to) and ask them to acknowledge that they do this task methodically: if this number is greater than 9, then add a carry, and so forth. This description they just gave of what to do that is an example of an algorithm . This is how I teach people the word algorithm, and in my experience this has been the best example. Then you can explain that one may imagine there are more complex tasks that computers must do, and that therefore there is a need for an unambiguous language to feed a computer these algorithms. So there has been a proliferation of programming languages because people express their thoughts differently, and you're researching ways to design these languages so that it is harder to make mistakes. This is a very recognizable situation. Most people have no concept that the computers they use run programs, or that those programs are human-written source code, or that a computer could 'read' source code, or that computation, which they associate with arithmetic, is the only thing computers do (and data movement, and networking, maybe). My research is in quantum computing, so when people ask me what I do, I don't attempt to explain that. Instead, I try to explain that quantum physics exists (they've usually heard of Schrödinger's cat, and things that are in two places at once), and that because of this strange physics, faster computation might be possible. My goal is to leave the person feeling a little more knowledeable than they did going in, feeling excited about a world they didn't know existed, but with which you have now familiarized them. I find that that's much more valuable than explaining my particular research questions. | {
"source": [
"https://cs.stackexchange.com/questions/56888",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/8337/"
]
} |
57,003 | My mother is taking some online course in order to be a librarian of sorts, in this course they cover boolean searches, so they can search databases efficiently, however, she got a question sounding something like this: The search "x OR y" will result in 105 000 hits, while a search for only x will result in 80 000 hits, and a search for only y will get 35 000 hits. Why does the search "x OR y" give 105 000 hits, when the combined individual searches gives 115 000 hits? For me this sounded strange, so I tested this myself, using the words bacon and sandwich . Only bacon yielded 179 000 000 results Only sandwich yielded 312 000 000 results bacon OR sandwich gave 491 000 000 results But for me it adds up: 179 000 000 (bacon) + 312 000 000 (sandwich) = 491 000 000 (bacon OR sandwich) Why could an OR query result in fewer hits than both individual queries combined? | Hint: The search x AND y will result in 10 000 hits. | {
"source": [
"https://cs.stackexchange.com/questions/57003",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/50633/"
]
} |
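Spelling out the hint above with inclusion-exclusion (the 80 000 and 35 000 figures are from the question, the 10 000 is the overlap given in the hint): hits matching both x and y are counted twice when the two single-search totals are simply added, so $$|x \cup y| = |x| + |y| - |x \cap y| = 80\,000 + 35\,000 - 10\,000 = 105\,000.$$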
57,262 | Comparing an ordered pair (x,y) to an unordered pair {x, y} (set), then information theoretically, the difference is only one bit, as whether x comes first or y requires exactly a single bit to represent. So, if we're given a set {x,y} where x,y are two different 32-bit integers, can we pack them into 63 bits (rather 64)? It should be possible to recover the original 32 bit integers from the 63 bit result, but without being able to recover their order. | Yes, one can. If $x<y$, map the set $\{x,y\}$ to the number $$f(x,y) = y(y-1)/2 + x.$$ It is easy to show that $f$ is bijective, and so this can be uniquely decoded. Also, when $0 \le x < y < 2^{32}$, we have $0 \le f(x,y) < 2^{63} - 2^{31}$, so this maps the set $\{x,y\}$ to a 63-bit number $f(x,y)$. To decode, you can use binary search on $y$, or take a square root: $y$ should be approximately $\lfloor \sqrt{2 f(x,y)} \rfloor$. | {
"source": [
"https://cs.stackexchange.com/questions/57262",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10581/"
]
} |
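A small sketch of the map in the answer above and its square-root decoding (function names are mine; it assumes the two values are distinct):

```python
import math

def encode(x, y):
    if x > y:                        # order the pair so that x < y
        x, y = y, x
    return y * (y - 1) // 2 + x

def decode(code):
    y = math.isqrt(2 * code)         # y is roughly floor(sqrt(2*code)); adjust if needed
    while y * (y - 1) // 2 > code:
        y -= 1
    while (y + 1) * y // 2 <= code:
        y += 1
    x = code - y * (y - 1) // 2
    return x, y

print(decode(encode(123456789, 42)))   # (42, 123456789)
```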
57,648 | There are many applications where a pseudo random number generator is used. So people implement one that they think is great only to find later that it's flawed. Something like this happened with the Javascript random number generator recently. RandU much earlier too. There are also issues of inappropriate initial seeding for something like the Twister. I cannot find examples of anyone combining two or more families of generators with the usual xor operator. If there is sufficient computer power to run things like java.SecureRandom or Twister implementations, why do people not combine them? ISAAC xor XORShift xor RandU should be a fairly good example, and where you can see the weakness of a single generator being mitigated by the others. It should also help with the distribution of numbers into higher dimensions as the intrinsic algorithms are totally different. Is there some fundamental principle that they shouldn't be combined? If you were to build a true random number generator, people would probably advise that you combine two or more sources of entropy. Is my example different? I'm excluding the common example of several linear feedback shift registers working together as they're from the same family. | Sure, you can combine PRNGs like this, if you want, assuming they are seeded independently. However, it will be slower and it probably won't solve the most pressing problems that people have. In practice, if you have a requirement for a very high-quality PRNG, you use a well-vetted cryptographic-strength PRNG and you seed it with true entropy. If you do this, your most likely failure mode is not a problem with the PRNG algorithm itself; the most likely failure mode is lack of adequate entropy (or maybe implementation errors). Xor-ing multiple PRNGs doesn't help with this failure mode. So, if you want a very high-quality PRNG, there's probably little point in xor-ing them. Alternatively, if you want a statistical PRNG that's good enough for simulation purposes, typically the #1 concern is either speed (generate pseudorandom numbers really fast) or simplicity (don't want to spend much development time on researching or implementing it). Xor-ing slows down the PRNG and makes it more complex, so it doesn't address the primary needs in that context, either. As long as you exhibit reasonable care and competence, standard PRNGs are more than good enough, so there's really no reason why we need anything fancier (no need for xor-ing). If you don't have even minimal levels of care or competence, you're probably not going to choose something complex like xor-ing, and the best way to improve things is to focus on more care and competence in the selection of the PRNG rather than on xor-ing. Bottom line : Basically, the xor trick doesn't solve the problems people usually actually have when using PRNGs. | {
"source": [
"https://cs.stackexchange.com/questions/57648",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/31167/"
]
} |
59,675 | From what I have seen about usage of a pair of public and private keys, the public key is used for encrypting a message, and the private key is used for decrypting the encrypted message. If a message is encrypted by the private key, can it be decrypted by the corresponding public key? If yes, can you give some examples of when this case is used? Thanks. | Q: If you pedal backwards on a fish, does it go backwards? A: ??? A fish is not a bicycle. Similarly, you cannot use a private key to encrypt a message or a public key to decrypt a message. They don't have the right equipment. With RSA , which is a popular public-key cryptosystem but not the only one, the private key and the public key have the same mathematical properties, so it is possible to use them interchangeably in the algorithms. (They don't have the same security properties, however — the public key is usually easily guessable from the private key.) You can take an RSA encryption algorithm and feed it a private key, or an RSA decryption algorithm and feed it a public key. However, the results are not meaningful according to standard algorithms. This symmetry between public keys and private keys does not extend to most other public-key cryptosystems. In general, the public key isn't the right type of mathematical object to use for the decryption algorithm, and the private key isn't the right type of mathematical object to use for the encryption algorithm. This being said, public-key cryptosystems are based on the concept of trapdoor functions . A one-way function is a function that is easy to compute, but whose inverse is hard to compute. A trapdoor function is like a one-way function, but there is a “magic” value that makes the inverse easy to compute. If you have a trapdoor function, you can use it to make a public-key encryption algorithm: going forward (in the easy direction), the function encrypts; going backward (in the hard direction), the function decrypts. The magic value required to decrypt is the private key. If you have a trapdoor function, you can also use it to make a digital signature algorithm: going backward (in the hard direction), the function signs ; going forward (in the easy direction), the function verifies a signature. Once again, the magic value required to sign is the private key. Trapdoor functions generally come in families; the data necessary to specify one particular element of the family is the public key. Even though public-key encryption and digital signatures are based on the same concepts, they are not strictly identical. For example, the RSA trapdoor function is based on the difficulty of undoing a multiplication unless you already know one of the factors. There are two common families of public-key encryption schemes based on RSA , known as PKCS#1 v1.5 and OAEP. There are also two common families of digital signature schemes based on RSA, known as PKCS#1 v1.5 and PSS. The two “PKCS#1 v1.5” are of similar designs, but they are not identical. This answer by Thomas Pornin and this answer by Maarten Bodewes go into some details of the difference between signature/verification and decryption/encryption in the case of RSA. Beware that some layman presentations of public-key cryptography masquerade digital signature and verification as decryption and encryption, for historical reasons: RSA was popularized first, and the core operation of RSA is symmetric. 
(The core operation of RSA, known as “textbook RSA”, is one of the steps in an RSA signature/verification/encryption/decryption algorithm, but it does not constitute in itself a signature, verification, encryption or decryption algorithm.) They are symmetric from the 10000-foot view, but they are not symmetric once you go into the details. See also Reduction from signatures to encryption? , which explains that you can build an encryption scheme from a signature scheme, but only under certain conditions. | {
"source": [
"https://cs.stackexchange.com/questions/59675",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/336/"
]
} |
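As an aside to the RSA discussion above: the "textbook RSA" symmetry can be seen with a few lines of integer arithmetic. This is a deliberately tiny, insecure sketch (made-up parameters, no padding; pow(e, -1, phi) needs Python 3.8+), not how a real library signs or encrypts.

```python
# Toy "textbook RSA" -- illustration only, never use raw RSA in practice.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse; Python 3.8+)

m = 42                         # a "message" smaller than n
c = pow(m, e, n)               # encrypt with the public key
assert pow(c, d, n) == m       # decrypt with the private key

s = pow(m, d, n)               # "sign": exponentiate with the private key
assert pow(s, e, n) == m       # "verify": exponentiate with the public key
```

The symmetry holds only for this core modular-exponentiation step; as the answer stresses, the actual signature and encryption schemes built on top of it (PKCS#1 v1.5, OAEP, PSS) are not interchangeable.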
59,720 | While doing the second code kata (which asks you to implement a binary search algorithm five times, each time with a different method), I've come up with a slightly different solution which works as follows: If i have a sorted array of lenght 100 and I see its starting field contains the number 200 and its ending field contains the number 400, me, as a maths studying human, would be likely to start searching around field 35 if I was searching the number 270, and not the field 50 like in a normal binary search algorithm. Then, if the number on field 35 of the array is 270, 35 is the index I was searching for. If that isn't the case I can compare the number I got (say 280) and repeat the operation taking the lower part of the array (so I have 35 fields with the starting field containing 200 and the ending field containing 280) if the number I found is greater than what I'm searching for, or the upper part of the array (say I got 260: now I have 65 indexes, the first one containing 260 and the final one containing 400. Orientatively, I would head torward index 4 of this sub array, which is index 39 of the entire array) if the number I got is smaller than the number I'm searching for. The question is: can this algorithm be considered a binary search algorithm? If not, has it got its own name? | I would not call this a binary search. It is clearly similar to binary search and it's natural to see it as a refinement of binary search. However it has significantly different algorithm complexity characteristics, Interpolation Search has expected run time of O(log(log(n)) assuming the data is uniformly distributed, however it pays for this by having O(n) worst case run time. I prefer to say "The worst case run time of binary search is O(log(n))" rather than "Depending on the choice of bounding elements the worst case run time of binary search is O(log(n))". This means that I can not classify Interpolation search as a binary search algorithm. | {
"source": [
"https://cs.stackexchange.com/questions/59720",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/53671/"
]
} |
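To make the distinction above concrete, here is a sketch of interpolation search over a sorted list of numbers. The position guess is the only change from binary search; the assumptions (roughly uniform keys for the O(log log n) behaviour, O(n) worst case otherwise) are as described in the answer.

```python
def interpolation_search(a, target):
    """Return an index of target in the sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= target <= a[hi]:
        if a[lo] == a[hi]:                 # avoid dividing by zero
            return lo if a[lo] == target else -1
        # Guess the position by linear interpolation rather than the midpoint.
        pos = lo + (hi - lo) * (target - a[lo]) // (a[hi] - a[lo])
        if a[pos] == target:
            return pos
        if a[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

# The question's example: values 200..400, searching 270 jumps near index 35 at once.
assert interpolation_search(list(range(200, 401, 2)), 270) == 35
```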
60,588 | I just came across the following thing: I put multiple identical copies of a png image into a folder and then tried to compress that folder with the following methods: tar czf folder.tar.gz folder/ tar cf folder.tar folder/ && xz --stdout folder.tar > folder.tar.xz (this one works well for identical images, however for similar images the gain is zero) zip -r folder.zip folder/ When I checked the size of the .tar.gz , .tar.xz , .zip I realized that it is almost the same as the one of folder/ . I understand that a png image itself may have a high level of compression and therefore cannot be compressed further. However when merging many similar (in this case even identical) png images to an archive and then compressing the archive I would expect the required size to decrease markedly. In the case of identical images I would expect a size of roughly the size of a single image. | Have a look at how compression algorithms work. At least those in the Lempel-Ziv family ( gzip uses LZ77 , zip apparently mostly does as well , and xz uses LZMA ) compress somewhat locally : Similarities that lie far away from each other can not be identified. The details differ between the methods, but the bottom line is that by the time the algorithm reaches the second image, it has already "forgotten" the beginning of the first. And so on. You can try and manually change the parameters of the compression method; if window size (LZ77) resp. block/chunk size (later methods) are at least as large as two images, you will probably see further compression. Note that the above only really applies if you have identical images or almost identical uncompressed images. If there are differences, compressed images may not look anything alike in memory. I don't know how the PNG compression works; you may want to check the hex representations of the images you have for shared substrings manually. Also note that even with changed parameters and redundancy to exploit, you won't get down to the size of one image. Larger dictionaries mean larger code-word size, and even if two images are exactly identical you may have to encode the second one using multiple code-words (which point into the first). | {
"source": [
"https://cs.stackexchange.com/questions/60588",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/23162/"
]
} |
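A small experiment illustrating the window/dictionary point made above, using random bytes as a stand-in for already-compressed image data. The exact sizes vary per run, and the multi-MiB default LZMA dictionary is a detail of the library's preset, so treat the numbers as indicative rather than exact.

```python
import os, zlib, lzma

block = os.urandom(1_000_000)       # stands in for one already-compressed PNG
doubled = block + block             # "two identical files in an archive"

print(len(zlib.compress(doubled)))  # ~2 MB: DEFLATE's 32 KiB window never sees the repeat
print(len(lzma.compress(doubled)))  # ~1 MB: LZMA's large dictionary spans both copies
```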
60,599 | In a description of OOP in my textbook, it is written that “in procedure oriented program the program is organized around its code while in object oriented programming, the program is organized around its data”. What is the meaning of this statement? An explanation with an example would be of great help. | Have a look at how compression algorithms work. At least those in the Lempel-Ziv family ( gzip uses LZ77 , zip apparently mostly does as well , and xz uses LZMA ) compress somewhat locally : Similarities that lie far away from each other can not be identified. The details differ between the methods, but the bottom line is that by the time the algorithm reaches the second image, it has already "forgotten" the beginning of the first. And so on. You can try and manually change the parameters of the compression method; if window size (LZ77) resp. block/chunk size (later methods) are at least as large as two images, you will probably see further compression. Note that the above only really applies if you have identical images or almost identical uncompressed images. If there are differences, compressed images may not look anything alike in memory. I don't know how the PNG compression works; you may want to check the hex representations of the images you have for shared substrings manually. Also note that even with changed parameters and redundancy to exploit, you won't get down to the size of one image. Larger dictionaries mean larger code-word size, and even if two images are exactly identical you may have to encode the second one using multiple code-words (which point into the first). | {
"source": [
"https://cs.stackexchange.com/questions/60599",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/54824/"
]
} |
60,965 | I was trying to explain to someone that C is Turing-complete, and realized that I don't actually know if it is, indeed, technically Turing-complete. (C as in the abstract semantics, not as in an actual implementation.) The "obvious" answer (roughly: it can address an arbitrary amount of memory, so it can emulate a RAM machine, so it's Turing-complete) isn't actually correct, as far as I can tell, as although the C standard allows for size_t to be arbitrarily large, it must be fixed at some length, and no matter what length it is fixed at it is still finite. (In other words, although you could, given an arbitrary halting Turing machine, pick a length of size_t such that it will run "properly", there is no way to pick a length of size_t such that all halting Turing machines will run properly) So: is C99 Turing-complete? | I'm not sure but I think the answer is no, for rather subtle reasons. I asked on Theoretical Computer Science a few years ago and didn't get an answer that goes beyond what I'll present here. In most programming languages, you can simulate a Turing machine by: simulating the finite automaton with a program that uses a finite amount of memory; simulating the tape with a pair of linked lists of integers, representing the content of the tape before and after the current position. Moving the pointer means transferring the head of one of the lists onto the other list. A concrete implementation running on a computer would run out of memory if the tape got too long, but an ideal implementation could execute the Turing machine program faithfully. This can be done with pen and paper, or by buying a computer with more memory, and a compiler targeting an architecture with more bits per word and so on if the program ever runs out of memory. This doesn't work in C because it's impossible to have a linked list that can grow forever: there's always some limit on the number of nodes. To explain why, I first need to explain what a C implementation is. C is actually a family of programming languages. The ISO C standard (more precisely, a specific version of this standard) defines (with the level of formality that English allows) the syntax and semantics a family of programming languages. C has a lot of undefined behavior and implementation-defined behavior . An “implementation” of C codifies all the implementation-defined behavior (the list of things to codify is in appendix J for C99). Each implementation of C is a separate programming language. Note that the meaning of the word “implementation” is a bit peculiar: what it really means is a language variant, there can be multiple different compiler programs that implement the same language variant. In a given implementation of C, a byte has $2^{\texttt{CHAR_BIT}}$ possible values. All data can represented as an array of bytes: a type t has at most
$2^{\texttt{CHAR_BIT} \times \texttt{sizeof(t)}}$ possible values. This number varies in different implementations of C, but for a given implementation of C, it's a constant. In particular, pointers can only take at most $2^{\texttt{CHAR_BIT} \times \texttt{sizeof(void*)}}$ values. This means that there is a finite maximum number of addressable objects. The values of CHAR_BIT and sizeof(void*) are observable, so if you run out of memory, you can't just resume running your program with larger values for those parameters. You would be running the program under a different programming language — a different C implementation. If programs in a language can only have a bounded number of states, then the programming language is no more expressive than finite automata. The fragment of C that's restricted to addressable storage only allows at most $n \times 2^{\texttt{CHAR_BIT} \times \texttt{sizeof(void*)}}$ program states where $n$ is the size of the abstract syntax tree of the program (representing the state of the control flow), therefore this program can be simulated by a finite automaton with that many states. If C is more expressive, it has to be through the use of other features. C does not directly impose a maximum recursion depth. An implementation is allowed to have a maximum, but it's also allowed not to have one. But how do we communicate between a function call and its parent? Arguments are no good if they're addressable, because that would indirectly limit the depth of recursion: if you have a function int f(int x) { … f(…) …} then all the occurrences of x on active frames of f have their own address and so the number of nested calls is bounded by the number of possible addresses for x . A C program can use non-addressable storage in the form of register variables. “Normal” implementations can only have a small, finite number of variables that don't have an address, but in theory an implementation could allow an unbounded amount of register storage. In such an implementation, you can make an unbounded amount of recursive calls to a function, as long as its argument are register . But since the arguments are register , you can't make a pointer to them, and so you need to copy their data around explicitly: you can only pass around a finite amount of data, not an arbitrary-sized data structure that's made of pointers. With unbounded recursion depth, and the restriction that a function can only get data from its direct caller ( register arguments) and return data to its direct caller (the function return value), you get the power of deterministic pushdown automata . I can't find a way to go further. (Of course you could make the program store the tape content externally, through file input/output functions. But then you wouldn't be asking whether C is Turing-complete, but whether C plus an infinite storage system is Turing-complete, to which the answer is a boring “yes”. You might as well define the storage to be a Turing oracle — call fopen("oracle", "r+") , fwrite the initial tape content to it and fread back the final tape content.) | {
"source": [
"https://cs.stackexchange.com/questions/60965",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/19696/"
]
} |
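The two-list tape simulation sketched at the start of the answer above is easy to write down in a language with unbounded lists, which is exactly the property the answer argues a single C implementation cannot offer for addressable storage. A sketch (the transition-table format here is made up for illustration):

```python
def run_tm(delta, state, accept, reject, tape_input, blank="_"):
    """Simulate a Turing machine. delta maps (state, symbol) to
    (new_state, written_symbol, move) with move in {"L", "R"}."""
    left = []                            # symbols left of the head, nearest on top
    right = list(reversed(tape_input))   # head symbol sits at right[-1]
    while state not in (accept, reject):
        sym = right.pop() if right else blank
        state, written, move = delta[(state, sym)]
        if move == "R":
            left.append(written)         # written cell is now left of the head
        else:
            right.append(written)        # written cell is now right of the head
            right.append(left.pop() if left else blank)
    return state == accept
```

The two Python lists can grow without bound on either side of the head, so the fixed interpreter above can carry out arbitrarily long tape excursions.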
60,979 | The aim of this problem is to find a subset (need not be consecutive) of a given set such that the sum is maximal and less than some given number $w$. (Note, we are trying to find a subset that is less than or equal to $w$ and not closest to $w$). For example, given a set $\{1, 3, 5, 9, 10\}$ and maximum weight 17, the maximal subset is $\{3, 5, 9\}$ since its sum is exactly 17. Another example: given a set $\{1, 3, 4, 9\}$ and maximum weight 15, the maximal subset is $\{1, 4, 9\}$ since its sum is 14, and there are no other subsets whose sum is 15. Example with both positive and negative numbers: given a set $\{-3, 2, 4\}$ and maximum weight 3, the subset is the set itself since -3 + 2 + 4 = 3. I know how to solve it with only positive numbers, but I am struggling to find an algorithm to solve this problem for the general case with both positive and negative numbers. Obviously, my goal is not to use the brute force approach and check every possible subset since the complexity would be $O(n2^n)$. I stumbled upon an idea on another post that suggested adding a sufficiently large number to every elements in the set and subsequently changing the maximum weight. That is given a set $R = \{ a_1, a_2, ... , a_n \}$, we add some number $X$ (we can pick some number greater than equal to the absolute value of the smallest negative number) to get a set that looks like $\{ a_1 + X, a_2 + X, ... , a_n + X \}$ and change the maximum weight to $nX + w$ where $w$ was the original weight. Now, we have reduced the problem to only non-negative numbers. However, I could not see a way to actually find the subset that was closest to the original weight, but only whether any elements add up exactly the original weight (ie, there is no way to actually find the subset, but only to determine that some subset exists). Is there any other clever trick like this one to solve the problem for both positive and negative numbers? Any help would be thoroughly appreciated. | I'm not sure but I think the answer is no, for rather subtle reasons. I asked on Theoretical Computer Science a few years ago and didn't get an answer that goes beyond what I'll present here. In most programming languages, you can simulate a Turing machine by: simulating the finite automaton with a program that uses a finite amount of memory; simulating the tape with a pair of linked lists of integers, representing the content of the tape before and after the current position. Moving the pointer means transferring the head of one of the lists onto the other list. A concrete implementation running on a computer would run out of memory if the tape got too long, but an ideal implementation could execute the Turing machine program faithfully. This can be done with pen and paper, or by buying a computer with more memory, and a compiler targeting an architecture with more bits per word and so on if the program ever runs out of memory. This doesn't work in C because it's impossible to have a linked list that can grow forever: there's always some limit on the number of nodes. To explain why, I first need to explain what a C implementation is. C is actually a family of programming languages. The ISO C standard (more precisely, a specific version of this standard) defines (with the level of formality that English allows) the syntax and semantics a family of programming languages. C has a lot of undefined behavior and implementation-defined behavior . 
An “implementation” of C codifies all the implementation-defined behavior (the list of things to codify is in appendix J for C99). Each implementation of C is a separate programming language. Note that the meaning of the word “implementation” is a bit peculiar: what it really means is a language variant, there can be multiple different compiler programs that implement the same language variant. In a given implementation of C, a byte has $2^{\texttt{CHAR_BIT}}$ possible values. All data can represented as an array of bytes: a type t has at most
$2^{\texttt{CHAR_BIT} \times \texttt{sizeof(t)}}$ possible values. This number varies in different implementations of C, but for a given implementation of C, it's a constant. In particular, pointers can only take at most $2^{\texttt{CHAR_BIT} \times \texttt{sizeof(void*)}}$ values. This means that there is a finite maximum number of addressable objects. The values of CHAR_BIT and sizeof(void*) are observable, so if you run out of memory, you can't just resume running your program with larger values for those parameters. You would be running the program under a different programming language — a different C implementation. If programs in a language can only have a bounded number of states, then the programming language is no more expressive than finite automata. The fragment of C that's restricted to addressable storage only allows at most $n \times 2^{\texttt{CHAR_BIT} \times \texttt{sizeof(void*)}}$ program states where $n$ is the size of the abstract syntax tree of the program (representing the state of the control flow), therefore this program can be simulated by a finite automaton with that many states. If C is more expressive, it has to be through the use of other features. C does not directly impose a maximum recursion depth. An implementation is allowed to have a maximum, but it's also allowed not to have one. But how do we communicate between a function call and its parent? Arguments are no good if they're addressable, because that would indirectly limit the depth of recursion: if you have a function int f(int x) { … f(…) …} then all the occurrences of x on active frames of f have their own address and so the number of nested calls is bounded by the number of possible addresses for x . A C program can use non-addressable storage in the form of register variables. “Normal” implementations can only have a small, finite number of variables that don't have an address, but in theory an implementation could allow an unbounded amount of register storage. In such an implementation, you can make an unbounded amount of recursive calls to a function, as long as its argument are register . But since the arguments are register , you can't make a pointer to them, and so you need to copy their data around explicitly: you can only pass around a finite amount of data, not an arbitrary-sized data structure that's made of pointers. With unbounded recursion depth, and the restriction that a function can only get data from its direct caller ( register arguments) and return data to its direct caller (the function return value), you get the power of deterministic pushdown automata . I can't find a way to go further. (Of course you could make the program store the tape content externally, through file input/output functions. But then you wouldn't be asking whether C is Turing-complete, but whether C plus an infinite storage system is Turing-complete, to which the answer is a boring “yes”. You might as well define the storage to be a Turing oracle — call fopen("oracle", "r+") , fwrite the initial tape content to it and fread back the final tape content.) | {
"source": [
"https://cs.stackexchange.com/questions/60979",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/27067/"
]
} |
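For the question above, one concrete way to find the best sum and also recover the subset, with negative numbers allowed and no shifting trick, is a dictionary-based dynamic program over all reachable sums. This is a sketch; it is pseudo-polynomial (the dictionary is bounded by the range of achievable sums), not polynomial, which is all one can expect for an NP-hard problem.

```python
def best_subset_at_most(nums, w):
    """Largest achievable subset sum <= w, plus one subset achieving it."""
    reachable = {0: []}                  # sum -> a subset reaching that sum
    for x in nums:
        updates = {}
        for s, subset in reachable.items():
            t = s + x
            if t not in reachable and t not in updates:
                updates[t] = subset + [x]
        reachable.update(updates)
    candidates = [s for s in reachable if s <= w]
    if not candidates:
        return None, None
    best = max(candidates)
    return best, reachable[best]

print(best_subset_at_most([1, 3, 5, 9, 10], 17))   # (17, [3, 5, 9])
print(best_subset_at_most([-3, 2, 4], 3))          # (3, [-3, 2, 4])
```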
61,022 | I am trying to write a script that generates random graphs and I need to know if an edge in a weighted graph can have the 0 value. actually it makes sense that 0 could be used as an edge's weight, but I've been working with graphs in last few days and I have never seen an example of it. | Allowed by whom ? There is no Central Graph Administration that decides what you can and cannot do. You can define objects in any way that's convenient for you, as long as you're clear about what the definition is. If zero-weighted edges are useful to you, then use them; just make sure your readers know that's what you're doing. The reason you don't usually see zero-weight edges is that, in most contexts, an edge with weight zero is exactly equivalent to the absence of an edge. For example, if your graph represents countries and the amount of trade done between them, a zero-weight edge would mean no trade, which is the same as having no edge at all. If your graph represents distances, a zero-weight edge would correspond to two places at distance zero from each other, which would mean they'd actually be the same place, so should both be represented by the same vertex. However, in other contexts, zero-weight edges could make sense. For example, if your graph represents a road network and edge weights represent the amount of traffic, there's a big difference between a road that nobody uses (zero-weight edge) and no road at all (no edge). | {
"source": [
"https://cs.stackexchange.com/questions/61022",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/55377/"
]
} |
62,275 | If we have any arbitrary computer program that can modify its instructions, is it possible to simulate that program with a program that cannot modify its instructions? Edit: I am new to Stack Exchange, so I'm not sure if I'm allowed to ask a new question here, but here goes:
OK, so the proof that it is possible is actually really simple, as you have shown.
Now, I am wondering: Are there problems for which it is more efficient (and to what extent) to use the most efficient self-modifying algorithm to solve the problem, versus the input-output-equivalent most efficient non-selfmodifying algorithm? | Yes, it's possible. You can simulate the program by using an interpreter for the language it's written in. Now, the program (the interpreter) is fixed and the thing that used to be a self-modifying program is now the interpreter's data. In particular, you could perfectly well have a universal Turing machine that allowed the TM it's simulating to modify its own description. (The description of the simulated machine, I mean; not the UTM.) | {
"source": [
"https://cs.stackexchange.com/questions/62275",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/56687/"
]
} |
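To make the point of the answer above concrete, the interpreter stays fixed while the "self-modifying program" is merely its data. Here is a toy interpreter for a made-up instruction set whose programs may overwrite their own instructions:

```python
def run(program):
    """Interpret a mutable list of instructions:
    ("print", v), ("store", i, instr), ("jump", i), ("halt",)."""
    pc = 0
    while True:
        op, *args = program[pc]
        if op == "halt":
            return
        if op == "print":
            print(args[0])
            pc += 1
        elif op == "store":                 # self-modification: rewrite an instruction
            program[args[0]] = args[1]
            pc += 1
        elif op == "jump":
            pc = args[0]

prog = [
    ("print", "first pass"),
    ("store", 0, ("print", "second pass")),  # rewrite instruction 0
    ("store", 1, ("halt",)),                 # rewrite instruction 1 so we stop
    ("jump", 0),
]
run(prog)   # prints "first pass", then "second pass", then halts
```

Nothing in run ever changes; only the list it is handed does.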
62,679 | In the paper "A Conflict-Free Replicated JSON Datatype" , I encountered this notation for formally defining "rules": What is this notation called? How do I read it? For example: the DOC rule doesn't have anything in its "numerator" — why not? the EXEC and GET rules appear to have two separate terms above the line, what does that mean? the VAR rule stands out a bit as well, since while many other rules use some sort of arrow (which I would take to mean "implies") up top this one only seems to be saying that x is an element of something. almost everything is peppered with an initial Ap, which the text describes as "the state of replica p is described by Ap, a finite partial function" — how would a savvy reader of this notation tend to "see" that part of every rule? This site did suggest a related question that has some very similar-looking notation, over on the question What is the significance of ⟨B, s⟩ -> ⟨B', s'⟩ as the initial rule in this question about small-step semantics? — this is tagged as Operational semantics , and that does seem to be a strong lead. Is that indeed the framework under which I should be interpreting these figures? Could you easily summarize this in "crash course" form so that, even if I can't verify the correctness of their proofs, I could at least get a bit more understanding of what they are saying in this section? | This is a standard notation for an inference rule . The premises are put above a horizontal line, and the conclusion is put below the line. Thus, it ends up looking like a "fraction", but with one or more logical propositions above the line and a single proposition below the line. If you see a label (e.g., "LET" or "VAR" in your example) next to it, that's just a human-readable name to identify the particular rule. You might also see this referred to as natural deduction or Gentzen-style natural deduction . This is a common notation in the programming languages literature. You'll see it all over the place. It's very convenient for the kinds of conclusions and recursive-structured proofs that arise in that field. You'll see this notation used to express axioms/rules. You can think of each axiom as a template with "meta-variables" (e.g., expr ); you can replace each meta-variable with some syntax from the programming language (e.g., any expression that is valid in the programming language), and you'll get an instance of the rule. The inference rule promises that if all of the propositions are above the line are true (for some instance of the template, where you consistently replace each metavariable with the same value throughout the rule), then the proposition below the line will be true, too. | {
"source": [
"https://cs.stackexchange.com/questions/62679",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/57205/"
]
} |
62,985 | We have many problems, like factorization, that are strongly conjectured, but not proven, to be outside P. Are there any questions with the opposite property, namely, that they are strongly conjectured but not proven to be inside P? | Two decades ago, one of the plausible answers would be primality testing : there were algorithms that ran in randomized polynomial time, and algorithms that ran in deterministic polynomial time under a plausible number-theoretic conjecture, but no known deterministic polynomial-time algorithms. In 2002, that changed with a breakthrough result by Agrawal, Kayal, and Saxena that primality testing is in P. So, we can no longer use that example. I would put polynomial identity testing as an example of a problem that has a good chance of being in P, but where no one has been able to prove it. We know of randomized polynomial-time algorithms for polynomial identity testing, but no deterministic algorithms. However, there are plausible reasons to believe that the randomized algorithms can be derandomized. For instance, in cryptography it is strongly believed that highly secure pseudorandom generators exist (e.g., AES-CTR is one reasonable candidate). And if that is true, then polynomial identity testing should be in P. (For instance, use a fixed seed, apply the pseudorandom generator, and use its output in lieu of random bits; it would take a tremendous conspiracy for this to fail.) This can be made formal using the random oracle model; if we have hash functions that can be suitably modelled by the random oracle model, then it follows that there is a deterministic polynomial-time algorithm for polynomial identity testing. For more elaboration of this argument, see also my answer on a related subject and my comments on a related question . | {
"source": [
"https://cs.stackexchange.com/questions/62985",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/12550/"
]
} |
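A sketch of the kind of randomized polynomial identity test the answer refers to: evaluate both sides at random points modulo a large prime and compare. By the Schwartz-Zippel lemma, two distinct low-degree polynomials rarely agree on a random point, so repeated agreement is strong evidence of identity. Treating the polynomials as black-box Python callables is just an interface chosen here for illustration.

```python
import random

P = (1 << 61) - 1        # a large prime modulus (2**61 - 1 is a Mersenne prime)

def probably_identical(f, g, num_vars, trials=20):
    """False if f and g provably differ; True if they agreed on every trial."""
    for _ in range(trials):
        xs = [random.randrange(P) for _ in range(num_vars)]
        if f(*xs) % P != g(*xs) % P:
            return False
    return True

f = lambda x, y: (x + y) ** 2
g = lambda x, y: x * x + 2 * x * y + y * y
print(probably_identical(f, g, 2))                            # True (they are identical)
print(probably_identical(f, lambda x, y: x * x + y * y, 2))   # almost surely False
```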
63,018 | I've found several open source visual programing tools like Blockly and friends, and other projects hosted at Github, but could't find any that work directly with the abstract syntax tree. Why is that? I'm asking because once I discovered that every compiler out there has a phase in the compilation process where it parses the source code to an AST, it was obvious to me that some visual programing tools could take advantage of this to give the programer ways to edit the AST directly in a visual way, and also to do the round trip from source to node-graph and then back again to source when needed. For instance one could think that from the JavaScript AST Visualizer to an actual JavaSript visual programming tool there isn’t too much of a difference. So, what am I missing? | Many of these tools do work directly with the abstract syntax tree (or rather, a direct one-to-one visualisation of it). That includes Blockly, which you've seen, and the other block-based languages and editors like it ( Scratch , Pencil Code / Droplet , Snap! , GP , Tiled Grace , and so on). Those systems don't show a traditional vertex-and-edge graph representation, for reasons explained elsewhere (space, and also interaction difficulty), but they are directly representing a tree. One node, or block, is a child of another if it is directly, physically inside the parent. I built one of these systems ( Tiled Grace , paper , paper ). I can assure you, it is very much working with the AST directly: what you see on the screen is an exact representation of the syntax tree, as nested DOM elements (so, a tree!). This is the AST of some code. The root is a method call node "for ... do". That node has some children, starting with "_ .. _", which itself has two children, a "1" node and a "10" node. What comes up on screen is exactly what the compiler backend spits out in the middle of the process - that's fundamentally how the system works. If you like, you can think of it as a standard tree layout with the edges pointing out of the screen towards you (and occluded by the block in front of them), but nesting is as valid a way of showing a tree as a vertex diagram. It will also "do the round trip from source to node-graph and then back again to source when needed". In fact, you can see that happen when you click "Code View" at the bottom. If you modify the text, it'll be re-parsed and the resulting tree rendered for you to edit again, and if you modify the blocks, the same thing happens with the source. Pencil Code does essentially the same thing with, at this point, a better interface . The blocks it uses are a graphical view of the CoffeeScript AST. So do the other block- or tile-based systems, by and large, although some of them don't make the nesting aspect quite as clear in the visual representation, and many don't have an actual textual language behind them so the "syntax tree" can be a bit illusive, but the principle is there. What you're missing, then, is that these systems really are working directly with the abstract syntax tree. What you see and manipulate is a space-efficient rendering of a tree, in many cases literally the AST a compiler or parser produces. | {
"source": [
"https://cs.stackexchange.com/questions/63018",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/5282/"
]
} |
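You can look at the kind of tree those block editors render by dumping an AST directly. For instance, with Python's built-in ast module (the indent argument needs Python 3.9 or later):

```python
import ast

tree = ast.parse("for i in range(1, 10):\n    total = total + i")
print(ast.dump(tree, indent=2))
# Module -> For(target=Name 'i', iter=Call 'range', body=[Assign ...])
```

Each nested node in that dump is what a block editor would draw as a block physically inside its parent block.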
63,037 | So I'm doing exercises from Dasgupta's Algorithms. The exercise I'm having trouble with is: Show that, if $c$ is a positive real number, then $g(n) = 1 + c + c^2 + \cdots + c^n$ is: $\Theta(1)$ if $c<1$, $\Theta(n)$ if $c=1$, $\Theta(c^n)$ if $c>1$. (I don't know if this is a hint, but it is included in the text:
"The moral: in big- $\Theta$ terms, the sum of a geometric series is simply the first term if the series is strictly decreasing, the last term if the series is strictly indreasing or the number of terms if the series in unchanging") The only one that makes sense for me is 2) where $1+1^2+..+1^n$ is the same as $n+1$ , and removing the 1 gives $O(n)$ . I dont know if my reasoning makes sense, but thats all i've got. I have no idea where to start or how to think on the other two. Any suggestions? | Many of these tools do work directly with the abstract syntax tree (or rather, a direct one-to-one visualisation of it). That includes Blockly, which you've seen, and the other block-based languages and editors like it ( Scratch , Pencil Code / Droplet , Snap! , GP , Tiled Grace , and so on). Those systems don't show a traditional vertex-and-edge graph representation, for reasons explained elsewhere (space, and also interaction difficulty), but they are directly representing a tree. One node, or block, is a child of another if it is directly, physically inside the parent. I built one of these systems ( Tiled Grace , paper , paper ). I can assure you, it is very much working with the AST directly: what you see on the screen is an exact representation of the syntax tree, as nested DOM elements (so, a tree!). This is the AST of some code. The root is a method call node "for ... do". That node has some children, starting with "_ .. _", which itself has two children, a "1" node and a "10" node. What comes up on screen is exactly what the compiler backend spits out in the middle of the process - that's fundamentally how the system works. If you like, you can think of it as a standard tree layout with the edges pointing out of the screen towards you (and occluded by the block in front of them), but nesting is as valid a way of showing a tree as a vertex diagram. It will also "do the round trip from source to node-graph and then back again to source when needed". In fact, you can see that happen when you click "Code View" at the bottom. If you modify the text, it'll be re-parsed and the resulting tree rendered for you to edit again, and if you modify the blocks, the same thing happens with the source. Pencil Code does essentially the same thing with, at this point, a better interface . The blocks it uses are a graphical view of the CoffeeScript AST. So do the other block- or tile-based systems, by and large, although some of them don't make the nesting aspect quite as clear in the visual representation, and many don't have an actual textual language behind them so the "syntax tree" can be a bit illusive, but the principle is there. What you're missing, then, is that these systems really are working directly with the abstract syntax tree. What you see and manipulate is a space-efficient rendering of a tree, in many cases literally the AST a compiler or parser produces. | {
"source": [
"https://cs.stackexchange.com/questions/63037",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/57682/"
]
} |
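For the question above, the standard route is the closed form of a finite geometric series; a worked sketch of the three cases:

```latex
% Closed form, valid for c \neq 1:
%   g(n) = \sum_{k=0}^{n} c^k = \frac{c^{n+1} - 1}{c - 1}
%
% c < 1:  1 \le g(n) = \frac{1 - c^{n+1}}{1 - c} \le \frac{1}{1 - c}        \Rightarrow g(n) = \Theta(1)
% c = 1:  g(n) = n + 1                                                      \Rightarrow g(n) = \Theta(n)
% c > 1:  c^n \le g(n) = \frac{c^{n+1} - 1}{c - 1} \le \frac{c}{c - 1}\,c^n \Rightarrow g(n) = \Theta(c^n)
```

In each case the sum is sandwiched between two constant multiples of the claimed bound, which matches the "moral" quoted in the question: first term, number of terms, or last term.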
63,048 | I am learning about radix trees (aka compressed tries) and Patricia tries, but I am finding conflicting information on whether or not they are actually the same. A radix tree can be obtained from a normal (uncompressed) trie by merging nodes with their parents when the nodes are the only child. This also holds for Patricia tries. In what ways are the two data structures different? For example, NIST lists the two as the same: Patricia tree (data structure) Definition: A compact representation of a trie in which any node that
is an only child is merged with its parent. Also known as radix tree. Many sources on the web claim the same. However, apparently Patricia tries are a special case of radix trees. Wikipedia entry says: PATRICIA tries are radix tries with radix equals 2, which means that
each bit of the key is compared individually and each node is a
two-way (i.e., left versus right) branch. I don't really understand this. Is the difference only in the way comparisons are made when doing look-ups? How can each node be a "two-way branch"? Shouldn't there be at most ALPHABET_SIZE possible branches for a given node? Can someone clarify this? For practical purposes, are radix tries typically implemented as Patricia tries (and, hence, often considered the same)? Or can no such generalizations be made? | I found this post very helpful. To see the difference between Patricia tries and radix trees, it is important to understand: The notion of radix , since Patricia tries are radix trees with radix equal to 2. The way keys are treated: as streams of bits . Keys are compared $r$ bits at a time, where $2^r$ is the radix of the trie. Suppose that we insert the keys smile , smiled , and smiles (in this order) in a Patricia trie. The binary representation of these keys is the following: Note that smile is a prefix of smiled , and, analyzing the binary representation, we can see that the first bit that differs (from left to right) is 0 (highlighted in red in the second row); for this reason, smiled will be the left child of smile . Similarly, smiles will be the right child of smiled because they share the same prefix up to a bit whose value is 1 (highlighted in red in the third row). The resulting Patricia trie after inserting the three keys is the following: If the radix was, for example, 4, then internal nodes could have, at most, four children (with their edges labeled 00, 01, 10, and 11, respectively). In this case, keys would be compared by chunks of 2 bits, and not 1 (as in Patricia tries). In what ways are the two data structures different? To my understanding, the only difference is the radix, which is equal to 2 in the case of Patricia tries. This value can be any power of 2 in regular radix trees. Is the difference only in the way comparisons are made when doing look-ups? In both data structures, the comparison operation is bitwise. However, the number of bits that are checked atomically varies according to the radix. In the case of Patricia tries, bits are compared individually (since radix = 2). This is not necessarily the case in radix trees. In general, bits are checked in chunks of size $\log_2{R}$, where $R$ is the radix of the trie. How can each node be a "two-way branch"? Shouldn't there be at most ALPHABET_SIZE possible branches for a given node? The radix establishes the maximum number of children that the nodes of a radix tree can have. For example, when radix = 2, each node can have at most two children. This is the case of Patricia tries (also known as binary radix trees). Are radix tries typically implemented as Patricia tries (and, hence, often considered the same)? Or can no such generalizations be made? To be honest, I do not have an answer for this question. It seems that both data structures were proposed around the same time by different authors. For historical reasons that I am unaware of, both terms still live today. | {
"source": [
"https://cs.stackexchange.com/questions/63048",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/57714/"
]
} |
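A tiny helper illustrating the radix-2 comparison described above: keys are treated as bit streams and a Patricia trie branches at the first bit where they differ.

```python
def first_differing_bit(a: bytes, b: bytes) -> int:
    """0-based index, counted from the leftmost bit, where a and b first differ.
    If one key is a prefix of the other, they diverge where the shorter one ends."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            diff = x ^ y
            return i * 8 + (8 - diff.bit_length())
    return 8 * min(len(a), len(b))

print(first_differing_bit(b"smile", b"smiled"))    # 40: "smiled" keeps going after "smile"
print(first_differing_bit(b"smiled", b"smiles"))   # 43: 'd' = 01100100, 's' = 01110011
```

A radix-4 trie would instead branch on 2-bit chunks, so the branch point would be this bit index divided by 2, taken at the chunk level.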
63,203 | Wikipedia as well as other sources that I have found list C's void type as a unit type as opposed to an empty type. I find this confusing as it seems to me that void better fits the definition of an empty/bottom type. No values inhabit void , as far as I can tell. A function with a return type of void specifies that the function does not return anything and thus can only perform some side-effect. A pointer of type void* is a subtype of all other pointer types. Also, conversions to and from void* in C are implicit. I am not sure if the last point has any merit as an argument for void being an empty type, as void* is more or less a special case with not much relation to void . On the other hand, void itself is not a subtype of all other types, which as far as I can tell is a requirement for a type to be a bottom type. | In C, void is used for multiple unrelated things. Depending on what it's used for, its meaning may be a unit type, an empty type, or something else. When void is used by itself (as opposed to void* , a pointer to void), it's a unit type, i.e. a type with a single value. Functions that return void are said to “return nothing”, but what this really means is that they don't return any information. They return $0$ bits of information, which means that they return a value of a type that contains $2^0 = 1$ distinct values, i.e. a unit type. This is not an empty type: a function that returns an empty type cannot return a value, since there is no value of that type. A function whose return type is empty can only loop forever, or abort the program, or raise an exception ( longjmp ) (or otherwise arrange not to return, e.g. by transferring control to another thread or process using functionality beyond standard C). To keep things confusing, it is conventional in C to use void in lieu of an empty type (C doesn't have an empty type). The void type requires $0$ bits of storage. Because C insists on every object occupying a whole, nonzero number of bytes of storage, it's forbidden to create an object of type void , and there's a special syntax to return the void value (a return statement with the value omitted). There's no syntax that yields the value of type void , but that value is there whenever a function whose return type is void returns. C does not have a bottom type in the sense of allowing any possible type. Even incomplete types specify the general nature of its values, e.g. pointers or structs or unions or functions. But void* is a pointer to any non-function type: it's the least element of the algebra of object pointer types, i.e. it's the bottom object pointer type. Unlike the general case of T * where T is some non-void type, void* is not the type of pointers to a value of type void , but the type of pointers to a value of unspecified type. | {
"source": [
"https://cs.stackexchange.com/questions/63203",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/57964/"
]
} |
63,205 | Church-Turing thesis states that any effectively computable process is computable by a TM. Let's assume for now that it means that every physical machine is computable by a TM. Let's call it A. Now if this process has an unbounded number of states (for example we could think of an engine in a car and take the geographic coordinates of the car as its states) we could say that A is also Turing complete. And therefore Turing equivalent. Edit : Let's try to see more precisely how A could be considered as Turing complete even if the physical is continuous. Let's for example consider a car moved by an engine. And let's consider a planet where towns' names are built with the classic alphabet A, B, C etc. following rules that make the number of towns on that planet potentially infinite or unbounded. Now let's imagine that my vehicle is programmed by an algorithm to drive from town to town indefinitely. Let's call these towns the states of my system . Now I am certain (but please correct me if I am wrong) that my physical system is indeed Turing complete - and not just a push down automaton. Now if you accept that any state an observer (typically a physicist) will use to describe a physical system can be coded in a finite alphabet i.e. an integer (even if the physical world is continuous) then surely we could use the vehicle experience to say that in human eyes the physical world is Turing complete. Is that correct? | In C, void is used for multiple unrelated things. Depending on what it's used for, its meaning may be a unit type, an empty type, or something else. When void is used by itself (as opposed to void* , a pointer to void), it's a unit type, i.e. a type with a single value. Functions that return void are said to “return nothing”, but what this really means is that they don't return any information. They return $0$ bits of information, which means that they return a value of a type that contains $2^0 = 1$ distinct values, i.e. a unit type. This is not an empty type: a function that returns an empty type cannot return a value, since there is no value of that type. A function whose return type is empty can only loop forever, or abort the program, or raise an exception ( longjmp ) (or otherwise arrange not to return, e.g. by transferring control to another thread or process using functionality beyond standard C). To keep things confusing, it is conventional in C to use void in lieu of an empty type (C doesn't have an empty type). The void type requires $0$ bits of storage. Because C insists on every object occupying a whole, nonzero number of bytes of storage, it's forbidden to create an object of type void , and there's a special syntax to return the void value (a return statement with the value omitted). There's no syntax that yields the value of type void , but that value is there whenever a function whose return type is void returns. C does not have a bottom type in the sense of allowing any possible type. Even incomplete types specify the general nature of its values, e.g. pointers or structs or unions or functions. But void* is a pointer to any non-function type: it's the least element of the algebra of object pointer types, i.e. it's the bottom object pointer type. Unlike the general case of T * where T is some non-void type, void* is not the type of pointers to a value of type void , but the type of pointers to a value of unspecified type. | {
"source": [
"https://cs.stackexchange.com/questions/63205",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/56696/"
]
} |
63,236 | I learnt that multi core processors have more than one processing units( i.e. the main executing units ALU etc.) and they are better at performance. I want to know how they share Physical memory. I'll take following example to make my question clearer - Say, There is a memory location M in physical memory and Two threads T1 and T2 running on different cores. Is it possible for T1 and T2 to access M at the same instance of time or do they have to wait for one other to complete access i.e. do they share the same memory bus so that they have to wait, for one another or Can they read M at same instance of time from two different memory buses? If former is the case, There is not much performance gain right, as they have to wait for memory bus to be free? Summarising, Are memory operations independent of other cores or each core can only make a physical memory access when memory bus is free? | In C, void is used for multiple unrelated things. Depending on what it's used for, its meaning may be a unit type, an empty type, or something else. When void is used by itself (as opposed to void* , a pointer to void), it's a unit type, i.e. a type with a single value. Functions that return void are said to “return nothing”, but what this really means is that they don't return any information. They return $0$ bits of information, which means that they return a value of a type that contains $2^0 = 1$ distinct values, i.e. a unit type. This is not an empty type: a function that returns an empty type cannot return a value, since there is no value of that type. A function whose return type is empty can only loop forever, or abort the program, or raise an exception ( longjmp ) (or otherwise arrange not to return, e.g. by transferring control to another thread or process using functionality beyond standard C). To keep things confusing, it is conventional in C to use void in lieu of an empty type (C doesn't have an empty type). The void type requires $0$ bits of storage. Because C insists on every object occupying a whole, nonzero number of bytes of storage, it's forbidden to create an object of type void , and there's a special syntax to return the void value (a return statement with the value omitted). There's no syntax that yields the value of type void , but that value is there whenever a function whose return type is void returns. C does not have a bottom type in the sense of allowing any possible type. Even incomplete types specify the general nature of its values, e.g. pointers or structs or unions or functions. But void* is a pointer to any non-function type: it's the least element of the algebra of object pointer types, i.e. it's the bottom object pointer type. Unlike the general case of T * where T is some non-void type, void* is not the type of pointers to a value of type void , but the type of pointers to a value of unspecified type. | {
"source": [
"https://cs.stackexchange.com/questions/63236",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/58007/"
]
} |
63,403 | It's fairly simple to understand why the halting problem is undecidable for impure programs (i.e., ones that have I/O and/or states dependent on the machine-global state); but intuitively, it seems that a pure program's halting on an ideal computer would be decidable through e.g. static analysis. Is this in fact the case? If not, what are some counterexamples or papers disproving this claim? | Here is a proof of undecidability by reduction from the Halting problem. Reduction: Given a machine $M$ and an input $x$, build a new Turing Machine $H$ which does not read any input, but writes $M$ and $x$ on the tape and simulates $M$ on $x$ until $M$ halts. The behaviour of this new machine $H$ is independent of the input tape, so it is a pure Turing Machine on which only static analysis is applicable. If static analysis were sufficient, then it could show whether $H$ halts, which would show whether $M$ halts on $x$, which would solve the halting problem for impure machines, which we know is undecidable, and therefore your problem is undecidable too. | {
"source": [
"https://cs.stackexchange.com/questions/63403",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/7814/"
]
} |
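The reduction in the answer above is literally "bake the input into the program"; in code it is one line. Everything here is hypothetical scaffolding around that idea, and analyze stands for the static analyzer being shown not to exist.

```python
def make_input_free(program, x):
    """Given machine M and input x, build a 'pure' program H that ignores
    its own input and simply runs M on the baked-in x."""
    return lambda _ignored_input: program(x)

# If analyze(H) could decide whether an input-free program halts, then
#   halts(M, x) := analyze(make_input_free(M, x))
# would decide the ordinary halting problem, which is impossible.
```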
63,931 | I'm fairly new to heaps and am trying to wrap my head around why min and max heaps are represented as trees when a sorted array appears to both provide min / max properties by default. And a follow up: what is the advantage of dealing with the complexity of inserting into a heap given an algorithm like quick sort handles sorting very well? Context: I'm working through CLRS / MIT 6.006 in python and have only seen integer representations of leaf values. Is this more applicable in a language like C where each leaf contains a struct that can't easily be sorted? | $\small \texttt{find-min}$ (resp. $\small \texttt{find-max}$), $\small \texttt{delete-min}$ (resp. $\small \texttt{delete-max}$) and $\small \texttt{insert}$ are the three most important operations of a min-heap (resp. max-heap), and they usually have complexity of $\small \mathcal{O}(1)$, $\small \mathcal{O}(\log n)$ and $\small \mathcal{O}(\log n)$ respectively if you implement a min/max-heap by a binary tree. Now suppose instead you implement a min-heap by a sorted (non-decreasing) array (The case for max-heap is similar). $\small \texttt{find-min}$ and $\small \texttt{delete-min}$ are of $\small \mathcal{O}(1)$ complexity if $\small \texttt{insert}$ is not required in your application, since you can maintain a pointer $\small p$ that always points to the minimum element in your array. When the minimum element is removed, you just need to move $\small p$ one step to the next element in the array. Dealing with insertion in a sorted array is not trivial. Given a new element $\small e$, we can use binary search to locate its position in the array to insert it. But the point is that if you want to insert it there, you have to move a lot of old elements (can be $\small \mathcal{O}(n)$) around to make a vacancy for the new element to reside. This is quite inefficient for most applications. You may also choose to re-sort the array after an element is inserted, this requires $\small \mathcal{O}(n\log n)$ time however. The last point, how you implement a data structure really depends on your application. NO single implementation is best for all cases. Analyze your application, find out the most frequent operations, and then decide the appropriate implementation. | {
"source": [
"https://cs.stackexchange.com/questions/63931",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/58940/"
]
} |
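A quick sketch of the trade-off described above, putting Python's array-backed binary heap (heapq) next to a sorted list maintained with bisect.insort: both expose the minimum instantly, but the heap inserts in O(log n) while the sorted list must shift elements, which is O(n) per insert.

```python
import heapq
import bisect

heap, sorted_list = [], []
for x in [7, 2, 9, 4, 1]:
    heapq.heappush(heap, x)         # O(log n): bubble up inside the array
    bisect.insort(sorted_list, x)   # O(log n) search + O(n) element shifting

print(heap[0], sorted_list[0])      # both report the minimum, 1
print(heapq.heappop(heap))          # delete-min in O(log n)
```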
63,961 | I came across an odd problem when writing an interpreter that (should) hooks to external programs/functions: Functions in 'C' and 'C++' can't hook variadic functions , e.g. I can't make a function that calls 'printf' with the exact same arguments that it got, and instead has to call an alternate version that take a variadic object. This is very problematic since I want to be able to make an object that hold an anonymous hook. So, I thought that this was weird since Forth , JavaScript , and perhaps a plethora of other languages can do this very easily without having to resort to assembly language/machine code. Since other languages can do this so easily, does that mean that the class of problems that each programming language can solve actually varies by language, even though these languages are all Turing complete ? | Turing complete languages can compute the same set of functions $\mathbb{N}^k \rightarrow \mathbb{N}$, which is the set of general recursive partial functions. That's it. This says nothing about the language features. A Turing Machine has very limited compositional features. The untyped $\lambda$-calculus is far more compositional, but lacks many features commonly found in modern languages. Turing completeness tells nothing about having types, built in arrays/integers/dictionaries, input/output capabilities, network access, multithreading, dynamic allocation, ... Just because Java does not have feature X (say, macros, higher-rank types, or dependent types), it does not suddenly stop being Turing complete. Turing completeness and language expressiveness are two different notions. | {
"source": [
"https://cs.stackexchange.com/questions/63961",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/37673/"
]
} |
64,578 | When a computer stores a variable and a program later needs to get the variable's value, how does the computer know where to look in memory for that variable's value? | I'd suggest you look into the wonderful world of Compiler Construction! The answer is that it's a bit of a complicated process. To try to give you an intuition, remember that variable names are purely there for the programmer's sake. The computer will ultimately turn everything into addresses at the end. Local variables are (generally) stored on the stack: that is, they're part of the data structure that represents a function call. We can determine the complete list of variables that a function will (maybe) use by looking at that function, so the compiler can see how many variables it needs for this function and how much space each variable takes. There's a little bit of magic called the stack pointer, which is a register that always stores the address of where the current stack frame starts. Each variable is given a "stack offset", which is where in the stack it's stored. Then, when the program needs to access a variable x, the compiler replaces x with STACK_POINTER + x_offset, to get the actual physical place it's stored in memory. Note that this is why you get a pointer back when you use malloc or new in C or C++. You can't determine where exactly in memory a heap-allocated value is, so you have to keep a pointer to it. That pointer will be on the stack, but it will point to the heap. The details of updating stacks for function calls and returns are complicated, so I'd recommend The Dragon Book or The Tiger Book if you're interested. | {
"source": [
"https://cs.stackexchange.com/questions/64578",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/59774/"
]
} |
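A toy model of the "stack pointer plus offset" scheme described above. The frame layout, the 4-byte slots and the variable names are all made up for illustration; a real compiler fixes them at compile time.

```python
import struct

stack = bytearray(64)          # pretend this is the program's stack memory
frame_pointer = 16             # where the current frame starts
offsets = {"x": 0, "y": 4}     # chosen by the compiler, one 4-byte slot per local

def store(var, value):
    struct.pack_into("<i", stack, frame_pointer + offsets[var], value)

def load(var):
    return struct.unpack_from("<i", stack, frame_pointer + offsets[var])[0]

store("x", 42)
store("y", -7)
print(load("x"), load("y"))    # 42 -7
```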
64,732 | In Java, you must explicitly cast in order to downcast a variable: public class Fruit{} // parent class
public class Apple extends Fruit{} // child class
public static void main(String args[]) {
// An implicit upcast
Fruit parent = new Apple();
// An explicit downcast to Apple
Apple child = (Apple)parent;
} Is there any reason for this requirement, aside from the fact that java doesn't do any type inference? Are there any "gotchas" with implementing automatic downcasting in a new language? For instance: Apple child = parent; // no cast required | Upcasts always succeed. Downcasts can result in a runtime error, when the object runtime type is not a subtype of the type used in the cast. Since the second is a dangerous operation, most typed programming languages require the programmer to explicitly ask for it. Essentially, the programmer is telling the compiler "trust me, I know better -- this will be OK at runtime". When type systems are concerned, upcasts put the burden of the proof on the compiler (which has to check it statically), downcasts put the burden of the proof on the programmer (which has to think hard about it). One could argue that a properly designed programming language would forbid downcasts completely, or provide safe casts alternatives, e.g. returning an optional type Option<T> . Many widespread languages, though, chose the simpler and more pragmatic approach of simply returning T and raising an error otherwise. In your specific example, the compiler could have been designed to deduce that parent is actually an Apple through a simple static analysis, and allow the implicit cast. However, in general the problem is undecidable, so we can't expect the compiler to perform too much magic. | {
"source": [
"https://cs.stackexchange.com/questions/64732",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/23549/"
]
} |
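Python has no static casts, but a small sketch of the "safe cast that returns an optional" alternative mentioned in the answer above might look like this; the helper name try_downcast is made up for illustration.
from typing import Optional, Type, TypeVar

T = TypeVar("T")

class Fruit: ...
class Apple(Fruit): ...

def try_downcast(obj: object, cls: Type[T]) -> Optional[T]:
    # Succeeds only when the runtime type really is a subtype;
    # otherwise returns None instead of failing at the cast site.
    return obj if isinstance(obj, cls) else None

parent: Fruit = Apple()                 # implicit upcast: always safe
print(try_downcast(parent, Apple))      # an Apple instance
print(try_downcast(Fruit(), Apple))     # None: this downcast would have failed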
64,742 | Let's take for example this addition: 3 + (-1) . 1 in binary is 001, and to obtain it's 1's complement counterpart we
flip the bits. So it is: 110 . 3 in binary is 011. 011 + 110 = 1 001 That first 1 which is in bold has to be added to the number formed by the last 3 bits as follows: 001 + 1 = 010 ( 2 in decimal). Why do we do the last step, adding that outer carry? Which is the logic behind? | | {
"source": [
"https://cs.stackexchange.com/questions/64742",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/59544/"
]
} |
65,401 | I'm having trouble understanding the proof of the undecidability of the Halting Problem. If $H(a,b)$ returns whether or not the program $a$ halts on input $b$, why do we have to pass the code of $P$ for both $a$ and $b$? Why can't we feed $H()$ with $P$ and some arbitrary input, say, $x$? | The proof aims to find a contradiction. You have to understand what the contradiction derived is, in order to understand why $P$ is used as an input to itself. The contradiction is, informally: if we have a machine H(a, b) that decides "a accepts b", then we can construct a machine that accepts machines that do not accept themselves. (Read that a few times until you get it.) The machine shown in the picture – let's call it $M$ – $M(P) = $ does $P$ not accept $\langle P \rangle$? The contradiction happens when you ask: does $M$ accept $\langle M \rangle$? Try to work out the two options to see how there is a contradiction. $M$ accepts $\langle M \rangle$ if and only if $M$ does not accept $\langle M \rangle$; this is clearly a contradiction. This is why it is essential for the proof to run $P$ on itself not some arbitrary input. This is a common theme in impossibility proofs known as diagonal arguments. | {
"source": [
"https://cs.stackexchange.com/questions/65401",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/60730/"
]
} |
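A Python-flavoured sketch of the same diagonal construction, assuming a hypothetical decider accepts(program_source, input_string) existed; no such total, always-correct function can exist, which is exactly the point of the proof.
def accepts(program_source: str, input_string: str) -> bool:
    # Hypothetical oracle: True iff the given program accepts the given input.
    raise NotImplementedError

def M(program_source: str) -> bool:
    # M accepts exactly the programs that do NOT accept their own source.
    return not accepts(program_source, program_source)

# Feeding M its own source yields the contradiction described above:
# M(<M>) is True  iff  accepts(<M>, <M>) is False  iff  M(<M>) is False.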
65,708 | I have an NP-complete decision problem. Given an instance of the problem, I would like to design an algorithm that outputs YES, if the problem is feasible, and, NO, otherwise. (Of course, if the algorithm is not optimal, it will make errors.) I cannot find any approximation algorithms for such problems. I was looking specifically for SAT and I found in Wikipedia page about Approximation Algorithm the following: Another limitation of the approach is that it applies only to optimization problems and not to "pure" decision problems like satisfiability, although it is often possible to ... Why we do not, for example, define the approximation ratio to be something proportional to the number of mistakes that the algorithm makes? How do we actually solve decision problems in greedy and sub-optimal manner? | Approximation algorithms are only for optimization problems, not for decision problems. Why don't we define the approximation ratio to be the fraction of mistakes an algorithm makes, when trying to solve some decision problem? Because "the approximation ratio" is a term with a well-defined, standard meaning, one that means something else, and it would be confusing to use the same term for two different things. OK, could we define some other ratio (let's call it something else -- e.g., "the det-ratio") that quantifies the number of mistakes an algorithm makes, for some decision problem? Well, it's not clear how to do that. What would be the denominator for that fraction? Or, to put it another way: there are going to be an infinite number of problem instances, and for some of them the algorithm will give the right answer and others it will give the wrong answer, so you end up with a ratio that is "something divided by infinity", and that ends up being meaningless or not defined. Alternatively, we could define $r_n$ to be the fraction of mistakes the algorithm mistakes, on problem instances of size $n$. Then, we could compute the limit of $r_n$ as $n \to \infty$, if such a limit exists. This would be well-defined (if the limit exists). However, in most cases, this might not be terribly useful. In particular, it implicitly assumes a uniform distribution on problem instances. However, in the real world, the actual distribution on problem instances may not be uniform -- it is often very far from uniform. Consequently, the number you get in this way is often not as useful as you might hope: it often gives a misleading impression of how good the algorithm is. To learn more about how people deal with intractability (NP-hardness), take a look at Dealing with intractability: NP-complete problems . | {
"source": [
"https://cs.stackexchange.com/questions/65708",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/22707/"
]
} |
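The $r_n$ idea from the answer above can at least be measured empirically for a concrete heuristic. Here is a small sketch; the decision problem (does a value occur in more than half the positions?) and the sampling heuristic are invented for illustration, and note it bakes in a particular instance distribution, which is precisely the caveat raised in the answer.
import random

def exact_has_majority(xs):
    return any(xs.count(v) > len(xs) // 2 for v in set(xs))

def sampled_has_majority(xs, trials=3):
    # Cheap randomized heuristic: test only a few random candidates.
    return any(xs.count(random.choice(xs)) > len(xs) // 2 for _ in range(trials))

def error_fraction(n, instances=2000):
    errors = 0
    for _ in range(instances):
        xs = [random.randint(0, 3) for _ in range(n)]
        if sampled_has_majority(xs) != exact_has_majority(xs):
            errors += 1
    return errors / instances      # an empirical stand-in for r_n

print(error_fraction(15))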
65,754 | Im having a hard time trying to figure out how to find an upper bound to the following recurrence: $T(N)=T(N-1)+\mathcal{O}(n)$ where i know initially $N=\lfloor\tfrac{n}{logn}\rfloor$ I believe it can be solved as a linear recurrence, but i don't know how to put $n$ in terms of $N$. | | {
"source": [
"https://cs.stackexchange.com/questions/65754",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/12923/"
]
} |
65,758 | There are $n$ stairs and a person standing at the bottom wants to reach the top. The person can climb either 1 stair or 2 stairs every time. What is the total number of ways they can reach the top? There are many ways to do this by using code. But when I first read this problem was to use a combinatorics approach. This problems seems very similar to a "stars and bars" problem. The stars would be the number of stairs and the bars would be the number of steps that be taken. Given this is the case, my solution is simply: ${(n)+(2-1)}\choose{1}$. Is the concept of stars and bars the simplest approach to solving this problem or is there a simpler combinatorics concept at play here and if so, how do I recognize such things? By simple, I mean can I solve this problem by simpler computation than combinatorics (even if it requires some sort of brute force method)? | | {
"source": [
"https://cs.stackexchange.com/questions/65758",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/-1/"
]
} |
66,746 | I agree that a Turing Machine can do "all possible mathematical problems". But that is because it is just a machine representation of an algorithm: first do this, then do that, finally output that. I mean anything that is solvable can be represented by an algorithm (because that is precisely the definition of 'solvable'). It is just a tautology. I said nothing new here. And by creating a machine representation of an algorithm, that it will also solve all possible problems is also nothing new. This is also mere tautology. So basically when it is said that a Turing Machine is the most powerful machine, what it effectively means is that the most powerful machine is the most powerful machine! Definition of "most powerful": That which can accept any language. Definition of "Algorithm": Process for doing anything.
Machine representation of "Algorithm": A machine that can do anything. Therefore it is only logical that the machine representation of an algorithm will be the most powerful machine. What's the new thing Alan Turing gave us? | I agree that a Turing Machine can do "all the possible mathematical problems". Well, you shouldn't, because it's not true. For example, Turing machines cannot determine if polynomials with integer coefficients have integer solutions ( Hilbert's tenth problem ). Is Turing Machine “by definition” the most powerful machine? No. We can dream up an infinite hierarchy of more powerful machines . However, the Turing machine is the most powerful machine that we know, at least in principle, how to build. That's not a definition, though: it is just that we do not have any clue how to build anything more powerful, or if it is even possible. What's the new thing Alan Turing gave us? A formal definition of algorithm. Without such a definition (e.g., the Turing machine), we have only informal definitions of algorithm, along the lines of "A finitely specified procedure for solving something." OK, great. But what individual steps are these procedures allowed to take? Are basic arithmetic operations steps? Is finding the gradient of a curve a step? Is finding roots of polynomials a step? Is finding integer roots of polynomials a step? Each of those seems about as natural. However, if you allow all of them, your "finitely specified procedures" are more powerful than Turing machines, which means that they can solve things that can't be solved by algorithms. If you allow all but the last one, you're still within the realms of Turing computation. If we didn't have a formal definition of algorithm, we wouldn't even be able to ask these questions. We wouldn't be able to discuss what algorithms can do, because we wouldn't know what an algorithm is . | {
"source": [
"https://cs.stackexchange.com/questions/66746",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/62301/"
]
} |
66,849 | The underlying question: What does lambda calculus do for us that we can't do with the basic function properties and notation generally learned in middle school algebra? First of all, what does abstract mean in the context of lambda calculus? My understanding of the word abstract is something that is divorced from the machinery, the conceptual summary of a concept. However, lambda functions, by doing away with function names, prevents a certain level of abstraction. For example: f(x) = x + 2
h(x, y) = x + 5 y But even without defining the machinery of these functions, we can easily talk about their composition. For example: 1. h(x, y) . f(x) . f(x) . h(x, y) or
2. h . f . f . h We can include the arguments if we want, or we can abstract away completely to give an overview of what's happening. And we can quickly reduce them to a single function. Let's look at composition 2. I can have student layers of detail I can write with depending on my emphasis: g = h . f . f . h
g(x, y) = h(x, y) . f(x) . f(x) . h(x, y)
g(x, y) = h . f . f . h = x + 10 y + 4 Let's perform the above with lambda calculus, or at least define the functions. I'm not sure this is right, but I believe the first and second expressions increment by 2. (λuv.u(u(uv)))(λwyx.y(wyx))x And to multiply by 5y. (λz.y(5z)) Rather than be abstract, this seems to get into the very machinery of what it means to add, multiply, etc. Abstraction, in my mind, means higher level rather than lower level. Furthermore, I am struggling to see why lambda calculus is even a thing. What is the advantage of (λuv.u(u(uv)))(λwyx.y(wyx))x over h(x) = x + 5 y or a combined notation Hxy.x+5y or even Haskell's notation h x y = x + 5 * y Again, what does lambda calculus do for us that we can't do with the f(x)-style function properties and notation many are familiar with. | There are many reasons why the lambda calculus is so important. A very important reason is the lambda calculus allows us to have a model of computation in which computable functions are first-class citizens. One cannot express higher-order functions in the language of middle school algebra. Take as example the lambda expression $$\lambda f. \lambda g. \lambda x. f(g(x))$$ This simple expression shows us that, within the lambda calculus, function composition is itself a function. In middle school algebra, this is not easily expressed. In the lambda calculus, it is very easy to express that a function will return a function as its result. Here is a small example. The expression (where I here assume an applied lambda calculus with addition and integer constants) $$(\lambda f. \lambda g. \lambda x. f(g(x)))(\lambda x. x+2)$$ will reduce to $$\lambda g. \lambda x. g(x)+2$$ Notice also that within the lambda calculus, functions are expressions and not definitions of the form $f(x) = e$. This frees us from the need to name functions and to distinguish between a syntactic category of expressions and a syntactic category of definitions. Also, when it becomes impossible (or just notationally cumbersome) to express higher-order functions, one will also have problems with assigning types to expressions. Function composition has the polymorphic type $$\forall \alpha.\forall \beta. \forall \gamma. (\beta \rightarrow \gamma) \rightarrow ((\alpha \rightarrow \beta) \rightarrow (\alpha \rightarrow \gamma))$$ in the Hindley-Milner type system. A very strong selling point for the lambda calculus is precisely the notion of typed lambda calculus . The various type systems for functional programming languages such as Haskell and the ML family are based on type systems for lambda calculi, and these type systems provide strong guarantees in the form of mathematical theorems: If a program $e$ is well-typed and $e$ reduces to the residual $e'$, then $e'$ will also be well-typed. And if $e$ is well-typed, then $e$ will not exhibit certain errors. The proofs as programs correspondence is particularly noteworthy. The Curry-Howard isomorphism (see e.g. https://www.rocq.inria.fr/semdoc/Presentations/20150217_PierreMariePedrot.pdf ) shows that there is a very precise correspondence between the simply typed lambda calculus and intuitionistic propositional logic: To every type $T$ corresponds a logical formula $\phi_T$. A proof of $\phi_T$ corresponds to a lambda term with type $T$, and a beta-reduction of this term corresponds to performing a cut elimination in the proof. 
I urge those who feel that middle school algebra is a good alternative to the lambda calculus to develop an account of higher-order, polymorphically typed middle school algebra together with an appropriate notion of Curry-Howard isomorphism. If you can even work out an interactive proof assistant based on middle school algebra that would allow us to prove the many theorems that have been formalized using lambda calculus-based proof assistants such as Coq and Isabelle, that would be even better. I would then start using middle school algebra, and so, I am sure, would many others with me. | {
"source": [
"https://cs.stackexchange.com/questions/66849",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/62443/"
]
} |
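The composition example from the answer above, written as ordinary Python lambdas; the lambda calculus reading is the same, minus Python's call syntax.
compose = lambda f: lambda g: lambda x: f(g(x))   # λf. λg. λx. f(g(x))
add2    = lambda x: x + 2                          # λx. x+2

add4 = compose(add2)(add2)        # a function built out of functions
print(add4(10))                   # 14

# Applying compose to add2 alone mirrors the reduction in the answer:
# it yields something behaving like  λg. λx. g(x) + 2,  still waiting for g.
print(compose(add2)(lambda x: 5 * x)(3))   # 5*3 + 2 = 17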
67,284 | Are there programs that can 'translate' source code between any two languages (assuming the translator has access to the requisite libraries)? If there are, how do they work (techniques used, knowledge required, etc)? How would they feasibly be constructed? If they aren't, what are the restrictions preventing their development? Is this an A.I complete problem (natural language translation is listed as one)? EDIT Conversion is only expected, when the language has the same expression power, can solve the same kind of problems and the code to be converted can be expressed in the destination language. (E.g conversion from a shell script to MATLAB isn't expected). | TLDR; this is possible but not practical. (assuming the translator has access to the requisite libraries)? This ends up being the tricky bit, and is part of why things like this don't end up being used in practice. All compilers are translators. Translating from one language to another is definitely possible, and this is literally all a compiler is doing. The language that a compiler spits out as output is generally machine code or assembly, but this is just another language, and there are compilers (sometimes called transpilers or transcompilers) which translate between two languages . For example, there's a gamut of compile-to-Javascript languages like PureScript, Elm, ClojureScript, etc. Translating between any two Turing Complete languages is always possible. Ignoring things like library calls and FFI and other nasty practical bits that get in the way, that is. If a language is Turing Complete, then you have: A translation that converts a Turing Machine to code in this language A translation from this language into a Turing Machine So to translate from language A to language B, you convert the A code into a Turing Machine, then convert that machine into B code. Of course, in practice, the practical bits get in the way, and this also requires you having the translations accessible to you. They exist for basically every language, but that doesn't mean someone has taken the time to write them out. Doing this translation efficiently is hard . Different language prioritize different things. For example, if you translate from C to Python, you're probably going to have to end up simulating C's memory as a Python dictionary, so that you can do pointer arithmetic. There will be overhead associated with this, because you're now not accessing the bare metal memory instructions. Different languages have difference performance priorities, so something that one language optimizes (or rather, an implementation of one language optimizes) might be impossible to do quickly in another language. Translating a functional language with proper tail calls will have slowdown if you translate it into a language without proper tail calls. Doing this translation doesn't make the code readable . It's easy to get a piece of code in language B that behaves the same as the code from language A. It's hard to make it look like code a human would have written in B, for a number of reasons. A and B might have different abstraction tools, and the computer has no idea what makes code readable. This will be particularly true if you end up using the Turing Machine translation I described earlier. This raises the question: what's the point of such a translation? If all you get at the end us a block of slow, unreadable code, why not just compile it to machine code and use some kind of FFI or inter-process communication to link the pieces together? 
There are some exceptions to this. Sometimes you need things in a certain language (like JavaScript). Sometimes language are similar, and a sensible translation is easy. Sometimes a language is not meant to be run, but to have its code extracted into another language (such as Coq). But in general, it's not a very practical thing. | {
"source": [
"https://cs.stackexchange.com/questions/67284",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/62509/"
]
} |
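A toy illustration of "all compilers are translators": the same tiny expression tree emitted in two different surface syntaxes. The tuple encoding of the AST is invented for this sketch; real source-to-source translation is hard for exactly the reasons given in the answer (libraries, differing semantics, readability), but the core step is this kind of tree-to-text mapping.
# Expressions: ("num", n) | ("add", left, right) | ("mul", left, right)

def to_python(e):
    if e[0] == "num":
        return str(e[1])
    op = "+" if e[0] == "add" else "*"
    return "(" + to_python(e[1]) + " " + op + " " + to_python(e[2]) + ")"

def to_lisp(e):
    if e[0] == "num":
        return str(e[1])
    op = "+" if e[0] == "add" else "*"
    return "(" + op + " " + to_lisp(e[1]) + " " + to_lisp(e[2]) + ")"

expr = ("mul", ("num", 3), ("add", ("num", 1), ("num", 2)))
print(to_python(expr))   # (3 * (1 + 2))
print(to_lisp(expr))     # (* 3 (+ 1 2))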
67,499 | I am preparing for a coding interview and I can't really figure out the most efficient way to solve this problem. Let's say we have two arrays consisting of numbers that are unsorted. Array 2 contains a number that Array 1 does not. Both arrays have randomly located numbers, not necessarily in the same order or at the same indices. For example: Array 1
[78,11, 143, 84, 77, 1, 26, 35 .... n] Array 2
[11,84, 35, 25, 77, 78, 26, 143 ... 21 ... n+1] What is the fastest algorithm for finding the number that differs? What is its running time? In this example, the number we would be looking for is 21. My idea was to run through Array 1 and delete that value from array 2. Iterate until you are finished. This should be around $O(n \log n)$ running time, right? | I see four main ways to solve this problem, with different running times: $O(n^2)$ solution: this would be the solution that you propose. Note that, since the arrays are unsorted, deletion takes linear time. You carry out $n$ deletions; therefore, this algorithm takes quadratic time. $O(n \: log \: n)$ solution: sort the arrays beforehand; then, perform a linear search to identify the distinct element. In this solution, the running time is dominated by the sorting operation, hence the $O(n \: log \: n)$ upper bound. When you identify a solution to a problem, you should always ask yourself: can I do better? In this case, you can, making a clever use of data structures. Note that all you need to do is to iterate one array and perform repeated lookups in the other array. What data structure allows you to do lookups in (expected) constant time? You guessed right: a hash table . $O(n)$ solution (expected): iterate the first array and store the elements in a hash table; then, perform a linear scan in the second array, looking up each element in the hash table. Return the element that is not found in the hash table. This linear-time solution works for any type of element that you can pass to a hash function (e.g., it would work similarly for arrays of strings). If you want upper-bound guarantees and the arrays are strictly composed of integers, the best solution is, probably, the one suggested by Tobi Alafin (even though this solution will not give you the index of the element that differs in the second array): $O(n)$ solution (guaranteed): sum up the elements of the first array. Then, sum up the elements of the second array. Finally, perform the subtraction. Note that this solution can actually be generalized to any data type whose values can be represented as fixed-length bit strings, thanks to the bitwise XOR operator . This is thoroughly explained in Ilmari Karonen's answer. Finally, another possibility (under the same assumption of integer arrays) would be to use a linear-time sorting algorithm such as counting sort. This would reduce the running time of the sorting-based solution from $O(n \: log \: n)$ to $O(n)$. | {
"source": [
"https://cs.stackexchange.com/questions/67499",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/63162/"
]
} |
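The linear-time ideas from the answer above in a few lines of Python; the small test arrays are my own (the ones in the question are elided with "..."), and the Counter variant is used so that duplicates are handled as a multiset difference.
from collections import Counter

def extra_by_hashing(a, b):
    # Multiset difference: works for any hashable elements, expected O(n).
    diff = Counter(b) - Counter(a)
    return next(iter(diff))

def extra_by_sum(a, b):
    return sum(b) - sum(a)        # integers only, guaranteed O(n)

def extra_by_xor(a, b):
    acc = 0
    for v in a + b:               # every value present in both cancels itself out
        acc ^= v
    return acc

a = [7, 11, 143, 84, 77]
b = [84, 7, 21, 77, 143, 11]
print(extra_by_hashing(a, b), extra_by_sum(a, b), extra_by_xor(a, b))   # 21 21 21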
67,684 | I read in this assembly programming tutorial that 8 bits are used for data while 1 bit is for parity, which is then used for detecting parity errors (caused by hardware faults or electrical disturbance). Is this true? | A byte of data is eight bits; there may be more bits per byte of data that are used at the OS or even the hardware level for error checking (parity bit, or even a more advanced error detection scheme), but the data is eight bits and any parity bit is usually invisible to the software. A byte has been standardized to mean 'eight bits of data'. The text isn't wrong in saying there may be more bits dedicated to storing a byte of data than the eight bits of data, but those aren't typically considered part of the byte per se, and the text itself points to this fact. You can see this in the following section of the tutorial: Doubleword: a 4-byte (32 bit) data item 4*8=32, it might actually take up 36 bits on the system but for all intents and purposes it's only 32 bits. | {
"source": [
"https://cs.stackexchange.com/questions/67684",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/63276/"
]
} |
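For concreteness, a small sketch of how a single even-parity bit over an 8-bit data byte could be computed and checked; this bookkeeping happens below the level the software normally sees, and the helper names are mine.
def parity_bit(byte):
    # Even parity: the extra bit makes the total number of 1 bits even.
    return bin(byte).count("1") % 2

def looks_ok(byte, stored_parity):
    return parity_bit(byte) == stored_parity

data = 0b01010011                        # the 8 data bits the program sees
p = parity_bit(data)                     # the extra bit kept alongside them
print(looks_ok(data, p))                 # True
print(looks_ok(data ^ 0b00001000, p))    # False: a single flipped bit is detected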
67,897 | I've been seeing all over stack Overflow, e.g here , here , here , here , here and some others I don't care to mention, that "any program that uses recursion can be converted to a program using only iteration". There was even a highly upvoted thread with a highly upvoted answer that said yes it's possible. Now I'm not saying they're wrong. It's just that that answer counters my meagre knowledge and understanding about computing. I believe every iterative function can be expressed as recursion, and wikipedia has a statement to that effect. However, I doubt the converse is true. For one, I doubt non-primitive recursive functions can be expressed iteratively. I also doubt hyper-operations can be expressed iteratively. In his answer (which I don't understand by the way) to my question @YuvalFIlmus said that it's not possible to convert any sequence of mathematical operations into a sequence of additions. If YF's answer is indeed correct (I guess it is, but his reasoning was above my head) then doesn't this mean that not every recursion can be converted into iteration? Because if it was possible to convert every recursion into iteration, I'd be able to express all operations as a sequence of additions. My question is this: Can every recursion be converted to iteration and why? Please give an answer a bright highschooler or a first year undergrad will understand. Thank you. P.S I don't know what primitive recursive is (I do know about the Ackermann function, and that it isn't primitive recursive, but is still computable. ALl my knowledge on it comes from the Wikipedia page on the Ackermann function.) P.P.S: If the answer is yes, could you for example write an iterative version of a non-primitive-recursive function. E.g Ackermann in the answer. It'll help me understand. | It's possible to replace recursion by iteration plus unbounded memory . If you only have iteration (say, while loops) and a finite amount of memory, then all you have is a finite automaton. With a finite amount of memory, the computation has a finite number of possible steps, so it's possible to simulate them all with a finite automaton. Having unbounded memory changes the deal. This unbounded memory can take many forms which turn out to have equivalent expressive power. For example, a Turing machine keeps it simple: there's a single tape, and the computer can only move forward or backward on the tape by one step at a time — but that's enough to do anything that you can do with recursive functions. A Turing machine can be seen as an idealized model of a computer (finite state machine) with some extra storage that grows on demand. Note that it's crucial that not only there isn't a finite bound on the tape, but even given the input, you can't reliably predict how much tape will be needed. If you could predict (i.e. compute) how much tape is needed from the input, then you could decide whether the computation would halt by calculating the maximum tape size and then treating the whole system, including the now finite tape, as a finite state machine. Another way to simulate a Turing machine with computers is as follows. Simulate the Turing machine with a computer program that stores the beginning of the tape in memory. If the computation reaches the end of the part of the tape that fits in memory, replace the computer by a bigger computer and run the computation again. Now suppose that you want to simulate a recursive computation with a computer. 
The techniques for executing recursive functions are well-known: each function call has a piece of memory, called a stack frame . Crucially, recursive functions can propagate information through multiple calls by passing variables around. In terms of implementation on a computer, that means that a function call might access the stack frame of a (grand-) * parent call. A computer is a processor — a finite state machine (with a huge number of states, but we're doing computation theory here, so all that matters is that it's finite) — coupled with a finite memory. The microprocessor runs one giant while loop: “while the power is on, read an instruction from memory and execute it”. (Real processors are much more complex than that, but it doesn't affect what they can compute, only how fast and conveniently they do it.) A computer can execute recursive functions with just this while loop to provide iteration, plus the mechanism to access memory, including the ability to increase the size of the memory at will. If you restrict the recursion to primitive recursion, then you can restrict iteration to bounded iteration. That is, instead of using while loops with an unpredictable running time, you can use for loops where the number of iterations is known at the beginning of the loop¹. The number of iterations might not be known at the beginning of the program: it can itself have been computed by previous loops. I'm not going to even sketch a proof here, but there is an intuitive relationship between going from primitive recursion to full recursion, and going from for loops to while loops: in both cases, it involves not knowing in advance when you'll stop. With full recursion, this is done with the minimization operator, where you keep going until you find a parameter that satisfies the condition. With while loops, this is done by keeping going until the loop condition is satisfied. ¹ for loops in C-like languages can perform unbounded iteration just like while , it's just a matter of convention to restrict them to bounded iteration. When people talk about “for loops” in theory of computation, that means only loops that count from 1 to $n$ (or equivalent). | {
"source": [
"https://cs.stackexchange.com/questions/67897",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/62509/"
]
} |
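As a small concrete companion to the recursion-versus-iteration answer above, here is the standard "explicit stack" transformation for a simple recursive function; the nested-list example is invented for the sketch and is not from the original post.
def total_recursive(node):
    # node is either an int or a list of nodes
    if isinstance(node, int):
        return node
    return sum(total_recursive(child) for child in node)

def total_iterative(node):
    # Same computation, recursion replaced by a while loop plus an
    # explicitly managed stack playing the role of the call frames.
    stack, acc = [node], 0
    while stack:
        current = stack.pop()
        if isinstance(current, int):
            acc += current
        else:
            stack.extend(current)
    return acc

tree = [1, [2, 3], [[4], 5]]
print(total_recursive(tree), total_iterative(tree))   # 15 15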
67,904 | I am a little bit confused about how to calculate the memory capacity. word=data lines size
byte=8 bit
n=adresse lines size Most of the people use this formula to calculate the capacity of the
memory: C=(2^n*word)/8 octet Is this formula correct when speaking about byte-addressable? Because if we have byte-addressable memory, I think the capacity will be 2^n octet. If this is correct why people use the formula in all cases? If not can you please explain why to me? Thanks you. | It's possible to replace recursion by iteration plus unbounded memory . If you only have iteration (say, while loops) and a finite amount of memory, then all you have is a finite automaton. With a finite amount of memory, the computation has a finite number of possible steps, so it's possible to simulate them all with a finite automaton. Having unbounded memory changes the deal. This unbounded memory can take many forms which turn out to have equivalent expressive power. For example, a Turing machine keeps it simple: there's a single tape, and the computer can only move forward or backward on the tape by one step at a time — but that's enough to do anything that you can do with recursive functions. A Turing machine can be seen as an idealized model of a computer (finite state machine) with some extra storage that grows on demand. Note that it's crucial that not only there isn't a finite bound on the tape, but even given the input, you can't reliably predict how much tape will be needed. If you could predict (i.e. compute) how much tape is needed from the input, then you could decide whether the computation would halt by calculating the maximum tape size and then treating the whole system, including the now finite tape, as a finite state machine. Another way to simulate a Turing machine with computers is as follows. Simulate the Turing machine with a computer program that stores the beginning of the tape in memory. If the computation reaches the end of the part of the tape that fits in memory, replace the computer by a bigger computer and run the computation again. Now suppose that you want to simulate a recursive computation with a computer. The techniques for executing recursive functions are well-known: each function call has a piece of memory, called a stack frame . Crucially, recursive functions can propagate information through multiple calls by passing variables around. In terms of implementation on a computer, that means that a function call might access the stack frame of a (grand-) * parent call. A computer is a processor — a finite state machine (with a huge number of states, but we're doing computation theory here, so all that matters is that it's finite) — coupled with a finite memory. The microprocessor runs one giant while loop: “while the power is on, read an instruction from memory and execute it”. (Real processors are much more complex than that, but it doesn't affect what they can compute, only how fast and conveniently they do it.) A computer can execute recursive functions with just this while loop to provide iteration, plus the mechanism to access memory, including the ability to increase the size of the memory at will. If you restrict the recursion to primitive recursion, then you can restrict iteration to bounded iteration. That is, instead of using while loops with an unpredictable running time, you can use for loops where the number of iterations is known at the beginning of the loop¹. The number of iterations might not be known at the beginning of the program: it can itself have been computed by previous loops. 
I'm not going to even sketch a proof here, but there is an intuitive relationship between going from primitive recursion to full recursion, and going from for loops to while loops: in both cases, it involves not knowing in advance when you'll stop. With full recursion, this is done with the minimization operator, where you keep going until you find a parameter that satisfies the condition. With while loops, this is done by keeping going until the loop condition is satisfied. ¹ for loops in C-like languages can perform unbounded iteration just like while , it's just a matter of convention to restrict them to bounded iteration. When people talk about “for loops” in theory of computation, that means only loops that count from 1 to $n$ (or equivalent). | {
"source": [
"https://cs.stackexchange.com/questions/67904",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/63729/"
]
} |
68,163 | We have been reading about algorithms for MST, strong-connectivity, routing, etc. in directed graphs. Also recently people have been doing research for dynamic and fault tolerant algorithms for directed graphs. But I was wondering if there are any practical applications where the underlining graph network is "Directed". Other than social-networks all problems that I could think of like rail/road network, Internet network, etc. deal with undirected graphs only. Edit 1: I understand that these can be used to model some scenarios where links are directed but I was wondering how often these scenarios occur in real-world, and how important is study of fault tolerance for directed graphs. | Recalling that a directed graph is a graph where the edges have an associated direction with them. Using a directed graph you can represent asymmetrical relationships between nodes, while in undirected graph we can represent only symmetrical relationships. Practically, using a directed graph you can represent: Road networks (using a directed graph you can represent streets' direction); hyperlinks connecting web pages ; dependencies in software modules ; prey-predator relationships ; deterministic finite automaton . Besides these classical examples, you can depict many other real-world scenarios (financial trade, scheduling, infectious disease, citation, control flow, etc) that needs an ordered relationship [1] . | {
"source": [
"https://cs.stackexchange.com/questions/68163",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/60611/"
]
} |
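One of the items listed in the answer above (dependencies between software modules) in a few lines: the edge direction is what makes a build order meaningful at all. The module names are invented for the sketch, and graphlib requires Python 3.9+.
from graphlib import TopologicalSorter

# Map each module to the set of modules it depends on (its prerequisites).
deps = {
    "app":     {"network", "ui"},
    "ui":      {"core"},
    "network": {"core"},
    "core":    set(),
}
print(list(TopologicalSorter(deps).static_order()))
# e.g. ['core', 'ui', 'network', 'app'] -- an ordering that an undirected
# "depends somehow on" relation could not even express.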
68,231 | I have been studying compilers for a while, and I have been searching what's meant by "context" in grammar and what it means for grammar to be "context-free", but with no result. So can anyone help with this ? | The context can be explained with regards to the production rules allowed for different grammars in Chomsky hierarchy. If you consider context-free grammars, their production rules have the following form: $$ A \rightarrow \alpha$$ So, you can observe that the left part of this kind of rules is made up of only one non-terminal symbol; thus, the substitution of the non-terminal symbol takes place without considering its "context", that is the other symbols it is surrounded by. On the other hand, if you consider production rules of context-sensitive grammars, they have the following form: $$ \beta A \gamma \rightarrow \beta \alpha \gamma$$ where $A$ is a non-terminal and $\alpha$, $\beta$, $\gamma$ are sequences of non-terminals and terminals. In this case the "context" (i.e., $\beta$ and $\gamma$) of the non-terminal symbol to be substituted influences the effect of the substitution and it is part of the rule itself. You can find more details in this answer on mathematics and in this answer on software engineering. | {
"source": [
"https://cs.stackexchange.com/questions/68231",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/64119/"
]
} |
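A tiny sketch of what "no context" means operationally: when a context-free derivation expands a nonterminal, it never looks at the symbols around it. The example grammar and encoding are invented for illustration.
import random

# Context-free rules: a single nonterminal on the left, nothing else.
RULES = {
    "S": [["a", "S", "b"], []],    # S -> a S b | empty
}

def derive(symbols):
    while any(s in RULES for s in symbols):
        i = next(i for i, s in enumerate(symbols) if s in RULES)
        replacement = random.choice(RULES[symbols[i]])
        # The choice depends only on symbols[i], never on its neighbours --
        # that independence is exactly what "context-free" refers to.
        symbols = symbols[:i] + replacement + symbols[i + 1:]
    return "".join(symbols)

print(derive(["S"]))   # some string of the form a^n b^n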
68,635 | In languages like C, the programmer is expected to insert calls to free. Why doesn't the compiler do this automatically? Humans do it in a reasonable amount of time(ignoring bugs), so it is not impossible. EDIT: For future reference, here is another discussion that has an interesting example. | Because it's undecidable whether the program will use the memory again. This means that no algorithm can correctly determine when to call free() in all cases, which means that any compiler that tried to do this would necessarily produce some programs with memory leaks and/or some programs that continued to use memory that had been freed. Even if you ensured that your compiler never did the second one and allowed the programmer to insert calls to free() to fix those bugs, knowing when to call free() for that compiler would be even harder than knowing when to call free() when using a compiler that didn't try to help. | {
"source": [
"https://cs.stackexchange.com/questions/68635",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/64566/"
]
} |
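A sketch of the difficulty in Python terms: whether the buffer is ever needed again depends on the result of an arbitrary computation, so "insert the single correct free here" amounts to predicting program behaviour. The predicate below is a runnable stand-in, invented for the sketch, for "any computation the compiler cannot predict without running it".
def arbitrary_computation(n):
    # Stand-in for an arbitrary computation whose outcome the compiler
    # cannot, in general, predict without effectively running the program.
    return sum(map(int, str(7 ** n))).bit_length() % 2 == 0

def f(n):
    buffer = list(range(10_000))          # the allocation in question
    # To place a free()/del for `buffer` right here, before the branch,
    # the compiler would have to decide which branch will run -- and the
    # answer says deciding that for every program is impossible.
    if arbitrary_computation(n):
        return buffer[n % len(buffer)]    # still needed on this path
    return -1                             # never needed on this path

print(f(3))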
68,888 | I just starting taking a course on Data Structures and Algorithms and my teaching assistant gave us the following pseudo-code for sorting an array of integers: void F3() {
for (int i = 1; i < n; i++) {
if (A[i-1] > A[i]) {
swap(i-1, i)
i = 0
}
}
} It may not be clear, but here $n$ is the size of the array A that we are trying to sort. In any case, the teaching assistant explained to the class that this algorithm is in $\Theta(n^3)$ time (worst-case, I believe), but no matter how many times I go through it with a reversely-sorted array, it seems to me that it should be $\Theta(n^2)$ and not $\Theta(n^3)$. Would someone be able to explain to me why this is $Θ(n^3)$ and not $Θ(n^2)$? | This algorithm can be re-written like this Scan A until you find an inversion . If you find one, swap and start over. If there is none, terminate. Now there can be at most $\binom{n}{2} \in \Theta(n^2)$ inversions and you need a linear-time scan to find each -- so the worst-case running time is $\Theta(n^3)$. A beautiful teaching example as it trips up the pattern-matching approach many succumb to! Nota bene: One has to be a little careful: some inversions appear early, some late, so it is not per se trivial that the costs add up as claimed (for the lower bound). You also need to observe that swaps never introduce new inversions. A more detailed analysis of the case with the inversely sorted array will then yield something like the quadratic case of Gauss' formula. As @gnasher729 aptly comments, it's easy to see the worst-case running time is $\Omega(n^3)$ by analyzing the running time when sorting the input $[1, 2, \dots, n, 2n, 2n-1, \dots, n+1]$ (though this input is probably not the worst case). Be careful: don't assume that a reversely-sorted array will necessarily be the worst-case input for all sorting algorithms. That depends on the algorithm. There are some sorting algorithms where a reversely-sorted array isn't the worst case, and might even be close to the best case. | {
"source": [
"https://cs.stackexchange.com/questions/68888",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/64835/"
]
} |
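The lower-bound input suggested at the end of the answer above can be checked directly by instrumenting the pseudo-code; this is a straight translation of F3 with a comparison counter added, so the exact growth it reports is an empirical observation, not a proof.
def f3(a):
    comparisons = 0
    i = 1
    while i < len(a):
        comparisons += 1
        if a[i - 1] > a[i]:
            a[i - 1], a[i] = a[i], a[i - 1]
            i = 0              # restart from the beginning, as in the pseudo-code
        i += 1
    return comparisons

for n in (8, 16, 32, 64):
    half = n // 2
    a = list(range(1, half + 1)) + list(range(2 * half, half, -1))
    print(n, f3(a))            # counts grow roughly like a constant times n**3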
69,215 | Can a context-free grammar include "dead states" from an automaton, such as $$G = \big(\{a, b, c\}, \{A, B, C\}, \{A\to aB, B\to b, B\to C, C\to cC\}, A\big)\,?$$ The production rules $B\to C$ and $C\to cC$ will loop forever and never generate a word. Is this allowed or MUST production rules end with an terminal at some point? | Context-free grammars are allowed to contain unproductive rules . This is accepted, because every CFG generates the same language as some proper CFG which contains no unproductive rules, no empty string productions, and no cycles; so it is safe to assume that a CFG is proper without loss of generality. | {
"source": [
"https://cs.stackexchange.com/questions/69215",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/65140/"
]
} |
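A small sketch of why the assumption is safe: the productive nonterminals can be computed as a fixed point, and rules mentioning any other nonterminal can simply be dropped. The grammar below is the one from the question; the encoding is invented for the sketch.
# Rules: A -> aB, B -> b, B -> C, C -> cC
RULES = [("A", ["a", "B"]), ("B", ["b"]), ("B", ["C"]), ("C", ["c", "C"])]
NONTERMINALS = {"A", "B", "C"}

productive = set()
changed = True
while changed:
    changed = False
    for head, body in RULES:
        if head not in productive and all(
            s not in NONTERMINALS or s in productive for s in body
        ):
            productive.add(head)
            changed = True

proper = [(h, b) for h, b in RULES
          if h in productive and all(s not in NONTERMINALS or s in productive for s in b)]
print(sorted(productive))   # ['A', 'B'] : C never derives a terminal string
print(proper)               # [('A', ['a', 'B']), ('B', ['b'])]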
69,580 | We have Hoare logic. Why is it still possible that an algorithm is right but there is no proof that it's correct? Suppose the algorithm is expressed in C. Then we can argue step by step that it's doing what it's supposed to do. So my question is: Give me an example of an algorithm that's right but does not have a proof of correctness. EDIT: I think a little background can help clarify where I'm going. Let me quote Scott Aaronson: Since the 1970s, there's been speculation that P $\ne$ NP might be independent (that is, neither provable nor disprovable) from the standard axiom systems for mathematics, such as Zermelo-Fraenkel set theory. To be clear, this would mean that either a polynomial-time algorithm for NP-complete problems doesn't exist, but we can never prove it (at least not in our usual formal systems), or else a polynomial-time algorithm for NP-complete problems does exist, but either we can never prove that it works, or we can never prove that it halts in polynomial time. I'm referring to the second possibility. Since Aaronson can so confidently list it as a possibility, I think there must be an existing example of type 2. That's why I'm asking this question. But it seems a quick and clear answer is not in view. | Here is an algorithm for the identity function: Input: $n$ Check if the $n$th binary string encodes a proof of $0 > 1$ in ZFC, and if so, output $n+1$ Otherwise, output $n$ Most people suspect this algorithm computes the identity function, but we don't know, and we can't prove it in the commonly accepted framework for mathematics, ZFC . | {
"source": [
"https://cs.stackexchange.com/questions/69580",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/48853/"
]
} |
69,936 | I read the Wikipedia entry about " List of NP-complete problems " and found that games like super mario, pokemon, tetris or candy crush saga are np-complete. How can I imagine np-completeness of a game? Answers don't need to be too precise. I just want to get an overview what it means that games can be np-complete. | It just means that you can create levels or puzzles within these games that encode NP-Hard problems. You can take a graph coloring problem, create an associated Super Mario Bros. level, and that level is beatable if and only if the graph is 3-colorable. If you want to see the specific way the NP-Complete problems are translated into the games, I recommend the paper "Classic Nintendo Games are (Computationally) Hard" . It's well written and easy to follow. An important caveat to keep in mind is that the NP-hardness requires generalizing the games in "obvious" ways. For example, Tetris normally has a fixed size board but the hardness proof requires the game to allow arbitrarily large boards. Another example is off-screen enemies in Super Mario Bros: the proof is for a variant of the game where off-screen enemies continue moving as if they were onscreen, instead of ceasing to exist and being reset to their starting position when Mario comes back. | {
"source": [
"https://cs.stackexchange.com/questions/69936",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/64065/"
]
} |
69,937 | Well known example of Context-sensitive grammar which produces language $\{a^nb^nc^n|n\geq 1\}$ is widely used in various papers. But actually, while this language is definitely context-sensitive, it is also belongs to smaller subset of CS-languages: it is indexed language, because it can be described with indexed grammar as well. What I'm looking for, it is example of Context-sensitive grammar which produces non-indexed language. From this paper it is known that language $\{(ab^n)^n|n\geq 1\}$ is not indexed. But there is no described grammar for this language. So could somebody describe CS-grammar for given non-indexed language example, or provide with any other example? | | {
"source": [
"https://cs.stackexchange.com/questions/69937",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/20668/"
]
} |
69,953 | I know for pretty sure that there is a function with the type $f: \forall \alpha, \beta . \alpha \rightarrow \beta$ (at least in a Hindley-Milner type system), but I can't wrap my head over it. Neither could I think of an actual function with this type. I found a function of this type, which in Standard ML would be written as: fun f x = f x But I am not sure of the lambda calculus equivalent of this function. Moreover, if I'm right about Curry-Howard, the isomorphism of this type is the proposition $\forall A, B . A \implies B$, which does not make sense for me. Is it possible that someone give a function with type $f$ and explain its Curry-Howard equivalent to me? | | {
"source": [
"https://cs.stackexchange.com/questions/69953",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/65425/"
]
} |
70,044 | From Wikipedia: In theoretical computer science, correctness of an algorithm is asserted when it is said that the algorithm is correct with respect to a specification. But the problem is that to get the "appropriate" specification is not a trivial task, and there is no 100% correct method (as far as i know) to get the right one, it just an estimation, so if we are going to take a predicate as a specification just because it "looks" like the "one", why not taking the program as correct just because it "looks" correct? | First off, you're absolutely right: you're on to a real concern. Formal verification transfers the problem of confidence in program correctness to the problem of confidence in specification correctness, so it is not a silver bullet. There are several reasons why this process can still be useful, though. Specifications are often simpler than the code itself. For instance, consider the problem of sorting an array of integers. There are fairly sophisticated sorting algorithms that do clever things to improve performance. But the specification is fairly simple to state: the output must be in increasing order, and must be a permutation of the input. Thus, it is arguably easier to gain confidence in the correctness of the specification than in the correctness of the code itself. There is no single point of failure. Suppose you have one person write down a specification, and another person write the source code, and then formally verify that the code meets the spec. Then any undetected flaw would have to be present in both the spec and the code. In some cases, for some types of flaws, this feels less likely: it's less likely that you'd overlook the flaw when inspecting the spec and overlook the flaw when inspecting the source code. Not all, but some. Partial specs can be vastly simpler than the code. For instance, consider the requirement that the program is free of buffer overrun vulnerabilities. Or, the requirement that there are no array index out-of-bounds errors. This is a simple spec that is fairly obviously a good and useful thing to be able to prove. Now you can try to use formal methods to prove that the entire program meets this spec. That might be a fairly involved task, but if you are successful, you gain increased confidence in the program. Specs might change less frequently than code. Without formal methods, each time we update the source code, we have to manually check that the update won't introduce any bugs or flaws. Formal methods can potentially reduce this burden: suppose the spec doesn't change, so that software updates involve only changes to the code and not changes to the spec. Then for each update, you are relieved of the burden to check whether the spec is still correct (it hasn't changed, so there's no risk new bugs have been introduced in the spec) and of the burden to check whether the code is still correct (the program verifier checks that for you). You still need to check that the original spec is correct, but then you don't need to keep checking it each time a developer commits a new patch/update/change. This can potentially reduce the burden of checking correctness as code is maintained and evolves. Finally, remember that specs typically are declarative and can't necessarily be executed nor compiled directly to code. 
For instance, consider sorting again: the spec says that the output is increasing and is a permutation of the input, but there is no obvious way to "execute" this spec directly and no obvious way for a compiler to automatically compile it to code. So just taking the spec as correct and executing it often isn't an option. Nonetheless, the bottom line remains the same: formal methods are not a panacea. They simply transfer the (very hard) problem of confidence in code correctness to the (merely hard) problem of confidence in spec correctness. Bugs in the spec are a real risk, they are common, and they can't be overlooked. Indeed, the formal methods community sometimes separates the problem into two pieces: verification is about ensuring the code meets the spec; validation is about ensuring the spec is correct (meets our needs). You might also enjoy Formal program verification in practice and Why aren't we researching more towards compile time guarantees? for more perspectives with some bearing on this. | {
"source": [
"https://cs.stackexchange.com/questions/70044",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/64410/"
]
} |
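The sorting example from the answer above, with the specification written as an executable predicate; note the spec says nothing about how the sort works, which is why it is so much shorter than a clever sorting routine. The helper names are mine, and checking the predicate on sample inputs is testing against the spec, not the formal proof the answer discusses.
from collections import Counter

def meets_sort_spec(inp, out):
    increasing  = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = Counter(out) == Counter(inp)
    return increasing and permutation

def my_sort(xs):                  # stand-in for an arbitrarily clever implementation
    return sorted(xs)

data = [5, 3, 3, 9, 1]
print(meets_sort_spec(data, my_sort(data)))   # True
print(meets_sort_spec(data, [1, 3, 5, 9]))    # False: not a permutation of the input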
71,648 | I understand that "structure" of data is totally dependent on Boolean Algebra, but: Why is data considered to be a discrete mathematical entity rather than a continuous one? Related to this: What are the drawbacks, or invariants, that are violated in structuring data as a continuous entity in $r$ dimensions? I am not an expert in the field as I am an undergrad math student, so I'd really appreciate it if someone would explain this to me like I'm five. | Answer why was the data considered to be a discrete mathematical entity rather than a continuous one This was not a choice; it is theoretically and practically impossible to represent continuous, concrete values in a digital computer, or actually in any kind of calculation. Note that "discrete" does not mean "integer" or something like that. "discrete" is the opposite of "continuous". This means, to have a computer that is truly able to store non-discrete things, you would need to be able to store two numbers a and b where abs(a-b) < ε for any arbitrarily small value of ε . Sure, you can go as deep as you want (by using more and more storage space), but every (physical) computer always has an upper bound. No matter what you do, you can never make a (physical) computer that stores arbitrarily finely resolved numbers. Even if you are able to represent numbers by mathematical constructs (for example π ), this does not change anything. If you store a graph or whatever that represents a mathematical formula, then this is just as discrete as anything else. Addendum The rest is just a little perspective beyond the field of computer science. As the comments have shown, the physical topic is not undisputed, and as you can see I have formulated my next paragraph in a way that is rather uncommitted to whether it's true or not. Take it more as a motivation that the concept of "continuum" is not a trivial one. The answer given above does not depend on whether space is discrete or not. Note that all of this is not so much a problem of computers, but a problem with the meaning of "continuous". For example, not everyone even agrees, or did agree in the past, that the Universe is continuous (e.g., Does the Planck scale imply that spacetime is discrete? ). For some things (e.g., energy states of electrons and many other features in Quantum (sic) Mechanics) we even know that the Universe is not continuous; for others (e.g., position...) the jury is still out (at least regarding the interpretation of research results...). (Notwithstanding the problem that even if it is continuous, we could not measure to arbitrary precision => Heisenberg etc.). In mathematics, studying the continuum (i.e., the reals) opens up a lot of fascinating aspects, like measure theory, which makes it utterly impossible to actually store a "continuous" kind of number/data. | {
"source": [
"https://cs.stackexchange.com/questions/71648",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/67903/"
]
} |
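As a small illustration of the point above that stored numbers are always discrete, the following Python sketch (requires Python 3.9+ for math.nextafter) shows that around any representable IEEE 754 double there is a smallest step to the next one, and no value strictly in between can be stored.

import math
import sys

x = 1.0
next_up = math.nextafter(x, math.inf)   # closest representable double above 1.0
print(next_up - x)                      # 2.220446049250313e-16
print(sys.float_info.epsilon)           # the same value, by definition

# A value strictly between x and next_up cannot be represented:
y = x + (next_up - x) / 3
print(y == x)                           # True: it collapses back onto 1.0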
71,979 | I have noticed that some applications or algorithms that are built on a programming language, say C++/Rust, run faster or snappier than those built on, say, Java/Node.js, running on the same machine. I have a few questions regarding this: Why does this happen? What governs the "speed" of a programming language? Has this anything to do with memory management? I'd really appreciate it if someone broke this down for me. | In programming language design and implementation, there is a large number of choices that can affect performance. I'll only mention a few. Every language ultimately has to be run by executing machine code. A "compiled" language such as C++ is parsed, decoded, and translated to machine code only once, at compile-time. An "interpreted" language, if implemented in a direct way, is decoded at runtime, at every step, every time. That is, every time we run a statement, the interpreter has to check whether that is an if-then-else, or an assignment, etc. and act accordingly. This means that if we loop 100 times, we decode the same code 100 times, wasting time. Fortunately, interpreters often optimize this through e.g. a just-in-time compiling system. (More correctly, there's no such thing as a "compiled" or "interpreted" language -- it is a property of the implementation, not of the language. Still, each language often has only one widespread implementation.) Different compilers/interpreters perform different optimizations. If the language has automatic memory management, its implementation has to perform garbage collection. This has a runtime cost, but relieves the programmer from an error-prone task. A language might be closer to the machine, allowing the expert programmer to micro-optimize everything and squeeze more performance out of the CPU. However, it is arguable whether this is actually beneficial in practice, since most programmers do not really micro-optimize, and often a good higher-level language can be optimized by the compiler better than what the average programmer would do.
(However, sometimes being farther from the machine might have its benefits too! For instance, Haskell is extremely high level, but thanks to its design choices is able to feature very lightweight green threads.) Static type checking can also help in optimization. In a dynamically typed, interpreted language, every time one computes x - y, the interpreter often has to check whether both x, y are numbers and (e.g.) raise an exception otherwise. This check can be skipped if types were already checked at compile time. Some languages always report runtime errors in a sane way. If you write a[100] in Java where a has only 20 elements, you get a runtime exception. This requires a runtime check, but provides a much nicer semantics to the programmer than in C, where that would cause undefined behavior, meaning that the program might crash, overwrite some random data in memory, or even perform absolutely anything else (the ISO C standard poses no limits whatsoever). However, keep in mind that, when evaluating a language, performance is not everything. Don't be obsessed with it. It is a common trap to try to micro-optimize everything, and yet fail to spot that an inefficient algorithm/data structure is being used. Knuth once said "premature optimization is the root of all evil". Don't underestimate how hard it is to write a program right. Often, it can be better to choose a "slower" language which has a more human-friendly semantics. Further, if there are some specific performance-critical parts, those can always be implemented in another language. Just as a reference, in the 2016 ICFP programming contest, these were the languages used by the winners: 1 700327 Unagi Java,C++,C#,PHP,Haskell
2 268752 天羽々斬 C++, Ruby, Python, Haskell, Java, JavaScript
3 243456 Cult of the Bound Variable C++, Standard ML, Python None of them used a single language. | {
"source": [
"https://cs.stackexchange.com/questions/71979",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/67903/"
]
} |
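To illustrate the point above about per-operation decoding and dynamic type checks, here is a deliberately naive Python sketch (hypothetical code, not taken from any real interpreter) of the work a dynamically typed runtime repeats on every single evaluation of a subtraction.

def dynamic_sub(x, y):
    # Checks a dynamic-language runtime has to repeat on every `x - y`:
    if isinstance(x, (int, float)) and isinstance(y, (int, float)):
        return x - y
    raise TypeError(f"unsupported operand types: {type(x).__name__} - {type(y).__name__}")

# Inside a loop, the same checks run over and over; a statically typed,
# ahead-of-time compiled language can prove them once and emit a bare
# subtract instruction instead.
total = 0
for i in range(100):
    total = dynamic_sub(total, -i)
print(total)  # 4950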
72,102 | I am a software engineer and, after a discussion with some colleagues, I realized I do not have a good grasp of the concept of serialization. As I understand it, serialization is the process of converting some entity, such as an object in OOP, to a sequence of bytes, so that the said entity can be stored or transmitted for subsequent access (the process of "deserialization"). The trouble I have is: aren't all variables (be they primitives like int or composite objects) already represented by a sequence of bytes? (Of course they are, because they are stored in registers, memory, disk, etc.) So what makes serialization such a deep topic? To serialize a variable, can't we just take these bytes in memory and write them to a file? What intricacies have I missed? | If you have a complicated data structure, its representation in memory might ordinarily be scattered throughout memory. (Think of a binary tree, for instance.) In contrast, when you want to write it to disk, you probably want to have a representation as a (hopefully short) sequence of contiguous bytes. That's what serialization does for you. | {
"source": [
"https://cs.stackexchange.com/questions/72102",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/68461/"
]
} |
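As a concrete illustration of the answer above, here is a small Python sketch (the names are made up for the example) that serializes a binary tree, whose nodes would otherwise be scattered across the heap, into one contiguous byte string and restores it again.

import json

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def to_bytes(node):
    # Flatten the pointer-based tree into a contiguous sequence of bytes.
    def as_dict(n):
        return None if n is None else {
            "value": n.value, "left": as_dict(n.left), "right": as_dict(n.right)
        }
    return json.dumps(as_dict(node)).encode("utf-8")

def from_bytes(data):
    def build(d):
        return None if d is None else Node(d["value"], build(d["left"]), build(d["right"]))
    return build(json.loads(data.decode("utf-8")))

tree = Node(2, Node(1), Node(3))
blob = to_bytes(tree)                # suitable for a file or a network socket
copy = from_bytes(blob)
print(copy.left.value, copy.value, copy.right.value)  # 1 2 3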
75,046 | I have a file containing ordered binary numbers from $0$ to $2^n - 1$: 0000000000
0000000001
0000000010
0000000011
0000000100
...
1111111111 7z did not compress this file very efficiently (for n = 20, 22 MB were compressed to 300 kB). Are there algorithms that can recognize the very simple structure of the data and compress the file to several bytes? Also, I want to know what area of CS or information theory studies such smart algorithms. "AI" would be too broad; please suggest more concrete keywords. The notion of symmetry should play a fundamental role in data compression, but the search queries "symmetry in data compression" and "group theory in data compression" surprisingly return almost nothing relevant. | This seems to be a clear use case for delta compression. If $n$ is known a priori, this is trivial: store the first number verbatim, and for each subsequent number store only the difference to the previous one. In your case, this will give 0
1
1
1
1
... This can then be stored with simple run-length encoding in $\mathcal{O}(n)$ space, as there are only $\mathcal{O}(1)$ groups (namely, two) of different deltas. If $n$ is not known, the simplest thing would be a brute-force search for the word-size for which this delta/run-length representation comes out shortest. Perhaps only do this search for randomly-chosen, $\sqrt{N}$-sized chunks, to amortize the overhead of finding $n$ while retaining good reliability. Unlike D.W.'s “all or nothing” proposal, delta compression with run-length encoding can actually give sensible compression ratios for some simple real-world kinds of content, like low-resolution audio. (It is thus suitable for low-quality, very low-latency and low-power audio compression.) | {
"source": [
"https://cs.stackexchange.com/questions/75046",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/15039/"
]
} |
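Here is a minimal Python sketch of the delta plus run-length scheme described in the answer above (assuming the word size n is known; the brute-force search over n is omitted).

def delta_rle_compress(lines, n):
    # Represent n-bit binary strings as (first value, run-length-encoded deltas).
    values = [int(line, 2) for line in lines]
    deltas = [b - a for a, b in zip(values, values[1:])]
    runs = []                                  # run-length encoding of the deltas
    for d in deltas:
        if runs and runs[-1][0] == d:
            runs[-1][1] += 1
        else:
            runs.append([d, 1])
    return values[0], runs

def delta_rle_decompress(first, runs, n):
    values = [first]
    for delta, count in runs:
        for _ in range(count):
            values.append(values[-1] + delta)
    return [format(v, f"0{n}b") for v in values]

n = 10
lines = [format(i, f"0{n}b") for i in range(2 ** n)]
first, runs = delta_rle_compress(lines, n)
print(first, runs)   # 0 [[1, 1023]] -- the whole file collapses to a few numbers
assert delta_rle_decompress(first, runs, n) == lines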
75,327 | The Vapnik–Chervonenkis (VC)-dimension formula for neural networks ranges from $O(E)$ to $O(E^2)$ , with $O(E^2V^2)$ in the worst case, where $E$ is the number of edges and $V$ is the number of nodes. The number of training samples needed to have a strong guarantee of generalization is linear with the VC-dimension. This means that for a network with billions of edges, as in the case of successful deep learning models, the training dataset needs billions of training samples in the best case, to quadrillions in the worst case. The largest training sets currently have about a hundred billion samples. Since there is not enough training data, it is unlikely deep learning models are generalizing. Instead, they are overfitting the training data. This means the models will not perform well on data that is dissimilar to the training data, which is an undesirable property for machine learning. Given the inability of deep learning to generalize, according to VC dimensional analysis, why are deep learning results so hyped? Merely having a high accuracy on some dataset does not mean much in itself. Is there something special about deep learning architectures that reduces the VC-dimension significantly? If you do not think the VC-dimension analysis is relevant, please provide evidence/explanation that deep learning is generalizing and is not overfitting. I.e. does it have good recall AND precision, or just good recall? 100% recall is trivial to achieve, as is 100% precision. Getting both close to 100% is very difficult. As a contrary example, here is evidence that deep learning is overfitting. An overfit model is easy to fool since it has incorporated deterministic/stochastic noise. See the following image for an example of overfitting. Also, see lower ranked answers to this question to understand the problems with an overfit model despite good accuracy on test data. Some have responded that regularization solves the problem of a large VC dimension. See this question for further discussion. | "If the map and the terrain disagree, trust the terrain." It's not really understood why deep learning works as well as it does, but certainly old concepts from learning theory such as VC dimensions appear not to be very helpful. The matter is hotly debated, see e.g.: H. W. Lin, M. Tegmark, D. Rolnick, Why does deep and cheap learning work so well? C. Zhang, S. Bengio, M. Hardt, B. Recht, O. Vinyals, Understanding Deep Learning Requires Rethinking Generalization . D. Krueger, B. Ballas, S. Jastrzebski, D. Arpit, M. S. Kanwal, T. Maharaj, E. Bengio, A. Fischer, A. Courville, Deep Nets Dont Learn Via Memorization . Regarding the issue of adversarial examples , the problem was discovered in: C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, Going deeper with convolutions . It is further developed in: I. Goodfellow, J. Shlens, C. Szegedy, Explaining And Harnessing Adversarial Examples . There is a lot of follow-on work. Update March 2020. A new hypothesis that appears to explain some of the mismatch between clear over-parameterisation of modern (feed-forward) NNs and good recognition performance is Frankle and Carbin's Lottery Ticket Hypothesis from 2018: J. Frankle, M. Carbin, The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. The claim is that a "randomly-initialised, dense [feed-forward] neural
network contains a subnetwork that is initialised such that when
trained in isolation it can match the test accuracy of the original
network after training for at most the same number of iterations." Regarding the original question, the Lottery Ticket Hypothesis might be understood as saying that: Training by stochastic gradient descent searches for small subnetworks that work well and deemphasises the rest of the overparameterised network's learning capacity. The bigger the original network, the more likely it is to contain a small subnetwork with good performance on the task at hand. This has found empirical support, e.g. in H. Zhou, J. Lan, R. Liu, J. Yosinski, Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask. and theoretical support in: E. Malach, G. Yehudai, S. Shalev-Shwartz, O. Shamir, Proving the Lottery Ticket Hypothesis: Pruning is All You Need. As far as I'm aware, it has not yet been possible to generalise the Lottery Ticket Hypothesis to recurrent NNs. Update March 2021. The original 2016 paper Understanding Deep Learning Requires Rethinking Generalization has been updated. Here is the new version: C. Zhang, S. Bengio, M. Hardt, B. Recht, O. Vinyals, Understanding Deep Learning (Still) Requires Rethinking Generalization. | {
"source": [
"https://cs.stackexchange.com/questions/75327",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/17264/"
]
} |
75,811 | I know that bit-wise operations are so fast on modern processors because they can operate on 32 or 64 bits in parallel, so bit-wise operations take only one clock cycle. However, addition is a complex operation that consists of at least one and possibly up to a dozen bit-wise operations, so I naturally thought it would be 3-4 times slower. I was surprised to see after a simple benchmark that addition is exactly as fast as any of the bit-wise operations (XOR, OR, AND, etc.). Can anyone shed light on this? | Addition is fast because CPU designers have put in the circuitry needed to make it fast. It does take significantly more gates than bitwise operations, but it is frequent enough that CPU designers have judged it to be worth it. See https://en.wikipedia.org/wiki/Adder_(electronics) . Both can be made fast enough to execute within a single CPU cycle. They're not equally fast -- addition requires more gates and more latency than a bitwise operation -- but it's fast enough that a processor can do it in one clock cycle. There is a per-instruction latency overhead for the instruction decoding and control logic, and the latency for that is significantly larger than the latency to do a bitwise operation, so the difference between the two gets swamped by that overhead. AProgrammer's answer and Paul92's answer explain those effects well. | {
"source": [
"https://cs.stackexchange.com/questions/75811",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/60365/"
]
} |
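To see why addition needs more gate levels than a single bitwise operation, here is a toy ripple-carry adder sketch in Python: each result bit is built from AND, XOR and OR of the inputs and a carry that has to ripple through every position. (Real CPUs use faster carry-lookahead or similar circuits, as the linked adder article describes, but the gate-count point is the same.)

def ripple_carry_add(a, b, width=32):
    # Add two unsigned integers using only gate-level bit operations.
    result, carry = 0, 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                            # sum bit: two XOR gates
        carry = (x & y) | (x & carry) | (y & carry)  # carry-out: AND/OR gates
        result |= s << i
    return result & ((1 << width) - 1)

print(ripple_carry_add(12345, 67890))  # 80235
print(12345 + 67890)                   # 80235 -- the hardware adder produces the
                                       # same result within a single clock cycle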
75,815 | Let's say I have a function that consists solely of floating-point operations where the last operation rounds the computed value to a predefined number of digits. And I feed this function with a range of floats. How do I conclude - just by looking at the decimals of the operand(s), and the structure of the code / sequence of floating-point calculations - that the round-up errors may or may not yield an unexpected result? Do you know of any rule of thumb or mathematical methodology? Anything I can put to general use? I don't want to actually test the code. Just one example for reference; range of input, required precision, and implementation of calc() are arbitrary: Input: 0, 0.01, 0.02 ... 999.98, 999.99, 1000 (delta = 0.01) Required precision: Rounded accurately to 2nd decimal Pseudo code: function main(a) {
res = calc(a);
return round(res); // Rounds res to desired decimals
}
function calc(a) {
float b = 1.1;
return a * b;
} Update #1: I found an interesting blog post on the topic: Introduction to Scientific Computing: Error Propagation. It seems to contain the mathematical tools one needs to evaluate how "error prone" a computation is. Update #2: I worked out the following method to estimate the error propagation and will describe how I applied it to my use case. It would be great if you could comment on my thoughts. I'm not sure if I did it right. And thanks for your help so far! :) Pseudo code: // Floats conform to IEEE 754
function main() {
float a,b; // Real value between 199 and 684
a = some_value;
b = another_value;
res_a = calc(a);
res_b = calc(b);
res = res_a + res_b;
return round(res); // Rounds to 2nd decimal
}
function calc(x) {
float c = yet_another_value; // Real value between 1 and 1.4
return c * x;
} Estimation: As I'm using IEEE 754 floats, the machine epsilon ε is 1.1e-16. The relative error δ is 2ε, according to the article linked under update #1. So the absolute error e_max for my maximal input won't exceed 3.8304e-13.* That means the result will be precise up to the 12th decimal. Since I just desire a precision of 2, the calculation does work precisely enough. *) e_max = 2ε * res_max = 2 * 1.1e-16 * 2 * 684 * 1.4 = 3.8304e-13. Bonus consideration: Q: If one were to add up the results of calc() indefinitely, approximately how many results would be "needed" to contaminate the rounded total? A: One would have to call calc() at least 4 trillion times. \o/ The error e has to be at least 0.001 to force a round-up error. The maximum result of calc() is x_max = 684 * 1.4 = 957.6. e = δ * x => x = e / δ = e / 2ε = 1e-3 / (2 * 1.1e-16) = 1e+13 / 2.2, so n = x / x_max = 1e+13 / (2.2 * 957.6) ≈ 4e+9. | Addition is fast because CPU designers have put in the circuitry needed to make it fast. It does take significantly more gates than bitwise operations, but it is frequent enough that CPU designers have judged it to be worth it. See https://en.wikipedia.org/wiki/Adder_(electronics) . Both can be made fast enough to execute within a single CPU cycle. They're not equally fast -- addition requires more gates and more latency than a bitwise operation -- but it's fast enough that a processor can do it in one clock cycle. There is a per-instruction latency overhead for the instruction decoding and control logic, and the latency for that is significantly larger than the latency to do a bitwise operation, so the difference between the two gets swamped by that overhead. AProgrammer's answer and Paul92's answer explain those effects well. | {
"source": [
"https://cs.stackexchange.com/questions/75815",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/72510/"
]
} |
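The error estimate in the question's Update #2 above is easy to sanity-check empirically. The following Python sketch (an illustration, not a proof) samples the stated input ranges, compares the double-precision result of c*a + c*b against exact rational arithmetic, and shows the error staying on the order of 1e-13, consistent with the estimate and far below the half-unit of the second decimal that rounding would need to be disturbed (values lying essentially on a rounding boundary aside).

from fractions import Fraction
import random

random.seed(0)
worst = Fraction(0)
for _ in range(10_000):
    a = random.uniform(199, 684)
    b = random.uniform(199, 684)
    c = random.uniform(1.0, 1.4)
    computed = c * a + c * b                                   # double precision
    exact = Fraction(c) * Fraction(a) + Fraction(c) * Fraction(b)
    worst = max(worst, abs(Fraction(computed) - exact))

print(float(worst))   # a few times 1e-13 at most, so rounding to two
                      # decimals is unaffected in this range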
76,098 | I have the following quotation from my compiler's course (in the context of graph coloring): Because it is slow, graph coloring tends to be used in batch
compilers, while linear scan tends to be used in JIT compilers. I couldn't find a clear definition online. So, what makes a compiler be a batch compiler? | A JIT (Just-In-Time) compiler compiles code at run-time, i.e. as the program is running. Therefore the cost of compilation is part of the execution time of the program, and so should be minimized. The opposite of this is an ahead-of-time (AOT) compiler which is basically synonymous with "batch compiler". This converts the source code to machine code and then just the machine code is distributed. Therefore, the compiler can be very slow as it doesn't impact the execution time of the resulting program. Nowadays, when people say "compiler" they typically mean an AOT compiler. Indeed, the term "AOT compiler" only really started becoming popular relatively recently when people started making AOT compilers for JIT compiled languages, particularly JavaScript. Many of these languages, e.g. C#, compile to an intermediate language for a VM which is then JIT compiled to machine code at run-time. The term "AOT compiler" has the connotation that the source code will be compiled directly to machine code, so no form of JIT compilation is required at run-time. "Batch compiler" is a bit of an archaic term at this point. The real contrast to a batch compiler when the term was popular was an incremental compiler . Incremental compilation is often associated with languages like Lisp where you had a REPL and you could interactively request the language implementation to compile a specific function. If a function was executed whose compilation had not been requested before, it would typically be interpreted. A batch compiler, by contrast, compiled all functions at once, i.e. in a batch. | {
"source": [
"https://cs.stackexchange.com/questions/76098",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/40409/"
]
} |
76,105 | Suppose you design a computer with a ten-stage pipeline to execute one instruction, with each stage taking 5 nsec. A) How long will it take to execute a program that has 30 sequential instructions? Answer: 195 nsec. B) How long will it take to execute a program that has 30 sequential instructions on a non-pipelined computer? Answer: 1500 nsec. Can someone please help me understand how the answers were achieved? | A JIT (Just-In-Time) compiler compiles code at run-time, i.e. as the program is running. Therefore the cost of compilation is part of the execution time of the program, and so should be minimized. The opposite of this is an ahead-of-time (AOT) compiler which is basically synonymous with "batch compiler". This converts the source code to machine code and then just the machine code is distributed. Therefore, the compiler can be very slow as it doesn't impact the execution time of the resulting program. Nowadays, when people say "compiler" they typically mean an AOT compiler. Indeed, the term "AOT compiler" only really started becoming popular relatively recently when people started making AOT compilers for JIT compiled languages, particularly JavaScript. Many of these languages, e.g. C#, compile to an intermediate language for a VM which is then JIT compiled to machine code at run-time. The term "AOT compiler" has the connotation that the source code will be compiled directly to machine code, so no form of JIT compilation is required at run-time. "Batch compiler" is a bit of an archaic term at this point. The real contrast to a batch compiler when the term was popular was an incremental compiler. Incremental compilation is often associated with languages like Lisp where you had a REPL and you could interactively request the language implementation to compile a specific function. If a function was executed whose compilation had not been requested before, it would typically be interpreted. A batch compiler, by contrast, compiled all functions at once, i.e. in a batch. | {
"source": [
"https://cs.stackexchange.com/questions/76105",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/72882/"
]
} |
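For the record above, the stated answers follow from the standard pipeline timing formula, assuming no stalls (which is evidently what the exercise intends). With $k = 10$ stages of $t = 5$ nsec each and $n = 30$ instructions, the pipelined machine needs all $k$ stage times for the first instruction and one more stage time per additional instruction, so $T = (k + n - 1) \cdot t = (10 + 30 - 1) \cdot 5 = 195$ nsec. Without pipelining, each instruction occupies all ten stages before the next one can start, so $T = n \cdot k \cdot t = 30 \cdot 10 \cdot 5 = 1500$ nsec.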
76,317 | In "Big O", common notations have common names (instead of saying, "Oh of some constant factor"): O(1) is "Constant" O(log n) is "Logarithmic" O(n) is "Linear" O(n^2) is "Quadratic" O(n * log n) is ??? Is it just "n log n" or does it have a special name like the above? | It's called linearithmic time, and is a special case of a more general class known as quasilinear. As the name suggests, the algorithms that fall in this class run in almost linear time; in fact, they have a lower complexity than algorithms which run in $O(n^k)$ with $k > 1$. | {
"source": [
"https://cs.stackexchange.com/questions/76317",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/22917/"
]
} |
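For a quick sense of scale of why linearithmic counts as almost linear, here is a small illustrative Python computation.

import math

n = 10 ** 6
print(n)                         # 1000000
print(round(n * math.log2(n)))   # 19931569 -- only about 20 times the linear term
print(n ** 2)                    # 1000000000000 -- a million times the linear term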