Datasets:
aid (string)  mid (string)  abstract (string)  related_work (string)  ref_abstract (json) 

math9912167
 1631980677
 Author(s): Kuperberg, Greg; Thurston, Dylan P.  Abstract: We give a purely topological definition of the perturbative quantum invariants of links and 3-manifolds associated with Chern-Simons field theory. Our definition is as close as possible to one given by Kontsevich. We will also establish some basic properties of these invariants, in particular that they are universally finite type with respect to algebraically split surgery and with respect to Torelli surgery. Torelli surgery is a mutual generalization of blink surgery of Garoufalidis and Levine and clasper surgery of Habiro.
 Two other generalizations that can be considered are invariants of graphs in 3-manifolds, and invariants associated to other flat connections @cite_16 . We will analyze these in future work. Among other things, there should be a general relation between flat bundles and links in 3-manifolds on the one hand and finite covers and branched covers on the other hand @cite_26 .
 {
"cite_N": [
"@cite_16",
"@cite_26"
],
"mid": [
"1481005306",
"1641082372"
],
"abstract": [
"This note is a sequel to our earlier paper of the same title [4] and describes invariants of rational homology 3-spheres associated to acyclic orthogonal local systems. Our work is in the spirit of the Axelrod–Singer papers [1], generalizes some of their results, and furnishes a new setting for the purely topological implications of their work.",
"Recently, Mullins calculated the Casson-Walker invariant of the 2-fold cyclic branched cover of an oriented link in S^3 in terms of its Jones polynomial and its signature, under the assumption that the 2-fold branched cover is a rational homology 3-sphere. Using elementary principles, we provide a similar calculation for the general case. In addition, we calculate the LMO invariant of the p-fold branched cover of twisted knots in S^3 in terms of the Kontsevich integral of the knot."
]
}

cs9910011
 2168463568
 A statistical model for segmentation and word discovery in child-directed speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described, and results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.
 Model Based Dynamic Programming, hereafter referred to as MBDP-1 @cite_0 , is probably the most recent work that addresses exactly the same issue as that considered in this paper. Both the approach presented in this paper and Brent's MBDP-1 are based on explicit probability models. Approaches not based on explicit probability models include those based on information-theoretic criteria such as MDL, transitional probability, or simple recurrent networks. The maximum likelihood approach due to Olivier:SGL68 is probabilistic in the sense that it is geared towards explicitly calculating the most probable segmentation of each block of input utterances. However, it is not based on a formal statistical model. To avoid needless repetition, we describe only Brent's MBDP-1 below and direct the interested reader to Brent:EPS99, which provides an excellent review of many of the algorithms mentioned above.
 {
"cite_N": [
"@cite_0"
],
"mid": [
"2074546930"
],
"abstract": [
"This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that our algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances."
]
}

cs9911003
 2950670108
 We solve the subgraph isomorphism problem in planar graphs in linear time, for any pattern of constant size. Our results are based on a technique of partitioning the planar graph into pieces of small treewidth, and applying dynamic programming within each piece. The same methods can be used to solve other planar graph problems including connectivity, diameter, girth, induced subgraph isomorphism, and shortest paths.
 Recently we were able to characterize the graphs that can occur at most @math times as a subgraph isomorph in an @math vertex planar graph: they are exactly the 3-connected planar graphs @cite_41 . However, our proof does not lead to an efficient algorithm for 3-connected planar subgraph isomorphism. In this paper we use different techniques which do not depend on high-order connectivity.
 {
"cite_N": [
"@cite_41"
],
"mid": [
"2074992286"
],
"abstract": [
"It is well known that any planar graph contains at most O(n) complete subgraphs. We extend this to an exact characterization: G occurs O(n) times as a subgraph of any planar graph, if and only if G is three-connected. We generalize these results to similarly characterize certain other minor-closed families of graphs; in particular, G occurs O(n) times as a subgraph of the K_{b,c}-free graphs, b ≥ c and c ≤ 4, iff G is c-connected. Our results use a simple Ramsey-theoretic lemma that may be of independent interest. © 1993 John Wiley & Sons, Inc."
]
}

hepth9908200
 2160091034
 Daviau showed the equivalence of matrix Dirac theory, formulated within a spinor bundle S_x ≃ C^4_x, to a Clifford algebraic formulation within the space Clifford algebra C(R^3) ≃ M_2(C) ≃ P, the Pauli algebra (of matrices) ≃ ℍ ⊕ ℍ ≃ biquaternions. We will show that Daviau's map θ : C^4 → M_2(C) is an isomorphism. It is shown that Hestenes' and Parra's formulations are equivalent to Daviau's Clifford algebra formulation, which uses outer automorphisms. The connection between the different formulations is quite remarkable, since it connects the left and right action on the Pauli algebra itself, viewed as a bimodule, with the left (resp. right) action of the enveloping algebra P^e ≃ P ⊗ P^T on P. The isomorphism established in this article and given by Daviau's map clearly shows that right and left actions are of similar type. This should be compared with attempts of Hestenes, Daviau, and others to interpret the right action as the isospin freedom.
 A further genuine and important approach to the spinor-tensor transition was developed, starting probably with Crawford, by P. Lounesto, @cite_6 and references therein. He investigated the question of how a spinor field can be reconstructed from known tensor densities. The major characterization is derived, using Fierz-Kofink identities, from elements called boomerangs, because they are able to come back to the spinorial picture. Lounesto's result is a characterization of spinors based on multivector relations which unveils a new, previously unknown type of spinor.
 {
"cite_N": [
"@cite_6"
],
"mid": [
"2082565556"
],
"abstract": [
"A historical review of spinors is given together with a construction of spinor spaces as minimal left ideals of Clifford algebras. Spinor spaces of euclidean spaces over reals have a natural linear structure over reals, complex numbers or quaternions. Clifford algebras have involutions which induce bilinear forms or scalar products on spinor spaces. The automorphism groups of these scalar products of spinors are determined and also classified."
]
}

cs9903014
 1612660921
 We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
 Pioneering research in dynamic runtime optimization was done by Hansen @cite_8 , who first described a fully automated system for runtime code optimization. His system was similar in structure to ours (it was composed of a loader, a profiler, and an optimizer) but used profiling data only to decide when to optimize and what to optimize, not how to optimize. Also, his system interpreted code prior to optimization, since load-time code generation was too memory- and time-consuming at the time.
 {
"cite_N": [
"@cite_8"
],
"mid": [
"2101776604"
],
"abstract": [
"This thesis investigates adaptive compiler systems that perform, during program execution, code optimizations based on the dynamic behavior of the program, as opposed to current approaches that employ a fixed code-generation strategy, i.e., one in which a predetermined set of code optimizations is applied at compile-time to an entire program. The main problems associated with such adaptive systems are studied in general: which optimizations to apply to what parts of the program, and when. Two different optimization strategies result: an ideal scheme which is not practical to implement, and a more basic scheme that is. The design of a practical system is discussed for the FORTRAN IV language. The system was implemented and tested with programs having different behavioral characteristics."
]
}

cs9903014
 1612660921
 We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
 Hansen's work was followed by several other projects that have investigated the benefits of runtime optimization: the Smalltalk @cite_33 and SELF @cite_0 systems, which focused on the benefits of dynamic optimization in an object-oriented environment; "Morph", a project developed at Harvard University @cite_16 ; and the system described by the authors of this paper @cite_4 @cite_30 . Other projects have experimented with optimization at link time rather than at runtime @cite_18 . At link time, many of the problems described in this paper are nonexistent, among them the decisions of when to optimize, what to optimize, and how to replace code. However, there is also a price to pay, namely that link-time optimization cannot be performed in the presence of dynamic loading.
 {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_0",
"@cite_16"
],
"mid": [
"1542872339",
"2112378054",
"1541680547",
"1993318777",
"2167997514",
"2153228154"
],
"abstract": [
"Despite the apparent success of the Java Virtual Machine, its lackluster performance makes it ill-suited for many speed-critical applications. Although the latest just-in-time compilers and dedicated Java processors try to remedy this situation, optimized code compiled directly from a C program source is still considerably faster than software transported via Java bytecodes. This is true even if the Java bytecodes are subsequently further translated into native code. In this paper, we claim that these performance penalties are not a necessary consequence of machine-independence, but related to Java's particular intermediate representation and runtime architecture. We have constructed a prototype and are further developing a software transportability scheme founded on a tree-based alternative to Java bytecodes. This tree-based intermediate representation is not only twice as compact as Java bytecodes, but also contains more high-level information, some of which is critical for advanced code optimizations. Our architecture not only provides on-the-fly code generation from this intermediate representation, but also continuous reoptimization of the existing codebase by a low-priority background process. The reoptimization process is guided by up-to-the-minute profiling data, leading to superior runtime performance.",
"Modifying code after the compiler has generated it can be useful for both optimization and instrumentation. Several years ago we designed the Mahler system, which uses link-time code modification for a variety of tools on our experimental Titan workstations. Killian's Pixie tool works even later, translating a fully linked MIPS executable file into a new version with instrumentation added. Recently we wanted to develop a hybrid of the two, that would let us experiment with both optimization and instrumentation on a standard workstation, preferably without requiring us to modify the normal compilers and linker. This paper describes prototypes of two hybrid systems, closely related to Mahler and Pixie. We implemented basic-block counting in both, and compare the resulting time and space expansion to those of Mahler and Pixie.",
"In the past few years, code optimization has become a major field of research. Many efforts have been undertaken to find new sophisticated algorithms that fully exploit the computing power of today's advanced microprocessors. Most of these algorithms do very well in statically linked, monolithic software systems, but perform perceptibly worse in extensible systems. The modular structure of these systems imposes a natural barrier for intermodular compile-time optimizations. In this paper we discuss a different approach in which optimization is no longer performed at compile-time, but is delayed until runtime. Reoptimized module versions are generated on the fly while the system is running, replacing earlier less optimized versions.",
"The Smalltalk-80* programming language includes dynamic storage allocation, full upward funargs, and universally polymorphic procedures; the Smalltalk-80 programming system features interactive execution with incremental compilation, and implementation portability. These features of modern programming systems are among the most difficult to implement efficiently, even individually. A new implementation of the Smalltalk-80 system, hosted on a small microprocessor-based computer, achieves high performance while retaining complete (object code) compatibility with existing implementations. This paper discusses the most significant optimization techniques developed over the course of the project, many of which are applicable to other languages. The key idea is to represent certain runtime state (both code and data) in more than one form, and to convert between forms when needed.",
"Crossing abstraction boundaries often incurs a substantial runtime overhead in the form of frequent procedure calls. Thus, pervasive use of abstraction, while desirable from a design standpoint, may lead to very inefficient programs. Aggressively optimizing compilers can reduce this overhead but conflict with interactive programming environments because they introduce long compilation pauses and often preclude source-level debugging. Thus, programmers are caught on the horns of two dilemmas: they have to choose between abstraction and efficiency, and between responsive programming environments and efficiency. This dissertation shows how to reconcile these seemingly contradictory goals. Four new techniques work together to achieve this:  Type feedback achieves high performance by allowing the compiler to inline message sends based on information extracted from the runtime system.  Adaptive optimization achieves high responsiveness without sacrificing performance by using a fast compiler to generate initial code while automatically recompiling heavily used program parts with an optimizing compiler.  Dynamic deoptimization allows source-level debugging of optimized code by transparently recreating non-optimized code as needed.  Polymorphic inline caching speeds up message dispatch and, more significantly, collects concrete type information for the compiler. With better performance yet good interactive behavior, these techniques reconcile exploratory programming, ubiquitous abstraction, and high performance.",
"The Morph system provides a framework for automatic collection and management of profile information and application of profile-driven optimizations. In this paper, we focus on the operating system support that is required to collect and manage profile information on an end-user's workstation in an automatic, continuous, and transparent manner. Our implementation for a Digital Alpha machine running Digital UNIX 4.0 achieves runtime overheads of less than 0.3% during profile collection. Through the application of three code layout optimizations, we further show that Morph can use statistical profiles to improve application performance. With appropriate system support, automatic profiling and optimization is both possible and effective."
]
}

cs9903014
 1612660921
 We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
 Common to the above-mentioned work is that the main focus has always been on functional aspects, that is, how to profile and which optimizations to perform. Related to this is research on how to boost application performance by combining profiling data and code optimizations at compile time (not at runtime), including work on method-dispatch optimizations for object-oriented programming languages @cite_22 @cite_35 , profile-guided intermodular optimizations @cite_3 @cite_26 , code-positioning techniques @cite_13 @cite_25 , and profile-guided data-cache locality optimizations @cite_29 @cite_10 @cite_12 .
 {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_22",
"@cite_10",
"@cite_29",
"@cite_3",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"2141293928",
"2030400507",
"2164470257",
"2117703621",
"",
"1524758670",
"2116672403",
"2162612712",
"177739376"
],
"abstract": [
"Polymorphic inline caches (PICs) provide a new way to reduce the overhead of polymorphic message sends by extending inline caches to include more than one cached lookup result per call site. For a set of typical object-oriented SELF programs, PICs achieve a median speedup of 11%.",
"This paper describes critical implementation issues that must be addressed to develop a fully automatic inliner. These issues are: integration into a compiler, program representation, hazard prevention, expansion sequence control, and program modification. An automatic inter-file inliner that uses profile information has been implemented and integrated into an optimizing C compiler. The experimental results show that this inliner achieves significant speedups for production C programs.",
"The Smalltalk-80 system makes it possible to write programs quickly by providing object-oriented programming, incremental compilation, runtime type checking, user-extensible data types and control structures, and an interactive graphical interface. However, the potential savings in programming effort have been curtailed by poor performance in widely available computers or high processor cost. Smalltalk-80 systems pose tough challenges for implementors: dynamic data typing, a high-level instruction set, frequent and expensive procedure calls, and object-oriented storage management. The dissertation documents two results that run counter to conventional wisdom: that a reduced instruction set computer can offer excellent performance for a system with dynamic data typing such as Smalltalk-80, and that automatic storage reclamation need not be time-consuming. This project was sponsored by Defense Advance Research Projects Agency (DoD) ARPA Order No. 3803, monitored by Naval Electronic System Command under Contract No. N00034R0251. It was also sponsored by Defense Advance Research Projects Agency (DoD) ARPA Order No. 4871, monitored by Naval Electronic Systems Command under Contract No. N0003984C0089.",
"The cost of accessing main memory is increasing. Machine designers have tried to mitigate the consequences of the processor and memory technology trends underlying this increasing gap with a variety of techniques to reduce or tolerate memory latency. These techniques, unfortunately, are only occasionally successful for pointer-manipulating programs. Recent research has demonstrated the value of a complementary approach, in which pointer-based data structures are reorganized to improve cache locality. This paper studies a technique for using a generational garbage collector to reorganize data structures to produce a cache-conscious data layout, in which objects with high temporal affinity are placed next to each other, so that they are likely to reside in the same cache block. The paper explains how to collect, with low overhead, real-time profiling information about data access patterns in object-oriented languages, and describes a new copying algorithm that utilizes this information to produce a cache-conscious object layout. Preliminary results show that this technique reduces cache miss rates by 21-42% and improves program performance by 14-37% over Cheney's algorithm. We also compare our layouts against those produced by the Wilson-Lam-Moher algorithm, which attempts to improve program locality at the page level. Our cache-conscious object layouts reduce cache miss rates by 20-41% and improve program performance by 18-31% over their algorithm, indicating that improving locality at the page level is not necessarily beneficial at the cache level.",
"",
"We have developed a system called OM to explore the problem of code optimization at link-time. OM takes a collection of object modules constituting the entire program, and converts the object code into a symbolic Register Transfer Language (RTL) form that can be easily manipulated. This RTL is then transformed by intermodule optimization and finally converted back into object form. Although much high-level information about the program is gone at link-time, this approach enables us to perform optimizations that a compiler looking at a single module cannot see. Since object modules are more or less independent of the particular source language or compiler, this also gives us the chance to improve the code in ways that some compilers might simply have missed. To test the concept, we have used OM to build an optimizer that does interprocedural code motion. It moves simple loop-invariant code out of loops, even when the loop body extends across many procedures and the loop control is in a different procedure from the invariant code. Our technique also easily handles "loops" induced by recursion rather than iteration. Our code motion technique makes use of an interprocedural liveness analysis to discover dead registers that it can use to hold loop-invariant results. This liveness analysis also lets us perform interprocedural dead code elimination. We applied our code motion and dead code removal to SPEC benchmarks compiled with optimization using the standard compilers for the DECstation 5000. Our system improved the performance by 5% on average and by more than 14% in one case. More improvement should be possible soon; at present we move only simple load and load-address operations out of loops, and we scavenge registers to hold these values, rather than completely reallocating them. This paper will appear in the March issue of Journal of Programming Languages. It replaces Technical Note TN31, an earlier version of the same material.",
"This paper presents the results of our investigation of code positioning techniques using execution profile data as input into the compilation process. The primary objective of the positioning is to reduce the overhead of the instruction memory hierarchy. After initial investigation in the literature, we decided to implement two prototypes for the Hewlett-Packard Precision Architecture (PA-RISC). The first, built on top of the linker, positions code based on whole procedures. This prototype has the ability to move procedures into an order that is determined by a "closest is best" strategy. The second prototype, built on top of an existing optimizer package, positions code based on basic blocks within procedures. Groups of basic blocks that would be better as straight-line sequences are identified as chains. These chains are then ordered according to branch heuristics. Code that is never executed during the data collection runs can be physically separated from the primary code of a procedure by a technique we devised called procedure splitting. The algorithms we implemented are described through examples in this paper. The performance improvements from our work are also summarized in various tables and charts.",
"A dynamic instruction trace often contains many unnecessary instructions that are required only by the unexecuted portion of the program. Hot-cold optimization (HCO) is a technique that realizes this performance opportunity. HCO uses profile information to partition each routine into frequently executed (hot) and infrequently executed (cold) parts. Unnecessary operations in the hot portion are removed, and compensation code is added on transitions from hot to cold as needed. We evaluate HCO on a collection of large Windows NT applications. HCO is most effective on the programs that are call intensive and have flat profiles, providing a 3-8% reduction in path length beyond conventional optimization.",
""
]
}

cs9903018
 1593496962
 Scripting languages are becoming more and more important as tools for software development, as they provide great flexibility for rapid prototyping and for configuring componentware applications. In this paper we present LuaJava, a scripting tool for Java. LuaJava adopts Lua, a dynamically typed interpreted language, as its script language. Great emphasis is given to the transparency of the integration between the two languages, so that objects from one language can be used inside the other like native objects. The final result of this integration is a tool that allows the construction of configurable Java applications, using off-the-shelf components, at a high level of abstraction.
 For Tcl @cite_13 two integration solutions exist: the TclBlend binding @cite_11 and the Jacl implementation @cite_14 . TclBlend is a binding between Java and Tcl which, like LuaJava, allows Java objects to be manipulated by scripts. Some operations, such as access to fields and static method invocations, require specific functions. Calls to instance methods are handled naturally by Tcl commands.
 {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_11"
],
"mid": [
"2162914120",
"2789138443",
"196441419"
],
"abstract": [
"This paper describes the motivations and strategies behind our group's efforts to integrate the Tcl and Java programming languages. From the Java perspective, we wish to create a powerful scripting solution for Java applications and operating environments. From the Tcl perspective, we want to allow for cross-platform Tcl extensions and leverage the useful features and user community Java has to offer. We are specifically focusing on Java tasks like Java Bean manipulation, where a scripting solution is preferable to using straight Java code. Our goal is to create a synergy between Tcl and Java, similar to that of Visual Basic and Visual C++ on the Microsoft desktop, which makes both languages more powerful together than they are individually.",
"",
"A mechanical brake actuator includes a manual lever which is self-locking in the active braking position. In such position, the lever and associated cable means applies tension to a spring whose force is applied to the plunger of a hydraulic master cylinder included in the conventional turntable hydraulic brake system. In the event of minor leakage and/or thermal changes in the hydraulic braking system, the spring force exerted by the mechanical actuator maintains safe braking pressure when the crane is parked. When the mechanical actuator is in a release mode, the turntable hydraulic brake is foot-pedal operated from the crane operator's cab without interference from the mechanical actuator."
]
}

hepth9807171
 1774239421
 The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, general classical p-brane solutions. Black and extremal branes are reviewed, along with their semiclassical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the worldvolume of branes, and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
 The main objective of this chapter was to study the basic @math branes that one encounters in M-theory, and to treat them in a unified way. The need to unify the treatment is inspired by U-duality @cite_22 @cite_86 @cite_144 , which states that from the effective lower-dimensional spacetime point of view, all the charges carried by the different branes are on the same footing. While string theory 'breaks' this U-duality symmetry, choosing the NS-NS string to be the fundamental object of the perturbative theory, the supergravity low-energy effective theories realize the U-duality at the classical level.
 {
"cite_N": [
"@cite_86",
"@cite_22",
"@cite_144"
],
"mid": [
"1987603965",
"2141847212",
""
],
"abstract": [
"The strong coupling dynamics of string theories in dimension d ⩾ 4 are studied. It is argued, among other things, that eleven-dimensional supergravity arises as a low-energy limit of the ten-dimensional Type IIA superstring, and that a recently conjectured duality between the heterotic string and Type IIA superstrings controls the strong coupling dynamics of the heterotic string in five, six, and seven dimensions and implies S-duality for both heterotic and Type II strings.",
"The effective action for type II string theory compactified on a six-torus is N = 8 supergravity, which is known to have an E_7 duality symmetry. We show that this is broken by quantum effects to a discrete subgroup, E_7(Z), which contains both the T-duality group O(6, 6; Z) and the S-duality group SL(2; Z). We present evidence for the conjecture that E_7(Z) is an exact 'U-duality' symmetry of type II string theory. This conjecture requires certain extreme black hole states to be identified with massive modes of the fundamental string. The gauge bosons from the Ramond-Ramond sector couple not to string excitations but to solitons. We discuss similar issues in the context of toroidal string compactifications to other dimensions, compactifications of the type II string on K3 × T^2, and compactifications of 11-dimensional supermembrane theory.",
""
]
}

hepth9807171
 1774239421
 The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the worldvolume of branes and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
 It should also not be underestimated that the derivation of the intersecting solutions presented in this chapter is a thorough consistency check of all the dualities acting on, and between, the supergravity theories. It is straightforward to check that, starting from one definite configuration, all its dual configurations are also found among the solutions presented here (with the exception of the solutions involving waves and KK monopoles). In this line of thought, we presented a recipe for building five- and four-dimensional extreme supersymmetric black holes. Some of these black holes were used in the literature to perform a microscopic counting of their entropy, as in @cite_191 @cite_61 for the 5-dimensional ones. Actually, the only (5-dimensional) black holes in the 'U-duality orbit' that were counted were the ones containing only D-branes and KK momentum. It is still an open problem to directly count the microscopic states of the same black hole in a different M-theoretic formulation.
 {
"cite_N": [
"@cite_191",
"@cite_61"
],
"mid": [
"2130491267",
"2045285156"
],
"abstract": [
"Abstract The BekensteinHawking areaentropy relation S BH = A 4 is derived for a class of fivedimensional extremal black holes in string theory by counting the degeneracy of BPS solition bound states.",
"Abstract Strominger and Vafa have used Dbrane technology to identify and precisely count the degenerate quantum states responsible for the entropy of certain extremal, BPSsaturated black holes. Here we give a TypeII Dbrane description of a class of extremal and nonextremal fivedimensional ReissnerNordstrom solutions and identify a corresponding set of degenerate Dbrane configurations. We use this information to do a string theory calculation of the entropy, radiation rate and “Hawking” temperature. The results agree perfectly with standard Hawking results for the corresponding nearly extremal ReissnerNordstrom black holes. Although these calculations suffer from openstring strong coupling problems, we give some reasons to believe that they are nonetheless qualitatively reliable. In this optimistic scenario there would be no “information loss” in black hole quantum evolution."
]
}
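The area-entropy relation invoked in the entropy countings above is the Bekenstein-Hawking formula; as a reminder (a standard result, stated here in conventions with k_B = c = 1):

```latex
% Bekenstein-Hawking entropy for a black hole with horizon area A
S_{\mathrm{BH}} \;=\; \frac{A}{4 G \hbar} ,
% which reduces to S_{BH} = A/4 in Planck units (G = \hbar = 1),
% the form quoted in the first cited abstract above.
```

The microscopic D-brane countings referred to in @cite_191 @cite_61 reproduce precisely this value from the degeneracy of BPS bound states.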

hepth9807171
 1774239421
 The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the worldvolume of branes and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
 Some of the intersection rules intersectionrules point towards an M-theory interpretation in terms of open branes ending on other branes. This idea will be elaborated and made firmer in the next chapter. It suffices to say here that this interpretation is consistent with dualities if we postulate that the 'open character' of a fundamental string ending on a D-brane is invariant under dualities. S-duality directly implies, for instance, that D-strings can end on NS5-branes @cite_176 . Then T-dualities imply that all the D-branes can end on the NS5-brane. In particular, the fact that the D2-brane can end on the NS5-brane should imply that the M5-brane is a D-brane for the M2-branes @cite_176 @cite_143 @cite_38 (this could also be extrapolated from the fact that a F1-string ends on a D4-brane). In the next chapter we will see how these ideas are further supported by the presence of the Chern-Simons terms in the supergravities, and by the structure of the worldvolume effective actions of the branes.
 {
"cite_N": [
"@cite_143",
"@cite_38",
"@cite_176"
],
"mid": [
"1981788818",
"2032622195",
""
],
"abstract": [
"Abstract Various aspects of branes in the recently proposed matrix model for Mtheory are discussed. A careful analysis of the supersymmetry algebra of the matrix model uncovers some central changes which can be activated only in the large N limit. We identify the states with nonzero charges as branes of different dimensions.",
"Abstract We formulate boundary conditions for an open membrane that ends on the fivebrane of M theory. We show that the dynamics of the elevendimensional fivebrane can be obtained from the quantization of a “small membrane” that is confined to a single fivebrane and which moves with the speed of light. This shows that the elevendimensional fivebrane has an interpretation as a D brane of an open supermembrane as has recently been proposed by Strominger and Townsend. We briefly discuss the boundary dynamics of an infinitely extended planar membrane that is stretched between two parallel fivebranes.",
""
]
}

hepth9807171
 1774239421
 The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the worldvolume of branes and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
 In this chapter we presented only extremal configurations of intersecting branes. The natural further step would be to consider also non-extremal configurations of intersecting branes. There is however a subtlety: there could be a difference between intersections of non-extremal branes, and non-extremal intersections of otherwise extremal branes. If we focus on bound states (and thus not on configurations of well-separated branes), it appears that a non-extremal configuration would be characterized, for instance, by @math charges and by its mass. There is only one additional parameter with respect to the extremal configurations. Physically, we could hardly have expected to have, say, as many non-extremality parameters as the number of branes in the bound state. Indeed, non-extremality can be roughly associated with the branes being in an excited state, and it would thus have been very unlikely for the excitations not to mix between the various branes in the bound state. Non-extremal intersecting brane solutions were first found in @cite_48 , and were derived from the equations of motion, following an approach similar to the one here, in @cite_81 @cite_16 .
 {
"cite_N": [
"@cite_48",
"@cite_81",
"@cite_16"
],
"mid": [
"2054280159",
"",
"2086840642"
],
"abstract": [
"Abstract We present nonextreme generalisations of intersecting p brane solutions of elevendimensional supergravity which upon toroidal compactification reduce to nonextreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a nonextreme configuration of three intersecting fivebranes with a boost along the common string or from a nonextreme intersecting system of two twobranes and two fivebranes. The D = 5 black holes arise from three intersecting twobranes or from a system of an intersecting twobrane and fivebrane with a boost along the common string. The fivebrane and twobrane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a nonextreme configuration of two intersecting twobranes. We discuss the expressions for the corresponding masses and entropies.",
"",
"Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. We then specialize to the branes occurring in type II string theories and in Mtheory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the Dbranes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5brane."
]
}
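The intersection rules discussed in this record can be stated compactly. In the conventions of the derivation sketched in the last cited abstract (gravity coupled to a dilaton @math and several n-form field strengths with couplings e^{aφ}F²), a rule of the following schematic form emerges for two extremal branes with q_A and q_B spatial worldvolume directions intersecting over q̄ of them; the normalizations below are our reconstruction and should be checked against the original references:

```latex
% Zero-binding-energy intersection rule (schematic reconstruction):
% \epsilon = +1 for electrically, -1 for magnetically charged branes,
% a_{A,B} are the dilaton couplings of the two branes.
\bar{q} + 1 \;=\; \frac{(q_A + 1)(q_B + 1)}{D - 2}
  \;-\; \tfrac{1}{2}\,\epsilon_A a_A\,\epsilon_B a_B
```

For the M-branes (a = 0, D = 11) this gives, e.g., M2 ⊥ M2 over a point (q̄ = 0) and M5 ⊥ M5 over a 3-brane (q̄ = 3), matching the harmonic superposition rule.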

hepth9807171
 1774239421
 The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the worldvolume of branes and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
 Supergravity solutions corresponding to D-branes at angles were found in @cite_31 @cite_59 @cite_178 . The resulting solutions contain, as expected, off-diagonal elements in the internal metric, and the derivation from the equations of motion as in @cite_59 is accordingly rather intricate.
 {
"cite_N": [
"@cite_31",
"@cite_178",
"@cite_59"
],
"mid": [
"2125497699",
"2048444779",
""
],
"abstract": [
"A lowenergy background field solution is presented which describes several Dmembranes oriented at angles with respect to one another. The mass and charge densities for this configuration are computed and found to saturate the Bogomol close_quote nyiPrasadSommerfeld bound, implying the preservation of onequarter of the supersymmetries. T duality is exploited to construct new solutions with nontrivial angles from the basic one. copyright ital 1997 ital The American Physical Society",
"We construct the most general supersymmetric configuration of @math branes and @math branes on a 6torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group.",
""
]
}

hepth9807171
 1774239421
 The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the worldvolume of branes and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
 Other half-supersymmetric bound states of this class are the @math multiplets of 1- and 5-branes in type IIB theory @cite_95 @cite_0 , or more precisely the configurations F1 @math D1 and NS5 @math D5, also called @math 1- and 5-branes, where @math is the NS-NS charge and @math the RR charge of the compound. The classical solutions corresponding to this latter case were actually found more simply by performing an @math transformation on the F1 or NS5 solutions.
 {
"cite_N": [
"@cite_0",
"@cite_95"
],
"mid": [
"2022574854",
"2127535930"
],
"abstract": [
"The recent discovery of an explicit conformal field theory description of Type II pbranes makes it possible to investigate the existence of bound states of such objects. In particular, it is possible with reasonable precision to verify the prediction that the Type IIB superstring in ten dimensions has a family of soliton and bound state strings permuted by SL(2,Z). The spacetime coordinates enter tantalizingly in the formalism as noncommuting matrices.",
"An SL(2, Z) family of string solutions of type IIB supergravity in ten dimensions is constructed. The solutions are labeled by a pair of relatively prime integers, which characterize charges of the threeform field strengths. The string tensions depend on these charges in an SL(2, Z) covariant way. Compactifying on a circle and identifying with elevendimensional supergravity compactified on a torus implies that the modulus of the IIB theory should be equated to the modular parameter of the torus."
]
}

hepth9807171
 1774239421
 The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the worldvolume of branes and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
 In @cite_106 (inspired by @cite_54 ) a solution is presented which corresponds to a M5 @math M5=1 configuration; it follows the harmonic superposition rule, provided however that the harmonic functions depend on the respective relative transverse spaces (i.e. they are functions of two different spaces). The problem now is that the harmonic functions do not depend on the overall transverse space (which is 1-dimensional in the case above), so the configuration is not localized there. A method actually inspired by the one presented here to derive the intersecting brane solutions has been applied in @cite_89 to the intersections of this second kind. Imposing that the functions depend on the relative transverse space(s) (with factorized dependence) and not on the overall one, the authors of @cite_89 arrive at a formula for the intersections very similar to intersectionrules , with @math on the l.h.s. This rule correctly reproduces the M5 @math M5=1 configuration, and moreover also all the configurations of two D-branes with 8 Neumann-Dirichlet directions, which preserve @math supersymmetries but were excluded from the intersecting solutions derived in this chapter (only the configurations with 4 ND directions were found as solutions). One such configuration is e.g. D0 @math D8.
 {
"cite_N": [
"@cite_54",
"@cite_106",
"@cite_89"
],
"mid": [
"2065552713",
"1992572456",
""
],
"abstract": [
"We derive an exact stringlike soliton solution of [ital D]=10 heterotic string theory. The solution possesses SU(2)[times]SU(2) instanton structure in the eightdimensional space transverse to the world sheet of the soliton.",
"We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0brane, a fivebrane overlapping a fivebrane in a 3brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D2branes in D = 10, Tduality generates new overlapping Dbrane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D5branes overlapping in a string. Tduality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.",
""
]
}

1903.05435
 2921710190
 In this work, we are interested in the applications of big data in the telecommunication domain, analysing two weeks of datasets provided by Telecom Italia for Milan and Trento. Our objective is to identify hotspots, which are places with very high communication traffic relative to others, and to measure the interaction between them. We model the hotspots as nodes in a graph and then apply node centrality metrics that quantify the importance of each node. We review five node centrality metrics and show that they can be divided into two families: the first family is composed of closeness and betweenness centrality, whereas the second family consists of degree, PageRank and eigenvector centrality. We then proceed with a statistical analysis in order to evaluate the consistency of the results over the two weeks. We find that the ranking of the hotspots under the various centrality metrics remains practically the same over time for both Milan and Trento. We further identify that the relative difference of the values of the metrics is smaller for PageRank centrality than for closeness centrality, and this holds for both Milan and Trento. Finally, our analysis reveals that the variance of the results is significantly smaller for Trento than for Milan.
 Nowadays, telecom companies widely use big data in order to mine the behaviour of their customers, improve the quality of service that they provide and reduce customer churn. Towards this direction, demographic statistics, network deployments and call detail records (CDRs) are key factors that need to be carefully integrated in order to make accurate predictions. Though there are various open-source data for the first two factors, researchers rarely have access to traffic demand data, since it is sensitive information for the operators. Therefore, researchers need to rely on synthetic models, which do not always capture accurately large-scale mobile networks @cite_5 .
 {
"cite_N": [
"@cite_5"
],
"mid": [
"2741581007"
],
"abstract": [
"In a world of open data and largescale measurements, it is often feasible to obtain a realworld trace to fit to one's research problem. Feasible, however, does not imply simple. Taking nextgeneration cellular network planning as a case study, in this paper we describe a largescale dataset, combining topology, traffic demand from call detail records, and demographic information throughout a whole country. We investigate how these aspects interact, revealing effects that are normally not captured by smallerscale or synthetic datasets. In addition to making the resulting dataset available for download, we discuss how our experience can be generalized to other scenarios and case studies, i.e., how everyone can construct a similar dataset from publicly available information."
]
}
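The hotspot-graph idea above is easy to sketch in code: build a small weighted interaction graph and rank nodes by two metrics from the paper's second family, weighted degree and PageRank. The graph below is hypothetical toy data; in the paper, nodes are CDR-derived hotspots in Milan and Trento.

```python
# Toy hotspot interaction graph (hypothetical weights: interaction strength).
edges = {("A", "B"): 5.0, ("A", "C"): 3.0, ("B", "C"): 4.0,
         ("C", "D"): 2.5, ("D", "E"): 1.0}
nodes = sorted({n for pair in edges for n in pair})

# Weighted degree centrality: total interaction strength per hotspot.
degree = {n: 0.0 for n in nodes}
for (u, v), w in edges.items():
    degree[u] += w
    degree[v] += w

# Undirected weighted adjacency, for the PageRank iteration.
adj = {n: {} for n in nodes}
for (u, v), w in edges.items():
    adj[u][v] = w
    adj[v][u] = w

# PageRank by power iteration with damping d = 0.85: each node spreads its
# rank to neighbours proportionally to edge weight.
d = 0.85
ranks = {n: 1.0 / len(nodes) for n in nodes}
for _ in range(100):
    ranks = {n: (1 - d) / len(nodes)
                + d * sum(ranks[m] * w / sum(adj[m].values())
                          for m, w in adj[n].items())
             for n in nodes}

rank_order = sorted(nodes, key=ranks.get, reverse=True)
print("degree ranking:  ", sorted(nodes, key=degree.get, reverse=True))
print("pagerank ranking:", rank_order)
```

The first family (closeness, betweenness) would additionally need shortest paths, typically computed on inverse interaction strength so that strong links count as short distances.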

1903.05435
 2921710190
 In this work, we are interested in the applications of big data in the telecommunication domain, analysing two weeks of datasets provided by Telecom Italia for Milan and Trento. Our objective is to identify hotspots, which are places with very high communication traffic relative to others, and to measure the interaction between them. We model the hotspots as nodes in a graph and then apply node centrality metrics that quantify the importance of each node. We review five node centrality metrics and show that they can be divided into two families: the first family is composed of closeness and betweenness centrality, whereas the second family consists of degree, PageRank and eigenvector centrality. We then proceed with a statistical analysis in order to evaluate the consistency of the results over the two weeks. We find that the ranking of the hotspots under the various centrality metrics remains practically the same over time for both Milan and Trento. We further identify that the relative difference of the values of the metrics is smaller for PageRank centrality than for closeness centrality, and this holds for both Milan and Trento. Finally, our analysis reveals that the variance of the results is significantly smaller for Trento than for Milan.
 For example, the authors in @cite_4 analyse a heterogeneous cellular network which consists of different types of nodes, such as macrocells and microcells. A popular model is the one from Wyner @cite_0 , but it fails to fully capture a real heterogeneous cellular network because it is simplistic. Another approach is to use the spatial Poisson point process (SPPP) model @cite_9 , which can be derived from the premise that all base stations are uniformly distributed. However, a city can be classified into different areas with different population densities; these areas can be characterised as dense urban, urban and suburban. To classify the heterogeneous networks into these areas, the authors introduce SPPP for homogeneous and inhomogeneous sets. They show that the SPPP model captures accurately both urban and suburban areas, whereas this is not the case for dense urban areas, because a considerable population is concentrated in small areas.
 {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_4"
],
"mid": [
"2131070905",
"",
"2005411736"
],
"abstract": [
"The Wyner model has been widely used to model and analyze cellular networks due to its simplicity and analytical tractability. Its key aspects include fixed user locations and the deterministic and homogeneous interference intensity. While clearly a significant simplification of a real cellular system, which has random user locations and interference levels that vary by several orders of magnitude over a cell, a common presumption by theorists is that the Wyner model nevertheless captures the essential aspects of cellular interactions. But is this true? To answer this question, we compare the Wyner model to a model that includes random user locations and fading. We consider both uplink and downlink transmissions and both outagebased and averagebased metrics. For the uplink, for both metrics, we conclude that the Wyner model is in fact quite accurate for systems with a sufficient number of simultaneous users, e.g., a CDMA system. Conversely, it is broadly inaccurate otherwise. Turning to the downlink, the Wyner model becomes inaccurate even for systems with a large number of simultaneous users. In addition, we derive an approximation for the main parameter in the Wyner model  the interference intensity term, which depends on the path loss exponent.",
"",
"In heterogeneous cellular networks spatial characteristics of base stations (BSs) influence the system performance intensively. Existing models like twodimensional hexagonal grid model or homogeneous spatial poisson point process (SPPP) are based on the assumption that BSs are ideal or uniformly distributed, but the aggregation behavior of users in hot spots has an important effect on the location of low power nodes (LPNs), so these models fail to characterize the distribution of BSs in the current mobile cellular networks. In this paper, firstly existing spatial models are analyzed. Then, based on real data from a mobile operator in one large city of China, a set of spatial models is proposed in three typical regions: dense urban, urban and suburban. For dense urban area, “Two Tiers Poisson Cluster Superimposed Process” is proposed to model the spatial characteristics of realworld BSs. Specifically, for urban and suburban area, conventional SPPP model still can be used. Finally, the fundamental relationship between user behavior and BS distribution is illustrated and summarized. Numerous results show that SPPP is only appropriate in the urban and suburban regions where users are not gathered together obviously. Principal parameters of these models are provided as reference for the theoretical analysis and computer simulation, which describe the complex spatial configuration more reasonably and reflect the current mobile cellular network performance more precisely."
]
}
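The homogeneous SPPP base-station model mentioned above is straightforward to simulate: the number of stations in a window is Poisson-distributed with mean intensity × area, and their positions are uniform. A minimal sketch follows; the intensity values are hypothetical (real ones would be fitted to operator data as in the cited study), and the clustered variant needed for dense urban areas would require a Poisson cluster process on top of this.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method: valid while exp(-lam) does not underflow (lam < ~700).
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def homogeneous_sppp(intensity, width, height, rng):
    """Sample a homogeneous spatial Poisson point process on a
    width x height window: Poisson count, uniform positions."""
    n = sample_poisson(intensity * width * height, rng)
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

rng = random.Random(42)
# Hypothetical intensities (base stations per km^2) for two area types.
suburban = homogeneous_sppp(1.0, 10.0, 10.0, rng)     # ~100 points expected
dense_urban = homogeneous_sppp(5.0, 10.0, 10.0, rng)  # ~500 points expected
print(len(suburban), len(dense_urban))
```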

1903.05355
 2968491849
 Learning the dynamics of robots from data can help achieve more accurate tracking controllers, or aid their navigation algorithms. However, when the actual dynamics of the robots change due to external conditions, online adaptation of their models is required to maintain high fidelity performance. In this work, a framework for online learning of robot dynamics is developed to adapt to such changes. The proposed framework employs an incremental support vector regression method to learn the model sequentially from data streams. In combination with the incremental learning, strategies for including and forgetting data are developed to obtain better generalization over the whole state space. The framework is tested in simulation and real experimental scenarios demonstrating its adaptation capabilities to changes in the robot’s dynamics.
 In the field of marine robotics, @cite_10 used locally weighted projection regression to compensate for the mismatch between the physics-based model and the sensor readings of the AUV Nessie. Autoregressive networks augmented with a genetic algorithm as a gating network were used to identify the model of a simulated AUV with variable mass. In a previous work @cite_16 , an online adaptation method was proposed to model the change in the damping forces resulting from a structural change of an AUV's mechanical structure. The algorithm showed good adaptation capability but was limited to modelling the damping effect of an AUV model. In this work we build upon the results of @cite_14 @cite_11 to provide a general framework for online learning of fully coupled nonlinear AUV dynamics, and we validate the proposed approach on simulated data as well as real robot data.
 {
"cite_N": [
"@cite_14",
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"2766583623",
"2774694743",
"2022798939",
""
],
"abstract": [
"This work addresses a data driven approach which employs a machine learning technique known as Support Vector Regression (SVR), to identify the coupled dynamical model of an autonomous underwater vehicle. To train the regressor, we use a dataset collected from the robot's onboard navigation sensors and actuators. To achieve a better fit to the experimental data, a variant of a radialbasisfunction kernel is used in combination with the SVR which accounts for the different complexities of each of the contributing input features of the model. We compare our method to other explicit hydrodynamic damping models that were identified using the total least squares method and with less complex SVR methods. To analyze the transferability, we clearly separate training and testing data obtained in realworld experiments. Our presented method shows much better results especially compared to classical approaches.",
"This paper presents an online technique which employs incremental support vector regression to learn the damping term of an underwater vehicle motion model, subject to dynamical changes in the vehicle's body. To learn the damping term, we use data collected from the robot's onboard navigation sensors and actuator encoders. We introduce a new sampleefficient methodology which accounts for adding new training samples, removing old samples, and outlier rejection. The proposed method is tested in a realworld experimental scenario to account for the model's dynamical changes due to a change in the vehicle's geometrical shape.",
"Navigation is instrumental in the successful deployment of Autonomous Underwater Vehicles (AUVs). Sensor hardware is installed on AUVs to support navigational accuracy. Sensors, however, may fail during deployment, thereby jeopardizing the mission. This work proposes a solution, based on an adaptive dynamic model, to accurately predict the navigation of the AUV. A hydrodynamic model, derived from simple laws of physics, is integrated with a powerful nonparametric regression method. The incremental regression method, namely the Locally Weighted Projection Regression (LWPR), is used to compensate for unmodeled dynamics, as well as for possible changes in the operating conditions of the vehicle. The augmented hydrodynamic model is used within an Extended Kalman Filter, to provide optimal estimations of the AUV’s position and orientation. Experimental results demonstrate an overall improvement in the prediction of the vehicle’s acceleration and velocity.",
""
]
}
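The include/forget strategy described above can be illustrated with a deliberately simplified stand-in: an online linear model trained by SGD over a sliding window, where new samples are included only when the current model explains them poorly and old samples are forgotten as the window slides. This is a hypothetical sketch, not the paper's method (which uses incremental support vector regression and richer sample-selection rules).

```python
import random
from collections import deque

class OnlineWindowRegressor:
    """Toy online learner: linear model + SGD over a sliding window,
    with a simple include rule (train only on poorly-explained samples)
    and a forget rule (deque evicts the oldest samples)."""

    def __init__(self, n_features, window=50, lr=0.05, novelty=0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.window = deque(maxlen=window)  # forgetting: old samples drop out
        self.lr = lr
        self.novelty = novelty

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        # Include rule: skip samples the current model already explains well.
        if abs(y - self.predict(x)) < self.novelty:
            return
        self.window.append((x, y))
        # One SGD pass over the retained window.
        for xs, ys in self.window:
            e = ys - self.predict(xs)
            for i, xi in enumerate(xs):
                self.w[i] += self.lr * e * xi
            self.b += self.lr * e

# The simulated "dynamics" change halfway through the stream; the sliding
# window lets the model forget the old regime and adapt to the new one.
rng = random.Random(0)
model = OnlineWindowRegressor(n_features=1)
for t in range(400):
    x = [rng.uniform(-1.0, 1.0)]
    slope = 1.0 if t < 200 else -2.0  # abrupt change in the true dynamics
    model.update(x, slope * x[0])
print(round(model.w[0], 2))
```

After the change at t = 200, freshly included samples gradually displace the old-regime ones, so the learned slope tracks the new dynamics rather than averaging over both regimes.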

1903.05454
 2950587559
 In this work, we present a camera geo-positioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geo-positioning error by up to 20%.
 In recent years, research regarding image matching has been influenced by developments in other areas of computer vision. Deep learning architectures have been developed both for image matching @cite_10 @cite_1 @cite_17 and for geo-positioning @cite_13 @cite_5 @cite_18 , with attractive results.
 {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_1",
"@cite_5",
"@cite_10",
"@cite_17"
],
"mid": [
"1946093182",
"2199890863",
"2609950940",
"1969891195",
"2204975001",
"2614218061"
],
"abstract": [
"The recent availability of geotagged images and rich geospatial data has inspired a number of algorithms for image-based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned cross-view image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.",
"We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly outperform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales.",
"Location recognition is commonly treated as visual instance retrieval on \"street view\" imagery. The dataset items and queries are panoramic views, i.e. groups of images taken at a single location. This work introduces a novel panorama-to-panorama matching process, either by aggregating features of individual images in a group or by explicitly constructing a larger panorama. In either case, multiple views are used as queries. We reach near-perfect location recognition on a standard benchmark with only four query views.",
"We address the problem of geo-registering ground-based multi-view stereo models by ground-to-aerial image matching. The main contribution is a fully automated geo-registration pipeline with a novel viewpoint-dependent matching method that handles ground-to-aerial viewpoint variation. We conduct large-scale experiments which consist of many popular outdoor landmarks in Rome. The proposed approach demonstrates a high success rate for the task, and dramatically outperforms state-of-the-art techniques, yielding geo-registration at pixel-level accuracy.",
"Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It has also been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptors. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, and hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that, in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when, e.g., learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state of the art on four common benchmarks considerably.",
"We propose an attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature). The new feature is based on convolutional neural networks, which are trained only with image-level annotations on a landmark image dataset. To identify semantically useful local features for image retrieval, we also propose an attention mechanism for keypoint selection, which shares most network layers with the descriptor. This framework can be used for image retrieval as a drop-in replacement for other keypoint detectors and descriptors, enabling more accurate feature matching and geometric verification. Our system produces reliable confidence scores to reject false positives; in particular, it is robust against queries that have no correct match in the database. To evaluate the proposed descriptor, we introduce a new large-scale dataset, referred to as the Google-Landmarks dataset, which involves challenges in both database and query such as background clutter, partial occlusion, multiple landmarks, objects in variable scales, etc. We show that DELF outperforms the state-of-the-art global and local descriptors in the large-scale setting by significant margins. Code and dataset can be found at the project webpage: this https URL ."
]
}

1903.05454
 2950587559
 In this work, we present a camera geo-positioning system based on matching a query image against a database of panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geo-positioning error by up to 20%.
 Convolutional features extracted from the deep layers of CNNs have shown great utility when addressing image matching and retrieval problems. Babenko et al. @cite_10 employ pre-trained networks to generate descriptors based on high-level convolutional features for retrieving images of various landmarks. Sunderhauf et al. @cite_2 address urban scene recognition, employing salient regions and convolutional features of local objects. This method is extended in @cite_8 , where additional spatial information is used to improve performance.
 {
"cite_N": [
"@cite_8",
"@cite_10",
"@cite_2"
],
"mid": [
"2518534307",
"2204975001",
"1162411702"
],
"abstract": [
"Recent work by [1] demonstrated improved visual place recognition using proposal regions coupled with features from convolutional neural networks (CNN) to match landmarks between views. In this work we extend the approach by introducing descriptors built from landmark features which also encode the spatial distribution of the landmarks within a view. Matching descriptors then enforces consistency of the relative positions of landmarks between views. This has a significant impact on performance. For example, in experiments on 10 image-pair datasets, each consisting of 200 urban locations with significant differences in viewing positions and conditions, we recorded an average precision of around 70% (at 100% recall), compared with 58% obtained using whole-image CNN features and 50% for the method in [1].",
"Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It has also been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptors. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, and hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that, in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when, e.g., learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state of the art on four common benchmarks considerably.",
"Place recognition has long been an incompletely solved problem in that all approaches involve significant compromises. Current methods address many but never all of the critical challenges of place recognition – viewpoint-invariance, condition-invariance and minimizing training requirements. Here we present an approach that adapts state-of-the-art object proposal techniques to identify potential landmarks within an image for place recognition. We use the astonishing power of convolutional neural network features to identify matching landmark proposals between images to perform place recognition over extreme appearance and viewpoint variations. Our system does not require any form of training; all components are generic enough to be used off-the-shelf. We present a range of challenging experiments in varied viewpoint and environmental conditions. We demonstrate superior performance to current state-of-the-art techniques. Furthermore, by building on existing and widely used recognition frameworks, this approach provides a highly compatible place recognition system with the potential for easy integration of other techniques such as object detection and semantic scene interpretation."
]
}
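The related-work passage above highlights sum pooling of deep convolutional features as the simple aggregation that works best for retrieval. A minimal sketch of that idea, assuming a pre-computed feature map of shape (H, W, C) rather than a real CNN (the maps below are random stand-ins):

```python
import numpy as np

def sum_pool_descriptor(feature_map):
    """Sum-pool a convolutional feature map (H, W, C) over spatial
    locations into a C-dimensional global descriptor, then L2-normalize,
    following the sum-pooling aggregation discussed above."""
    d = feature_map.sum(axis=(0, 1))         # sum pooling over all locations
    return d / (np.linalg.norm(d) + 1e-12)   # L2 normalization

def rank_database(query_map, db_maps):
    """Return database indices sorted by cosine similarity to the query."""
    q = sum_pool_descriptor(query_map)
    db = np.stack([sum_pool_descriptor(m) for m in db_maps])
    return np.argsort(-(db @ q))

rng = np.random.default_rng(1)
maps = [rng.standard_normal((7, 7, 32)) for _ in range(5)]
# A query that is a lightly perturbed copy of database item 3
# should rank that item first.
query = maps[3] + 0.05 * rng.standard_normal((7, 7, 32))
ranking = rank_database(query, maps)
print(ranking[0])  # → 3
```

In a real system the maps would come from a CNN's last convolutional layer, optionally followed by PCA whitening as in the cited work; the retrieval step (normalized dot products) is unchanged.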

1903.05454
 2950587559
 In this work, we present a camera geo-positioning system based on matching a query image against a database of panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geo-positioning error by up to 20%.
 The problem of geo-positioning can be seen as a dedicated branch of image retrieval. In this case, the objective is to compute the extrinsic parameters (or coordinates) of the camera capturing the query image, based on matched geo-referenced images from a database. Many different algorithms and neural network architectures attempt to identify the geographical location of a street-level query image. Lin et al. @cite_13 learn deep representations for matching aerial and ground images. Workman et al. @cite_18 fuse spatial features at multiple scales with street-level features to solve the problem of geolocalization. In @cite_5 , a fully automated processing pipeline matches multi-view stereo (MVS) models to aerial images; this matching algorithm handles the viewpoint variation between aerial and street-level images.
 {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_13"
],
"mid": [
"1969891195",
"2199890863",
"1946093182"
],
"abstract": [
"We address the problem of geo-registering ground-based multi-view stereo models by ground-to-aerial image matching. The main contribution is a fully automated geo-registration pipeline with a novel viewpoint-dependent matching method that handles ground-to-aerial viewpoint variation. We conduct large-scale experiments which consist of many popular outdoor landmarks in Rome. The proposed approach demonstrates a high success rate for the task, and dramatically outperforms state-of-the-art techniques, yielding geo-registration at pixel-level accuracy.",
"We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly outperform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales.",
"The recent availability of geotagged images and rich geospatial data has inspired a number of algorithms for image-based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned cross-view image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations."
]
}

1903.05454
 2950587559
 In this work, we present a camera geo-positioning system based on matching a query image against a database of panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geo-positioning error by up to 20%.
 A common factor of the above work is that it requires either the combination of aerial and street-level images for geo-positioning, or extensive training on specific datasets. In both cases, the solutions cannot be easily generalized. In our approach, we utilize only geo-referenced, street-level panoramic images, together with a pre-trained CNN combined with image matching techniques for coordinate estimation. This avoids lengthy training and labeling procedures and assumes only that street-level data are available, without requiring aerial images. Furthermore, and unlike @cite_1 , we do not assume that our query and database images originate from the same imaging devices.
 {
"cite_N": [
"@cite_1"
],
"mid": [
"2609950940"
],
"abstract": [
"Location recognition is commonly treated as visual instance retrieval on \"street view\" imagery. The dataset items and queries are panoramic views, i.e. groups of images taken at a single location. This work introduces a novel panorama-to-panorama matching process, either by aggregating features of individual images in a group or by explicitly constructing a larger panorama. In either case, multiple views are used as queries. We reach near-perfect location recognition on a standard benchmark with only four query views."
]
}
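The abstract above clusters geographically adjacent panoramas into "memory vectors" so that groups can be scored before individual images. A minimal sketch of that two-stage idea, using the plain sum construction of a memory vector and hypothetical random unit-norm descriptors (not the system's actual CNN descriptors or clustering):

```python
import numpy as np

def memory_vectors(descriptors, group_ids):
    """Aggregate unit-norm image descriptors into one memory vector per
    group by summation; a group here stands in for a set of
    geographically adjacent panoramas."""
    groups = {}
    for d, g in zip(descriptors, group_ids):
        groups.setdefault(g, []).append(d)
    return {g: np.sum(ds, axis=0) for g, ds in groups.items()}

def search(query, descriptors, group_ids, k_groups=2):
    """Two-stage search: score memory vectors first, then rank only the
    members of the top-scoring groups, trading a little recall for
    fewer descriptor comparisons."""
    mems = memory_vectors(descriptors, group_ids)
    top = sorted(mems, key=lambda g: -float(query @ mems[g]))[:k_groups]
    candidates = [i for i, g in enumerate(group_ids) if g in top]
    return max(candidates, key=lambda i: float(query @ descriptors[i]))

rng = np.random.default_rng(2)
descs = [v / np.linalg.norm(v) for v in rng.standard_normal((8, 64))]
gids = [0, 0, 1, 1, 2, 2, 3, 3]
# A noisy copy of descriptor 5 should be routed to its group and matched.
query = descs[5] + 0.1 * rng.standard_normal(64)
best = search(query, descs, gids)
print(best)  # → 5
```

With n images in m groups, stage one costs m dot products instead of n, which is the source of the computational gain reported above; the small recall loss comes from queries whose true group falls outside the top-k_groups shortlist.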

1903.05524
 2972959293
 In this paper, we study the relationship between two crucial properties of linear dynamical networks of diffusively coupled agents, namely controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders needed to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable, independent of coupling weights, through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of the Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights into the important connection between network robustness and strong structural controllability in such networks.
 The Kirchhoff index, or equivalently effective-graph-resistance-based measures, have been instrumental in quantifying the effect of noise on the expected steady-state dispersion in linear dynamical networks, particularly those with consensus dynamics; for instance, see @cite_23 @cite_8 @cite_22 . Furthermore, limits on robustness measures that quantify the expected steady-state dispersion due to external stochastic disturbances in linear dynamical networks are studied in @cite_9 @cite_10 . To maximize robustness in networks by minimizing their Kirchhoff indices, various optimization approaches (e.g., @cite_26 @cite_1 ), including graph-theoretic ones @cite_0 , have been proposed. The main objective there is to determine crucial edges that need to be added or maintained to maximize robustness under given constraints @cite_11 .
 {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_23",
"@cite_10",
"@cite_11"
],
"mid": [
"1987717935",
"2287065438",
"1818569528",
"",
"",
"2156286761",
"2099689250",
"",
"2736121037"
],
"abstract": [
"The effective resistance between two nodes of a weighted graph is the electrical resistance seen between the nodes of a resistor network with branch conductances given by the edge weights. The effective resistance comes up in many applications and fields in addition to electrical network analysis, including, for example, Markov chains and continuous-time averaging networks. In this paper we study the problem of allocating edge weights on a given graph in order to minimize the total effective resistance, i.e., the sum of the resistances between all pairs of nodes. We show that this is a convex optimization problem and can be solved efficiently either numerically or, in some cases, analytically. We show that optimal allocation of the edge weights can reduce the total effective resistance of the graph (compared to uniform weights) by a factor that grows unboundedly with the size of the graph. We show that among all graphs with @math nodes, the path has the largest value of optimal total effective resistance and the complete graph has the least.",
"This work considers the robustness of uncertain consensus networks. The stability properties of consensus networks with negative edge weights are also examined. We show that the network is unstable if either the negative-weight edges form a cut in the graph or any single negative edge weight has a magnitude less than the inverse of the effective resistance between the two incident nodes. These results are then used to analyze the robustness of the consensus network with additive but bounded perturbations of the edge weights. It is shown that the small-gain condition is related again to cuts in the graph and effective resistance. For the single-edge case, the small-gain condition is also shown to be exact. The results are then extended to consensus networks with nonlinear couplings.",
"The graphical notion of effective resistance has found wide-ranging applications in many areas of pure mathematics, applied mathematics and control theory. By the nature of its construction, effective resistance can only be computed in undirected graphs and yet in several areas of its application, directed graphs arise as naturally (or more naturally) than undirected ones. In Part I of this work, we propose a generalization of effective resistance to directed graphs that preserves its control-theoretic properties in relation to consensus-type dynamics. We proceed to analyze the dependence of our algebraic definition on the structural properties of the graph and the relationship between our construction and a graphical distance. The results make possible the calculation of effective resistance between any two nodes in any directed graph and provide a solid foundation for the application of effective resistance to problems involving directed graphs.",
"",
"",
"This paper studies an interesting graph measure that we call the effective graph resistance. The notion of effective graph resistance is derived from the field of electric circuit analysis where it is defined as the accumulated effective resistance between all pairs of vertices. The objective of the paper is twofold. First, we survey known formulae of the effective graph resistance and derive other representations as well. The derivation of new expressions is based on the analysis of the associated random walk on the graph and applies tools from Markov chain theory. This approach results in a new method to approximate the effective graph resistance. A second objective of this paper concerns the optimisation of the effective graph resistance for graphs with a given number of vertices and diameter, and for optimal edge addition. A set of analytical results is described, as well as results obtained by exhaustive search. One of the foremost applications of the effective graph resistance we have in mind is the analysis of robustness-related problems. However, with our discussion of this informative graph measure we hope to open up a wealth of possibilities of applying the effective graph resistance to all kinds of network problems.",
"In this paper we study robustness of consensus in networks of coupled single integrators driven by white noise. Robustness is quantified as the H2 norm of the closed-loop system. In particular we investigate how robustness depends on the properties of the underlying (directed) communication graph. To this end several classes of directed and undirected communication topologies are analyzed and compared. The trade-off between speed of convergence and robustness to noise is also investigated.",
"",
"This paper investigates the robustness of strong structural controllability for linear time-invariant directed networked systems with respect to structural perturbations, including edge additions and deletions. In this regard, an algorithm is presented that is initiated by endowing each node of a network with a successive set of integers. Using this algorithm, a new notion of perfect graphs associated with a network is introduced, and tight upper bounds on the number of edges that can be added to, or removed from, a network while ensuring strong structural controllability are derived. Moreover, we obtain a characterization of critical edges with respect to edge additions and deletions; these sets are the maximal sets of edges any subset of which can be respectively added to, or removed from, a network while preserving strong structural controllability."
]
}
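The Kirchhoff index used as the robustness measure in the record above has a standard closed form: for a connected graph on n nodes it equals n times the trace of the Moore-Penrose pseudoinverse of the Laplacian, which is also the sum of effective resistances over all node pairs. A small sketch of that identity (not code from the paper):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index of a connected weighted graph: n * trace(L^+),
    where L^+ is the pseudoinverse of the graph Laplacian. Equivalently,
    the sum of effective resistances over all node pairs."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    L = np.diag(adj.sum(axis=1)) - adj   # graph Laplacian
    return n * np.trace(np.linalg.pinv(L))

def effective_resistance(adj, i, j):
    """Pairwise effective resistance R_ij = L^+_ii + L^+_jj - 2 L^+_ij."""
    adj = np.asarray(adj, dtype=float)
    Lp = np.linalg.pinv(np.diag(adj.sum(axis=1)) - adj)
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

# Path on 3 nodes: pairwise resistances 1, 1, and 2 sum to 4,
# matching n * sum(1/lambda) = 3 * (1/1 + 1/3) = 4.
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(kirchhoff_index(path))            # → 4.0
print(effective_resistance(path, 0, 2)) # → 2.0
```

A smaller Kirchhoff index means less expected steady-state dispersion under noise, which is why the optimization approaches cited above minimize it subject to edge constraints.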

1903.05524
 2972959293
 In this paper, we study the relationship between two crucial properties of linear dynamical networks of diffusively coupled agents, namely controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders needed to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable, independent of coupling weights, through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of the Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights into the important connection between network robustness and strong structural controllability in such networks.
 To quantify controllability, several approaches have been adopted, including determining the minimum number of inputs (leader nodes) needed to (structurally or strong structurally) control a network, determining the worst-case control energy, and metrics based on controllability Gramians (e.g., see @cite_7 @cite_5 ). Strong structural controllability, owing to its independence of the coupling weights between nodes, is a generalized notion of controllability with practical implications. Recent studies provide graph-theoretic characterizations of this concept @cite_20 @cite_13 @cite_17 . There are numerous other studies on leader selection to optimize network performance measures under various constraints, such as minimizing the deviation from consensus in a noisy environment @cite_4 @cite_2 , or maximizing various controllability measures, for instance @cite_15 @cite_18 @cite_25 @cite_14 . Recently, optimization methods have also been presented that select leader nodes by exploiting submodularity properties of performance measures for network robustness and structural controllability @cite_5 @cite_3 .
 {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"",
"",
"2111725629",
"",
"",
"1938602245",
"",
"2315383458",
"",
"2049708951",
"2763583074"
],
"abstract": [
"",
"",
"",
"This paper studies the problem of controlling complex networks, i.e., the joint problem of selecting a set of control nodes and of designing a control input to steer a network to a target state. For this problem, 1) we propose a metric to quantify the difficulty of the control problem as a function of the required control energy, 2) we derive bounds based on the system dynamics (network topology and weights) to characterize the trade-off between the control energy and the number of control nodes, and 3) we propose an open-loop control strategy with performance guarantees. In our strategy, we select control nodes by relying on network partitioning, and we design the control input by leveraging optimal and distributed control techniques. Our findings show several control limitations and properties. For instance, for Schur stable and symmetric networks: 1) if the number of control nodes is constant, then the control energy increases exponentially with the number of network nodes; 2) if the number of control nodes is a fixed fraction of the network nodes, then certain networks can be controlled with constant energy independently of the network dimension; and 3) clustered networks may be easier to control because, for sufficiently many control nodes, the control energy depends only on the controllability properties of the clusters and on their coupling strength. We validate our results with examples from power networks, social networks and epidemics spreading.",
"",
"",
"Controllability and observability have long been recognized as fundamental structural properties of dynamical systems, but have recently seen renewed interest in the context of large, complex networks of dynamical systems. A basic problem is sensor and actuator placement: choose a subset from a finite set of possible placements to optimize some real-valued controllability and observability metrics of the network. Surprisingly little is known about the structure of such combinatorial optimization problems. In this paper, we show that several important classes of metrics based on the controllability and observability Gramians have a strong structural property that allows for either efficient global optimization or an approximation guarantee by using a simple greedy heuristic for their maximization. In particular, the mapping from possible placements to several scalar functions of the associated Gramian is either a modular or submodular set function. The results are illustrated on randomly generated systems and on a problem of power-electronic actuator placement in a model of the European power grid.",
"",
"In this technical note, we study the controllability of diffusively coupled networks from a graph-theoretic perspective. We consider leader-follower networks, where the external control inputs are injected to only some of the agents, namely the leaders. Our main result relates the controllability of such systems to the graph distances between the agents. More specifically, we present a graph-topological lower bound on the rank of the controllability matrix. This lower bound is tight, and it is applicable to systems with arbitrary network topologies, coupling weights, and number of leaders. An algorithm for computing the lower bound is also provided. Furthermore, as a prominent application, we present how the proposed bound can be utilized to select a minimal set of leaders for achieving controllability, even when the coupling weights are unknown.",
"",
"This paper examines strong structural controllability of lineartimeinvariant networked systems. We provide necessary and sufficient conditions for strong structural controllability involving constrained matchings over the bipartite graph representation of the network. An O(n2) algorithm to validate if a set of inputs leads to a strongly structurally controllable network and to find such an input set is proposed. The problem of finding such a set with minimal cardinality is shown to be NPcomplete. Minimal cardinality results for strong and weak structural controllability are compared.",
"Characterization of network controllability through its topology has recently gained a lot of attention in the systems and control community. Using the notion of balancing sets, in this note, such a network-centric approach for the controllability of certain families of undirected networks is investigated. Moreover, by introducing the notion of a generalized zero forcing set, the structural controllability of undirected networks is discussed; in this direction, lower bounds on the dimension of the controllable subspace are derived. In addition, a method is proposed that facilitates synthesis of structural and strong structural controllable networks as well as examining preservation of network controllability under structural perturbations."
]
}
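The greedy Gramian-based placement described in the first abstract above can be sketched as follows. This is only an illustrative sketch, not the paper's algorithm: the finite-horizon discrete-time Gramian, the log-det metric, and the function names `ctrl_gramian` and `greedy_placement` are all assumptions made for the example.

```python
import numpy as np

def ctrl_gramian(A, B, horizon=50):
    """Finite-horizon discrete-time controllability Gramian:
    W = sum_{k=0}^{T-1} A^k B B^T (A^T)^k (illustrative choice)."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return W

def greedy_placement(A, candidates, k, eps=1e-6):
    """Greedily pick k actuator columns maximizing log det(W + eps*I),
    a scalar Gramian metric of the submodular kind the abstract refers to.
    `candidates` is a list of possible input columns (one per placement)."""
    chosen = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for j in range(len(candidates)):
            if j in chosen:
                continue
            B = np.column_stack([candidates[i] for i in chosen + [j]])
            val = np.linalg.slogdet(ctrl_gramian(A, B) + eps * np.eye(A.shape[0]))[1]
            if val > best_val:
                best, best_val = j, val
        chosen.append(best)
    return chosen
```

For submodular monotone metrics, this greedy loop carries the usual (1 - 1/e) approximation guarantee; for modular metrics it is exactly optimal, which matches the dichotomy the abstract states.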

1903.05524
 2972959293
 In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is, controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights on the important connection between network robustness and strong structural controllability in such networks.
 Very recently, the trade-off between controllability and fragility in complex networks was investigated in @cite_21 . Fragility measures the smallest perturbation in edge weights that makes the network unstable. The authors of @cite_21 show that networks that require small control energy, as measured by the eigenvalues of the controllability Gramian, to drive from one state to another are more fragile, and vice versa. In our work, for control performance, we consider the minimum number of leaders for strong structural controllability, which is independent of coupling weights; for robustness, we utilize the Kirchhoff index, which measures robustness to noise as well as to structural changes in the underlying network graph. Moreover, in this work we focus on designing and comparing extremal networks for these properties. The rest of the paper is organized as follows: Section describes preliminaries and network dynamics. Section explains the measures for robustness and controllability, and also outlines the main problems. Section presents maximally robust networks for a given @math and @math , and also analyzes their controllability. Section provides a design of maximally controllable networks and also evaluates their robustness. Finally, Section concludes the paper.
 {
"cite_N": [
"@cite_21"
],
"mid": [
"2887109490"
],
"abstract": [
"Mathematical theories and empirical evidence suggest that several complex natural and man-made systems are fragile: as their size increases, arbitrarily small and localized alterations of the system parameters may trigger system-wide failures. Examples are abundant, from perturbation of the population densities leading to extinction of species in ecological networks [1], to structural changes in metabolic networks preventing reactions [2], cascading failures in power networks [3], and the onset of epileptic seizures following alterations of structural connectivity among populations of neurons [4]. While fragility of these systems has long been recognized [5], convincing theories of why natural evolution or technological advance has failed, or avoided, to enhance robustness in complex systems are still lacking. In this paper we propose a mechanistic explanation of this phenomenon. We show that a fundamental trade-off exists between fragility of a complex network and its controllability degree, that is, the control energy needed to drive the network state to a desirable state. We provide analytical and numerical evidence that easily controllable networks are fragile, suggesting that natural and man-made systems can either be resilient to parameter perturbations or efficient to adapt their state in response to external excitations and controls."
]
}
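The robustness measure used in the record above, the Kirchhoff index, has a standard spectral formula: for a connected undirected graph on n nodes it equals n times the sum of reciprocals of the nonzero Laplacian eigenvalues, and coincides with the sum of effective resistances over all node pairs. A minimal sketch (the helper `path_laplacian` is a made-up example builder, not from the paper):

```python
import numpy as np

def kirchhoff_index(L):
    """Kirchhoff index from the graph Laplacian L of a connected
    undirected graph: Kf = n * sum_i 1/mu_i over nonzero eigenvalues."""
    n = L.shape[0]
    mu = np.sort(np.linalg.eigvalsh(L))  # ascending; mu[0] ~ 0
    return n * np.sum(1.0 / mu[1:])      # skip the zero eigenvalue

def path_laplacian(n):
    """Laplacian of the path graph P_n (illustrative example graph)."""
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1.0
        L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0
        L[i + 1, i] -= 1.0
    return L
```

For example, the triangle K_3 has Laplacian eigenvalues 0, 3, 3, giving Kf = 3 * (1/3 + 1/3) = 2, which matches the three pairwise effective resistances of 2/3 each. Lower Kf indicates a network more robust to noise, which is why the paper treats it as the robustness side of the trade-off.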

cmplg9408015
 2952406681
 Effective problem solving among multiple agents requires a better understanding of the role of communication in collaboration. In this paper we show that there are communicative strategies that greatly improve the performance of resource-bounded agents, but that these strategies are highly sensitive to the task requirements, situation parameters and agents' resource limitations. We base our argument on two sources of evidence: (1) an analysis of a corpus of 55 problem solving dialogues, and (2) experimental simulations of collaborative problem solving dialogues in an experimental world, DesignWorld, where we parameterize task requirements, agents' resources and communicative strategies.
 DesignWorld is also based on the method used in Carletta's JAM simulation for the Edinburgh Map Task @cite_10 . JAM is based on the Map Task dialogue corpus, where the goal of the task is for the planning agent, the instructor, to instruct the reactive agent, the instructee, how to get from one place to another on the map. JAM focuses on efficient strategies for recovery from error and parametrizes agents according to their communicative and error recovery strategies. Given good error recovery strategies, Carletta argues that 'high risk' strategies are more efficient, where efficiency is a measure of the number of utterances in the dialogue. While the focus here is different, we have shown that the number of utterances is just one parameter for evaluating performance, and that the task definition determines when strategies are effective.
 {
"cite_N": [
"@cite_10"
],
"mid": [
"2159574206"
],
"abstract": [
"The Principle of Parsimony states that people usually try to complete tasks with the least effort that will produce a satisfactory solution. In task-oriented dialogue, this produces a tension between conveying information carefully to the partner and leaving it to be inferred, risking a misunderstanding and the need for recovery. Using natural dialogue examples, primarily from the HCRC Map Task, we apply the Principle of Parsimony to a range of information types and identify a set of applicable recovery strategies. We argue that risk-taking and recovery are crucial for efficient dialogue because they pinpoint which information must be transferred and allow control of the interaction to switch to the participant who can best guide the course of the dialogue."
]
}

cs9907027
 2949190809
 The aim of the Alma project is the design of a strongly typed constraint programming language that combines the advantages of logic and imperative programming. The first stage of the project was the design and implementation of Alma-0, a small programming language that provides support for declarative programming within the imperative programming framework. It is obtained by extending a subset of Modula-2 by a small number of features inspired by the logic programming paradigm. In this paper we discuss the rationale for the design of Alma-0, the benefits of the resulting hybrid programming framework, and the current work on adding constraint processing capabilities to the language. In particular, we discuss the role of the logical and customary variables, the interaction between the constraint store and the program, and the need for lists.
 We concentrate here on the related work involving addition of constraints to imperative languages. For an overview of related work pertaining to the language we refer the reader to @cite_0 .
 {
"cite_N": [
"@cite_0"
],
"mid": [
"1968265180"
],
"abstract": [
"We describe here an implemented small programming language, called Alma-0, that augments the expressive power of imperative programming by a limited number of features inspired by the logic programming paradigm. These additions encourage declarative programming and make it a more attractive vehicle for problems that involve search. We illustrate the use of Alma-0 by presenting solutions to a number of classical problems, including αβ search, STRIPS planning, knapsack, and Eight Queens. These solutions are substantially simpler than their counterparts written in the imperative or in the logic programming style and can be used for different purposes without any modification. We also discuss here the implementation of Alma-0 and an operational, executable semantics of a large subset of the language."
]
}

cmplg9709007
 1578881253
 Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.
 To our knowledge, lexical databases have been used only once before in TC, apart from our previous work. Hearst @cite_10 adapted a disambiguation algorithm by Yarowsky using WordNet to recognize category occurrences. Categories are made of WordNet terms, which is not the general case of standard or user-defined categories. It is a hard task to adapt WordNet subsets to preexisting categories, especially when they are domain-dependent. Hearst's approach has shown promising results confirmed by our previous work @cite_23 and present results.
 {
"cite_N": [
"@cite_10",
"@cite_23"
],
"mid": [
"1493108551",
"1575569168"
],
"abstract": [
"This dissertation investigates the role of contextual information in the automated retrieval and display of full-text documents, using robust natural language processing algorithms to automatically detect structure in and assign topic labels to texts. Many long texts are comprised of complex topic and subtopic structure, a fact ignored by existing information access methods. I present two algorithms which detect such structure, and two visual display paradigms which use the results of these algorithms to show the interactions of multiple main topics, multiple subtopics, and the relations between main topics and subtopics. The first algorithm, called TextTiling, recognizes the subtopic structure of texts as dictated by their content. It uses domain-independent lexical frequency and distribution information to partition texts into multi-paragraph passages. The results are found to correspond well to reader judgments of major subtopic boundaries. The second algorithm assigns multiple main topic labels to each text, where the labels are chosen from predefined, intuitive category sets; the algorithm is trained on unlabeled text. A new iconic representation, called TileBars, uses TextTiles to simultaneously and compactly display query term frequency, query term distribution and relative document length. This representation provides an informative alternative to ranking long texts according to their overall similarity to a query. For example, a user can choose to view those documents that have an extended discussion of one set of terms and a brief but overlapping discussion of a second set of terms. This representation also allows for relevance feedback on patterns of term distribution. TileBars display documents only in terms of words supplied in the user query. For a given retrieved text, if the query words do not correspond to its main topics, the user cannot discern in what context the query terms were used.
For example, a query on contaminants may retrieve documents whose main topics relate to nuclear power, food, or oil spills. To address this issue, I describe a graphical interface, called Cougar, that displays retrieved documents in terms of interactions among their automatically assigned main topics, thus allowing users to familiarize themselves with the topics and terminology of a text collection.",
"Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widely available resources such as lexical databases and training collections to overcome current limitations of the task. Our approach makes use of WordNet synonymy information to increase evidence for badly trained categories. When testing a direct categorization, a WordNet-based one, a training algorithm, and our integrated approach, the latter exhibits better performance than any of the others. Incidentally, the WordNet-based approach's performance is comparable with that of the training approach."
]
}
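The Rocchio (relevance-feedback) training named in this record builds one prototype vector per category in the Vector Space Model and classifies by similarity to the prototypes. A minimal sketch under stated assumptions: the document vectors are tf-idf-style, the beta/gamma constants are conventional illustrative defaults, and the function names are made up for the example; this is not the paper's WordNet-integrated variant.

```python
import numpy as np

def rocchio_train(X, y, beta=16.0, gamma=4.0):
    """Rocchio prototypes: for each class, a weighted positive centroid
    minus a down-weighted negative centroid, with negative components
    clipped to zero. X: (docs x terms) matrix; y: class labels."""
    protos = {}
    for c in set(y):
        pos = X[y == c]
        neg = X[y != c]
        p = beta * pos.mean(axis=0) - gamma * neg.mean(axis=0)
        protos[c] = np.maximum(p, 0.0)
    return protos

def rocchio_classify(x, protos):
    """Assign the class whose prototype is most cosine-similar to x."""
    def cos(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return 0.0 if na == 0 or nb == 0 else float(a @ b) / (na * nb)
    return max(protos, key=lambda c: cos(x, protos[c]))
```

The paper's integrated approach would additionally fold WordNet synonymy evidence into the prototypes of poorly trained (low-frequency) categories; that step is omitted here.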

cmplg9709007
 1578881253
 Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.
 Lexical databases have been employed recently in word sense disambiguation. For example, Agirre and Rigau @cite_4 make use of a semantic distance that takes into account structural factors in WordNet for achieving good results for this task. Additionally, Resnik @cite_3 combines the use of WordNet and a text collection for a definition of a distance for disambiguating noun groupings. Although the text collection is not a training collection (in the sense of a collection of manually labeled texts for a predefined text processing task), his approach can be regarded as the most similar to ours in the disambiguation setting. Finally, Ng and Lee @cite_11 make use of several sources of information inside a training collection (neighborhood, part of speech, morphological form, etc.) to get good results in disambiguating unrestricted text.
 {
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_11"
],
"mid": [
"",
"1608874027",
"2157025692"
],
"abstract": [
"",
"Word groupings useful for language processing tasks are increasingly available, as thesauri appear online, and as distributional word clustering techniques improve. However, for many tasks, one is interested in relationships among word senses, not words. This paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns — the kind of data one finds in online thesauri, or as the output of distributional clustering algorithms. Disambiguation is performed with respect to WordNet senses, which are fairly finegrained; however, the method also permits the assignment of higherlevel WordNet categories rather than sense labels. The method is illustrated primarily by example, though results of a more rigorous evaluation are also presented.",
"In this paper, we present a new approach for word sense disambiguation (WSD) using an exemplar-based learning algorithm. This approach integrates a diverse set of knowledge sources to disambiguate word sense, including part of speech of neighboring words, morphological form, the unordered set of surrounding words, local collocations, and verb-object syntactic relation. We tested our WSD program, named LEXAS, on both a common data set used in previous work, as well as on a large sense-tagged corpus that we separately constructed. LEXAS achieves a higher accuracy on the common data set, and performs better than the most frequent heuristic on the highly ambiguous words in the large corpus tagged with the refined senses of WORDNET."
]
}

cmplg9706006
 2953123431
 Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature, text categorization. We argue that these algorithms, which categorize documents by learning a linear separator in the feature space, have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set.
 The methods that are most similar to our techniques are the online algorithms used in @cite_16 and @cite_10 . In the first, two algorithms suggested in @cite_5 , a multiplicative-update and an additive-update algorithm, are evaluated in the domain and are shown to perform somewhat better than Rocchio's algorithm. While both these works make use of multiplicative-update algorithms, as we do, there are two major differences between those studies and the current one. First, there are some important technical differences between the algorithms used. Second, the algorithms we study here are mistake-driven; they update the weight vector only when a mistake is made, and not after every example seen. The Experts algorithm studied in @cite_10 is very similar to a basic version of the algorithm which we study here. The way we treat the negative weights is different, though, and significantly more efficient, especially in sparse domains (see ). Cohen and Singer also experiment, using the same algorithm, with more complex features (sparse n-grams) and show that, as expected, it yields better results.
 {
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_10"
],
"mid": [
"2069317438",
"",
"2440833291"
],
"abstract": [
"We consider two algorithms for online prediction based on a linear model. The algorithms are the well-known Gradient Descent (GD) algorithm and a new algorithm, which we call EG(+). They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG(+) algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG(+) and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG(+) has a much smaller loss if only a few components of the input are relevant for the predictions. We have performed experiments, which show that our worst-case upper bounds are quite tight already on simple artificial data.",
"",
"Two recently implemented machine-learning algorithms, RIPPER and sleeping-experts for phrases, are evaluated on a number of large text categorization problems. These algorithms both construct classifiers that allow the “context” of a word w to affect how (or even whether) the presence or absence of w will contribute to a classification. However, RIPPER and sleeping-experts differ radically in many other respects: differences include different notions as to what constitutes a context, different ways of combining contexts to construct a classifier, different methods to search for a combination of contexts, and different criteria as to what contexts should be included in such a combination. In spite of these differences, both RIPPER and sleeping-experts perform extremely well across a wide variety of categorization problems, generally outperforming previously applied learning methods. We view this result as a confirmation of the usefulness of classifiers that represent contextual information."
]
}
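The mistake-driven multiplicative-update scheme this record centers on, Littlestone's Winnow, can be sketched in its textbook positive-weight form; this is a basic variant for illustration, not the paper's modified algorithm (which additionally handles negative weights, document-length normalization, and threshold ranges).

```python
import numpy as np

def winnow_train(X, y, theta=None, alpha=2.0, epochs=5):
    """Basic (positive) Winnow over binary feature vectors.
    Predict +1 when w.x > theta; weights change only on mistakes,
    and only on the features active in the mistaken example."""
    n_features = X.shape[1]
    if theta is None:
        theta = n_features / 2.0
    w = np.ones(n_features)
    for _ in range(epochs):
        for x, label in zip(X, y):
            pred = 1 if w @ x > theta else -1
            if pred != label:            # mistake-driven: no update otherwise
                if label == 1:
                    w[x > 0] *= alpha    # promote active features
                else:
                    w[x > 0] /= alpha    # demote active features
    return w, theta
```

Because updates touch only the active features of mistaken examples, the cost per example scales with the number of active features rather than the dimensionality, which is exactly the sparse-domain efficiency the related-work paragraph emphasizes.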

cmplg9701003
 2952751702
 In expert-consultation dialogues, it is inevitable that an agent will at times have insufficient information to determine whether to accept or reject a proposal by the other agent. This results in the need for the agent to initiate an information-sharing subdialogue to form a set of shared beliefs within which the agents can effectively reevaluate the proposal. This paper presents a computational strategy for initiating such information-sharing subdialogues to resolve the system's uncertainty regarding the acceptance of a user proposal. Our model determines when information-sharing should be pursued, selects a focus of information-sharing among multiple uncertain beliefs, chooses the most effective information-sharing strategy, and utilizes the newly obtained information to reevaluate the user proposal. Furthermore, our model is capable of handling embedded information-sharing subdialogues.
 Grosz, Sidner and Lochbaum @cite_7 @cite_1 developed a SharedPlan approach to modelling collaborative discourse, and Sidner formulated an artificial language for modeling such discourse. Sidner viewed a collaborative planning process as proposal acceptance and proposal rejection sequences. Her artificial language treats an utterance such as Why do X? as a proposal for the hearer to provide support for his proposal to do X. However, Sidner's work is descriptive and does not provide a mechanism for determining when and how such a proposal should be made nor how responses should be formulated in informationsharing subdialogues.
 {
"cite_N": [
"@cite_1",
"@cite_7"
],
"mid": [
"2148389694",
"332028463"
],
"abstract": [
"A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's beliefs and intentions from the other's, and avoid assumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algorithm for plan recognition that is based on the SharedPlan model of collaboration (Grosz and Sidner, 1990; , 1990) and that satisfies these constraints.",
"Abstract : Discourses are fundamentally instances of collaboration behavior. We propose a model of the collaborative plans of agents achieving joint goals and illustrate the role of these plans in discourses. Three types of collaborative plans, called Shared Plans, are formulated for joint goals requiring simultaneous, conjoined or sequential actions on the part of the agents who participate in the plans and the discourse; a fourth type of Shared Plan is presented for the circumstance where two agents communicate, but only one acts."
]
}

cmplg9701003
 2952751702
 In expert-consultation dialogues, it is inevitable that an agent will at times have insufficient information to determine whether to accept or reject a proposal by the other agent. This results in the need for the agent to initiate an information-sharing subdialogue to form a set of shared beliefs within which the agents can effectively reevaluate the proposal. This paper presents a computational strategy for initiating such information-sharing subdialogues to resolve the system's uncertainty regarding the acceptance of a user proposal. Our model determines when information-sharing should be pursued, selects a focus of information-sharing among multiple uncertain beliefs, chooses the most effective information-sharing strategy, and utilizes the newly obtained information to reevaluate the user proposal. Furthermore, our model is capable of handling embedded information-sharing subdialogues.
 Several researchers have studied the role of clarification dialogues in disambiguating user plans @cite_3 @cite_5 and in understanding referring expressions @cite_8 . developed an automated librarian that could revise its beliefs and intentions and could generate responses as an attempt to revise the user's beliefs and intentions. Although their system had rules for asking the user whether he holds a particular belief and for telling the system's attitude toward a belief, the emphasis of their work was on conflict resolution and plan disambiguation. Thus they did not investigate a comprehensive strategy for information-sharing during proposal evaluation. For example, they did not identify situations in which information-sharing is necessary, did not address how to select a focus of information-sharing when there are multiple uncertain beliefs, did not consider requesting the user's justifications for a belief, etc. In addition, they do not provide an overall dialogue planner that takes into account discourse structure and appropriately captures embedded subdialogues.
 {
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_8"
],
"mid": [
"",
"2037768686",
"1541473376"
],
"abstract": [
"",
"Recognizing the plan underlying a query aids in the generation of an appropriate response. In this paper, we address the problem of how to generate cooperative responses when the user's plan is ambiguous. We show that it is not always necessary to resolve the ambiguity, and provide a procedure that estimates whether the ambiguity matters to the task of formulating a response. The procedure makes use of the critiquing of possible plans and identifies plans with the same fault. We illustrate the process of critiquing with examples. If the ambiguity does matter, we propose to resolve the ambiguity by entering into a clarification dialogue with the user and provide a procedure that performs this task. Together, these procedures allow a questionanswering system to take advantage of the interactive and collaborative nature of dialogue in order to recognize plans and resolve ambiguity. This work therefore presents a view of generation in advicegiving contexts which is different from the straightforward model of a passive selection of responses to questions asked by users. We also report on a trial implementation in a courseadvising domain, which provides insights on the practicality of the procedures and directions for future research.",
"This paper presents a computational model of how conversational participants collaborate in order to make a referring action successful. The model is based on the view of language as goaldirected behavior. We propose that the content of a referring expression can be accounted for by the planning paradigm. Not only does this approach allow the processes of building referring expressions and identifying their referents to be captured by plan construction and plan inference, it also allows us to account for how participants clarify a referring expression by using metaactions that reason about and manipulate the plan derivation that corresponds to the referring expression. To account for how clarification goals arise and how inferred clarification plans affect the agent, we propose that the agents are in a certain state of mind, and that this state includes an intention to achieve the goal of referring and a plan that the agents are currently considering. It is this mental state that sanctions the adoption of goals and the acceptance of inferred plans, and so acts as a link between understanding and generation."
]
}

cmplg9606025
 2952364737
 This paper presents an analysis conducted on a corpus of software instructions in French in order to establish whether task structure elements (the procedural representation of the users' tasks) are alone sufficient to control the grammatical resources of a text generator. We show that the construct of genre provides a useful additional source of control enabling us to resolve undetermined cases.
 The results from our linguistic analysis are consistent with other research on sublanguages in the instructions domain, in both French and English, e.g., @cite_5 @cite_4 . Our analysis goes beyond previous work by identifying within the discourse context the means for exercising explicit control over a text generator.
 {
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"2047374384",
"2131196772"
],
"abstract": [
"This paper discusses an approach to planning the content of instructional texts. The research is based on a corpus study of 15 French procedural texts ranging from step-by-step device manuals to general artistic procedures. The approach taken starts from an AI task planner building a task representation, from which semantic carriers are selected. The most appropriate RST relations to communicate these carriers are then chosen according to heuristics developed during the corpus analysis.",
"Instructional texts have been the object of many studies recently, motivated by the increased need to produce manuals (especially multilingual manuals) coupled with the cost of translators and technical writers. Because these studies concentrate on aspects other than the linguistic realisation of instructions (for example, the integration of text and graphics), they all generate a sequence of steps required to achieve a task, using imperatives. Our research so far shows, however, that manuals can in fact have different styles, i.e., not all instructions are stated using a sequence of imperatives, and that, furthermore, different parts of manuals often use different styles. In this paper, we present our preliminary results from an analysis of over 30 user guides and manuals for consumer appliances and discuss some of the implications."
]
}

cmplg9505006
 1617827527
 In previous work we studied a new type of DCGs, Datalog grammars, which are inspired by database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us to give a metagrammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed to test it in other respects.
 A notion that is central to recent work on ellipsis, and which has been present in embryonic form, as we have seen, even in the early work on coordination, is that of parallelism as a key element in the determination of implicit meanings. Asher @cite_3 defines parallelism as
 {
"cite_N": [
"@cite_3"
],
"mid": [
"1495022714"
],
"abstract": [
"Preface. Introduction. 1. From Events to Propositions: a Tour of Abstract Entities, Eventualities and the Nominals that Denote them. 2. A Crash Course in DRT. 3. Attitudes and Attitude Descriptions. 4. The Semantic Representation for Sentential Nominals. 5. Problems for the Semantics of Nominals. 6. Anaphora and Abstract Entities. 7. A Theory of Discourse Structure for an Analysis of Abstract Entity Anaphora. 8. Applying the Theory of Discourse Structure to the Anaphoric Phenomena. 9. Applications of the Theory of Discourse Structure to Concept Anaphora and VP Ellipsis. 10. Model Theory for Abstract Entities and its Philosophical Implications. Conclusion. Bibliography. Index."
]
}

cmplg9505006
 1617827527
 In previous work we studied a new type of DCGs, Datalog grammars, which are inspired by database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a metagrammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
 a) neither method formulates exactly how parallelism is to be determined; it is just postulated as a prerequisite to the resolution of ellipsis (although @cite_6 speculates on possible ways of formulating this, leaving it for future work)
 {
"cite_N": [
"@cite_6"
],
"mid": [
"2119997945"
],
"abstract": [
"We describe an implementation in Carpenter's typed feature formalism, ALE, of a discourse grammar of the kind proposed by Scha, Polanyi, et al We examine their method for resolving parallelismdependent anaphora and show that there is a coherent featurestructural rendition of this type of grammar which uses the operations of priority union and generalization. We describe an augmentation of the ALE system to encompass these operations and we show that an appropriate choice of definition for priority union gives the desired multiple output for examples of VPellipsis which exhibit a strict sloppy ambiguity."
]
}

cmplg9505006
 1617827527
 In previous work we studied a new type of DCGs, Datalog grammars, which are inspired by database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a metagrammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
 By examining ellipsis in the context of coordinated structures, which are parallel by definition, and by using extended DLGs, we provide a method in which parallel structures are detected and resolved through syntactic and semantic criteria, and which can be applied to grammars using different semantic representations (feature structure, @math calculus, or other). We exemplify using a logic-based semantics along the lines of @cite_8 .
 {
"cite_N": [
"@cite_8"
],
"mid": [
"2006589508"
],
"abstract": [
"Logic grammars are grammars expressible in predicate logic. Implemented in the programming language Prolog, logic grammar systems have proved to be a good basis for natural language processing. One of the most difficult constructions for natural language grammars to treat is coordination (construction with conjunctions like 'and'). This paper describes a logic grammar formalism, modifier structure grammars (MSGs), together with an interpreter written in Prolog, which can handle coordination (and other natural language constructions) in a reasonable and general way. The system produces both syntactic analyses and logical forms, and problems of scoping for coordination and quantifiers are dealt with. The MSG formalism seems of interest in its own right (perhaps even outside natural language processing) because the notions of syntactic structure and semantic interpretation are more constrained than in many previous systems (made more implicit in the formalism itself), so that less burden is put on the grammar writer."
]
}

cmplg9505038
 2952182676
 Augmented reality is a research area that tries to embody an electronic information space within the real world, through computational devices. A crucial issue within this area is the recognition of real-world objects or situations. In natural language processing, it is much easier to determine interpretations of utterances, even if they are ill-formed, when the context or situation is fixed. We therefore introduce robust natural language processing into a system of augmented reality with situation awareness. Based on this idea, we have developed a portable system, called the Ubiquitous Talker. This consists of an LCD display that reflects the scene at which a user is looking as if it were a transparent glass, a CCD camera for recognizing real-world objects with color-bar ID codes, a microphone for recognizing a human voice and a speaker which outputs a synthesized voice. The Ubiquitous Talker provides its user with some information related to a recognized object, by using the display and voice. It also accepts requests or questions as voice inputs. The user feels as if he/she is talking with the object itself through the system.
 Ubiquitous computing @cite_4 proposes that very small computational devices (i.e., ubiquitous computers) be embedded and integrated into physical environments in such a way that they operate seamlessly and almost transparently. These devices are aware of their physical surroundings. In contrast to ubiquitous computers, our barcode (color-code) system is a low-cost and reliable solution to making everything a computer. Suppose that every page in a book has a unique barcode. When the user opens a page, its page ID is detected by the system, so it can supply specific information regarding the page. When the user adds some information to the page, the system stores it with the page ID tagged for later retrieval. This is almost the same as having a computer in every page of the book, without the cost. Our ID-aware system is better than ubiquitous computers from the viewpoint of reliability and cost-performance, since it does not require batteries and never breaks down.
 {
"cite_N": [
"@cite_4"
],
"mid": [
"2084069552"
],
"abstract": [
"Ubiquitous computing is the method of enhancing computer use by making many computers available throughout the physical environment, but making them effectively invisible to the user. Since we started this work at Xerox PARC in 1988, a number of researchers around the world have begun to work in the ubiquitous computing framework. This paper explains what is new and different about the computer science in ubiquitous computing. It starts with a brief overview of ubiquitous computing, and then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g. chips), network protocols, interaction substrates (e.g. software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science."
]
}

math9702221
 2117609843
 This paper reexamines univariate reduction from a toric geometric point of view. We begin by constructing a binomial variant of the @math resultant and then retailor the generalized characteristic polynomial to fully exploit sparsity in the monomial structure of any given polynomial system. We thus obtain a fast new algorithm for univariate reduction and a better understanding of the underlying projections. As a corollary, we show that a refinement of Hilbert's Tenth Problem is decidable within single-exponential time. We also show how certain multi-symmetric functions of the roots of polynomial systems can be calculated with sparse resultants.
 From an applied angle, our observations on degeneracies and handling polynomial systems with infinitely many roots nicely complement the work of Emiris and Canny @cite_43 . In particular, their sparse-resultant-based algorithms for polynomial system solving can now be made to work even when problem B occurs. Also, an added benefit of working torically (as opposed to the classical approach of working in projective space) is the increased efficiency of the sparse resultant: the resulting matrix calculations (for polynomial system solving) are much smaller and faster. In particular, whereas it was remarked in @cite_41 that Gröbner basis methods are likely to be faster than the GCP for sparse polynomial systems, the toric GCP appears to be far more competitive in such a comparison.
 {
"cite_N": [
"@cite_41",
"@cite_43"
],
"mid": [
"1976392590",
"2066130115"
],
"abstract": [
"Multipolynomial resultants provide the most efficient methods known (in terms as asymptoticcomplexity) for solving certain systems of polynomial equations or eliminating variables (, 1988). The resultant of f\"1, ..., f\"n in K[x\"1,...,x\"m] will be a polynomial in mn+1 variables which is zero when the system f\"1=0 has a solution in ^m ( the algebraic closure of K). Thus the resultant defines a projection operator from ^m to ^(^m^^n^+^1^). However, resultants are only exact conditions for homogeneous systems, and in the affine case just mentioned, the resultant may be zero even if the system has no affine solution. This is most serious when the solution set of the system of polynomials has ''excess components'' (components of dimension >mn), which may not even be affine, since these cause the resultant to vanish identically. In this paper we describe a projection operator which is not identically zero, but which is guaranteed to vanish on all the proper (dimension=mn) components of the system f\"i=0. Thus it fills the role of a general affine projection operator or variable elimination ''black box'' which can be used for arbitrary polynomial systems. The construction is based on a generalisation of the characteristic polynomial of a linear system to polynomial systems. As a corollary, we give a singleexponential time method for finding all the isolated solution points of a system of polynomials, even in the presence of infinitely many solutions, at infinity or elsewhere.",
"Abstract We propose a new and efficient algorithm for computing the sparse resultant of a system of n + 1 polynomial equations in n unknowns. This algorithm produces a matrix whose entries are coefficients of the given polynomials and is typically smaller than the matrices obtained by previous approaches. The matrix determinant is a nontrivial multiple of the sparse resultant from which the sparse resultant itself can be recovered. The algorithm is incremental in the sense that successively larger matrices are constructed until one is found with the above properties. For multigraded systems, the new algorithm produces optimal matrices, i.e. expresses the sparse resultant as a single determinant. An implementation of the algorithm is described and experimental results are presented. In addition, we propose an efficient algorithm for computing the mixed volume of n polynomials in n variables. This computation provides an upper bound on the number of common isolated roots. A publicly available implementation of the algorithm is presented and empirical results are reported which suggest that it is the fastest mixed volume code to date."
]
}

cmplg9503009
 2951769629
 This paper presents an algorithm for tagging words whose partofspeech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.
 The simplest part-of-speech taggers are bigram or trigram models @cite_12 @cite_1 . They require a relatively large tagged training text. Transformation-based tagging as introduced by also requires a hand-tagged text for training. No pre-tagged text is necessary for Hidden Markov Models @cite_7 @cite_13 @cite_2 . Still, a lexicon is needed that specifies the possible parts of speech for every word. have shown that the effort necessary to construct the part-of-speech lexicon can be considerably reduced by combining learning procedures and a partial part-of-speech categorization elicited from an informant.
 {
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_2",
"@cite_13",
"@cite_12"
],
"mid": [
"2100796029",
"2166394306",
"",
"2046224275",
"1509596266"
],
"abstract": [
"Abstract A system for partofspeech tagging is described. It is based on a hidden Markov model which can be trained using a corpus of untagged text. Several techniques are introduced to achieve robustness while maintaining high performance. Word equivalence classes are used to reduce the overall number of parameters in the model, alleviating the problem of obtaining reliable estimates for individual words. The context for category prediction is extended selectively via predefined networks, rather than using a uniformly higherorder conditioning which requires exponentially more parameters with increasing context. The networks are embedded in a firstorder model and network structure is developed by analysis of erros, and also via linguistic considerations. To compensate for incomplete dictionary coverage, the categories of unknown words are predicted using both local context and suffix information to aid in disambiguation. An evaluation was performed using the Brown corpus and different dictionary arrangements were investigated. The techniques result in a model that correctly tags approximately 96 of the text. The flexibility of the methods is illustrated by their use in a tagging program for French.",
"We derive from first principles the basic equations for a few of the basic hiddenMarkovmodel word taggers as well as equations for other models which may be novel (the descriptions in previous papers being too spare to be sure). We give performance results for all of the models. The results from our best model (96.45 on an unused test sample from the Brown corpus with 181 distinct tags) is on the upper edge of reported results. We also hope these results clear up some confusion in the literature about the best equations to use. However, the major purpose of this paper is to show how the equations for a variety of models may be derived and thus encourage future authors to give the equations for their model and the derivations thereof.",
"",
"We present an implementation of a partofspeech tagger based on a hidden Markov model. The methodology enables robust and accurate tagging with few resource requirements. Only a lexicon and some unlabeled training text are required. Accuracy exceeds 96 . We describe implementation strategies and optimizations which result in highspeed operation. Three applications for tagging are described: phrase recognition; word sense disambiguation; and grammatical function assignment.",
"A consideration of problems engendered by the use of concordances to study additional word senses. The use of factor analysis as a research tool in lexicography is discussed. It is shown that this method provides information not obtainable through other approaches. This includes provision of several major senses for each word, an indication of the relationship between collocational patterns, & a more detailed analysis of the senses themselves. Sample factor analyses for the collocates of certain & right are presented & discussed. 3 Tables, 11 References. B. Annesser Murray"
]
}
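The bigram tagging model discussed in this record (estimate tag-transition and word-emission frequencies from a tagged text, then decode with Viterbi) can be illustrated with a minimal sketch. The toy corpus, tag set, and smoothing floor below are hypothetical, chosen only to make the idea concrete:

```python
from collections import defaultdict

# Hypothetical toy tagged corpus: lists of (word, tag) pairs.
corpus = [
    [("the", "DET"), ("dog", "N"), ("runs", "V")],
    [("the", "DET"), ("cat", "N"), ("sleeps", "V")],
]

# Count bigram tag transitions and word emissions.
trans = defaultdict(lambda: defaultdict(int))
emit = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    prev = "<s>"
    for word, tag in sent:
        trans[prev][tag] += 1
        emit[tag][word] += 1
        prev = tag

def prob(table, a, b):
    # Relative-frequency estimate with a tiny floor for unseen contexts.
    total = sum(table[a].values())
    return table[a][b] / total if total else 1e-6

def viterbi(words, tags=("DET", "N", "V")):
    # best[t] = (score, tag path) for the best sequence ending in tag t.
    best = {t: (prob(trans, "<s>", t) * prob(emit, t, words[0]), [t]) for t in tags}
    for w in words[1:]:
        best = {
            t: max(
                ((s * prob(trans, p, t) * prob(emit, t, w), path + [t])
                 for p, (s, path) in best.items()),
                key=lambda x: x[0],
            )
            for t in tags
        }
    return max(best.values(), key=lambda x: x[0])[1]

print(viterbi(["the", "dog", "sleeps"]))  # → ['DET', 'N', 'V']
```

The sketch is supervised (it counts from tagged text, as the bigram/trigram taggers above require); the HMM taggers cited in the record instead fit comparable parameters from untagged text plus a lexicon.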

cmplg9503009
 2951769629
 This paper presents an algorithm for tagging words whose partofspeech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.
 The present paper is concerned with tagging languages and sublanguages for which no a priori knowledge about grammatical categories is available, a situation that occurs often in practice @cite_4 .
 {
"cite_N": [
"@cite_4"
],
"mid": [
"144990771"
],
"abstract": [
"In this paper, we will discuss a method for assigning part of speech tags to words in an unannotated text corpus whose structure is completely unknown, with a little bit of help from an informant. Starting from scratch, automated and semiautomated methods are employed to build a part of speech tagger for the text. There are three steps to building the tagger: uncovering a set of part of speech tags, building a lexicon which indicates for each word its most likely tag, and learning rules to both correct mistakes in the learned lexicon and discover where contextual information can repair tagging mistakes. The long term goal of this work is to create a system which would enable somebody to take a large text in a language he does not know, and with only a few hours of help from a speaker of the language, accurately annotate the text with part of speech information."
]
}

cmplg9503009
 2951769629
 This paper presents an algorithm for tagging words whose partofspeech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.
 In a previous paper @cite_15 , we trained a neural network to disambiguate part-of-speech using context; however, no information about the word that is to be categorized was used. This scheme fails for cases like "The soldiers rarely come home." vs. "The soldiers will come home." where the context is identical and information about the lexical item in question ("rarely" vs. "will") is needed in combination with context for correct classification. In this paper, we will compare two tagging algorithms, one based on classifying word types, and one based on classifying words-plus-context.
 {
"cite_N": [
"@cite_15"
],
"mid": [
"2163514362"
],
"abstract": [
"This paper presents a method for inducing the parts of speech of a language and partofspeech labels for individual words from a large text corpus. Vector representations for the partofspeech of a word are formed from entries of its near lexical neighbors. A dimensionality reduction creates a space representing the syntactic categories of unambiguous words. A neural net trained on these spatial representations classifies individual contexts of occurrence of ambiguous words. The method classifies both ambiguous and unambiguous words correctly with high accuracy."
]
}

cmplg9607020
 1497662183
 In this paper, we propose a novel strategy which is designed to enhance the accuracy of the parser by simplifying complex sentences before parsing. This approach involves the separate parsing of the constituent subsentences within a complex sentence. To achieve that, the divide-and-conquer strategy first disambiguates the roles of the link words in the sentence and segments the sentence based on these roles. The separate parse trees of the segmented subsentences and the noun phrases within them are then synthesized to form the final parse. To evaluate the effects of this strategy on parsing, we compare the original performance of a dependency parser with the performance when it is enhanced with the divide-and-conquer strategy. When tested on 600 sentences of the IPSM'95 data sets, the enhanced parser saw a considerable error reduction of 21.2% in its accuracy.
 Magerman discussed the poor performance of his parser SPATTER on sentences with conjunctions @cite_9 . As a result, he augmented SPATTER's probabilistic model with an additional conjunction feature. However, he reported that though SPATTER's performance on conjoined sentences improves with the conjunction feature, a significant percentage is still misanalyzed, as the simple conjunction feature model finds it difficult to capture long distance dependencies.
 {
"cite_N": [
"@cite_9"
],
"mid": [
"1924403233"
],
"abstract": [
"Traditional natural language parsers are based on rewrite rule systems developed in an arduous, timeconsuming manner by grammarians. A majority of the grammarian's efforts are devoted to the disambiguation process, first hypothesizing rules which dictate constituent categories and relationships among words in ambiguous sentences, and then seeking exceptions and corrections to these rules. In this work, I propose an automatic method for acquiring a statistical parser from a set of parsed sentences which takes advantage of some initial linguistic input, but avoids the pitfalls of the iterative and seemingly endless grammar development process. Based on distributionallyderived and linguisticallybased features of language, this parser acquires a set of statistical decision trees which assign a probability distribution on the space of parse trees given the input sentence. These decision trees take advantage of significant amount of contextual information, potentially including all of the lexical information in the sentence, to produce highly accurate statistical models of the disambiguation process. By basing the disambiguation criteria selection on entropy reduction rather than human intuition, this parser development method is able to consider more sentences than a human grammarian can when making individual disambiguation rules. In experiments between a parser, acquired using this statistical framework, and a grammarian's rulebased parser, developed over a tenyear period, both using the same training material and test sentences, the decision tree parser significantly outperformed the grammarbased parser on the accuracy measure which the grammarian was trying to maximize, achieving an accuracy of 78 compared to the grammarbased parser's 69 ."
]
}

cmplg9607020
 1497662183
 In this paper, we propose a novel strategy which is designed to enhance the accuracy of the parser by simplifying complex sentences before parsing. This approach involves the separate parsing of the constituent subsentences within a complex sentence. To achieve that, the divide-and-conquer strategy first disambiguates the roles of the link words in the sentence and segments the sentence based on these roles. The separate parse trees of the segmented subsentences and the noun phrases within them are then synthesized to form the final parse. To evaluate the effects of this strategy on parsing, we compare the original performance of a dependency parser with the performance when it is enhanced with the divide-and-conquer strategy. When tested on 600 sentences of the IPSM'95 data sets, the enhanced parser saw a considerable error reduction of 21.2% in its accuracy.
 Jones explored another type of link word, punctuation @cite_7 . He showed successfully that for longer sentences, a grammar which makes use of punctuation massively outperforms one which does not. Besides improving parsing accuracy, the use of punctuation also significantly reduces the number of possible parses generated. However, as theoretical forays into the syntactic roles of punctuation are limited, the grammar he designed can only cover a subset of all punctuation phenomena. Unexpected constructs thus cause the grammar to fail completely.
 {
"cite_N": [
"@cite_7"
],
"mid": [
"2039117335"
],
"abstract": [
"Few, if any, current NLP systems make any significant use of punctuation. Intuitively, a treatment of punctuation seems necessary to the analysis and production of text. Whilst this has been suggested in the fields of discourse structure, it is still unclear whether punctuation can help in the syntactic field. This investigation attempts to answer this question by parsing some corpusbased material with two similar grammars  one including rules for punctuation, the other ignoring it. The punctuated grammar significantly outperforms the unpunctuated one, and so the conclusion is that punctuation can play a useful role in syntactic processing."
]
}

1908.11823
 2970705892
 In this work we investigate to which extent one can recover class probabilities within the empirical risk minimization (ERM) paradigm. The main aim of our paper is to extend existing results and emphasize the tight relations between empirical risk minimization and class probability estimation. Based on existing literature on excess risk bounds and proper scoring rules, we derive a class probability estimator based on empirical risk minimization. We then derive fairly general conditions under which this estimator will converge, in the L1-norm and in probability, to the true class probabilities. Our main contribution is to present a way to derive finite-sample L1-convergence rates of this estimator for different surrogate loss functions. We also study in detail which commonly used loss functions are suitable for this estimation problem and finally discuss the setting of model misspecification as well as a possible extension to asymmetric loss functions.
 perform an analysis similar to ours, as they also investigate convergence properties of a class probability estimator, though their starting point and end point are very different. While we start with theory from proper scoring rules, their paper starts directly with the class probability estimator as found in @cite_5 . The problem is that the estimator in @cite_5 only appears as a side remark, and it is unclear to which extent this is the best, the only, or even the correct choice. This paper contributes to closing this gap and answers those questions. They show that the estimator converges to a unique class probability model. In relation to this, one can view this paper as an investigation of this unique class probability model, and we give necessary and sufficient conditions that lead to convergence to the true class probabilities. Note also that their paper uses convex methods, while our work in comparison draws from the theory of proper scoring rules.
 {
"cite_N": [
"@cite_5"
],
"mid": [
"2023163512"
],
"abstract": [
"We study how closely the optimal Bayes error rate can be approximately reached using a classification algorithm that computes a classifier by minimizing a convex upper bound of the classification error function. The measurement of closeness is characterized by the loss function used in the estimation. We show that such a classification scheme can be generally regarded as a (nonmaximumlikelihood) conditional inclass probability estimate, and we use this analysis to compare various convex loss functions that have appeared in the literature. Furthermore, the theoretical insight allows us to design good loss functions with desirable properties. Another aspect of our analysis is to demonstrate the consistency of certain classification methods using convex risk minimization. This study sheds light on the good performance of some recently proposed linear classification methods including boosting and support vector machines. It also shows their limitations and suggests possible improvements."
]
}

1908.11823
 2970705892
 In this work we investigate to which extent one can recover class probabilities within the empirical risk minimization (ERM) paradigm. The main aim of our paper is to extend existing results and emphasize the tight relations between empirical risk minimization and class probability estimation. Based on existing literature on excess risk bounds and proper scoring rules, we derive a class probability estimator based on empirical risk minimization. We then derive fairly general conditions under which this estimator will converge, in the L1-norm and in probability, to the true class probabilities. Our main contribution is to present a way to derive finite-sample L1-convergence rates of this estimator for different surrogate loss functions. We also study in detail which commonly used loss functions are suitable for this estimation problem and finally discuss the setting of model misspecification as well as a possible extension to asymmetric loss functions.
 The probability estimator we use also appears in @cite_10 where it is used to derive excess risk bounds, referred to as surrogate risk bounds, for bipartite ranking. The methods used are very similar in the sense that these are also based on proper scoring rules. The difference is again the focus, and even more so the conditions used. They introduce the notion of strongly proper scoring rules which directly allows one to bound the @math norm, and thus the @math norm, of the estimator in terms of the excess risk. We show that convergence can be achieved already under milder conditions. We then use the concept of modulus of continuity, of which strongly proper scoring rules are a particular case, to analyze the rate of convergence.
 {
"cite_N": [
"@cite_10"
],
"mid": [
"2141789531"
],
"abstract": [
"The problem of bipartite ranking, where instances are labeled positive or negative and the goal is to learn a scoring function that minimizes the probability of misranking a pair of positive and negative instances (or equivalently, that maximizes the area under the ROC curve), has been widely studied in recent years. A dominant theoretical and algorithmic framework for the problem has been to reduce bipartite ranking to pairwise classification; in particular, it is well known that the bipartite ranking regret can be formulated as a pairwise classification regret, which in turn can be upper bounded using usual regret bounds for classification problems. Recently, (2011) showed regret bounds for bipartite ranking in terms of the regret associated with balanced versions of the standard (nonpairwise) logistic and exponential losses. In this paper, we show that such (nonpairwise) surrogate regret bounds for bipartite ranking can be obtained in terms of a broad class of proper (composite) losses that we term as strongly proper. Our proof technique is much simpler than that of (2011), and relies on properties of proper (composite) losses as elucidated recently by Reid and Williamson (2010, 2011) and others. Our result yields explicit surrogate bounds (with no hidden balancing terms) in terms of a variety of strongly proper losses, including for example logistic, exponential, squared and squared hinge losses as special cases. An important consequence is that standard algorithms minimizing a (nonpairwise) strongly proper loss, such as logistic regression and boosting algorithms (assuming a universal function class and appropriate regularization), are in fact consistent for bipartite ranking; moreover, our results allow us to quantify the bipartite ranking regret in terms of the corresponding surrogate regret. We also obtain tighter surrogate bounds under certain lownoise conditions via a recent result of Clemencon and Robbiano (2011)."
]
}
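The ERM-based class probability estimation discussed in these records (minimize a proper composite surrogate loss, then map the learned score through the inverse link) can be illustrated with a minimal sketch using the logistic loss, whose inverse link is the sigmoid. The data, sample size, and optimization settings below are synthetic assumptions for illustration only:

```python
import math
import random

random.seed(0)

# Synthetic labels y ∈ {-1, +1} drawn with a fixed true class probability.
p_true = 0.7
ys = [1 if random.random() < p_true else -1 for _ in range(5000)]

# ERM over a single scalar score s with the logistic surrogate loss:
#   L(s) = mean over the sample of log(1 + exp(-y * s)),
# minimized here by plain gradient descent.
s = 0.0
for _ in range(300):
    grad = sum(-y / (1 + math.exp(y * s)) for y in ys) / len(ys)
    s -= 0.5 * grad

# Inverting the logistic link recovers a class probability estimate.
p_hat = 1 / (1 + math.exp(-s))
print(round(p_hat, 2))  # close to p_true
```

The same recipe with a non-proper surrogate (e.g. the hinge loss) would not recover the probability, which is the distinction the loss-function analysis in the paper is concerned with.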

1908.11829
 2970633507
 We consider the minimum cut problem in undirected, weighted graphs. We give a simple algorithm to find a minimum cut that @math respects (cuts two edges of) a spanning tree @math of a graph @math . This procedure can be used in place of the complicated subroutine given in Karger's near-linear time minimum cut algorithm (J. ACM, 2000). We give a self-contained version of Karger's algorithm with the new procedure, which is easy to state and relatively simple to implement. It produces a minimum cut on an @math edge, @math vertex graph in @math time with high probability. This performance matches that achieved by Karger, thereby matching the current state of the art.
 On an unweighted graph, Gabow @cite_13 showed how to compute the minimum cut in @math time, where @math is the capacity of the minimum cut. Karger @cite_29 improved Gabow's algorithm by applying random sampling, achieving a Las Vegas runtime of @math (the @math notation hides @math factors). The sampling technique developed by Karger @cite_29 , combined with the tree-packing technique devised by Gabow @cite_13 , forms the basis of Karger's near-linear time minimum cut algorithm @cite_16 . As previously mentioned, this technique finds the minimum cut in an undirected, weighted graph in @math time with high probability.
 {
"cite_N": [
"@cite_29",
"@cite_16",
"@cite_13"
],
"mid": [
"2150516767",
"1964510837",
"2012287357"
],
"abstract": [
"We use random sampling as a tool for solving undirected graph problems. We show that the sparse graph, or skeleton, that arises when we randomly sample a graph's edges will accurately approximate the value of all cuts in the original graph with high probability. This makes sampling effective for problems involving cuts in graphs. We present fast randomized (Monte Carlo and Las Vegas) algorithms for approximating and exactly finding minimum cuts and maximum flows in unweighted, undirected graphs. Our cutapproximation algorithms extend unchanged to weighted graphs while our weightedgraph flow algorithms are somewhat slower. Our approach gives a general paradigm with potential applications to any packing problem. It has since been used in a nearlinear time algorithm for finding minimum cuts, as well as faster cut and flow algorithms. Our sampling theorems also yield faster algorithms for several other cutbased problems, including approximating the best balanced cut of a graph, finding a kconnected orientation of a 2kconnected graph, and finding integral multicommodity flows in graphs with a great deal of excess capacity. Our methods also improve the efficiency of some parallel cut and flow algorithms. Our methods also apply to the network design problem, where we wish to build a network satisfying certain connectivity requirements between vertices. We can purchase edges of various costs and wish to satisfy the requirements at minimum total cost. Since our sampling theorems apply even when the sampling probabilities are different for different edges, we can apply randomized rounding to solve network design problems. This gives approximation algorithms that guarantee much better approximations than previous algorithms whenever the minimum connectivity requirement is large. As a particular example, we improve the best approximation bound for the minimum kconnected subgraph problem from 1.85 to [math not displayed].",
"We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a \"semiduality\" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized (Monte Carlo) algorithm that finds a minimum cut in an m edge, n vertex graph with high probability in O (m log 3 n ) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O( m log 3 n ) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O ( n 2 log 3 n ). Other applications of the treepacking approach are new, nearly tight bounds on the number of nearminimum cuts a graph may have and a new data structure for representing them in a spaceefficient manner.",
"We present an algorithm that finds the edge connectivity ? of a graph having n vectices and m edges. The running time is O(? m log(n2 m)) for directed graphs and slightly less for undirected graphs, O(m+?2n log(n ?)). This improves the previous best time bounds, O(min mn, ?2n2 ) for directed graphs and O(?n2) for undirected graphs. We present an algorithm that finds k edgedisjoint arborescences on a directed graph in time O((kn)2). This improves the previous best time bound, O(kmn + k3n2). Unlike previous work, our approach is based on two theorems of Edmonds that link these two problems and show how they can be solved."
]
}

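The contraction idea behind the randomized minimum-cut algorithms summarized in the record above can be sketched in a few lines. The sketch below is illustrative only: it implements the basic repeated-contraction scheme, not the sampling or tree-packing accelerations the abstracts describe, and the function name, trial count, and seed are our own choices.

```python
import random

def karger_min_cut(edges, n_trials=200, seed=1):
    """Monte Carlo minimum cut via repeated random edge contraction
    (basic contraction scheme; no sampling/tree-packing speedups)."""
    vertices = {v for e in edges for v in e}
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(n_trials):
        parent = {v: v for v in vertices}

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v

        components = len(vertices)
        while components > 2:
            u, v = edges[rng.randrange(len(edges))]  # uniform edge pick
            ru, rv = find(u), find(v)
            if ru != rv:              # contract; self-loop picks are skipped
                parent[ru] = rv
                components -= 1
        # edges still crossing the two supernodes form a cut
        cut = sum(1 for a, b in edges if find(a) != find(b))
        best = min(best, cut)
    return best
```

With enough independent trials, the minimum cut found over all trials equals the true minimum cut with high probability, which is the Monte Carlo guarantee the abstracts refer to.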
1908.11829
 2970633507
 We consider the minimum cut problem in undirected, weighted graphs. We give a simple algorithm to find a minimum cut that @math respects (cuts two edges of) a spanning tree @math of a graph @math . This procedure can be used in place of the complicated subroutine given in Karger's near-linear time minimum cut algorithm (J. ACM, 2000). We give a self-contained version of Karger's algorithm with the new procedure, which is easy to state and relatively simple to implement. It produces a minimum cut on an @math edge, @math vertex graph in @math time with high probability. This performance matches that achieved by Karger, thereby matching the current state of the art.
 A recent development uses low-conductance cuts to find the minimum cut in an undirected unweighted graph. This technique was introduced by Kawarabayashi and Thorup @cite_2 , who achieve near-linear deterministic time (estimated to be @math ). This was improved by Henzinger, Rao, and Wang @cite_31 , who achieve deterministic runtime @math . Although the latter algorithm is more efficient than Karger's algorithm @cite_16 on unweighted graphs, both it and the procedure it was based on @cite_2 are quite involved, which makes them largely impractical for implementation purposes.
 {
"cite_N": [
"@cite_31",
"@cite_16",
"@cite_2"
],
"mid": [
"2569104968",
"1964510837",
"2963972775"
],
"abstract": [
"We study the problem of computing a minimum cut in a simple, undirected graph and give a deterministic O(m log2 n log log2 n) time algorithm. This improves both on the best previously known deterministic running time of O(m log12 n) (Kawarabayashi and Thorup [12]) and the best previously known randomized running time of O(m log3 n) (Karger [11]) for this problem, though Karger's algorithm can be further applied to weighted graphs. Our approach is using the Kawarabayashi and Thorup graph compression technique, which repeatedly finds lowconductance cuts. To find these cuts they use a diffusionbased local algorithm. We use instead a flowbased local algorithm and suitably adjust their framework to work with our flowbased subroutine. Both flow and diffusion based methods have a long history of being applied to finding low conductance cuts. Diffusion algorithms have several variants that are naturally local while it is more complicated to make flow methods local. Some prior work has proven nice properties for local flow based algorithms with respect to improving or cleaning up low conductance cuts. Our flow subroutine, however, is the first that is both local and produces low conductance cuts. Thus, it may be of independent interest.",
"We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a \"semiduality\" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized (Monte Carlo) algorithm that finds a minimum cut in an m edge, n vertex graph with high probability in O (m log 3 n ) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O( m log 3 n ) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O ( n 2 log 3 n ). Other applications of the treepacking approach are new, nearly tight bounds on the number of nearminimum cuts a graph may have and a new data structure for representing them in a spaceefficient manner.",
"We present a deterministic algorithm that computes the edgeconnectivity of a graph in nearlinear time. This is for a simple undirected unweighted graph G with n vertices and m edges. This is the first o(mn) time deterministic algorithm for the problem. Our algorithm is easily extended to find a concrete minimum edgecut. In fact, we can construct the classic cactus representation of all minimum cuts in nearlinear time. The previous fastest deterministic algorithm by Gabow from STOC '91 took O(m+λ2 n), where λ is the edge connectivity, but λ can be as big as n−1. Karger presented a randomized nearlinear time Monte Carlo algorithm for the minimum cut problem at STOC’96, but the returned cut is only minimum with high probability. Our main technical contribution is a nearlinear time algorithm that contracts vertex sets of a simple input graph G with minimum degree Δ, producing a multigraph Ḡ with O(m Δ) edges, which preserves all minimum cuts of G with at least two vertices on each side. In our deterministic nearlinear time algorithm, we will decompose the problem via lowconductance cuts found using PageRank a la Brin and Page (1998), as analyzed by Andersson, Chung, and Lang at FOCS’06. Normally, such algorithms for lowconductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. However, in our case, we have so much structure that no guessing is needed."
]
}

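A cut is said to "2-respect" a spanning tree T when exactly two edges of T cross it, as in the abstract above. A brute-force sketch (illustrative only, not the paper's efficient procedure) makes the definition concrete by enumerating bipartitions of a small graph:

```python
from itertools import combinations

def min_2_respecting_cut(vertices, graph_edges, tree_edges):
    """Brute-force the minimum cut whose edge set contains exactly two
    edges of the given spanning tree (a cut that 2-respects the tree).
    Exponential; only meant to make the definition concrete."""
    def crossing(side, edge):
        return (edge[0] in side) != (edge[1] in side)

    best = None
    for r in range(1, len(vertices)):
        for subset in combinations(vertices, r):
            side = set(subset)
            if sum(crossing(side, e) for e in tree_edges) == 2:
                weight = sum(crossing(side, e) for e in graph_edges)
                best = weight if best is None else min(best, weight)
    return best
```

On a 4-cycle with a path as its spanning tree, every minimum cut (value 2) happens to 2-respect the tree, which is the structural fact the fast algorithms exploit.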
1908.11656
 2971124296
 We propose LU-Net, for LiDAR U-Net, a new method for the semantic segmentation of a 3D LiDAR point cloud. Instead of applying some global 3D segmentation method such as PointNet, we propose an end-to-end architecture for LiDAR point cloud semantic segmentation that efficiently solves the problem as an image processing problem. We first extract high-level 3D features for each point given its 3D neighbors. Then, these features are projected into a 2D multi-channel range-image by considering the topology of the sensor. Thanks to these learned features and this projection, we can finally perform the segmentation using a simple U-Net segmentation network, which performs very well while being very efficient. In this way, we can exploit both the 3D nature of the data and the specificity of the LiDAR sensor. This approach outperforms the state-of-the-art by a large margin on the KITTI dataset, as our experiments show. Moreover, this approach operates at 24 fps on a single GPU. This is above the acquisition rate of common LiDAR sensors, which makes it suitable for real-time applications.
 Semantic segmentation of images has been the subject of many works in the past years. Recently, deep learning methods have largely outperformed previous ones. The method presented in @cite_16 was the first to propose an accurate end-to-end network for semantic segmentation. This method is based on an encoder in which each scale is used to compute the final segmentation. Only a few months later, the U-Net architecture @cite_20 was proposed for the semantic segmentation of medical images. This method is an encoder-decoder able to provide highly precise segmentations. These two methods have largely influenced recent works such as DeepLabV3+ @cite_11 , which uses dilated convolutional layers and spatial pyramid pooling modules in an encoder-decoder structure to improve the quality of the prediction. Other approaches explore multi-scale architectures to produce and fuse segmentations performed at different scales @cite_15 @cite_7 . Most of these methods are able to produce very accurate results on various types of images (medical, outdoor, indoor). The survey @cite_1 of CNN methods for semantic segmentation provides a deep analysis of some recent techniques. This work demonstrates that a combination of various components would most likely improve segmentation results on wider classes of objects.
 {
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_15",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2611259176",
"2898743055",
"2563705555",
"1903029394",
"1901129140",
"2964309882"
],
"abstract": [
"We focus on the challenging task of realtime semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixelwise label inference. We propose an image cascade network (ICNet) that incorporates multiresolution branches under proper label guidance to address this challenge. We provide indepth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve highquality segmentation. Our system yields realtime inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCOStuff.",
"Majority of CNN architecture design is aimed at achieving high accuracy in public benchmarks by increasing the complexity. Typically, they are overspecified by a large margin and can be optimized by a factor of 10100x with only a small reduction in accuracy. In spite of the increase in computational power of embedded systems, these networks are still not suitable for embedded deployment. There is a large need to optimize for hardware and reduce the size of the network by orders of magnitude for computer vision applications. This has led to a growing community which is focused on designing efficient networks. However, CNN architectures are evolving rapidly and efficient architectures seem to lag behind. There is also a gap in understanding the hardware architecture details and incorporating it into the network design. The motivation of this paper is to systematically summarize efficient design techniques and provide guidelines for an application developer. We also perform a case study by benchmarking various semantic segmentation algorithms for autonomous driving.",
"Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multipath refinement network that explicitly exploits all the information available along the downsampling process to enable highresolution prediction using longrange residual connections. In this way, the deeper layers that capture highlevel semantic features can be directly refined using finegrained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective endtoend training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new stateoftheart results on seven public datasets. In particular, we achieve an intersectionoverunion score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained endtoend, pixelstopixels, exceed the stateoftheart in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondinglysized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by finetuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves stateoftheart segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained endtoend from very few images and outperforms the prior best method (a slidingwindow convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.unifreiburg.de people ronneber unet .",
"Spatial pyramid pooling module or encodedecoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multiscale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fieldsofview, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoderdecoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89 and 82.1 without any postprocessing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https: github.com tensorflow models tree master research deeplab."
]
}

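The encoder-decoder-with-skip-connections idea shared by the U-Net and FCN papers cited in the record above can be illustrated at toy scale: coarse context is recovered by upsampling, while a skip connection carries fine detail forward. Everything below (the 1-D signal, the average-pooling "encoder", the nearest-neighbour "decoder") is a deliberately minimal stand-in for the real convolutional architectures, not an implementation of them.

```python
def downsample(x):            # encoder step: 2x average pooling
    return [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]

def upsample(x):              # decoder step: nearest-neighbour, 2x
    return [v for v in x for _ in (0, 1)]

def skip_fuse(fine, coarse):  # skip connection: channel concatenation
    return list(zip(fine, upsample(coarse)))

signal = [0, 0, 1, 1, 1, 0, 0, 0]
coarse = downsample(signal)   # context, but boundary positions blur
fused  = skip_fuse(signal, coarse)  # fine detail + coarse context
```

In the real networks the pooling/upsampling steps are learned convolutions and the fused tensor feeds further layers; the point here is only the data flow: the decoder sees both the blurred context and the original resolution.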
1908.11656
 2971124296
 We propose LU-Net, for LiDAR U-Net, a new method for the semantic segmentation of a 3D LiDAR point cloud. Instead of applying some global 3D segmentation method such as PointNet, we propose an end-to-end architecture for LiDAR point cloud semantic segmentation that efficiently solves the problem as an image processing problem. We first extract high-level 3D features for each point given its 3D neighbors. Then, these features are projected into a 2D multi-channel range-image by considering the topology of the sensor. Thanks to these learned features and this projection, we can finally perform the segmentation using a simple U-Net segmentation network, which performs very well while being very efficient. In this way, we can exploit both the 3D nature of the data and the specificity of the LiDAR sensor. This approach outperforms the state-of-the-art by a large margin on the KITTI dataset, as our experiments show. Moreover, this approach operates at 24 fps on a single GPU. This is above the acquisition rate of common LiDAR sensors, which makes it suitable for real-time applications.
 Recently, SqueezeSeg, a novel approach for the semantic segmentation of a LiDAR point cloud represented as a spherical range-image @cite_14 , was proposed. This representation allows the segmentation to be performed using simple 2D convolutions, which lowers the computational cost while keeping good accuracy. The architecture is derived from the SqueezeNet image segmentation method @cite_13 : the intermediate layers are "fire layers", made of one squeeze module and one expansion module. Later on, the same authors improved this method in @cite_3 by adding a context aggregation module and by considering focal loss and batch normalization to improve the quality of the segmentation. A similar range-image approach was proposed in @cite_17 , where an Atrous Spatial Pyramid Pooling module @cite_4 and a squeeze reweighting layer @cite_8 are added. Finally, in @cite_10 , the authors propose to input a range-image directly to the U-Net architecture described in @cite_20 . This method achieves results that are comparable to the state of the art of range-image methods with a much simpler and more intuitive architecture. All these range-image methods succeed in real-time computation. However, their results often lack accuracy, which limits their usage in real scenarios.
 {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_10",
"@cite_3",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2412782625",
"",
"2946747865",
"2968557240",
"2279098554",
"1901129140",
"2884355388"
],
"abstract": [
"",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fieldsofviews, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of maxpooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new stateofart at the PASCAL VOC2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCALContext, PASCALPersonPart, and Cityscapes. All of our code is made publicly available online.",
"",
"This paper proposes RIUNet (for RangeImage UNet), the adaptation of a popular semantic segmentation network for the semantic segmentation of a 3D LiDAR point cloud. The point cloud is turned into a 2D rangeimage by exploiting the topology of the sensor. This image is then used as input to a Unet. This architecture has already proved its efficiency for the task of semantic segmentation of medical images. We demonstrate how it can also be used for the accurate semantic segmentation of a 3D LiDAR point cloud and how it represents a valid bridge between image processing and 3D point cloud processing. Our model is trained on rangeimages built from KITTI 3D object detection dataset. Experiments show that RIUNet, despite being very simple, offers results that are comparable to the stateoftheart of rangeimage based methods. Finally, we demonstrate that this architecture is able to operate at 90fps on a single GPU, which enables deployment for realtime segmentation.",
"Earlier work demonstrates the promise of deeplearningbased approaches for point cloud segmentation; however, these approaches need to be improved to be practically useful. To this end, we introduce a new model SqueezeSegV2. With an improved model structure, SqueezeSetV2 is more robust against dropout noises in LiDAR point cloud and therefore achieves significant accuracy improvement. Training models for point cloud segmentation requires large amounts of labeled data, which is expensive to obtain. To sidestep the cost of data collection and annotation, simulators such as GTAV can be used to create unlimited amounts of labeled, synthetic data. However, due to domain shift, models trained on synthetic data often do not generalize well to the real world. Existing domainadaptation methods mainly focus on images and most of them cannot be directly applied to point clouds. We address this problem with a domainadaptation training pipeline consisting of three major components: 1) learned intensity rendering, 2) geodesic correlation alignment, and 3) progressive domain calibration. When trained on real data, our new model exhibits segmentation accuracy improvements of 6.08.6 over the original SqueezeSeg. When training our new model on synthetic data using the proposed domain adaptation pipeline, we nearly double test accuracy on realworld data, from 29.0 to 57.4 . Our source code and synthetic dataset are open sourced11https: github.com xuanyuzhou98 SqueezeSegV2",
"Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNetlevel accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained endtoend from very few images and outperforms the prior best method (a slidingwindow convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.unifreiburg.de people ronneber unet .",
"In this paper, we propose PointSeg, a realtime endtoend semantic segmentation method for roadobjects based on spherical images. We take the spherical image, which is transformed from the 3D LiDAR point clouds, as input of the convolutional neural networks (CNNs) to predict the pointwise semantic map. To make PointSeg applicable on a mobile system, we build the model based on the lightweight network, SqueezeNet, with several improvements. It maintains a good balance between memory cost and prediction performance. Our model is trained on spherical images and label masks projected from the KITTI 3D object detection dataset. Experiments show that PointSeg can achieve competitive accuracy with 90fps on a single GPU 1080ti. which makes it quite compatible for autonomous driving applications."
]
}

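The spherical range-image projection that the SqueezeSeg-style methods in the record above rely on can be sketched directly: each 3D point is binned by its azimuth and elevation angles, and the range becomes a pixel value. The image size and vertical field-of-view defaults below are illustrative, not the parameters of any particular sensor.

```python
import math

def to_range_image(points, h=64, w=512, fov_up=2.0, fov_down=-24.8):
    """Project 3-D points (x, y, z) onto an h x w spherical range-image.
    Illustrative sketch: one channel (range), last point wins per pixel."""
    fov_up, fov_down = math.radians(fov_up), math.radians(fov_down)
    img = [[0.0] * w for _ in range(h)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        yaw = math.atan2(y, x)            # azimuth in [-pi, pi]
        pitch = math.asin(z / r)          # elevation
        col = int((0.5 * (1 - yaw / math.pi)) * w)
        row = int((fov_up - pitch) / (fov_up - fov_down) * h)
        if 0 <= row < h and 0 <= col < w:
            img[row][col] = r             # store range as the pixel value
    return img
```

Real pipelines store several channels per pixel (x, y, z, range, intensity) and feed the result to 2D convolutions, which is what makes these methods fast.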
1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bring compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 Several temporal logics have been proposed that make joint use of actions and propositions on states: ACTL* @cite_34 , RLTL @cite_0 , SELTL @cite_4 , TLR* @cite_25 . There are also transition structures with mixed ingredients: LKS @cite_4 , L2TS @cite_54 . Although all of them bring actions (or transitions) into focus, none tries to be utterly egalitarian, as we do.
 {
"cite_N": [
"@cite_4",
"@cite_54",
"@cite_0",
"@cite_34",
"@cite_25"
],
"mid": [
"",
"2167352300",
"2024215898",
"1602925513",
"1599831144"
],
"abstract": [
"",
"Three temporal logics are introduced which induce on labeled transition systems the same identifications as branching bisimulation. The first is an extension of HennessyMilner logic with a kind of unit operator. The second is another extension of HennessyMilner logic which exploits the power of backward modalities. The third is CTL* with the nexttime operator interpreted over all paths, not just over maximal ones. A relevant side effect of the last characterization is that it sets a bridge between the state and eventbased approaches to the semantics of concurrent systems. >",
"We study efficient translations of Regular Linear Temporal Logic ( ) into automata on infinite words. is a temporal logic that fuses Linear Temporal Logic (LTL) with regular expressions, extending its expressive power to all @math regular languages. The first contribution of this paper is a novel bottom up translation from into alternating parity automata of linear size that requires only colors @math , @math and @math . Moreover, the resulting automata enjoy the stratified internal structure of hesitant automata. Our translation is defined inductively for every operator, and does not require an upfront transformation of the expression into a normal form. Our construction builds at every step two automata: one equivalent to the formula and another to its complement. Inspired by this construction, our second contribution is to extend with new operators, including universal sequential composition, that enrich the logic with duality laws and negation normal forms. The third contribution is a ranking translation of the resulting alternating automata into nondeterministic automata. To provide this efficient translation we introduce the notion of stratified rankings, and show how the translation is optimal for the LTL fragment of the logic.",
"A temporal logic based on actions rather than on states is presented and interpreted over labelled transition systems. It is proved that it has essentially the same power as CTL*, a temporal logic interpreted over Kripke structures. The relationship between the two logics is established by introducing two mappings from Kripke structures to labelled transition systems and viceversa and two transformation functions between the two logics which preserve truth. A branching time version of the action based logic is also introduced. This new logic for transition systems can play an important role as an intermediate between HennessyMilner Logic and the modal μcalculus. It is sufficiently expressive to describe safety and liveness properties but permits model checking in linear time.",
"This paper presents the temporal logic of rewriting @math . Syntactically, @math is a very simple extension of @math which just adds action atoms, in the form of spatial action patterns, to @math . Semantically and pragmatically, however, when used together with rewriting logic as a \"tandem\" of system specification and property specification logics, it has substantially more expressive power than purely statebased logics like @math , or purely actionbased logics like A @math . Furthermore, it avoids the system property mismatch problem experienced in statebased or actionbased logics, which makes many useful properties inexpressible in those frameworks without unnatural changes to a system's specification. The advantages in expresiveness of @math are gained without losing the ability to use existing tools and algorithms to model check its properties: a faithful translation of models and formulas is given that allows verifying @math properties with @math model checkers."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bring compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 The best move towards egalitarianism we know of is the temporal logic of rewriting, TLR* (which was an inspiration for the present work). The explanations and examples in @cite_25 are good arguments for an egalitarian view. Consider this formula to express fairness in the execution of a rule with label @math : @math . The proposition @math is on states: it means that the current state of the system admits the rule @math to be applied to it. But @math is on transitions: it means that a transition is being executed according to rule @math . The simplicity of the formula is only possible by being egalitarian.
 {
"cite_N": [
"@cite_25"
],
"mid": [
"1599831144"
],
"abstract": [
"This paper presents the temporal logic of rewriting @math . Syntactically, @math is a very simple extension of @math which just adds action atoms, in the form of spatial action patterns, to @math . Semantically and pragmatically, however, when used together with rewriting logic as a \"tandem\" of system specification and property specification logics, it has substantially more expressive power than purely statebased logics like @math , or purely actionbased logics like A @math . Furthermore, it avoids the system property mismatch problem experienced in statebased or actionbased logics, which makes many useful properties inexpressible in those frameworks without unnatural changes to a system's specification. The advantages in expresiveness of @math are gained without losing the ability to use existing tools and algorithms to model check its properties: a faithful translation of models and formulas is given that allows verifying @math properties with @math model checkers."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bring compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 Plain TLR*, as described in @cite_25 , stays a step away from our goal, because transitions are given by proof terms, which univocally determine one origin state and one destination state for each transition. TLR* uses proof-term patterns (called spatial action patterns), which are used literally in temporal formulas. The problem is that, in this way, a TLR* formula is tied to a particular algebraic specification (one in which the pattern makes sense). In contrast, an LTL or CTL formula is meaningful by itself and can be used on any system specification by using atomic proposition definitions as interfaces. Notably, propositions on transitions have been added to plain TLR*, in one way or another, in all the implementations of model checkers for (the linear-time subset of) TLR* that we are aware of @cite_36 @cite_51 @cite_14 @cite_9 . None of them, however, allows the same proposition to be defined both on states and on transitions, which we need for flexible synchronization.
 {
"cite_N": [
"@cite_14",
"@cite_36",
"@cite_9",
"@cite_51",
"@cite_25"
],
"mid": [
"1986424898",
"1583068981",
"",
"2151284417",
"1599831144"
],
"abstract": [
"This paper presents the linear temporal logic of rewriting (LTLR) model checker under localized fairness assumptions for the Maude system. The linear temporal logic of rewriting extends linear temporal logic (LTL) with spatial action patterns that describe patterns of rewriting events. Since LTLR generalizes and extends various state-based and event-based logics, mixed properties involving both state propositions and actions, such as fairness properties, can be naturally expressed in LTLR. However, often the needed fairness assumptions cannot even be expressed as propositional temporal logic formulas because they are parametric, that is, they correspond to universally quantified temporal logic formulas. Such universal quantification is succinctly captured by the notion of localized fairness; for example, fairness is localized to the object name parameter in object fairness conditions. We summarize the foundations, and present the language design and implementation of the Maude Fair LTLR model checker, developed at the C++ level within the Maude system by extending the existing Maude LTL model checker. Our tool provides not only an efficient LTLR model checking algorithm under parameterized fairness assumptions but also suitable specification languages as part of its user interface. The expressiveness and effectiveness of the Maude Fair LTLR model checker are illustrated by five case studies. This is the first tool we are aware of that can model check temporal logic properties under parameterized fairness assumptions. We develop the LTLR model checker under localized fairness assumptions. The linear temporal logic of rewriting (LTLR) extends LTL with action patterns. Localized fairness specifies parameterized fairness over generic system entities. We present the foundations, the language design, and the implementation of our tool. We illustrate the expressiveness and effectiveness of our tool with case studies.",
"This paper presents the foundation, design, and implementation of the Linear Temporal Logic of Rewriting model checker as an extension of the Maude system. The Linear Temporal Logic of Rewriting (LTLR) extends linear temporal logic with spatial action patterns which represent rewriting events. LTLR generalizes and extends various state-based and event-based logics and aims to avoid certain types of mismatches between a system and its temporal logic properties. We have implemented the LTLR model checker at the C++ level within the Maude system by extending the existing Maude LTL model checker. Our LTLR model checker provides very expressive methods to define event-related properties as well as state-related properties, or, more generally, properties involving both events and state predicates. This greater expressiveness is gained without compromising performance, because the LTLR implementation minimizes the extra costs involved in handling the events of systems.",
"",
"This paper presents a model checker for LTLR, a subset of the temporal logic of rewriting TLR* extending linear temporal logic with spatial action patterns. Both LTLR and TLR* are very expressive logics generalizing well-known state-based and action-based logics. Furthermore, the semantics of TLR* is given in terms of rewrite theories, so that the concurrent systems on which the LTLR properties are model checked can be specified at a very high level with rewrite rules. This paper answers a nontrivial challenge, namely, to be able to build a model checker to model check LTLR formulas on rewrite theories with relatively little effort by reusing Maude's LTL model checker for rewrite theories. For this, the reflective features of both rewriting logic and its Maude implementation have proved extremely useful.",
"This paper presents the temporal logic of rewriting @math . Syntactically, @math is a very simple extension of @math which just adds action atoms, in the form of spatial action patterns, to @math . Semantically and pragmatically, however, when used together with rewriting logic as a \"tandem\" of system specification and property specification logics, it has substantially more expressive power than purely state-based logics like @math , or purely action-based logics like A @math . Furthermore, it avoids the system/property mismatch problem experienced in state-based or action-based logics, which makes many useful properties inexpressible in those frameworks without unnatural changes to a system's specification. The advantages in expressiveness of @math are gained without losing the ability to use existing tools and algorithms to model check its properties: a faithful translation of models and formulas is given that allows verifying @math properties with @math model checkers."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bring compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 Our paper @cite_5 contains a first definition of the synchronous composition of rewrite systems. There, we proposed to synchronise the execution of rules from different systems based on the coincidence of (atomic) rule labels. This reflects the synchronisation of actions in process algebras and in automata, for example. We also proposed to synchronise states by agreement on the Boolean values of propositions defined on them. We implemented that concept of synchronisation in Maude. That proposal had the advantage of using standard machinery already existing in Maude: rule labels are basic elements of Maude's syntax, and propositions are customarily defined and used to build LTL formulas for Maude's model checker. Why is the present, much more involved paper needed? We refer the reader to . In short: Boolean-valued propositions are not enough to allow flexible synchronisation and value passing; we need to give more substance to transitions; and we want to be able to synchronise an action in one system with several consecutive ones in the other. A complex, realistic example like the one on the alternating bit protocol in @cite_33 would not be possible in our previous setting.
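The label-based synchronisation just described can be sketched at the transition-system level. The following Python fragment is an illustrative approximation only: the function name, the encoding of systems as dicts, and the toy sender/receiver are invented here, and the rewrite-system and state-proposition machinery of the cited paper is not reproduced. Shared labels must fire jointly; all other labels interleave:

```python
# Minimal sketch of a synchronous product of two labelled transition
# systems. A system is a dict: transitions[state] = set of
# (label, next_state) pairs. Labels in `shared` synchronize.

def sync_product(ts1, init1, ts2, init2, shared):
    """Interleave non-shared labels; force shared labels to fire jointly."""
    trans, frontier, seen = {}, [(init1, init2)], set()
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        s1, s2 = s
        moves = set()
        for (a, n1) in ts1.get(s1, ()):
            if a in shared:   # joint move: both sides take label a
                moves |= {(a, (n1, n2)) for (b, n2) in ts2.get(s2, ()) if b == a}
            else:             # local move of the first system
                moves.add((a, (n1, s2)))
        for (b, n2) in ts2.get(s2, ()):
            if b not in shared:  # local move of the second system
                moves.add((b, (s1, n2)))
        trans[s] = moves
        frontier.extend(n for (_, n) in moves)
    return trans

# Toy: a sender and a receiver that must synchronize on "msg".
sender = {"idle": {("work", "ready")}, "ready": {("msg", "idle")}}
receiver = {"wait": {("msg", "got")}, "got": {("ack", "wait")}}
prod = sync_product(sender, "idle", receiver, "wait", shared={"msg"})
print(sorted(prod[("ready", "wait")]))  # [('msg', ('idle', 'got'))]
```

In the composed system, "msg" can only happen when both components are ready for it, which is the process-algebra-style synchronisation the paragraph refers to.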
 {
"cite_N": [
"@cite_5",
"@cite_33"
],
"mid": [
"2521108378",
"2890370072"
],
"abstract": [
"We present a concept of module composition for rewrite systems that we call synchronous product, and also a corresponding concept for doubly labeled transition systems (as proposed by De Nicola and Vaandrager) used as semantics for the former. In both cases, synchronization happens on states and on transitions, providing in this way more flexibility and more natural specifications. We describe our implementation in Maude, a rewriting-logic-based language and system. A series of examples shows their use for modular specification and hints at other possible uses, including modular verification.",
"Our overall goal is compositional specification and verification in rewriting logic. In previous work, we described a way to compose system specifications using the operation we call synchronous composition. In this paper, we propose the use of parameterized programming to encapsulate and handle specifications: theories represent interfaces; modules parameterized by such theories instruct on how to assemble the parameter systems using the synchronous composition operation; the implementation of the whole system is then obtained by instantiating the parameters with implementations for the components. We show, and illustrate with examples, how this setting facilitates compositionality."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 On a different topic, the paper @cite_33 also describes the use of parameterised programming to add encapsulation to our setting. We have already mentioned it in . We outline it roughly, referring to the example from , on two controlled trains. First, a so-called theory is used to state that a train is any system that defines a Boolean-valued property called "isMoving". Requirements for reckoners and controllers are similarly stated. These are our interface specifications. The composition is specified in a parameterised module whose formal parameters are the theories (that is, the interfaces). Thus, the composition can only be specified using the formal names and the properties in the interfaces. The particular implementations of trains and of the other components are then written, and the needed properties are defined. Finally, the parameters of the composition module are instantiated with the component implementations, producing an implementation of the complete system.
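The interface-then-instantiate scheme above has a rough analogue in mainstream typed languages. The following Python sketch is only an analogy, not the Maude mechanism: theories are rendered as Protocols, the parameterised module as a function written against those Protocols, and instantiation as passing concrete implementations. All class and function names here are invented:

```python
# Rough analogue of parameterised programming: interfaces (theories),
# a composition written only against them, then instantiation.
from typing import Protocol

class Train(Protocol):           # the "theory": any train defines isMoving
    def is_moving(self) -> bool: ...

class Controller(Protocol):      # interface for controllers
    def may_enter(self, section: str) -> bool: ...

def composed_system(t1: Train, t2: Train, ctrl: Controller) -> bool:
    """Parameterised 'module': uses only the interface names.
    Returns True when a sample safety condition holds."""
    return not (t1.is_moving() and t2.is_moving()
                and ctrl.may_enter("shared"))

class SimpleTrain:               # a particular implementation
    def __init__(self, moving: bool): self.moving = moving
    def is_moving(self) -> bool: return self.moving

class StrictController:
    def may_enter(self, section: str) -> bool: return False

class PermissiveController:
    def may_enter(self, section: str) -> bool: return True

# "Instantiating the parameters" with concrete implementations:
print(composed_system(SimpleTrain(True), SimpleTrain(True), StrictController()))      # True
print(composed_system(SimpleTrain(True), SimpleTrain(True), PermissiveController()))  # False
```

As in the text, the composition never mentions a concrete train or controller: only the names and properties declared in the interfaces.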
 {
"cite_N": [
"@cite_33"
],
"mid": [
"2890370072"
],
"abstract": [
"Our overall goal is compositional specification and verification in rewriting logic. In previous work, we described a way to compose system specifications using the operation we call synchronous composition. In this paper, we propose the use of parameterized programming to encapsulate and handle specifications: theories represent interfaces; modules parameterized by such theories instruct on how to assemble the parameter systems using the synchronous composition operation; the implementation of the whole system is then obtained by instantiating the parameters with implementations for the components. We show, and illustrate with examples, how this setting facilitates compositionality."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 Process algebras were initially designed as theoretical tools. They focus on actions and synchronisation, and do not provide any means to specify internal computations or to handle complex data types. However, later developments have taken process algebras as a basis for practical modelling and verification tools. Examples are occam @cite_19 , SCEL @cite_18 , FSP+LTSA @cite_45 , CSP @math B @cite_27 , and LOTOS and the CADP tool @cite_49 .
 {
"cite_N": [
"@cite_18",
"@cite_19",
"@cite_27",
"@cite_45",
"@cite_49"
],
"mid": [
"24701999",
"2110425399",
"1532163655",
"1995830301",
"2757839910"
],
"abstract": [
"SCEL (Service Component Ensemble Language) is a new language specifically designed to rigorously model and program autonomic components and their interaction, while supporting formal reasoning on their behaviors. SCEL brings together various programming abstractions that allow one to directly represent aggregations, behaviors and knowledge according to specific policies. It also naturally supports programming interaction, self-awareness, context-awareness, and adaptation. The solid semantic grounds of the language are exploited for developing logics, tools and methodologies for formal reasoning on system behavior to establish qualitative and quantitative properties of both the individual components and the overall systems.",
"This paper suggests that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a fundamental program structuring method. When combined with a development of Dijkstra's guarded command, these concepts are surprisingly versatile. Their use is illustrated by sample solutions of a variety of familiar programming exercises.",
"ProB is a model checking tool for the B Method. In this paper we present an extension of ProB that supports checking of specifications written in a combination of CSP and B. We explain how the notations are combined semantically and give an overview of the implementation of the combination. We illustrate the benefit that appropriate use of CSP, in conjunction with our tool, gives to B developments both for specification and for verification purposes.",
"Concurrency provides a thoroughly updated approach to the basic concepts and techniques behind concurrent programming. Concurrent programming is complex and demands a much more formal approach than sequential programming. In order to develop a thorough understanding of the topic, Magee and Kramer present concepts, techniques and problems through a variety of forms: informal descriptions, illustrative examples, abstract models and concrete Java examples. These combine to provide problem patterns and associated solution techniques which enable students to recognise problems and arrive at solutions. New features include: New chapters covering program verification and logical properties. More student exercises. Supporting website contains an updated version of the LTSA tool for modelling concurrency, model animation, and model checking. Website also includes the full set of state models, Java examples, and demonstration programs and a comprehensive set of overhead slides for course presentation.",
"We revisit the early publications of Ed Brinksma devoted, on the one hand, to the definition of the formal description technique LOTOS (ISO International Standard 8807:1989) for specifying communication protocols and distributed systems, and, on the other hand, to two proposals (Extended LOTOS and Modular LOTOS) for making LOTOS a simpler and more expressive language. We examine how this scientific agenda has been dealt with during the last decades. We review the successive enhancements of LOTOS that led to the definition of three languages: E-LOTOS (ISO International Standard 15437:2001), then LOTOS NT, and finally LNT. We present the software implementations (compilers and translators) developed for these new languages and report about their use in various application domains."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bring compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 Typically, there are two ways to compose Petri nets. One is given by hierarchical nets, that is, nets in which a transition can represent a complete separate net that is described independently. The second way is to identify, or fuse, either places or transitions from two different nets. For example, the coffee machine and the scientist can be modelled separately and then composed by fusing transitions. Some approaches propose the introduction of interfaces that allow each component net to be seen as a black box. That is the case of the recent work described in @cite_29 .
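Transition fusion can be sketched concretely. The following Python fragment is an invented, minimal encoding (the net representation, the function names, and the coffee-machine/scientist places are ours, paraphrasing the example above): a fused transition consumes and produces tokens in both component nets at once.

```python
# Small sketch of composing two Petri nets by fusing transitions.
# A net: "marking" maps places to token counts; "trans" maps a
# transition name to (pre-places, post-places).

def fuse(net1, net2, fusions):
    """fusions: list of (t1, t2) transition pairs to identify.
    The fused transition acts on both nets simultaneously."""
    marking = {**net1["marking"], **net2["marking"]}
    trans = {**net1["trans"], **net2["trans"]}
    for t1, t2 in fusions:
        pre1, post1 = trans.pop(t1)
        pre2, post2 = trans.pop(t2)
        trans[f"{t1}+{t2}"] = (pre1 + pre2, post1 + post2)
    return {"marking": marking, "trans": trans}

def fire(net, t):
    pre, post = net["trans"][t]
    assert all(net["marking"][p] > 0 for p in pre), "not enabled"
    for p in pre:  net["marking"][p] -= 1
    for p in post: net["marking"][p] += 1

machine   = {"marking": {"ready": 1, "served": 0},
             "trans": {"serve": (["ready"], ["served"])}}
scientist = {"marking": {"thirsty": 1, "happy": 0},
             "trans": {"drink": (["thirsty"], ["happy"])}}

net = fuse(machine, scientist, [("serve", "drink")])
fire(net, "serve+drink")   # the fused event happens in both components
print(net["marking"])      # {'ready': 0, 'served': 1, 'thirsty': 0, 'happy': 1}
```

The fused event is enabled only when both component transitions are, which is exactly the synchronisation effect transition fusion is used for.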
 {
"cite_N": [
"@cite_29"
],
"mid": [
"2016355163"
],
"abstract": [
"A quite flourishing research thread in the recent literature on component-based systems is concerned with the algebraic properties of different classes of connectors. In a recent paper, an algebra of stateless connectors was presented that consists of five kinds of basic connectors, namely symmetry, synchronization, mutual exclusion, hiding and inaction, plus their duals, and it was shown how they can be freely composed in series and in parallel to model sophisticated \"glues\". In this paper we explore the expressiveness of stateful connectors obtained by adding one-place buffers or unbounded buffers to the stateless connectors. The main results are: i) we show how different classes of connectors exactly correspond to suitable classes of Petri nets equipped with compositional interfaces, called nets with boundaries; ii) we show that the difference between strong and weak semantics in stateful connectors is reflected in the semantics of nets with boundaries by moving from the classic step semantics (strong case) to a novel banking semantics (weak case), where a step can be executed by taking some \"debit\" tokens to be given back during the same step; iii) we show that the corresponding bisimilarities are congruences (w.r.t. composition of connectors in series and in parallel); iv) we show that suitable monoidality laws, like those arising when representing stateful connectors in the tile model, can nicely capture concurrency aspects; and v) as a side result, we provide a basic algebra, with a finite set of symbols, out of which we can compose all P/T nets, fulfilling a long-standing quest."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bring compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 Tile logic was introduced in @cite_28 , and is closely related to rewriting logic. In short, tile logic is rewriting logic with side effects for composition. A tile is written as @math , with @math being the condition for, and @math the effect of, rewriting @math to @math . The intuitive meaning of that tile is: the term @math is rewritten to the term @math , producing an effect @math , but the rewrite can only happen if the variables of @math (which represent as yet unspecified subcomponents) are rewritten with a cumulative effect @math . Effects are given by terms of any complexity.
 {
"cite_N": [
"@cite_28"
],
"mid": [
"1909063750"
],
"abstract": [
"In this paper we introduce a model for a wide class of computational systems, whose behaviour can be described by certain rewriting rules. We gathered our inspiration both from the world of term rewriting, in particular from the rewriting logic framework Mes92 , and of concurrency theory: among the others, the structured operational semantics Plo81 , the context systems LX90 and the structured transition systems CM92 approaches. Our model recollects many properties of these sources: first, it provides a compositional way to describe both the states and the sequences of transitions performed by a given system, stressing their distributed nature. Second, a suitable notion of typed proof allows to take into account also those formalisms relying on the notions of synchronization and side effects to determine the actual behaviour of a system. Finally, an equivalence relation over sequences of transitions is defined, equipping the system under analysis with a concurrent semantics, where each equivalence class denotes a family of \"computationally equivalent\" behaviours, intended to correspond to the execution of the same set of (causally unrelated) events. As a further abstraction step, our model is conveniently represented using double categories: its operational semantics is recovered with a free construction, by means of a suitable adjunction."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bring compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 Connections between tile logic and rewriting logic have been drawn in @cite_47 and @cite_39 , mainly in the language of category theory.
 {
"cite_N": [
"@cite_47",
"@cite_39"
],
"mid": [
"1601458080",
"2026741712"
],
"abstract": [
"Rewriting logic extends to concurrent systems with state changes the body of theory developed within the algebraic semantics approach. It is both a foundational tool and the kernel language of several implementation efforts (Cafe, ELAN, Maude). Tile logic extends (unconditional) rewriting logic since it takes into account state changes with side effects and synchronization. It is especially useful for defining compositional models of computation of reactive systems, coordination languages, mobile calculi, and causal and located concurrent systems. In this paper, the two logics are defined and compared using a recently developed algebraic specification methodology, membership equational logic. Given a theory T, the rewriting logic of T is the free monoidal 2-category, and the tile logic of T is the free monoidal double category, both generated by T. An extended version of monoidal 2-categories, called 2VH-categories, is also defined, able to include in an appropriate sense the structure of monoidal double categories. We show that 2VH-categories correspond to an extended version of rewriting logic, which is able to embed tile logic, and which can be implemented in the basic version of rewriting logic using suitable internal strategies. These strategies can be significantly simpler when the theory is uniform. A uniform theory is provided in the paper for CCS, and it is conjectured that uniform theories exist for most process algebras.",
"We propose a modular highlevel approach to the specification of transactions in rewriting logic, where the operational and the abstract views are related by suitable adjunctions between categories of tile theories and of rewrite theories."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bring compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 The goal of coordination is to make different components work together. The components may have been coded in different languages and may reside on different servers with different architectures. The paper @cite_20 is a comprehensive, though now old, reference.
 {
"cite_N": [
"@cite_20"
],
"mid": [
"1500859230"
],
"abstract": [
"A new class of models, formalisms and mechanisms has recently evolved for describing concurrent and distributed computations based on the concept of \"coordination\". The purpose of a coordination model and associated language is to provide a means of integrating a number of possibly heterogeneous components together, by interfacing with each component in such a way that the collective set forms a single application that can execute on and take advantage of parallel and distributed systems. In this chapter we initially define and present in sufficient detail the fundamental concepts of what constitutes a coordination model or language. We then go on to classify these models and languages as either \"data-driven\" or \"control-driven\" (also called \"process-\" or \"task-oriented\"). Next, the main existing coordination models and languages are described in sufficient detail to let the reader appreciate their features and put them into perspective with respect to each other. The chapter ends with a discussion comparing the various models and some conclusions."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bring compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 Coordination is a very general term, and some of the proposals discussed above can be seen as belonging to it. Let us name a few additional examples. Linda, with all its variants and implementations, is one of the best-known coordination languages; see @cite_35 for a relatively recent take. It is based on the idea of a shared repository of data and relies on each component using coordination primitives in appropriate ways. Reo is at the other extreme: it enforces separation of concerns and is based on composing basic channels to provide port-to-port communication between components. Basic channels can be of any imaginable sort (synchronous or not, lossy, and so on), and the means to compose them are very flexible. Reo is described in @cite_44 .
 {
"cite_N": [
"@cite_44",
"@cite_35"
],
"mid": [
"2139842876",
"28813266"
],
"abstract": [
"In this paper, we present Reo, which forms a paradigm for composition of software components based on the notion of mobile channels. Reo is a channel-based exogenous coordination model in which complex coordinators, called connectors, are compositionally built out of simpler ones. The simplest connectors in Reo are a set of channels with well-defined behaviour supplied by users. Reo can be used as a language for coordination of concurrent processes, or as a ‘glue language’ for compositional construction of connectors that orchestrate component instances in a component-based system. The emphasis in Reo is just on connectors and their composition, and not on the entities that connect to, communicate and cooperate through these connectors. Each connector in Reo imposes a specific coordination pattern on the entities (for example, components) that perform I/O operations through that connector, without the knowledge of those entities. Channel composition in Reo is a very powerful mechanism for construction of connectors. We demonstrate the expressive power of connector composition in Reo through a number of examples. We show that exogenous coordination patterns that can be expressed as (meta-level) regular expressions over I/O operations can be composed in Reo out of a small set of only five primitive channel types.",
"The original Linda model of coordination has always been attractive due primarily to its simplicity, but also due to the model's other strong features of orthogonality, and the spatial and temporal decoupling of concurrent processes. Recently there has been a resurgence of interest in the Linda coordination model, particularly in the Java community. We believe that the simplicity of this model still has much to offer, but that there are still challenges in overcoming the performance issues inherent in the Linda approach, and extending the range of applications to which it is suited. Our prior work has focused on mechanisms for generalising the input mechanisms in the Linda model, over a range of different implementation strategies. We believe that similar optimisations may be applicable to other aspects of the model, especially in the context of middleware support for components utilising web services. The outcome of such improvements would be to provide a simple, but highly effective coordination language, that is applicable to a wide range of different application areas."
]
}

1908.11769
 2971290534
 Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bring compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
 BIP stands for Behaviour, Interaction, Priority, the three layers of a composed specification, as proposed by the authors. The behaviour of atomic components is specified by automata (of a special kind), some of whose actions are also taken as port names for communication. These automata are a specification of requirements on the component, whose real implementation can be made using any language or tool. Interaction is performed through connectors linking ports in potentially complex ways. Among the interactions that are allowed at any given time, the one with the highest priority is chosen and performed. Interaction and priority together implement control. The paper @cite_17 has a good overview. Several implementations exist that allow the BIP framework to be used within programming languages like Java and C++.
 {
"cite_N": [
"@cite_17"
],
"mid": [
"2133038101"
],
"abstract": [
"We present a methodology for modeling heterogeneous real-time components. Components are obtained as the superposition of three layers: Behavior, specified as a set of transitions; Interactions between transitions of the behavior; Priorities, used to choose amongst possible interactions. A parameterized binary composition operator is used to compose components layer by layer. We present the BIP language for the description and composition of layered components as well as associated tools for executing and analyzing components on a dedicated platform. The language provides a powerful mechanism for structuring interactions involving rendezvous and broadcast. We show that synchronous and timed systems are particular classes of components. Finally, we provide examples and compare the BIP framework to existing ones for heterogeneous component-based modeling."
]
}
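The control discipline described above (automata with ports, connectors as multi-party interactions, priorities to resolve conflicts) can be sketched as a toy model. This is our own simplification for illustration only, not the API of any BIP implementation:

```python
class Component:
    """An atomic component: an automaton whose transitions are labelled by ports."""
    def __init__(self, state, transitions):
        self.state = state
        self.transitions = transitions  # {(state, port): next_state}

    def enabled(self, port):
        return (self.state, port) in self.transitions

    def fire(self, port):
        self.state = self.transitions[(self.state, port)]

def step(components, interactions, priority):
    """One control step: among the interactions (tuples of ports) enabled in
    the current global state, execute the one with the highest priority."""
    def is_enabled(interaction):
        return all(any(c.enabled(p) for c in components) for p in interaction)
    candidates = [i for i in interactions if is_enabled(i)]
    if not candidates:
        return None
    chosen = max(candidates, key=priority)
    for p in chosen:
        for c in components:
            if c.enabled(p):
                c.fire(p)
                break  # each port fires in exactly one component here
    return chosen
```

For example, with a producer and a consumer, a two-port rendezvous ("send", "recv") given higher priority than a lone "send" is the interaction that executes.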

1908.11645
 2971105318
 In the wake of the success of convolutional neural networks in image classification, object recognition, speech recognition, etc., the demand for deploying these compute-intensive ML models on embedded and mobile systems with tight power and energy constraints at low cost, as well as for boosting throughput in data centers, is growing rapidly. This has sparked a surge of research into specialized hardware accelerators. Their performance is typically limited by I/O bandwidth, power consumption is dominated by I/O transfers to off-chip memory, and on-chip memories occupy a large part of the silicon area. We introduce and evaluate a novel, hardware-friendly and lossless compression scheme for the feature maps present within convolutional neural networks. Its hardware implementation fits into 2.8 kGE and 1.7 kGE of silicon area for the compressor and decompressor, respectively. We show that an average compression ratio of 5.1x for AlexNet, 4x for VGG16, 2.4x for ResNet34 and 2.2x for MobileNetV2 can be achieved, a gain of 45-70% over existing methods. Our approach also works effectively for various number formats, has a low frame-to-frame variance on the compression ratio, and achieves compression factors for gradient map compression during training that are even better than for inference.
 Several methods describe hardware accelerators which exploit feature map sparsity to reduce computation: Cnvlutin @cite_2 , SCNN @cite_47 , Cambricon-X @cite_46 , NullHop @cite_24 , Eyeriss @cite_12 , EIE @cite_43 . Their focus is on power-gating or skipping some of the operations and memory accesses, which entails defining a scheme to feed the data into the system. They all use one of three methods. Zero-RLE (used in SCNN): a simple run-length encoding for the zero values, i.e. a single prefix bit followed by either the number of zero values or the non-zero value. Zero-free neuron array format (ZFNAf) (used in Cnvlutin): similar to the widely used compressed sparse row (CSR) format, non-zero elements are encoded with an offset and their value. Compressed column storage (CCS) format (e.g. used in EIE, and similar to NullHop): like ZFNAf, but the offsets are stored in relative form, thus requiring fewer bits to store them. Few bits suffice, and in case they are all exhausted, a zero value can be encoded as if it were non-zero.
 {
"cite_N": [
"@cite_47",
"@cite_24",
"@cite_43",
"@cite_2",
"@cite_46",
"@cite_12"
],
"mid": [
"2625457103",
"2623629680",
"2285660444",
"2516141709",
"2565851976",
"2442974303"
],
"abstract": [
"Convolutional Neural Networks (CNNs) have emerged as a fundamental technology for machine learning. High performance and extreme energy efficiency are critical for deployments of CNNs, especially in mobile platforms such as autonomous vehicles, cameras, and electronic personal assistants. This paper introduces the Sparse CNN (SCNN) accelerator architecture, which improves performance and energy efficiency by exploiting the zero-valued weights that stem from network pruning during training and zero-valued activations that arise from the common ReLU operator. Specifically, SCNN employs a novel dataflow that enables maintaining the sparse weights and activations in a compressed encoding, which eliminates unnecessary data transfers and reduces storage requirements. Furthermore, the SCNN dataflow facilitates efficient delivery of those weights and activations to a multiplier array, where they are extensively reused; product accumulation is performed in a novel accumulator array. On contemporary neural networks, SCNN can improve both performance and energy by a factor of 2.7x and 2.3x, respectively, over a comparably provisioned dense CNN accelerator.",
"Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphical processing units are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp/s/W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from @math to @math . NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq field-programmable gate array (FPGA) platform and presented the results showing how our implementation reduces external memory transfers and compute time in five different CNNs ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Post-synthesis simulations using Mentor Modelsim in a 28-nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the multiply–accumulate units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm2. As further proof of NullHop’s usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations.",
"State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120× energy saving; exploiting sparsity saves 10×; weight sharing gives 8×; skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes FC layers of AlexNet at 1.88×10^4 frames/sec with a power dissipation of only 600 mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9×, 19× and 3× better throughput, energy efficiency and area efficiency.",
"This work observes that a large fraction of the computations performed by Deep Neural Networks (DNNs) are intrinsically ineffectual as they involve a multiplication where one of the inputs is zero. This observation motivates Cnvlutin (CNV), a value-based approach to hardware acceleration that eliminates most of these ineffectual operations, improving performance and energy over a state-of-the-art accelerator with no accuracy loss. CNV uses hierarchical data-parallel units, allowing groups of lanes to proceed mostly independently, enabling them to skip over the ineffectual computations. A co-designed data storage format encodes the computation elimination decisions, taking them off the critical path while avoiding control divergence in the data-parallel units. Combined, the units and the data storage format result in a data-parallel architecture that maintains wide, aligned accesses to its memory hierarchy and that keeps its data lanes busy. By loosening the ineffectual computation identification criterion, CNV enables further performance and energy efficiency improvements, and more so if a loss in accuracy is acceptable. Experimental measurements over a set of state-of-the-art DNNs for image classification show that CNV improves performance over a state-of-the-art accelerator from 1.24× to 1.55× and by 1.37× on average without any loss in accuracy by removing zero-valued operand multiplications alone. While CNV incurs an area overhead of 4.49%, it improves overall EDP (Energy Delay Product) and ED2P (Energy Delay Squared Product) on average by 1.47× and 2.01×, respectively. The average performance improvements increase to 1.52× without any loss in accuracy with a broader ineffectual identification policy. Further improvements are demonstrated with a loss in accuracy.",
"Neural networks (NNs) have been demonstrated to be useful in a broad range of applications such as image recognition, automatic translation and advertisement recommendation. State-of-the-art NNs are known to be both computationally and memory intensive, due to the ever-increasing deep structure, i.e., multiple layers with massive neurons and connections (i.e., synapses). Sparse neural networks have emerged as an effective solution to reduce the amount of computation and memory required. Though existing NN accelerators are able to efficiently process dense and regular networks, they cannot benefit from the reduction of synaptic weights. In this paper, we propose a novel accelerator, Cambricon-X, to exploit the sparsity and irregularity of NN models for increased efficiency. The proposed accelerator features a PE-based architecture consisting of multiple Processing Elements (PE). An Indexing Module (IM) efficiently selects and transfers needed neurons to connected PEs with reduced bandwidth requirement, while each PE stores irregular and compressed synapses for local computation in an asynchronous fashion. With 16 PEs, our accelerator is able to achieve at most 544 GOP/s in a small form factor (6.38 mm2 and 954 mW at 65 nm). Experimental results over a number of representative sparse networks show that our accelerator achieves, on average, 7.23x speedup and 6.43x energy saving against the state-of-the-art NN accelerator.",
"Deep convolutional neural networks (CNNs) are widely used in modern AI systems for their superior accuracy but at the cost of high computational complexity. The complexity comes from the need to simultaneously process hundreds of filters and channels in the high-dimensional convolutions, which involve a significant amount of data movement. Although highly-parallel compute paradigms, such as SIMD/SIMT, effectively address the computation requirement to achieve high throughput, energy consumption still remains high as data movement can be more expensive than computation. Accordingly, finding a dataflow that supports parallel processing with minimal data movement cost is crucial to achieving energy-efficient CNN processing without compromising accuracy. In this paper, we present a novel dataflow, called row-stationary (RS), that minimizes data movement energy consumption on a spatial architecture. This is realized by exploiting local data reuse of filter weights and feature map pixels, i.e., activations, in the high-dimensional convolutions, and minimizing data movement of partial sum accumulations. Unlike dataflows used in existing designs, which only reduce certain types of data movement, the proposed RS dataflow can adapt to different CNN shape configurations and reduces all types of data movement through maximally utilizing the processing engine (PE) local storage, direct inter-PE communication and spatial parallelism. To evaluate the energy efficiency of the different dataflows, we propose an analysis framework that compares energy cost under the same hardware area and processing parallelism constraints. Experiments using the CNN configurations of AlexNet show that the proposed RS dataflow is more energy efficient than existing dataflows in both convolutional (1.4× to 2.5×) and fully-connected layers (at least 1.3× for batch size larger than 16). The RS dataflow has also been demonstrated on a fabricated chip, which verifies our energy analysis."
]
}
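A minimal sketch of the Zero-RLE scheme described above, for illustration; the function names and the ("Z"/"L", value) token format are ours, not from the cited accelerators:

```python
def zero_rle_encode(values, max_run=15):
    """Zero-RLE: emit either a zero-run token ("Z", count) or a
    literal non-zero token ("L", value); a prefix bit in hardware
    distinguishes the two cases."""
    out, run = [], 0
    for v in values:
        if v == 0 and run < max_run:
            run += 1
        else:
            if run:
                out.append(("Z", run))  # flush the pending run of zeros
                run = 0
            if v == 0:
                run = 1                 # run was at max_run: start a new one
            else:
                out.append(("L", v))    # literal non-zero value
    if run:
        out.append(("Z", run))
    return out

def zero_rle_decode(tokens):
    out = []
    for kind, x in tokens:
        out.extend([0] * x if kind == "Z" else [x])
    return out
```

The ZFNAf- and CCS-style formats differ mainly in storing (offset, value) pairs, absolute or relative, instead of explicit zero runs.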

1908.11645
 2971105318
 In the wake of the success of convolutional neural networks in image classification, object recognition, speech recognition, etc., the demand for deploying these compute-intensive ML models on embedded and mobile systems with tight power and energy constraints at low cost, as well as for boosting throughput in data centers, is growing rapidly. This has sparked a surge of research into specialized hardware accelerators. Their performance is typically limited by I/O bandwidth, power consumption is dominated by I/O transfers to off-chip memory, and on-chip memories occupy a large part of the silicon area. We introduce and evaluate a novel, hardware-friendly and lossless compression scheme for the feature maps present within convolutional neural networks. Its hardware implementation fits into 2.8 kGE and 1.7 kGE of silicon area for the compressor and decompressor, respectively. We show that an average compression ratio of 5.1x for AlexNet, 4x for VGG16, 2.4x for ResNet34 and 2.2x for MobileNetV2 can be achieved, a gain of 45-70% over existing methods. Our approach also works effectively for various number formats, has a low frame-to-frame variance on the compression ratio, and achieves compression factors for gradient map compression during training that are even better than for inference.
 Other compression methods focus on minimizing the model size and are very complex (in silicon area) to implement in hardware. One such method, deep compression @cite_48 , combines pruning, trained clustering-based quantization, and Huffman coding. Most of these steps cannot be applied to the intermediate feature maps, which change for every inference, as opposed to the weights, which are static and can be optimized offline. Furthermore, applying Huffman coding, while optimal in terms of compression rate for a given specification of input symbols and their statistics, implies storing a very large dictionary: encoding a 16-bit word requires a table with @math k entries, but effectively multiple values would have to be encoded jointly in order to exploit their joint distribution (e.g. the smoothness), immediately increasing the dictionary size to @math G even for just two values. Similar issues arise when using Lempel-Ziv-Welch (LZW) coding @cite_38 @cite_42 as present in e.g. the ZIP compression scheme, where the dictionary is encoded in the compressed data stream. This makes it unsuitable for a lightweight and energy-efficient VLSI implementation @cite_0 @cite_10 .
 {
"cite_N": [
"@cite_38",
"@cite_48",
"@cite_42",
"@cite_0",
"@cite_10"
],
"mid": [
"1990653637",
"2964299589",
"2122962290",
"2162310490",
"2487148512"
],
"abstract": [
"",
"Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three-stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG16 by 49x, from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has 3x to 4x layer-wise speedup and 3x to 7x better energy efficiency.",
"Compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated. These encoders can operate in a variable-rate mode as well as a fixed-rate one, and they allow for any finite-state scheme of variable-length-to-variable-length coding. For every individual infinite sequence x a quantity ρ(x) is defined, called the compressibility of x, which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for x by any finite-state encoder. This is demonstrated by means of a constructive coding theorem and its converse that, apart from their asymptotic significance, also provide useful performance criteria for finite and practical data-compression tasks. The proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory, where one deals with probabilistic ensembles of sequences rather than with individual sequences. While the definition of ρ(x) allows a different machine for each different sequence to be compressed, the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences.",
"In this paper, we propose a new two-stage hardware architecture that combines the features of both the parallel dictionary LZW (PDLZW) and an approximated adaptive Huffman (AH) algorithm. In this architecture, an ordered list instead of the tree-based structure is used in the AH algorithm for speeding up the compression data rate. The resulting architecture shows that it not only outperforms the AH algorithm at the cost of only one-fourth the hardware resource but is also competitive to the performance of the LZW algorithm (compress). In addition, both compression and decompression rates of the proposed architecture are greater than those of the AH algorithm even in the case realized by software.",
"The LZW algorithm is one of the most famous dictionary-based compression and decompression algorithms. The main contribution of this paper is to present a hardware LZW decompression algorithm and to implement it in an FPGA. The experimental results show that one proposed module on a Virtex-7 family FPGA XC7VX485T-2 runs up to 2.16 times faster than sequential LZW decompression on a single CPU, where the frequency of the FPGA is 301.02 MHz. Since the proposed module is compactly designed and uses few resources of the FPGA, we have succeeded in implementing 150 identical modules which work in parallel on the FPGA, where the frequency of the FPGA is 245.4 MHz. In other words, our implementation runs up to 264 times faster than a sequential implementation on a single CPU."
]
}
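The dictionary-size argument above is simple arithmetic; a quick check of the two figures it cites:

```python
# A Huffman code table needs one entry per distinct symbol:
# 2^16 entries (~64k) for single 16-bit words, and 2^32 entries
# (~4G) as soon as two 16-bit values are encoded jointly to
# exploit their joint distribution.
bits_per_word = 16
entries_single = 2 ** bits_per_word
entries_joint_pair = 2 ** (2 * bits_per_word)
```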

1908.11645
 2971105318
 In the wake of the success of convolutional neural networks in image classification, object recognition, speech recognition, etc., the demand for deploying these compute-intensive ML models on embedded and mobile systems with tight power and energy constraints at low cost, as well as for boosting throughput in data centers, is growing rapidly. This has sparked a surge of research into specialized hardware accelerators. Their performance is typically limited by I/O bandwidth, power consumption is dominated by I/O transfers to off-chip memory, and on-chip memories occupy a large part of the silicon area. We introduce and evaluate a novel, hardware-friendly and lossless compression scheme for the feature maps present within convolutional neural networks. Its hardware implementation fits into 2.8 kGE and 1.7 kGE of silicon area for the compressor and decompressor, respectively. We show that an average compression ratio of 5.1x for AlexNet, 4x for VGG16, 2.4x for ResNet34 and 2.2x for MobileNetV2 can be achieved, a gain of 45-70% over existing methods. Our approach also works effectively for various number formats, has a low frame-to-frame variance on the compression ratio, and achieves compression factors for gradient map compression during training that are even better than for inference.
 A few more methods exist that change the CNN's structure in order to compress the weights @cite_14 @cite_37 or the feature maps @cite_1 @cite_50 @cite_18 . However, they require altering the CNN's model and/or retraining, and they introduce some accuracy loss. Furthermore, they can only be used to compress a few feature maps at specific points within the network and introduce additional compute effort, such as applying a Fourier transform to the feature maps.
 {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_14",
"@cite_1",
"@cite_50"
],
"mid": [
"2946521496",
"2732044853",
"2172166488",
"2787697768",
"2741232386"
],
"abstract": [
"We present a theoretical and experimental investigation of the quantization problem for artificial neural networks. We provide a mathematical definition of quantized neural networks and analyze their approximation capabilities, showing in particular that any Lipschitz-continuous map defined on a hypercube can be uniformly approximated by a quantized neural network. We then focus on the regularization effect of additive noise on the arguments of multi-step functions inherent to the quantization of continuous variables. In particular, when the expectation operator is applied to a non-differentiable multi-step random function, and if the underlying probability density is differentiable (in either the classical or weak sense), then a differentiable function is retrieved, with explicit bounds on its Lipschitz constant. Based on these results, we propose a novel gradient-based training algorithm for quantized neural networks that generalizes the straight-through estimator, acting on noise applied to the network's parameters. We evaluate our algorithm on the CIFAR-10 and ImageNet image classification benchmarks, showing state-of-the-art performance on AlexNet and MobileNetV2 for ternary networks.",
"We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both.",
"As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however, mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.",
"In this paper, we introduce a method to compress intermediate feature maps of deep neural networks (DNNs) to decrease memory storage and bandwidth requirements during inference. Unlike previous works, the proposed method is based on converting fixed-point activations into vectors over the smallest GF(2) finite field followed by nonlinear dimensionality reduction (NDR) layers embedded into a DNN. Such an end-to-end learned representation finds more compact feature maps by exploiting quantization redundancies within the fixed-point activations along the channel or spatial dimensions. We apply the proposed network architecture to the tasks of ImageNet classification and PASCAL VOC object detection. Compared to prior approaches, the conducted experiments show a factor of 2 decrease in memory requirements with minor degradation in accuracy while adding only bitwise computations.",
""
]
}

1908.11645
 2971105318
 In the wake of the success of convolutional neural networks in image classification, object recognition, speech recognition, etc., the demand for deploying these compute-intensive ML models on embedded and mobile systems with tight power and energy constraints at low cost, as well as for boosting throughput in data centers, is growing rapidly. This has sparked a surge of research into specialized hardware accelerators. Their performance is typically limited by I/O bandwidth, power consumption is dominated by I/O transfers to off-chip memory, and on-chip memories occupy a large part of the silicon area. We introduce and evaluate a novel, hardware-friendly and lossless compression scheme for the feature maps present within convolutional neural networks. Its hardware implementation fits into 2.8 kGE and 1.7 kGE of silicon area for the compressor and decompressor, respectively. We show that an average compression ratio of 5.1x for AlexNet, 4x for VGG16, 2.4x for ResNet34 and 2.2x for MobileNetV2 can be achieved, a gain of 45-70% over existing methods. Our approach also works effectively for various number formats, has a low frame-to-frame variance on the compression ratio, and achieves compression factors for gradient map compression during training that are even better than for inference.
 The most directly comparable approach, cDMA @cite_25 , describes a hardware-friendly compression scheme to reduce the data size of intermediate feature maps. Their target application differs in that their main goal is to allow faster temporary offloading of the feature maps from GPU to CPU memory through the PCIe bandwidth bottleneck during training, thereby enabling larger batch sizes and deeper and wider networks without sacrificing performance. They propose to use zero-value compression (ZVC), which takes a block of 32 activation values and generates a 32-bit mask where only the bits corresponding to the non-zero values are set. The non-zero values are transferred after the mask. The main advantage over Zero-RLE is that the resulting data volume is independent of how the values of the feature maps are serialized, while also providing a small compression ratio advantage. Note that this is a special case of Zero-RLE with a maximum zero burst length of 1.
 {
"cite_N": [
"@cite_25"
],
"mid": [
"2962821792"
],
"abstract": [
"Popular deep learning frameworks require users to fine-tune their memory usage so that the training data of a deep neural network (DNN) fits within the GPU physical memory. Prior work tries to address this restriction by virtualizing the memory usage of DNNs, enabling both CPU and GPU memory to be utilized for memory allocations. Despite its merits, virtualizing memory can incur significant performance overheads when the time needed to copy data back and forth from CPU memory is higher than the latency to perform DNN computations. We introduce a high-performance virtualization strategy based on a \"compressing DMA engine\" (cDMA) that drastically reduces the size of the data structures that are targeted for CPU-side allocations. The cDMA engine offers an average 2.6x (maximum 13.8x) compression ratio by exploiting the sparsity inherent in offloaded data, improving the performance of virtualized DNNs by an average 53% (maximum 79%) when evaluated on an NVIDIA Titan Xp."
]
}
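The mask-based scheme described above can be sketched as follows; this is a simplified software model of zero-value compression, not the actual cDMA hardware:

```python
def zvc_encode_block(block):
    """Encode 32 activations as a 32-bit mask (bit i set iff block[i] != 0)
    followed by the non-zero values in order."""
    assert len(block) == 32
    mask, nonzeros = 0, []
    for i, v in enumerate(block):
        if v != 0:
            mask |= 1 << i
            nonzeros.append(v)
    return mask, nonzeros

def zvc_decode_block(mask, nonzeros):
    """Re-expand the block: zeros where the mask bit is clear."""
    it = iter(nonzeros)
    return [next(it) if (mask >> i) & 1 else 0 for i in range(32)]
```

Unlike a run-length code, the encoded size depends only on the number of zeros in the block, not on how the zeros are ordered.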

1908.11645
 2971105318
 In the wake of the success of convolutional neural networks in image classification, object recognition, speech recognition, etc., the demand for deploying these compute-intensive ML models on embedded and mobile systems with tight power and energy constraints at low cost, as well as for boosting throughput in data centers, is growing rapidly. This has sparked a surge of research into specialized hardware accelerators. Their performance is typically limited by I/O bandwidth, power consumption is dominated by I/O transfers to off-chip memory, and on-chip memories occupy a large part of the silicon area. We introduce and evaluate a novel, hardware-friendly and lossless compression scheme for the feature maps present within convolutional neural networks. Its hardware implementation fits into 2.8 kGE and 1.7 kGE of silicon area for the compressor and decompressor, respectively. We show that an average compression ratio of 5.1x for AlexNet, 4x for VGG16, 2.4x for ResNet34 and 2.2x for MobileNetV2 can be achieved, a gain of 45-70% over existing methods. Our approach also works effectively for various number formats, has a low frame-to-frame variance on the compression ratio, and achieves compression factors for gradient map compression during training that are even better than for inference.
 For this work, we build on a method known in the area of texture compression for GPUs, bit-plane compression (BPC) @cite_44 , fuse it with sparsity-focused compression methods, and evaluate the resulting compression algorithm on intermediate feature maps and gradient maps, showing compression ratios of 5.1x (8-bit AlexNet), 4x (VGG16), 2.4x (ResNet34), 2.8x (SqueezeNet), and 2.2x (MobileNetV2).
 {
"cite_N": [
"@cite_44"
],
"mid": [
"2516109628"
],
"abstract": [
"As key applications become more dataintensive and the computational throughput of processors increases, the amount of data to be transferred in modern memory subsystems grows. Increasing physical bandwidth to keep up with the demand growth is challenging, however, due to strict area and energy limitations. This paper presents a novel and lightweight compression algorithm, BitPlane Compression (BPC), to increase the effective memory bandwidth. BPC aims at homogeneouslytyped memory blocks, which are prevalent in manycore architectures, and applies a smart data transformation to both improve the inherent data compressibility and to reduce the complexity of compression hardware. We demonstrate that BPC provides superior compression ratios of 4.1:1 for integer benchmarks and reduces memory bandwidth requirements significantly."
]
}
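The data transformation behind the bit-plane compression described above can be illustrated in miniature: delta-transform a block of similar integers, slice the deltas into bit-planes, and collapse all-zero planes to a single symbol. This is a toy sketch of the transformation idea only, not the full BPC encoder, and the symbol scheme is ours.

```python
# Toy bit-plane transformation: deltas of homogeneous data are mostly zero,
# so most bit-planes become all-zero and compress to one symbol each.

def bpc_sketch(block, width=16):
    mask = (1 << width) - 1
    # Delta transform: neighboring values in homogeneous blocks are close.
    deltas = [block[0]] + [(b - a) & mask for a, b in zip(block, block[1:])]
    # Transpose into bit-planes, most significant bit first.
    planes = [
        tuple((d >> b) & 1 for d in deltas)
        for b in range(width - 1, -1, -1)
    ]
    zero = (0,) * len(block)
    # An all-zero plane costs one symbol instead of len(block) bits.
    return ["Z" if p == zero else p for p in planes]

encoded = bpc_sketch([100, 101, 102, 103, 104, 105, 106, 107])
```

For this smoothly increasing block, 12 of the 16 planes collapse to the zero symbol, which is where the bandwidth savings come from.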

1908.11787
 2970281864
 We present a novel approach to answering sequential questions based on structured objects such as knowledge bases or tables without using a logical form as an intermediate representation. We encode tables as graphs using a graph neural network model based on the Transformer architecture. The answers are then selected from the encoded graph using a pointer network. This model is appropriate for processing conversations around structured data, where the attention mechanism that selects the answers to a question can also be used to resolve conversational references. We demonstrate the validity of this approach with competitive results on the Sequential Question Answering (SQA) task (, 2017).
 Semantic parsing models can be trained to produce gold logical forms using an encoder-decoder approach @cite_26 or by filling templates @cite_22 @cite_18 @cite_30 . When gold logical forms are not available, they are typically treated as latent variables or hidden states, and the answers or denotations are used to search for correct logical forms @cite_31 @cite_29 @cite_34 . In some cases, feedback from query execution is used as a reward signal for updating the model through reinforcement learning @cite_5 @cite_6 @cite_32 @cite_16 or for refining parts of the query @cite_7 . In our work, we do not use logical forms or RL, which can be hard to train, but simplify the training process by directly matching questions to table cells.
 {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_29",
"@cite_32",
"@cite_6",
"@cite_16",
"@cite_5",
"@cite_31",
"@cite_34"
],
"mid": [
"",
"",
"2963868320",
"2768409085",
"2891000242",
"",
"2891991579",
"",
"2917052767",
"2751448157",
"2251079237",
""
],
"abstract": [
"",
"",
"",
"Synthesizing SQL queries from natural language is a longstanding open problem and has been attracting considerable interest recently. Toward solving the problem, the de facto approach is to employ a sequencetosequencestyle model. Such an approach will necessarily require the SQL queries to be serialized. Since the same SQL query may have multiple equivalent serializations, training a sequencetosequencestyle model is sensitive to the choice from one of them. This phenomenon is documented as the \"ordermatters\" problem. Existing stateoftheart approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations. However, we observe that the improvement from reinforcement learning is limited. In this paper, we propose a novel approach, i.e., SQLNet, to fundamentally solve this problem by avoiding the sequencetosequence structure when the order does not matter. In particular, we employ a sketchbased approach where the sketch contains a dependency graph, so that one prediction can be done by taking into consideration only the previous predictions that it depends on. In addition, we propose a sequencetoset model as well as the column attention mechanism to synthesize the query based on the sketch. By combining all these novel techniques, we show that SQLNet can outperform the prior art by 9 to 13 on the WikiSQL task.",
"",
"",
"This paper presents MAPO: a novel policy optimization formulation that incorporates a memory buffer of promising trajectories to reduce the variance of policy gradient estimates for deterministic environments with discrete actions. The formulation expresses the expected return objective as a weighted sum of two terms: an expectation over a memory of trajectories with high rewards, and a separate expectation over the trajectories outside the memory. We propose 3 techniques to make an efficient training algorithm for MAPO: (1) distributed sampling from inside and outside memory with an actorlearner architecture; (2) a marginal likelihood constraint over the memory to initiate training; (3) systematic exploration to discover new high reward trajectories. MAPO improves the sample efficiency and robustness of policy gradient, especially on tasks with a sparse reward. We evaluate MAPO on weakly supervised program synthesis from natural language semantic parsing. On the WikiTableQuestions benchmark we improve the stateoftheart by 2.5 , achieving an accuracy of 46.2 , and on the WikiSQL benchmark, MAPO achieves an accuracy of 74.9 with only weak supervision, outperforming the stateoftheart with full supervision.",
"",
"We consider the problem of learning from sparse and underspecified rewards, where an agent receives a complex input, such as a natural language instruction, and needs to generate a complex response, such as an action sequence, while only receiving binary successfailure feedback. Such successfailure rewards are often underspecified: they do not distinguish between purposeful and accidental success. Generalization from underspecified rewards hinges on discounting spurious trajectories that attain accidental success, while learning from sparse feedback requires effective exploration. We address exploration by using a mode covering direction of KL divergence to collect a diverse set of successful trajectories, followed by a mode seeking KL divergence to train a robust policy. We propose Meta Reward Learning (MeRL) to construct an auxiliary reward function that provides more refined feedback for learning. The parameters of the auxiliary reward function are optimized with respect to the validation performance of a trained policy. The MeRL approach outperforms our alternative reward learning technique based on Bayesian Optimization, and achieves the stateoftheart on weaklysupervised semantic parsing. It improves previous work by 1.2 and 2.4 on WikiTableQuestions and WikiSQL datasets respectively.",
"Relational databases store a significant amount of the worlds data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in the loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 handannotated examples of questions and SQL queries distributed across 24241 tables fromWikipedia that is an order of magnitude larger than comparable datasets. By applying policy based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a stateoftheart semantic parser, improving execution accuracy from 35.9 to 59.4 and logical form accuracy from 23.4 to 48.3 .",
"We propose a novel semantic parsing framework for question answering using a knowledge base. We define a query graph that resembles subgraphs of the knowledge base and can be directly mapped to a logical form. Semantic parsing is reduced to query graph generation, formulated as a staged search problem. Unlike traditional approaches, our method leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem. By applying an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences, our system outperforms previous methods substantially, and achieves an F1 measure of 52.5 on the WEBQUESTIONS dataset.",
""
]
}
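The pointer-style answer selection described in the record above (attention over encoded table cells, with the argmax "pointing" at the answer) can be sketched with plain dot-product attention. The vectors here are hand-made stand-ins for learned encodings, and all names are ours, not the paper's.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def pointer_select(question_vec, cell_vecs):
    """Dot-product attention over cells; returns (answer index, attention)."""
    scores = [sum(q * c for q, c in zip(question_vec, cell)) for cell in cell_vecs]
    attn = softmax(scores)
    return max(range(len(attn)), key=attn.__getitem__), attn
```

The same attention distribution can, as the abstract notes, be reused to resolve conversational references across turns, since it is just a soft selection over the encoded graph.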

1908.11598
 2971001703
 We consider a crowdsourcing data acquisition scenario, such as federated learning, where a Center collects data points from a set of rational Agents, with the aim of training a model. For linear regression models, we show how a payment structure can be designed to incentivize the agents to provide high-quality data as early as possible, based on a characterization of the influence that data points have on the loss function of the model. Our contributions can be summarized as follows: (a) we prove theoretically that this scheme ensures truthful data reporting as a game-theoretic equilibrium and further demonstrate its robustness against mixtures of truthful and heuristic data reports, (b) we design a procedure according to which the influence computation can be efficiently approximated and processed sequentially in batches over time, (c) we develop a theory that allows correcting the difference between the influence and the overall change in loss and (d) we evaluate our approach on real datasets, confirming our theoretical findings.
 The topic of learning a model when the input data points are provided by strategic sources has been the focus of a growing literature at the intersection of machine learning and game theory. A significant amount of work has been devoted to the setting in which Agents are interested in the outcome of the estimation process itself, e.g., when they are trying to sway the learned model closer to their own data points @cite_21 @cite_14 @cite_5 @cite_9 @cite_15 . Our setting is concerned with the fundamental question of eliciting accurate data when data acquisition is costly for the agents, or when they are not willing to share their data without some form of monetary compensation. Another line of work considers settings in which the Agents have to be compensated for their loss of privacy @cite_2 @cite_19 .
 {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_21",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_15"
],
"mid": [
"1983916796",
"2435502378",
"2110135769",
"2767657742",
"2962877736",
"2054844093",
"2963668529"
],
"abstract": [
"We initiate the study of incentives in a general machine learning framework. We focus on a gametheoretic regression learning setting where private information is elicited from multiple agents with different, possibly conflicting, views on how to label the points of an input space. This conflict potentially gives rise to untruthfulness on the part of the agents. In the restricted but important case when every agent cares about a single point, and under mild assumptions, we show that agents are motivated to tell the truth. In a more general setting, we study the power and limitations of mechanisms without payments. We finally establish that, in the general setting, the VCG mechanism goes a long way in guaranteeing truthfulness and economic efficiency.",
"We revisit the classic problem of estimating the population mean of an unknown singledimensional distribution from samples, taking a gametheoretic viewpoint. In our setting, samples are supplied by strategic agents, who wish to pull the estimate as close as possible to their own value. In this setting, the sample mean gives rise to manipulation opportunities, whereas the sample median does not. Our key question is whether the sample median is the best (in terms of mean squared error) truthful estimator of the population mean. We show that when the underlying distribution is symmetric, there are truthful estimators that dominate the median. Our main result is a characterization of worstcase optimal truthful estimators, which provably outperform the median, for possibly asymmetric distributions with bounded support.",
"Abstract This paper introduces a whole class of estimators (clockwise repeated median estimators or CRM) for the simple regression model that are immune to strategic manipulation by the agents generating the data. We find that some wellknown robust estimators proposed in the literature like the resistant line method are included in our family. Finally, we also undertake a Monte Carlo study to compare the distribution of some estimators that are robust to data manipulation with the OLS estimators under different scenarios.",
"We consider a data analyst's problem of purchasing data from strategic agents to compute an unbiased estimate of a statistic of interest. Agents incur private costs to reveal their data and the costs can be arbitrarily correlated with their data. Once revealed, data are verifiable. This paper focuses on linear unbiased estimators. We design an individually rational and incentive compatible mechanism that optimizes the worstcase meansquared error of the estimation, where the worstcase is over the unknown correlation between costs and data, subject to a budget constraint in expectation. We characterize the form of the optimal mechanism in closedform. We further extend our results to acquiring data for estimating a parameter in regression analysis, where private costs can correlate with the values of the dependent variable but not with the values of the independent variables.",
"We consider the problem of fitting a linear model to data held by individuals who are concerned about their privacy. Incentivizing most players to truthfully report their data to the analyst constrains our design to mechanisms that provide a privacy guarantee to the participants; we use differential privacy to model individuals’ privacy losses. This immediately poses a problem, as differentially private computation of a linear model necessarily produces a biased estimation, and existing approaches to design mechanisms to elicit data from privacysensitive individuals do not generalize well to biased estimators. We overcome this challenge through an appropriate design of the computation and payment scheme.",
"The strategyproof classification problem deals with a setting where a decision maker must classify a set of input points with binary labels, while minimizing the expected error. The labels of the input points are reported by selfinterested agents, who might lie in order to obtain a classifier that more closely matches their own labels, thereby creating a bias in the data; this motivates the design of truthful mechanisms that discourage false reports. In this paper we give strategyproof mechanisms for the classification problem in two restricted settings: (i) there are only two classifiers, and (ii) all agents are interested in a shared set of input points. We show that these plausible assumptions lead to strong positive results. In particular, we demonstrate that variations of a random dictator mechanism, that are truthful, can guarantee approximately optimal outcomes with respect to any family of classifiers. Moreover, these results are tight in the sense that they match the best possible approximation ratio that can be guaranteed by any truthful mechanism. We further show how our mechanisms can be used for learning classifiers from sampled data, and provide PACstyle generalization bounds on their expected error. Interestingly, our results can be applied to problems in the context of various fields beyond classification, including facility location and judgment aggregation.",
"This paper is part of an emerging line of work at the intersection of machine learning and mechanism design, which aims to avoid noise in training data by correctly aligning the incentives of data sources. Specifically, we focus on the ubiquitous problem of linear regression, where strategyproof mechanisms have previously been identified in two dimensions. In our setting, agents have singlepeaked preferences and can manipulate only their response variables. Our main contribution is the discovery of a family of group strategyproof linear regression mechanisms in any number of dimensions, which we call generalized resistant hyperplane mechanisms. The gametheoretic properties of these mechanisms  and, in fact, their very existence  are established through a connection to a discrete version of the Ham Sandwich Theorem."
]
}
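The manipulation issue discussed in the abstracts above (the sample mean is manipulable by a strategic agent, while the sample median resists a single extreme misreport) is easy to demonstrate numerically. The data values are made up for illustration.

```python
from statistics import mean, median

# One strategic agent misreports its value (5.0 reported as 50.0)
# to pull the estimate toward itself.
honest = [1.0, 2.0, 3.0, 4.0, 5.0]
manipulated = [1.0, 2.0, 3.0, 4.0, 50.0]

mean_shift = mean(manipulated) - mean(honest)      # 9.0: the mean is dragged
median_shift = median(manipulated) - median(honest)  # 0.0: the median holds
```

This is the basic intuition behind truthful estimators such as the median-based and resistant-line mechanisms surveyed above; the cited papers go further and characterize when better truthful estimators exist.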

1908.11598
 2971001703
 We consider a crowdsourcing data acquisition scenario, such as federated learning, where a Center collects data points from a set of rational Agents, with the aim of training a model. For linear regression models, we show how a payment structure can be designed to incentivize the agents to provide high-quality data as early as possible, based on a characterization of the influence that data points have on the loss function of the model. Our contributions can be summarized as follows: (a) we prove theoretically that this scheme ensures truthful data reporting as a game-theoretic equilibrium and further demonstrate its robustness against mixtures of truthful and heuristic data reports, (b) we design a procedure according to which the influence computation can be efficiently approximated and processed sequentially in batches over time, (c) we develop a theory that allows correcting the difference between the influence and the overall change in loss and (d) we evaluate our approach on real datasets, confirming our theoretical findings.
 Our ideas are closely related to the literature of mechanisms @cite_8 and @cite_17 . The idea behind this literature is to extract high-quality information from individuals by comparing their reports against those of randomly chosen peers. This approach has been largely successful in eliciting information. The same principle applies to our case, where the payments depend on the improvement of the model and agents are therefore rewarded for providing information. Finally, jia2019towards [ jia2019towards ] recently considered a setting in which the value of the provided data is determined via the . Their approach is inherently different from ours, but it is worth noting that they consider the influence approximation of @cite_1 for approximating the Shapley value.
 {
"cite_N": [
"@cite_1",
"@cite_17",
"@cite_8"
],
"mid": [
"2597603852",
"2267283990",
"2757853533"
],
"abstract": [
"How can we explain the predictions of a blackbox model? In this paper, we use influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessianvector products. We show that even on nonconvex and nondifferentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visuallyindistinguishable trainingset attacks.",
"Crowdsourcing is widely proposed as a method to solve a large variety of judgment tasks, such as classifying website content, peer grading in online courses, or collecting realworld data. As the data reported by workers cannot be verified, there is a tendency to report random data without actually solving the task. This can be countered by making the reward for an answer depend on its consistency with answers given by other workers, an approach called peer consistency. However, it is obvious that the best strategy in such schemes is for all workers to report the same answer without solving the task. Dasgupta and Ghosh [2013] show that, in some cases, exerting high effort can be encouraged in the highestpaying equilibrium. In this article, we present a general mechanism that implements this idea and is applicable to most crowdsourcing settings. Furthermore, we experimentally test the novel mechanism, and validate its theoretical properties.",
"Abstract Intelligent systems often depend on data provided by information agents, for example, sensor data or crowdsourced human computation. Providing accurate and relevant data requires costly effort that agents may not always be willing to provide. Thus, it becomes important not only to verify the correctness of data, but also to provide incentives so that agents that provide highquality data are rewarded while those that do not are discouraged by low rewards. We cover different settings and the assumptions they admit, including sensing, human computation, peer grading, reviews, and predictions. We survey different incentive mechanisms, including proper scoring rules, prediction markets and peer prediction, Bayesian Truth Serum, Peer Truth Serum, Correlated Agreement, and the settings where each of them would be suitable. As an alternative, we also consider reputation mechanisms. We complement the gametheoretic analysis with practical examples of applications in prediction platforms, community sensin..."
]
}
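The influence idea referenced above (@cite_1) can be shown exactly in one dimension: for a tiny least-squares model, the influence of a training point is how much the fit moves when that point is removed. The paper approximates this with gradients and Hessian-vector products for large models; here we simply refit, and all names and data are illustrative.

```python
# Exact leave-one-out "influence" for 1-D least squares without intercept.

def fit_w(xs, ys):
    """Least-squares slope for the model y ≈ w * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def influence(i, xs, ys):
    """How much the fitted slope moves when training point i is removed."""
    w_full = fit_w(xs, ys)
    xs_loo = xs[:i] + xs[i + 1:]
    ys_loo = ys[:i] + ys[i + 1:]
    return abs(w_full - fit_w(xs_loo, ys_loo))

xs, ys = [1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 10.0]  # last point is an outlier
most_influential = max(range(len(xs)), key=lambda i: influence(i, xs, ys))
```

The outlier dominates the influence ranking, which is exactly the property that makes influence useful both for debugging datasets (@cite_1) and for pricing data contributions in the payment schemes above.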

1908.11515
 2970190087
 When collecting information, local differential privacy (LDP) alleviates privacy concerns of users, as users' private information is randomized before being sent to the central aggregator. However, LDP results in loss of utility due to the amount of noise that is added. To address this issue, recent work introduced an intermediate server, under the assumption that this intermediate server does not collude with the aggregator. Using this trust model, one can add less noise to achieve the same privacy guarantee, thus improving the utility. In this paper, we investigate this multiple-party setting of LDP. We first analyze the threat model and identify potential adversaries. We then make observations about existing approaches and propose new techniques that achieve a better privacy-utility tradeoff than existing ones. Finally, we perform experiments to compare different methods and demonstrate the benefits of using our proposed method.
 Frequency Oracle One basic mechanism in LDP is to estimate frequencies of values. Several mechanisms @cite_34 @cite_30 @cite_2 @cite_31 @cite_5 @cite_0 have been proposed for this task. Among them, @cite_31 introduces , which achieves low estimation errors and low communication costs. The application of is crucial for the utility of other applications such as heavy hitter identification @cite_2 and frequent itemset mining @cite_4 @cite_11 . One major contribution of this paper is to enable it to enjoy the privacy amplification effect.
 {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_34",
"@cite_11"
],
"mid": [
"1986293063",
"2532967691",
"2964010583",
"2734833749",
"",
"2742225091",
"1981029888",
""
],
"abstract": [
"We give efficient protocols and matching accuracy lower bounds for frequency estimation in the local model for differential privacy. In this model, individual users randomize their data themselves, sending differentially private reports to an untrusted server that aggregates them. We study protocols that produce a succinct histogram representation of the data. A succinct histogram is a list of the most frequent items in the data (often called \"heavy hitters\") along with estimates of their frequencies; the frequency of all other items is implicitly estimated as 0. If there are n users whose items come from a universe of size d, our protocols run in time polynomial in n and log(d). With high probability, they estimate the accuracy of every item up to error O(√ log(d) (e2n) ). Moreover, we show that this much error is necessary, regardless of computational efficiency, and even for the simple setting where only one item appears with significant frequency in the data set. Previous protocols (Mishra and Sandler, 2006; Hsu, Khanna and Roth, 2012) for this task either ran in time Ω(d) or had much worse error (about √[6] log(d) (e2n) ), and the only known lower bound on error was Ω(1 √ n ). We also adapt a result of (2010) to the local setting. In a model with public coins, we show that each user need only send 1 bit to the server. For all known local protocols (including ours), the transformation preserves computational efficiency.",
"In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over setvalued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a twophase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budgetwise than obtaining the heavy hitters directly from the whole dataset. We provide both indepth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority true heavy hitters in practical settings.",
"We study the problem of estimating @math ary distributions under @math local differential privacy. @math samples are distributed across users who send privatized versions of their sample to a central server. All previously known sample optimal algorithms require linear (in @math ) communication from each user in the high privacy regime @math , and run in time that grows as @math , which can be prohibitive for large domain size @math . @PARASPLIT We propose Hadamard Response (HR), a local privatization scheme that requires no shared randomness and is symmetric with respect to the users. Our scheme has order optimal sample complexity for all @math , a communication of at most @math bits per user, and nearly linear running time of @math . @PARASPLIT Our encoding and decoding are based on Hadamard matrices and are simple to implement. The statistical performance relies on the coding theoretic aspects of Hadamard matrices, ie, the large Hamming distance between the rows. An efficient implementation of the algorithm using the Fast WalshHadamard transform gives the computational gains. @PARASPLIT We compare our approach with Randomized Response (RR), RAPPOR, and subsetselection mechanisms (SS), both theoretically, and experimentally. For @math , our algorithm runs about 100x faster than SS, and RAPPOR.",
"We present new practical local differentially private heavy hitters algorithms achieving optimal or nearoptimal worstcase error and running time  TreeHist and Bitstogram. In both algorithms, server running time is @math and user running time is @math , hence improving on the prior stateoftheart result of Bassily and Smith [STOC 2015] requiring @math server time and @math user time. With a typically large number of participants in local algorithms ( @math in the millions), this reduction in time complexity, in particular at the user side, is crucial for making locally private heavy hitters algorithms usable in practice. We implemented Algorithm TreeHist to verify our theoretical analysis and compared its performance with the performance of Google's RAPPOR code.",
"",
"",
"Randomized Aggregatable PrivacyPreserving Ordinal Response, or RAPPOR, is a technology for crowdsourcing statistics from enduser client software, anonymously, with strong privacy guarantees. In short, RAPPORs allow the forest of client data to be studied, without permitting the possibility of looking at individual trees. By applying randomized response in a novel manner, RAPPOR provides the mechanisms for such collection as well as for efficient, highutility analysis of the collected data. In particular, RAPPOR permits statistics to be collected on the population of clientside strings with strong privacy guarantees for each client, and without linkability of their reports. This paper describes and motivates RAPPOR, details its differentialprivacy and utility guarantees, discusses its practical deployment and properties in the face of different attack models, and, finally, gives results of its application to both synthetic and realworld data.",
""
]
}
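The frequency oracles surveyed above all build on the randomized-response idea behind RAPPOR: each user keeps their true bit with some probability and flips it otherwise, and the server inverts the known noise distribution to get an unbiased frequency estimate. A minimal one-bit sketch, with illustrative parameter names:

```python
import random

def randomize(bit, p=0.75):
    """Keep the true bit with probability p, otherwise flip it."""
    return bit if random.random() < p else 1 - bit

def estimate_frequency(reports, p=0.75):
    """Invert E[report] = p*f + (1-p)*(1-f) to unbias the raw average."""
    avg = sum(reports) / len(reports)
    return (avg - (1 - p)) / (2 * p - 1)

random.seed(0)
true_bits = [1] * 300 + [0] * 700       # true frequency of ones is 0.30
reports = [randomize(b) for b in true_bits]
est = estimate_frequency(reports)        # close to 0.30 in expectation
```

The estimation error shrinks as the number of users grows, which is why the practical mechanisms above focus so heavily on reducing per-user communication and variance.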

1908.11515
 2970190087
 When collecting information, local differential privacy (LDP) alleviates privacy concerns of users, as users' private information is randomized before being sent to the central aggregator. However, LDP results in loss of utility due to the amount of noise that is added. To address this issue, recent work introduced an intermediate server, under the assumption that this intermediate server does not collude with the aggregator. Using this trust model, one can add less noise to achieve the same privacy guarantee, thus improving the utility. In this paper, we investigate this multiple-party setting of LDP. We first analyze the threat model and identify potential adversaries. We then make observations about existing approaches and propose new techniques that achieve a better privacy-utility tradeoff than existing ones. Finally, we perform experiments to compare different methods and demonstrate the benefits of using our proposed method.
 There also exist relaxed models that seem incompatible with the shuffler model: @cite_15 considers the adversary's probability of inference as a measure of its power, and @cite_3 utilizes the linkage between each user's sensitive and public attributes.
 {
"cite_N": [
"@cite_15",
"@cite_3"
],
"mid": [
"2902114605",
"2948055046"
],
"abstract": [
"In largescale statistical learning, data collection and model fitting are moving increasingly toward peripheral devicesphones, watches, fitness trackersaway from centralized data collection. Concomitant with this rise in decentralized data are increasing challenges of maintaining privacy while allowing enough information to fit accurate, useful statistical models. This motivates local notions of privacymost significantly, local differential privacy, which provides strong protections against sensitive data disclosureswhere data is obfuscated before a statistician or learner can even observe it, providing strong protections to individuals' data. Yet local privacy as traditionally employed may prove too stringent for practical use, especially in modern highdimensional statistical and machine learning problems. Consequently, we revisit the types of disclosures and adversaries against which we provide protections, considering adversaries with limited prior information and ensuring that with high probability, ensuring they cannot reconstruct an individual's data within useful tolerances. By reconceptualizing these protections, we allow more useful data releaselarge privacy parameters in local differential privacyand we design new (minimax) optimal locally differentially private mechanisms for statistical learning problems for privacy levels. We thus present practicable approaches to largescale locally private model training that were previously impossible, showing theoretically and empirically that we can fit largescale image classification and language models with little degradation in utility.",
"Multidimensional analytical (MDA) queries are often issued against a fact table with predicates on (categorical or ordinal) dimensions and aggregations on one or more measures. In this paper, we study the problem of answering MDA queries under local differential privacy (LDP). In the absence of a trusted agent, sensitive dimensions are encoded in a privacypreserving (LDP) way locally before being sent to the data collector. The data collector estimates the answers to MDA queries, based on the encoded dimensions. We propose several LDP encoders and estimation algorithms, to handle a large class of MDA queries with different types of predicates and aggregation functions. Our techniques are able to answer these queries with tight error bounds and scale well in highdimensional settings (i.e., error is polylogarithmic in dimension sizes). We conduct experiments on real and synthetic data to verify our theoretical results, and compare our solution with marginalestimation based solutions."
]
}

1908.11515
 2970190087
 When collecting information, local differential privacy (LDP) alleviates privacy concerns of users, as users' private information is randomized before being sent to the central aggregator. However, LDP results in loss of utility due to the amount of noise that is added. To address this issue, recent work introduced an intermediate server, under the assumption that this intermediate server does not collude with the aggregator. Using this trust model, one can add less noise to achieve the same privacy guarantee, thus improving utility. In this paper, we investigate this multiple-party setting of LDP. We first analyze the threat model and identify potential adversaries. We then make observations about existing approaches and propose new techniques that achieve a better privacy-utility trade-off than existing ones. Finally, we perform experiments to compare different methods and demonstrate the benefits of using our proposed method.
 Distributed DP In the distributed setting of DP, each data owner (or proxy) has access to a (disjoint) subset of users. For example, each patient's information is held by a hospital. The DP noise is added at the level of the intermediate data owners (e.g., @cite_27 ). The special case of two-party computation has also been considered @cite_18 @cite_10 , and @cite_7 studies the limitations of two-party DP. In @cite_16 , a distributed noise generation protocol was proposed to prevent any party from adding malicious noise. @cite_37 lays the theoretical foundation for the relationships among several computational DP definitions.
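A common building block in this line of work is splitting the DP noise across parties so that no single party ever holds the full noise term. The sketch below assumes Gaussian noise and honest parties, which is a deliberate simplification of the cited distributed noise-generation protocols; the example scenario and numbers are hypothetical.

```python
import random

def partial_gaussian_noise(sigma: float, num_parties: int) -> float:
    # Each party samples N(0, sigma^2 / k); by the divisibility of the
    # Gaussian, the noise summed over all k parties is N(0, sigma^2),
    # so no single party can remove the aggregate noise.
    return random.gauss(0.0, sigma / num_parties ** 0.5)

# hypothetical example: three hospitals each perturb a local count
# before sending it to the aggregator, which only sees the noisy sum
k, sigma = 3, 4.0
local_counts = [120, 85, 200]
noisy_partials = [c + partial_gaussian_noise(sigma, k) for c in local_counts]
release = sum(noisy_partials)  # true total 405 plus N(0, sigma^2) noise
```

A full protocol would additionally need secure summation and malicious-noise checks, which is exactly the gap @cite_16 addresses.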
 {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_7",
"@cite_27",
"@cite_16",
"@cite_10"
],
"mid": [
"2751501713",
"1493407996",
"2029905190",
"",
"2610910029",
"2943322967"
],
"abstract": [
"Private record linkage (PRL) is the problem of identifying pairs of records that are similar as per an input matching rule from databases held by two parties that do not trust one another. We identify three key desiderata that a PRL solution must ensure: (1) perfect precision and high recall of matching pairs, (2) a proof of endtoend privacy, and (3) communication and computational costs that scale subquadratically in the number of input records. We show that all of the existing solutions for PRL? including secure 2party computation (S2PC), and their variants that use nonprivate or differentially private (DP) blocking to ensure subquadratic cost  violate at least one of the three desiderata. In particular, S2PC techniques guarantee endtoend privacy but have either low recall or quadratic cost. In contrast, no endtoend privacy guarantee has been formalized for solutions that achieve subquadratic cost. This is true even for solutions that compose DP and S2PC: DP does not permit the release of any exact information about the databases, while S2PC algorithms for PRL allow the release of matching records. In light of this deficiency, we propose a novel privacy model, called output constrained differential privacy, that shares the strong privacy protection of DP, but allows for the truthful release of the output of a certain function applied to the data. We apply this to PRL, and show that protocols satisfying this privacy model permit the disclosure of the true matching records, but their execution is insensitive to the presence or absence of a single nonmatching record. We find that prior work that combine DP and S2PC techniques even fail to satisfy this endtoend privacy model. Hence, we develop novel protocols that provably achieve this endtoend privacy guarantee, together with the other two desiderata of PRL. 
Our empirical evaluation also shows that our protocols obtain high recall, scale near linearly in the size of the input databases and the output set of matching pairs, and have communication and computational costs that are at least 2 orders of magnitude smaller than S2PC baselines.",
"The definition of differential privacy has recently emerged as a leading standard of privacy guarantees for algorithms on statistical databases. We offer several relaxations of the definition which require privacy guarantees to hold only against efficienti.e., computationallyboundedadversaries. We establish various relationships among these notions, and in doing so, we observe their close connection with the theory of pseudodense sets by [1]. We extend the dense model theorem of to demonstrate equivalence between two definitions (indistinguishability and simulatabilitybased) of computational differential privacy. Our computational analogues of differential privacy seem to allow for more accurate constructions than the standard informationtheoretic analogues. In particular, in the context of private approximation of the distance between two vectors, we present a differentiallyprivate protocol for computing the approximation, and contrast it with a substantially more accurate protocol that is only computationally differentially private.",
"We study differential privacy in a distributed setting where two parties would like to perform analysis of their joint data while preserving privacy for both datasets. Our results imply almost tight lower bounds on the accuracy of such data analyses, both for specific natural functions (such as Hamming distance) and in general. Our bounds expose a sharp contrast between the twoparty setting and the simpler clientserver setting (where privacy guarantees are onesided). In addition, those bounds demonstrate a dramatic gap between the accuracy that can be obtained by differentially private data analysis versus the accuracy obtainable when privacy is relaxed to a computational variant of differential privacy. The first proof technique we develop demonstrates a connection between differential privacy and deterministic extraction from SanthaVazirani sources. A second connection we expose indicates that the ability to approximate a function by a lowerror differentially private protocol is strongly related to the ability to approximate it by a low communication protocol. (The connection goes in both directions.)",
"",
"In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacypreserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form Σ i f(d i ), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.",
"Private record linkage protocols allow multiple parties to exchange matching records, which refer to the same entities or have similar values, while keeping the nonmatching ones secret. Conventional protocols are based on computationally expensive cryptographic primitives and therefore do not scale. To address these scalability issues, hybrid protocols have been proposed that combine differential privacy techniques with secure multiparty computation techniques. However, a drawback of such protocols is that they disclose to the parties both the matching records and the differentially private synopses of the datasets involved in the linkage. Consequently, differential privacy is no longer always satisfied. To address this issue, we propose a novel framework that separates the private synopses from the matching records. The two parties do not access the synopses directly, but still use them to efficiently link records. We theoretically prove the security of our framework under the stateoftheart privacy notion of differential privacy for record linkage (DPRL). In addition, we develop a simple but effective strategy for releasing private synopses. Extensive experimental results show that our framework is superior to the existing methods in terms of efficiency."
]
}

1908.11515
 2970190087
 When collecting information, local differential privacy (LDP) alleviates privacy concerns of users, as users' private information is randomized before being sent to the central aggregator. However, LDP results in loss of utility due to the amount of noise that is added. To address this issue, recent work introduced an intermediate server, under the assumption that this intermediate server does not collude with the aggregator. Using this trust model, one can add less noise to achieve the same privacy guarantee, thus improving utility. In this paper, we investigate this multiple-party setting of LDP. We first analyze the threat model and identify potential adversaries. We then make observations about existing approaches and propose new techniques that achieve a better privacy-utility trade-off than existing ones. Finally, we perform experiments to compare different methods and demonstrate the benefits of using our proposed method.
 DP by Trusted Hardware In this approach, trusted hardware (e.g., SGX) is used to collect the data, tally it, and add the noise inside the protected hardware. The result is then sent to the analyst. Google proposed Prochlo @cite_29 , which uses SGX. Note that the trusted hardware can be run by the server; thus @cite_38 and @cite_1 designed oblivious DP algorithms to counter side channels (memory access patterns may correlate with the underlying data). These proposals assume the trusted hardware is safe to use. However, trusted hardware carries potential risks (e.g., @cite_23 ). This paper considers the setting without trusted hardware.
 {
"cite_N": [
"@cite_38",
"@cite_29",
"@cite_1",
"@cite_23"
],
"mid": [
"2795267922",
"2761138375",
"2809701319",
"2889224812"
],
"abstract": [
"It is wellknown that a program's memory access pattern can leak information about its input. To thwart such leakage, most existing works adopt the technique of oblivious RAM (ORAM) simulation. Such an obliviousness notion has stimulated much debate. Although ORAM techniques have significantly improved over the past few years, the concrete overheads are arguably still undesirable for realworld systems  part of this overhead is in fact inherent due to a wellknown logarithmic ORAM lower bound by Goldreich and Ostrovsky. To make matters worse, when the program's runtime or output length depend on secret inputs, it may be necessary to perform worstcase padding to achieve full obliviousness and thus incur possibly superlinear overheads. Inspired by the elegant notion of differential privacy, we initiate the study of a new notion of access pattern privacy, which we call \"(ϵ, δ)differential obliviousness\". We separate the notion of (ϵ, δ)differential obliviousness from classical obliviousness by considering several fundamental algorithmic abstractions including sorting smalllength keys, merging two sorted lists, and range query data structures (akin to binary search trees). We show that by adopting differential obliviousness with reasonable choices of ϵ and δ, not only can one circumvent several impossibilities pertaining to full obliviousness, one can also, in several cases, obtain meaningful privacy with little overhead relative to the nonprivate baselines (i.e., having privacy \"almost for free\"). On the other hand, we show that for very demanding choices of ϵ and δ, the same lower bounds for oblivious algorithms would be preserved for (ϵ, δ)differential obliviousness.",
"The largescale monitoring of computer users' software activities has become commonplace, e.g., for application telemetry, error reporting, or demographic profiling. This paper describes a principled systems architectureEncode, Shuffle, Analyze (ESA)for performing such monitoring with high utility while also protecting user privacy. The ESA design, and its Prochlo implementation, are informed by our practical experiences with an existing, large deployment of privacypreserving software monitoring. With ESA, the privacy of monitored users' data is guaranteed by its processing in a threestep pipeline. First, the data is encoded to control scope, granularity, and randomness. Second, the encoded data is collected in batches subject to a randomized threshold, and blindly shuffled, to break linkability and to ensure that individual data items get \"lost in the crowd\" of the batch. Third, the anonymous, shuffled data is analyzed by a specific analysis engine that further prevents statistical inference attacks on analysis results. ESA extends existing bestpractice methods for sensitivedata analytics, by using cryptography and statistical techniques to make explicit how data is elided and reduced in precision, how only commonenough, anonymous data is analyzed, and how this is done for only specific, permitted purposes. As a result, ESA remains compatible with the established workflows of traditional database analysis. Strong privacy guarantees, including differential privacy, can be established at each processing step to defend against malice or compromise at one or more of those steps. Prochlo develops new techniques to harden those steps, including the Stash Shuffle, a novel scalable and efficient obliviousshuffling algorithm based on Intel's SGX, and new applications of cryptographic secret sharing and blinding. We describe ESA and Prochlo, as well as experiments that validate their ability to balance utility and privacy.",
"Differential privacy has emerged as the main definition for private data analysis and machine learning. The global model of differential privacy, which assumes that users trust the data collector, provides strong privacy guarantees and introduces small errors in the output. In contrast, applications of differential privacy in commercial systems by Apple, Google, and Microsoft, use the local model. Here, users do not trust the data collector, and hence randomize their data before sending it to the data collector. Unfortunately, local model is too strong for several important applications and hence is limited in its applicability. In this work, we propose a framework based on trusted processors and a new definition of differential privacy called Oblivious Differential Privacy, which combines the best of both local and global models. The algorithms we design in this framework show interesting interplay of ideas from the streaming algorithms, oblivious algorithms, and differential privacy.",
"Intel Software Guard Extensions (SGX) isolate securitycritical code inside a protected memory area called enclave. Previous research on SGX has demonstrated that memory corruption vulnerabilities within enclave code can be exploited to extract secret keys and bypass remote attestation. However, these attacks require kernel privileges, and rely on frequently probing enclave code which results in many enclave crashes. Further, they assume a constant, not randomized memory layout. In this paper, we present novel exploitation techniques against SGX that do not require any enclave crashes and work in the presence of existing SGX randomization approaches such as SGXShield. A key contribution of our attacks is that they work under weak adversarial assumptions, e.g., not requiring kernel privileges. In fact, they can be applied to any enclave that is developed with the standard Intel SGX SDK on either Linux or Windows."
]
}

1908.11515
 2970190087
 When collecting information, local differential privacy (LDP) alleviates privacy concerns of users, as users' private information is randomized before being sent to the central aggregator. However, LDP results in loss of utility due to the amount of noise that is added. To address this issue, recent work introduced an intermediate server, under the assumption that this intermediate server does not collude with the aggregator. Using this trust model, one can add less noise to achieve the same privacy guarantee, thus improving utility. In this paper, we investigate this multiple-party setting of LDP. We first analyze the threat model and identify potential adversaries. We then make observations about existing approaches and propose new techniques that achieve a better privacy-utility trade-off than existing ones. Finally, we perform experiments to compare different methods and demonstrate the benefits of using our proposed method.
 Aggregation using Secure Computation Our work shares a similar threat model with work on secure aggregation, such as electronic voting @cite_12 and statistics aggregation @cite_25 . In particular, multiple users compute some information collaboratively without leaking information about their own data. Note that the primary goal of secure aggregation is different: the result must be deterministic and correct, while in our setting a significant amount of noise is necessary.
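The flavor of secure aggregation referenced here can be illustrated with additive masking: every pair of users shares a random mask that one adds and the other subtracts, so individual reports look random but the masks cancel in the sum. This is a toy sketch of the general idea, not the cited protocols, which use cryptographic machinery (secret sharing, zero-knowledge proofs) to make it robust.

```python
import random

def mask_reports(values, modulus=2**31):
    # Each pair (i, j) shares a fresh random mask that i adds and j
    # subtracts modulo `modulus`; individual masked reports look random,
    # but the masks cancel when the aggregator sums everything.
    masked = list(values)
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            m = random.randrange(modulus)
            masked[i] = (masked[i] + m) % modulus
            masked[j] = (masked[j] - m) % modulus
    return masked

reports = mask_reports([5, 17, 3, 9])
aggregate = sum(reports) % 2**31  # masks cancel, leaving 34
```

As the paragraph notes, this yields the exact sum; a DP variant would additionally have each user fold noise into their masked report.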
 {
"cite_N": [
"@cite_25",
"@cite_12"
],
"mid": [
"2599930814",
"2128227627"
],
"abstract": [
"This paper presents Prio, a privacypreserving system for the collection of aggregate statistics. Each Prio client holds a private data value (e.g., its current location), and a small set of servers compute statistical functions over the values of all clients (e.g., the most popular location). As long as at least one server is honest, the Prio servers learn nearly nothing about the clients' private data, except what they can infer from the aggregate statistics that the system computes. To protect functionality in the face of faulty or malicious clients, Prio uses secretshared noninteractive proofs (SNIPs), a new cryptographic technique that yields a hundredfold performance improvement over conventional zeroknowledge approaches. Prio extends classic private aggregation techniques to enable the collection of a large class of useful statistics. For example, Prio can perform a leastsquares regression on highdimensional clientprovided data without ever seeing the data in the clear.",
"We present new cryptographic protocols for multiauthority secret ballot elections that guarantee privacy, robustness, and universal verifiability. Application of some novel techniques, in particular the construction of witness hiding indistinguishable protocols from Cramer, Damgard and Schoenmakers, and the verifiable secret sharing scheme of Pedersen, reduce the work required by the voter or an authority to a linear number of cryptographic operations in the population size (compared to quadratic in previous schemes). Thus we get significantly closer to a practical election scheme."
]
}

1908.11421
 2970918254
 Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable information about model performance and behavior. Traditionally, IRT models are learned using human response pattern (RP) data, presenting a significant bottleneck for large data sets like those required for training deep neural networks (DNNs). In this work we propose learning IRT models using RPs generated from artificial crowds of DNN models. We demonstrate the effectiveness of learning IRT models using DNN-generated data through quantitative and qualitative analyses for two NLP tasks. Parameters learned from human and machine RPs for natural language inference and sentiment analysis exhibit medium to large positive correlations. We demonstrate a use-case for latent difficulty item parameters, namely training set filtering, and show that using difficulty to sample training data outperforms baseline methods. Finally, we highlight cases where human expectation about item difficulty does not match difficulty as estimated from the machine RPs.
 There have been a number of studies on modeling latent traits of data to identify a correct label (e.g., bruce1999recognizing ). There has also been work on modeling individuals to identify poor annotators @cite_35 , but that work neither jointly models the ability of individuals and data points, nor applies the resulting metrics to interpret DNN models. Other work has modeled the probability that a label is correct along with the probability of an annotator labeling an item correctly according to the @cite_15 model, but does not consider difficulty or discriminatory ability of the data points @cite_2 . In the above models an annotator's response depends on an item only through its correct label. IRT assumes a more sophisticated response mechanism involving both annotator qualities and item characteristics. The DARE model @cite_30 jointly estimates ability, difficulty and response using probabilistic inference. It was evaluated on an intelligence test of 60 multiple choice questions administered to 120 individuals.
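As background, a standard IRT formulation, the two-parameter logistic (2PL) model, ties a response to both annotator ability and item characteristics. This is an illustrative sketch of the general family, not necessarily the exact model used in the cited work.

```python
import math

def irt_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic IRT model: probability that a subject with
    ability theta answers correctly an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

irt_2pl(0.0, 1.0, 0.0)  # 0.5: ability matches difficulty
irt_2pl(0.0, 1.0, 1.0)  # below 0.5: harder item, same subject
```

The difficulty parameter b is exactly the latent quantity the abstract proposes to estimate from machine-generated response patterns.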
 {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_15",
"@cite_2"
],
"mid": [
"2105268242",
"2251818274",
"9014458",
"2295951612"
],
"abstract": [
"We propose a new probabilistic graphical model that jointly models the difficulties of questions, the abilities of participants and the correct answers to questions in aptitude testing and crowdsourcing settings. We devise an active learning adaptive testing scheme based on a greedy minimization of expected model entropy, which allows a more efficient resource allocation by dynamically choosing the next question to be asked based on the previous responses. We present experimental results that confirm the ability of our model to infer the required parameters and demonstrate that the adaptive testing scheme requires fewer questions to obtain the same accuracy as a static test scenario.",
"Nonexpert annotation services like Amazon’s Mechanical Turk (AMT) are cheap and fast ways to evaluate systems and provide categorical annotations for training data. Unfortunately, some annotators choose bad labels in order to maximize their pay. Manual identification is tedious, so we experiment with an itemresponse model. It learns in an unsupervised fashion to a) identify which annotators are trustworthy and b) predict the correct underlying labels. We match performance of more complex stateoftheart systems and perform well even under adversarial conditions. We show considerable improvements over standard baselines, both for predicted label accuracy and trustworthiness estimates. The latter can be further improved by introducing a prior on model parameters and using Variational Bayes inference. Additionally, we can achieve even higher accuracy by focusing on the instances our model is most confident in (trading in some recall), and by incorporating annotated control instances. Our system, MACE (MultiAnnotator Competence Estimation), is available for download 1 .",
"In compiling a patient record many facets are subject to errors of measurement. A model is presented which allows individual errorrates to be estimated for polytomous facets even when the patient's \"true\" response is not available. The EM algorithm is shown to provide a slow but sure way of obtaining maximum likelihood estimates of the parameters of interest. Some preliminary experience is reported and the limitations of the method are described.",
"Standard agreement measures for interannotator reliability are neither necessary nor sufficient to ensure a high quality corpus. In a case study of word sense annotation, conventional methods for evaluating labels from trained annotators are contrasted with a probabilistic annotation model applied to crowdsourced data. The annotation model provides far more information, including a certainty measure for each gold standard label; the crowdsourced data was collected at less than half the cost of the conventional approach."
]
}

1908.11421
 2970918254
 Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable information about model performance and behavior. Traditionally, IRT models are learned using human response pattern (RP) data, presenting a significant bottleneck for large data sets like those required for training deep neural networks (DNNs). In this work we propose learning IRT models using RPs generated from artificial crowds of DNN models. We demonstrate the effectiveness of learning IRT models using DNN-generated data through quantitative and qualitative analyses for two NLP tasks. Parameters learned from human and machine RPs for natural language inference and sentiment analysis exhibit medium to large positive correlations. We demonstrate a use-case for latent difficulty item parameters, namely training set filtering, and show that using difficulty to sample training data outperforms baseline methods. Finally, we highlight cases where human expectation about item difficulty does not match difficulty as estimated from the machine RPs.
 There are several other areas of study regarding how best to use training data that are related to this work. Reweighting or reordering training examples is a well-studied area of supervised learning. Often, examples are reweighted according to some notion of difficulty or model uncertainty @cite_24 ; in particular, the model's internal uncertainty is used as the basis for weighting training examples. However, model uncertainty depends on the original training data the model was trained on, while here we use an external measure of uncertainty.
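The kind of uncertainty-based reweighting described in @cite_24 can be sketched by weighting each example by the variance of its predicted correct-class probability across training iterations. The function and data below are hypothetical illustrations of that idea, not the cited implementation.

```python
def uncertainty_weights(prob_history):
    """Weight each training example by the variance of its predicted
    correct-class probability across iterations: stable predictions get
    low weight, unstable (uncertain) ones get high weight."""
    weights = []
    for probs in prob_history:
        mean = sum(probs) / len(probs)
        var = sum((p - mean) ** 2 for p in probs) / len(probs)
        weights.append(var)
    total = sum(weights) or 1.0
    return [w / total for w in weights]  # normalize to sum to 1

# hypothetical: three examples tracked over four epochs
history = [[0.9, 0.91, 0.9, 0.92],  # confident  -> low weight
           [0.2, 0.8, 0.3, 0.7],    # unstable   -> high weight
           [0.5, 0.5, 0.5, 0.5]]    # constant   -> zero weight
w = uncertainty_weights(history)
```

This makes the contrast in the paragraph concrete: these weights are a function of the model's own training trajectory, whereas IRT difficulty comes from an external crowd of models.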
 {
"cite_N": [
"@cite_24"
],
"mid": [
"2963476860"
],
"abstract": [
"Selfpaced learning and hard example mining reweight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of minibatch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation."
]
}

1908.11402
 2970866601
 A team of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to meet at the same node and declare that they have all met. Agents have different labels and move in synchronous rounds along links of the network. The above task is known as gathering and was traditionally considered under the assumption that when some agents are at the same node then they can talk. In this paper we ask the question of whether this ability of talking is needed for gathering. The answer turns out to be no. Our main contributions are two deterministic algorithms that always accomplish gathering in a much weaker model. We only assume that at any time an agent knows how many agents are at the node that it currently occupies, but agents do not see the labels of other co-located agents and cannot exchange any information with them. They also do not see other nodes than the current one. Our first algorithm works under the assumption that agents know a priori some upper bound N on the network size, and it works in time polynomial in N and in the length l of the smallest label. Our second algorithm does not assume any a priori knowledge about the network, but its complexity is exponential in the network size and in the labels of agents. Its purpose is to show feasibility of gathering under this harsher scenario. As a byproduct of our techniques we obtain, in the same weak model, the solution of the fundamental problem of leader election among agents. As an application of our result we also solve, in the same model, the well-known gossiping problem: if each agent has a message at the beginning, we show how to make all messages known to all agents, even without any a priori knowledge about the network. If agents know an upper bound N on the network size then our gossiping algorithm works in time polynomial in N, in l and in the length of the largest message.
 For the deterministic setting a lot of effort has been dedicated to the study of the feasibility of rendezvous, and to the time required to achieve this task, when feasible. For instance, deterministic rendezvous with agents equipped with tokens used to mark nodes was considered, e.g., in @cite_28 . Deterministic rendezvous of two agents that cannot mark nodes but have unique labels was discussed in @cite_17 @cite_9 . These papers are concerned with the time of rendezvous in arbitrary graphs. In @cite_17 the authors show a rendezvous algorithm polynomial in the size of the graph, in the length of the shorter label and in the delay between the starting time of the agents. In @cite_9 rendezvous time is polynomial in the first two of these parameters and independent of the delay.
 {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_17"
],
"mid": [
"2131887891",
"2623545498",
""
],
"abstract": [
"In the rendezvous search problem, two mobile agents must move along the n nodes of a network so as to minimize the time required to meet or rendezvous. When the mobile agents are identical and the network is anonymous, however, the resulting symmetry can make the problem impossible to solve. Symmetry is typically broken by having the mobile agents run either a randomized algorithm or different deterministic algorithms. We investigate the use of identical tokens to break symmetry so that the two mobile agents can run the same deterministic algorithm. After deriving the explicit conditions under which identical tokens can be used to break symmetry on the n node ring, we derive the lower and upper bounds for the time and memory complexity of the rendezvous search problem with various parameter sets. While these results suggest a possible tradeoff between the mobile agents' memory and the time complexity of the rendezvous search problem, we prove that this tradeoff is limited.",
"We obtain several improved solutions for the deterministic rendezvous problem in general undirected graphs. Our solutions answer several problems left open in a recent paper by We also introduce an interesting variant of the rendezvous problem which we call the deterministic treasure hunt problem. Both the rendezvous and the treasure hunt problems motivate the study of universal traversal sequences and universal exploration sequences with some strengthened properties. We call such sequences strongly universal traversal (exploration) sequences. We give an explicit construction of strongly universal exploration sequences. The existence of strongly universal traversal sequences, as well as the solution of the most difficult variant of the deterministic treasure hunt problem, are left as intriguing open problems.",
""
]
}

1908.11402
 2970866601
 A team of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to meet at the same node and declare that they have all met. Agents have different labels and move in synchronous rounds along links of the network. The above task is known as gathering and was traditionally considered under the assumption that when some agents are at the same node then they can talk. In this paper we ask the question of whether this ability of talking is needed for gathering. The answer turns out to be no. Our main contributions are two deterministic algorithms that always accomplish gathering in a much weaker model. We only assume that at any time an agent knows how many agents are at the node that it currently occupies, but agents do not see the labels of other co-located agents and cannot exchange any information with them. They also do not see other nodes than the current one. Our first algorithm works under the assumption that agents know a priori some upper bound N on the network size, and it works in time polynomial in N and in the length l of the smallest label. Our second algorithm does not assume any a priori knowledge about the network, but its complexity is exponential in the network size and in the labels of agents. Its purpose is to show the feasibility of gathering under this harsher scenario. As a byproduct of our techniques we obtain, in the same weak model, the solution of the fundamental problem of leader election among agents. As an application of our result we also solve, in the same model, the well-known gossiping problem: if each agent has a message at the beginning, we show how to make all messages known to all agents, even without any a priori knowledge about the network. If agents know an upper bound N on the network size then our gossiping algorithm works in time polynomial in N, in l and in the length of the largest message.
 Memory required by two anonymous agents to achieve deterministic rendezvous has been studied in @cite_22 for trees and in @cite_0 for general graphs. Memory needed for randomized rendezvous in the ring is discussed, e.g., in @cite_25 .
 {
"cite_N": [
"@cite_0",
"@cite_25",
"@cite_22"
],
"mid": [
"1972775782",
"1535538796",
"2081055073"
],
"abstract": [
"Two identical (anonymous) mobile agents start from arbitrary nodes in an a priori unknown graph and move synchronously from node to node with the goal of meeting. This rendezvous problem has been thoroughly studied, both for anonymous and for labeled agents, along with another basic task, that of exploring graphs by mobile agents. The rendezvous problem is known to be not easier than graph exploration. A well-known recent result on exploration, due to Reingold, states that deterministic exploration of arbitrary graphs can be performed in logspace, i.e., using an agent equipped with O(log n) bits of memory, where n is the size of the graph. In this paper we study the size of memory of mobile agents that permits us to solve the rendezvous problem deterministically. Our main result establishes the minimum size of the memory of anonymous agents that guarantees deterministic rendezvous when it is feasible. We show that this minimum size is Θ(log n), where n is the size of the graph, regardless of the delay between the starting times of the agents. More precisely, we construct identical agents equipped with Θ(log n) memory bits that solve the rendezvous problem in all graphs with at most n nodes, if they start with any delay τ, and we prove a matching lower bound Ω(log n) on the number of memory bits needed to accomplish rendezvous, even for simultaneous start. In fact, this lower bound is achieved already on the class of rings. This shows a significant contrast between rendezvous and exploration: e.g., while exploration of rings (without stopping) can be done using constant memory, rendezvous, even with simultaneous start, requires logarithmic memory. As a byproduct of our techniques introduced to obtain logspace rendezvous we get the first algorithm to find a quotient graph of a given unlabeled graph in polynomial time, by means of a mobile agent moving around the graph.",
"We present a tradeoff between the expected time for two identical agents to rendezvous on a synchronous, anonymous, oriented ring and the memory requirements of the agents. In particular, we show that there exists a 2^t-state agent which can achieve rendezvous on an n-node ring in expected time O(n^2/2^t + 2^t), and that any t/2-state agent requires expected time Ω(n^2/2^t). As a corollary we observe that Θ(log log n) bits of memory are necessary and sufficient to achieve rendezvous in linear time.",
"The aim of rendezvous in a graph is meeting of two mobile agents at some node of an unknown anonymous connected graph. In this article, we focus on rendezvous in trees, and, analogously to the efforts that have been made for solving the exploration problem with compact automata, we study the size of memory of mobile agents that permits to solve the rendezvous problem deterministically. We assume that the agents are identical, and move in synchronous rounds. We first show that if the delay between the starting times of the agents is arbitrary, then the lower bound on memory required for rendezvous is Ω(log n) bits, even for the line of length n. This lower bound meets a previously known upper bound of O(log n) bits for rendezvous in arbitrary graphs of size at most n. Our main result is a proof that the amount of memory needed for rendezvous with simultaneous start depends essentially on the number e of leaves of the tree, and is exponentially less impacted by the number n of nodes. Indeed, we present two identical agents with O(log e + log log n) bits of memory that solve the rendezvous problem in all trees with at most n nodes and at most e leaves. Hence, for the class of trees with polylogarithmically many leaves, there is an exponential gap in minimum memory size needed for rendezvous between the scenario with arbitrary delay and the scenario with delay zero. Moreover, we show that our upper bound is optimal by proving that Ω(log e + log log n) bits of memory are required for rendezvous, even in the class of trees with degrees bounded by 3."
]
}

1908.11402
 2970866601
 A team of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to meet at the same node and declare that they have all met. Agents have different labels and move in synchronous rounds along links of the network. The above task is known as gathering and was traditionally considered under the assumption that when some agents are at the same node then they can talk. In this paper we ask the question of whether this ability of talking is needed for gathering. The answer turns out to be no. Our main contributions are two deterministic algorithms that always accomplish gathering in a much weaker model. We only assume that at any time an agent knows how many agents are at the node that it currently occupies, but agents do not see the labels of other co-located agents and cannot exchange any information with them. They also do not see other nodes than the current one. Our first algorithm works under the assumption that agents know a priori some upper bound N on the network size, and it works in time polynomial in N and in the length l of the smallest label. Our second algorithm does not assume any a priori knowledge about the network, but its complexity is exponential in the network size and in the labels of agents. Its purpose is to show the feasibility of gathering under this harsher scenario. As a byproduct of our techniques we obtain, in the same weak model, the solution of the fundamental problem of leader election among agents. As an application of our result we also solve, in the same model, the well-known gossiping problem: if each agent has a message at the beginning, we show how to make all messages known to all agents, even without any a priori knowledge about the network. If agents know an upper bound N on the network size then our gossiping algorithm works in time polynomial in N, in l and in the length of the largest message.
 Apart from the synchronous model used in this paper, several authors have investigated asynchronous gathering in the plane @cite_34 @cite_35 and in network environments @cite_19 @cite_1 @cite_30 @cite_13 . In the latter scenario an agent chooses the edge it decides to traverse, but the adversary controls the speed of the agent. Under this assumption rendezvous at a node cannot be guaranteed even in very simple graphs, and hence the rendezvous requirement is relaxed to permit the agents to meet inside an edge.
 {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_1",
"@cite_19",
"@cite_34",
"@cite_13"
],
"mid": [
"2050063944",
"1635699204",
"2100580556",
"1493636406",
"2087073465",
"1582826078"
],
"abstract": [
"Two mobile agents (robots) having distinct labels and located in nodes of an unknown anonymous connected graph have to meet. We consider the asynchronous version of this well-studied rendezvous problem and we seek fast deterministic algorithms for it. Since in the asynchronous setting, meeting at a node, which is normally required in rendezvous, is in general impossible, we relax the demand by allowing meeting of the agents inside an edge as well. The measure of performance of a rendezvous algorithm is its cost: for a given initial location of agents in a graph, this is the number of edge traversals of both agents until rendezvous is achieved. If agents are initially situated at a distance D in an infinite line, we show a rendezvous algorithm with cost O(D·Lmin^2) when D is known and O((D + Lmax)^3) if D is unknown, where Lmin and Lmax are the lengths of the shorter and longer label of the agents, respectively. These results still hold for the case of the ring of unknown size, but then we also give an optimal algorithm of cost O(n·Lmin), if the size n of the ring is known, and of cost O(n·Lmax), if it is unknown. For arbitrary graphs, we show that rendezvous is feasible if an upper bound on the size of the graph is known and we give an optimal algorithm of cost O(D·Lmin) if the topology of the graph and the initial positions are known to agents.",
"We consider a collection of robots which are identical (anonymous), have limited visibility of the environment, and no memory of the past (oblivious); furthermore, they are totally asynchronous in their actions, computations, and movements. We show that, even in such a totally asynchronous setting, it is possible for the robots to gather in the same location in finite time, provided they have a compass.",
"Two mobile agents (robots) with distinct labels have to meet in an arbitrary, possibly infinite, unknown connected graph or in an unknown connected terrain in the plane. Agents are modeled as points, and the route of each of them only depends on its label and on the unknown environment. The actual walk of each agent also depends on an asynchronous adversary that may arbitrarily vary the speed of the agent, stop it, or even move it back and forth, as long as the walk of the agent in each segment of its route is continuous, does not leave it and covers all of it. Meeting in a graph means that both agents must be at the same time in some node or in some point inside an edge of the graph, while meeting in a terrain means that both agents must be at the same time in some point of the terrain. Does there exist a deterministic algorithm that allows any two agents to meet in any unknown environment in spite of this very powerful adversary? We give deterministic rendezvous algorithms for agents starting at arbitrary nodes of any anonymous connected graph (finite or infinite) and for agents starting at any interior points with rational coordinates in any closed region of the plane with path-connected interior. While our algorithms work in a very general setting — agents can, indeed, meet almost everywhere — we show that none of the above few limitations imposed on the environment can be removed. On the other hand, our algorithm also guarantees the following approximate rendezvous for agents starting at arbitrary interior points of a terrain as above: agents will eventually get at an arbitrarily small positive distance from each other.",
"Two anonymous mobile agents (robots) moving in an asynchronous manner have to meet in an infinite grid of dimension δ > 0, starting from two arbitrary positions at distance at most d. Since the problem is clearly infeasible in such general setting, we assume that the grid is embedded in a δ-dimensional Euclidean space and that each agent knows the Cartesian coordinates of its own initial position (but not the one of the other agent). We design an algorithm permitting the agents to meet after traversing a trajectory of length O(d^δ polylog d). This bound for the case of 2D grids subsumes the main result of [12]. The algorithm is almost optimal, since the Ω(d^δ) lower bound is straightforward. Further, we apply our rendezvous method to the following network design problem. The ports of the δ-dimensional grid have to be set such that two anonymous agents starting at distance at most d from each other will always meet, moving in an asynchronous manner, after traversing a trajectory of length O(d^δ polylog d). We can also apply our method to a version of the geometric rendezvous problem. Two anonymous agents move asynchronously in the δ-dimensional Euclidean space. The agents have the radii of visibility of r1 and r2, respectively. Each agent knows only its own initial position and its own radius of visibility. The agents meet when one agent is visible to the other one. We propose an algorithm designing the trajectory of each agent, so that they always meet after traveling a total distance of O((d/r)^δ polylog(d/r)), where r = min(r1, r2) and for r ≥ 1.",
"Consider a set of @math identical mobile computational entities in the plane, called robots, operating in Look-Compute-Move cycles, without any means of direct communication. The Gathering Problem is the primitive task of all entities gathering in finite time at a point not fixed in advance, without any external control. The problem has been extensively studied in the literature under a variety of strong assumptions (e.g., synchronicity of the cycles, instantaneous movements, complete memory of the past, common coordinate system, etc.). In this paper we consider the setting without those assumptions, that is, when the entities are oblivious (i.e., they do not remember results and observations from previous cycles), disoriented (i.e., have no common coordinate system), and fully asynchronous (i.e., no assumptions exist on timing of cycles and activities within a cycle). The existing algorithmic contributions for such robots are limited to solutions for @math or for restricted sets of initial configura...",
"Two mobile agents starting at different nodes of an unknown network have to meet. This task is known in the literature as rendezvous. Each agent has a different label which is a positive integer known to it but unknown to the other agent. Agents move in an asynchronous way: the speed of agents may vary and is controlled by an adversary. The cost of a rendezvous algorithm is the total number of edge traversals by both agents until their meeting. The only previous deterministic algorithm solving this problem has cost exponential in the size of the graph and in the larger label. In this paper we present a deterministic rendezvous algorithm with cost polynomial in the size of the graph and in the length of the smaller label. Hence, we decrease the cost exponentially in the size of the graph and doubly exponentially in the labels of agents. As an application of our rendezvous algorithm we solve several fundamental problems involving teams of unknown size larger than 1 of labeled agents moving asynchronously in..."
]
}

1908.11402
 2970866601
 A team of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to meet at the same node and declare that they have all met. Agents have different labels and move in synchronous rounds along links of the network. The above task is known as gathering and was traditionally considered under the assumption that when some agents are at the same node then they can talk. In this paper we ask the question of whether this ability of talking is needed for gathering. The answer turns out to be no. Our main contributions are two deterministic algorithms that always accomplish gathering in a much weaker model. We only assume that at any time an agent knows how many agents are at the node that it currently occupies, but agents do not see the labels of other co-located agents and cannot exchange any information with them. They also do not see other nodes than the current one. Our first algorithm works under the assumption that agents know a priori some upper bound N on the network size, and it works in time polynomial in N and in the length l of the smallest label. Our second algorithm does not assume any a priori knowledge about the network, but its complexity is exponential in the network size and in the labels of agents. Its purpose is to show the feasibility of gathering under this harsher scenario. As a byproduct of our techniques we obtain, in the same weak model, the solution of the fundamental problem of leader election among agents. As an application of our result we also solve, in the same model, the well-known gossiping problem: if each agent has a message at the beginning, we show how to make all messages known to all agents, even without any a priori knowledge about the network. If agents know an upper bound N on the network size then our gossiping algorithm works in time polynomial in N, in l and in the length of the largest message.
 A different asynchronous model for gathering in ring networks was considered in @cite_36 @cite_20 . In this model, agents were memoryless but they could perform look operations which gave them a snapshot of the entire network with the positions of all agents in it.
 {
"cite_N": [
"@cite_36",
"@cite_20"
],
"mid": [
"2400422553",
"2144182788"
],
"abstract": [
"Consider a set of mobile robots placed on distinct nodes of a discrete, anonymous, and bidirectional ring. Asynchronously, each robot takes a snapshot of the ring, determining the size of the ring and which nodes are either occupied by robots or empty. Based on the observed configuration, it decides whether to move to one of its adjacent nodes or not. In the first case, it performs the computed move, eventually. This model of computation is known as Look-Compute-Move. The computation depends on the required task. In this paper, we solve both the well-known Gathering and Exclusive Searching tasks. In the former problem, all robots must simultaneously occupy the same node, eventually. In the latter problem, the aim is to clear all edges of the graph. An edge is cleared if it is traversed by a robot or if both its endpoints are occupied. We consider the exclusive searching where it must be ensured that two robots never occupy the same node. Moreover, since the robots are oblivious, the clearing is perpetual, i.e., the ring is cleared infinitely often. In the literature, most contributions are restricted to a subset of initial configurations. Here, we design two different algorithms and provide a characterization of the initial configurations that permit the resolution of the problems under very weak assumptions. More precisely, we provide a full characterization (except for few pathological cases) of the initial configurations for which gathering can be solved. The algorithm relies on the necessary assumption of the local-weak multiplicity detection. This means that during the Look phase a robot detects also whether the node it occupies is occupied by other robots, without acquiring the exact number. For the exclusive searching, we characterize all (except for few pathological cases) aperiodic configurations from which the problem is feasible. We also provide some impossibility results for the case of periodic configurations.",
"We consider the problem of gathering identical, memoryless, mobile robots in one node of an anonymous unoriented ring. Robots start from different nodes of the ring. They operate in Look-Compute-Move cycles and have to end up in the same node. In one cycle, a robot takes a snapshot of the current configuration (Look), makes a decision to stay idle or to move to one of its adjacent nodes (Compute), and in the latter case makes an instantaneous move to this neighbor (Move). Cycles are performed asynchronously for each robot. For an odd number of robots we prove that gathering is feasible if and only if the initial configuration is not periodic, and we provide a gathering algorithm for any such configuration. For an even number of robots we decide the feasibility of gathering except for one type of symmetric initial configurations, and provide gathering algorithms for initial configurations proved to be gatherable."
]
}

1908.11402
 2970866601
 A team of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to meet at the same node and declare that they have all met. Agents have different labels and move in synchronous rounds along links of the network. The above task is known as gathering and was traditionally considered under the assumption that when some agents are at the same node then they can talk. In this paper we ask the question of whether this ability of talking is needed for gathering. The answer turns out to be no. Our main contributions are two deterministic algorithms that always accomplish gathering in a much weaker model. We only assume that at any time an agent knows how many agents are at the node that it currently occupies, but agents do not see the labels of other co-located agents and cannot exchange any information with them. They also do not see other nodes than the current one. Our first algorithm works under the assumption that agents know a priori some upper bound N on the network size, and it works in time polynomial in N and in the length l of the smallest label. Our second algorithm does not assume any a priori knowledge about the network, but its complexity is exponential in the network size and in the labels of agents. Its purpose is to show the feasibility of gathering under this harsher scenario. As a byproduct of our techniques we obtain, in the same weak model, the solution of the fundamental problem of leader election among agents. As an application of our result we also solve, in the same model, the well-known gossiping problem: if each agent has a message at the beginning, we show how to make all messages known to all agents, even without any a priori knowledge about the network. If agents know an upper bound N on the network size then our gossiping algorithm works in time polynomial in N, in l and in the length of the largest message.
 In @cite_32 , the authors considered the problem of network exploration by many agents that could not communicate with each other. However, the information available to an agent in each round was quite different from that in the present paper. Indeed, in @cite_32 , agents were getting local traffic reports consisting of answers to three questions: "Am I alone in the node?", "Did any agent enter this node in this round?", "Did any agent leave this node in this round?". To see that this feedback cannot be derived from our present assumption of knowing the number of agents co-located with an agent in a given round, consider the situation when an agent @math stays at a node, and in a given round one other agent leaves the node and another agent enters it. In our present model, agent @math does not notice any change, while in the model from @cite_32 it gets reports about somebody leaving the node and somebody entering it.
 {
"cite_N": [
"@cite_32"
],
"mid": [
"2089187143"
],
"abstract": [
"A team consisting of an unknown number of mobile agents starting from different nodes of an unknown network, possibly at different times, have to explore the network: Every node must be visited by at least one agent, and all agents must eventually stop. Agents are anonymous (identical), execute the same deterministic algorithm, and move in synchronous rounds along links of the network. They are silent: They cannot send any messages to other agents or mark visited nodes in any way. In the absence of any additional information, exploration with termination of an arbitrary network in this model, devoid of any means of communication between agents, is impossible. Our aim is to solve the exploration problem by giving to agents very restricted local traffic reports. Specifically, an agent that is at a node v in a given round is provided with three bits of information answering the following questions: Am I alone at v? Did any agent enter v in this round? Did any agent exit v in this round? We show that this small amount of information permits us to solve the exploration problem in arbitrary networks. More precisely, we give a deterministic terminating exploration algorithm working in arbitrary networks for all initial configurations that are not perfectly symmetric; that is, in which there are agents with different views of the network. The algorithm works in polynomial time in the (unknown) size of the network. A deterministic terminating exploration algorithm working for all initial configurations in arbitrary networks does not exist."
]
}

1908.11402
 2970866601
 A team of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to meet at the same node and declare that they have all met. Agents have different labels and move in synchronous rounds along links of the network. The above task is known as gathering and was traditionally considered under the assumption that when some agents are at the same node then they can talk. In this paper we ask the question of whether this ability of talking is needed for gathering. The answer turns out to be no. Our main contributions are two deterministic algorithms that always accomplish gathering in a much weaker model. We only assume that at any time an agent knows how many agents are at the node that it currently occupies, but agents do not see the labels of other co-located agents and cannot exchange any information with them. They also do not see other nodes than the current one. Our first algorithm works under the assumption that agents know a priori some upper bound N on the network size, and it works in time polynomial in N and in the length l of the smallest label. Our second algorithm does not assume any a priori knowledge about the network, but its complexity is exponential in the network size and in the labels of agents. Its purpose is to show the feasibility of gathering under this harsher scenario. As a byproduct of our techniques we obtain, in the same weak model, the solution of the fundamental problem of leader election among agents. As an application of our result we also solve, in the same model, the well-known gossiping problem: if each agent has a message at the beginning, we show how to make all messages known to all agents, even without any a priori knowledge about the network. If agents know an upper bound N on the network size then our gossiping algorithm works in time polynomial in N, in l and in the length of the largest message.
 In @cite_5 , the problem of conveying bits of information using movements of robots was considered in a context quite different from ours: mobile robots moving in the plane could periodically get snapshots of the entire configuration of robots.
 {
"cite_N": [
"@cite_5"
],
"mid": [
"2944251479"
],
"abstract": [
"We investigate avenues for the exchange of information (explicit communication) among deaf and dumb mobile robots scattered in the plane. We introduce the use of movement signals (analogously to flight signals and bees' waggle dance) as a means to transfer messages, enabling the use of distributed algorithms among robots. We propose one-to-one deterministic movement protocols that implement explicit communication among asynchronous robots. We first show how the movements of robots can provide implicit acknowledgment in asynchronous systems. We use this result to design one-to-one communication among a pair of robots. Then, we propose two one-to-one communication protocols for any system of n ≥ 2 robots. The former works for robots equipped with observable IDs that agree on a common direction (sense of direction). The latter enables one-to-one communication assuming robots devoid of any observable IDs or sense of direction. All three protocols (for either two or any number of robots) assume that no robot remains inactive forever. However, they cannot prevent the robots from moving either away from or closer to each other, thereby requiring robots with infinite visibility. In this paper, we also present how to overcome these two disadvantages. These protocols enable the use of distributed algorithms based on message exchanges among swarms of stigmergic robots. They also allow robots to be equipped with the means of communication to tolerate faults in their communication devices."
]
}

1908.11526
 2970961333
 The success of existing deep-learning based multiview stereo (MVS) approaches greatly depends on the availability of large-scale supervision in the form of dense depth maps. Such supervision, while not always possible, tends to hinder the generalization ability of the learned models in never-seen-before scenarios. In this paper, we propose the first unsupervised learning based MVS network, which learns the multiview depth maps from the input multiview images and does not need ground-truth 3D training data. Our network is symmetric in predicting depth maps for all views simultaneously, where we enforce cross-view consistency of multiview depth maps during both training and testing stages. Thus, the learned multiview depth maps naturally comply with the underlying 3D scene geometry. Besides, our network also learns the multiview occlusion maps, which further improves the robustness of our network in handling real-world occlusions. Experimental results on multiple benchmarking datasets demonstrate the effectiveness of our network and the excellent generalization ability.
 Traditional MVS methods focus on designing neighbor selection and photometric error measures for efficient and accurate reconstruction @cite_26 @cite_34 @cite_9 . Furukawa et al. @cite_22 adopted geometric structures to reconstruct textured regions and applied Markov random fields to recover per-view depth maps. Langguth et al. @cite_15 used a shading-aware mechanism to improve the robustness of view selection. Wu et al. @cite_29 utilized lighting and shadow information to enhance performance in ill-posed regions. Michael et al. @cite_18 chose images to match (both at a per-view and per-pixel level) to address dramatic changes in lighting, scale, clutter, and other effects. Schonberger et al. @cite_11 proposed the COLMAP framework, which applied photometric and geometric priors to optimize view selection and used geometric consistency to refine the depth maps.
 {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_9",
"@cite_29",
"@cite_15",
"@cite_34",
"@cite_11"
],
"mid": [
"2156116778",
"2257063750",
"",
"",
"2098551034",
"2519384515",
"",
"2519683295"
],
"abstract": [
"We present a multiview stereo algorithm that addresses the extreme changes in lighting, scale, clutter, and other effects in large online community photo collections. Our idea is to intelligently choose images to match, both at a per-view and per-pixel level. We show that such adaptive view selection enables robust performance even with dramatic appearance variability. The stereo matching technique takes as input sparse 3D points reconstructed from structure-from-motion methods and iteratively grows surfaces from these points. Optimizing for surface normals within a photo-consistency measure significantly improves the matching results. While the focus of our approach is to estimate high-quality depth maps, we also show examples of merging the resulting depth maps into compelling scene reconstructions. We demonstrate our algorithm on standard multiview stereo datasets and on casually acquired photo collections of famous scenes gathered from the Internet.",
"This tutorial presents a hands-on view of the field of multi-view stereo with a focus on practical algorithms. Multi-view stereo algorithms are able to construct highly detailed 3D models from images alone. They take a possibly very large set of images and construct a plausible 3D geometry that explains the images under some reasonable assumptions, the most important being scene rigidity. The tutorial frames the multi-view stereo problem as an image/geometry consistency optimization problem. It describes in detail its two main ingredients: robust implementations of photometric consistency measures, and efficient optimization algorithms. It then presents how these main ingredients are used by some of the most successful algorithms, applied in real applications, and deployed as products in the industry. Finally it describes more advanced approaches exploiting domain-specific knowledge such as structural priors, and gives an overview of the remaining challenges and future research directions.",
"",
"",
"Multi-view stereo methods reconstruct 3D geometry from images well for sufficiently textured scenes, but often fail to recover high-frequency surface detail, particularly for smoothly shaded surfaces. On the other hand, shape-from-shading methods can recover fine detail from shading variations. Unfortunately, it is nontrivial to apply shape-from-shading alone to multi-view data, and most shading-based estimation methods only succeed under very restricted or controlled illumination. We present a new algorithm that combines multi-view stereo and shading-based refinement for high-quality reconstruction of 3D geometry models from images taken under constant but otherwise arbitrary illumination. We have tested our algorithm on several scenes that were captured under several general and unknown lighting conditions, and we show that our final reconstructions rival laser range scans.",
"We present a novel multi-view reconstruction approach that effectively combines stereo and shape-from-shading energies into a single optimization scheme. Our method uses image gradients to transition between stereo matching (which is more accurate at large gradients) and Lambertian shape-from-shading (which is more robust in flat regions). In addition, we show that our formulation is invariant to spatially varying albedo without explicitly modeling it. We show that the resulting energy function can be optimized efficiently using a smooth surface representation based on bicubic patches, and demonstrate that this algorithm outperforms both previous multi-view stereo algorithms and shading-based refinement approaches on a number of datasets.",
"",
"This work presents a Multi-View Stereo system for robust and efficient dense modeling from unstructured image collections. Our core contributions are the joint estimation of depth and normal information, pixel-wise view selection using photometric and geometric priors, and a multi-view geometric consistency term for the simultaneous refinement and image-based depth and normal fusion. Experiments on benchmarks and large-scale Internet photo collections demonstrate state-of-the-art performance in terms of accuracy, completeness, and efficiency."
]
}
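The photometric error measures these traditional pipelines rely on are typically patch-based consistency scores such as zero-mean normalized cross-correlation (NCC). A minimal NumPy sketch of the idea (the function name and the 7x7 patch size are illustrative, not taken from any cited system):

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Zero-mean normalized cross-correlation between two image patches.

    Returns a score in [-1, 1]; values near 1 indicate photometric
    consistency between the reference patch and the patch sampled
    from a source view at a candidate depth.
    """
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)

# Toy usage: a patch is maximally consistent with itself.
ref = np.random.rand(7, 7)
print(ncc(ref, ref))  # close to 1.0
```

In an actual MVS pipeline this score would be evaluated over many depth hypotheses per pixel, with neighbor selection deciding which source views contribute patches.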

1908.11526
 2970961333
 The success of existing deep-learning-based multi-view stereo (MVS) approaches greatly depends on the availability of large-scale supervision in the form of dense depth maps. Such supervision, while not always possible, tends to hinder the generalization ability of the learned models in never-seen-before scenarios. In this paper, we propose the first unsupervised-learning-based MVS network, which learns multi-view depth maps from the input multi-view images and does not need ground-truth 3D training data. Our network is symmetric in predicting depth maps for all views simultaneously, where we enforce cross-view consistency of the multi-view depth maps during both the training and testing stages. Thus, the learned multi-view depth maps naturally comply with the underlying 3D scene geometry. Besides, our network also learns multi-view occlusion maps, which further improves the robustness of our network in handling real-world occlusions. Experimental results on multiple benchmarking datasets demonstrate the effectiveness of our network and its excellent generalization ability.
 Different from the above geometry-based methods, learning-based approaches adopt convolution operations, which have a powerful feature-learning capability, for better pairwise patch matching @cite_32 @cite_7 @cite_10 . Ji et al. @cite_24 pre-warped the multi-view images to 3D space, then used CNNs to regularize the cost volume. Huang et al. @cite_37 proposed DeepMVS, which aggregates information across a set of unordered images. Kar et al. @cite_31 directly leveraged camera parameters for the projection operation used to form the cost volume, achieving an end-to-end network. Yao et al. @cite_17 adopted a variance-based cost metric to aggregate the cost volume, then applied 3D convolutions to regularize it and regress the depth map. Im et al. @cite_35 applied a plane-sweeping approach to build a cost volume from deep features, then regularized the cost volume via context-aware aggregation to improve depth regression. Very recently, Yao et al. @cite_4 introduced a scalable MVS framework based on a recurrent neural network to reduce memory consumption.
 {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_4",
"@cite_7",
"@cite_32",
"@cite_24",
"@cite_31",
"@cite_10",
"@cite_17"
],
"mid": [
"2909918887",
"2964153986",
"2952740189",
"",
"2563100679",
"2964243776",
"2963966978",
"",
"2962793285"
],
"abstract": [
"",
"We present DeepMVS, a deep convolutional neural network (ConvNet) for multi-view stereo reconstruction. Taking an arbitrary number of posed images as input, we first produce a set of plane-sweep volumes and use the proposed DeepMVS network to predict high-quality disparity maps. The key contributions that enable these results are (1) supervised pretraining on a photorealistic synthetic dataset, (2) an effective method for aggregating information across a set of unordered images, and (3) integrating multi-layer feature activations from the pre-trained VGG-19 network. We validate the efficacy of DeepMVS using the ETH3D Benchmark. Our results show that DeepMVS compares favorably against state-of-the-art conventional MVS algorithms and other ConvNet-based methods, particularly for near-textureless regions and thin structures.",
"Deep learning has recently demonstrated its excellent performance for multi-view stereo (MVS). However, one major limitation of current learned MVS approaches is scalability: the memory-consuming cost volume regularization makes learned MVS hard to apply to high-resolution scenes. In this paper, we introduce a scalable multi-view stereo framework based on the recurrent neural network. Instead of regularizing the entire 3D cost volume in one go, the proposed Recurrent Multi-view Stereo Network (R-MVSNet) sequentially regularizes the 2D cost maps along the depth direction via the gated recurrent unit (GRU). This dramatically reduces the memory consumption and makes high-resolution reconstruction feasible. We first show the state-of-the-art performance achieved by the proposed R-MVSNet on the recent MVS benchmarks. Then, we further demonstrate the scalability of the proposed method on several large-scale scenarios, where previous learned approaches often fail due to the memory constraint. Code is available at this https URL.",
"",
"Indoor scene understanding is central to applications such as robot navigation and human companion assistance. Over the last years, data-driven deep neural networks have outperformed many traditional approaches thanks to their representation learning capabilities. One of the bottlenecks in training for better representations is the amount of available per-pixel ground truth data that is required for core scene understanding tasks such as semantic segmentation, normal prediction, and object boundary detection. To address this problem, a number of works proposed using synthetic data. However, a systematic study of how such synthetic data is generated is missing. In this work, we introduce a large-scale synthetic dataset with 500K physically-based rendered images from 45K realistic 3D indoor scenes. We study the effects of rendering methods and scene lighting on training for three computer vision tasks: surface normal prediction, semantic segmentation, and object boundary detection. This study provides insights into the best practices for training with synthetic data (more realistic rendering is worth it) and shows that pretraining with our new synthetic dataset can improve results beyond the current state of the art on all three tasks.",
"This paper proposes an end-to-end learning framework for multi-view stereopsis. We term the network SurfaceNet. It takes a set of images and their corresponding camera parameters as input and directly infers the 3D model. The key advantage of the framework is that both photo-consistency as well as geometric relations of the surface structure can be directly learned for the purpose of multi-view stereopsis in an end-to-end fashion. SurfaceNet is a fully 3D convolutional network, which is achieved by encoding the camera parameters together with the images in a 3D voxel representation. We evaluate SurfaceNet on the large-scale DTU benchmark.",
"We present a learnt system for multi-view stereopsis. In contrast to recent learning-based methods for 3D reconstruction, we leverage the underlying 3D geometry of the problem through feature projection and unprojection along viewing rays. By formulating these operations in a differentiable manner, we are able to learn the system end-to-end for the task of metric 3D reconstruction. End-to-end learning allows us to jointly reason about shape priors while conforming to geometric constraints, enabling reconstruction from much fewer images (even a single image) than required by classical approaches as well as completion of unseen surfaces. We thoroughly evaluate our approach on the ShapeNet dataset and demonstrate the benefits over classical approaches and recent learning-based methods.",
"",
"We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts to arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-art methods, but is also several times faster at runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranked first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet."
]
}
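The variance-based cost metric used by MVSNet-style networks reduces N warped per-view feature volumes to a single cost volume that is symmetric in the number of views. A sketch of that aggregation step alone (shapes are illustrative; the actual networks operate on learned deep features warped via differentiable homographies):

```python
import numpy as np

def variance_cost_volume(feature_volumes):
    """Aggregate N per-view feature volumes into one cost volume.

    feature_volumes: array of shape (N, C, D, H, W) -- features from N
    views warped onto the reference frustum at D depth hypotheses.
    The variance across views is low where the views agree, i.e. near
    the depth hypothesis that matches the true surface.
    """
    mean = feature_volumes.mean(axis=0)                  # (C, D, H, W)
    return ((feature_volumes - mean) ** 2).mean(axis=0)  # (C, D, H, W)

# Toy usage: three identical "views" agree everywhere, so cost is zero.
views = np.stack([np.ones((4, 8, 5, 5))] * 3)
print(variance_cost_volume(views).max())  # 0.0
```

Because the mean and variance are permutation-invariant and defined for any N, the same network can take an arbitrary number of input views, which is the property the surveyed methods exploit.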

1908.11526
 2970961333
 The success of existing deep-learning-based multi-view stereo (MVS) approaches greatly depends on the availability of large-scale supervision in the form of dense depth maps. Such supervision, while not always possible, tends to hinder the generalization ability of the learned models in never-seen-before scenarios. In this paper, we propose the first unsupervised-learning-based MVS network, which learns multi-view depth maps from the input multi-view images and does not need ground-truth 3D training data. Our network is symmetric in predicting depth maps for all views simultaneously, where we enforce cross-view consistency of the multi-view depth maps during both the training and testing stages. Thus, the learned multi-view depth maps naturally comply with the underlying 3D scene geometry. Besides, our network also learns multi-view occlusion maps, which further improves the robustness of our network in handling real-world occlusions. Experimental results on multiple benchmarking datasets demonstrate the effectiveness of our network and its excellent generalization ability.
 Unsupervised learning has been developed for monocular depth estimation and binocular stereo matching by exploiting photometric consistency and regularization. Xie et al. @cite_25 proposed Deep3D to automatically convert 2D videos and images to a stereoscopic 3D format. Zhou et al. @cite_30 proposed an unsupervised monocular depth prediction method that minimizes the image reconstruction error. Mahjourian et al. @cite_23 explicitly considered the inferred 3D geometry of the whole scene, enforcing consistency of the estimated 3D point clouds and ego-motion across consecutive frames. Zhong et al. @cite_13 @cite_14 used the image warping error as the loss function to drive the learning process for estimating the disparity map.
 {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_23",
"@cite_13",
"@cite_25"
],
"mid": [
"2609883120",
"2887123368",
"2963906250",
"2751625733",
"2336968928"
],
"abstract": [
"We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multi-view pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"Deep-learning-based stereo matching methods have shown great successes and achieved top scores across different benchmarks. However, like most data-driven methods, existing deep stereo matching networks suffer from some well-known drawbacks such as requiring a large amount of labeled training data, and their performance is fundamentally limited by their generalization ability. In this paper, we propose a novel Recurrent Neural Network (RNN) that takes a continuous (possibly previously unseen) stereo video as input, and directly predicts a depth map at each frame without a pre-training process, and without the need of ground-truth depth maps as supervision. Thanks to the recurrent nature (provided by two convolutional LSTM blocks), our network is able to memorize and learn from its past experiences, and modify its inner parameters (network weights) to adapt to previously unseen or unfamiliar environments. This suggests a remarkable generalization ability of the net, making it applicable in an open-world setting. Our method works robustly with changes in scene content, image statistics, and lighting and season conditions, etc. By extensive experiments, we demonstrate that the proposed method seamlessly adapts between different scenarios. Equally important, in terms of stereo matching accuracy, it outperforms state-of-the-art deep stereo approaches on standard benchmark datasets such as KITTI and Middlebury stereo.",
"We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the whole scene, and enforce consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on the photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low-quality uncalibrated video dataset and evaluating on KITTI, ranking among top-performing prior methods which are trained on KITTI itself.",
"Existing deep-learning-based dense stereo matching methods often rely on ground-truth disparity maps as the training signals, which are however not always available in many situations. In this paper, we design a simple convolutional neural network architecture that is able to learn to compute dense disparity maps directly from the stereo inputs. Training is performed in an end-to-end fashion without the need of ground-truth disparity maps. The idea is to use the image warping error (instead of disparity-map residuals) as the loss function to drive the learning process, aiming to find a depth map that minimizes the warping error. While this is a simple concept well-known in stereo matching, to make it work in a deep-learning framework, many non-trivial challenges must be overcome, and in this work we provide effective solutions. Our network is self-adaptive to different unseen imageries as well as to different camera settings. Experiments on the KITTI and Middlebury stereo benchmark datasets show that our method outperforms many state-of-the-art stereo matching methods by a margin, and is at the same time significantly faster.",
"As 3D movie viewing becomes mainstream and the Virtual Reality (VR) market emerges, the demand for 3D content is growing rapidly. Producing 3D videos, however, remains challenging. In this paper we propose to use deep neural networks to automatically convert 2D videos and images to a stereoscopic 3D format. In contrast to previous automatic 2D-to-3D conversion algorithms, which have separate stages and need a ground-truth depth map as supervision, our approach is trained end-to-end directly on stereo pairs extracted from existing 3D movies. This novel training scheme makes it possible to exploit orders of magnitude more data and significantly increases performance. Indeed, Deep3D outperforms baselines in both quantitative and human subject evaluations."
]
}
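The image-warping (photometric reconstruction) error these unsupervised methods minimize can be illustrated with a deliberately simplified 1-D, integer-disparity version; real systems use differentiable bilinear sampling and add smoothness or SSIM terms, and all names here are illustrative:

```python
import numpy as np

def warp_by_disparity(src, disparity):
    """Horizontally warp a source image into the reference view using
    per-pixel integer disparities (a simplified stand-in for the
    bilinear warping used in practice). Columns are clipped at the
    image border instead of being masked out."""
    h, w = src.shape
    cols = np.clip(np.arange(w)[None, :] + disparity.astype(int), 0, w - 1)
    rows = np.arange(h)[:, None]
    return src[rows, cols]

def photometric_loss(ref, src, disparity):
    """Mean absolute image-reconstruction (warping) error: the
    self-supervision signal used in place of ground-truth depth."""
    return float(np.abs(ref - warp_by_disparity(src, disparity)).mean())

# Toy usage: a horizontal gradient image shifted by 2 pixels.
ref = np.tile(np.arange(8.0), (4, 1))
src = np.roll(ref, 2, axis=1)
good = photometric_loss(ref, src, np.full((4, 8), 2))
bad = photometric_loss(ref, src, np.zeros((4, 8), dtype=int))
print(good < bad)  # True: the correct disparity gives a lower error
```

Minimizing this error over a network's predicted disparities is what lets training proceed without ground-truth depth, at the cost of ambiguities in textureless and occluded regions (which is why occlusion masks and regularizers appear in the cited works).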

1908.11399
 2971293878
 Understanding the morphological changes of primary neuronal cells induced by chemical compounds is essential for drug discovery. Using the data from a single high-throughput imaging assay, a classification model for predicting the biological activity of candidate compounds was introduced. The image recognition model, which is based on a deep convolutional neural network (CNN) architecture with residual connections, achieved an accuracy of 99.6 @math on a binary classification task of distinguishing untreated rodent primary neuronal cells from those treated with Amyloid @math .
 Recent years have seen an explosion of applications of deep learning methods to medical imaging, including computer-aided diagnosis (CAD) in radiology and medical image analysis @cite_37 @cite_23 @cite_27 . The efficiency of deep learning models for cytometry @cite_2 has been widely recognised, and such models have been applied to cell imaging @cite_30 , virtual staining with generative adversarial networks (GANs) @cite_19 @cite_26 , fluorescence microscopy @cite_9 and reconstructing cell cycles and disease progression @cite_42 . However, despite the wide popularity and maturity of the deep learning approach, very little has been done to estimate the biological activity induced in neuronal cells by compounds and to search for drugs that may protect against neurodegeneration and Alzheimer's disease. Simm et al. @cite_22 suggested repurposing high-throughput imaging assays to predict biological activity in drug discovery; however, this approach depends on the features extracted by CellProfiler @cite_4 and lacks the flexibility of CNN models @cite_7 , which learn features directly from the raw pixels of images.
 {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_42",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_2"
],
"mid": [
"2751723768",
"2731899572",
"",
"2107554012",
"2794301983",
"",
"2904591139",
"2750796620",
"2794744817",
"",
"",
"2905502540"
],
"abstract": [
"Abundant accumulation of digital histopathological images has led to the increased demand for their analysis, such as computer-aided diagnosis using machine learning techniques. However, digital pathological images and related tasks have some issues to be considered. In this mini-review, we introduce the application of digital pathological image analysis using machine learning algorithms, address some problems specific to such analysis, and propose possible solutions.",
"The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. “Deep learning”, or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.",
"",
"Biologists can now prepare and image thousands of samples per day using automation, enabling chemical screens and functional genomics (for example, using RNA interference). Here we describe the first free, open-source system designed for flexible, high-throughput cell image analysis, CellProfiler. CellProfiler can address a variety of biological questions quantitatively, including standard assays (for example, cell count, size, per-cell protein levels) and complex morphological assays (for example, cell organelle shape or subcellular patterns of DNA or protein staining).",
"In both academia and the pharmaceutical industry, large-scale assays for drug discovery are expensive and often impractical, particularly for the increasingly important physiologically relevant model systems that require primary cells, organoids, whole organisms, or expensive or rare reagents. We hypothesized that data from a single high-throughput imaging assay can be repurposed to predict the biological activity of compounds in other assays, even those targeting alternate pathways or biological processes. Indeed, quantitative information extracted from a three-channel microscopy-based screen for glucocorticoid receptor translocation was able to predict assay-specific biological activity in two ongoing drug discovery projects. In these projects, repurposing increased hit rates by 50- to 250-fold over that of the initial project assays while increasing the chemical structure diversity of the hits. Our results suggest that data from high-content screens are a rich source of information that can be used to predict and replace customized biological assays.",
"",
"We present deep-learning-enabled super-resolution across different fluorescence microscopy modalities. This data-driven approach does not require numerical modeling of the imaging process or the estimation of a point-spread function, and is based on training a generative adversarial network (GAN) to transform diffraction-limited input images into super-resolved ones. Using this framework, we improve the resolution of wide-field images acquired with low-numerical-aperture objectives, matching the resolution that is acquired using high-numerical-aperture objectives. We also demonstrate cross-modality super-resolution, transforming confocal microscopy images to match the resolution acquired with a stimulated emission depletion (STED) microscope. We further demonstrate that total internal reflection fluorescence (TIRF) microscopy images of subcellular structures within cells and tissues can be transformed to match the results obtained with a TIRF-based structured illumination microscope. The deep network rapidly outputs these super-resolved images, without any iterations or parameter search, and could serve to democratize super-resolution imaging.",
"We show that deep convolutional neural networks combined with nonlinear dimension reduction enable reconstructing biological processes based on raw image data. We demonstrate this by reconstructing the cell cycle of Jurkat cells and disease progression in diabetic retinopathy. In further analysis of Jurkat cells, we detect and separate a subpopulation of dead cells in an unsupervised manner and, in classifying discrete cell cycle stages, we reach a sixfold reduction in error rate compared to a recent approach based on boosting on image features. In contrast to previous methods, deep-learning-based predictions are fast enough for on-the-fly analysis in an imaging flow cytometer.",
"The histological analysis of tissue samples, widely used for disease diagnosis, involves lengthy and laborious tissue preparation. Here, we show that a convolutional neural network trained using a generative-adversarial-network model can transform wide-field autofluorescence images of unlabelled tissue sections into images that are equivalent to the bright-field images of histologically stained versions of the same samples. A blind comparison, by board-certified pathologists, of this virtual staining method and standard histological staining, using microscopic images of human tissue sections of the salivary gland, thyroid, kidney, liver and lung, and involving different types of stain, showed no major discordances. The virtual-staining method bypasses the typically labour-intensive and costly histological staining procedures, and could be used as a blueprint for the virtual staining of tissue images acquired with other label-free imaging modalities. Deep learning can be used to virtually stain autofluorescence images of unlabelled tissue sections, generating images that are equivalent to the histologically stained versions.",
"",
"",
""
]
}
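The residual connections mentioned for the CNN classifier add an identity shortcut around each block, which is what keeps very deep stacks trainable. A toy dense-layer version of one block (the actual model uses convolutional blocks on images; this sketch only shows the shortcut idea, and all names are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, weight):
    """One simplified residual block: a linear transform plus the
    identity shortcut, followed by a ReLU. Real image models replace
    the dense layer with convolutions and batch normalization."""
    return relu(x @ weight + x)

# With near-zero weights the block passes its input through almost
# unchanged, so gradients can flow through the identity path even
# before the transform has learned anything useful.
x = np.random.rand(2, 16)     # a toy batch of 2 feature vectors
w = np.zeros((16, 16))        # untrained (zero) weights
out = residual_block(x, w)
print(np.allclose(out, relu(x)))  # True: zero weights reduce to identity
```

This pass-through behaviour at initialization is the standard intuition for why residual networks of the kind cited can be trained to large depths.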

1908.11518
 2971189344
 Nonconvex optimization problems arise from various areas in science and engineering. Although many numerical methods and theories have been developed for unconstrained nonconvex problems, the parallel development for constrained nonconvex problems remains limited. That restricts the practice of mathematical modeling and quantitative decision making in many disciplines. In this paper, an inexact proximal-point penalty method is proposed for constrained optimization problems where both the objective function and the constraints can be nonconvex. The proposed method approximately solves a sequence of subproblems, each of which is formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved effectively to a required accuracy by an optimal gradient-type method. The theoretical properties of the proposed method are analyzed for two different cases. In the first case, the objective function is nonconvex but the constraint functions are assumed to be convex, while in the second case, both the objective function and the constraints are nonconvex. For both cases, we give the complexity results in terms of the number of function value and gradient evaluations needed to produce near-stationary points. Due to the different structures, different definitions of near-stationary points are given for the two cases. The complexity for producing a nearly @math stationary point is @math for the first case, while it becomes @math for the second case.
 There has been growing interest in first-order algorithms for nonconvex minimization problems with no constraints or simple constraints, in both stochastic and deterministic settings. Initially, the research in this direction mainly focused on problems with smooth objective functions @cite_31 @cite_55 @cite_0 @cite_8 @cite_52 @cite_57 @cite_9 @cite_66 @cite_42 @cite_67 . Recently, algorithms and theories have been developed for nonconvex problems with nonsmooth (but weakly convex) objective functions @cite_89 @cite_88 @cite_102 @cite_50 @cite_26 @cite_79 . These works tackle the nonsmoothness by introducing the Moreau envelope of the objective function. However, for problems with sophisticated functional constraints, these methods are not applicable.
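As a reminder of the shared tool (standard definitions, not specific to any single cited paper): for a $\rho$-weakly convex function $f$ and a parameter $0 < \lambda < \rho^{-1}$, the Moreau envelope is

$$ M_{\lambda f}(x) \;=\; \min_{y}\Big\{\, f(y) + \tfrac{1}{2\lambda}\,\|y - x\|^2 \,\Big\}, $$

which is smooth even when $f$ is not. A point $x$ with small $\|\nabla M_{\lambda f}(x)\|$ lies close to the proximal point $\hat{x} = \mathrm{prox}_{\lambda f}(x)$, at which $f$ admits a small subgradient, so near-stationarity for these nonsmooth problems is measured through the envelope's gradient rather than a subgradient of $f$ itself.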
 {
"cite_N": [
"@cite_67",
"@cite_26",
"@cite_8",
"@cite_55",
"@cite_9",
"@cite_42",
"@cite_52",
"@cite_102",
"@cite_89",
"@cite_0",
"@cite_57",
"@cite_79",
"@cite_50",
"@cite_88",
"@cite_31",
"@cite_66"
],
"mid": [
"",
"2963625269",
"2570983198",
"1987083649",
"2963763253",
"2460087882",
"2963965485",
"2963534244",
"2786313301",
"2337540838",
"2803240098",
"2807821938",
"2735159666",
"2790417304",
"2963470657",
"2962851402"
],
"abstract": [
"",
"",
"We analyze a fast incremental aggregated gradient method for optimizing nonconvex problems of the form $\min_x \sum_i f_i(x)$. Specifically, we analyze the SAGA algorithm within an Incremental First-order Oracle framework, and show that it converges to a stationary point provably faster than both gradient descent and stochastic gradient descent. We also discuss Polyak's special class of nonconvex problems for which SAGA converges at a linear rate to the global optimum. Finally, we analyze the practically valuable regularized and mini-batch variants of SAGA. To our knowledge, this paper presents the first analysis of fast convergence for an incremental aggregated gradient method for nonconvex problems.",
"In this paper, we generalize the well-known Nesterov's accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using first-order information, similarly to the gradient descent method. We then consider an important class of composite optimization problems and show that the AG method can solve them uniformly, i.e., by using the same aggressive stepsize policy as in the convex case, even if the problem turns out to be nonconvex. We demonstrate that the AG method exhibits an optimal rate of convergence if the composite problem is convex, and improves the best known rate of convergence if the problem is nonconvex. Based on the AG method, we also present new nonconvex stochastic approximation methods and show that they can improve a few existing rates of convergence for nonconvex stochastic optimization. To the best of our knowledge, this is the first time that the convergence of the AG method has been established for solving nonconvex nonlinear programming in the literature.",
"",
"We give a simple proof that the FrankWolfe algorithm obtains a stationary point at a rate of @math on nonconvex objectives with a Lipschitz continuous gradient. Our analysis is affine invariant and is the first, to the best of our knowledge, giving a similar rate to what was already proven for projected gradient methods (though on slightly different measures of stationarity).",
"We study nonconvex finitesum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we obtain nonasymptotic rates of convergence of SVRG for nonconvex optimization, showing that it is provably faster than SGD and gradient descent. We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to minibatch variants, showing (theoretical) linear speedup due to minibatching in parallel settings.",
"We consider global efficiency of algorithms for minimizing a sum of a convex function and a composition of a Lipschitz convex function with a smooth map. The basic algorithm we rely on is the prox-linear method, which in each iteration solves a regularized subproblem formed by linearizing the smooth map. When the subproblems are solved exactly, the method has efficiency O(ε^{-2}), akin to gradient descent for smooth minimization. We show that when the subproblems can only be solved by first-order methods, a simple combination of smoothing, the prox-linear method, and a fast-gradient scheme yields an algorithm with complexity O(ε^{-3}). We round off the paper with an inertial prox-linear method that automatically accelerates in the presence of convexity.",
"We prove that the projected stochastic subgradient method, applied to a weakly convex problem, drives the gradient of the Moreau envelope to zero at the rate @math .",
"Recently, stochastic momentum methods have been widely adopted in training deep neural networks. However, their convergence analysis is still underexplored at the moment, in particular for nonconvex optimization. This paper fills the gap between practice and theory by developing a basic convergence analysis of two stochastic momentum methods, namely stochastic heavyball method and the stochastic variant of Nesterov's accelerated gradient method. We hope that the basic convergence results developed in this paper can serve the reference to the convergence of stochastic momentum methods and also serve the baselines for comparison in future development of stochastic momentum methods. The novelty of convergence analysis presented in this paper is a unified framework, revealing more insights about the similarities and differences between different stochastic momentum methods and stochastic gradient method. The unified framework exhibits a continuous change from the gradient method to Nesterov's accelerated gradient method and finally the heavyball method incurred by a free parameter, which can help explain a similar change observed in the testing error convergence behavior for deep learning. Furthermore, our empirical results for optimizing deep neural networks demonstrate that the stochastic variant of Nesterov's accelerated gradient method achieves a good tradeoff (between speed of convergence in training error and robustness of convergence in testing error) among the three stochastic methods.",
"In this paper, we present new stochastic methods for solving two important classes of nonconvex optimization problems. We first introduce a randomized accelerated proximal gradient (RapGrad) method for solving a class of nonconvex optimization problems consisting of the sum of @math component functions, and show that it can significantly reduce the number of gradient computations especially when the condition number @math (i.e., the ratio between the Lipschitz constant and negative curvature) is large. More specifically, RapGrad can save up to @math gradient computations than existing deterministic nonconvex accelerated gradient methods. Moreover, the number of gradient computations required by RapGrad can be @math (at least @math ) times smaller than the bestknown randomized nonconvex gradient methods when @math . Inspired by RapGrad, we also develop a new randomized accelerated proximal dual (RapDual) method for solving a class of multiblock nonconvex optimization problems coupled with linear constraints. We demonstrate that RapDual can also save up to a factor of @math projection subproblems than its deterministic counterpart, where @math denotes the number of blocks. To the best of our knowledge, all these complexity results associated with RapGrad and RapDual seem to be new in the literature. We also illustrate potential advantages of these algorithms through our preliminary numerical experiments.",
"In this paper, we investigate the nonasymptotic stationary convergence behavior of Stochastic Mirror Descent (SMD) for nonconvex optimization. We focus on a general class of nonconvex nonsmooth stochastic optimization problems, in which the objective can be decomposed into a relatively weakly convex function (possibly nonLipschitz) and a simple nonsmooth convex regularizer. We prove that SMD, without the use of minibatch, is guaranteed to converge to a stationary point in a convergence rate of @math . The efficiency estimate matches with existing results for stochastic subgradient method, but is evaluated under a stronger stationarity measure. Our convergence analysis applies to both the original SMD and its proximal version, as well as the deterministic variants, for solving relatively weakly convex problems.",
"In this paper, we introduce a stochastic projected subgradient method for weakly convex (i.e., uniformly proxregular) nonsmooth, nonconvex functionsa wide class of functions which includes the additive and convex composite classes. At a highlevel, the method is an inexact proximal point iteration in which the strongly convex proximal subproblems are quickly solved with a specialized stochastic projected subgradient method. The primary contribution of this paper is a simple proof that the proposed algorithm converges at the same rate as the stochastic gradient method for smooth nonconvex problems. This result appears to be the first convergence rate analysis of a stochastic (or even deterministic) subgradient method for the class of weakly convex functions.",
"We consider an algorithm that successively samples and minimizes stochastic models of the objective function. We show that under weakconvexity and Lipschitz conditions, the algorithm drives the expected norm of the gradient of the Moreau envelope to zero at the rate @math . Our result yields new complexity guarantees for the stochastic proximal point algorithm on weakly convex problems and for the stochastic proxlinear algorithm for minimizing compositions of convex functions with smooth maps. Moreover, our result also recovers the recently obtained complexity estimate for the stochastic proximal subgradient method on weakly convex problems.",
"In this paper, we introduce a new stochastic approximation type algorithm, namely, the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method possesses a nearly optimal rate of convergence if the problem is convex. We discuss a variant of the algorithm which consists of applying a postoptimization phase to evaluate a short list of solutions generated by several independent runs of the RSG method, and we show that such modification allows us to improve significantly the largedeviation properties of the algorithm. These methods are then specialized for solving a class of simulationbased optimization problems in which only stochastic zerothorder information is available.",
"We consider the fundamental problem in nonconvex optimization of efficiently reaching a stationary point. In contrast to the convex case, in the long history of this basic problem, the only known theoretical results on first-order nonconvex optimization remain to be full gradient descent that converges in O(1/ε) iterations for smooth objectives, and stochastic gradient descent that converges in O(1/ε^2) iterations for objectives that are sums of smooth functions. We provide the first improvement in this line of research. Our result is based on the variance reduction trick recently introduced to convex optimization, as well as a brand new analysis of variance reduction that is suitable for nonconvex optimization. For objectives that are sums of smooth functions, our first-order minibatch stochastic method converges with an O(1/ε) rate, and is faster than full gradient descent by Ω(n^{1/3}). We demonstrate the effectiveness of our methods on empirical risk minimization with nonconvex loss functions and training neural nets."
]
}
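Several abstracts in the record above (SAGA, SVRG, and the variance-reduction result) center on the same gradient estimator for nonconvex finite sums. A minimal sketch of that idea, with an illustrative toy objective; the function names, constants, and the toy problem are assumptions, not taken from any of the cited papers:

```python
import numpy as np

def svrg(grad_i, x0, n, step=0.05, epochs=20, inner=None, seed=0):
    """Minimal SVRG sketch for min_x (1/n) * sum_i f_i(x).

    Each epoch recomputes the full gradient at a snapshot point, then runs
    inner steps with the variance-reduced estimator
        g = grad_i(i, x) - grad_i(i, snap) + full_grad(snap),
    which stays unbiased while its variance shrinks near a solution.
    """
    rng = np.random.default_rng(seed)
    inner = inner or n
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        snap = x.copy()
        mu = sum(grad_i(i, snap) for i in range(n)) / n  # full gradient at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            g = grad_i(i, x) - grad_i(i, snap) + mu  # variance-reduced estimate
            x = x - step * g
    return x

# toy nonconvex finite sum (illustrative): f_i(x) = (x - a_i)^2 + 0.1*sin(3x)
a = np.linspace(-1.0, 1.0, 10)
grad_i = lambda i, x: 2.0 * (x - a[i]) + 0.3 * np.cos(3.0 * x)
x_hat = svrg(grad_i, x0=np.array([2.0]), n=10)
```

In this particular toy the component gradients differ only by a linear term in a_i, so the correction cancels the noise exactly; in general the estimator is merely unbiased with variance that vanishes as x and the snapshot approach a stationary point.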

1908.11518
 2971189344
 Nonconvex optimization problems arise in various areas of science and engineering. Although many numerical methods and theories have been developed for unconstrained nonconvex problems, the parallel development for constrained nonconvex problems remains limited, which restricts the practice of mathematical modeling and quantitative decision making in many disciplines. In this paper, an inexact proximal-point penalty method is proposed for constrained optimization problems in which both the objective function and the constraints can be nonconvex. The proposed method approximately solves a sequence of subproblems, each formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved effectively to a required accuracy by an optimal gradient-type method. The theoretical properties of the proposed method are analyzed in two cases. In the first case, the objective function is nonconvex but the constraint functions are convex, while in the second case, both the objective function and the constraints are nonconvex. For both cases, we give complexity results in terms of the number of function-value and gradient evaluations needed to produce near-stationary points. Due to the different problem structures, different definitions of near-stationary points are given for the two cases. The complexity for producing a nearly @math stationary point is @math for the first case, while it becomes @math for the second case.
 When all constraint functions are affine, a primal-dual Frank-Wolfe method is proposed in @cite_72 , and it finds an @math stationary point with a complexity of @math in general and @math when there exists a strictly feasible solution. Compared to @cite_72 , this paper uses a different notion of @math stationary point, and our constraint functions can be nonlinear and nonconvex.
 {
"cite_N": [
"@cite_72"
],
"mid": [
"2805494515"
],
"abstract": [
"We study constrained stochastic programs where the decision vector at each time slot cannot be chosen freely but is tied to the realization of an underlying random state vector. The goal is to minimize a general objective function subject to linear constraints. A typical scenario where such programs appear is opportunistic scheduling over a network of timevarying channels, where the random state vector is the channel state observed, and the control vector is the transmission decision which depends on the current channel state. We consider a primaldual type FrankWolfe algorithm that has a low complexity update during each slot and that learns to make efficient decisions without prior knowledge of the probability distribution of the random state vector. We establish convergence time guarantees for the case of both convex and nonconvex objective functions. We also emphasize application of the algorithm to nonconvex opportunistic scheduling and distributed nonconvex stochastic optimization over a connected graph."
]
}
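The primal-dual method of @cite_72 builds on the classical Frank-Wolfe step, whose distinguishing feature is a linear minimization oracle in place of a projection. A minimal sketch of that classical step (not the primal-dual algorithm of @cite_72; the simplex example and all names are illustrative assumptions):

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=200):
    """Classical Frank-Wolfe: x_{k+1} = x_k + gamma_k * (s_k - x_k),
    where s_k = lmo(grad(x_k)) minimizes a linear function over the
    feasible set. Uses the standard gamma_k = 2/(k+2) step size."""
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        s = lmo(grad(x))           # linear minimization oracle
        gamma = 2.0 / (k + 2.0)
        x = x + gamma * (s - x)    # convex combination: stays feasible
    return x

# example (illustrative): minimize ||x - b||^2 over the probability simplex
b = np.array([0.1, 0.7, 0.2])
grad = lambda x: 2.0 * (x - b)
def lmo_simplex(g):
    # argmin_{s in simplex} <g, s> is attained at a vertex
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x = frank_wolfe(grad, lmo_simplex, x0=np.ones(3) / 3.0)
```

Because the oracle returns a vertex of the feasible set, each iteration is projection-free, which is what makes Frank-Wolfe-type methods attractive for the structured constraint sets discussed in @cite_72 .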

1908.11518
 2971189344
 Nonconvex optimization problems arise in various areas of science and engineering. Although many numerical methods and theories have been developed for unconstrained nonconvex problems, the parallel development for constrained nonconvex problems remains limited, which restricts the practice of mathematical modeling and quantitative decision making in many disciplines. In this paper, an inexact proximal-point penalty method is proposed for constrained optimization problems in which both the objective function and the constraints can be nonconvex. The proposed method approximately solves a sequence of subproblems, each formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved effectively to a required accuracy by an optimal gradient-type method. The theoretical properties of the proposed method are analyzed in two cases. In the first case, the objective function is nonconvex but the constraint functions are convex, while in the second case, both the objective function and the constraints are nonconvex. For both cases, we give complexity results in terms of the number of function-value and gradient evaluations needed to produce near-stationary points. Due to the different problem structures, different definitions of near-stationary points are given for the two cases. The complexity for producing a nearly @math stationary point is @math for the first case, while it becomes @math for the second case.
 As a classical approach to constrained optimization, a penalty method finds an approximate solution by solving a sequence of unconstrained subproblems, in which constraint violations are penalized through positively weighted penalty terms added to the objective function. Unconstrained optimization techniques are then applied to these subproblems, along with an updating scheme for the weighting parameters. The computational complexity of penalty methods for convex problems has been well established @cite_17 @cite_53 @cite_7 . For nonconvex problems, most existing studies of penalty methods focus on the asymptotic convergence to a stationary point @cite_70 @cite_97 @cite_27 @cite_99 @cite_100 @cite_90 @cite_10 @cite_91 @cite_69 @cite_33 @cite_34 @cite_78 @cite_23 @cite_60 . In contrast, we analyze the finite complexity of penalty methods for finding a near-stationary point.
 {
"cite_N": [
"@cite_99",
"@cite_69",
"@cite_91",
"@cite_7",
"@cite_33",
"@cite_70",
"@cite_97",
"@cite_53",
"@cite_90",
"@cite_78",
"@cite_60",
"@cite_27",
"@cite_23",
"@cite_100",
"@cite_34",
"@cite_10",
"@cite_17"
],
"mid": [
"1994126718",
"",
"2018296860",
"2893640118",
"",
"1976086748",
"2081956522",
"2605995072",
"1985514811",
"2090523290",
"2122587727",
"2053661713",
"2008832947",
"2032024979",
"1994707765",
"2174004662",
"2144603975"
],
"abstract": [
"Consider the problem of finding the local constrained minimum @math of the function f on the set [ F = x R^n  _i (x) 0, _ j + k (x) = 0; i = 1,2, ,k; j = 1,2, ,l . ]One method of solution is to minimize the associated penalty function [ p_0 (x) = f(x)  _ i = 1 ^k ( 0, _i (x) ) + _ j = 1 ^l  _ j + k (x)  for x R^n , 0. ]Let @math be the minimum of this penalty function. It is known that, provided @math is sufficiently small, @math .However, until recently, a serious drawback to this particular penalty function was that its first order derivatives are not everywhere defined. Thus, wellknown gradient type methods usually applied to unconstrained optimization problems were necessarily excluded.This paper presents a method that enables a modified form of the gradient type approaches to be applied to a perturbation of the penalt...",
"",
"In this paper a new continuously differentiable exact penalty function is introduced for the solution of nonlinear programming problems with compact feasible set. A distinguishing feature of the penalty function is that it is defined on a suitable bounded open set containing the feasible region and that it goes to infinity on the boundary of this set. This allows the construction of an implementable unconstrained minimization algorithm, whose global convergence towards KuhnTucker points of the constrained problem can be established.",
"We develop two new proximal alternating penalty algorithms to solve a wide class of constrained convex optimization problems. Our approach mainly relies on a novel combination of the classical quadratic penalty, alternating minimization, Nesterov’s acceleration, and an adaptive strategy for parameters. The first algorithm is designed to solve generic and possibly nonsmooth constrained convex problems without requiring any Lipschitz gradient continuity or strong convexity, while achieving the best-known O(1/k) convergence rate in a non-ergodic sense, where k is the iteration counter. The second algorithm is designed to solve non-strongly convex, but semi-strongly convex problems. This algorithm can achieve the best-known O(1/k^2) convergence rate on the primal constrained problem. Such a rate is obtained in two cases: (1) averaging only on the iterate sequence of the strongly convex term, or (2) using two proximal operators of this term without averaging. In both algorithms, we allow one to linearize the second subproblem to use the proximal operator of the corresponding objective term. Then, we customize our methods to solve different convex problems, which leads to new variants. As a byproduct, these algorithms preserve the same convergence guarantees as our main algorithms. We verify our theoretical development via different numerical examples and compare our methods with some existing state-of-the-art algorithms.",
"",
"The nonlinear programming problem seeks to maximize a function f(x) where the n component vector x must satisfy certain constraints gi(x) = 0, i = 1, …, m1 and gi(x) ≧ 0, i = m1 + 1, …, m. The algorithm presented in this paper solves the nonlinear programming problem by transforming it into a sequence of unconstrained maximization problems. Essentially, a penalty is imposed whenever x does not satisfy the constraints. Although the algorithm appears most useful in the concave case, the convergence proof holds for nonconcave functions as well. The algorithm is especially interesting in the concave case because the programming problem reduces to a single unconstrained maximization problem or, at most, to a finite sequence of unconstrained maximization problems. In addition, the paper presents a new class of dual problems, and the algorithm is shown to be a dual feasible method. Another property of the algorithm is that it appears particularly well suited for largescale problems with a sizable number of c...",
"An algorithm for solving the problem: minimize @math (a convex function) subject to @math , @math , each @math a concave function, is presented. Specifically, the function [ P [ x,t,r_k ] f( x ) + r_k^  1 [ g_i ( x )  t_i ] ^2 ] is minimized over all x, nonnegative t, for a strictly decreasing null sequence @math . This extends the work of T. Pietrzykowski [5]. It is proved that for every @math , there exists a finite point @math which minimizes P, and which solves the convex programming problem as @math . This algorithm is similar to the Sequential Unconstrained Minimization Technique (SUMT) [1] in that it solves the (Wolfe) dual programming problem [6]. It differs from SUMT in that (1) it approaches the optimum from the region of infeasibility (i.e., it is a relaxation technique), (2) it does not require a nonempty interior to the nonlinearly constrained region, (3) no separate feasibilit...",
"In this paper we present a complete iteration complexity analysis of inexact first-order Lagrangian and penalty methods for solving cone-constrained convex problems that have or may not have optimal Lagrange multipliers that close the duality gap. We first assume the existence of optimal Lagrange multipliers and study primal–dual first-order methods based on inexact information and augmented Lagrangian smoothing or Nesterov-type smoothing. For inexact (fast) gradient augmented Lagrangian methods, we derive an overall computational complexity of O(1/ϵ) projections onto a simple primal set in order to attain an ϵ-optimal solution of the conic convex problem. For the inexact fast gradient method combined with Nesterov-type smoothing, we derive computational complexity O(1/ϵ^{3/2}) projections onto the same set. Then, we assume that optimal Lagrange multipliers might not exist for the cone-constrained convex problem, and analyse the fast gradient method for solving penalty reformulations of the problem. For the ...",
"It is shown that the existence of a strict local minimum satisfying the constraint qualification of [16] or McCormick's [12] second order sufficient optimality condition implies the existence of a class of exact local penalty functions (that is ones with a finite value of the penalty parameter) for a nonlinear programming problem. A lower bound to the penalty parameter is given by a norm of the optimal Lagrange multipliers which is dual to the norm used in the penalty function.",
"The convergence behaviour of a class of iterative methods for solving the constrained minimization problem is analysed. The methods are based on the sequential minimization of a simple differentiable penalty function. They are sufficiently general to ensure global convergence of the iterates to the solution of the problem at an asymptotic (twostep Q) superlinear rate.",
"The global convergence properties of a class of penalty methods for nonlinear programming are analyzed. These methods include successive linear programming approaches and, more specifically, the successive linearquadratic programming approach presented by [Math. Program., 100 (2004), pp. 2748]. Every iteration requires the solution of two trustregion subproblems involving piecewise linear and quadratic models, respectively. It is shown that, for a fixed penalty parameter, the sequence of iterates approaches stationarity of the penalty function. A procedure for dynamically adjusting the penalty parameter is described, and global convergence results for it are established.",
"The main result of the paper consists of the theorem that under certain, natural assumptions the local conditional maximum @math of the function f on the set [ A = x R^n  _i (x) 0, _j (x) = 0,i = 1, ,k,j = 1, ,l ] is identical with the unconditional maximum of the potential function [ p(x, ) = f(x) + _ i = 1 ^k neg ( _i (x))  _ j = 1 ^l  (x)  , x R^n , 0, ] for @math sufficiently small. There is also provided a draft of a modified gradient procedure for maximizing the potential @math since it is generally nonsmooth even for differentiable f, @math and @math .",
"In their seminal papers Eremin [Soviet Mathematics Doklady, 8 (1966), pp. 459–462] and Zangwill [Management Science, 13 (1967), pp. 344–358] introduce a notion of exact penalization for use in the development of algorithms for constrained optimization. Since that time, exact penalty functions have continued to play a key role in the theory of mathematical programming. In the present paper, this theory is unified by showing how the Eremin–Zangwill exact penalty functions can be used to develop the foundations of the theory of constrained optimization for finite dimensions in an elementary and straightforward way. Regularity conditions, multiplier rules, secondorder optimality conditions, and convex programming are all given interpretations relative to the Eremin–Zangwill exact penalty functions. In conclusion, a historical review of those results associated with the existence of an exact penalty parameter is provided.",
"This paper presents a multiplier method for solving optimization problems with equality and inequality constraints. The method realizes all the good features that were foreseen by R. Fletcher for this type of algorithm in the past, but which suffers from none of the drawbacks of the earlier attempts.",
"In this paper, a recursive quadratic programming algorithm for solving equality constrained optimization problems is proposed and studied. The line search functions used are approximations to Fletcher's differentiable exact penalty function. Global convergence and local superlinear convergence results are proved, and some numerical results are given.",
"In this paper it is shown that, given a nonlinear programming problem with inequality constraints, it is possible to construct a continuously differentiable exact penalty function whose global or local unconstrained minimizers correspond to global or local solutions of the constrained problem.",
"This paper considers a special but broad class of convex programming problems whose feasible region is a simple compact convex set intersected with the inverse image of a closed convex cone under an affine transformation. It studies the computational complexity of quadratic penalty based methods for solving the above class of problems. An iteration of these methods, which is simply an iteration of Nesterov’s optimal method (or one of its variants) for approximately solving a smooth penalization subproblem, consists of one or two projections onto the simple convex set. Iterationcomplexity bounds expressed in terms of the latter type of iterations are derived for two quadratic penalty based variants, namely: one which applies the quadratic penalty method directly to the original problem and another one which applies the latter method to a perturbation of the original problem obtained by adding a small quadratic term to its objective function."
]
}
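The penalty scheme described in the related-work passage above (a sequence of unconstrained subproblems with growing penalty weights) can be sketched minimally as follows; the toy equality-constrained problem, the step-size rule, and all names are illustrative assumptions, and plain gradient descent stands in for the faster subproblem solvers the text discusses:

```python
import numpy as np

def quadratic_penalty(f_grad, cons, cons_jac, x0, step_rule,
                      beta0=1.0, rho=10.0, outer=6, inner=200):
    """Quadratic-penalty sketch for min f(x) s.t. c(x) = 0.

    Each outer iteration minimizes the unconstrained subproblem
        P_beta(x) = f(x) + (beta/2) * ||c(x)||^2
    by gradient descent, then increases the penalty weight beta.
    """
    x = np.asarray(x0, dtype=float)
    beta = beta0
    for _ in range(outer):
        step = step_rule(beta)  # must shrink as beta grows (stiff subproblem)
        for _ in range(inner):
            g = f_grad(x) + beta * cons_jac(x).T @ cons(x)  # grad of P_beta
            x = x - step * g
        beta *= rho
    return x

# toy problem (illustrative): min (x0-2)^2 + (x1-2)^2  s.t.  x0 + x1 = 1
f_grad = lambda x: 2.0 * (x - 2.0)
cons = lambda x: np.array([x[0] + x[1] - 1.0])
cons_jac = lambda x: np.array([[1.0, 1.0]])
# 1/L step for this toy subproblem: its Hessian is 2*I + beta*A^T A, so L = 2 + 2*beta
x_hat = quadratic_penalty(f_grad, cons, cons_jac, np.zeros(2),
                          step_rule=lambda beta: 1.0 / (2.0 + 2.0 * beta))
```

As beta grows geometrically, the subproblem minimizers approach the constrained solution (here (0.5, 0.5)); warm-starting each subproblem from the previous iterate, as above, is what keeps the scheme practical.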

1908.11518
 2971189344
 Nonconvex optimization problems arise in various areas of science and engineering. Although many numerical methods and theories have been developed for unconstrained nonconvex problems, the parallel development for constrained nonconvex problems remains limited, which restricts the practice of mathematical modeling and quantitative decision making in many disciplines. In this paper, an inexact proximal-point penalty method is proposed for constrained optimization problems in which both the objective function and the constraints can be nonconvex. The proposed method approximately solves a sequence of subproblems, each formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved effectively to a required accuracy by an optimal gradient-type method. The theoretical properties of the proposed method are analyzed in two cases. In the first case, the objective function is nonconvex but the constraint functions are convex, while in the second case, both the objective function and the constraints are nonconvex. For both cases, we give complexity results in terms of the number of function-value and gradient evaluations needed to produce near-stationary points. Due to the different problem structures, different definitions of near-stationary points are given for the two cases. The complexity for producing a nearly @math stationary point is @math for the first case, while it becomes @math for the second case.
 For a problem with a nonconvex objective and linear constraints, @cite_25 developed a quadratic-penalty accelerated inexact proximal point method, which generates an @math stationary point with a complexity of @math . Our method is similar to that of @cite_25 in that it combines techniques from both the proximal point method and the quadratic penalty method. Although we make a slightly stronger assumption than @cite_25 by requiring the boundedness of @math , our method and analysis apply to problems with nonconvex objectives and convex or nonconvex nonlinear constraint functions. When the constraints are convex (but possibly nonlinear), our method can find a nearly @math stationary point with a complexity of @math , which is nearly an @math improvement over the complexity in @cite_25 .
 {
"cite_N": [
"@cite_25"
],
"mid": [
"2787445655"
],
"abstract": [
"This paper analyzes the iterationcomplexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs. More specifically, the objective function is of the form @math where @math is a differentiable function whose gradient is Lipschitz continuous and @math is a closed convex function with bounded domain. The method, basically, consists of applying an accelerated inexact proximal point method for solving approximately a sequence of quadratic penalized subproblems associated to the linearly constrained problem. Each subproblem of the proximal point method is in turn approximately solved by an accelerated composite gradient method. It is shown that the proposed scheme generates a @math approximate stationary point in at most @math . Finally, numerical results showing the efficiency of the proposed method are also given."
]
}
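The proximal-point-plus-quadratic-penalty idea shared by this paper and @cite_25 can be sketched minimally; the assumed subproblem form, the toy problem, and all names are illustrative, and this is not either paper's exact algorithm:

```python
import numpy as np

def prox_point_penalty(f_grad, cons, cons_jac, x0, gamma=4.0, beta=100.0,
                       outer=60, inner=300, step=0.004):
    """Each outer step approximately minimizes the subproblem
        f(x) + (gamma/2)*||x - x_k||^2 + (beta/2)*||c(x)||^2
    by gradient descent; gamma is chosen above the weak-convexity modulus
    of f so that each subproblem is strongly convex."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        xk = x.copy()  # proximal center for this subproblem
        for _ in range(inner):
            g = f_grad(x) + gamma * (x - xk) + beta * cons_jac(x).T @ cons(x)
            x = x - step * g
    return x

# toy problem (illustrative): min x0^2 + x1^2 + 3*sin(x0)  s.t.  x0 + x1 = 0
# the objective is weakly convex: its Hessian is bounded below by -1 * I
f_grad = lambda x: np.array([2.0 * x[0] + 3.0 * np.cos(x[0]), 2.0 * x[1]])
cons = lambda x: np.array([x[0] + x[1]])
cons_jac = lambda x: np.array([[1.0, 1.0]])
x_hat = prox_point_penalty(f_grad, cons, cons_jac, np.zeros(2))
```

At a fixed point of the outer loop, x approximately satisfies the penalized stationarity condition ∇f(x) + β∇c(x)ᵀc(x) ≈ 0, i.e. a KKT condition with the multiplier estimate λ = βc(x), which is the spirit of the near-stationarity measures discussed in the text.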

1908.11518
 2971189344
 Nonconvex optimization problems arise in various areas of science and engineering. Although many numerical methods and theories have been developed for unconstrained nonconvex problems, the parallel development for constrained nonconvex problems remains limited, which restricts the practice of mathematical modeling and quantitative decision making in many disciplines. In this paper, an inexact proximal-point penalty method is proposed for constrained optimization problems in which both the objective function and the constraints can be nonconvex. The proposed method approximately solves a sequence of subproblems, each formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved effectively to a required accuracy by an optimal gradient-type method. The theoretical properties of the proposed method are analyzed in two cases. In the first case, the objective function is nonconvex but the constraint functions are convex, while in the second case, both the objective function and the constraints are nonconvex. For both cases, we give complexity results in terms of the number of function-value and gradient evaluations needed to produce near-stationary points. Due to the different problem structures, different definitions of near-stationary points are given for the two cases. The complexity for producing a nearly @math stationary point is @math for the first case, while it becomes @math for the second case.
 Barrier methods @cite_54 @cite_83 @cite_56 @cite_74 @cite_16 @cite_11 @cite_3 @cite_36 @cite_13 are another traditional class of algorithms for constrained optimization. Like penalty methods, they solve a sequence of unconstrained subproblems, with barrier functions added to the objective function. The barrier functions increase to infinity as the iterates approach the boundary of the feasible set, thus forcing the iterates to stay in its interior. However, convergence rates of barrier methods have been established only for convex problems @cite_3 @cite_36 @cite_13 @cite_5 ; for nonconvex problems, only asymptotic convergence analyses are available.
 {
"cite_N": [
"@cite_36",
"@cite_54",
"@cite_3",
"@cite_56",
"@cite_83",
"@cite_74",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"1987953347",
"2011430287",
"2157590940",
"2014746566",
"1964015691",
"2962970587",
"2069671092",
"2157959686",
"1566327188"
],
"abstract": [
"",
"Interest in linear programming has been intensified recently by Karmarkar's publication in 1984 of an algorithm that is claimed to be much faster than the simplex method for practical problems. We review classical barrier-function methods for nonlinear programming based on applying a logarithmic transformation to inequality constraints. For the special case of linear programming, the transformed problem can be solved by a \"projected Newton barrier\" method. This method is shown to be equivalent to Karmarkar's projective method for a particular choice of the barrier parameter. We then present details of a specific barrier algorithm and its practical implementation. Numerical results are given for several nontrivial test problems, and the implications for future developments in linear programming are discussed.",
"Many scientific and engineering applications feature nonsmooth convex minimization problems over convex sets. In this paper, we address an important instance of this broad class where we assume that the nonsmooth objective is equipped with a tractable proximity operator and that the convex constraint set affords a self-concordant barrier. We provide a new joint treatment of proximal and self-concordant barrier concepts and illustrate that such problems can be efficiently solved, without the need of lifting the problem dimensions, as in disciplined convex optimization approach. We propose an inexact path-following algorithmic framework and theoretically characterize the worst-case analytical complexity of this framework when the proximal subproblems are solved inexactly. To show the merits of our framework, we apply its instances to both synthetic and real-world applications, where it shows advantages over standard interior point methods. As a byproduct, we describe how our framework can obtain points on t...",
"Interior methods for optimization were widely used in the 1960s, primarily in the form of barrier methods. However, they were not seriously applied to linear programming because of the dominance of the simplex method. Barrier methods fell from favour during the 1970s for a variety of reasons, including their apparent inefficiency compared with the best available alternatives. In 1984, Karmarkar's announcement of a fast polynomial-time interior method for linear programming caused tremendous excitement in the field of optimization. A formal connection can be shown between his method and classical barrier methods, which have consequently undergone a renaissance in interest and popularity. Most papers published since 1984 have concentrated on issues of computational complexity in interior methods for linear programming. During the same period, implementations of interior methods have displayed great efficiency in solving many large linear programs of ever-increasing size. Interior methods have also been applied with notable success to nonlinear and combinatorial problems. This paper presents a self-contained survey of major themes in both classical material and recent developments related to the theory and practice of interior methods.",
"Abstract : This report gives the most comprehensive and detailed treatment to date of some of the most powerful mathematical programming techniques currently known: sequential unconstrained methods for constrained minimization problems in Euclidean n-space, giving many new results not published elsewhere. It provides a fresh presentation of nonlinear programming theory, a detailed review of other unconstrained methods, and a development of the latest algorithms for unconstrained minimization. (Author)",
"In the Newton log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton’s method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood, namely within O(μ^2) of the minimizer, where μ is the barrier parameter. By analyzing the structure of the barrier Hessian and gradient in terms of the subspace of active constraint gradients and the associated null space, we show that this neighborhood is in fact much larger, O(μ^σ) for any σ∈(1,2], thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton log-barrier algorithm is superlinear in the number of function derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the step length and convergence criteria for each Newton process.",
"We propose a new proximal, path-following framework for a class of (possibly nonsmooth) constrained convex problems. We consider settings where the nonsmooth part is endowed with a proximity operator, and the constraint set is equipped with a self-concordant barrier. Our main contribution is a new reparametrization of the optimality condition of the barrier problem, that allows us to process the objective function with its proximal operator within a new path-following scheme. In particular, our approach relies on the following two main ideas. First, we reparameterize the optimality condition as an auxiliary problem, such that a \"good\" initial point is available. Second, we combine the proximal operator of the objective and path-following ideas to design a single phase, proximal, path-following algorithm. Our method has several advantages. First, it allows handling nonsmooth objectives via proximal operators, this avoids lifting the problem dimension via slack variables and additional constraints. Second, it consists of only a single phase as compared to a two-phase algorithm in [43] In this work, we show how to overcome this difficulty in the proximal setting and prove that our scheme has the same O(√ν log(1/ε)) worst-case iteration-complexity with standard approaches [30, 33], but our method can handle nonsmooth objectives, where ν is the barrier parameter and ε is a desired accuracy. Finally, our framework allows errors in the calculation of proximal-Newton search directions, without sacrificing the worst-case iteration complexity. We demonstrate the merits of our algorithm via three numerical examples, where proximal operators play a key role to improve the performance over off-the-shelf interior-point solvers.",
"Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.",
"In this paper we develop a new affine-invariant primal–dual subgradient method for nonsmooth convex optimization problems. This scheme is based on a self-concordant barrier for the basic feasible set. It is suitable for finding approximate solutions with certain relative accuracy. We discuss some applications of this technique including fractional covering problem, maximal concurrent flow problem, semidefinite relaxations and nonlinear online optimization. For all these problems, the rate of convergence of our method does not depend on the problem’s data.",
"We propose an algorithmic framework for convex minimization problems of composite functions with two terms: a self-concordant part and a possibly nonsmooth regularization part. Our method is a new proximal Newton algorithm with local quadratic convergence rate. As a specific problem instance, we consider sparse precision matrix estimation problems in graph learning. Via a careful dual formulation and a novel analytic step-size selection, we instantiate an algorithm within our framework for graph learning that avoids Cholesky decompositions and matrix inversions, making it attractive for parallel and distributed implementations."
]
}
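The barrier scheme discussed in the related-work passage above (solve a sequence of unconstrained subproblems whose barrier term blows up at the feasible-set boundary, while the barrier parameter is driven to zero) can be sketched as follows. The toy problem, step rule, and parameter values are illustrative assumptions, not taken from any of the cited papers.

```python
def log_barrier_method(f_grad, g, g_grad, x, mu=1.0, shrink=0.1, tol=1e-8):
    """Sketch: minimize f(x) s.t. g(x) <= 0 by solving unconstrained
    barrier subproblems  min_x f(x) - mu*log(-g(x))  for decreasing mu."""
    while mu > tol:
        for _ in range(10000):
            # Gradient of the barrier subproblem (g(x) < 0 in the interior).
            grad = f_grad(x) - mu * g_grad(x) / g(x)
            if abs(grad) < 1e-6:
                break
            # Newton-like step using this toy problem's curvature bound:
            # f'' = 2 plus the barrier Hessian mu*g'^2/g^2.
            step = 1.0 / (2.0 + mu * g_grad(x) ** 2 / g(x) ** 2)
            while g(x - step * grad) >= 0:   # backtrack to stay interior
                step *= 0.5
            x -= step * grad
        mu *= shrink                         # tighten the barrier
    return x

# Toy instance (assumed for illustration): min (x-2)^2  s.t.  x <= 1.
f_grad = lambda x: 2.0 * (x - 2.0)
g = lambda x: x - 1.0                        # g(x) <= 0  <=>  x <= 1
g_grad = lambda x: 1.0

x_star = log_barrier_method(f_grad, g, g_grad, x=0.0)
print(x_star)  # close to the constrained minimizer x = 1
```

The inner loop must reject any step that leaves the interior, which is exactly the mechanism the passage describes: the barrier value, and hence the subproblem objective, is undefined outside the feasible set.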

1908.11518
 2971189344
 Nonconvex optimization problems arise in many areas of science and engineering. Although many numerical methods and theories have been developed for unconstrained nonconvex problems, the parallel development for constrained nonconvex problems remains limited. This limits the practice of mathematical modeling and quantitative decision making in many disciplines. In this paper, an inexact proximal-point penalty method is proposed for constrained optimization problems where both the objective function and the constraints can be nonconvex. The proposed method approximately solves a sequence of subproblems, each of which is formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved effectively to a required accuracy by an optimal gradient-type method. The theoretical properties of the proposed method are analyzed in two different cases. In the first case, the objective function is nonconvex but the constraint functions are assumed to be convex, while in the second case, both the objective function and the constraints are nonconvex. For both cases, we give complexity results in terms of the number of function value and gradient evaluations needed to produce near-stationary points. Due to the different structures, different definitions of near-stationary points are given for the two cases. The complexity for producing a nearly @math stationary point is @math for the first case, while it becomes @math for the second case.
 The augmented Lagrangian method (ALM) @cite_32 @cite_20 @cite_18 @cite_35 is another common choice for constrained problems. Unlike the exact or quadratic penalty methods, ALM estimates the primal solution together with the dual solution. At each iteration, it updates the primal variable by minimizing the augmented Lagrangian function and then performs a dual gradient ascent step to update the dual variable. The iteration complexity of ALM has been established for convex problems @cite_17 @cite_49 @cite_38 @cite_2 @cite_53 . For nonconvex problems, most existing studies of ALM show only asymptotic convergence or a local convergence rate @cite_41 @cite_61 @cite_19 @cite_30 @cite_64 @cite_82 . The computational complexity of ALM for finding an @math stationary point (under various notions of stationarity) has been obtained only for linearly constrained problems @cite_39 @cite_12 @cite_63 @cite_4 . One exception is @cite_29 , which essentially assumes that the smallest singular value of the Jacobian matrix of the constraint functions is uniformly bounded away from zero at all feasible points. In this paper, we do not require that assumption but instead need an initial nearly feasible solution when the constraints are nonconvex, which @cite_29 does not.
 {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_64",
"@cite_41",
"@cite_29",
"@cite_2",
"@cite_20",
"@cite_38",
"@cite_18",
"@cite_4",
"@cite_39",
"@cite_49",
"@cite_17",
"@cite_32",
"@cite_19",
"@cite_12",
"@cite_82",
"@cite_61",
"@cite_53",
"@cite_63"
],
"mid": [
"2962853966",
"1669104078",
"2963178962",
"1534354577",
"2955184355",
"2969771825",
"2057624533",
"2768546550",
"2135779729",
"2951136802",
"2341508215",
"2785315711",
"2144603975",
"",
"1830979757",
"2619916648",
"2076940249",
"1923817890",
"2605995072",
"2592519230"
],
"abstract": [
"In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, φ(x_0, …, x_p, y), subject to coupled linear equality constraints. Our ADMM updates each of the primal variables x_0, …, x_p, y, followed by updating the dual variable. We separate the variable y from the x_i's as it has a special role in our analysis. The developed convergence guarantee covers a variety of nonconvex functions such as piecewise linear functions, the ℓ_q quasi-norm, the Schatten-q quasi-norm (0<q<1), minimax concave penalty (MCP), and smoothly clipped absolute deviation penalty. It also allows nonconvex constraints such as compact manifolds (e.g., spherical, Stiefel, and Grassman manifolds) and linear complementarity constraints. Also, the x_0-block can be almost any lower semicontinuous function. By applying our analysis, we show, for the first time, that several ADMM algorithms applied to solve nonconvex models in statistical learning, optimization on manifold, and matrix decomposition are guaranteed to converge. Our results provide sufficient conditions for ADMM to converge on (convex or nonconvex) monotropic programs with three or more blocks, as they are special cases of our model. ADMM has been regarded as a variant to the augmented Lagrangian method (ALM). We present a simple example to illustrate how ADMM converges but ALM diverges with a bounded penalty parameter. Indicated by this example and other analysis in this paper, ADMM might be a better choice than ALM for some nonconvex nonsmooth problems, because ADMM is not only easier to implement, it is also more likely to converge for the concerned scenarios.",
"",
"The alternating direction method with multipliers (ADMM) is one of the most powerful and successful methods for solving various composite problems. The convergence of the conventional ADMM (i.e., 2-block) for convex objective functions has been stated for a long time, and its convergence for nonconvex objective functions has, however, been established very recently. The multi-block ADMM, a natural extension of ADMM, is a widely used scheme and has also been found very useful in solving various nonconvex optimization problems. It is thus expected to establish the convergence of the multi-block ADMM under nonconvex frameworks. In this paper, we first justify the convergence of 3-block Bregman ADMM. We next extend these results to the N-block case (N ≥ 3), which underlines the feasibility of multi-block ADMM applications in nonconvex settings. Finally, we present a simulation study and a real-world application to support the correctness of the obtained theoretical assertions.",
"For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form \"minimize an augmented Lagrangian function subject to linearized constraints.\" Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in MATLAB, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.",
"We propose a practical inexact augmented Lagrangian method (iALM) for nonconvex problems with nonlinear constraints. We characterize the total computational complexity of our method subject to a verifiable geometric condition, which is closely related to the Polyak-Lojasiewicz and Mangasarian-Fromovitz conditions. In particular, when a first-order solver is used for the inner iterates, we prove that iALM finds a first-order stationary point with @math calls to the first-order oracle. If, in addition, the problem is smooth and a second-order solver is used for the inner iterates, iALM finds a second-order stationary point with @math calls to the second-order oracle. These complexity results match the known theoretical results in the literature. We also provide strong numerical evidence on large-scale machine learning problems, including the Burer-Monteiro factorization of semidefinite programs, and a novel nonconvex relaxation of the standard basis pursuit template. For these examples, we also show how to verify our geometric condition.",
"Augmented Lagrangian method (ALM) has been popularly used for solving constrained optimization problems. Practically, subproblems for updating primal variables in the framework of ALM usually can only be solved inexactly. The convergence and local convergence speed of ALM have been extensively studied. However, the global convergence rate of the inexact ALM is still open for problems with nonlinear inequality constraints. In this paper, we work on general convex programs with both equality and inequality constraints. For these problems, we establish the global convergence rate of the inexact ALM and estimate its iteration complexity in terms of the number of gradient evaluations to produce a primal and/or primal-dual solution with a specified accuracy. We first establish an ergodic convergence rate result of the inexact ALM that uses constant penalty parameters or geometrically increasing penalty parameters. Based on the convergence rate result, we then apply Nesterov’s optimal first-order method on each primal subproblem and estimate the iteration complexity of the inexact ALM. We show that if the objective is convex, then O(ε^{-1}) gradient evaluations are sufficient to guarantee a primal ε-solution in terms of both primal objective and feasibility violation. If the objective is strongly convex, the result can be improved to O(ε^{-1/2}|log ε|). To produce a primal-dual ε-solution, more gradient evaluations are needed for the convex case, and the number is O(ε^{-4/3}), while for the strongly convex case, the number is still O(ε^{-1/2}|log ε|). Finally, we establish a nonergodic convergence rate result of the inexact ALM that uses geometrically increasing penalty parameters. This result is established only for the primal problem. We show that the nonergodic iteration complexity result is in the same order as that for the ergodic result. Numerical experiments on quadratically constrained quadratic programming are conducted to compare the performance of the inexact ALM with different settings.",
"The main purpose of this paper is to suggest a method for finding the minimum of a function f(x) subject to the constraint g(x)=0. The method consists of replacing f by F = f + λg + (1/2)cg^2, where c is a suitably large constant, and computing the appropriate value of the Lagrange multiplier. Only the simplest algorithm is presented. The remaining part of the paper is devoted to a survey of known methods for finding unconstrained minima, with special emphasis on the various gradient techniques that are available. This includes Newton's method and the method of conjugate gradients.",
"First-order methods have been popularly used for solving large-scale problems. However, many existing works only consider unconstrained problems or those with a simple constraint. In this paper, we develop two first-order methods for constrained convex programs, for which the constraint set is represented by affine equations and smooth nonlinear inequalities. Both methods are based on the classic augmented Lagrangian function. They update the multipliers in the same way as the augmented Lagrangian method (ALM) but employ different primal variable updates. The first method, at each iteration, performs a single proximal gradient step to the primal variable, and the second method is a block update version of the first one. For the first method, we establish its global iterate convergence as well as global sublinear and local linear convergence, and for the second method, we show a global sublinear convergence result in expectation. Numerical experiments are carried out on the basis pursuit denoising and a convex quadratically constrained quadratic program to show the empirical performance of the proposed methods. Their numerical behaviors closely match the established theoretical results.",
"The theory of the proximal point algorithm for maximal monotone operators is applied to three algorithms for solving convex programs, one of which has not previously been formulated. Rate-of-convergence results for the “method of multipliers,” of the strong sort already known, are derived in a generalized form relevant also to problems beyond the compass of the standard second-order conditions for optimality. The new algorithm, the “proximal method of multipliers,” is shown to have much the same convergence properties, but with some potential advantages.",
"In this paper we study the worst-case complexity of an inexact Augmented Lagrangian method for nonconvex inequality-constrained problems. Assuming that the penalty parameters are bounded, we prove a complexity bound of @math iterations for the referred algorithm to generate an @math approximate KKT point, for @math . When the penalty parameters are unbounded, we prove an iteration complexity bound of @math , where @math controls the rate of increase of the penalty parameters. For linearly constrained problems, these bounds yield evaluation complexity bounds of @math and @math , respectively, when suitable @math order methods ( @math ) are used to approximately solve the unconstrained subproblems at each iteration of our Augmented Lagrangian scheme.",
"In this paper, we propose a new decomposition approach named the proximal primal-dual algorithm (Prox-PDA) for smooth nonconvex linearly constrained optimization problems. The proposed approach is primal-dual based, where the primal step minimizes certain approximation of the augmented Lagrangian of the problem, and the dual step performs an approximate dual ascent. The approximation used in the primal step is able to decompose the variable blocks, making it possible to obtain simple subproblems by leveraging the problem structures. Theoretically, we show that whenever the penalty parameter in the augmented Lagrangian is larger than a given threshold, the Prox-PDA converges to the set of stationary solutions, globally and in a sublinear manner (i.e., certain measure of stationarity decreases in the rate of @math , where @math is the iteration counter). Interestingly, when applying a variant of the Prox-PDA to the problem of distributed nonconvex optimization (over a connected undirected graph), the resulting algorithm coincides with the popular EXTRA algorithm [ 2014], which is only known to work in convex cases. Our analysis implies that EXTRA and its variants converge globally sublinearly to stationary solutions of certain nonconvex distributed optimization problem. There are many possible extensions of the Prox-PDA, and we present one particular extension to certain nonconvex distributed matrix factorization problem.",
"Stochastic gradient (SG) method has been popularly applied to solve optimization problems with objective that is stochastic or an average of many functions. Most existing works on SG assume that the underlying problem is unconstrained or has an easy-to-project constraint set. In this paper, we consider problems that have a stochastic objective and also many functional constraints. For such problems, it could be extremely expensive to project a point to the feasible set, or even compute subgradient and/or function value of all constraint functions. To find solutions of these problems, we propose a novel SG method based on the augmented Lagrangian function. Within every iteration, it inquires a stochastic subgradient of the objective, a subgradient and function value of one randomly sampled constraint function, and function value of another sampled constraint function. Hence, the per-iteration complexity is low. We establish its convergence rate for convex and also strongly convex problems. It can achieve the optimal @math convergence rate for convex case and nearly optimal @math rate for strongly convex case. Numerical experiments on quadratically constrained quadratic programming are conducted to demonstrate its efficiency.",
"This paper considers a special but broad class of convex programming problems whose feasible region is a simple compact convex set intersected with the inverse image of a closed convex cone under an affine transformation. It studies the computational complexity of quadratic penalty based methods for solving the above class of problems. An iteration of these methods, which is simply an iteration of Nesterov’s optimal method (or one of its variants) for approximately solving a smooth penalization subproblem, consists of one or two projections onto the simple convex set. Iteration-complexity bounds expressed in terms of the latter type of iterations are derived for two quadratic penalty based variants, namely: one which applies the quadratic penalty method directly to the original problem and another one which applies the latter method to a perturbation of the original problem obtained by adding a small quadratic term to its objective function.",
"",
"In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that execute adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method by [An adaptive augmented Lagrangian method for large-scale constrained optimization, Math. Program. 152 (2015), pp. 201–245.]. The first focal point of this paper is a new variant of the approach that employs a line search rather than a trust region strategy, where a critical algorithmic feature for the line search strategy is the use of convexified piecewise quadratic models of the AL function for computing the search directions. We prove global convergence guarantees for our line search algorithm that are on par with those for the previously proposed trust region method. A second focal point of this paper is the practical performance of the line search and trust region algorithm variants in Matlab software, as well as that of an adaptive penalty parameter updating strategy incorporated into the Lancelot software. We test these methods on problems from the CUTEst and COPS collections, as well as on challenging test problems related to optimal power flow. Our numerical experience suggests that the adaptive algorithms outperform traditional AL methods in terms of efficiency and reliability. As with traditional AL algorithms, the adaptive methods are matrix-free and thus represent a viable option for solving large-scale problems.",
"This paper establishes the iteration-complexity of a Jacobi-type non-Euclidean proximal alternating direction method of multipliers (ADMM) for solving multi-block linearly constrained nonconvex programs. The subproblems of this ADMM variant can be solved in parallel and hence the method has great potential to solve large-scale multi-block linearly constrained nonconvex programs. Moreover, our analysis allows the Lagrange multiplier to be updated with a relaxation parameter in the interval (0, 2).",
"We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition. In particular, no constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either the strict complementarity assumption or a stronger version of the second-order sufficient condition. That said, the classical results allow the initial multiplier estimate to be far from the optimal one, at the expense of proportionally increasing the threshold value for the penalty parameters. Although our primary goal is to avoid constraint qualifications, if the stronger assumptions are introduced, then starting points far from the optimal multiplier are allowed within our analysis as well. Using only the second-order sufficient optimality condition, for penalty parameters large ...",
"The alternating direction method with multipliers (ADMM) has been one of the most powerful and successful methods for solving various convex or nonconvex composite problems that arise in the fields of image & signal processing and machine learning. In convex settings, numerous convergence results have been established for ADMM as well as its variants. However, due to the absence of convexity, the convergence analysis of nonconvex ADMM is generally very difficult. In this paper we study the Bregman modification of ADMM (BADMM), which includes the conventional ADMM as a special case and often leads to an improvement of the performance of the algorithm. Under certain assumptions, we prove that the iterative sequence generated by BADMM converges to a stationary point of the associated augmented Lagrangian function. The obtained results underline the feasibility of ADMM in applications under nonconvex settings.",
"In this paper we present a complete iteration complexity analysis of inexact first-order Lagrangian and penalty methods for solving cone-constrained convex problems that have or may not have optimal Lagrange multipliers that close the duality gap. We first assume the existence of optimal Lagrange multipliers and study primal–dual first-order methods based on inexact information and augmented Lagrangian smoothing or Nesterov-type smoothing. For inexact (fast) gradient augmented Lagrangian methods, we derive an overall computational complexity of O(1/ϵ) projections onto a simple primal set in order to attain an ϵ-optimal solution of the conic convex problem. For the inexact fast gradient method combined with Nesterov-type smoothing, we derive computational complexity O(1/ϵ^{3/2}) projections onto the same set. Then, we assume that optimal Lagrange multipliers might not exist for the cone-constrained convex problem, and analyse the fast gradient method for solving penalty reformulations of the problem. For the ...",
"This paper establishes convergence rate bounds for a variant of the proximal alternating direction method of multipliers (ADMM) for solving nonconvex linearly constrained optimization problems. The variant of the proximal ADMM allows the inclusion of an overrelaxation stepsize parameter belonging to the interval @math . To the best of our knowledge, all related papers in the literature only consider the case where the overrelaxation parameter lies in the interval @math ."
]
}
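The ALM iteration described in the related-work passage above (primal minimization of the augmented Lagrangian followed by a dual gradient ascent step) can be sketched in a scalar toy setting; the problem instance, step sizes, and iteration counts below are illustrative assumptions, not drawn from any of the cited works.

```python
def alm(f_grad, c, c_grad, rho=10.0, outer=50, inner=2000, step=1e-2):
    """Sketch of an inexact ALM for  min f(x)  s.t.  c(x) = 0.
    Augmented Lagrangian:  L(x, lam) = f(x) + lam*c(x) + (rho/2)*c(x)**2."""
    x, lam = 0.0, 0.0
    for _ in range(outer):
        # Primal step: approximately minimize L(., lam) by gradient descent.
        for _ in range(inner):
            grad = f_grad(x) + (lam + rho * c(x)) * c_grad(x)
            if abs(grad) < 1e-12:
                break
            x -= step * grad
        # Dual step: gradient ascent on the multiplier.
        lam += rho * c(x)
    return x, lam

# Toy instance (assumed for illustration): min x^2  s.t.  x - 1 = 0,
# whose solution is x* = 1 with multiplier lam* = -2 (from 2x* + lam* = 0).
f_grad = lambda x: 2.0 * x
c = lambda x: x - 1.0
c_grad = lambda x: 1.0

x, lam = alm(f_grad, c, c_grad)
print(x, lam)  # approaches (1.0, -2.0)
```

Note the two-level structure matches the passage: the primal subproblem is only solved approximately, and the multiplier estimate converges alongside the primal iterate, which is what distinguishes ALM from a pure quadratic penalty method.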

1908.11518
 2971189344
 Nonconvex optimization problems arise in many areas of science and engineering. Although many numerical methods and theories have been developed for unconstrained nonconvex problems, the parallel development for constrained nonconvex problems remains limited. This limits the practice of mathematical modeling and quantitative decision making in many disciplines. In this paper, an inexact proximal-point penalty method is proposed for constrained optimization problems where both the objective function and the constraints can be nonconvex. The proposed method approximately solves a sequence of subproblems, each of which is formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved effectively to a required accuracy by an optimal gradient-type method. The theoretical properties of the proposed method are analyzed in two different cases. In the first case, the objective function is nonconvex but the constraint functions are assumed to be convex, while in the second case, both the objective function and the constraints are nonconvex. For both cases, we give complexity results in terms of the number of function value and gradient evaluations needed to produce near-stationary points. Due to the different structures, different definitions of near-stationary points are given for the two cases. The complexity for producing a nearly @math stationary point is @math for the first case, while it becomes @math for the second case.
 While preparing this paper, we noticed two recently posted papers @cite_86 @cite_76 on problems with nonconvex constraints. The algorithms in both works are based on the proximal point method. Unlike our approach, they solve subproblems with strongly convex objectives and also strongly convex constraints, obtained by adding proximal terms to the objective and constraints. Their analysis requires the uniform boundedness of the dual solutions of all subproblems and, to ensure this requirement is satisfied, @cite_48 assume that a uniform Slater's condition holds, while @cite_86 assume that the Mangasarian-Fromovitz constraint qualification holds at the limit points of the generated iterates. However, neither assumption can be easily verified. As pointed out in @cite_86 , their assumptions are implied by a sufficient feasibility assumption, which is an even stronger assumption. In contrast, our analysis in the nonconvex constrained case does not depend on the boundedness of the dual variables, and thus does not need the aforementioned assumptions of @cite_86 @cite_76 .
 {
"cite_N": [
"@cite_86",
"@cite_76",
"@cite_48"
],
"mid": [
"2964862314",
"",
"2964772384"
],
"abstract": [
"Nonconvex optimization is becoming more and more important in machine learning and operations research. In spite of recent progresses, the development of provably efficient algorithm for optimization with nonconvex functional constraints remains open. Such problems have potential applications in riskaverse machine learning, semisupervised learning and robust optimization among others. In this paper, we introduce a new proximal point type method for solving this important class of nonconvex problems by transforming them into a sequence of convex constrained subproblems. We establish the convergence and rate of convergence of this algorithm to the KKT point under different types of constraint qualifications. In particular, we prove that our algorithm will converge to an @math KKT point in @math iterations under a properly defined condition. For practical use, we present inexact variants of this approach, in which approximate solutions of the subproblems are computed by either primal or primaldual type algorithms, and establish their associated rate of convergence. To the best of our knowledge, this is the first time that proximal point type method is developed for nonlinear programing with nonconvex functional constraints, and most of the convergence and complexity results seem to be new in the literature.",
"",
"Optimization models with nonconvex constraints arise in many tasks in machine learning, e.g., learning with fairness constraints or NeymanPearson classification with nonconvex loss. Although many efficient methods have been developed with theoretical convergence guarantees for nonconvex unconstrained problems, it remains a challenge to design provably efficient algorithms for problems with nonconvex functional constraints. This paper proposes a class of subgradient methods for constrained optimization where the objective function and the constraint functions are are weakly convex. Our methods solve a sequence of strongly convex subproblems, where a proximal term is added to both the objective function and each constraint function. Each subproblem can be solved by various algorithms for strongly convex optimization. Under a uniform Slater's condition, we establish the computation complexities of our methods for finding a nearly stationary point."
]
}
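To make the subproblem structure described above concrete (original objective plus a proximal term and a quadratic penalty, with the penalty weight increased across subproblems), here is a hypothetical one-dimensional sketch. The function, constraint, and all parameter values are illustrative assumptions, not the paper's algorithm or constants.

```python
import math

def prox_point_penalty(f_grad, c, c_grad, x0, rho=2.0, beta0=1.0,
                       outer=30, inner=200):
    """Each subproblem minimizes f(x) + (beta/2)*c(x)^2 + (rho/2)*(x - x_k)^2,
    solved approximately by gradient descent; beta grows across subproblems."""
    x, beta = x0, beta0
    for _ in range(outer):
        xk = x
        lr = 1.0 / (beta + rho + 1.0)  # ~1/L stepsize for the smooth subproblem
        for _ in range(inner):
            g = f_grad(x) + beta * c(x) * c_grad(x) + rho * (x - xk)
            x -= lr * g
        beta *= 1.5  # increase the quadratic-penalty weight
    return x

# f(x) = -cos(x) is 1-weakly convex (f'' >= -1), so rho = 2 makes each
# subproblem strongly convex; the constraint is c(x) = x - 2 = 0.
x_star = prox_point_penalty(math.sin, lambda t: t - 2.0, lambda t: 1.0, x0=0.0)
```

As the penalty weight grows, the iterates are driven toward feasibility (here, x = 2), mirroring the role of the quadratic penalty terms in the abstract.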

1908.11518
 2971189344
 Nonconvex optimization problems arise in various areas of science and engineering. Although many numerical methods and theories have been developed for unconstrained nonconvex problems, the parallel development for constrained nonconvex problems remains limited. This restricts the practice of mathematical modeling and quantitative decision making in many disciplines. In this paper, an inexact proximal-point penalty method is proposed for constrained optimization problems where both the objective function and the constraint can be nonconvex. The proposed method approximately solves a sequence of subproblems, each of which is formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved effectively to a required accuracy by an optimal gradient-type method. The theoretical properties of the proposed method are analyzed in two different cases. In the first case, the objective function is nonconvex but the constraint functions are assumed to be convex, while in the second case, both the objective function and the constraint are nonconvex. For both cases, we give complexity results in terms of the number of function value and gradient evaluations needed to produce near-stationary points. Due to the different structures, different definitions of near-stationary points are given for the two cases. The complexity for producing a nearly @math stationary point is @math for the first case, while it becomes @math for the second case.
 In addition to the methods above, algorithms that utilize Hessian information have been developed to find second-order @math stationary points of linearly constrained smooth nonconvex optimization problems @cite_87 @cite_71 @cite_15 . Unlike these works, we focus on finding an approximate first-order stationary point for nonlinearly constrained nonconvex optimization using only gradient information.
 {
"cite_N": [
"@cite_15",
"@cite_71",
"@cite_87"
],
"mid": [
"2959708829",
"2895571900",
"2788706426"
],
"abstract": [
"This paper proposes lowcomplexity algorithms for finding approximate secondorder stationary points (SOSPs) of problems with smooth nonconvex objective and linear constraints. While finding (approximate) SOSPs is computationally intractable, we first show that generic instances of the problem can be solved efficiently. More specifically, for a generic problem instance, certain strict complementarity (SC) condition holds for all KarushKuhnTucker (KKT) solutions (with probability one). The SC condition is then used to establish an equivalence relationship between two different notions of SOSPs, one of which is computationally easy to verify. Based on this particular notion of SOSP, we design an algorithm named the Successive Negativecurvature grAdient Projection (SNAP), which successively performs either conventional gradient projection or some negative curvature based projection steps to find SOSPs. SNAP and its firstorder extension SNAP @math , require @math iterations to compute an @math SOSP, and their periteration computational complexities are polynomial in the number of constraints and problem dimension. To our knowledge, this is the first time that firstorder algorithms with polynomial periteration complexity and global sublinear rate have been designed to find SOSPs of the important class of nonconvex problems with linear constraints.",
"We consider the problem of finding an approximate secondorder stationary point of a constrained nonconvex optimization problem. We first show that, unlike the unconstrained scenario, the vanilla projected gradient descent algorithm may converge to a strict saddle point even when there is only a single linear constraint. We then provide a hardness result by showing that checking ( , )second order stationarity is NPhard even in the presence of linear constraints. Despite our hardness result, we identify instances of the problem for which checking second order stationarity can be done efficiently. For such instances, we propose a dynamic second order FrankWolfe algorithm which converges to ( , )second order stationary points in O ( ^ 2 , ^ 3 ) iterations. The proposed algorithm can be used in general constrained nonconvex optimization as long as the constrained quadratic subproblem can be solved efficiently.",
"In this work, we study two firstorder primaldual based algorithms, the Gradient PrimalDual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained nonconvex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms are able to compute secondorder stationary solutions (ss2) with probability one. This is the first result showing that primaldual algorithm is capable of finding ss2 when only using firstorder information, it also extends the existing results for firstorder, but primalonly algorithms. An important implication of our result is that it also gives rise to the first global convergence result to the ss2, for two classes of unconstrained distributed nonconvex learning problems over multiagent networks."
]
}
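To ground the idea of using Hessian information for second-order stationarity, here is a toy, unconstrained sketch of a negative-curvature step escaping a strict saddle. The cited works handle linear constraints and are far more sophisticated; the objective, thresholds, and stepsizes below are illustrative assumptions only.

```python
import numpy as np

def grad(v):
    x, y = v
    return np.array([2.0 * x, -2.0 * y + y**3])   # gradient of x^2 - y^2 + y^4/4

def hess(v):
    x, y = v
    return np.array([[2.0, 0.0], [0.0, -2.0 + 3.0 * y**2]])

def gd_with_negative_curvature(v, lr=0.1, tol=1e-6, max_steps=500):
    """Plain gradient steps; where the gradient vanishes, check the Hessian
    and step along a negative-curvature eigenvector to escape a strict
    saddle, stopping at an approximate second-order stationary point."""
    for _ in range(max_steps):
        g = grad(v)
        if np.linalg.norm(g) > tol:
            v = v - lr * g
        else:
            w, V = np.linalg.eigh(hess(v))   # eigenvalues in ascending order
            if w[0] >= -tol:
                return v                     # approximately second-order stationary
            v = v + lr * V[:, 0]             # negative-curvature step
    return v
```

Started at the origin (an exact saddle of x^2 - y^2 + y^4/4, where the gradient alone gives no direction), the negative-curvature step moves along the y-axis and plain gradient descent then reaches a minimizer at y = ±sqrt(2).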

1908.11053
 2970409889
 Formal query generation aims to generate correct executable queries for question answering over knowledge bases (KBs), given entity and relation linking results. Current approaches build universal paraphrasing or ranking models for whole questions, which are likely to fail in generating queries for complex, long-tail questions. In this paper, we propose SubQG, a new query generation approach based on frequent query substructures, which helps rank the existing (but non-significant) query structures or build new query structures. Our experiments on two benchmark datasets show that our approach significantly outperforms existing ones, especially for complex questions. Also, it achieves promising performance with limited training data and noisy entity and relation linking results.
 Semantic parsing-based approaches translate questions into formal queries using bottom-up parsing @cite_5 or staged query graph generation @cite_12 . gAnswer @cite_15 @cite_1 builds a semantic query graph for question analysis and utilizes subgraph matching for disambiguation. Recent studies combine parsing-based approaches with neural networks to enhance the ability for structure disambiguation. ConstraintQG ( ConstraintQG ), CQAEMNLP ( CQAEMNLP ) and SQG ( SQG ) build query graphs by staged query generation, and follow an encode-and-compare framework to rank candidate queries with neural networks. These approaches try to learn entire representations for questions with different query structures by using a single network. Thus, they may suffer from a lack of training data, especially for questions with rarely appearing structures. By contrast, our approach utilizes multiple networks to learn predictors for different query substructures, which can achieve stable performance with limited training data. Also, our approach does not require manually written rules, and performs stably with noisy linking results.
 {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_1",
"@cite_12"
],
"mid": [
"2252136820",
"2011992920",
"2766317792",
"2251079237"
],
"abstract": [
"In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from questionanswer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their stateoftheart parser. Additionally, we collected a more realistic and challenging dataset of questionanswer pairs and improves over a natural baseline.",
"RDF question answering (Q A) allows users to ask questions in natural languages over a knowledge base represented by RDF. To answer a national language question, the existing work takes a twostage approach: question understanding and query evaluation. Their focus is on question understanding to deal with the disambiguation of the natural language phrases. The most common technique is the joint disambiguation, which has the exponential search space. In this paper, we propose a systematic framework to answer natural language questions over RDF repository (RDF Q A) from a graph datadriven perspective. We propose a semantic query graph to model the query intention in the natural language question in a structural way, based on which, RDF Q A is reduced to subgraph matching problem. More importantly, we resolve the ambiguity of natural language questions at the time when matches of query are found. The cost of disambiguation is saved if there are no matching found. We compare our method with some stateoftheart RDF Q A systems in the benchmark dataset. Extensive experiments confirm that our method not only improves the precision but also speeds up query performance greatly.",
"RDF question answering (Q A) allows users to ask questions in natural languages over a knowledge base represented by RDF. To answer a natural language question, the existing work takes a twostage approach: question understanding and query evaluation. Their focus is on question understanding to deal with the disambiguation of the natural language phrases. The most common technique is the joint disambiguation, which has the exponential search space. In this paper, we propose a systematic framework to answer natural language questions over RDF repository (RDF Q A) from a graph datadriven perspective. We propose a semantic query graph to model the query intention in the natural language question in a structural way, based on which, RDF Q A is reduced to subgraph matching problem. More importantly, we resolve the ambiguity of natural language questions at the time when matches of query are found. The cost of disambiguation is saved if there are no matching found. More specifically, we propose two different frameworks to build the semantic query graph, one is relation (edge)first and the other one is nodefirst. We compare our method with some stateoftheart RDF Q A systems in the benchmark dataset. Extensive experiments confirm that our method not only improves the precision but also speeds up query performance greatly.",
"We propose a novel semantic parsing framework for question answering using a knowledge base. We define a query graph that resembles subgraphs of the knowledge base and can be directly mapped to a logical form. Semantic parsing is reduced to query graph generation, formulated as a staged search problem. Unlike traditional approaches, our method leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem. By applying an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences, our system outperforms previous methods substantially, and achieves an F1 measure of 52.5 on the WEBQUESTIONS dataset."
]
}
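The encode-and-compare framework mentioned above can be sketched with a deliberately simplified scorer: encode the question and each candidate query, then rank candidates by similarity. Real systems use neural encoders trained end-to-end; the bag-of-words encoder, token lists, and function names here are illustrative assumptions only.

```python
import math
from collections import Counter

def encode(tokens):
    # Stand-in encoder: bag-of-words counts (real systems use neural encoders)
    return Counter(tokens)

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)   # Counter returns 0 for missing tokens
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_queries(question_tokens, candidates):
    """Encode-and-compare: encode the question once, encode each candidate
    formal query, and return the highest-scoring candidate."""
    q = encode(question_tokens)
    scored = [(cosine(q, encode(c)), " ".join(c)) for c in candidates]
    return max(scored)[1]
```

For example, a question about a film's director scores a "director"-shaped candidate above an "actor"-shaped one; SubQG's substructure-level predictors refine exactly this kind of ranking.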

1908.11056
 2971248932
 In the face of growing needs for water and energy, a fundamental understanding of the environmental impacts of human activities becomes critical for managing water and energy resources, remedying water pollution, and making regulatory policy wisely. Activities that impact the environment include oil and gas production, wastewater transport, and urbanization. In addition to the occurrence of anthropogenic contamination, the presence of some contaminants (e.g., methane, salt, and sulfate) of natural origin is not uncommon. Therefore, scientists sometimes find it difficult to identify the sources of contaminants in coupled natural and human systems. In this paper, we propose a technique to simultaneously conduct source detection and prediction, which outperforms other approaches in an interdisciplinary case study on identifying potential groundwater contamination within a region of high-density shale gas development.
 Dictionary learning has been widely used in computer vision to obtain basic components and sparse representations of images @cite_7 . Recently, in order to optimize the learned dictionary for a specific task, supervised dictionary learning has been proposed @cite_1 . Some methods learn discriminative dictionaries for different classes @cite_8 @cite_14 , or use label information to prune a dictionary learned by unsupervised dictionary learning @cite_5 . These methods separate dictionary learning from the supervised learning part, which may lead to inferior results. Another group of methods combines dictionary learning and supervised learning @cite_1 @cite_11 , but fails to consider the spatio-temporal properties of specific problems. Hence, we propose to perform dictionary learning and supervised learning iteratively, with spatial and temporal regularization added to improve the interpretability of the results.
 {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_5",
"@cite_11"
],
"mid": [
"",
"2536599074",
"2032768707",
"2128638419",
"2151768982",
""
],
"abstract": [
"",
"We propose in this paper to unify two different approaches to image restoration: On the one hand, learning a basis set (dictionary) adapted to sparse signal descriptions has proven to be very effective in image reconstruction and classification tasks. On the other hand, explicitly exploiting the selfsimilarities of natural images has led to the successful nonlocal means approach to image restoration. We propose simultaneous sparse coding as a framework for combining these two approaches in a natural manner. This is achieved by jointly decomposing groups of similar signals on subsets of the learned dictionary. Experimental results in image denoising and demosaicking tasks with synthetic and real noise show that the proposed method outperforms the state of the art, making it possible to effectively restore raw images from digital cameras at a reasonable speed and memory cost.",
"Face recognition (FR) is an active yet challenging topic in computer vision applications. As a powerful tool to represent high dimensional data, recently sparse representation based classification (SRC) has been successfully used for FR. This paper discusses the metaface learning (MFL) of face images under the framework of SRC. Although directly using the training samples as dictionary bases can achieve good FR performance, a well learned dictionary matrix can lead to higher FR rate with less dictionary atoms. An SRC oriented unsupervised MFL algorithm is proposed in this paper and the experimental results on benchmark face databases demonstrated the improvements brought by the proposed MFL algorithm over original SRC.",
"It is now well established that sparse signal models are well suited for restoration tasks and can be effectively learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and discriminative class models. The linear version of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.",
"We present an approach to determine the category and location of objects in images. It performs very fast categorization of each pixel in an image, a bruteforce approach made feasible by three key developments: First, our method reduces the size of a large generic dictionary (on the order of ten thousand words) to the low hundreds while increasing classification performance compared to kmeans. This is achieved by creating a discriminative dictionary tailored to the task by following the information bottleneck principle. Second, we perform featurebased categorization efficiently on a dense grid by extending the concept of integral images to the computation of local histograms. Third, we compute SIFT descriptors densely in linear time. We compare our method to the state of the art and find that it excels in accuracy and simplicity, performing better while assuming less.",
""
]
}
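To ground the discussion, here is a minimal unsupervised sketch of the alternation at the core of dictionary learning: an ISTA sparse-coding step followed by a least-squares dictionary update with column normalization. The supervised variants discussed above additionally couple this loop with a classifier; all parameter values here are illustrative assumptions.

```python
import numpy as np

def sparse_code(D, X, lam, steps=100):
    """ISTA for the sparse-coding step: min_A 0.5*||X - D A||_F^2 + lam*||A||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(steps):
        Z = A - D.T @ (D @ A - X) / L          # gradient step
        A = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # shrinkage
    return A

def dict_learn(X, k, lam=0.01, iters=20, seed=0):
    """Alternate sparse coding with a least-squares dictionary update and
    column normalization (unsupervised; supervised variants couple this
    alternation with a classifier)."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], k))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        A = sparse_code(D, X, lam)
        D = X @ A.T @ np.linalg.pinv(A @ A.T)              # dictionary update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # renormalize atoms
    A = sparse_code(D, X, lam)  # final codes for the normalized dictionary
    return D, A
```

On data generated from a small ground-truth dictionary with sparse coefficients, a few alternations drive the reconstruction error well below that of the initial random dictionary.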

1908.11044
 2971299764
 We present a general paradigm for dynamic 3D reconstruction from multiple independent and uncontrolled image sources having arbitrary temporal sampling density and distribution. Our graph-theoretic formulation models the spatiotemporal relationships among our observations in terms of the joint estimation of their 3D geometry and its discrete Laplace operator. Towards this end, we define a tri-convex optimization framework that leverages the geometric properties and dependencies found among a Euclidean shape-space and the discrete Laplace operator describing its local and global topology. We present a reconstructability analysis, experiments on motion capture data and multi-view image datasets, and explore applications to geometry-based event segmentation and data association.
 Temporal alignment is a necessary preprocessing step for most dynamic 3D reconstruction methods. Current video synchronization or image sequencing methods @cite_42 @cite_44 @cite_27 @cite_18 @cite_29 @cite_19 rely on 2D image features, forgoing the recovery of the 3D structure. Feature-based sequencing methods like @cite_42 @cite_27 @cite_16 make different assumptions about the underlying imaging geometry. For example, while @cite_42 favors an approximately static imaging geometry, @cite_27 prefers viewing configurations with large baselines. @cite_44 overcomes the limitation of static cameras and improves accuracy by leveraging the temporal information of frames in individual cameras. @cite_18 determines the spatiotemporal alignment among a partially ordered set of observations by framing the problem as a mapping of @math observations into a single line in @math , which explicitly imposes a total ordering. Unlike previous methods, @cite_20 propose a synchronization algorithm that does not track corresponding features between video sequences. Instead, they synchronize two videos by the relative motion between two rigid objects. @cite_25 determines sequencing based on the approximate 3D intersections of viewing rays under an affine reference frame. @cite_30 jointly synchronizes a pair of video sequences and reconstructs their commonly observed dense 3D structure by maximizing the spatiotemporal consistency of two-view pixel correspondences across video sequences.
 {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_29",
"@cite_42",
"@cite_44",
"@cite_19",
"@cite_27",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"",
"2167747244",
"1886695513",
"",
"2152136819",
"",
"1901796002",
"2126784706",
"2097446893",
"1850003624"
],
"abstract": [
"",
"In this paper, we consider the problem of estimating the spatiotemporal alignment between N unsynchronized video sequences of the same dynamic 3D scene, captured from distinct viewpoints. Unlike most existing methods, which work for N = 2 and rely on a computationally intensive search in the space of temporal alignments, we present a novel approach that reduces the problem for general N to the robust estimation of a single line in RN. This line captures all temporal relations between the sequences and can be computed without any prior knowledge of these relations. Considering that the spatial alignment is captured by the parameters of fundamental matrices, an iterative algorithm is used to refine simultaneously the parameters representing the temporal and spatial relations between the sequences. Experimental results with realworld and synthetic sequences show that our method can accurately align the videos even when they have large misalignments (e.g., hundreds of frames), when the problem is seemingly ambiguous (e.g., scenes with roughly periodic motion), and when accurate manual alignment is difficult (e.g., due to slowmoving objects).",
"We present a novel algorithm for temporally synchronizing multiple videos capturing the same dynamic scene. Our algorithm relies on general image features and it does not require explicitly tracking any specific object, making it applicable to general scenes with complex motion. This is facilitated by our new trajectory filtering and matching schemes that correctly identifies matching pairs of trajectories (inliers) from a large set of potential candidate matches, of which many are outliers. We find globally optimal synchronization parameters by using a stable RANSACbased optimization approach. For multivideo synchronization, the algorithm identifies an informative subset of video pairs which prevents the RANSAC algorithm from being biased by outliers. Experiments on twocamera and multicamera synchronization demonstrate the performance of our algorithm.",
"",
"Photosequencing is the problem of recovering the temporal order of a set of still images of a dynamic event, taken asynchronously by a set of uncalibrated cameras. Solving this problem is a first, crucial step for analyzing (or visualizing) the dynamic content of the scene captured by a large number of freely moving spectators. We propose a geometric based solution, followed by rank aggregation to the photosequencing problem. Our algorithm trades spatial certainty for temporal certainty. Whereas the previous solution proposed by [4] relies on two images taken from the same static camera to eliminate uncertainty in space, we drop the staticcamera assumption and replace it with temporal information available from images taken from the same (moving) camera. Our method thus overcomes the limitation of the staticcamera assumption, and scales much better with the duration of the event and the spread of cameras in space. We present successful results on challenging real data sets and large scale synthetic data (250 images).",
"",
"We present an algorithm that synchronizes two short video sequences where an object undergoes ballistic motion against stationary scene points. The object’s motion and epipolar geometry are exploited to guide the algorithm to the correct synchronization in an iterative manner. Our algorithm accurately synchronizes videos recorded at different frame rates, and takes few iterations to converge to subframe accuracy. We use synthetic data to analyze our algorithm’s accuracy under the influence of noise. We demonstrate that it accurately synchronizes real video sequences, and evaluate its performance against manual synchronization.",
"This paper presents a method of synchronizing video sequences that exploits the nonrigidity of sets of 3D point features (e.g., anatomical joint locations) within the scene. The theory is developed for homography, perspective and affine projection models within a unified rank constraint framework that is computationally cheap. An efficient method is then presented that recovers potential frame correspondences, estimates possible synchronization parameters via the Hough transform and refines these parameters using nonlinear optimization methods in order to recover synchronization to subframe accuracy, even for sequences of unknown and different frame rates. The method is evaluated quantitatively using synthetic data and demonstrated qualitatively on several real sequences.",
"We present a novel method for automatically synchronizing two video sequences of the same event. Unlike previously proposed methods, we do not put any restrictive constraints on the scene nor on the camera motions: our method can deal with independently moving cameras, wide baseline conditions, and general 3D scenes. It starts from five point correspondences throughout the video sequences, that are provided using wide baseline matching and tracking techniques. It is efficient, in that it can be implemented in a noncombinatorial way. The feasibility of the method is demonstrated by preliminary experimental results.",
"In this work, a method that synchronizes two video sequences is proposed. Unlike previous methods, which require the existence of correspondences between features tracked in the two sequences, and or that the cameras are static or jointly moving, the proposed approach does not impose any of these constraints. It works when the cameras move independently, even if different features are tracked in the two sequences. The assumptions underlying the proposed strategy are that the intrinsic parameters of the cameras are known and that two rigid objects, with independent motions on the scene, are visible in both sequences. The relative motion between these objects is used as clue for the synchronization. The extrinsic parameters of the cameras are assumed to be unknown. A new synchronization algorithm for static or jointly moving cameras that see (possibly) different parts of a common rigidly moving object is also proposed. Proofofconcept experiments that illustrate the performance of these methods are presented, as well as a comparison with a stateoftheart approach."
]
}
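As baseline intuition for the temporal-alignment problem surveyed above, the sketch below estimates an integer offset between two 1-D feature trajectories by brute-force search over shifts with normalized correlation. This is a deliberately simple stand-in, not any of the cited geometric methods; names and parameters are illustrative.

```python
import numpy as np

def estimate_offset(a, b, max_shift):
    """Return the integer shift s maximizing the normalized correlation
    between a[s + i] and b[i], searched over |s| <= max_shift."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Overlapping segments of a and b under shift s
        if s >= 0:
            x, y = a[s:], b[:len(b) - s]
        else:
            x, y = a[:len(a) + s], b[-s:]
        n = min(len(x), len(y))
        xc = x[:n] - x[:n].mean()
        yc = y[:n] - y[:n].mean()
        denom = np.linalg.norm(xc) * np.linalg.norm(yc)
        if denom > 0:
            score = float(np.dot(xc, yc)) / denom
            if score > best_score:
                best, best_score = s, score
    return best
```

For two windows cut from the same random-walk signal at different start times, the search recovers the true offset exactly, since the correctly aligned segments correlate perfectly.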

1908.11315
 2970031230
 Due to its hereditary nature, genomic data is linked not only to its owner but also to close relatives. As a result, its sensitivity does not really degrade over time; in fact, the relevance of a genomic sequence is likely to last longer than the security provided by encryption. This prompts the need for specialized techniques providing long-term security for genomic data, yet the only available tool for this purpose is GenoGuard (, 2015). By relying on Honey Encryption, GenoGuard is secure against an adversary that can brute-force all possible keys; i.e., whenever an attacker tries to decrypt using an incorrect password, she will obtain an incorrect but plausible-looking decoy sequence. In this paper, we set out to analyze the real-world security guarantees provided by GenoGuard; specifically, we assess how much more information access to a ciphertext encrypted using GenoGuard yields, compared to one that was not. Overall, we find that, if the adversary has access to side information in the form of partial information from the target sequence, the use of GenoGuard does appreciably increase her power in determining the rest of the sequence. We show that, in the case of a sequence encrypted using an easily guessable (low-entropy) password, the adversary is able to rule out most decoy sequences and obtain the target sequence with just 2.5% of it available as side information. In the case of a harder-to-guess (high-entropy) password, we show that the adversary still obtains, on average, better accuracy in guessing the rest of the target sequence than when using state-of-the-art genomic sequence inference methods, obtaining up to a 15% improvement in accuracy.
 Membership inference. @cite_33 present a membership inference attack in which they infer the presence of an individual's genotype within a complex genomic DNA mixture. @cite_2 improve on the attack using correlation statistics of just a few hundred SNPs, while @cite_16 rely on regression coefficients. Shringarpure and Bustamante @cite_30 perform membership inference against the Beacon network. Beacons are web servers that answer questions such as "does your dataset include a genome that has a specific nucleotide at a specific genomic coordinate?", to which the Beacon responds yes or no, without referring to a specific individual; see https://github.com/ga4gh-beacon/specification . They use a likelihood-ratio test to predict whether an individual is present in the Beacon, detecting membership within a Beacon with 1,000 individuals using 5,000 queries. Also, Von @cite_9 reduce the number of queries to less than 0.5%; their best performing attack uses a high-order Markov chain to model the SNP correlations, as described in @cite_11 . Note that, as part of the attacks described in this paper, we use inference methods from @cite_11 as our baseline inference methods.
 {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_9",
"@cite_2",
"@cite_16",
"@cite_11"
],
"mid": [
"1838635991",
"2040228409",
"2952306472",
"2141481372",
"2134493890",
"1526729971"
],
"abstract": [
"The human genetics community needs robust protocols that enable secure sharing of genomic data from participants in genetic research. Beacons are web servers that answer allele-presence queries—such as “Do you have a genome that has a specific nucleotide (e.g., A) at a specific genomic position (e.g., position 11,272 on chromosome 1)?”—with either “yes” or “no.” Here, we show that individuals in a beacon are susceptible to re-identification even if the only data shared include presence or absence information about alleles in a beacon. Specifically, we propose a likelihood-ratio test of whether a given individual is present in a given genetic beacon. Our test is not dependent on allele frequencies and is the most powerful test for a specified false-positive rate. Through simulations, we showed that in a beacon with 1,000 individuals, re-identification is possible with just 5,000 queries. Relatives can also be identified in the beacon. Re-identification is possible even in the presence of sequencing errors and variant-calling differences. In a beacon constructed with 65 European individuals from the 1000 Genomes Project, we demonstrated that it is possible to detect membership in the beacon with just 250 SNPs. With just 1,000 SNP queries, we were able to detect the presence of an individual genome from the Personal Genome Project in an existing beacon. Our results show that beacons can disclose membership and implied phenotypic information about participants and do not protect privacy a priori. We discuss risk mitigation through policies and standards such as not allowing anonymous pings of genetic beacons and requiring minimum beacon sizes.",
"We use high-density single nucleotide polymorphism (SNP) genotyping microarrays to demonstrate the ability to accurately and robustly determine whether individuals are in a complex genomic DNA mixture. We first develop a theoretical framework for detecting an individual's presence within a mixture, then show, through simulations, the limits associated with our method, and finally demonstrate experimentally the identification of the presence of genomic DNA of specific individuals within a series of highly complex genomic mixtures, including mixtures where an individual contributes less than 0.1% of the total genomic DNA. These findings shift the perceived utility of SNPs for identifying individual trace contributors within a forensics mixture, and suggest future research efforts into assessing the viability of previously suboptimal DNA sources due to sample contamination. These findings also suggest that composite statistics across cohorts, such as allele frequency or genotype counts, do not mask identity within genome-wide association studies. The implications of these findings are discussed.",
"Genomic datasets are often associated with sensitive phenotypes. Therefore, the leak of membership information is a major privacy risk. Genomic beacons aim to provide a secure, easy to implement, and standardized interface for data sharing by only allowing yes/no queries on the presence of specific alleles in the dataset. Previously deemed secure against re-identification attacks, beacons were shown to be vulnerable despite their stringent policy. Recent studies have demonstrated that it is possible to determine whether the victim is in the dataset, by repeatedly querying the beacon for his/her single nucleotide polymorphisms (SNPs). In this work, we propose a novel re-identification attack and show that the privacy risk is more serious than previously thought. Using the proposed attack, even if the victim systematically hides informative SNPs (i.e., SNPs with very low minor allele frequency, MAF), it is possible to infer the alleles at positions of interest as well as the beacon query results with very high confidence. Our method is based on the fact that alleles at different loci are not necessarily independent. We use the linkage disequilibrium and a high-order Markov chain-based algorithm for the inference. We show that in a simulated beacon with 65 individuals from the CEU population, we can infer membership of individuals with 95% confidence with only 5 queries, even when SNPs with MAF less than 0.05 are hidden. This means we need less than 0.5% of the number of queries that existing works require, to determine beacon membership under the same conditions. We further show that countermeasures such as hiding certain parts of the genome or setting a query budget for the user would fail to protect the privacy of the participants under our adversary model.",
"Genome-wide association studies (GWAS) aim at discovering the association between genetic variations, particularly single-nucleotide polymorphism (SNP), and common diseases, which is well recognized to be one of the most important and active areas in biomedical research. Also renowned is the privacy implication of such studies, which has been brought into the limelight by the recent attack proposed by Homer et al. Homer's attack demonstrates that it is possible to identify a GWAS participant from the allele frequencies of a large number of SNPs. Such a threat, unfortunately, was found in our research to be significantly understated. In this paper, we show that individuals can actually be identified from even a relatively small set of statistics, as those routinely published in GWAS papers. We present two attacks. The first one extends Homer's attack with a much more powerful test statistic, based on the correlations among different SNPs described by the coefficient of determination (r^2). This attack can determine the presence of an individual from the statistics related to a couple of hundred SNPs. The second attack can lead to complete disclosure of hundreds of participants' SNPs, through analyzing the information derived from published statistics. We also found that those attacks can succeed even when the precisions of the statistics are low and part of the data is missing. We evaluated our attacks on real human genomes and concluded that such threats are completely realistic.",
"Recent advances in genome-scale, system-level measurements of quantitative phenotypes (transcriptome, metabolome, and proteome) promise to yield unprecedented biological insights. In this environment, broad dissemination of results from genome-wide association studies (GWASs) or deep-sequencing efforts is highly desirable. However, summary results from case-control studies (allele frequencies) have been withdrawn from public access because it has been shown that they can be used for inferring participation in a study if the individual's genotype is available. A natural question that follows is how much private information is contained in summary results from quantitative trait GWAS such as regression coefficients or p values. We show that regression coefficients for many SNPs can reveal the person's participation and, for participants, his or her phenotype with high accuracy. Our power calculations show that regression coefficients contain as much information on individuals as allele frequencies do, if the person's phenotype is rather extreme or if multiple phenotypes are available, as has been increasingly facilitated by the use of multiple-omics data sets. These findings emphasize the need to devise a mechanism that allows data sharing that will facilitate scientific progress without sacrificing privacy protection.",
"As genomic data becomes widely used, the problem of genomic data privacy becomes a hot interdisciplinary research topic among geneticists, bioinformaticians, and security and privacy experts. Practical attacks have been identified on genomic data, and thus break the privacy expectations of individuals who contribute their genomic data to medical research, or simply share their data online. Frustrating as it is, the problem could become even worse. Existing genomic privacy breaches rely on low-order SNV (Single Nucleotide Variant) correlations. Our work shows that far more powerful attacks can be designed if high-order correlations are utilized. We corroborate this concern by making use of different SNV correlations based on various genomic data models and applying them to an inference attack on individuals' genotype data with hidden SNVs. We also show that low-order models behave very differently from real genomic data and therefore should not be relied upon for privacy-preserving solutions."
]
}

1908.11315
 2970031230
 Due to its hereditary nature, genomic data is not only linked to its owner but to that of close relatives as well. As a result, its sensitivity does not really degrade over time; in fact, the relevance of a genomic sequence is likely to outlast the security provided by encryption. This prompts the need for specialized techniques providing long-term security for genomic data, yet the only available tool for this purpose is GenoGuard (2015). By relying on Honey Encryption, GenoGuard is secure against an adversary that can brute-force all possible keys; i.e., whenever an attacker tries to decrypt using an incorrect password, she will obtain an incorrect but plausible-looking decoy sequence. In this paper, we set out to analyze the real-world security guarantees provided by GenoGuard; specifically, we assess how much more information access to a ciphertext encrypted using GenoGuard yields, compared to one that was not. Overall, we find that, if the adversary has access to side information in the form of partial information from the target sequence, the use of GenoGuard does appreciably increase her power in determining the rest of the sequence. We show that, in the case of a sequence encrypted using an easily guessable (low-entropy) password, the adversary is able to rule out most decoy sequences and obtain the target sequence with just 2.5% of it available as side information. In the case of a harder-to-guess (high-entropy) password, we show that the adversary still obtains, on average, better accuracy in guessing the rest of the target sequence than state-of-the-art genomic sequence inference methods, achieving up to a 15% improvement in accuracy.
 Data sharing. Progress in genomics research is dependent on collaboration and data sharing among different institutions. Given the sensitive nature of the data, as well as regulatory and ethical constraints, this often proves to be a challenging task. @cite_4 propose the use of secret sharing to distribute data among several entities and, using secure multi-party computation, support privacy-friendly computations across multiple entities. @cite_3 present GENSETS, a genome-wide, privacy-preserving similar patients querying system using genomic edit distance approximation and private set difference protocols. Then, @cite_21 use Software Guard Extensions (SGX) to build a privacy-preserving international collaboration tool; this enables secure and distributed computations over encrypted data, thus supporting the analysis of rare disease genetic data across different continents. Finally, Oprisanu and De Cristofaro @cite_14 present a framework (``AnoniMME'') geared toward supporting anonymous queries within the Matchmaker Exchange platform, which allows researchers to perform queries for rare genetic disease discovery over multiple federated databases.
 {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_4",
"@cite_3"
],
"mid": [
"2951951981",
"2568218703",
"2166106997",
""
],
"abstract": [
"Motivation: Advances in genome sequencing and genomics research are bringing us closer to a new era of personalized medicine, where healthcare can be tailored to the individual’s genetic makeup, and to more effective diagnosis and treatment of rare genetic diseases. Much of this progress depends on collaborations and access to genomes, and thus a number of initiatives have been introduced to support seamless data sharing. Among these, the Global Alliance for Genomics and Health runs a popular platform, called Matchmaker Exchange, which allows researchers to perform queries for rare genetic disease discovery over multiple federated databases. Queries include gene variations which are linked to rare diseases, and the ability to find other researchers that have seen or have interest in those variations is extremely valuable. Nonetheless, in some cases, researchers may be reluctant to use the platform since the queries they make (thus, what they are working on) are revealed to other researchers, and this creates concerns with privacy and competitive advantage. Contributions: We present AnoniMME, a novel framework geared to enable anonymous queries within the Matchmaker Exchange platform. The framework, building on a cryptographic primitive called Reverse Private Information Retrieval (PIR), lets researchers anonymously query the federated platform, in a multi-server setting. Specifically, they write their query, along with a public encryption key, anonymously in a public database. Responses are also supported, so that other researchers can respond to queries by providing their encrypted contact details. Availability and Implementation: https://github.com/bristena-op/AnoniMME.",
"We introduce PRINCESS, a privacy-preserving international collaboration framework for analyzing rare disease genetic data that are distributed across different continents. PRINCESS leverages Software Guard Extensions (SGX) and hardware for trustworthy computation. Unlike a traditional international collaboration model, where individual-level patient DNA are physically centralized at a single site, PRINCESS performs a secure and distributed computation over encrypted data, fulfilling institutional policies and regulations for protected health information. To demonstrate PRINCESS' performance and feasibility, we conducted a family-based allelic association study for Kawasaki Disease, with data hosted in three different continents. The experimental results show that PRINCESS provides secure and accurate analyses much faster than alternative solutions, such as homomorphic encryption and garbled circuits (over 40 000× faster). https://github.com/achenfengb/PRINCESS_opensource. shw070@ucsd.edu. Supplementary data are available at Bioinformatics online.",
"Motivation: Increased availability of various genotyping techniques has initiated a race for finding genetic markers that can be used in diagnostics and personalized medicine. Although many genetic risk factors are known, key causes of common diseases with complex heritage patterns are still unknown. Identification of such complex traits requires a targeted study over a large collection of data. Ideally, such studies bring together data from many biobanks. However, data aggregation on such a large scale raises many privacy issues. Results: We show how to conduct such studies without violating privacy of individual donors and without leaking the data to third parties. The presented solution has provable security guarantees. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.",
""
]
}

1908.11315
 2970031230
 Due to its hereditary nature, genomic data is not only linked to its owner but to that of close relatives as well. As a result, its sensitivity does not really degrade over time; in fact, the relevance of a genomic sequence is likely to outlast the security provided by encryption. This prompts the need for specialized techniques providing long-term security for genomic data, yet the only available tool for this purpose is GenoGuard (2015). By relying on Honey Encryption, GenoGuard is secure against an adversary that can brute-force all possible keys; i.e., whenever an attacker tries to decrypt using an incorrect password, she will obtain an incorrect but plausible-looking decoy sequence. In this paper, we set out to analyze the real-world security guarantees provided by GenoGuard; specifically, we assess how much more information access to a ciphertext encrypted using GenoGuard yields, compared to one that was not. Overall, we find that, if the adversary has access to side information in the form of partial information from the target sequence, the use of GenoGuard does appreciably increase her power in determining the rest of the sequence. We show that, in the case of a sequence encrypted using an easily guessable (low-entropy) password, the adversary is able to rule out most decoy sequences and obtain the target sequence with just 2.5% of it available as side information. In the case of a harder-to-guess (high-entropy) password, we show that the adversary still obtains, on average, better accuracy in guessing the rest of the target sequence than state-of-the-art genomic sequence inference methods, achieving up to a 15% improvement in accuracy.
 Privacy-friendly testing. Another line of work focuses on protecting privacy in the context of personal genomic testing, i.e., computational tests run on sequenced genomes to assess, e.g., genetic susceptibility to diseases, determine the best course of treatment, etc. @cite_31 assume that each individual keeps a copy of their data and consents to tests done in such a way that only the outcome is disclosed. They present a few cryptographic protocols allowing researchers to privately search for mutations in specific genes. @cite_34 rely on a semi-trusted party to store an encrypted copy of the individual's genomic data: using additively homomorphic encryption and proxy re-encryption, they allow a Medical Center to privately perform disease susceptibility tests on patients' SNPs. @cite_6 introduce a new cryptographic primitive called Controlled Functional Encryption (CFE), which allows users to learn only certain functions of the (encrypted) data, using keys obtained from an authority; however, the client is required to send a fresh key request to the authority every time they want to evaluate a function on a ciphertext. Overall, for an overview of privacy-enhancing technologies applied to genetic testing, we refer the reader to @cite_13.
 {
"cite_N": [
"@cite_31",
"@cite_34",
"@cite_13",
"@cite_6"
],
"mid": [
"2087135382",
"2133711597",
"2885741165",
"2056556714"
],
"abstract": [
"Recent advances in DNA sequencing technologies have put ubiquitous availability of fully sequenced human genomes within reach. It is no longer hard to imagine the day when everyone will have the means to obtain and store one's own DNA sequence. Widespread and affordable availability of fully sequenced genomes immediately opens up important opportunities in a number of health-related fields. In particular, common genomic applications and tests performed in vitro today will soon be conducted computationally, using digitized genomes. New applications will be developed as genome-enabled medicine becomes increasingly preventive and personalized. However, this progress also prompts significant privacy challenges associated with potential loss, theft, or misuse of genomic data. In this paper, we begin to address genomic privacy by focusing on three important applications: Paternity Tests, Personalized Medicine, and Genetic Compatibility Tests. After carefully analyzing these applications and their privacy requirements, we propose a set of efficient techniques based on private set operations. This allows us to implement in silico, in a secure fashion, some operations that are currently performed via in vitro methods. Experimental results demonstrate that the proposed techniques are both feasible and practical today.",
"In this paper, we propose privacy-enhancing technologies for medical tests and personalized medicine methods that use patients' genomic data. Focusing on genetic disease-susceptibility tests, we develop a new architecture (between the patient and the medical unit) and propose a \"privacy-preserving disease susceptibility test\" (PDS) by using homomorphic encryption and proxy re-encryption. Assuming the whole genome sequencing to be done by a certified institution, we propose to store patients' genomic data encrypted by their public keys at a \"storage and processing unit\" (SPU). Our proposed solution lets the medical unit retrieve the encrypted genomic data from the SPU and process it for medical tests and personalized medicine methods, while preserving the privacy of patients' genomic data. We also quantify the genomic privacy of a patient (from the medical unit's point of view) and show how a patient's genomic privacy decreases with the genetic tests he undergoes due to (i) the nature of the genetic test, and (ii) the characteristics of the genomic data. Furthermore, we show how basic policies and obfuscation methods help to keep the genomic privacy of a patient at a high level. We also implement and show, via a complexity analysis, the practicality of PDS.",
"",
"Motivated by privacy and usability requirements in various scenarios where existing cryptographic tools (like secure multi-party computation and functional encryption) are not adequate, we introduce a new cryptographic tool called Controlled Functional Encryption (CFE). As in functional encryption, CFE allows a user (client) to learn only certain functions of encrypted data, using keys obtained from an authority. However, we allow (and require) the client to send a fresh key request to the authority every time it wants to evaluate a function on a ciphertext. We obtain efficient solutions by carefully combining CCA2-secure public-key encryption (or re-randomizable RCCA-secure public-key encryption, depending on the nature of security desired) with Yao's garbled circuits. Our main contributions in this work include developing and formally defining the notion of CFE; designing theoretical and practical constructions of CFE schemes achieving these definitions for specific and general classes of functions; and evaluating the performance of our constructions on various application scenarios."
]
}

Dataset Card for Multi-XScience
Dataset Summary
Multi-XScience is a large-scale multi-document summarization dataset created from scientific articles. Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references.
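The task pairing can be sketched in a few lines of Python: the source side combines the paper's abstract with the abstracts of its cited references, and the target is the related-work text. This is an illustrative composition only; the `build_example` helper, the `|||` separator, and the concatenation order are assumptions, not the dataset's official preprocessing.

```python
def build_example(record):
    """Return a (source_text, target_text) pair for one dataset record.

    The record layout follows the Data Fields section: ref_abstract holds
    parallel lists, so entry i of cite_N and abstract describe the same
    cited paper.
    """
    refs = record["ref_abstract"]
    ref_parts = [
        f"{cite}: {abs_}"
        for cite, abs_ in zip(refs["cite_N"], refs["abstract"])
    ]
    # Prepend the query paper's abstract, then each tagged reference abstract.
    source = record["abstract"] + " ||| " + " ||| ".join(ref_parts)
    target = record["related_work"]
    return source, target


# Toy record with the same shape as the instance shown below.
record = {
    "abstract": "We study X.",
    "ref_abstract": {
        "cite_N": ["@cite_16", "@cite_26"],
        "mid": ["1481005306", "1641082372"],
        "abstract": ["Ref one abstract.", "Ref two abstract."],
    },
    "related_work": "Prior work @cite_16 studied Y; @cite_26 extended it.",
}

src, tgt = build_example(record)
```

Keeping the `@cite_N` symbols in the source lets a model learn to emit them in the generated related-work text, where they can later be replaced by real citation markers.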
Supported Tasks and Leaderboards
[More Information Needed]
Languages
The text in the dataset is in English.
Dataset Structure
Data Instances
{'abstract': 'Author(s): Kuperberg, Greg; Thurston, Dylan P.  Abstract: We give a purely topological definition of the perturbative quantum invariants of links and 3-manifolds associated with Chern-Simons field theory. Our definition is as close as possible to one given by Kontsevich. We will also establish some basic properties of these invariants, in particular that they are universally finite type with respect to algebraically split surgery and with respect to Torelli surgery. Torelli surgery is a mutual generalization of blink surgery of Garoufalidis and Levine and clasper surgery of Habiro.', 'aid': 'math9912167', 'mid': '1631980677', 'ref_abstract': {'abstract': ['This note is a sequel to our earlier paper of the same title [4] and describes invariants of rational homology 3-spheres associated to acyclic orthogonal local systems. Our work is in the spirit of the Axelrod–Singer papers [1], generalizes some of their results, and furnishes a new setting for the purely topological implications of their work.', 'Recently, Mullins calculated the Casson-Walker invariant of the 2-fold cyclic branched cover of an oriented link in S^3 in terms of its Jones polynomial and its signature, under the assumption that the 2-fold branched cover is a rational homology 3-sphere. Using elementary principles, we provide a similar calculation for the general case. In addition, we calculate the LMO invariant of the p-fold branched cover of twisted knots in S^3 in terms of the Kontsevich integral of the knot.'], 'cite_N': ['@cite_16', '@cite_26'], 'mid': ['1481005306', '1641082372']}, 'related_work': 'Two other generalizations that can be considered are invariants of graphs in 3-manifolds, and invariants associated to other flat connections @cite_16 . We will analyze these in future work. Among other things, there should be a general relation between flat bundles and links in 3-manifolds on the one hand and finite covers and branched covers on the other hand @cite_26 .'}
Data Fields
{
  abstract: text of paper abstract
  aid: arXiv id
  mid: Microsoft Academic Graph id
  ref_abstract: {
    abstract: text of reference paper (cite_N) abstract
    cite_N: special cite symbol
    mid: reference paper's (cite_N) Microsoft Academic Graph id
  }
  related_work: text of paper related work
}
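Because `cite_N`, `mid`, and `abstract` inside `ref_abstract` are parallel lists, resolving the `@cite_N` placeholders that appear in `related_work` is a simple zip-and-lookup. A minimal sketch, assuming the record shape above (the `cited_refs` helper and the trimmed record are illustrative):

```python
import re


def cited_refs(record):
    """Map each @cite_N token in related_work to the abstract of the
    corresponding reference. Entry i of cite_N and abstract inside
    ref_abstract describe the same cited paper."""
    refs = record["ref_abstract"]
    by_cite = dict(zip(refs["cite_N"], refs["abstract"]))
    tokens = re.findall(r"@cite_\d+", record["related_work"])
    return {tok: by_cite.get(tok) for tok in tokens}


# Trimmed version of the record shown under Data Instances.
record = {
    "ref_abstract": {
        "cite_N": ["@cite_16", "@cite_26"],
        "mid": ["1481005306", "1641082372"],
        "abstract": ["Abstract of cite_16.", "Abstract of cite_26."],
    },
    "related_work": "Generalizations @cite_16 ... branched covers @cite_26 .",
}

resolved = cited_refs(record)
```

`by_cite.get` returns `None` for any placeholder with no matching entry, which can happen in scraped data and is worth checking for before training.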
Data Splits
The data is split into training, validation, and test sets.
| train | validation | test |
|-------|------------|------|
| 30369 | 5066       | 5093 |
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
[More Information Needed]
Citation Information
@article{lu2020multi,
title={Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
author={Lu, Yao and Dong, Yue and Charlin, Laurent},
journal={arXiv preprint arXiv:2010.14235},
year={2020}
}
Contributions
Thanks to @moussaKam for adding this dataset.