Dataset columns: aid (string, 9-15 chars); mid (string, 7-10 chars); abstract (string, 78-2.56k chars); related_work (string, 92-1.77k chars); ref_abstract (sequence).
math9912167
1631980677
Author(s): Kuperberg, Greg; Thurston, Dylan P. | Abstract: We give a purely topological definition of the perturbative quantum invariants of links and 3-manifolds associated with Chern-Simons field theory. Our definition is as close as possible to one given by Kontsevich. We will also establish some basic properties of these invariants, in particular that they are universally finite type with respect to algebraically split surgery and with respect to Torelli surgery. Torelli surgery is a mutual generalization of blink surgery of Garoufalidis and Levine and clasper surgery of Habiro.
Two other generalizations that can be considered are invariants of graphs in 3-manifolds, and invariants associated to other flat connections @cite_16 . We will analyze these in future work. Among other things, there should be a general relation between flat bundles and links in 3-manifolds on the one hand and finite covers and branched covers on the other hand @cite_26 .
{ "cite_N": [ "@cite_16", "@cite_26" ], "mid": [ "1481005306", "1641082372" ], "abstract": [ "This note is a sequel to our earlier paper of the same title [4] and describes invariants of rational homology 3-spheres associated to acyclic orthogonal local systems. Our work is in the spirit of the Axelrod–Singer papers [1], generalizes some of their results, and furnishes a new setting for the purely topological implications of their work.", "Recently, Mullins calculated the Casson-Walker invariant of the 2-fold cyclic branched cover of an oriented link in S^3 in terms of its Jones polynomial and its signature, under the assumption that the 2-fold branched cover is a rational homology 3-sphere. Using elementary principles, we provide a similar calculation for the general case. In addition, we calculate the LMO invariant of the p-fold branched cover of twisted knots in S^3 in terms of the Kontsevich integral of the knot." ] }
cs9910011
2168463568
A statistical model for segmentation and word discovery in child directed speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described and results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.
Model Based Dynamic Programming, hereafter referred to as MBDP-1 @cite_0 , is probably the most recent work that addresses the exact same issue as that considered in this paper. Both the approach presented in this paper and Brent's MBDP-1 are based on explicit probability models. Approaches not based on explicit probability models include those based on information-theoretic criteria such as MDL, transitional probability, or simple recurrent networks. The maximum likelihood approach due to Olivier:SGL68 is probabilistic in the sense that it is geared towards explicitly calculating the most probable segmentation of each block of input utterances. However, it is not based on a formal statistical model. To avoid needless repetition, we only describe Brent's MBDP-1 below and direct the interested reader to Brent:EPS99, which provides an excellent review of many of the algorithms mentioned above.
{ "cite_N": [ "@cite_0" ], "mid": [ "2074546930" ], "abstract": [ "This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word-order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on wordss instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that our algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances." ] }
cs9911003
2950670108
We solve the subgraph isomorphism problem in planar graphs in linear time, for any pattern of constant size. Our results are based on a technique of partitioning the planar graph into pieces of small tree-width, and applying dynamic programming within each piece. The same methods can be used to solve other planar graph problems including connectivity, diameter, girth, induced subgraph isomorphism, and shortest paths.
Recently we were able to characterize the graphs that can occur at most @math times as a subgraph isomorph in an @math -vertex planar graph: they are exactly the 3-connected planar graphs @cite_41 . However, our proof does not lead to an efficient algorithm for 3-connected planar subgraph isomorphism. In this paper we use different techniques which do not depend on high-order connectivity.
{ "cite_N": [ "@cite_41" ], "mid": [ "2074992286" ], "abstract": [ "It is well known that any planar graph contains at most O(n) complete subgraphs. We extend this to an exact characterization: G occurs O(n) times as a subgraph of any planar graph, if and only if G is three-connected. We generalize these results to similarly characterize certain other minor-closed families of graphs; in particular, G occurs O(n) times as a subgraph of the Kb,c-free graphs, b ≥ c and c ≤ 4, iff G is c-connected. Our results use a simple Ramsey-theoretic lemma that may be of independent interest. © 1993 John Wiley & Sons, Inc." ] }
hep-th9908200
2160091034
Daviau showed the equivalence of matrix Dirac theory, formulated within a spinor bundle $S_x \simeq \mathbb{C}^4_x$, to a Clifford algebraic formulation within the space Clifford algebra $C\ell(\mathbb{R}^3,\delta) \simeq M_2(\mathbb{C}) \simeq P$, the Pauli algebra (matrices), $\simeq \mathbb{H} \oplus \mathbb{H} \simeq$ biquaternions. We will show that Daviau's map $\theta : \mathbb{C}^4 \to M_2(\mathbb{C})$ is an isomorphism. It is shown that Hestenes' and Parra's formulations are equivalent to Daviau's Clifford algebra formulation, which uses outer automorphisms. The connection between the different formulations is quite remarkable, since it connects the left and right action on the Pauli algebra itself, viewed as a bi-module, with the left (resp. right) action of the enveloping algebra $P^e \simeq P \otimes P^T$ on $P$. The isomorphism established in this article and given by Daviau's map clearly shows that right and left actions are of similar type. This should be compared with attempts of Hestenes, Daviau, and others to interpret the right action as the iso-spin freedom.
A further genuine and important approach to the spinor-tensor transition was developed, starting probably with Crawford, by P. Lounesto, @cite_6 and references therein. He investigated the question of how a spinor field can be reconstructed from known tensor densities. The major characterization is derived, using Fierz-Kofink identities, from elements called Boomerangs -- because they are able to come back to the spinorial picture. Lounesto's result is a characterization of spinors based on multi-vector relations, which unveils a new, previously unknown type of spinor.
{ "cite_N": [ "@cite_6" ], "mid": [ "2082565556" ], "abstract": [ "A historical review of spinors is given together with a construction of spinor spaces as minimal left ideals of Clifford algebras. Spinor spaces of euclidean spaces over reals have a natural linear structure over reals, complex numbers or quaternions. Clifford algebras have involutions which induce bilinear forms or scalar products on spinor spaces. The automorphism groups of these scalar products of spinors are determined and also classified." ] }
cs9903014
1612660921
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Pioneering research in dynamic runtime optimization was done by Hansen @cite_8 , who first described a fully automated system for runtime code optimization. His system was similar in structure to our system---it was composed of a loader, a profiler, and an optimizer---but used profiling data only to decide when to optimize and what to optimize, not how to optimize. Also, his system interpreted code prior to optimization, since load-time code generation was too memory- and time-consuming at the time.
{ "cite_N": [ "@cite_8" ], "mid": [ "2101776604" ], "abstract": [ "Abstract : This thesis investigates adaptive compiler systems that perform, during program execution, code optimizations based on the dynamic behavior of the program as opposed to current approaches that employ a fixed code generation strategy, i.e., one in which a predetermined set of code optimizations are applied at compile-time to an entire program. The main problems associated with such adaptive systems are studied in general: which optimizations to apply to what parts of the program and when. Two different optimization strategies result: an ideal scheme which is not practical to implement, and a more basic scheme that is. The design of a practical system is discussed for the FORTRAN IV language. The system was implemented and tested with programs having different behavioral characteristics." ] }
cs9903014
1612660921
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Hansen's work was followed by several other projects that have investigated the benefits of runtime optimization: the Smalltalk @cite_33 and SELF @cite_0 systems, which focused on the benefits of dynamic optimization in an object-oriented environment; Morph, a project developed at Harvard University @cite_16 ; and the system described by the authors of this paper @cite_4 @cite_30 . Other projects have experimented with optimization at link time rather than at runtime @cite_18 . At link time, many of the problems described in this paper are non-existent, among them the decision of when to optimize, what to optimize, and how to replace code. However, there is also a price to pay, namely that link-time optimization cannot be performed in the presence of dynamic loading.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_4", "@cite_33", "@cite_0", "@cite_16" ], "mid": [ "1542872339", "2112378054", "1541680547", "1993318777", "2167997514", "2153228154" ], "abstract": [ "Despite the apparent success of the Java Virtual Machine, its lackluster performance makes it ill-suited for many speed-critical applications. Although the latest just-in-time compilers and dedicated Java processors try to remedy this situation, optimized code compiled directly from a C program source is still considerably faster than software transported via Java byte-codes. This is true even if the Java byte-codes are subsequently further translated into native code. In this paper, we claim that these performance penalties are not a necessary consequence of machine-independence, but related to Java's particular intermediate representation and runtime architecture. We have constructed a prototype and are further developing a software transportability scheme founded on a tree-based alternative to Java byte-codes. This tree-based intermediate representation is not only twice as compact as Java byte-codes, but also contains more high-level information, some of which is critical for advanced code optimizations. Our architecture not only provides on-the-fly code generation from this intermediate representation, but also continuous re-optimization of the existing code-base by a low-priority background process. The re-optimization process is guided by up-to-the-minute profiling data, leading to superior runtime performance.", "Modifying code after the compiler has generated it can be useful for both optimization and instrumentation. Several years ago we designed the Mahler system, which uses link-time code modification for a variety of tools on our experimental Titan workstations. Killian’s Pixie tool works even later, translating a fully-linked MIPS executable file into a new version with instrumentation added. Recently we wanted to develop a hybrid of the two, that would let us experiment with both optimization and instrumentation on a standard workstation, preferably without requiring us to modify the normal compilers and linker. This paper describes prototypes of two hybrid systems, closely related to Mahler and Pixie. We implemented basic-block counting in both, and compare the resulting time and space expansion to those of Mahler and Pixie.", "In the past few years, code optimization has become a major field of research. Many efforts have been undertaken to find new sophisticated algorithms that fully exploit the computing power of today's advanced microprocessors. Most of these algorithms do very well in statically linked, monolithic software systems, but perform perceptibly worse in extensible systems. The modular structure of these systems imposes a natural barrier for intermodular compile-time optimizations. In this paper we discuss a different approach in which optimization is no longer performed at compile-time, but is delayed until runtime. Reoptimized module versions are generated on-the-fly while the system is running, replacing earlier less optimized versions.", "The Smalltalk-80* programming language includes dynamic storage allocation, full upward funargs, and universally polymorphic procedures; the Smalltalk-80 programming system features interactive execution with incremental compilation, and implementation portability. These features of modern programming systems are among the most difficult to implement efficiently, even individually. 
A new implementation of the Smalltalk-80 system, hosted on a small microprocessor-based computer, achieves high performance while retaining complete (object code) compatibility with existing implementations. This paper discusses the most significant optimization techniques developed over the course of the project, many of which are applicable to other languages. The key idea is to represent certain runtime state (both code and data) in more than one form, and to convert between forms when needed.", "Crossing abstraction boundaries often incurs a substantial run-time overhead in the form of frequent procedure calls. Thus, pervasive use of abstraction, while desirable from a design standpoint, may lead to very inefficient programs. Aggressively optimizing compilers can reduce this overhead but conflict with interactive programming environments because they introduce long compilation pauses and often preclude source-level debugging. Thus, programmers are caught on the horns of two dilemmas: they have to choose between abstraction and efficiency, and between responsive programming environments and efficiency. This dissertation shows how to reconcile these seemingly contradictory goals. Four new techniques work together to achieve this: - Type feedback achieves high performance by allowing the compiler to inline message sends based on information extracted from the runtime system. - Adaptive optimization achieves high responsiveness without sacrificing performance by using a fast compiler to generate initial code while automatically recompiling heavily used program parts with an optimizing compiler. - Dynamic deoptimization allows source-level debugging of optimized code by transparently recreating non-optimized code as needed. - Polymorphic inline caching speeds up message dispatch and, more significantly, collects concrete type information for the compiler. With better performance yet good interactive behavior, these techniques reconcile exploratory programming, ubiquitous abstraction, and high performance.", "The Morph system provides a framework for automatic collection and management of profile information and application of profile-driven optimizations. In this paper, we focus on the operating system support that is required to collect and manage profile information on an end-user's workstation in an automatic, continuous, and transparent manner. Our implementation for a Digital Alpha machine running Digital UNIX 4.0 achieves run-time overheads of less than 0.3% during profile collection. Through the application of three code layout optimizations, we further show that Morph can use statistical profiles to improve application performance. With appropriate system support, automatic profiling and optimization is both possible and effective." ] }
cs9903014
1612660921
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Common to the above-mentioned work is that the main focus has always been on functional aspects, that is, how to profile and which optimizations to perform. Related to this is research on how to boost application performance by combining profiling data and code optimizations at compile time (not at runtime), including work on method dispatch optimizations for object-oriented programming languages @cite_22 @cite_35 , profile-guided intermodular optimizations @cite_3 @cite_26 , code positioning techniques @cite_13 @cite_25 , and profile-guided data cache locality optimizations @cite_29 @cite_10 @cite_12 .
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_22", "@cite_10", "@cite_29", "@cite_3", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2141293928", "2030400507", "2164470257", "2117703621", "", "1524758670", "2116672403", "2162612712", "177739376" ], "abstract": [ "Polymorphic inline caches (PICs) provide a new way to reduce the overhead of polymorphic message sends by extending inline caches to include more than one cached lookup result per call site. For a set of typical object-oriented SELF programs, PICs achieve a median speedup of 11 .", "SUMMARY This paper describes critical implementation issues that must be addressed to develop a fully automatic inliner. These issues are: integration into a compiler, program representation, hazard prevention, expansion sequence control, and program modification. An automatic inter-file inliner that uses profile information has been implemented and integrated into an optimizing C compiler. The experimental results show that this inliner achieves significant speedups for production C programs.", "The Smalltalk-80 system makes it possible to write programs quickly by providing object-oriented programming, incremental compilation, run-time type checking, use-extensible data types and control structures, and an interactive graphical interface. However, the potential savings in programming effort have been curtailed by poor performance in widely available computers or high processor cost. Smalltalk-80 systems pose tough challenges for implementors: dynamic data typing, a high-level instruction set, frequent and expensive procedure calls, and object-oriented storage management. The dissertation documents two results that run counter to conventional wisdom: that a reduced instruction set computer can offer excellent performance for a system with dynamic data typing such as Smalltalk-80, and that automatic storage reclamation need not be time-consuming. This project was sponsored by Defense Advance Research Projects Agency (DoD) ARPA Order No. 3803, monitored by Naval Electronic System Command under Contractor No. N00034-R-0251. It was also sponsored by Defense Advance Research Projects Agency (DoD) ARPA Order No. 4871, monitored by Naval Electronic Systems Command under Contract No. N00039-84-C-0089.", "The cost of accessing main memory is increasing. Machine designers have tried to mitigate the consequences of the processor and memory technology trends underlying this increasing gap with a variety of techniques to reduce or tolerate memory latency. These techniques, unfortunately, are only occasionally successful for pointer-manipulating programs. Recent research has demonstrated the value of a complementary approach, in which pointer-based data structures are reorganized to improve cache locality.This paper studies a technique for using a generational garbage collector to reorganize data structures to produce a cache-conscious data layout, in which objects with high temporal affinity are placed next to each other, so that they are likely to reside in the same cache block. The paper explains how to collect, with low overhead, real-time profiling information about data access patterns in object-oriented languages, and describes a new copying algorithm that utilizes this information to produce a cache-conscious object layout.Preliminary results show that this technique reduces cache miss rates by 21--42 , and improves program performance by 14--37 over Cheney's algorithm. 
We also compare our layouts against those produced by the Wilson-Lam-Moher algorithm, which attempts to improve program locality at the page level. Our cache-conscious object layouts reduce cache miss rates by 20--41% and improve program performance by 18--31% over their algorithm, indicating that improving locality at the page level is not necessarily beneficial at the cache level.", "", "We have developed a system called OM to explore the problem of code optimization at link-time. OM takes a collection of object modules constituting the entire program, and converts the object code into a symbolic Register Transfer Language (RTL) form that can be easily manipulated. This RTL is then transformed by intermodule optimization and finally converted back into object form. Although much high-level information about the program is gone at link-time, this approach enables us to perform optimizations that a compiler looking at a single module cannot see. Since object modules are more or less independent of the particular source language or compiler, this also gives us the chance to improve the code in ways that some compilers might simply have missed. To test the concept, we have used OM to build an optimizer that does interprocedural code motion. It moves simple loop-invariant code out of loops, even when the loop body extends across many procedures and the loop control is in a different procedure from the invariant code. Our technique also easily handles ‘‘loops’’ induced by recursion rather than iteration. Our code motion technique makes use of an interprocedural liveness analysis to discover dead registers that it can use to hold loop-invariant results. This liveness analysis also lets us perform interprocedural dead code elimination. We applied our code motion and dead code removal to SPEC benchmarks compiled with optimization using the standard compilers for the DECstation 5000. Our system improved the performance by 5% on average and by more than 14% in one case. More improvement should be possible soon; at present we move only simple load and load-address operations out of loops, and we scavenge registers to hold these values, rather than completely reallocating them. This paper will appear in the March issue of Journal of Programming Languages. It replaces Technical Note TN-31, an earlier version of the same material.", "This paper presents the results of our investigation of code positioning techniques using execution profile data as input into the compilation process. The primary objective of the positioning is to reduce the overhead of the instruction memory hierarchy. After initial investigation in the literature, we decided to implement two prototypes for the Hewlett-Packard Precision Architecture (PA-RISC). The first, built on top of the linker, positions code based on whole procedures. This prototype has the ability to move procedures into an order that is determined by a “closest is best” strategy. The second prototype, built on top of an existing optimizer package, positions code based on basic blocks within procedures. Groups of basic blocks that would be better as straight-line sequences are identified as chains. These chains are then ordered according to branch heuristics. Code that is never executed during the data collection runs can be physically separated from the primary code of a procedure by a technique we devised called procedure splitting. The algorithms we implemented are described through examples in this paper.
The performance improvements from our work are also summarized in various tables and charts.", "A dynamic instruction trace often contains many unnecessary instructions that are required only by the unexecuted portion of the program. Hot-cold optimization (HCO) is a technique that realizes this performance opportunity. HCO uses profile information to partition each routine into frequently executed (hot) and infrequently executed (cold) parts. Unnecessary operations in the hot portion are removed, and compensation code is added on transitions from hot to cold as needed. We evaluate HCO on a collection of large Windows NT applications. HCO is most effective on the programs that are call intensive and have flat profiles, providing a 3-8% reduction in path length beyond conventional optimization.", "" ] }
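As an illustration of the code-positioning idea cited above, here is a simplified, hypothetical sketch of "closest is best" procedure ordering: given profiled call-pair frequencies, chains of procedures are greedily merged along the heaviest edges so that frequent caller/callee pairs end up adjacent. The call graph and counts are invented, and real implementations work on basic blocks and handle many more details.

```python
# Profiled call-pair frequencies (invented numbers): (caller, callee) -> count
edges = {("main", "parse"): 10, ("parse", "lex"): 900, ("main", "report"): 5,
         ("parse", "eval"): 400, ("eval", "lookup"): 850}

# Start with every procedure in its own chain.
chains = {p: [p] for pair in edges for p in pair}

def find_chain(proc):
    for head, chain in chains.items():
        if proc in chain:
            return head, chain
    raise KeyError(proc)

# Greedily merge chains along the heaviest edges ("closest is best").
for (a, b), _count in sorted(edges.items(), key=lambda kv: -kv[1]):
    head_a, chain_a = find_chain(a)
    head_b, chain_b = find_chain(b)
    if head_a != head_b:                      # only merge distinct chains
        chains[head_a] = chain_a + chain_b    # simplistic: concatenate whole chains
        del chains[head_b]

layout = [p for chain in chains.values() for p in chain]
print(layout)   # procedures with hot call pairs end up near each other
```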
cs9903018
1593496962
Scripting languages are becoming more and more important as a tool for software development, as they provide great flexibility for rapid prototyping and for configuring componentware applications. In this paper we present LuaJava, a scripting tool for Java. LuaJava adopts Lua, a dynamically typed interpreted language, as its script language. Great emphasis is given to the transparency of the integration between the two languages, so that objects from one language can be used inside the other like native objects. The final result of this integration is a tool that allows the construction of configurable Java applications, using off-the-shelf components, at a high level of abstraction.
For Tcl @cite_13 two integration solutions exist: the TclBlend binding @cite_11 and the Jacl implementation @cite_14 . TclBlend is a binding between Java and Tcl which, like LuaJava, allows Java objects to be manipulated by scripts. Some operations, such as access to fields and static method invocations, require specific functions. Calls to instance methods are handled naturally by Tcl commands.
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_11" ], "mid": [ "2162914120", "2789138443", "196441419" ], "abstract": [ "This paper describes the motivations and strategies behind our group’s efforts to integrate the Tcl and Java programming languages. From the Java perspective, we wish to create a powerful scripting solution for Java applications and operating environments. From the Tcl perspective, we want to allow for cross-platform Tcl extensions and leverage the useful features and user community Java has to offer. We are specifically focusing on Java tasks like Java Bean manipulation, where a scripting solution is preferable to using straight Java code. Our goal is to create a synergy between Tcl and Java, similar to that of Visual Basic and Visual C++ on the Microsoft desktop, which makes both languages more powerful together than they are individually.", "", "A mechanical brake actuator includes a manual lever which is self-locking in the active braking position. In such position, the lever and associated cable means applies tension to a spring whose force is applied to the plunger of a hydraulic master cylinder included in the conventional turntable hydraulic brake system. In the event of minor leakage and or thermal changes in the hydraulic braking system, the spring force exerted by the mechanical actuator maintains safe braking pressure when the crane is parked. When the mechanical actuator is in a release mode, the turntable hydraulic brake is foot pedal operated from the crane operator's cab without interference from the mechanical actuator." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
The main objective of this chapter was to study the basic @math -branes that one encounters in M-theory, and to treat them in a unified way. The need to unify the treatment is inspired by U-duality @cite_22 @cite_86 @cite_144 , which states that from the effective lower dimensional space-time point of view, all the charges carried by the different branes are on the same footing. While string theory 'breaks' this U-duality symmetry, choosing the NSNS string to be the fundamental object of the perturbative theory, the supergravity low-energy effective theories realize the U-duality at the classical level.
{ "cite_N": [ "@cite_86", "@cite_22", "@cite_144" ], "mid": [ "1987603965", "2141847212", "" ], "abstract": [ "Abstract The strong coupling dynamics of string theories in dimension d ⩾ 4 are studied. It is argued, among other things, that eleven-dimensional supergravity arises as a low energy limit of the ten-dimensional Type IIA superstring, and that a recently conjectured duality between the heterotic string and Type IIA superstrings controls the strong coupling dynamics of the heterotic string in five, six, and seven dimensions and implies S -duality for both heterotic and Type II strings.", "Abstract The effective action for type II string theory compactified on a six-torus is N = 8 supergravity, which is known to have an E7 duality symmetry. We show that this is broken by quantum effects to a discrete subgroup, E 7 ( Z ) , which contains both the T-duality group O(6, 6; Z ) and the S-duality group SL(2; Z ). We present evidence for the conjecture that E 7 ( Z ) is an exact ‘U-duality’ symmetry of type II string theory. This conjecture requires certain extreme black hole states to be identified with massive modes of the fundamental string. The gauge bosons from the Ramond-Ramond sector couple not to string excitations but to solitons. We discuss similar issues in the context of toroidal string compactifications to other dimensions, compactifications of the type II string on K3 × T2 and compactifications of 11-dimensional supermembrane theory.", "" ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
It should also not be underestimated that the derivation of the intersecting solutions presented in this chapter is a thorough consistency check of all the dualities acting on, and between, the supergravity theories. It is straightforward to check that, starting from one definite configuration, all its dual configurations are also found among the solutions presented here (with the exception of the solutions involving waves and KK monopoles). In this line of thought, we presented a recipe for building five and four dimensional extreme supersymmetric black holes. Some of these black holes were used in the literature to perform a microscopic counting of their entropy, as in @cite_191 @cite_61 for the 5-dimensional ones. Actually, the only (5 dimensional) black holes in the 'U-duality orbit' that were counted were the ones containing only D-branes and KK momentum. It is still an open problem to directly count the microscopic states of the same black hole but in a different M-theoretic formulation.
{ "cite_N": [ "@cite_191", "@cite_61" ], "mid": [ "2130491267", "2045285156" ], "abstract": [ "Abstract The Bekenstein-Hawking area-entropy relation S BH = A 4 is derived for a class of five-dimensional extremal black holes in string theory by counting the degeneracy of BPS solition bound states.", "Abstract Strominger and Vafa have used D-brane technology to identify and precisely count the degenerate quantum states responsible for the entropy of certain extremal, BPS-saturated black holes. Here we give a Type-II D-brane description of a class of extremal and non-extremal five-dimensional Reissner-Nordstrom solutions and identify a corresponding set of degenerate D-brane configurations. We use this information to do a string theory calculation of the entropy, radiation rate and “Hawking” temperature. The results agree perfectly with standard Hawking results for the corresponding nearly extremal Reissner-Nordstrom black holes. Although these calculations suffer from open-string strong coupling problems, we give some reasons to believe that they are nonetheless qualitatively reliable. In this optimistic scenario there would be no “information loss” in black hole quantum evolution." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
Some of the intersection rules intersectionrules point towards an M-theory interpretation in terms of open branes ending on other branes. This idea will be elaborated and made firmer in the next chapter. It suffices to say here that this interpretation is consistent with dualities if we postulate that the 'open character' of a fundamental string ending on a D-brane is invariant under dualities. S-duality directly implies, for instance, that D-strings can end on NS5-branes @cite_176 . Then T-dualities imply that all the D-branes can end on the NS5-brane. In particular, the fact that the D2-brane can end on the NS5-brane should imply that the M5-brane is a D-brane for the M2-branes @cite_176 @cite_143 @cite_38 (this could also be extrapolated from the fact that an F1-string ends on a D4-brane). In the next chapter we will see how these ideas are further supported by the presence of the Chern-Simons terms in the supergravities, and by the structure of the world-volume effective actions of the branes.
{ "cite_N": [ "@cite_143", "@cite_38", "@cite_176" ], "mid": [ "1981788818", "2032622195", "" ], "abstract": [ "Abstract Various aspects of branes in the recently proposed matrix model for M-theory are discussed. A careful analysis of the supersymmetry algebra of the matrix model uncovers some central changes which can be activated only in the large N limit. We identify the states with non-zero charges as branes of different dimensions.", "Abstract We formulate boundary conditions for an open membrane that ends on the fivebrane of M -theory. We show that the dynamics of the eleven-dimensional fivebrane can be obtained from the quantization of a “small membrane” that is confined to a single fivebrane and which moves with the speed of light. This shows that the eleven-dimensional fivebrane has an interpretation as a D -brane of an open supermembrane as has recently been proposed by Strominger and Townsend. We briefly discuss the boundary dynamics of an infinitely extended planar membrane that is stretched between two parallel fivebranes.", "" ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
In this chapter we presented only extremal configurations of intersecting branes. The natural further step to take would be to consider also non-extremal configurations of intersecting branes. There is however a subtlety: there could be a difference between intersections of non-extremal branes, and non-extremal intersections of otherwise extremal branes. If we focus on bound states (and thus not on configurations of well separated branes), it appears that a non-extremal configuration would be characterized for instance by @math charges and by its mass. There is only one additional parameter with respect to the extremal configurations. Physically, we could hardly have expected to have, say, as many non-extremality parameters as the number of branes in the bound state. Indeed, non-extremality can be roughly associated with the branes being in an excited state, and it would thus have been very unlikely that the excitations did not mix between the various branes in the bound state. Non-extremal intersecting brane solutions were first found in @cite_48 , and were derived from the equations of motion, following an approach similar to the one used here, in @cite_81 @cite_16 .
{ "cite_N": [ "@cite_48", "@cite_81", "@cite_16" ], "mid": [ "2054280159", "", "2086840642" ], "abstract": [ "Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies.", "", "Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n -form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. We then specialize to the branes occurring in type II string theories and in M-theory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the D-branes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5-brane." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
Supergravity solutions corresponding to D-branes at angles were found in @cite_31 @cite_59 @cite_178 . The resulting solutions contain, as expected, off-diagonal elements in the internal metric, and the derivation from the equations of motion as in @cite_59 is accordingly rather intricate.
{ "cite_N": [ "@cite_31", "@cite_178", "@cite_59" ], "mid": [ "2125497699", "2048444779", "" ], "abstract": [ "A low-energy background field solution is presented which describes several D-membranes oriented at angles with respect to one another. The mass and charge densities for this configuration are computed and found to saturate the Bogomol close_quote nyi-Prasad-Sommerfeld bound, implying the preservation of one-quarter of the supersymmetries. T duality is exploited to construct new solutions with nontrivial angles from the basic one. copyright ital 1997 ital The American Physical Society", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group.", "" ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
Other half-supersymmetric bound states of this class are the @math multiplets of 1- and 5-branes in type IIB theory @cite_95 @cite_0 , or more precisely the configurations F1 @math D1 and NS5 @math D5, also called @math 1- and 5-branes, where @math is the NSNS charge and @math the RR charge of the compound. The classical solutions corresponding to this latter case were actually found more simply by performing an @math transformation on the F1 or NS5 solutions.
{ "cite_N": [ "@cite_0", "@cite_95" ], "mid": [ "2022574854", "2127535930" ], "abstract": [ "The recent discovery of an explicit conformal field theory description of Type II p-branes makes it possible to investigate the existence of bound states of such objects. In particular, it is possible with reasonable precision to verify the prediction that the Type IIB superstring in ten dimensions has a family of soliton and bound state strings permuted by SL(2,Z). The space-time coordinates enter tantalizingly in the formalism as non-commuting matrices.", "An SL(2, Z) family of string solutions of type IIB supergravity in ten dimensions is constructed. The solutions are labeled by a pair of relatively prime integers, which characterize charges of the three-form field strengths. The string tensions depend on these charges in an SL(2, Z) covariant way. Compactifying on a circle and identifying with eleven-dimensional supergravity compactified on a torus implies that the modulus of the IIB theory should be equated to the modular parameter of the torus." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
In @cite_106 (inspired by @cite_54 ) a solution is presented which corresponds to an M5 @math M5=1 configuration, which follows the harmonic superposition rule, provided however that the harmonic functions depend on the respective relative transverse spaces (i.e. they are functions of two different spaces). The problem now is that the harmonic functions do not depend on the overall transverse space (which is 1-dimensional in the case above), the configuration thus not being localized there. A method actually inspired by the one presented here to derive the intersecting brane solutions has been applied in @cite_89 to the intersections of this second kind. Imposing that the functions depend on the relative transverse space(s) (with factorized dependence) and not on the overall one, the authors of @cite_89 arrive at a formula for the intersections very similar to intersectionrules , with @math on the l.h.s. This rule correctly reproduces the M5 @math M5=1 configuration, and moreover also all the configurations of two D-branes with 8 Neumann-Dirichlet directions, which preserve @math supersymmetries but were excluded from the intersecting solutions derived in this chapter (only the configurations with 4 ND directions were found as solutions). One such configuration is e.g. D0 @math D8.
{ "cite_N": [ "@cite_54", "@cite_106", "@cite_89" ], "mid": [ "2065552713", "1992572456", "" ], "abstract": [ "We derive an exact stringlike soliton solution of [ital D]=10 heterotic string theory. The solution possesses SU(2)[times]SU(2) instanton structure in the eight-dimensional space transverse to the world sheet of the soliton.", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "" ] }
1903.05435
2921710190
In this work, we are interested in the applications of big data in the telecommunication domain, analysing two weeks of datasets provided by Telecom Italia for Milan and Trento. Our objective is to identify hotspots, which are places with very high communication traffic relative to others, and to measure the interaction between them. We model the hotspots as nodes in a graph and then apply node centrality metrics that quantify the importance of each node. We review five node centrality metrics and show that they can be divided into two families: the first family is composed of closeness and betweenness centrality, whereas the second family consists of degree, PageRank and eigenvector centrality. We then proceed with a statistical analysis in order to evaluate the consistency of the results over the two weeks. We find that the ranking of the hotspots under the various centrality metrics remains practically the same over time for both Milan and Trento. We further identify that the relative difference of the values of the metrics is smaller for PageRank centrality than for closeness centrality, and this holds for both Milan and Trento. Finally, our analysis reveals that the variance of the results is significantly smaller for Trento than for Milan.
Nowadays, telecom companies make wide use of big data in order to mine the behaviour of their customers, improve the quality of service that they provide and reduce customer churn. Towards this direction, demographic statistics, network deployments and call detail records (CDRs) are key factors that need to be carefully integrated in order to make accurate predictions. Though there are various open data sources for the first two factors, researchers rarely have access to traffic demand data, since it is sensitive information for the operators. Therefore, researchers need to rely on synthetic models, which do not always accurately capture large-scale mobile networks @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2741581007" ], "abstract": [ "In a world of open data and large-scale measurements, it is often feasible to obtain a real-world trace to fit to one's research problem. Feasible, however, does not imply simple. Taking next-generation cellular network planning as a case study, in this paper we describe a large-scale dataset, combining topology, traffic demand from call detail records, and demographic information throughout a whole country. We investigate how these aspects interact, revealing effects that are normally not captured by smaller-scale or synthetic datasets. In addition to making the resulting dataset available for download, we discuss how our experience can be generalized to other scenarios and case studies, i.e., how everyone can construct a similar dataset from publicly available information." ] }
1903.05435
2921710190
In this work, we are interested in the applications of big data in the telecommunication domain, analysing two weeks of datasets provided by Telecom Italia for Milan and Trento. Our objective is to identify hotspots, which are places with very high communication traffic relative to others, and to measure the interaction between them. We model the hotspots as nodes in a graph and then apply node centrality metrics that quantify the importance of each node. We review five node centrality metrics and show that they can be divided into two families: the first family is composed of closeness and betweenness centrality, whereas the second family consists of degree, PageRank and eigenvector centrality. We then proceed with a statistical analysis in order to evaluate the consistency of the results over the two weeks. We find that the ranking of the hotspots under the various centrality metrics remains practically the same over time for both Milan and Trento. We further identify that the relative difference of the values of the metrics is smaller for PageRank centrality than for closeness centrality, and this holds for both Milan and Trento. Finally, our analysis reveals that the variance of the results is significantly smaller for Trento than for Milan.
For example, the authors in @cite_4 analyse a heterogeneous cellular network which consists of different types of nodes, such as macrocells and microcells. A popular model is the one from Wyner @cite_0 , but it fails to fully capture a real heterogeneous cellular network because it is too simplistic. Another approach is to use the spatial Poisson point process (SPPP) model @cite_9 , which can be derived from the premise that all base stations are uniformly distributed. However, a city can be classified into different areas, which have different population densities. These different areas can be characterised as dense urban, urban and suburban. To be able to classify the heterogeneous networks into these areas, the authors introduce SPPPs for homogeneous and inhomogeneous sets. They show that the SPPP model accurately captures both urban and suburban areas, whereas this is not the case for dense urban areas, because a considerable population is concentrated in small areas.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_4" ], "mid": [ "2131070905", "", "2005411736" ], "abstract": [ "The Wyner model has been widely used to model and analyze cellular networks due to its simplicity and analytical tractability. Its key aspects include fixed user locations and the deterministic and homogeneous interference intensity. While clearly a significant simplification of a real cellular system, which has random user locations and interference levels that vary by several orders of magnitude over a cell, a common presumption by theorists is that the Wyner model nevertheless captures the essential aspects of cellular interactions. But is this true? To answer this question, we compare the Wyner model to a model that includes random user locations and fading. We consider both uplink and downlink transmissions and both outage-based and average-based metrics. For the uplink, for both metrics, we conclude that the Wyner model is in fact quite accurate for systems with a sufficient number of simultaneous users, e.g., a CDMA system. Conversely, it is broadly inaccurate otherwise. Turning to the downlink, the Wyner model becomes inaccurate even for systems with a large number of simultaneous users. In addition, we derive an approximation for the main parameter in the Wyner model - the interference intensity term, which depends on the path loss exponent.", "", "In heterogeneous cellular networks spatial characteristics of base stations (BSs) influence the system performance intensively. Existing models like two-dimensional hexagonal grid model or homogeneous spatial poisson point process (SPPP) are based on the assumption that BSs are ideal or uniformly distributed, but the aggregation behavior of users in hot spots has an important effect on the location of low power nodes (LPNs), so these models fail to characterize the distribution of BSs in the current mobile cellular networks. In this paper, firstly existing spatial models are analyzed. Then, based on real data from a mobile operator in one large city of China, a set of spatial models is proposed in three typical regions: dense urban, urban and suburban. For dense urban area, “Two Tiers Poisson Cluster Superimposed Process” is proposed to model the spatial characteristics of real-world BSs. Specifically, for urban and suburban area, conventional SPPP model still can be used. Finally, the fundamental relationship between user behavior and BS distribution is illustrated and summarized. Numerous results show that SPPP is only appropriate in the urban and suburban regions where users are not gathered together obviously. Principal parameters of these models are provided as reference for the theoretical analysis and computer simulation, which describe the complex spatial configuration more reasonably and reflect the current mobile cellular network performance more precisely." ] }
1903.05355
2968491849
Learning the dynamics of robots from data can help achieve more accurate tracking controllers, or aid their navigation algorithms. However, when the actual dynamics of the robots change due to external conditions, on-line adaptation of their models is required to maintain high fidelity performance. In this work, a framework for on-line learning of robot dynamics is developed to adapt to such changes. The proposed framework employs an incremental support vector regression method to learn the model sequentially from data streams. In combination with the incremental learning, strategies for including and forgetting data are developed to obtain better generalization over the whole state space. The framework is tested in simulation and real experimental scenarios demonstrating its adaptation capabilities to changes in the robot’s dynamics.
In the field of marine robotics, the authors of @cite_10 used locally weighted projection regression to compensate for the mismatch between the physics-based model and the sensor readings of the AUV Nessie. Auto-regressive networks augmented with a genetic algorithm as a gating network were used to identify the model of a simulated AUV with variable mass. In a previous work @cite_16 , an on-line adaptation method was proposed to model the change in the damping forces resulting from a change in the AUV's mechanical structure. The algorithm showed good adaptation capability but was limited to modelling the damping effect of an AUV model. In this work we build upon the results of @cite_14 @cite_11 to provide a general framework for on-line learning of the fully coupled nonlinear dynamics of an AUV, and we validate the proposed approach on simulated data as well as real robot data.
{ "cite_N": [ "@cite_14", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2766583623", "2774694743", "2022798939", "" ], "abstract": [ "This work addresses a data driven approach which employs a machine learning technique known as Support Vector Regression (SVR), to identify the coupled dynamical model of an autonomous underwater vehicle. To train the regressor, we use a dataset collected from the robot's on-board navigation sensors and actuators. To achieve a better fit to the experimental data, a variant of a radial-basis-function kernel is used in combination with the SVR which accounts for the different complexities of each of the contributing input features of the model. We compare our method to other explicit hydrodynamic damping models that were identified using the total least squares method and with less complex SVR methods. To analyze the transferability, we clearly separate training and testing data obtained in real-world experiments. Our presented method shows much better results especially compared to classical approaches.", "This paper presents an online technique which employs incremental support vector regression to learn the damping term of an underwater vehicle motion model, subject to dynamical changes in the vehicle's body. To learn the damping term, we use data collected from the robot's on-board navigation sensors and actuator encoders. We introduce a new sample-efficient methodology which accounts for adding new training samples, removing old samples, and outlier rejection. The proposed method is tested in a real-world experimental scenario to account for the model's dynamical changes due to a change in the vehicle's geometrical shape.", "Navigation is instrumental in the successful deployment of Autonomous Underwater Vehicles (AUVs). Sensor hardware is installed on AUVs to support navigational accuracy. Sensors, however, may fail during deployment, thereby jeopardizing the mission. This work proposes a solution, based on an adaptive dynamic model, to accurately predict the navigation of the AUV. A hydrodynamic model, derived from simple laws of physics, is integrated with a powerful non-parametric regression method. The incremental regression method, namely the Locally Weighted Projection Regression (LWPR), is used to compensate for un-modeled dynamics, as well as for possible changes in the operating conditions of the vehicle. The augmented hydrodynamic model is used within an Extended Kalman Filter, to provide optimal estimations of the AUV’s position and orientation. Experimental results demonstrate an overall improvement in the prediction of the vehicle’s acceleration and velocity.", "" ] }
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
In recent years, research regarding image matching has been influenced by the developments in other areas of computer vision. Deep learning architectures have been developed both for image matching @cite_10 @cite_1 @cite_17 and geopositioning @cite_13 @cite_5 @cite_18 with attractive results.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_1", "@cite_5", "@cite_10", "@cite_17" ], "mid": [ "1946093182", "2199890863", "2609950940", "1969891195", "2204975001", "2614218061" ], "abstract": [ "The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.", "We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales.", "Location recognition is commonly treated as visual instance retrieval on \"street view\" imagery. The dataset items and queries are panoramic views, i.e. groups of images taken at a single location. This work introduces a novel panorama-to-panorama matching process, either by aggregating features of individual images in a group or by explicitly constructing a larger panorama. In either case, multiple views are used as queries. We reach near perfect location recognition on a standard benchmark with only four query views.", "We address the problem of geo-registering ground-based multi-view stereo models by ground-to-aerial image matching. The main contribution is a fully automated geo-registration pipeline with a novel viewpoint-dependent matching method that handles ground to aerial viewpoint variation. We conduct large-scale experiments which consist of many popular outdoor landmarks in Rome. 
The proposed approach demonstrates a high success rate for the task, and dramatically outperforms state-of-the-art techniques, yielding geo-registration at pixel-level accuracy.", "Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.", "We propose an attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature). The new feature is based on convolutional neural networks, which are trained only with image-level annotations on a landmark image dataset. To identify semantically useful local features for image retrieval, we also propose an attention mechanism for keypoint selection, which shares most network layers with the descriptor. This framework can be used for image retrieval as a drop-in replacement for other keypoint detectors and descriptors, enabling more accurate feature matching and geometric verification. Our system produces reliable confidence scores to reject false positives---in particular, it is robust against queries that have no correct match in the database. To evaluate the proposed descriptor, we introduce a new large-scale dataset, referred to as Google-Landmarks dataset, which involves challenges in both database and query such as background clutter, partial occlusion, multiple landmarks, objects in variable scales, etc. We show that DELF outperforms the state-of-the-art global and local descriptors in the large-scale setting by significant margins. Code and dataset can be found at the project webpage: this https URL ." ] }
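As a rough illustration of the memory-vector idea described in the abstract above, the sketch below aggregates the per-view descriptors of each panorama into a single normalized vector and evaluates recall@5 by cosine similarity. The descriptors are random placeholders rather than real CNN features, and the noise level and dataset sizes are arbitrary assumptions.

```python
# Minimal sketch of panorama-to-panorama matching with aggregated descriptors
# and a recall@5 check. Descriptors are random stand-ins for CNN-based global
# image descriptors of the panoramic views.
import numpy as np

rng = np.random.default_rng(1)
n_locations, views_per_pano, dim = 200, 4, 256

def aggregate(view_descriptors):
    """Sum the per-view descriptors into one memory vector and L2-normalize."""
    v = view_descriptors.sum(axis=0)
    return v / np.linalg.norm(v)

# Database: one aggregated vector per geotagged panorama.
base_views = rng.normal(size=(n_locations, views_per_pano, dim))
db = np.stack([aggregate(v) for v in base_views])

# Queries: noisy re-observations of the same locations (different capture).
query_views = base_views + 0.8 * rng.normal(size=base_views.shape)
queries = np.stack([aggregate(v) for v in query_views])

sims = queries @ db.T                       # cosine similarity (unit vectors)
top5 = np.argsort(-sims, axis=1)[:, :5]     # 5 nearest database panoramas
recall_at_5 = np.mean([i in top5[i] for i in range(n_locations)])
print(f"recall@5 = {recall_at_5:.2f}")
```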
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
Convolutional features extracted from the deep layers of CNNs have shown great utility when addressing image matching and retrieval problems. Babenko et al. @cite_10 employ pre-trained networks to generate descriptors based on high-level convolutional features, used for retrieving images of various landmarks. Sunderhauf et al. @cite_2 address the problem of urban scene recognition, employing salient regions and convolutional features of local objects. This method is extended in @cite_8 , where additional spatial information is used to improve the algorithm's performance.
{ "cite_N": [ "@cite_8", "@cite_10", "@cite_2" ], "mid": [ "2518534307", "2204975001", "1162411702" ], "abstract": [ "Recent work by [1] demonstrated improved visual place recognition using proposal regions coupled with features from convolutional neural networks (CNN) to match landmarks between views. In this work we extend the approach by introducing descriptors built from landmark features which also encode the spatial distribution of the landmarks within a view. Matching descriptors then enforces consistency of the relative positions of landmarks between views. This has a significant impact on performance. For example, in experiments on 10 image-pair datasets, each consisting of 200 urban locations with significant differences in viewing positions and conditions, we recorded average precision of around 70 (at 100 recall), compared with 58 obtained using whole image CNN features and 50 for the method in [1].", "Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.", "Place recognition has long been an incompletely solved problem in that all approaches involve significant compromises. Current methods address many but never all of the critical challenges of place recognition – viewpoint-invariance, condition-invariance and minimizing training requirements. Here we present an approach that adapts state-of-the-art object proposal techniques to identify potential landmarks within an image for place recognition. We use the astonishing power of convolutional neural network features to identify matching landmark proposals between images to perform place recognition over extreme appearance and viewpoint variations. Our system does not require any form of training, all components are generic enough to be used off-the-shelf. We present a range of challenging experiments in varied viewpoint and environmental conditions. We demonstrate superior performance to current state-of-the- art techniques. Furthermore, by building on existing and widely used recognition frameworks, this approach provides a highly compatible place recognition system with the potential for easy integration of other techniques such as object detection and semantic scene interpretation." ] }
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
The problem of geopositioning can be seen as a dedicated branch of image retrieval. In this case, the objective is to compute the extrinsic parameters (or coordinates) of the camera capturing the query image, based on the matched georeferenced images from a database. There exist many different algorithms and neural network architectures that attempt to identify the geographical location of a street-level query image. Lin et al. @cite_13 learn deep representations for matching aerial and ground images. Workman et al. @cite_18 use spatial features at multiple scales, which are fused with street-level features, to solve the problem of geolocalization. In @cite_5 , a fully automated processing pipeline matches multi-view stereo (MVS) models to aerial images; this matching algorithm handles the viewpoint variance across aerial and street-level images.
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_13" ], "mid": [ "1969891195", "2199890863", "1946093182" ], "abstract": [ "We address the problem of geo-registering ground-based multi-view stereo models by ground-to-aerial image matching. The main contribution is a fully automated geo-registration pipeline with a novel viewpoint-dependent matching method that handles ground to aerial viewpoint variation. We conduct large-scale experiments which consist of many popular outdoor landmarks in Rome. The proposed approach demonstrates a high success rate for the task, and dramatically outperforms state-of-the-art techniques, yielding geo-registration at pixel-level accuracy.", "We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales.", "The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations." ] }
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
A common factor of the above work is that it either requires the combination of aerial and street-level images for geopositioning, or extensive training on specific datasets. Both cases and their solutions cannot be easily generalized. In our approach, we utilize georeferenced, street-level panoramic images only and a pre-trained CNN combined with image matching techniques for coordinate estimation. This avoids lengthy training and labeling procedures and assumes street-level data to be available without requiring aerial images. Furthermore, and unlike @cite_1 , we do not assume that our query and database images originate from the same imaging devices.
{ "cite_N": [ "@cite_1" ], "mid": [ "2609950940" ], "abstract": [ "Location recognition is commonly treated as visual instance retrieval on \"street view\" imagery. The dataset items and queries are panoramic views, i.e. groups of images taken at a single location. This work introduces a novel panorama-to-panorama matching process, either by aggregating features of individual images in a group or by explicitly constructing a larger panorama. In either case, multiple views are used as queries. We reach near perfect location recognition on a standard benchmark with only four query views." ] }
1903.05524
2972959293
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights on the important connection between network robustness and strong structural controllability in such networks.
Measures based on the Kirchhoff index, or equivalently the effective graph resistance, have been instrumental in quantifying the effect of noise on the expected steady-state dispersion in linear dynamical networks, particularly in those with consensus dynamics; for instance, see @cite_23 @cite_8 @cite_22 . Furthermore, limits on robustness measures that quantify the expected steady-state dispersion due to external stochastic disturbances in linear dynamical networks are studied in @cite_9 @cite_10 . To maximize robustness in networks by minimizing their Kirchhoff indices, various optimization approaches (e.g., @cite_26 @cite_1 ), including graph-theoretic ones @cite_0 , have been proposed. The main objective there is to determine crucial edges that need to be added or maintained to maximize robustness under given constraints @cite_11 .
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_9", "@cite_1", "@cite_0", "@cite_23", "@cite_10", "@cite_11" ], "mid": [ "1987717935", "2287065438", "1818569528", "", "", "2156286761", "2099689250", "", "2736121037" ], "abstract": [ "The effective resistance between two nodes of a weighted graph is the electrical resistance seen between the nodes of a resistor network with branch conductances given by the edge weights. The effective resistance comes up in many applications and fields in addition to electrical network analysis, including, for example, Markov chains and continuous-time averaging networks. In this paper we study the problem of allocating edge weights on a given graph in order to minimize the total effective resistance, i.e., the sum of the resistances between all pairs of nodes. We show that this is a convex optimization problem and can be solved efficiently either numerically or, in some cases, analytically. We show that optimal allocation of the edge weights can reduce the total effective resistance of the graph (compared to uniform weights) by a factor that grows unboundedly with the size of the graph. We show that among all graphs with @math nodes, the path has the largest value of optimal total effective resistance and the complete graph has the least.", "This work considers the robustness of uncertain consensus networks. The stability properties of consensus networks with negative edge weights are also examined. We show that the network is unstable if either the negative weight edges form a cut in the graph or any single negative edge weight has a magnitude less than the inverse of the effective resistance between the two incident nodes. These results are then used to analyze the robustness of the consensus network with additive but bounded perturbations of the edge weights. It is shown that the small-gain condition is related again to cuts in the graph and effective resistance. For the single edge case, the small-gain condition is also shown to be exact. The results are then extended to consensus networks with nonlinear couplings.", "The graphical notion of effective resistance has found wide-ranging applications in many areas of pure mathematics, applied mathematics and control theory. By the nature of its construction, effective resistance can only be computed in undirected graphs and yet in several areas of its application, directed graphs arise as naturally (or more naturally) than undirected ones. In Part I of this work, we propose a generalization of effective resistance to directed graphs that preserves its control-theoretic properties in relation to consensus-type dynamics. We proceed to analyze the dependence of our algebraic definition on the structural properties of the graph and the relationship between our construction and a graphical distance. The results make possible the calculation of effective resistance between any two nodes in any directed graph and provide a solid foundation for the application of effective resistance to problems involving directed graphs.", "", "", "This paper studies an interesting graph measure that we call the effective graph resistance. The notion of effective graph resistance is derived from the field of electric circuit analysis where it is defined as the accumulated effective resistance between all pairs of vertices. The objective of the paper is twofold. First, we survey known formulae of the effective graph resistance and derive other representations as well. 
The derivation of new expressions is based on the analysis of the associated random walk on the graph and applies tools from Markov chain theory. This approach results in a new method to approximate the effective graph resistance. A second objective of this paper concerns the optimisation of the effective graph resistance for graphs with given number of vertices and diameter, and for optimal edge addition. A set of analytical results is described, as well as results obtained by exhaustive search. One of the foremost applications of the effective graph resistance we have in mind, is the analysis of robustness-related problems. However, with our discussion of this informative graph measure we hope to open up a wealth of possibilities of applying the effective graph resistance to all kinds of networks problems. © 2011 Elsevier Inc. All rights reserved.", "In this paper we study robustness of consensus in networks of coupled single integrators driven by white noise. Robustness is quantified as the H 2 norm of the closed-loop system. In particular we investigate how robustness depends on the properties of the underlying (directed) communication graph. To this end several classes of directed and undirected communication topologies are analyzed and compared. The trade-off between speed of convergence and robustness to noise is also investigated.", "", "This paper investigates the robustness of strong structural controllability for linear time-invariant directed networked systems with respect to structural perturbations, including edge additions and deletions. In this regard, an algorithm is presented that is initiated by endowing each node of a network with a successive set of integers. Using this algorithm, a new notion of perfect graphs associated with a network is introduced, and tight upper bounds on the number of edges that can be added to, or removed from a network, while ensuring strong structural controllability, are derived. Moreover, we obtain a characterization of critical edges with respect to edge additions and deletions; these sets are the maximal sets of edges whose any subset can be respectively added to, or removed from a network, while preserving strong structural controllability." ] }
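As a small worked example of the Kirchhoff index used as a robustness measure above, the snippet below computes it from the nonzero Laplacian eigenvalues of a few standard graphs via networkx; the choice of graphs is only illustrative.

```python
# Worked example: the Kirchhoff index (total effective resistance) of a graph,
# computed from the nonzero Laplacian eigenvalues as Kf = n * sum(1 / lambda_i).
# The path and the complete graph on the same n illustrate the two robustness
# extremes (large Kf: less robust; small Kf: more robust).
import numpy as np
import networkx as nx

def kirchhoff_index(G):
    n = G.number_of_nodes()
    eig = np.sort(nx.laplacian_spectrum(G))      # Laplacian eigenvalues, ascending
    return n * np.sum(1.0 / eig[1:])             # skip the single zero eigenvalue

n = 8
print("path  P8 :", kirchhoff_index(nx.path_graph(n)))      # large Kf
print("cycle C8 :", kirchhoff_index(nx.cycle_graph(n)))
print("clique K8:", kirchhoff_index(nx.complete_graph(n)))  # small Kf (= n - 1)
```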
1903.05524
2972959293
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights on the important connection between network robustness and strong structural controllability in such networks.
To quantify controllability, several approaches have been adopted, including determining the minimum number of inputs (leader nodes) needed to (structurally or strong structurally) control a network, determining the worst-case control energy, and metrics based on controllability Gramians (e.g., see @cite_7 @cite_5 ). Strong structural controllability, due to its independence from the coupling weights between nodes, is a generalized notion of controllability with practical implications. There have been recent studies providing graph-theoretic characterizations of this concept @cite_20 @cite_13 @cite_17 . There are numerous other studies regarding leader selection to optimize network performance measures under various constraints, such as minimizing the deviation from consensus in a noisy environment @cite_4 @cite_2 , or maximizing various controllability measures, for instance @cite_15 @cite_18 @cite_25 @cite_14 . Recently, optimization methods have also been presented to select leader nodes by exploiting submodularity properties of performance measures for network robustness and structural controllability @cite_5 @cite_3 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_7", "@cite_3", "@cite_2", "@cite_5", "@cite_15", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "", "", "", "2111725629", "", "", "1938602245", "", "2315383458", "", "2049708951", "2763583074" ], "abstract": [ "", "", "", "This paper studies the problem of controlling complex networks, i.e., the joint problem of selecting a set of control nodes and of designing a control input to steer a network to a target state. For this problem, 1) we propose a metric to quantify the difficulty of the control problem as a function of the required control energy, 2) we derive bounds based on the system dynamics (network topology and weights) to characterize the tradeoff between the control energy and the number of control nodes, and 3) we propose an open-loop control strategy with performance guarantees. In our strategy, we select control nodes by relying on network partitioning, and we design the control input by leveraging optimal and distributed control techniques. Our findings show several control limitations and properties. For instance, for Schur stable and symmetric networks: 1) if the number of control nodes is constant, then the control energy increases exponentially with the number of network nodes; 2) if the number of control nodes is a fixed fraction of the network nodes, then certain networks can be controlled with constant energy independently of the network dimension; and 3) clustered networks may be easier to control because, for sufficiently many control nodes, the control energy depends only on the controllability properties of the clusters and on their coupling strength. We validate our results with examples from power networks, social networks and epidemics spreading.", "", "", "Controllability and observability have long been recognized as fundamental structural properties of dynamical systems, but have recently seen renewed interest in the context of large, complex networks of dynamical systems. A basic problem is sensor and actuator placement: choose a subset from a finite set of possible placements to optimize some real-valued controllability and observability metrics of the network. Surprisingly little is known about the structure of such combinatorial optimization problems. In this paper, we show that several important classes of metrics based on the controllability and observability Gramians have a strong structural property that allows for either efficient global optimization or an approximation guarantee by using a simple greedy heuristic for their maximization. In particular, the mapping from possible placements to several scalar functions of the associated Gramian is either a modular or submodular set function. The results are illustrated on randomly generated systems and on a problem of power-electronic actuator placement in a model of the European power grid.", "", "In this technical note, we study the controllability of diffusively coupled networks from a graph theoretic perspective. We consider leader-follower networks, where the external control inputs are injected to only some of the agents, namely the leaders. Our main result relates the controllability of such systems to the graph distances between the agents. More specifically, we present a graph topological lower bound on the rank of the controllability matrix. This lower bound is tight, and it is applicable to systems with arbitrary network topologies, coupling weights, and number of leaders. 
An algorithm for computing the lower bound is also provided. Furthermore, as a prominent application, we present how the proposed bound can be utilized to select a minimal set of leaders for achieving controllability, even when the coupling weights are unknown.", "", "This paper examines strong structural controllability of linear-time-invariant networked systems. We provide necessary and sufficient conditions for strong structural controllability involving constrained matchings over the bipartite graph representation of the network. An O(n2) algorithm to validate if a set of inputs leads to a strongly structurally controllable network and to find such an input set is proposed. The problem of finding such a set with minimal cardinality is shown to be NP-complete. Minimal cardinality results for strong and weak structural controllability are compared.", "Characterization of network controllability through its topology has recently gained a lot of attention in the systems and control community. Using the notion of balancing sets, in this note, such a network-centric approach for the controllability of certain families of undirected networks is investigated. Moreover, by introducing the notion of a generalized zero forcing set, the structural controllability of undirected networks is discussed; in this direction, lower bounds on the dimension of the controllable subspace are derived. In addition, a method is proposed that facilitates synthesis of structural and strong structural controllable networks as well as examining preservation of network controllability under structural perturbations." ] }
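One common graph-theoretic tool behind the strong structural controllability characterizations cited above is the zero forcing (color-change) process; the sketch below applies it to check whether a candidate leader set forces an entire toy graph. This is a generic illustration of the rule, not a re-implementation of any specific cited algorithm, and the relation between the derived set and controllable-subspace bounds is only as established in that literature.

```python
# Sketch of the zero-forcing color-change rule: starting from the leader
# (filled) nodes, a filled node with exactly one unfilled neighbor forces it.
# If the derived set covers all nodes, the leader set is a zero forcing set.
import networkx as nx

def derived_set(G, leaders):
    """Apply the color-change rule until no more nodes can be forced."""
    filled = set(leaders)
    changed = True
    while changed:
        changed = False
        for v in list(filled):
            unfilled = [u for u in G.neighbors(v) if u not in filled]
            if len(unfilled) == 1:            # v forces its unique unfilled neighbor
                filled.add(unfilled[0])
                changed = True
    return filled

G = nx.path_graph(6)                                 # 0-1-2-3-4-5
print(len(derived_set(G, {0})))                      # 6: one end node forces the whole path
print(len(derived_set(nx.complete_graph(4), {0})))   # 1: a clique forces nothing from one leader
```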
1903.05524
2972959293
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights on the important connection between network robustness and strong structural controllability in such networks.
Very recently in @cite_21 , the trade-off between controllability and fragility in complex networks is investigated. Fragility measures the smallest perturbation in edge weights that makes the network unstable. The authors in @cite_21 show that networks that require small control energy, as measured by the eigenvalues of the controllability Gramian, to be driven from one state to another are more fragile, and vice versa. In our work, for control performance, we consider the minimum number of leaders for strong structural controllability, which is independent of coupling weights; and for robustness, we utilize the Kirchhoff index, which measures robustness to noise as well as to structural changes in the underlying network graph. Moreover, in this work we focus on designing and comparing extremal networks for these properties. The rest of the paper is organized as follows: Section describes preliminaries and network dynamics. Section explains the measures for robustness and controllability, and also outlines the main problems. Section presents maximally robust networks for a given @math and @math , and also analyzes their controllability. Section provides a design of maximally controllable networks and also evaluates their robustness. Finally, Section concludes the paper.
{ "cite_N": [ "@cite_21" ], "mid": [ "2887109490" ], "abstract": [ "Mathematical theories and empirical evidence suggest that several complex natural and man-made systems are fragile: as their size increases, arbitrarily small and localized alterations of the system parameters may trigger system-wide failures. Examples are abundant, from perturbation of the population densities leading to extinction of species in ecological networks [1], to structural changes in metabolic networks preventing reactions [2], cascading failures in power networks [3], and the onset of epileptic seizures following alterations of structural connectivity among populations of neurons [4]. While fragility of these systems has long been recognized [5], convincing theories of why natural evolution or technological advance has failed, or avoided, to enhance robustness in complex systems are still lacking. In this paper we propose a mechanistic explanation of this phenomenon. We show that a fundamental tradeoff exists between fragility of a complex network and its controllability degree, that is, the control energy needed to drive the network state to a desirable state. We provide analytical and numerical evidence that easily controllable networks are fragile, suggesting that natural and man-made systems can either be resilient to parameters perturbation or efficient to adapt their state in response to external excitations and controls." ] }
cmp-lg9408015
2952406681
Effective problem solving among multiple agents requires a better understanding of the role of communication in collaboration. In this paper we show that there are communicative strategies that greatly improve the performance of resource-bounded agents, but that these strategies are highly sensitive to the task requirements, situation parameters and agents' resource limitations. We base our argument on two sources of evidence: (1) an analysis of a corpus of 55 problem solving dialogues, and (2) experimental simulations of collaborative problem solving dialogues in an experimental world, Design-World, where we parameterize task requirements, agents' resources and communicative strategies.
Design-World is also based on the method used in Carletta's JAM simulation for the Edinburgh Map-Task @cite_10 . JAM is based on the Map-Task Dialogue corpus, where the goal of the task is for the planning agent, the instructor, to instruct the reactive agent, the instructee, how to get from one place to another on the map. JAM focuses on efficient strategies for recovery from error and parametrizes agents according to their communicative and error recovery strategies. Given good error recovery strategies, Carletta argues that 'high risk' strategies are more efficient, where efficiency is a measure of the number of utterances in the dialogue. While the focus here is different, we have shown that the number of utterances is just one parameter for evaluating performance, and that the task definition determines when strategies are effective.
{ "cite_N": [ "@cite_10" ], "mid": [ "2159574206" ], "abstract": [ "The Principle of Parsimony states that people usually try to complete tasks with the least effort that will produce a satisfactory solution. In task-oriented dialogue, this produces a tension between conveying information carefully to the partner and leaving it to be inferred, risking a misunderstanding and the need for recovery. Using natural dialogue examples, primarily from the HCRC Map Task, we apply the Principle of Parsimony to a range of information types and identify a set of applicable recovery strategies. We argue that risk-taking and recovery are crucial for efficient dialogue because they pinpoint which information must be transferred and allow control of the interaction to switch to the participant who can best guide the course of the dialogue." ] }
cs9907027
2949190809
The aim of the Alma project is the design of a strongly typed constraint programming language that combines the advantages of logic and imperative programming. The first stage of the project was the design and implementation of Alma-0, a small programming language that provides support for declarative programming within the imperative programming framework. It is obtained by extending a subset of Modula-2 with a small number of features inspired by the logic programming paradigm. In this paper we discuss the rationale for the design of Alma-0, the benefits of the resulting hybrid programming framework, and the current work on adding constraint processing capabilities to the language. In particular, we discuss the role of logical and customary variables, the interaction between the constraint store and the program, and the need for lists.
We concentrate here on the related work involving addition of constraints to imperative languages. For an overview of related work pertaining to the language we refer the reader to @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "1968265180" ], "abstract": [ "We describe here an implemented small programming language, called Alma-O, that augments the expressive power of imperative programming by a limited number of features inspired by the logic programming paradigm. These additions encourage declarative programming and make it a more attractive vehicle for problems that involve search. We illustrate the use of Alma-O by presenting solutions to a number of classical problems, including α-β search, STRIPS planning, knapsack, and Eight Queens. These solutions are substantially simpler than their counterparts written in the imperative or in the logic programming style and can be used for different purposes without any modification. We also discuss here the implementation of Alma-O and an operational, executable, semantics of a large subset of the language." ] }
cmp-lg9709007
1578881253
Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.
To our knowledge, lexical databases have been used only once before in TC, apart from our previous work. Hearst @cite_10 adapted a disambiguation algorithm by Yarowsky using WordNet to recognize category occurrences. Categories are made of WordNet terms, which is not the general case of standard or user-defined categories. It is a hard task to adapt WordNet subsets to pre-existing categories, especially when they are domain dependent. Hearst's approach has shown promising results confirmed by our previous work @cite_23 and present results.
{ "cite_N": [ "@cite_10", "@cite_23" ], "mid": [ "1493108551", "1575569168" ], "abstract": [ "This dissertation investigates the role of contextual information in the automated retrieval and display of full-text documents, using robust natural language processing algorithms to automatically detect structure in and assign topic labels to texts. Many long texts are comprised of complex topic and subtopic structure, a fact ignored by existing information access methods. I present two algorithms which detect such structure, and two visual display paradigms which use the results of these algorithms to show the interactions of multiple main topics, multiple subtopics, and the relations between main topics and subtopics. The first algorithm, called TextTiling , recognizes the subtopic structure of texts as dictated by their content. It uses domain-independent lexical frequency and distribution information to partition texts into multi-paragraph passages. The results are found to correspond well to reader judgments of major subtopic boundaries. The second algorithm assigns multiple main topic labels to each text, where the labels are chosen from pre-defined, intuitive category sets; the algorithm is trained on unlabeled text. A new iconic representation, called TileBars uses TextTiles to simultaneously and compactly display query term frequency, query term distribution and relative document length. This representation provides an informative alternative to ranking long texts according to their overall similarity to a query. For example, a user can choose to view those documents that have an extended discussion of one set of terms and a brief but overlapping discussion of a second set of terms. This representation also allows for relevance feedback on patterns of term distribution. TileBars display documents only in terms of words supplied in the user query. For a given retrieved text, if the query words do not correspond to its main topics, the user cannot discern in what context the query terms were used. For example, a query on contaminants may retrieve documents whose main topics relate to nuclear power, food, or oil spills. To address this issue, I describe a graphical interface, called Cougar , that displays retrieved documents in terms of interactions among their automatically-assigned main topics, thus allowing users to familiarize themselves with the topics and terminology of a text collection.", "Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widely available resources as lexical databases and training collections to overcome current limitations of the task. Our approach makes use of WordNet synonymy information to increase evidence for bad trained categories. When testing a direct categorization, a WordNet based one, a training algorithm, and our integrated approach, the latter exhibits a better perfomance than any of the others. Incidentally, WordNet based approach perfomance is comparable with the training approach one." ] }
cmp-lg9709007
1578881253
Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.
Lexical databases have recently been employed in word sense disambiguation. For example, Agirre and Rigau @cite_4 make use of a semantic distance that takes into account structural factors in WordNet to achieve good results for this task. Additionally, Resnik @cite_3 combines WordNet with a text collection to define a distance measure for disambiguating noun groupings. Although the text collection is not a training collection (in the sense of a collection of manually labeled texts for a pre-defined text processing task), his approach can be regarded as the most similar to ours in the disambiguation setting. Finally, Ng and Lee @cite_11 make use of several sources of information inside a training collection (neighborhood, part of speech, morphological form, etc.) to obtain good results in disambiguating unrestricted text.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_11" ], "mid": [ "", "1608874027", "2157025692" ], "abstract": [ "", "Word groupings useful for language processing tasks are increasingly available, as thesauri appear on-line, and as distributional word clustering techniques improve. However, for many tasks, one is interested in relationships among word senses, not words. This paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns — the kind of data one finds in on-line thesauri, or as the output of distributional clustering algorithms. Disambiguation is performed with respect to WordNet senses, which are fairly fine-grained; however, the method also permits the assignment of higher-level WordNet categories rather than sense labels. The method is illustrated primarily by example, though results of a more rigorous evaluation are also presented.", "In this paper, we present a new approach for word sense disambiguation (WSD) using an exemplar-based learning algorithm. This approach integrates a diverse set of knowledge sources to disambiguate word sense, including part of speech of neighboring words, morphological form, the unordered set of surrounding words, local collocations, and verb-object syntactic relation. We tested our WSD program, named LEXAS, on both a common data set used in previous work, as well as on a large sense-tagged corpus that we separately constructed. LEXAS achieves a higher accuracy on the common data set, and performs better than the most frequent heuristic on the highly ambiguous words in the large corpus tagged with the refined senses of WORDNET." ] }
cmp-lg9706006
2953123431
Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature -- text categorization. We argue that these algorithms -- which categorize documents by learning a linear separator in the feature space -- have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set.
The methods most similar to our techniques are the on-line algorithms used in @cite_16 and @cite_10 . In the first, two algorithms suggested in @cite_5 , a multiplicative-update and an additive-update algorithm, are evaluated in this domain and are shown to perform somewhat better than Rocchio's algorithm. While both of these works make use of multiplicative-update algorithms, as we do, there are two major differences between those studies and the current one. First, there are some important technical differences between the algorithms used. Second, the algorithms we study here are mistake-driven; they update the weight vector only when a mistake is made, and not after every example seen. The Experts algorithm studied in @cite_10 is very similar to a basic version of the algorithm which we study here. The way we treat the negative weights is different, though, and significantly more efficient, especially in sparse domains (see ). Cohen and Singer also experiment, using the same algorithm, with more complex features (sparse n-grams) and show that, as expected, these yield better results.
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_10" ], "mid": [ "2069317438", "", "2440833291" ], "abstract": [ "We consider two algorithm for on-line prediction based on a linear model. The algorithms are the well-known Gradient Descent (GD) algorithm and a new algorithm, which we call EG(+ -). They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG(+ -) algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG(+ -) and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG(+ -) has a much smaller loss if only a few components of the input are relevant for the predictions. We have performed experiments, which show that our worst-case upper bounds are quite tight already on simple artificial data.", "", "Two recently implemented machine-learning algorithms, RIPPER and sleeping-experts for phrases, are evaluated on a number of large text categorization problems. These algorithms both construct classifiers that allow the “context” of a word w to affect how (or even whether) the presence or absence of w will contribute to a classification. However, RIPPER and sleeping-experts differ radically in many other respects: differences include different notions as to what constitutes a context, different ways of combining contexts to construct a classifier, different methods to search for a combination of contexts, and different criteria as to what contexts should be included in such a combination. In spite of these differences, both RIPPER and sleeping-experts perform extremely well across a wide variety of categorization problems, generally outperforming previously applied learning methods. We view this result as a confirmation of the usefulness of classifiers that represent contextual information." ] }
cmp-lg9701003
2952751702
In expert-consultation dialogues, it is inevitable that an agent will at times have insufficient information to determine whether to accept or reject a proposal by the other agent. This results in the need for the agent to initiate an information-sharing subdialogue to form a set of shared beliefs within which the agents can effectively re-evaluate the proposal. This paper presents a computational strategy for initiating such information-sharing subdialogues to resolve the system's uncertainty regarding the acceptance of a user proposal. Our model determines when information-sharing should be pursued, selects a focus of information-sharing among multiple uncertain beliefs, chooses the most effective information-sharing strategy, and utilizes the newly obtained information to re-evaluate the user proposal. Furthermore, our model is capable of handling embedded information-sharing subdialogues.
Grosz, Sidner, and Lochbaum @cite_7 @cite_1 developed a SharedPlan approach to modeling collaborative discourse, and Sidner formulated an artificial language for modeling such discourse. Sidner viewed a collaborative planning process as sequences of proposal acceptance and proposal rejection. Her artificial language treats an utterance such as “Why do X?” as a proposal for the hearer to provide support for his proposal to do X. However, Sidner's work is descriptive and does not provide a mechanism for determining when and how such a proposal should be made, nor how responses should be formulated in information-sharing subdialogues.
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "2148389694", "332028463" ], "abstract": [ "A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's beliefs and intentions from the other's, and avoid assumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algorithm for plan recognition that is based on the Shared-Plan model of collaboration (Grosz and Sidner, 1990; , 1990) and that satisfies these constraints.", "Abstract : Discourses are fundamentally instances of collaboration behavior. We propose a model of the collaborative plans of agents achieving joint goals and illustrate the role of these plans in discourses. Three types of collaborative plans, called Shared Plans, are formulated for joint goals requiring simultaneous, conjoined or sequential actions on the part of the agents who participate in the plans and the discourse; a fourth type of Shared Plan is presented for the circumstance where two agents communicate, but only one acts." ] }
cmp-lg9701003
2952751702
In expert-consultation dialogues, it is inevitable that an agent will at times have insufficient information to determine whether to accept or reject a proposal by the other agent. This results in the need for the agent to initiate an information-sharing subdialogue to form a set of shared beliefs within which the agents can effectively re-evaluate the proposal. This paper presents a computational strategy for initiating such information-sharing subdialogues to resolve the system's uncertainty regarding the acceptance of a user proposal. Our model determines when information-sharing should be pursued, selects a focus of information-sharing among multiple uncertain beliefs, chooses the most effective information-sharing strategy, and utilizes the newly obtained information to re-evaluate the user proposal. Furthermore, our model is capable of handling embedded information-sharing subdialogues.
Several researchers have studied the role of clarification dialogues in disambiguating user plans @cite_3 @cite_5 and in understanding referring expressions @cite_8 . developed an automated librarian that could revise its beliefs and intentions and could generate responses as an attempt to revise the user's beliefs and intentions. Although their system had rules for asking the user whether he holds a particular belief and for telling the system's attitude toward a belief, the emphasis of their work was on conflict resolution and plan disambiguation. Thus they did not investigate a comprehensive strategy for information-sharing during proposal evaluation. For example, they did not identify situations in which information-sharing is necessary, did not address how to select a focus of information-sharing when there are multiple uncertain beliefs, did not consider requesting the user's justifications for a belief, etc. In addition, they do not provide an overall dialogue planner that takes into account discourse structure and appropriately captures embedded subdialogues.
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_8" ], "mid": [ "", "2037768686", "1541473376" ], "abstract": [ "", "Recognizing the plan underlying a query aids in the generation of an appropriate response. In this paper, we address the problem of how to generate cooperative responses when the user's plan is ambiguous. We show that it is not always necessary to resolve the ambiguity, and provide a procedure that estimates whether the ambiguity matters to the task of formulating a response. The procedure makes use of the critiquing of possible plans and identifies plans with the same fault. We illustrate the process of critiquing with examples. If the ambiguity does matter, we propose to resolve the ambiguity by entering into a clarification dialogue with the user and provide a procedure that performs this task. Together, these procedures allow a question-answering system to take advantage of the interactive and collaborative nature of dialogue in order to recognize plans and resolve ambiguity. This work therefore presents a view of generation in advice-giving contexts which is different from the straightforward model of a passive selection of responses to questions asked by users. We also report on a trial implementation in a course-advising domain, which provides insights on the practicality of the procedures and directions for future research.", "This paper presents a computational model of how conversational participants collaborate in order to make a referring action successful. The model is based on the view of language as goal-directed behavior. We propose that the content of a referring expression can be accounted for by the planning paradigm. Not only does this approach allow the processes of building referring expressions and identifying their referents to be captured by plan construction and plan inference, it also allows us to account for how participants clarify a referring expression by using meta-actions that reason about and manipulate the plan derivation that corresponds to the referring expression. To account for how clarification goals arise and how inferred clarification plans affect the agent, we propose that the agents are in a certain state of mind, and that this state includes an intention to achieve the goal of referring and a plan that the agents are currently considering. It is this mental state that sanctions the adoption of goals and the acceptance of inferred plans, and so acts as a link between understanding and generation." ] }
cmp-lg9606025
2952364737
This paper presents an analysis conducted on a corpus of software instructions in French in order to establish whether task structure elements (the procedural representation of the users' tasks) are alone sufficient to control the grammatical resources of a text generator. We show that the construct of genre provides a useful additional source of control enabling us to resolve undetermined cases.
The results from our linguistic analysis are consistent with other research on sublanguages in the instructions domain, in both French and English, e.g., @cite_5 @cite_4 . Our analysis goes beyond previous work by identifying within the discourse context the means for exercising explicit control over a text generator.
{ "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "2047374384", "2131196772" ], "abstract": [ "This paper discusses an approach to planning the content of instructional texts. The research is based on a corpus study of 15 French procedural texts ranging from step-by-step device manuals to general artistic procedures. The approach taken starts from an AI task planner building a task representation, from which semantic carriers are selected. The most appropriate RST relations to communicate these carriers are then chosen according to heuristics developed during the corpus analysis.", "Instructional texts have been the object of many studies recently, motivated by the increased need to produce manuals (especially multilingual manuals) coupled with the cost of translators and technical writers. Because these studies concentrate on aspects other than the linguistic realisation of instructions -- for example, the integration of text and graphics - they all generate a sequence of steps required to achieve a task, using imperatives. Our research so far shows, however, that manuals can in fact have different styles, i. e., not all instructions are stated using a sequence of imperatives, and that, furthermore, different parts of manuals often use different styles. In this paper, we present our preliminary results from an analysis of over 30 user guides manuals for consumer appliances and discuss some of the implications." ] }
cmp-lg9505006
1617827527
In previous work we studied a new type of DCGs, Datalog grammars, which are inspired on database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a meta-grammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
A notion that is central to recent work on ellipsis, and which has been present in embryonic form, as we have seen, even in the early work on coordination, is that of parallelism as a key element in the determination of implicit meanings. Asher @cite_3 defines parallelism as
{ "cite_N": [ "@cite_3" ], "mid": [ "1495022714" ], "abstract": [ "Preface. Introduction. 1. From Events to Propositions: a Tour of Abstract Entities, Eventualities and the Nominals that Denote them. 2. A Crash Course in DRT. 3. Attitudes and Attitude Descriptions. 4. The Semantic Representation for Sentential Nominals. 5. Problems for the Semantics of Nominals. 6. Anaphora and Abstract Entities. 7. A Theory of Discourse Structure for an Analysis of Abstract Entity Anaphora. 8. Applying the Theory of Discourse Structure to the Anaphoric Phenomena. 9. Applications of the Theory of Discourse Structure to Concept Anaphora and VP Ellipsis. 10. Model Theory for Abstract Entities and its Philosophical Implications. Conclusion. Bibliography. Index." ] }
cmp-lg9505006
1617827527
In previous work we studied a new type of DCGs, Datalog grammars, which are inspired on database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a meta-grammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
a) neither method formulates exactly how parallelism is to be determined; it is just postulated as a prerequisite to the resolution of ellipsis (although @cite_6 speculates on possible ways of formulating this, leaving it for future work)
{ "cite_N": [ "@cite_6" ], "mid": [ "2119997945" ], "abstract": [ "We describe an implementation in Carpenter's typed feature formalism, ALE, of a discourse grammar of the kind proposed by Scha, Polanyi, et al We examine their method for resolving parallelism-dependent anaphora and show that there is a coherent feature-structural rendition of this type of grammar which uses the operations of priority union and generalization. We describe an augmentation of the ALE system to encompass these operations and we show that an appropriate choice of definition for priority union gives the desired multiple output for examples of VP-ellipsis which exhibit a strict sloppy ambiguity." ] }
cmp-lg9505006
1617827527
In previous work we studied a new type of DCGs, Datalog grammars, which are inspired on database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a meta-grammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
By examining ellipsis in the context of coordinated structures, which are parallel by definition, and by using extended DLGs, we provide a method in which parallel structures are detected and resolved through syntactic and semantic criteria, and which can be applied to grammars using different semantic representations: feature structures, @math -calculus, or others. We exemplify using a logic-based semantics along the lines of @cite_8 .
{ "cite_N": [ "@cite_8" ], "mid": [ "2006589508" ], "abstract": [ "Logic grammars are grammars expressible in predicate logic. Implemented in the programming language Prolog, logic grammar systems have proved to be a good basis for natural language processing. One of the most difficult constructions for natural language grammars to treat is coordination (construction with conjunctions like 'and'). This paper describes a logic grammar formalism, modifier structure grammars (MSGs), together with an interpreter written in Prolog, which can handle coordination (and other natural language constructions) in a reasonable and general way. The system produces both syntactic analyses and logical forms, and problems of scoping for coordination and quantifiers are dealt with. The MSG formalism seems of interest in its own right (perhaps even outside natural language processing) because the notions of syntactic structure and semantic interpretation are more constrained than in many previous systems (made more implicit in the formalism itself), so that less burden is put on the grammar writer." ] }
cmp-lg9505038
2952182676
Augmented reality is a research area that tries to embody an electronic information space within the real world, through computational devices. A crucial issue within this area, is the recognition of real world objects or situations. In natural language processing, it is much easier to determine interpretations of utterances, even if they are ill-formed, when the context or situation is fixed. We therefore introduce robust, natural language processing into a system of augmented reality with situation awareness. Based on this idea, we have developed a portable system, called the Ubiquitous Talker. This consists of an LCD display that reflects the scene at which a user is looking as if it is a transparent glass, a CCD camera for recognizing real world objects with color-bar ID codes, a microphone for recognizing a human voice and a speaker which outputs a synthesized voice. The Ubiquitous Talker provides its user with some information related to a recognized object, by using the display and voice. It also accepts requests or questions as voice inputs. The user feels as if he she is talking with the object itself through the system.
Ubiquitous computing @cite_4 proposes that very small computational devices (i.e., ubiquitous computers) be embedded and integrated into physical environments in such a way that they operate seamlessly and almost transparently. These devices are aware of their physical surroundings. In contrast to ubiquitous computers, our barcode (color-code) system is a low cost and reliable solution to making everything a computer. Suppose that every page in a book has a unique barcode. When the user opens a page, its page ID is detected by the system, so it can supply specific information regarding the page. When the user adds some information to the page, the system stores it with the page ID tagged for later retrieval. This is almost the same as having a computer in every page of the book without the cost. Our ID-aware system is better than ubiquitous computers from the viewpoint of reliability and cost-performance, since it does not require batteries and never breaks down.
{ "cite_N": [ "@cite_4" ], "mid": [ "2084069552" ], "abstract": [ "Ubiquitous computing is the method of enhancing computer use by making many computers available throughout the physical environment, but making them effectively invisible to the user. Since we started this work at Xerox PARC in 1988, a number of researchers around the world have begun to work in the ubiquitous computing framework. This paper explains what is new and different about the computer science in ubiquitous computing. It starts with a brief overview of ubiquitous computing, and then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g. chips), network protocols, interaction substrates (e.g. software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science." ] }
math9702221
2117609843
This paper reexamines univariate reduction from a toric geometric point of view. We begin by constructing a binomial variant of the @math -resultant and then retailor the generalized characteristic polynomial to fully exploit sparsity in the monomial structure of any given polynomial system. We thus obtain a fast new algorithm for univariate reduction and a better understanding of the underlying projections. As a corollary, we show that a refinement of Hilbert's Tenth Problem is decidable within single-exponential time. We also show how certain multisymmetric functions of the roots of polynomial systems can be calculated with sparse resultants.
From an applied angle, our observations on degeneracies and handling polynomial systems with infinitely many roots nicely complement the work of Emiris and Canny @cite_43 . In particular, their sparse-resultant-based algorithms for polynomial system solving can now be made to work even when problem B occurs. Also, an added benefit of working torically (as opposed to the classical approach of working in projective space) is the increased efficiency of the sparse resultant: the resulting matrix calculations (for polynomial system solving) are much smaller and faster. In particular, whereas it was remarked in @cite_41 that Gröbner basis methods are likely to be faster than the GCP for sparse polynomial systems, the toric GCP appears to be far more competitive in such a comparison.
{ "cite_N": [ "@cite_41", "@cite_43" ], "mid": [ "1976392590", "2066130115" ], "abstract": [ "Multipolynomial resultants provide the most efficient methods known (in terms as asymptoticcomplexity) for solving certain systems of polynomial equations or eliminating variables (, 1988). The resultant of f\"1, ..., f\"n in K[x\"1,...,x\"m] will be a polynomial in m-n+1 variables which is zero when the system f\"1=0 has a solution in ^m ( the algebraic closure of K). Thus the resultant defines a projection operator from ^m to ^(^m^-^n^+^1^). However, resultants are only exact conditions for homogeneous systems, and in the affine case just mentioned, the resultant may be zero even if the system has no affine solution. This is most serious when the solution set of the system of polynomials has ''excess components'' (components of dimension >m-n), which may not even be affine, since these cause the resultant to vanish identically. In this paper we describe a projection operator which is not identically zero, but which is guaranteed to vanish on all the proper (dimension=m-n) components of the system f\"i=0. Thus it fills the role of a general affine projection operator or variable elimination ''black box'' which can be used for arbitrary polynomial systems. The construction is based on a generalisation of the characteristic polynomial of a linear system to polynomial systems. As a corollary, we give a single-exponential time method for finding all the isolated solution points of a system of polynomials, even in the presence of infinitely many solutions, at infinity or elsewhere.", "Abstract We propose a new and efficient algorithm for computing the sparse resultant of a system of n + 1 polynomial equations in n unknowns. This algorithm produces a matrix whose entries are coefficients of the given polynomials and is typically smaller than the matrices obtained by previous approaches. The matrix determinant is a non-trivial multiple of the sparse resultant from which the sparse resultant itself can be recovered. The algorithm is incremental in the sense that successively larger matrices are constructed until one is found with the above properties. For multigraded systems, the new algorithm produces optimal matrices, i.e. expresses the sparse resultant as a single determinant. An implementation of the algorithm is described and experimental results are presented. In addition, we propose an efficient algorithm for computing the mixed volume of n polynomials in n variables. This computation provides an upper bound on the number of common isolated roots. A publicly available implementation of the algorithm is presented and empirical results are reported which suggest that it is the fastest mixed volume code to date." ] }
cmp-lg9503009
2951769629
This paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.
The simplest part-of-speech taggers are bigram or trigram models @cite_12 @cite_1 . They require a relatively large tagged training text. Transformation-based tagging as introduced by also requires a hand-tagged text for training. No pretagged text is necessary for Hidden Markov Models @cite_7 @cite_13 @cite_2 . Still, a lexicon is needed that specifies the possible parts of speech for every word. have shown that the effort necessary to construct the part-of-speech lexicon can be considerably reduced by combining learning procedures and a partial part-of-speech categorization elicited from an informant.
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "2100796029", "2166394306", "", "2046224275", "1509596266" ], "abstract": [ "Abstract A system for part-of-speech tagging is described. It is based on a hidden Markov model which can be trained using a corpus of untagged text. Several techniques are introduced to achieve robustness while maintaining high performance. Word equivalence classes are used to reduce the overall number of parameters in the model, alleviating the problem of obtaining reliable estimates for individual words. The context for category prediction is extended selectively via predefined networks, rather than using a uniformly higher-order conditioning which requires exponentially more parameters with increasing context. The networks are embedded in a first-order model and network structure is developed by analysis of erros, and also via linguistic considerations. To compensate for incomplete dictionary coverage, the categories of unknown words are predicted using both local context and suffix information to aid in disambiguation. An evaluation was performed using the Brown corpus and different dictionary arrangements were investigated. The techniques result in a model that correctly tags approximately 96 of the text. The flexibility of the methods is illustrated by their use in a tagging program for French.", "We derive from first principles the basic equations for a few of the basic hidden-Markov-model word taggers as well as equations for other models which may be novel (the descriptions in previous papers being too spare to be sure). We give performance results for all of the models. The results from our best model (96.45 on an unused test sample from the Brown corpus with 181 distinct tags) is on the upper edge of reported results. We also hope these results clear up some confusion in the literature about the best equations to use. However, the major purpose of this paper is to show how the equations for a variety of models may be derived and thus encourage future authors to give the equations for their model and the derivations thereof.", "", "We present an implementation of a part-of-speech tagger based on a hidden Markov model. The methodology enables robust and accurate tagging with few resource requirements. Only a lexicon and some unlabeled training text are required. Accuracy exceeds 96 . We describe implementation strategies and optimizations which result in high-speed operation. Three applications for tagging are described: phrase recognition; word sense disambiguation; and grammatical function assignment.", "A consideration of problems engendered by the use of concordances to study additional word senses. The use of factor analysis as a research tool in lexicography is discussed. It is shown that this method provides information not obtainable through other approaches. This includes provision of several major senses for each word, an indication of the relationship between collocational patterns, & a more detailed analysis of the senses themselves. Sample factor analyses for the collocates of certain & right are presented & discussed. 3 Tables, 11 References. B. Annesser Murray" ] }
cmp-lg9503009
2951769629
This paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.
The present paper is concerned with tagging languages and sublanguages for which no a priori knowledge about grammatical categories is available, a situation that occurs often in practice @cite_4 .
{ "cite_N": [ "@cite_4" ], "mid": [ "144990771" ], "abstract": [ "In this paper, we will discuss a method for assigning part of speech tags to words in an unannotated text corpus whose structure is completely unknown, with a little bit of help from an informant. Starting from scratch, automated and semiautomated methods are employed to build a part of speech tagger for the text. There are three steps to building the tagger: uncovering a set of part of speech tags, building a lexicon which indicates for each word its most likely tag, and learning rules to both correct mistakes in the learned lexicon and discover where contextual information can repair tagging mistakes. The long term goal of this work is to create a system which would enable somebody to take a large text in a language he does not know, and with only a few hours of help from a speaker of the language, accurately annotate the text with part of speech information." ] }
cmp-lg9503009
2951769629
This paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.
In a previous paper @cite_15 , we trained a neural network to disambiguate part-of-speech using context; however, no information about the word that is to be categorized was used. This scheme fails for cases like “The soldiers rarely come home.” vs. “The soldiers will come home.”, where the context is identical and information about the lexical item in question (“rarely” vs. “will”) is needed in combination with context for correct classification. In this paper, we will compare two tagging algorithms, one based on classifying word types, and one based on classifying words-plus-context.
{ "cite_N": [ "@cite_15" ], "mid": [ "2163514362" ], "abstract": [ "This paper presents a method for inducing the parts of speech of a language and part-of-speech labels for individual words from a large text corpus. Vector representations for the part-of-speech of a word are formed from entries of its near lexical neighbors. A dimensionality reduction creates a space representing the syntactic categories of unambiguous words. A neural net trained on these spatial representations classifies individual contexts of occurrence of ambiguous words. The method classifies both ambiguous and unambiguous words correctly with high accuracy." ] }
cmp-lg9607020
1497662183
In this paper, we propose a novel strategy which is designed to enhance the accuracy of the parser by simplifying complex sentences before parsing. This approach involves the separate parsing of the constituent sub-sentences within a complex sentence. To achieve that, the divide-and-conquer strategy first disambiguates the roles of the link words in the sentence and segments the sentence based on these roles. The separate parse trees of the segmented sub-sentences and the noun phrases within them are then synthesized to form the final parse. To evaluate the effects of this strategy on parsing, we compare the original performance of a dependency parser with the performance when it is enhanced with the divide-and-conquer strategy. When tested on 600 sentences of the IPSM'95 data sets, the enhanced parser saw a considerable error reduction of 21.2 in its accuracy.
Magerman discussed the poor performance of his parser SPATTER on sentences with conjunctions @cite_9 . As a result, he augmented SPATTER's probabilistic model with an additional conjunction feature. However, he reported that though SPATTER's performance on conjoined sentences improves with the conjunction feature, a significant percentage is still misanalyzed, as the simple conjunction feature model finds it difficult to capture long distance dependencies.
{ "cite_N": [ "@cite_9" ], "mid": [ "1924403233" ], "abstract": [ "Traditional natural language parsers are based on rewrite rule systems developed in an arduous, time-consuming manner by grammarians. A majority of the grammarian's efforts are devoted to the disambiguation process, first hypothesizing rules which dictate constituent categories and relationships among words in ambiguous sentences, and then seeking exceptions and corrections to these rules. In this work, I propose an automatic method for acquiring a statistical parser from a set of parsed sentences which takes advantage of some initial linguistic input, but avoids the pitfalls of the iterative and seemingly endless grammar development process. Based on distributionally-derived and linguistically-based features of language, this parser acquires a set of statistical decision trees which assign a probability distribution on the space of parse trees given the input sentence. These decision trees take advantage of significant amount of contextual information, potentially including all of the lexical information in the sentence, to produce highly accurate statistical models of the disambiguation process. By basing the disambiguation criteria selection on entropy reduction rather than human intuition, this parser development method is able to consider more sentences than a human grammarian can when making individual disambiguation rules. In experiments between a parser, acquired using this statistical framework, and a grammarian's rule-based parser, developed over a ten-year period, both using the same training material and test sentences, the decision tree parser significantly outperformed the grammar-based parser on the accuracy measure which the grammarian was trying to maximize, achieving an accuracy of 78 compared to the grammar-based parser's 69 ." ] }
cmp-lg9607020
1497662183
In this paper, we propose a novel strategy which is designed to enhance the accuracy of the parser by simplifying complex sentences before parsing. This approach involves the separate parsing of the constituent sub-sentences within a complex sentence. To achieve that, the divide-and-conquer strategy first disambiguates the roles of the link words in the sentence and segments the sentence based on these roles. The separate parse trees of the segmented sub-sentences and the noun phrases within them are then synthesized to form the final parse. To evaluate the effects of this strategy on parsing, we compare the original performance of a dependency parser with the performance when it is enhanced with the divide-and-conquer strategy. When tested on 600 sentences of the IPSM'95 data sets, the enhanced parser saw a considerable error reduction of 21.2 in its accuracy.
Jones explored another type of link word, punctuation @cite_7 . He showed that for longer sentences, a grammar which makes use of punctuation massively outperforms one which does not. Besides improving parsing accuracy, the use of punctuation also significantly reduces the number of possible parses generated. However, as theoretical forays into the syntactic roles of punctuation are limited, the grammar he designed can only cover a subset of all punctuation phenomena. Unexpected constructs thus cause the grammar to fail completely.
{ "cite_N": [ "@cite_7" ], "mid": [ "2039117335" ], "abstract": [ "Few, if any, current NLP systems make any significant use of punctuation. Intuitively, a treatment of punctuation seems necessary to the analysis and production of text. Whilst this has been suggested in the fields of discourse structure, it is still unclear whether punctuation can help in the syntactic field. This investigation attempts to answer this question by parsing some corpus-based material with two similar grammars --- one including rules for punctuation, the other ignoring it. The punctuated grammar significantly out-performs the unpunctuated one, and so the conclusion is that punctuation can play a useful role in syntactic processing." ] }
1908.11823
2970705892
In this work we investigate to which extent one can recover class probabilities within the empirical risk minimization (ERM) paradigm. The main aim of our paper is to extend existing results and emphasize the tight relations between empirical risk minimization and class probability estimation. Based on existing literature on excess risk bounds and proper scoring rules, we derive a class probability estimator based on empirical risk minimization. We then derive fairly general conditions under which this estimator will converge, in the L1-norm and in probability, to the true class probabilities. Our main contribution is to present a way to derive finite sample L1-convergence rates of this estimator for different surrogate loss functions. We also study in detail which commonly used loss functions are suitable for this estimation problem and finally discuss the setting of model-misspecification as well as a possible extension to asymmetric loss functions.
perform an analysis similar to ours, as they also investigate convergence properties of a class probability estimator; their starting point and end point are very different, though. While we start with the theory of proper scoring rules, their paper starts directly with the class probability estimator as found in @cite_5 . The problem is that the estimator in @cite_5 only appears as a side remark, and it is unclear to what extent this is the best, the only, or even a correct choice. This paper contributes to closing this gap and answers those questions. They show that the estimator converges to a unique class probability model. In relation to this, one can view the present paper as an investigation of that unique class probability model; we give necessary and sufficient conditions under which it converges to the true class probabilities. Note also that their paper uses convex methods, while our work draws from the theory of proper scoring rules.
{ "cite_N": [ "@cite_5" ], "mid": [ "2023163512" ], "abstract": [ "We study how closely the optimal Bayes error rate can be approximately reached using a classification algorithm that computes a classifier by minimizing a convex upper bound of the classification error function. The measurement of closeness is characterized by the loss function used in the estimation. We show that such a classification scheme can be generally regarded as a (nonmaximum-likelihood) conditional in-class probability estimate, and we use this analysis to compare various convex loss functions that have appeared in the literature. Furthermore, the theoretical insight allows us to design good loss functions with desirable properties. Another aspect of our analysis is to demonstrate the consistency of certain classification methods using convex risk minimization. This study sheds light on the good performance of some recently proposed linear classification methods including boosting and support vector machines. It also shows their limitations and suggests possible improvements." ] }
1908.11823
2970705892
In this work we investigate to which extent one can recover class probabilities within the empirical risk minimization (ERM) paradigm. The main aim of our paper is to extend existing results and emphasize the tight relations between empirical risk minimization and class probability estimation. Based on existing literature on excess risk bounds and proper scoring rules, we derive a class probability estimator based on empirical risk minimization. We then derive fairly general conditions under which this estimator will converge, in the L1-norm and in probability, to the true class probabilities. Our main contribution is to present a way to derive finite sample L1-convergence rates of this estimator for different surrogate loss functions. We also study in detail which commonly used loss functions are suitable for this estimation problem and finally discuss the setting of model-misspecification as well as a possible extension to asymmetric loss functions.
The probability estimator we use also appears in @cite_10 where it is used to derive excess risk bounds, referred to as surrogate risk bounds, for bipartite ranking. The methods used are very similar in the sense that these are also based on proper scoring rules. The difference is again the focus, and even more so the conditions used. They introduce the notion of strongly proper scoring rules which directly allows one to bound the @math -norm, and thus the @math -norm, of the estimator in terms of the excess risk. We show that convergence can be achieved already under milder conditions. We then use the concept of modulus of continuity, of which strongly proper scoring rules are a particular case, to analyze the rate of convergence.
{ "cite_N": [ "@cite_10" ], "mid": [ "2141789531" ], "abstract": [ "The problem of bipartite ranking, where instances are labeled positive or negative and the goal is to learn a scoring function that minimizes the probability of mis-ranking a pair of positive and negative instances (or equivalently, that maximizes the area under the ROC curve), has been widely studied in recent years. A dominant theoretical and algorithmic framework for the problem has been to reduce bipartite ranking to pairwise classification; in particular, it is well known that the bipartite ranking regret can be formulated as a pairwise classification regret, which in turn can be upper bounded using usual regret bounds for classification problems. Recently, (2011) showed regret bounds for bipartite ranking in terms of the regret associated with balanced versions of the standard (non-pairwise) logistic and exponential losses. In this paper, we show that such (nonpairwise) surrogate regret bounds for bipartite ranking can be obtained in terms of a broad class of proper (composite) losses that we term as strongly proper. Our proof technique is much simpler than that of (2011), and relies on properties of proper (composite) losses as elucidated recently by Reid and Williamson (2010, 2011) and others. Our result yields explicit surrogate bounds (with no hidden balancing terms) in terms of a variety of strongly proper losses, including for example logistic, exponential, squared and squared hinge losses as special cases. An important consequence is that standard algorithms minimizing a (non-pairwise) strongly proper loss, such as logistic regression and boosting algorithms (assuming a universal function class and appropriate regularization), are in fact consistent for bipartite ranking; moreover, our results allow us to quantify the bipartite ranking regret in terms of the corresponding surrogate regret. We also obtain tighter surrogate bounds under certain low-noise conditions via a recent result of Clemencon and Robbiano (2011)." ] }
1908.11829
2970633507
We consider the minimum cut problem in undirected, weighted graphs. We give a simple algorithm to find a minimum cut that @math -respects (cuts two edges of) a spanning tree @math of a graph @math . This procedure can be used in place of the complicated subroutine given in Karger's near-linear time minimum cut algorithm (J. ACM, 2000). We give a self-contained version of Karger's algorithm with the new procedure, which is easy to state and relatively simple to implement. It produces a minimum cut on an @math -edge, @math -vertex graph in @math time with high probability. This performance matches that achieved by Karger, thereby matching the current state of the art.
On an unweighted graph, Gabow @cite_13 showed how to compute the minimum cut in @math time, where @math is the capacity of the minimum cut. Karger @cite_29 improved Gabow's algorithm by applying random sampling, achieving a Las Vegas runtime of @math (the @math notation hides @math factors). The sampling technique developed by Karger @cite_29 , combined with the tree-packing technique devised by Gabow @cite_13 , forms the basis of Karger's near-linear time minimum cut algorithm @cite_16 . As previously mentioned, this technique finds the minimum cut in an undirected, weighted graph in @math time with high probability.
{ "cite_N": [ "@cite_29", "@cite_16", "@cite_13" ], "mid": [ "2150516767", "1964510837", "2012287357" ], "abstract": [ "We use random sampling as a tool for solving undirected graph problems. We show that the sparse graph, or skeleton, that arises when we randomly sample a graph's edges will accurately approximate the value of all cuts in the original graph with high probability. This makes sampling effective for problems involving cuts in graphs. We present fast randomized (Monte Carlo and Las Vegas) algorithms for approximating and exactly finding minimum cuts and maximum flows in unweighted, undirected graphs. Our cut-approximation algorithms extend unchanged to weighted graphs while our weighted-graph flow algorithms are somewhat slower. Our approach gives a general paradigm with potential applications to any packing problem. It has since been used in a near-linear time algorithm for finding minimum cuts, as well as faster cut and flow algorithms. Our sampling theorems also yield faster algorithms for several other cut-based problems, including approximating the best balanced cut of a graph, finding a k-connected orientation of a 2k-connected graph, and finding integral multicommodity flows in graphs with a great deal of excess capacity. Our methods also improve the efficiency of some parallel cut and flow algorithms. Our methods also apply to the network design problem, where we wish to build a network satisfying certain connectivity requirements between vertices. We can purchase edges of various costs and wish to satisfy the requirements at minimum total cost. Since our sampling theorems apply even when the sampling probabilities are different for different edges, we can apply randomized rounding to solve network design problems. This gives approximation algorithms that guarantee much better approximations than previous algorithms whenever the minimum connectivity requirement is large. As a particular example, we improve the best approximation bound for the minimum k-connected subgraph problem from 1.85 to [math not displayed].", "We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a \"semiduality\" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized (Monte Carlo) algorithm that finds a minimum cut in an m -edge, n -vertex graph with high probability in O (m log 3 n ) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O( m log 3 n ) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O ( n 2 log 3 n ). Other applications of the tree-packing approach are new, nearly tight bounds on the number of near-minimum cuts a graph may have and a new data structure for representing them in a space-efficient manner.", "We present an algorithm that finds the edge connectivity ? of a graph having n vectices and m edges. The running time is O(? m log(n2 m)) for directed graphs and slightly less for undirected graphs, O(m+?2n log(n ?)). This improves the previous best time bounds, O(min mn, ?2n2 ) for directed graphs and O(?n2) for undirected graphs. We present an algorithm that finds k edge-disjoint arborescences on a directed graph in time O((kn)2). This improves the previous best time bound, O(kmn + k3n2). Unlike previous work, our approach is based on two theorems of Edmonds that link these two problems and show how they can be solved." ] }
1908.11829
2970633507
We consider the minimum cut problem in undirected, weighted graphs. We give a simple algorithm to find a minimum cut that @math -respects (cuts two edges of) a spanning tree @math of a graph @math . This procedure can be used in place of the complicated subroutine given in Karger's near-linear time minimum cut algorithm (J. ACM, 2000). We give a self-contained version of Karger's algorithm with the new procedure, which is easy to state and relatively simple to implement. It produces a minimum cut on an @math -edge, @math -vertex graph in @math time with high probability. This performance matches that achieved by Karger, thereby matching the current state of the art.
A recent development uses low-conductance cuts to find the minimum cut in an undirected unweighted graph. This technique was introduced by Kawarabayashi and Thorup @cite_2 , who achieve near-linear deterministic time (estimated to be @math ). This was improved by Henzinger, Rao, and Wang @cite_31 , who achieve deterministic runtime @math . Although the algorithm of @cite_31 is more efficient than Karger's algorithm @cite_16 on unweighted graphs, both that procedure and the one it was based on @cite_2 are quite involved, which makes them largely impractical for implementation purposes.
{ "cite_N": [ "@cite_31", "@cite_16", "@cite_2" ], "mid": [ "2569104968", "1964510837", "2963972775" ], "abstract": [ "We study the problem of computing a minimum cut in a simple, undirected graph and give a deterministic O(m log2 n log log2 n) time algorithm. This improves both on the best previously known deterministic running time of O(m log12 n) (Kawarabayashi and Thorup [12]) and the best previously known randomized running time of O(m log3 n) (Karger [11]) for this problem, though Karger's algorithm can be further applied to weighted graphs. Our approach is using the Kawarabayashi and Thorup graph compression technique, which repeatedly finds low-conductance cuts. To find these cuts they use a diffusion-based local algorithm. We use instead a flow-based local algorithm and suitably adjust their framework to work with our flow-based subroutine. Both flow and diffusion based methods have a long history of being applied to finding low conductance cuts. Diffusion algorithms have several variants that are naturally local while it is more complicated to make flow methods local. Some prior work has proven nice properties for local flow based algorithms with respect to improving or cleaning up low conductance cuts. Our flow subroutine, however, is the first that is both local and produces low conductance cuts. Thus, it may be of independent interest.", "We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a \"semiduality\" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized (Monte Carlo) algorithm that finds a minimum cut in an m -edge, n -vertex graph with high probability in O (m log 3 n ) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O( m log 3 n ) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O ( n 2 log 3 n ). Other applications of the tree-packing approach are new, nearly tight bounds on the number of near-minimum cuts a graph may have and a new data structure for representing them in a space-efficient manner.", "We present a deterministic algorithm that computes the edge-connectivity of a graph in near-linear time. This is for a simple undirected unweighted graph G with n vertices and m edges. This is the first o(mn) time deterministic algorithm for the problem. Our algorithm is easily extended to find a concrete minimum edge-cut. In fact, we can construct the classic cactus representation of all minimum cuts in near-linear time. The previous fastest deterministic algorithm by Gabow from STOC '91 took O(m+λ2 n), where λ is the edge connectivity, but λ can be as big as n−1. Karger presented a randomized near-linear time Monte Carlo algorithm for the minimum cut problem at STOC’96, but the returned cut is only minimum with high probability. Our main technical contribution is a near-linear time algorithm that contracts vertex sets of a simple input graph G with minimum degree Δ, producing a multigraph Ḡ with O(m Δ) edges, which preserves all minimum cuts of G with at least two vertices on each side. In our deterministic near-linear time algorithm, we will decompose the problem via low-conductance cuts found using PageRank a la Brin and Page (1998), as analyzed by Andersson, Chung, and Lang at FOCS’06. 
Normally, such algorithms for low-conductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. However, in our case, we have so much structure that no guessing is needed." ] }
1908.11656
2971124296
We propose LU-Net -- for LiDAR U-Net, a new method for the semantic segmentation of a 3D LiDAR point cloud. Instead of applying some global 3D segmentation method such as PointNet, we propose an end-to-end architecture for LiDAR point cloud semantic segmentation that efficiently solves the problem as an image processing problem. We first extract high-level 3D features for each point given its 3D neighbors. Then, these features are projected into a 2D multichannel range-image by considering the topology of the sensor. Thanks to these learned features and this projection, we can finally perform the segmentation using a simple U-Net segmentation network, which performs very well while being very efficient. In this way, we can exploit both the 3D nature of the data and the specificity of the LiDAR sensor. This approach outperforms the state-of-the-art by a large margin on the KITTI dataset, as our experiments show. Moreover, this approach operates at 24fps on a single GPU. This is above the acquisition rate of common LiDAR sensors which makes it suitable for real-time applications.
Semantic segmentation of images has been the subject of many works in past years. Recently, deep learning methods have largely outperformed previous ones. The method presented in @cite_16 was the first to propose an accurate end-to-end network for semantic segmentation. This method is based on an encoder in which each scale is used to compute the final segmentation. Only a few months later, the U-Net architecture @cite_20 was proposed for the semantic segmentation of medical images. This method is an encoder-decoder able to provide highly precise segmentations. These two methods have largely influenced recent works such as DeepLabV3+ @cite_11 , which uses dilated convolutional layers and spatial pyramid pooling modules in an encoder-decoder structure to improve the quality of the prediction. Other approaches explore multi-scale architectures to produce and fuse segmentations performed at different scales @cite_15 @cite_7 . Most of these methods are able to produce very accurate results on various types of images (medical, outdoor, indoor). The survey @cite_1 of CNN methods for semantic segmentation provides a deep analysis of some recent techniques. This work demonstrates that a combination of various components would most likely improve segmentation results on wider classes of objects.
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_15", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "2611259176", "2898743055", "2563705555", "1903029394", "1901129140", "2964309882" ], "abstract": [ "We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.", "Majority of CNN architecture design is aimed at achieving high accuracy in public benchmarks by increasing the complexity. Typically, they are over-specified by a large margin and can be optimized by a factor of 10-100x with only a small reduction in accuracy. In spite of the increase in computational power of embedded systems, these networks are still not suitable for embedded deployment. There is a large need to optimize for hardware and reduce the size of the network by orders of magnitude for computer vision applications. This has led to a growing community which is focused on designing efficient networks. However, CNN architectures are evolving rapidly and efficient architectures seem to lag behind. There is also a gap in understanding the hardware architecture details and incorporating it into the network design. The motivation of this paper is to systematically summarize efficient design techniques and provide guidelines for an application developer. We also perform a case study by benchmarking various semantic segmentation algorithms for autonomous driving.", "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. 
Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89 and 82.1 without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https: github.com tensorflow models tree master research deeplab." ] }
1908.11656
2971124296
We propose LU-Net -- for LiDAR U-Net, a new method for the semantic segmentation of a 3D LiDAR point cloud. Instead of applying some global 3D segmentation method such as PointNet, we propose an end-to-end architecture for LiDAR point cloud semantic segmentation that efficiently solves the problem as an image processing problem. We first extract high-level 3D features for each point given its 3D neighbors. Then, these features are projected into a 2D multichannel range-image by considering the topology of the sensor. Thanks to these learned features and this projection, we can finally perform the segmentation using a simple U-Net segmentation network, which performs very well while being very efficient. In this way, we can exploit both the 3D nature of the data and the specificity of the LiDAR sensor. This approach outperforms the state-of-the-art by a large margin on the KITTI dataset, as our experiments show. Moreover, this approach operates at 24fps on a single GPU. This is above the acquisition rate of common LiDAR sensors which makes it suitable for real-time applications.
Recently, SqueezeSeg, a novel approach for the semantic segmentation of a LiDAR point cloud represented as a spherical range-image @cite_14 , was proposed. This representation allows the segmentation to be performed using simple 2D convolutions, which lowers the computational cost while keeping good accuracy. The architecture is derived from the SqueezeNet image classification network @cite_13 . The intermediate layers are "fire layers", layers made of one squeeze module and one expansion module. Later on, the same authors improved this method in @cite_3 by adding a context aggregation module and by considering focal loss and batch normalization to improve the quality of the segmentation. A similar range-image approach was proposed in @cite_17 , where an Atrous Spatial Pyramid Pooling module @cite_4 and a squeeze reweighting layer @cite_8 are added. Finally, in @cite_10 , the authors propose to feed a range-image directly to the U-Net architecture described in @cite_20 . This method achieves results that are comparable to the state of the art of range-image methods with a much simpler and more intuitive architecture. All these range-image methods achieve real-time computation. However, their results often lack accuracy, which limits their usage in real scenarios.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_10", "@cite_3", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "", "2412782625", "", "2946747865", "2968557240", "2279098554", "1901129140", "2884355388" ], "abstract": [ "", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "", "This paper proposes RIU-Net (for Range-Image U-Net), the adaptation of a popular semantic segmentation network for the semantic segmentation of a 3D LiDAR point cloud. The point cloud is turned into a 2D range-image by exploiting the topology of the sensor. This image is then used as input to a U-net. This architecture has already proved its efficiency for the task of semantic segmentation of medical images. We demonstrate how it can also be used for the accurate semantic segmentation of a 3D LiDAR point cloud and how it represents a valid bridge between image processing and 3D point cloud processing. Our model is trained on range-images built from KITTI 3D object detection dataset. Experiments show that RIU-Net, despite being very simple, offers results that are comparable to the state-of-the-art of range-image based methods. Finally, we demonstrate that this architecture is able to operate at 90fps on a single GPU, which enables deployment for real-time segmentation.", "Earlier work demonstrates the promise of deep-learning-based approaches for point cloud segmentation; however, these approaches need to be improved to be practically useful. To this end, we introduce a new model SqueezeSegV2. With an improved model structure, SqueezeSetV2 is more robust against dropout noises in LiDAR point cloud and therefore achieves significant accuracy improvement. Training models for point cloud segmentation requires large amounts of labeled data, which is expensive to obtain. 
To sidestep the cost of data collection and annotation, simulators such as GTA-V can be used to create unlimited amounts of labeled, synthetic data. However, due to domain shift, models trained on synthetic data often do not generalize well to the real world. Existing domain-adaptation methods mainly focus on images and most of them cannot be directly applied to point clouds. We address this problem with a domain-adaptation training pipeline consisting of three major components: 1) learned intensity rendering, 2) geodesic correlation alignment, and 3) progressive domain calibration. When trained on real data, our new model exhibits segmentation accuracy improvements of 6.0-8.6 over the original SqueezeSeg. When training our new model on synthetic data using the proposed domain adaptation pipeline, we nearly double test accuracy on real-world data, from 29.0 to 57.4 . Our source code and synthetic dataset are open sourced11https: github.com xuanyuzhou98 SqueezeSegV2", "Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "In this paper, we propose PointSeg, a real-time end-to-end semantic segmentation method for road-objects based on spherical images. We take the spherical image, which is transformed from the 3D LiDAR point clouds, as input of the convolutional neural networks (CNNs) to predict the point-wise semantic map. To make PointSeg applicable on a mobile system, we build the model based on the light-weight network, SqueezeNet, with several improvements. It maintains a good balance between memory cost and prediction performance. 
Our model is trained on spherical images and label masks projected from the KITTI 3D object detection dataset. Experiments show that PointSeg can achieve competitive accuracy with 90fps on a single GPU 1080ti. which makes it quite compatible for autonomous driving applications." ] }
1908.11769
2971290534
Rewriting logic is naturally concurrent: several subterms of the state term can be rewritten simultaneously. But state terms are global, which makes compositionality difficult to achieve. Compositionality here means being able to decompose a complex system into its functional components and code each as an isolated and encapsulated system. Our goal is to help bringing compositionality to system specification in rewriting logic. The base of our proposal is the operation that we call synchronous composition. We discuss the motivations and implications of our proposal, formalize it for rewriting logic and also for transition structures, to be used as semantics, and show the power of our approach with some examples. This paper is under consideration in Theory and Practice of Logic Programming (TPLP).
Several temporal logics have been proposed that make joint use of actions and propositions on states: ACTL* @cite_34 , RLTL @cite_0 , SE-LTL @cite_4 , TLR* @cite_25 . There are also transition structures with mixed ingredients: LKS @cite_4 , L2TS @cite_54 . Although all of them bring actions (or transitions) to the focus, none tries to be utterly egalitarian, as we do.
{ "cite_N": [ "@cite_4", "@cite_54", "@cite_0", "@cite_34", "@cite_25" ], "mid": [ "", "2167352300", "2024215898", "1602925513", "1599831144" ], "abstract": [ "", "Three temporal logics are introduced which induce on labeled transition systems the same identifications as branching bisimulation. The first is an extension of Hennessy-Milner logic with a kind of unit operator. The second is another extension of Hennessy-Milner logic which exploits the power of backward modalities. The third is CTL* with the next-time operator interpreted over all paths, not just over maximal ones. A relevant side effect of the last characterization is that it sets a bridge between the state- and event-based approaches to the semantics of concurrent systems. >", "We study efficient translations of Regular Linear Temporal Logic ( ) into automata on infinite words. is a temporal logic that fuses Linear Temporal Logic (LTL) with regular expressions, extending its expressive power to all @math -regular languages. The first contribution of this paper is a novel bottom up translation from into alternating parity automata of linear size that requires only colors @math , @math and @math . Moreover, the resulting automata enjoy the stratified internal structure of hesitant automata. Our translation is defined inductively for every operator, and does not require an upfront transformation of the expression into a normal form. Our construction builds at every step two automata: one equivalent to the formula and another to its complement. Inspired by this construction, our second contribution is to extend with new operators, including universal sequential composition, that enrich the logic with duality laws and negation normal forms. The third contribution is a ranking translation of the resulting alternating automata into non-deterministic automata. To provide this efficient translation we introduce the notion of stratified rankings, and show how the translation is optimal for the LTL fragment of the logic.", "A temporal logic based on actions rather than on states is presented and interpreted over labelled transition systems. It is proved that it has essentially the same power as CTL*, a temporal logic interpreted over Kripke structures. The relationship between the two logics is established by introducing two mappings from Kripke structures to labelled transition systems and viceversa and two transformation functions between the two logics which preserve truth. A branching time version of the action based logic is also introduced. This new logic for transition systems can play an important role as an intermediate between Hennessy-Milner Logic and the modal μ-calculus. It is sufficiently expressive to describe safety and liveness properties but permits model checking in linear time.", "This paper presents the temporal logic of rewriting @math . Syntactically, @math is a very simple extension of @math which just adds action atoms, in the form of spatial action patterns, to @math . Semantically and pragmatically, however, when used together with rewriting logic as a \"tandem\" of system specification and property specification logics, it has substantially more expressive power than purely state-based logics like @math , or purely action-based logics like A- @math . Furthermore, it avoids the system property mismatch problem experienced in state-based or action-based logics, which makes many useful properties inexpressible in those frameworks without unnatural changes to a system's specification. 
The advantages in expresiveness of @math are gained without losing the ability to use existing tools and algorithms to model check its properties: a faithful translation of models and formulas is given that allows verifying @math properties with @math model checkers." ] }
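
The rows above illustrate the record layout: each example carries the target paper's aid, mid and abstract, the related_work paragraph to be generated, and a ref_abstract structure holding parallel lists of citation markers (cite_N), paper ids (mid) and source abstracts. The snippet below is a heavily truncated, illustrative record together with one way to pair each citation marker with its source text; the truncated strings and the two-entry lists are placeholders, not the full data.

```python
# Illustrative only: a truncated record in the layout shown in the rows above.
example = {
    "aid": "1908.11656",
    "mid": "2971124296",
    "abstract": "We propose LU-Net ...",          # target paper's abstract (truncated)
    "related_work": "Recently, SqueezeSeg ...",   # paragraph to be generated (truncated)
    "ref_abstract": {
        "cite_N": ["@cite_14", "@cite_20"],       # only two of the references kept here
        "mid": ["", "1901129140"],
        "abstract": ["", "There is large consent that ..."],
    },
}

# Pair each citation marker with its (possibly empty) source abstract.
refs = example["ref_abstract"]
sources = {
    cite: text
    for cite, text in zip(refs["cite_N"], refs["abstract"])
    if text  # some slots are empty strings, as visible in the rows above
}
print(list(sources))  # ['@cite_20']
```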

This is a copy of the Multi-XScience dataset, except that the input source documents of its test split have been replaced with documents retrieved by a sparse retriever. The retrieval pipeline used (a code sketch follows the list):

  • query: The related_work field of each example
  • corpus: The union of all documents in the train, validation and test splits
  • retriever: BM25 via PyTerrier with default settings
  • top-k strategy: "mean", i.e. the number of documents retrieved per query, k, is set to the mean number of source documents per example in this dataset, which here gives k = 4
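
A minimal sketch of how this pipeline could be reproduced is given below. Everything not stated in the list above is an assumption: the upstream dataset id on the Hugging Face Hub (multi_x_science_sum), the choice to build the corpus from the referenced abstracts keyed by their mid, and the light query sanitisation applied before handing text to Terrier's query parser. The exact preprocessing used to produce this copy may differ.

```python
# Sketch only; see the assumptions stated above.
import os
import re
import statistics

import pandas as pd
import pyterrier as pt
from datasets import load_dataset

if not pt.started():
    pt.init()

dataset = load_dataset("multi_x_science_sum")  # assumed upstream dataset id

# Corpus: union of the documents referenced in the train, validation and test splits.
corpus = {}
for split in ("train", "validation", "test"):
    for ex in dataset[split]:
        refs = ex["ref_abstract"]
        for mid, text in zip(refs["mid"], refs["abstract"]):
            if mid and text:
                corpus[mid] = text

index_ref = pt.IterDictIndexer(os.path.abspath("mxs_bm25_index")).index(
    ({"docno": mid, "text": text} for mid, text in corpus.items())
)

# k = mean number of source documents per example ("mean" strategy; the card reports k = 4).
k = round(statistics.mean(
    len(ex["ref_abstract"]["mid"]) for split in dataset for ex in dataset[split]
))

# BM25 via PyTerrier with default settings, cut off at rank k.
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25") % k

def to_query(text: str) -> str:
    # Drop citation markers and punctuation that Terrier's query parser rejects.
    text = re.sub(r"@cite_\d+", " ", text)
    return re.sub(r"[^A-Za-z0-9 ]+", " ", text)

# Queries: the related_work field of each example (here, the test split).
topics = pd.DataFrame(
    [{"qid": str(i), "query": to_query(ex["related_work"])}
     for i, ex in enumerate(dataset["test"])]
)
results = bm25.transform(topics)  # columns include qid, docno, rank, score
print(results.head())
```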

Retrieval results on the train set:

Recall@100    Rprec     Precision@k    Recall@k
0.5482        0.2243    0.1578         0.2689

Retrieval results on the validation set:

Recall@100    Rprec     Precision@k    Recall@k
0.5476        0.2209    0.1592         0.2650

Retrieval results on the test set:

Recall@100    Rprec     Precision@k    Recall@k
0.548         0.2272    0.1611         0.2704
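
The numbers above are standard rank-based retrieval measures, computed per query against the set of gold source documents and then averaged over the split. The sketch below shows one way these four columns can be computed; it is independent of any evaluation library and is not necessarily the exact tooling used to produce the tables.

```python
from typing import Dict, List, Sequence, Set

def retrieval_metrics(ranked: Sequence[str], relevant: Set[str], k: int = 4) -> Dict[str, float]:
    """Per-query Recall@100, Rprec, Precision@k and Recall@k."""
    r = len(relevant)
    if r == 0:
        return {"Recall@100": 0.0, "Rprec": 0.0, "Precision@k": 0.0, "Recall@k": 0.0}

    def hits(n: int) -> int:
        return len(set(ranked[:n]) & relevant)

    return {
        "Recall@100": hits(100) / r,   # fraction of gold docs found in the top 100
        "Rprec": hits(r) / r,          # precision at rank r = number of gold docs
        "Precision@k": hits(k) / k,    # fraction of the k retrieved docs that are gold
        "Recall@k": hits(k) / r,       # fraction of gold docs among the k retrieved
    }

def mean_metrics(runs: List[Sequence[str]], gold: List[Set[str]], k: int = 4) -> Dict[str, float]:
    """Macro-average the per-query metrics over all queries of a split."""
    per_query = [retrieval_metrics(run, rel, k) for run, rel in zip(runs, gold)]
    return {m: sum(q[m] for q in per_query) / len(per_query) for m in per_query[0]}

# Toy usage: one query, two gold documents, k = 4.
print(retrieval_metrics(["d3", "d7", "d1", "d9"], {"d1", "d2"}))
# -> {'Recall@100': 0.5, 'Rprec': 0.0, 'Precision@k': 0.25, 'Recall@k': 0.5}
```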