Datasets:
Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: found
Source Datasets: original
Tags:
License:
Columns:
aid: string (lengths 9 to 15)
mid: string (lengths 7 to 10)
abstract: string (lengths 78 to 2.56k)
related_work: string (lengths 92 to 1.77k)
ref_abstract: sequence (parallel lists cite_N, mid, abstract)
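A minimal sketch of how a dataset with this schema could be loaded and inspected with the Hugging Face datasets library; the repository id "user/related-work-corpus" is a placeholder, not the actual name of this dataset:

from datasets import load_dataset

# Placeholder repository id; substitute the real dataset path.
ds = load_dataset("user/related-work-corpus", split="train")

row = ds[0]
print(row["aid"], row["mid"])          # arXiv id and MAG id of the paper
print(row["abstract"][:200])           # abstract of the citing paper
print(row["related_work"][:200])       # related-work passage with @cite_N markers
print(row["ref_abstract"]["cite_N"])   # e.g. ['@cite_16', '@cite_26'], as in the rows below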
math9912167
1631980677
Author(s): Kuperberg, Greg; Thurston, Dylan P. | Abstract: We give a purely topological definition of the perturbative quantum invariants of links and 3-manifolds associated with Chern-Simons field theory. Our definition is as close as possible to one given by Kontsevich. We will also establish some basic properties of these invariants, in particular that they are universally finite type with respect to algebraically split surgery and with respect to Torelli surgery. Torelli surgery is a mutual generalization of blink surgery of Garoufalidis and Levine and clasper surgery of Habiro.
Two other generalizations that can be considered are invariants of graphs in 3-manifolds, and invariants associated to other flat connections @cite_16 . We will analyze these in future work. Among other things, there should be a general relation between flat bundles and links in 3-manifolds on the one hand and finite covers and branched covers on the other hand @cite_26 .
{ "cite_N": [ "@cite_16", "@cite_26" ], "mid": [ "2080743555", "1641082372" ], "abstract": [ "The aim of this paper is to construct new topological invariants of compact oriented 3-manifolds and of framed links in such manifolds. Our invariant of (a link in) a closed oriented 3-manifold is a sequence of complex numbers parametrized by complex roots of 1. For a framed link in S 3 the terms of the sequence are equale to the values of the (suitably parametrized) Jones polynomial of the link in the corresponding roots of 1. In the case of manifolds with boundary our invariant is a (sequence of) finite dimensional complex linear operators. This produces from each root of unity q a 3-dimensional topological quantum field theory", "Recently, Mullins calculated the Casson-Walker invariant of the 2-fold cyclic branched cover of an oriented link in S^3 in terms of its Jones polynomial and its signature, under the assumption that the 2-fold branched cover is a rational homology 3-sphere. Using elementary principles, we provide a similar calculation for the general case. In addition, we calculate the LMO invariant of the p-fold branched cover of twisted knots in S^3 in terms of the Kontsevich integral of the knot." ] }
cs9910011
2168463568
A statistical model for segmentation and word discovery in child directed speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described and results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.
Model Based Dynamic Programming, hereafter referred to as MBDP-1 @cite_0 , is probably the most recent work that addresses the exact same issue as that considered in this paper. Both the approach presented in this paper and Brent's MBDP-1 are based on explicit probability models. Approaches not based on explicit probability models include those based on information theoretic criteria such as MDL , transitional probability or simple recurrent networks . The maximum likelihood approach due to Olivier:SGL68 is probabilistic in the sense that it is geared towards explicitly calculating the most probable segmentation of each block of input utterances. However, it is not based on a formal statistical model. To avoid needless repetition, we only describe Brent's MBDP-1 below and direct the interested reader at Brent:EPS99 which provides an excellent review of many of the algorithms mentioned above.
{ "cite_N": [ "@cite_0" ], "mid": [ "2949560198" ], "abstract": [ "We initiate the probabilistic analysis of linear programming (LP) decoding of low-density parity-check (LDPC) codes. Specifically, we show that for a random LDPC code ensemble, the linear programming decoder of Feldman succeeds in correcting a constant fraction of errors with high probability. The fraction of correctable errors guaranteed by our analysis surpasses previous nonasymptotic results for LDPC codes, and in particular, exceeds the best previous finite-length result on LP decoding by a factor greater than ten. This improvement stems in part from our analysis of probabilistic bit-flipping channels, as opposed to adversarial channels. At the core of our analysis is a novel combinatorial characterization of LP decoding success, based on the notion of a flow on the Tanner graph of the code. An interesting by-product of our analysis is to establish the existence of ldquoprobabilistic expansionrdquo in random bipartite graphs, in which one requires only that almost every (as opposed to every) set of a certain size expands, for sets much larger than in the classical worst case setting." ] }
cs9911003
2950670108
We solve the subgraph isomorphism problem in planar graphs in linear time, for any pattern of constant size. Our results are based on a technique of partitioning the planar graph into pieces of small tree-width, and applying dynamic programming within each piece. The same methods can be used to solve other planar graph problems including connectivity, diameter, girth, induced subgraph isomorphism, and shortest paths.
Recently we were able to characterize the graphs that can occur at most @math times as a subgraph isomorph in an @math -vertex planar graph: they are exactly the 3-connected planar graphs @cite_41 . However our proof does not lead to an efficient algorithm for 3-connected planar subgraph isomorphism. In this paper we use different techniques which do not depend on high-order connectivity.
{ "cite_N": [ "@cite_41" ], "mid": [ "1799262171" ], "abstract": [ "Given two graphs @math and @math , the Subgraph Isomorphism problem asks if @math is isomorphic to a subgraph of @math . While NP-hard in general, algorithms exist for various parameterized versions of the problem: for example, the problem can be solved (1) in time @math using the color-coding technique of Alon, Yuster, and Zwick; (2) in time @math using Courcelle's Theorem; (3) in time @math using a result on first-order model checking by Frick and Grohe; or (4) in time @math for connected @math using the algorithm of Matou s ek and Thomas. Already this small sample of results shows that the way an algorithm can depend on the parameters is highly nontrivial and subtle. We develop a framework involving 10 relevant parameters for each of @math and @math (such as treewidth, pathwidth, genus, maximum degree, number of vertices, number of components, etc.), and ask if an algorithm with running time [ f_1(p_1,p_2,..., p_ ) n^ f_2(p_ +1 ,..., p_k) ] exist, where each of @math is one of the 10 parameters depending only on @math or @math . We show that all the questions arising in this framework are answered by a set of 11 maximal positive results (algorithms) and a set of 17 maximal negative results (hardness proofs); some of these results already appear in the literature, while others are new in this paper. On the algorithmic side, our study reveals for example that an unexpected combination of bounded degree, genus, and feedback vertex set number of @math gives rise to a highly nontrivial algorithm for Subgraph Isomorphism. On the hardness side, we present W[1]-hardness proofs under extremely restricted conditions, such as when @math is a bounded-degree tree of constant pathwidth and @math is a planar graph of bounded pathwidth." ] }
hep-th9908200
2160091034
Daviau showed the equivalence of matrix Dirac theory, formulated within a spinor bundle S_x ≃ C^4_x, to a Clifford algebraic formulation within the space Clifford algebra Cℓ(R^3, δ) ≃ M_2(C) ≃ P, the Pauli algebra (matrices) ≃ ℍ ⊕ ℍ ≃ biquaternions. We will show that Daviau's map θ : C^4 → M_2(C) is an isomorphism. It is shown that Hestenes' and Parra's formulations are equivalent to Daviau's Clifford algebra formulation, which uses outer automorphisms. The connection between the different formulations is quite remarkable, since it connects the left and right action on the Pauli algebra itself, viewed as a bi-module, with the left (resp. right) action of the enveloping algebra P^e ≃ P ⊗ P^T on P. The isomorphism established in this article and given by Daviau's map clearly shows that right and left actions are of similar type. This should be compared with attempts of Hestenes, Daviau, and others to interpret the right action as the iso-spin freedom.
A further genuine and important approach to the spinor-tensor transition was developed starting probably with Crawford by P. Lounesto, @cite_6 and references there. He investigated the question, how a spinor field can be reconstructed from known tensor densities. The major characterization is derived, using Fierz-Kofink identities, from elements called Boomerangs --because they are able to come back to the spinorial picture. Lounesto's result is a characterization of spinors based on multi-vector relations which unveils a new unknown type of spinor.
{ "cite_N": [ "@cite_6" ], "mid": [ "2198155329" ], "abstract": [ "Patch-based low-rank models have shown effective in exploiting spatial redundancy of natural images especially for the application of image denoising. However, two-dimensional low-rank model can not fully exploit the spatio-temporal correlation in larger data sets such as multispectral images and 3D MRIs. In this work, we propose a novel low-rank tensor approximation framework with Laplacian Scale Mixture (LSM) modeling for multi-frame image denoising. First, similar 3D patches are grouped to form a tensor of d-order and high-order Singular Value Decomposition (HOSVD) is applied to the grouped tensor. Then the task of multiframe image denoising is formulated as a Maximum A Posterior (MAP) estimation problem with the LSM prior for tensor coefficients. Both unknown sparse coefficients and hidden LSM parameters can be efficiently estimated by the method of alternating optimization. Specifically, we have derived closed-form solutions for both subproblems. Experimental results on spectral and dynamic MRI images show that the proposed algorithm can better preserve the sharpness of important image structures and outperform several existing state-of-the-art multiframe denoising methods (e.g., BM4D and tensor dictionary learning)." ] }
cs9903014
1612660921
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Pioneering research in dynamic runtime optimization was done by Hansen @cite_8 who first described a fully automated system for runtime code optimization. His system was similar in structure to our system---it was composed of a loader, a profiler, and an optimizer---but used profiling data only to decide when to optimize and what to optimize, not how to optimize. Also, his system interpreted code prior to optimization, since load time code generation was too memory and time consuming at the time.
{ "cite_N": [ "@cite_8" ], "mid": [ "2165006697" ], "abstract": [ "Optimizing programs at run-time provides opportunities to apply aggressive optimizations to programs based on information that was not available at compile time. At run time, programs can be adapted to better exploit architectural features, optimize the use of dynamic libraries, and simplify code based on run-time constants. Our profiling system provides a framework for collecting information required for performing run-time optimization. We sample the performance hardware registers available on an Itanium processor, and select a set of code that is likely to lead to important performance-events. We gather distribution information about the performance-events we wish to monitor, and test our traces by estimating the ability for dynamic patching of a program to execute run-time generated traces. Our results show that we are able to capture 58 of execution time across various SPEC2000 integer benchmarks using our profile and patching techniques on a relatively small number of frequently executed execution paths. Our profiling and detection system overhead increases execution time by only 2-4 ." ] }
cs9903014
1612660921
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Hansen's work was followed by several other projects that have investigated the benefits of runtime optimization: the Smalltalk @cite_33 and SELF @cite_0 systems that focused on the benefits of dynamic optimization in an object-oriented environment; "Morph", a project developed at Harvard University @cite_16 ; and the system described by the authors of this paper @cite_4 @cite_30 . Other projects have experimented with optimization at link time rather than at runtime @cite_18 . At link time, many of the problems described in this paper are non-existent. Among them the decision when to optimize, what to optimize, and how to replace code. However, there is also a price to pay, namely that it cannot be performed in the presence of dynamic loading.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_4", "@cite_33", "@cite_0", "@cite_16" ], "mid": [ "2171438172", "2788459922", "2165006697", "2116210226", "2156560068", "1524758670" ], "abstract": [ "Traditional query optimizers assume accurate knowledge of run-time parameters such as selectivities and resource availability during plan optimization, i.e., at compile time. In reality, however, this assumption is often not justified. Therefore, the “static” plans produced by traditional optimizers may not be optimal for many of their actual run-time invocations. Instead, we propose a novel optimization model that assigns the bulk of the optimization effort to compile-time and delays carefully selected optimization decisions until run-time. Our previous work defined the run-time primitives, “dynamic plans” using “choose-plan” operators, for executing such delayed decisions, but did not solve the problem of constructing dynamic plans at compile-time. The present paper introduces techniques that solve this problem. Experience with a working prototype optimizer demonstrates (i) that the additional optimization and start-up overhead of dynamic plans compared to static plans is dominated by their advantage at run-time, (ii) that dynamic plans are as robust as the “brute-force” remedy of run-time optimization, i.e., dynamic plans maintain their optimality even if parameters change between compile-time and run-time, and (iii) that the start-up overhead of dynamic plans is significantly less than the time required for complete optimization at run-time. In other words, our proposed techniques are superior to both techniques considered to-date, namely compile-time optimization into a single static plan as well as run-time optimization. Finally, we believe that the concepts and technology described can be transferred to commercial query optimizers in order to improve the performance of embedded queries with host variables in the query predicate and to adapt to run-time system loads unpredictable at compile time.", "This paper presents the interesting observation that by performing fewer of the optimizations available in a standard compiler optimization level such as -02, while preserving their original ordering, significant savings can be achieved in both execution time and energy consumption. This observation has been validated on two embedded processors, namely the ARM Cortex-M0 and the ARM Cortex-M3, using two different versions of the LLVM compilation framework; v3.8 and v5.0. Experimental evaluation with 71 embedded benchmarks demonstrated performance gains for at least half of the benchmarks for both processors. An average execution time reduction of 2.4 and 5.3 was achieved across all the benchmarks for the Cortex-M0 and Cortex-M3 processors, respectively, with execution time improvements ranging from 1 up to 90 over the -02. The savings that can be achieved are in the same range as what can be achieved by the state-of-the-art compilation approaches that use iterative compilation or machine learning to select flags or to determine phase orderings that result in more efficient code. In contrast to these time consuming and expensive to apply techniques, our approach only needs to test a limited number of optimization configurations, less than 64, to obtain similar or even better savings. 
Furthermore, our approach can support multi-criteria optimization as it targets execution time, energy consumption and code size at the same time.", "Optimizing programs at run-time provides opportunities to apply aggressive optimizations to programs based on information that was not available at compile time. At run time, programs can be adapted to better exploit architectural features, optimize the use of dynamic libraries, and simplify code based on run-time constants. Our profiling system provides a framework for collecting information required for performing run-time optimization. We sample the performance hardware registers available on an Itanium processor, and select a set of code that is likely to lead to important performance-events. We gather distribution information about the performance-events we wish to monitor, and test our traces by estimating the ability for dynamic patching of a program to execute run-time generated traces. Our results show that we are able to capture 58 of execution time across various SPEC2000 integer benchmarks using our profile and patching techniques on a relatively small number of frequently executed execution paths. Our profiling and detection system overhead increases execution time by only 2-4 .", "Compile-time optimization is often limited by a lack of target machine and input data set knowledge. Without this information, compilers may be forced to make conservative assumptions to preserve correctness and to avoid performance degradation. In order to cope with this lack of information at compile-time, adaptive and dynamic systems can be used to perform optimization at runtime when complete knowledge of input and machine parameters is available. This paper presents a compiler-supported high-level adaptive optimization system. Users describe, in a domain specific language, optimizations performed by stand-alone optimization tools and backend compiler flags, as well as heuristics for applying these optimizations dynamically at runtime. The ADAPT compiler reads these descriptions and generates application-specific runtime systems to apply the heuristics. To facilitate the usage of existing tools and compilers, overheads are minimized by decoupling optimization from execution. Our system, ADAPT, supports a range of paradigms proposed recently, including dynamic compilation, parameterization and runtime sampling. We demonstrate our system by applying several optimization techniques to a suite of benchmarks on two target machines. ADAPT is shown to consistently outperform statically generated executables, improving performance by as much as 70 .", "Tuning compiler optimizations for rapidly evolving hardware makes porting and extending an optimizing compiler for each new platform extremely challenging. Iterative optimization is a popular approach to adapting programs to a new architecture automatically using feedback-directed compilation. However, the large number of evaluations required for each program has prevented iterative compilation from widespread take-up in production compilers. Machine learning has been proposed to tune optimizations across programs systematically but is currently limited to a few transformations, long training phases and critically lacks publicly released, stable tools. 
Our approach is to develop a modular, extensible, self-tuning optimization infrastructure to automatically learn the best optimizations across multiple programs and architectures based on the correlation between program features, run-time behavior and optimizations. In this paper we describe Milepost GCC, the first publicly-available open-source machine learning-based compiler. It consists of an Interactive Compilation Interface (ICI) and plugins to extract program features and exchange optimization data with the cTuning.org open public repository. It automatically adapts the internal optimization heuristic at function-level granularity to improve execution time, code size and compilation time of a new program on a given architecture. Part of the MILEPOST technology together with low-level ICI-inspired plugin framework is now included in the mainline GCC. We developed machine learning plugins based on probabilistic and transductive approaches to predict good combinations of optimizations. Our preliminary experimental results show that it is possible to automatically reduce the execution time of individual MiBench programs, some by more than a factor of 2, while also improving compilation time and code size. On average we are able to reduce the execution time of the MiBench benchmark suite by 11 for the ARC reconfigurable processor. We also present a realistic multi-objective optimization scenario for Berkeley DB library using Milepost GCC and improve execution time by approximately 17 , while reducing compilation time and code size by 12 and 7 respectively on Intel Xeon processor.", "We have developed a system called OM to explore the problem of code optimization at link-time. OM takes a collection of object modules constituting the entire program, and converts the object code into a symbolic Register Transfer Language (RTL) form that can be easily manipulated. This RTL is then transformed by intermodule optimization and finally converted back into object form. Although much high-level information about the program is gone at link-time, this approach enables us to perform optimizations that a compiler looking at a single module cannot see. Since object modules are more or less independent of the particular source language or compiler, this also gives us the chance to improve the code in ways that some compilers might simply have missed. To test the concept, we have used OM to build an optimizer that does interprocedural code motion. It moves simple loop-invariant code out of loops, even when the loop body extends across many procedures and the loop control is in a different procedure from the invariant code. Our technique also easily handles ‘‘loops’’ induced by recursion rather than iteration. Our code motion technique makes use of an interprocedural liveness analysis to discover dead registers that it can use to hold loop-invariant results. This liveness analysis also lets us perform interprocedural dead code elimination. We applied our code motion and dead code removal to SPEC benchmarks compiled with optimization using the standard compilers for the DECstation 5000. Our system improved the performance by 5 on average and by more than 14 in one case. More improvement should be possible soon; at present we move only simple load and load-address operations out of loops, and we scavenge registers to hold these values, rather than completely reallocating them. This paper will appear in the March issue of Journal of Programming Languages. 
It replaces Technical Note TN-31, an earlier version of the same material." ] }
cs9903014
1612660921
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Common to the above-mentioned work is that the main focus has always been on functional aspects, that is how to profile and which optimizations to perform. Related to this is research on how to boost application performance by combining profiling data and code optimizations at compile time (not at runtime), including work on method dispatch optimizations for object-oriented programming languages @cite_22 @cite_35 , profile-guided intermodular optimizations @cite_3 @cite_26 , code positioning techniques @cite_13 @cite_25 , and profile-guided data cache locality optimizations @cite_29 @cite_10 @cite_12 .
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_22", "@cite_10", "@cite_29", "@cite_3", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2165006697", "2093922742", "2130570838", "2115971347", "2751901133", "2952416601", "2330621167", "2171438172", "2116672403" ], "abstract": [ "Optimizing programs at run-time provides opportunities to apply aggressive optimizations to programs based on information that was not available at compile time. At run time, programs can be adapted to better exploit architectural features, optimize the use of dynamic libraries, and simplify code based on run-time constants. Our profiling system provides a framework for collecting information required for performing run-time optimization. We sample the performance hardware registers available on an Itanium processor, and select a set of code that is likely to lead to important performance-events. We gather distribution information about the performance-events we wish to monitor, and test our traces by estimating the ability for dynamic patching of a program to execute run-time generated traces. Our results show that we are able to capture 58 of execution time across various SPEC2000 integer benchmarks using our profile and patching techniques on a relatively small number of frequently executed execution paths. Our profiling and detection system overhead increases execution time by only 2-4 .", "Many profilers based on bytecode instrumentation yield wrong results in the presence of an optimizing dynamic compiler, either due to not being aware of optimizations such as stack allocation and method inlining, or due to the inserted code disrupting such optimizations. To avoid such perturbations, we present a novel technique to make any profiler implemented at the bytecode level aware of optimizations performed by the dynamic compiler. We implement our approach in a state-of-the-art Java virtual machine and demonstrate its significance with concrete profilers. We quantify the impact of escape analysis on allocation profiling, object life-time analysis, and the impact of method inlining on callsite profiling. We illustrate how our approach enables new kinds of profilers, such as a profiler for non-inlined callsites, and a testing framework for locating performance bugs in dynamic compiler implementations.", "Commercial applications such as databases and Web servers constitute the most important market segment for high-performance servers. Among these applications, on-line transaction processing (OLTP) workloads provide a challenging set of requirements for system designs since they often exhibit inefficient executions dominated by a large memory stall component. This behavior arises from large instruction and data footprints and high communication miss rates. A number of recent studies have characterized the behavior of commercial workloads and proposed architectural features to improve their performance. However, there has been little research on the impact of software and compiler-level optimizations for improving the behavior of such workloads. This paper provides a detailed study of profile-driven compiler optimizations to improve the code layout in commercial workloads with large instruction footprints. Our compiler algorithms are implemented in the context of Spike, an executable optimizer for the Alpha architecture. Our experiments use the Oracle commercial database engine running an OLTP workload, with results generated using both full system simulations and actual runs on Alpha multiprocessors. 
Our results show that code layout optimizations can provide a major improvement in the instruction cache behavior, providing a 55 to 65 reduction in the application misses for 64-128K caches. Our analysis shows that this improvement primarily arises from longer sequences of consecutively executed instructions and more reuse of cache lines before they are replaced. We also show that the majority of application instruction misses are caused by self-interference. However, code layout optimizations significantly reduce the amount of self-interference, thus elevating the relative importance of interference with operating system code. Finally, we show that better code layout can also provide substantial improvements in the behavior of other memory system components such as the instruction TLB and the unified second-level cache. The overall performance impact of our code layout optimizations is an improvement of 1.33 times in the execution time of our workload.", "Program profiles identify frequently executed portions of a program, which are the places at which optimizations offer programmers and compilers the greatest benefit. Compilers, however, infrequently exploit program profiles, because, profiling a program requires a programmer to instrument and run the program. An attractive alternative is for the complier to statically estimate program profiles. This paper presents several new techniques for static branch prediction and profiling. The first technique combines multiple predictions of a branch's outcome into a prediction of the probability that the branch is taken. Another technique uses these predictions to estimate the relative execution frequency (i.e., profile) of basic blocks and control-flow edges within a procedure. A third algorithm uses local frequency estimates to predict the global frequency of calls, procedure invocations, and basic block and control-flow edge executions. Experiments on the SPEC92 integer benchmarks and Unix applications show that the frequently executed blocks, edges, and functions identified by our techniques closely match those in a dynamic profile.", "Recent compilers offer a vast number of multilayered optimizations targeting different code segments of an application. Choosing among these optimizations can significantly impact the performance of the code being optimized. The selection of the right set of compiler optimizations for a particular code segment is a very hard problem, but finding the best ordering of these optimizations adds further complexity. Finding the best ordering represents a long standing problem in compilation research, named the phase-ordering problem. The traditional approach of constructing compiler heuristics to solve this problem simply cannot cope with the enormous complexity of choosing the right ordering of optimizations for every code segment in an application. This article proposes an automatic optimization framework we call MiCOMP, which Mi tigates the Com piler P hase-ordering problem. We perform phase ordering of the optimizations in LLVM’s highest optimization level using optimization sub-sequences and machine learning. The idea is to cluster the optimization passes of LLVM’s O3 setting into different clusters to predict the speedup of a complete sequence of all the optimization clusters instead of having to deal with the ordering of more than 60 different individual optimizations. 
The predictive model uses (1) dynamic features, (2) an encoded version of the compiler sequence, and (3) an exploration heuristic to tackle the problem. Experimental results using the LLVM compiler framework and the Cbench suite show the effectiveness of the proposed clustering and encoding techniques to application-based reordering of passes, while using a number of predictive models. We perform statistical analysis on the results and compare against (1) random iterative compilation, (2) standard optimization levels, and (3) two recent prediction approaches. We show that MiCOMP’s iterative compilation using its sub-sequences can reach an average performance speedup of 1.31 (up to 1.51). Additionally, we demonstrate that MiCOMP’s prediction model outperforms the -O1, -O2, and -O3 optimization levels within using just a few predictions and reduces the prediction error rate down to only 5 . Overall, it achieves 90 of the available speedup by exploring less than 0.001 of the optimization space.", "Performance optimization for large-scale applications has recently become more important as computation continues to move towards data centers. Data-center applications are generally very large and complex, which makes code layout an important optimization to improve their performance. This has motivated recent investigation of practical techniques to improve code layout at both compile time and link time. Although post-link optimizers had some success in the past, no recent work has explored their benefits in the context of modern data-center applications. In this paper, we present BOLT, a post-link optimizer built on top of the LLVM framework. Utilizing sample-based profiling, BOLT boosts the performance of real-world applications even for highly optimized binaries built with both feedback-driven optimizations (FDO) and link-time optimizations (LTO). We demonstrate that post-link performance improvements are complementary to conventional compiler optimizations, even when the latter are done at a whole-program level and in the presence of profile information. We evaluated BOLT on both Facebook data-center workloads and open-source compilers. For data-center applications, BOLT achieves up to 8.0 performance speedups on top of profile-guided function reordering and LTO. For the GCC and Clang compilers, our evaluation shows that BOLT speeds up their binaries by up to 20.4 on top of FDO and LTO, and up to 52.1 if the binaries are built without FDO and LTO.", "A large number of compiler optimizations are nowadays available to users. These optimizations interact with each other and with the input code in several and complex ways. The sequence of application of optimization passes can have a significant impact on the performance achieved. The effect of the optimizations is both platform and application dependent. The exhaustive exploration of all viable sequences of compiler optimizations for a given code fragment is not feasible. As this exploration is a complex and time-consuming task, several researchers have focused on Design Space Exploration (DSE) strategies both to select optimization sequences to improve the performance of each function of the application and to reduce the exploration time. In this article, we present a DSE scheme based on a clustering approach for grouping functions with similarities and exploration of a reduced search space resulting from the combination of optimizations previously suggested for the functions in each group. 
The identification of similarities between functions uses a data mining method that is applied to a symbolic code representation. The data mining process combines three algorithms to generate clusters: the Normalized Compression Distance, the Neighbor Joining, and a new ambiguity-based clustering algorithm. Our experiments for evaluating the effectiveness of the proposed approach address the exploration of optimization sequences in the context of the ReflectC compiler, considering 49 compilation passes while targeting a Xilinx MicroBlaze processor, and aiming at performance improvements for 51 functions and four applications. Experimental results reveal that the use of our clustering-based DSE approach achieves a significant reduction in the total exploration time of the search space (20× over a Genetic Algorithm approach) at the same time that considerable performance speedups (41p over the baseline) were obtained using the optimized codes. Additional experiments were performed considering the LLVM compiler, considering 124 compilation passes, and targeting a LEON3 processor. The results show that our approach achieved geometric mean speedups of 1.49 × , 1.32 × , and 1.24 × for the best 10, 20, and 30 functions, respectively, and a global improvement of 7p over the performance obtained when compiling with -O2.", "Traditional query optimizers assume accurate knowledge of run-time parameters such as selectivities and resource availability during plan optimization, i.e., at compile time. In reality, however, this assumption is often not justified. Therefore, the “static” plans produced by traditional optimizers may not be optimal for many of their actual run-time invocations. Instead, we propose a novel optimization model that assigns the bulk of the optimization effort to compile-time and delays carefully selected optimization decisions until run-time. Our previous work defined the run-time primitives, “dynamic plans” using “choose-plan” operators, for executing such delayed decisions, but did not solve the problem of constructing dynamic plans at compile-time. The present paper introduces techniques that solve this problem. Experience with a working prototype optimizer demonstrates (i) that the additional optimization and start-up overhead of dynamic plans compared to static plans is dominated by their advantage at run-time, (ii) that dynamic plans are as robust as the “brute-force” remedy of run-time optimization, i.e., dynamic plans maintain their optimality even if parameters change between compile-time and run-time, and (iii) that the start-up overhead of dynamic plans is significantly less than the time required for complete optimization at run-time. In other words, our proposed techniques are superior to both techniques considered to-date, namely compile-time optimization into a single static plan as well as run-time optimization. Finally, we believe that the concepts and technology described can be transferred to commercial query optimizers in order to improve the performance of embedded queries with host variables in the query predicate and to adapt to run-time system loads unpredictable at compile time.", "This paper presents the results of our investigation of code positioning techniques using execution profile data as input into the compilation process. The primary objective of the positioning is to reduce the overhead of the instruction memory hierarchy. 
After initial investigation in the literature, we decided to implement two prototypes for the Hewlett-Packard Precision Architecture (PA-RISC). The first, built on top of the linker, positions code based on whole procedures. This prototype has the ability to move procedures into an order that is determined by a “closest is best” strategy. The second prototype, built on top of an existing optimizer package, positions code based on basic blocks within procedures. Groups of basic blocks that would be better as straight-line sequences are identified as chains . These chains are then ordered according to branch heuristics. Code that is never executed during the data collection runs can be physically separated from the primary code of a procedure by a technique we devised called procedure splitting . The algorithms we implemented are described through examples in this paper. The performance improvements from our work are also summarized in various tables and charts." ] }
cs9903018
1593496962
Scripting languages are becoming more and more important as a tool for software development, as they provide great flexibility for rapid prototyping and for configuring componentware applications. In this paper we present LuaJava, a scripting tool for Java. LuaJava adopts Lua, a dynamically typed interpreted language, as its script language. Great emphasis is given to the transparency of the integration between the two languages, so that objects from one language can be used inside the other like native objects. The final result of this integration is a tool that allows the construction of configurable Java applications, using off-the-shelf components, in a high abstraction level.
For Tcl @cite_13 two integration solutions exist: the TclBlend binding @cite_11 and the Jacl implementation @cite_14 . TclBlend is a binding between Java and Tcl, which, as LuaJava, allows Java objects to be manipulated by scripts. Some operations, such as access to fields and static method invocations, require specific functions. Calls to instance methods are handled naturally by Tcl commands.
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_11" ], "mid": [ "2162914120", "2118300983", "2147650421" ], "abstract": [ "This paper describes the motivations and strategies behind our group’s efforts to integrate the Tcl and Java programming languages. From the Java perspective, we wish to create a powerful scripting solution for Java applications and operating environments. From the Tcl perspective, we want to allow for cross-platform Tcl extensions and leverage the useful features and user community Java has to offer. We are specifically focusing on Java tasks like Java Bean manipulation, where a scripting solution is preferable to using straight Java code. Our goal is to create a synergy between Tcl and Java, similar to that of Visual Basic and Visual C++ on the Microsoft desktop, which makes both languages more powerful together than they are individually.", "We describe JastAdd, a Java-based system for compiler construction. JastAdd is centered around an object-oriented representation of the abstract syntax tree where reference variables can be used to link together different parts of the tree. JastAdd supports the combination of declarative techniques (using Reference Attributed Grammars) and imperative techniques (using ordinary Java code) in implementing the compiler. The behavior can be modularized into different aspects, e.g. name analysis, type checking, code generation, etc., that are woven together into classes using aspect-oriented programming techniques, providing a safer and more powerful alternative to the Visitor pattern. The JastAdd system is independent of the underlying parsing technology and supports any noncircular dependencies between computations, thereby allowing general multi-pass compilation. The attribute evaluator (optimal recursive evaluation) is implemented very conveniently using Java classes, interfaces, and virtual methods.", "We present the first verification of full functional correctness for a range of linked data structure implementations, including mutable lists, trees, graphs, and hash tables. Specifically, we present the use of the Jahob verification system to verify formal specifications, written in classical higher-order logic, that completely capture the desired behavior of the Java data structure implementations (with the exception of properties involving execution time and or memory consumption). Given that the desired correctness properties include intractable constructs such as quantifiers, transitive closure, and lambda abstraction, it is a challenge to successfully prove the generated verification conditions. Our Jahob verification system uses integrated reasoning to split each verification condition into a conjunction of simpler subformulas, then apply a diverse collection of specialized decision procedures, first-order theorem provers, and, in the worst case, interactive theorem provers to prove each subformula. Techniques such as replacing complex subformulas with stronger but simpler alternatives, exploiting structure inherently present in the verification conditions, and, when necessary, inserting verified lemmas and proof hints into the imperative source code make it possible to seamlessly integrate all of the specialized decision procedures and theorem provers into a single powerful integrated reasoning system. By appropriately applying multiple proof techniques to discharge different subformulas, this reasoning system can effectively prove the complex and challenging verification conditions that arise in this context." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
The main objective of this chapter was to study the basic @math -branes that one encounters in M-theory, and to treat them in a unified way. The need to unify the treatment is inspired by U-duality @cite_22 @cite_86 @cite_144 , which states that from the effective lower dimensional space-time point of view, all the charges carried by the different branes are on the same footing. While string theory "breaks" this U-duality symmetry, choosing the NSNS string to be the fundamental object of the perturbative theory, the supergravity low-energy effective theories realize the U-duality at the classical level.
{ "cite_N": [ "@cite_86", "@cite_22", "@cite_144" ], "mid": [ "1992572456", "2141847212", "2152342374" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "Abstract The effective action for type II string theory compactified on a six-torus is N = 8 supergravity, which is known to have an E7 duality symmetry. We show that this is broken by quantum effects to a discrete subgroup, E 7 ( Z ) , which contains both the T-duality group O(6, 6; Z ) and the S-duality group SL(2; Z ). We present evidence for the conjecture that E 7 ( Z ) is an exact ‘U-duality’ symmetry of type II string theory. This conjecture requires certain extreme black hole states to be identified with massive modes of the fundamental string. The gauge bosons from the Ramond-Ramond sector couple not to string excitations but to solitons. We discuss similar issues in the context of toroidal string compactifications to other dimensions, compactifications of the type II string on K3 × T2 and compactifications of 11-dimensional supermembrane theory.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
It should also not be underestimated that the derivation of the intersecting solutions presented in this chapter is a thorough consistency check of all the dualities acting on, and between, the supergravity theories. It is straightforward to check that, starting from one definite configuration, all its dual configurations are also found between the solutions presented here (with the exception of the solutions involving waves and KK monopoles). In this line of thoughts, we presented a recipe for building five and four dimensional extreme supersymmetric black holes. Some of these black holes were used in the literature to perform a microscopic counting of their entropy, as in @cite_191 @cite_61 for the 5-dimensional ones. Actually, the only (5 dimensional) black holes in the U-duality orbit' that were counted were the ones containing only D-branes and KK momentum. It is still an open problem to directly count the microscopic states of the same black hole but in a different M-theoretic formulation.
{ "cite_N": [ "@cite_191", "@cite_61" ], "mid": [ "2054280159", "1992572456" ], "abstract": [ "Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies.", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
Some of the intersection rules intersectionrules point towards an M-theory interpretation in terms of open branes ending on other branes. This idea will be elaborated and made firmer in the next chapter. It suffices to say here that this interpretation is consistent with dualities if we postulate that the open character' of a fundamental string ending on a D-brane is invariant under dualities. S-duality directly implies, for instance, that D-strings can end on NS5-branes @cite_176 . Then T-dualities imply that all the D-branes can end on the NS5-brane. In particular, the fact that the D2-brane can end on the NS5-brane should imply that the M5-brane is a D-brane for the M2-branes @cite_176 @cite_143 @cite_38 (this could also be extrapolated from the fact that a F1-string ends on a D4-brane). In the next chapter we will see how these ideas are further supported by the presence of the Chern-Simons terms in the supergravities, and by the structure of the world-volume effective actions of the branes.
{ "cite_N": [ "@cite_143", "@cite_38", "@cite_176" ], "mid": [ "2086840642", "1992572456", "2054280159" ], "abstract": [ "Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n -form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. We then specialize to the branes occurring in type II string theories and in M-theory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the D-branes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5-brane.", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
In this chapter we presented only extremal configurations of intersecting branes. The natural further step would be to consider also non-extremal configurations of intersecting branes. There is however a subtlety: there could be a difference between intersections of non-extremal branes, and non-extremal intersections of otherwise extremal branes. If we focus on bound states (and thus not on configurations of well separated branes), it appears that a non-extremal configuration would be characterized, for instance, by @math charges and by its mass. There is only one additional parameter with respect to the extremal configurations. Physically, we could hardly have expected to have, say, as many non-extremality parameters as the number of branes in the bound state. Indeed, non-extremality can be roughly associated with the branes being in an excited state, and it would thus have been very unlikely that the excitations did not mix between the various branes in the bound state. Non-extremal intersecting brane solutions were first found in @cite_48 , and were derived from the equations of motion, following an approach similar to the one used here, in @cite_81 @cite_16 .
{ "cite_N": [ "@cite_48", "@cite_81", "@cite_16" ], "mid": [ "2086840642", "2152342374", "2054280159" ], "abstract": [ "Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n -form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. We then specialize to the branes occurring in type II string theories and in M-theory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the D-branes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5-brane.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
Supergravity solutions corresponding to D-branes at angles were found in @cite_31 @cite_59 @cite_178 . The resulting solutions contain, as expected, off-diagonal elements in the internal metric, and the derivation from the equations of motion as in @cite_59 is accordingly rather intricate.
{ "cite_N": [ "@cite_31", "@cite_178", "@cite_59" ], "mid": [ "1992572456", "2048444779", "2054280159" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group.", "Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
Other half-supersymmetric bound states of this class are the @math multiplets of 1- and 5-branes in type IIB theory @cite_95 @cite_0 , or more precisely the configurations F1 @math D1 and NS5 @math D5, also called @math 1- and 5-branes, where @math is the NSNS charge and @math the RR charge of the compound. The classical solutions corresponding to this latter case were actually found more simply by performing an @math transformation on the F1 or NS5 solutions.
{ "cite_N": [ "@cite_0", "@cite_95" ], "mid": [ "1992572456", "2048444779" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th 9701042, hep-th 9704190, hep-th 9710027 and hep-th 9801053.
In @cite_106 (inspired by @cite_54 ) a solution is presented which corresponds to an M5 @math M5=1 configuration, which follows the harmonic superposition rule, provided however that the harmonic functions depend on the respective relative transverse space (i.e. they are functions of two different spaces). The problem now is that the harmonic functions do not depend on the overall transverse space (which is 1-dimensional in the case above), so that the configuration is not localized there. A method actually inspired by the one presented here to derive the intersecting brane solutions has been applied in @cite_89 to intersections of this second kind. Imposing that the functions depend on the relative transverse space(s) (with factorized dependence) and not on the overall one, the authors of @cite_89 arrive at a formula for the intersections very similar to intersectionrules , with @math on the l.h.s. This rule correctly reproduces the M5 @math M5=1 configuration, and moreover all the configurations of two D-branes with 8 Neumann-Dirichlet directions, which preserve @math supersymmetries but were excluded from the intersecting solutions derived in this chapter (only the configurations with 4 ND directions were found as solutions). One such configuration is e.g. D0 @math D8.
{ "cite_N": [ "@cite_54", "@cite_106", "@cite_89" ], "mid": [ "1992572456", "2086840642", "2048444779" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n -form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. We then specialize to the branes occurring in type II string theories and in M-theory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the D-branes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5-brane.", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group." ] }
1903.05435
2921710190
In this work, we are interested in the applications of big data in the telecommunication domain, analysing two weeks of datasets provided by Telecom Italia for Milan and Trento. Our objective is to identify hotspots which are places with very high communication traffic relative to others and measure the interaction between them. We model the hotspots as nodes in a graph and then apply node centrality metrics that quantify the importance of each node. We review five node centrality metrics and show that they can be divided into two families: the first family is composed of closeness and betweenness centrality whereas the second family consists of degree, PageRank and eigenvector centrality. We then proceed with a statistical analysis in order to evaluate the consistency of the results over the two weeks. We find out that the ranking of the hotspots under the various centrality metrics remains practically the same with the time for both Milan and Trento. We further identify that the relative difference of the values of the metrics is smaller for PageRank centrality than for closeness centrality and this holds for both Milan and Trento. Finally, our analysis reveals that the variance of the results is significantly smaller for Trento than for Milan.
Nowadays, telecom companies widely use big data in order to mine the behaviour of their customers, improve the quality of service that they provide and reduce customer churn. In this direction, demographic statistics, network deployments and call detail records (CDRs) are key factors that need to be carefully integrated in order to make accurate predictions. Though various open-source data exist for the first two factors, researchers rarely have access to traffic demand data, since it is sensitive information for the operators. Therefore, researchers need to rely on synthetic models, which do not always accurately capture large-scale mobile networks @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2556289220" ], "abstract": [ "In this study, with Singapore as an example, we demonstrate how we can use mobile phone call detail record (CDR) data, which contains millions of anonymous users, to extract individual mobility networks comparable to the activity-based approach. Such an approach is widely used in the transportation planning practice to develop urban micro simulations of individual daily activities and travel; yet it depends highly on detailed travel survey data to capture individual activity-based behavior. We provide an innovative data mining framework that synthesizes the state-of-the-art techniques in extracting mobility patterns from raw mobile phone CDR data, and design a pipeline that can translate the massive and passive mobile phone records to meaningful spatial human mobility patterns readily interpretable for urban and transportation planning purposes. With growing ubiquitous mobile sensing, and shrinking labor and fiscal resources in the public sector globally, the method presented in this research can be used as a low-cost alternative for transportation and planning agencies to understand the human activity patterns in cities, and provide targeted plans for future sustainable development." ] }
1903.05435
2921710190
In this work, we are interested in the applications of big data in the telecommunication domain, analysing two weeks of datasets provided by Telecom Italia for Milan and Trento. Our objective is to identify hotspots which are places with very high communication traffic relative to others and measure the interaction between them. We model the hotspots as nodes in a graph and then apply node centrality metrics that quantify the importance of each node. We review five node centrality metrics and show that they can be divided into two families: the first family is composed of closeness and betweenness centrality whereas the second family consists of degree, PageRank and eigenvector centrality. We then proceed with a statistical analysis in order to evaluate the consistency of the results over the two weeks. We find out that the ranking of the hotspots under the various centrality metrics remains practically the same with the time for both Milan and Trento. We further identify that the relative difference of the values of the metrics is smaller for PageRank centrality than for closeness centrality and this holds for both Milan and Trento. Finally, our analysis reveals that the variance of the results is significantly smaller for Trento than for Milan.
For example, the authors in @cite_4 analyse a heterogeneous cellular network which consists of different types of nodes, such as macrocells and microcells. A popular model nowadays is the Wyner model @cite_0 , but it is too simplistic to fully capture a real heterogeneous cellular network. Another approach is to use the spatial Poisson point process (SPPP) model @cite_9 , which can be derived from the premise that all base stations are uniformly distributed. However, a city can be divided into different areas with different population densities; these areas can be characterised as dense urban, urban and suburban. To classify heterogeneous networks into these areas, the authors introduce SPPP models for homogeneous and inhomogeneous sets. They show that the SPPP model accurately captures both urban and suburban areas, whereas this is not the case for dense urban areas, where a considerable population is concentrated in small areas.
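As a concrete illustration of the SPPP premise mentioned above (a Poisson-distributed number of base stations placed uniformly over a region), here is a minimal simulation sketch in Python; the intensity and region size are illustrative assumptions, not values taken from the cited works.

```python
# Minimal sketch of a homogeneous spatial Poisson point process (SPPP) for
# base-station locations: the point count is Poisson(intensity * area) and,
# given the count, positions are uniform over the region.
import numpy as np

rng = np.random.default_rng(0)

def sample_sppp(intensity_per_km2, width_km, height_km):
    """Sample base-station positions on a rectangle under a homogeneous SPPP."""
    area = width_km * height_km
    n = rng.poisson(intensity_per_km2 * area)
    return rng.uniform([0.0, 0.0], [width_km, height_km], size=(n, 2))

stations = sample_sppp(intensity_per_km2=4.0, width_km=10.0, height_km=10.0)
print(f"{len(stations)} base stations sampled")
```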
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_4" ], "mid": [ "2005411736", "2076773434", "1994267277" ], "abstract": [ "In heterogeneous cellular networks spatial characteristics of base stations (BSs) influence the system performance intensively. Existing models like two-dimensional hexagonal grid model or homogeneous spatial poisson point process (SPPP) are based on the assumption that BSs are ideal or uniformly distributed, but the aggregation behavior of users in hot spots has an important effect on the location of low power nodes (LPNs), so these models fail to characterize the distribution of BSs in the current mobile cellular networks. In this paper, firstly existing spatial models are analyzed. Then, based on real data from a mobile operator in one large city of China, a set of spatial models is proposed in three typical regions: dense urban, urban and suburban. For dense urban area, “Two Tiers Poisson Cluster Superimposed Process” is proposed to model the spatial characteristics of real-world BSs. Specifically, for urban and suburban area, conventional SPPP model still can be used. Finally, the fundamental relationship between user behavior and BS distribution is illustrated and summarized. Numerous results show that SPPP is only appropriate in the urban and suburban regions where users are not gathered together obviously. Principal parameters of these models are provided as reference for the theoretical analysis and computer simulation, which describe the complex spatial configuration more reasonably and reflect the current mobile cellular network performance more precisely.", "We consider spatial stochastic models of downlink heterogeneous cellular networks (HCNs) with multiple tiers, where the base stations (BSs) of each tier have a particular spatial density, transmission power and path-loss exponent. Prior works on such spatial models of HCNs assume, due to its tractability, that the BSs are deployed according to homogeneous Poisson point processes. This means that the BSs are located independently of each other and their spatial correlation is ignored. In the current paper, we propose two spatial models for the analysis of downlink HCNs, in which the BSs are deployed according to @a-Ginibre point processes. The @a-Ginibre point processes constitute a class of determinantal point processes and account for the repulsion between the BSs. Besides, the degree of repulsion is adjustable according to the value of @[email protected]?(0,1]. In one proposed model, the BSs of different tiers are deployed according to mutually independent @a-Ginibre processes, where the @a can take different values for the different tiers. In the other model, all the BSs are deployed according to an @a-Ginibre point process and they are classified into multiple tiers by mutually independent marks. For these proposed models, we derive computable representations for the coverage probability of a typical user-the probability that the downlink signal-to-interference-plus-noise ratio for the typical user achieves a target threshold. We exhibit the results of some numerical experiments and compare the proposed models and the Poisson based model.", "The spatial structure of transmitters in wireless networks plays a key role in evaluating the mutual interference and hence the performance. Although the Poisson point process (PPP) has been widely used to model the spatial configuration of wireless networks, it is not suitable for networks with repulsion. 
The Ginibre point process (GPP) is one of the main examples of determinantal point processes that can be used to model random phenomena where repulsion is observed. Considering the accuracy, tractability and practicability tradeoffs, we introduce and promote the @math -GPP, an intermediate class between the PPP and the GPP, as a model for wireless networks when the nodes exhibit repulsion. To show that the model leads to analytically tractable results in several cases of interest, we derive the mean and variance of the interference using two different approaches: the Palm measure approach and the reduced second moment approach, and then provide approximations of the interference distribution by three known probability density functions. Besides, to show that the model is relevant for cellular systems, we derive the coverage probability of the typical user and also find that the fitted @math -GPP can closely model the deployment of actual base stations in terms of the coverage probability and other statistics." ] }
1903.05355
2968491849
Learning the dynamics of robots from data can help achieve more accurate tracking controllers, or aid their navigation algorithms. However, when the actual dynamics of the robots change due to external conditions, on-line adaptation of their models is required to maintain high fidelity performance. In this work, a framework for on-line learning of robot dynamics is developed to adapt to such changes. The proposed framework employs an incremental support vector regression method to learn the model sequentially from data streams. In combination with the incremental learning, strategies for including and forgetting data are developed to obtain better generalization over the whole state space. The framework is tested in simulation and real experimental scenarios demonstrating its adaptation capabilities to changes in the robot’s dynamics.
In the field of marine robotics, @cite_10 used locally weighted projection regression to compensate for the mismatch between the physics-based model and the sensor readings of the AUV Nessie. Auto-regressive networks augmented with a genetic algorithm as a gating network were used to identify the model of a simulated AUV with variable mass. In a previous work @cite_16 , an on-line adaptation method was proposed to model the change in the damping forces resulting from a structural change of an AUV's mechanical structure. The algorithm showed good adaptation capability but was limited to modelling the damping effect of an AUV model. In this work we build upon the results of @cite_14 @cite_11 to provide a general framework for on-line learning of the fully coupled nonlinear dynamics of an AUV, and validate the proposed approach on simulated data as well as real robot data.
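The works cited above rely on incremental support vector regression and locally weighted projection regression; as a simpler, hedged stand-in for the same idea of sequentially updating a dynamics model from streaming data while discounting stale samples, the sketch below uses recursive least squares with a forgetting factor. All names and parameter values are illustrative assumptions.

```python
# Hedged sketch of on-line model adaptation from a data stream using recursive
# least squares (RLS) with a forgetting factor; this illustrates the general
# idea only, not the incremental SVR used in the works cited above.
import numpy as np

class OnlineLinearModel:
    def __init__(self, n_features, forgetting=0.99):
        self.w = np.zeros(n_features)        # current parameter estimate
        self.P = 1e3 * np.eye(n_features)    # inverse-covariance-like matrix
        self.lam = forgetting                # < 1 gradually "forgets" old data

    def update(self, x, y):
        """One RLS step on a new sample: x is the feature vector, y the target."""
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        error = y - self.w @ x               # prediction error before updating
        self.w = self.w + gain * error
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return error

    def predict(self, x):
        return self.w @ x
```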
{ "cite_N": [ "@cite_14", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2060977605", "2106353989", "1938668859", "2022798939" ], "abstract": [ "This paper proposes a pose-based algorithm to solve the full Simultaneous Localization And Mapping (SLAM) problem for an Autonomous Underwater Vehicle (AUV), navigating in an unknown and possibly unstructured environment. A probabilistic scan matching technique using range scans gathered from a Mechanical Scanning Imaging Sonar (MSIS) is used together with the robot dead-reckoning displacements. The proposed method utilizes two Extended Kalman Filters (EKFs). The first, estimates the local path traveled by the robot while forming the scan as well as its uncertainty, providing position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented state EKF that estimates and keeps the registered scans poses. The raw data from the sensors are processed and fused in-line. No priory structural information or initial pose are considered. Also, a method of estimating the uncertainty of the scan matching estimation is provided. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.", "A full-scale adaptive ocean sampling network was deployed throughout the month-long 2006 Adaptive Sam- pling and Prediction (ASAP) field experiment in Monterey Bay, California. One of the central goals of the field experiment was to test and demonstrate newly developed techniques for coordinated motion control of au- tonomous vehicles carrying environmental sensors to efficiently sample the ocean. We describe the field results for the heterogeneous fleet of autonomous underwater gliders that collected data continuously throughout the month-long experiment. Six of these gliders were coordinated autonomously for 24 days straight using feed- back laws that scale with the number of vehicles. These feedback laws were systematically computed using recently developed methodology to produce desired collective motion patterns, tuned to the spatial and tem- poral scales in the sampled fields for the purpose of reducing statistical uncertainty in field estimates. The implementation was designed to allow for adaptation of coordinated sampling patterns using human-in-the- loop decision making, guided by optimization and prediction tools. The results demonstrate an innovative tool for ocean sampling and provide a proof of concept for an important field robotics endeavor that integrates coordinated motion control with adaptive sampling. C", "Robotic sampling is attractive in many field robotics applications that require persistent collection of physical samples for ex-situ analysis. Examples abound in the earth sciences in studies involving the collection of rock, soil, and water samples for laboratory analysis. In our test domain, marine ecosystem monitoring, detailed understanding of plankton ecology requires laboratory analysis of water samples, but predictions using physical and chemical properties measured in real-time by sensors aboard an autonomous underwater vehicle AUV can guide sample collection decisions. In this paper, we present a data-driven and opportunistic sampling strategy to minimize cumulative regret for batches of plankton samples acquired by an AUV over multiple surveys. Samples are labeled at the end of each survey, and used to update a probabilistic model that guides sampling during subsequent surveys. 
During a survey, the AUV makes irrevocable sample collection decisions online for a sequential stream of candidates, with no knowledge of the quality of future samples. In addition to extensive simulations using historical field data, we present results from a one-day field trial where beginning with a prior model learned from data collected and labeled in an earlier campaign, the AUV collected water samples with a high abundance of a pre-specified planktonic target. This is the first time such a field experiment has been carried out in its entirety in a data-driven fashion, in effect ?closing the loop? on a significant and relevant ecosystem monitoring problem while allowing domain experts marine ecologists to specify the mission at a relatively high level.", "Navigation is instrumental in the successful deployment of Autonomous Underwater Vehicles (AUVs). Sensor hardware is installed on AUVs to support navigational accuracy. Sensors, however, may fail during deployment, thereby jeopardizing the mission. This work proposes a solution, based on an adaptive dynamic model, to accurately predict the navigation of the AUV. A hydrodynamic model, derived from simple laws of physics, is integrated with a powerful non-parametric regression method. The incremental regression method, namely the Locally Weighted Projection Regression (LWPR), is used to compensate for un-modeled dynamics, as well as for possible changes in the operating conditions of the vehicle. The augmented hydrodynamic model is used within an Extended Kalman Filter, to provide optimal estimations of the AUV’s position and orientation. Experimental results demonstrate an overall improvement in the prediction of the vehicle’s acceleration and velocity." ] }
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
In recent years, research regarding image matching has been influenced by the developments in other areas of computer vision. Deep learning architectures have been developed both for image matching @cite_10 @cite_1 @cite_17 and geopositioning @cite_13 @cite_5 @cite_18 with attractive results.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_1", "@cite_5", "@cite_10", "@cite_17" ], "mid": [ "1762798876", "2607603241", "2606149788", "2964213755", "1946093182", "2479919622" ], "abstract": [ "We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk ( A comparison of affine region detectors, 2005), the MPI-Sintel ( A naturalistic open source movie for optical flow evaluation, 2012) and the Kitti ( Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it in the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.", "Finding matching images across large datasets plays a key role in many computer vision applications such as structure-from-motion (SfM), multi-view 3D reconstruction, image retrieval, and image-based localisation. In this paper, we propose finding matching and non-matching pairs of images by representing them with neural network based feature vectors, whose similarity is measured by Euclidean distance. The feature vectors are obtained with convolutional neural networks which are learnt from labeled examples of matching and non-matching image pairs by using a contrastive loss function in a Siamese network architecture. Previously Siamese architecture has been utilised in facial image verification and in matching local image patches, but not yet in generic image retrieval or whole-image matching. Our experimental results show that the proposed features improve matching performance compared to baseline features obtained with networks which are trained for image classification task. The features generalize well and improve matching of images of new landmarks which are not seen at training time. This is despite the fact that the labeling of matching and non-matching pairs is imperfect in our training data. The results are promising considering image retrieval applications, and there is potential for further improvement by utilising more training image pairs with more accurate ground truth labels.", "Despite significant progress of deep learning in recent years, state-of-the-art semantic matching methods still rely on legacy features such as SIFT or HoG. We argue that the strong invariance properties that are key to the success of recent deep architectures on the classification task make them unfit for dense correspondence tasks, unless a large amount of supervision is used. In this work, we propose a deep network, termed AnchorNet, that produces image representations that are well-suited for semantic matching. 
It relies on a set of filters whose response is geometrically consistent across different object instances, even in the presence of strong intra-class, scale, or viewpoint variations. Trained only with weak image-level labels, the final representation successfully captures information about the object structure and improves results of state-of-the-art semantic matching methods such as the deformable spatial pyramid or the proposal flow methods. We show positive results on the cross-instance matching task where different instances of the same object category are matched as well as on a new cross-category semantic matching task aligning pairs of instances each from a different object class.", "Despite significant progress of deep learning in recent years, state-of-the-art semantic matching methods still rely on legacy features such as SIFT or HoG. We argue that the strong invariance properties that are key to the success of recent deep architectures on the classification task make them unfit for dense correspondence tasks, unless a large amount of supervision is used. In this work, we propose a deep network, termed AnchorNet, that produces image representations that are well-suited for semantic matching. It relies on a set of filters whose response is geometrically consistent across different object instances, even in the presence of strong intra-class, scale, or viewpoint variations. Trained only with weak image-level labels, the final representation successfully captures information about the object structure and improves results of state-of-the-art semantic matching methods such as the Deformable Spatial Pyramid or the Proposal Flow methods. We show positive results on the cross-instance matching task where different instances of the same object category are matched as well as on a new cross-category semantic matching task aligning pairs of instances each from a different object class.", "The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.", "In this paper we aim to determine the location and orientation of a ground-level query image by matching to a reference database of overhead (e.g. satellite) images. 
For this task we collect a new dataset with one million pairs of street view and overhead images sampled from eleven U.S. cities. We explore several deep CNN architectures for cross-domain matching – Classification, Hybrid, Siamese, and Triplet networks. Classification and Hybrid architectures are accurate but slow since they allow only partial feature precomputation. We propose a new loss function which significantly improves the accuracy of Siamese and Triplet embedding networks while maintaining their applicability to large-scale retrieval tasks like image geolocalization. This image matching task is challenging not just because of the dramatic viewpoint difference between ground-level and overhead imagery but because the orientation (i.e. azimuth) of the street views is unknown making correspondence even more difficult. We examine several mechanisms to match in spite of this – training for rotation invariance, sampling possible rotations at query time, and explicitly predicting relative rotation of ground and overhead images with our deep networks. It turns out that explicit orientation supervision also improves location prediction accuracy. Our best performing architectures are roughly 2.5 times as accurate as the commonly used Siamese network baseline." ] }
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
Convolutional features extracted from the deep layers of CNNs have shown great utility when addressing image matching and retrieval problems. Babenko @cite_10 employ pre-trained networks to generate descriptors based on high-level convolutional features used for retrieving images of various landmarks. Sunderhauf @cite_2 solve the problem of urban scene recognition, employing salient regions and convolutional features of local objects. This method is extended in @cite_8 , where additional spatial information is used to improve the algorithm's performance.
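A minimal sketch of this idea (global descriptors pooled from a pre-trained CNN, compared by cosine similarity for retrieval) is given below. The choice of ResNet-18, average-pooled features and cosine similarity are assumptions made for illustration; neither the cited works' pipelines nor the paper's memory-vector aggregation are reproduced here, and the weights API assumes torchvision >= 0.13.

```python
# Hedged sketch: global image descriptors from a pre-trained CNN, ranked by
# cosine similarity. Model, pooling and preprocessing are illustrative choices.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()   # keep the 512-d globally pooled feature
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def describe(path):
    """Return an L2-normalized global descriptor for one image file."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        f = model(x).squeeze(0)
    return f / f.norm()

def rank(query_desc, db_descs):
    """Rank database images (rows of db_descs) by cosine similarity to the query."""
    return torch.argsort(db_descs @ query_desc, descending=True)
```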
{ "cite_N": [ "@cite_8", "@cite_10", "@cite_2" ], "mid": [ "2289772031", "2749407104", "2258484932" ], "abstract": [ "Convolutional neural networks (CNNs) have recently achieved remarkable successes in various image classification and understanding tasks. The deep features obtained at the top fully connected layer of the CNN (FC-features) exhibit rich global semantic information and are extremely effective in image classification. On the other hand, the convolutional features in the middle layers of the CNN also contain meaningful local information, but are not fully explored for image representation. In this paper, we propose a novel locally supervised deep hybrid model (LS-DHM) that effectively enhances and explores the convolutional features for scene recognition. First, we notice that the convolutional features capture local objects and fine structures of scene images, which yield important cues for discriminating ambiguous scenes, whereas these features are significantly eliminated in the highly compressed FC representation. Second, we propose a new local convolutional supervision layer to enhance the local structure of the image by directly propagating the label information to the convolutional layers. Third, we propose an efficient Fisher convolutional vector (FCV) that successfully rescues the orderless mid-level semantic information (e.g., objects and textures) of scene image. The FCV encodes the large-sized convolutional maps into a fixed-length mid-level representation, and is demonstrated to be strongly complementary to the high-level FC-features. Finally, both the FCV and FC-features are collaboratively employed in the LS-DHM representation, which achieves outstanding performance in our experiments. It obtains 83.75 and 67.56 accuracies, respectively, on the heavily benchmarked MIT Indoor67 and SUN397 data sets, advancing the state-of-the-art substantially.", "In this paper, we present a robust method for scene recognition, which leverages Convolutional Neural Networks (CNNs) features and Sparse Coding setting by creating a new representation of indoor scenes. Although CNNs highly benefited the fields of computer vision and pattern recognition, convolutional layers adjust weights on a global-approach, which might lead to losing important local details such as objects and small structures. Our proposed scene representation relies on both: global features that mostly refers to environment’s structure, and local features that are sparsely combined to capture characteristics of common objects of a given scene. This new representation is based on fragments of the scene and leverages features extracted by CNNs. The experimental evaluation shows that the resulting representation outperforms previous scene recognition methods on Scene15 and MIT67 datasets, and performs competitively on SUN397, while being highly robust to perturbations in the input image such as noise and occlusion.", "Convolutional neural network (CNN) has achieved the state-of-the-art performance in many different visual tasks. Learned from a large-scale training data set, CNN features are much more discriminative and accurate than the handcrafted features. Moreover, CNN features are also transferable among different domains. On the other hand, traditional dictionary-based features (such as BoW and spatial pyramid matching) contain much more local discriminative and structural information, which is implicitly embedded in the images. 
To further improve the performance, in this paper, we propose to combine CNN with dictionary-based models for scene recognition and visual domain adaptation (DA). Specifically, based on the well-tuned CNN models (e.g., AlexNet and VGG Net), two dictionary-based representations are further constructed, namely, mid-level local representation (MLR) and convolutional Fisher vector (CFV) representation. In MLR, an efficient two-stage clustering method, i.e., weighted spatial and feature space spectral clustering on the parts of a single image followed by clustering all representative parts of all images, is used to generate a class-mixture or a class-specific part dictionary. After that, the part dictionary is used to operate with the multiscale image inputs for generating mid-level representation. In CFV, a multiscale and scale-proportional Gaussian mixture model training strategy is utilized to generate Fisher vectors based on the last convolutional layer of CNN. By integrating the complementary information of MLR, CFV, and the CNN features of the fully connected layer, the state-of-the-art performance can be achieved on scene recognition and DA problems. An interested finding is that our proposed hybrid representation (from VGG net trained on ImageNet) is also complementary to GoogLeNet and or VGG-11 (trained on Place205) greatly." ] }
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
The problem of geopositioning can be seen as a dedicated branch of image retrieval. In this case, the objective is to compute the extrinsic parameters (or coordinates) of the camera capturing the query image, based on matched georeferenced images from a database. There exist many different algorithms and neural network architectures that attempt to identify the geographical location of a street-level query image. Lin @cite_13 learn deep representations for matching aerial and ground images. Workman @cite_18 use spatial features at multiple scales, which are fused with street-level features to solve the problem of geolocalization. In @cite_5 , a fully automated processing pipeline matches multi-view stereo (MVS) models to aerial images. This matching algorithm handles the viewpoint variance across aerial and street-level images.
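To make the retrieval-to-coordinates step concrete, the sketch below estimates a query position as a similarity-weighted average of the top-k matched, georeferenced panoramas, with a crude median-based outlier rejection. This is an assumed illustration of the general idea, not the outlier-removal or coordinate-estimation algorithm proposed in the paper; the threshold value is an assumption.

```python
# Hedged sketch: estimate query coordinates from retrieved georeferenced images.
# Matches far from the median of the top-k positions are discarded, and the rest
# are averaged with their (non-negative) similarity scores as weights.
import numpy as np

def estimate_position(coords, sims, k=5, max_spread_m=50.0):
    """coords: (N, 2) positions in meters; sims: (N,) similarity scores."""
    top = np.argsort(sims)[::-1][:k]
    pts, w = coords[top], sims[top]
    med = np.median(pts, axis=0)
    keep = np.linalg.norm(pts - med, axis=1) <= max_spread_m
    if not np.any(keep):                 # fall back to all top-k matches
        keep = np.ones(len(pts), dtype=bool)
    pts, w = pts[keep], w[keep]
    return (w[:, None] * pts).sum(axis=0) / w.sum()
```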
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_13" ], "mid": [ "1946093182", "2479919622", "2199890863" ], "abstract": [ "The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.", "In this paper we aim to determine the location and orientation of a ground-level query image by matching to a reference database of overhead (e.g. satellite) images. For this task we collect a new dataset with one million pairs of street view and overhead images sampled from eleven U.S. cities. We explore several deep CNN architectures for cross-domain matching – Classification, Hybrid, Siamese, and Triplet networks. Classification and Hybrid architectures are accurate but slow since they allow only partial feature precomputation. We propose a new loss function which significantly improves the accuracy of Siamese and Triplet embedding networks while maintaining their applicability to large-scale retrieval tasks like image geolocalization. This image matching task is challenging not just because of the dramatic viewpoint difference between ground-level and overhead imagery but because the orientation (i.e. azimuth) of the street views is unknown making correspondence even more difficult. We examine several mechanisms to match in spite of this – training for rotation invariance, sampling possible rotations at query time, and explicitly predicting relative rotation of ground and overhead images with our deep networks. It turns out that explicit orientation supervision also improves location prediction accuracy. Our best performing architectures are roughly 2.5 times as accurate as the commonly used Siamese network baseline.", "We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. 
To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales." ] }
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
A common factor of the above work is that it either requires the combination of aerial and street-level images for geopositioning, or extensive training on specific datasets. In both cases, the solutions cannot be easily generalized. In our approach, we utilize only georeferenced, street-level panoramic images and a pre-trained CNN combined with image matching techniques for coordinate estimation. This avoids lengthy training and labeling procedures and assumes street-level data to be available without requiring aerial images. Furthermore, and unlike @cite_1 , we do not assume that our query and database images originate from the same imaging devices.
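As a small illustration of the retrieval evaluation used above, the following sketch computes recall@k for descriptor-based matching with cosine similarity. It is not the authors' code; the descriptor dimensionality, the random toy data, and the cosine-similarity choice are assumptions made purely for illustration.

```python
import numpy as np

def recall_at_k(query_desc, db_desc, query_gt, k=5):
    """Fraction of queries whose ground-truth database item appears
    among the k nearest database descriptors (cosine similarity)."""
    # L2-normalise so that a dot product equals cosine similarity.
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    sims = q @ d.T                            # (num_queries, num_db)
    top_k = np.argsort(-sims, axis=1)[:, :k]  # indices of the k best matches
    hits = [gt in row for gt, row in zip(query_gt, top_k)]
    return float(np.mean(hits))

# Toy usage with random 128-d descriptors (dimensions are illustrative).
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 128))
queries = db[:10] + 0.05 * rng.normal(size=(10, 128))  # noisy copies of db items
print(recall_at_k(queries, db, query_gt=list(range(10)), k=5))
```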
{ "cite_N": [ "@cite_1" ], "mid": [ "1946093182" ], "abstract": [ "The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations." ] }
1903.05524
2972959293
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights on the important connection between network robustness and strong structural controllability in such networks.
Measures based on the Kirchhoff index, or equivalently the effective graph resistance, have been instrumental in quantifying the effect of noise on the expected steady-state dispersion in linear dynamical networks, particularly in those with consensus dynamics; for instance, see @cite_23 @cite_8 @cite_22 . Furthermore, limits on robustness measures that quantify expected steady-state dispersion due to external stochastic disturbances in linear dynamical networks are also studied in @cite_9 @cite_10 . To maximize robustness in networks by minimizing their Kirchhoff indices, various optimization approaches (e.g., @cite_26 @cite_1 ), including graph-theoretic ones @cite_0 , have been proposed. The main objective there is to determine crucial edges that need to be added or maintained to maximize robustness under given constraints @cite_11 .
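For concreteness, the Kirchhoff index used here as the robustness measure can be obtained from the non-zero Laplacian eigenvalues as Kf(G) = n * sum_i 1/lambda_i. The sketch below is a minimal illustration under the assumption of an unweighted, connected, undirected graph given by its adjacency matrix; it is not taken from any of the cited works.

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index (total effective resistance) of a connected,
    undirected graph: Kf = n * sum of reciprocals of the non-zero
    Laplacian eigenvalues."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    eig = np.linalg.eigvalsh(lap)      # ascending; eig[0] is (numerically) 0
    return adj.shape[0] * np.sum(1.0 / eig[1:])

# Path vs. cycle on 5 nodes: the cycle is more robust (smaller Kf).
path = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
cycle = path.copy()
cycle[0, -1] = cycle[-1, 0] = 1
print(kirchhoff_index(path), kirchhoff_index(cycle))
```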
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_9", "@cite_1", "@cite_0", "@cite_23", "@cite_10", "@cite_11" ], "mid": [ "2247236687", "1980652450", "2206841333", "2962884567", "2962699510", "1677630274", "2287065438", "2156286761", "2154177180" ], "abstract": [ "We investigate the (generalized) Walsh decomposition of point-to-point effective resistances on countable random electric networks with i.i.d. resistances. We show that it is concentrated on low levels, and thus point-to-point effective resistances are uniformly stable to noise. For graphs that satisfy some homogeneity property, we show in addition that it is concentrated on sets of small diameter. As a consequence, we compute the right order of the variance and prove a central limit theorem for the effective resistance through the discrete torus of side length n in Zd, when n goes to infinity.", "This paper considers the inverse problem with observed variables Y = BGX ⊕Z, where BG is the incidence matrix of a graph G, X is the vector of unknown vertex variables with a uniform prior, and Z is a noise vector with Bernoulli(e) i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. Without noise, exact recovery of X is possible if and only the graph G is connected, with a sharp threshold at the edge probability log(n) n for Erdős-Renyi random graphs. The first goal of this paper is to determine how the edge probability p needs to scale to allow exact recovery in the presence of noise. Defining the degree (oversampling) rate of the graph by α = np log(n), it is shown that exact recovery is possible if and only if α > 2 (1−2e)+o(1 (1−2e)). In other words, 2 (1−2e) is the information theoretic threshold for exact recovery at lowSNR. In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. Full version available in [1].", "Given a background graph with n vertices, the binary censored block model assumes that vertices are partitioned into two clusters, and every edge is labeled independently at random with labels drawn from Bern(1 − e) if two endpoints are in the same cluster, or from Bern(e) otherwise, where e ∈ [0; 1 2] is a fixed constant. For Erdős-Renyi graphs with edge probability p = a log n n and fixed a, we show that the semidefinite programming relaxation of the maximum likelihood estimator achieves the optimal threshold equation for exactly recovering the partition from the labeled graph with probability tending to one as n → ∞. For random regular graphs with degree scaling as a log n, we show that the semidefinite programming relaxation also achieves the optimal recovery threshold aD(Bern(1 2)∥Bern(e)) > 1, where D denotes the Kullback-Leibler divergence.", "We study a graph-theoretic property known as robustness, which plays a key role in the behavior of certain classes of dynamics on networks (such as resilient consensus and contagion). This property is much stronger than other graph properties such as connectivity and minimum degree, in that one can construct graphs with high connectivity and minimum degree but low robustness. In this paper, we investigate the robustness of common random graph models for complex networks (Erdős-Renyi, geometric random, and preferential attachment graphs). 
We show that the notions of connectivity and robustness coincide on these random graph models: the properties share the same threshold function in the Erdős-Renyi model, cannot be very different in the geometric random graph model, and are equivalent in the preferential attachment model. This indicates that a variety of purely local diffusion dynamics will be effective at spreading information in such networks.", "In this paper, we consider the cluster estimation problem under the stochastic block model. We show that the semidefinite programming (SDP) formulation for this problem achieves an error rate that decays exponentially in the signal-to-noise ratio. The error bound implies weak recovery in the sparse graph regime with bounded expected degrees as well as exact recovery in the dense regime. An immediate corollary of our results yields error bounds under the censored block model. Moreover, these error bounds are robust, continuing to hold under heterogeneous edge probabilities and a form of the so-called monotone attack. Significantly, this error rate is achieved by the SDP solution itself without any further pre- or post-processing and improves upon existing polynomially decaying error bounds proved using the Grothendieck’s inequality. Our analysis builds on two key ingredients: 1) showing that the graph has a well-behaved spectrum, even in the sparse regime, after discounting an exponentially small number of edges and 2) an order-statistics argument that governs the final error rate. Both arguments highlight the implicit regularization effect of the SDP formulation.", "Let G be a graph and @t:V(G)->N be an assignment of thresholds to the vertices of G. A subset of vertices D is said to be dynamic monopoly (or simply dynamo) if the vertices of G can be partitioned into subsets D\"0,D\"1,...,D\"k such that D\"0=D and for any i=1,...,k-1 each vertex v in D\"i\"+\"1 has at least t(v) neighbors in D\"[email protected][email protected]?D\"i. Dynamic monopolies are in fact modeling the irreversible spread of influence such as disease or belief in social networks. We denote the smallest size of any dynamic monopoly of G, with a given threshold assignment, by dyn(G). In this paper, we first define the concept of a resistant subgraph and show its relationship with dynamic monopolies. Then we obtain some lower and upper bounds for the smallest size of dynamic monopolies in graphs with different types of thresholds. Next we introduce dynamo-unbounded families of graphs and prove some related results. We also define the concept of a homogeneous society that is a graph with probabilistic thresholds satisfying some conditions and obtain a bound for the smallest size of its dynamos. Finally, we consider dynamic monopoly of line graphs and obtain some bounds for their sizes and determine the exact values in some special cases.", "This work considers the robustness of uncertain consensus networks. The stability properties of consensus networks with negative edge weights are also examined. We show that the network is unstable if either the negative weight edges form a cut in the graph or any single negative edge weight has a magnitude less than the inverse of the effective resistance between the two incident nodes. These results are then used to analyze the robustness of the consensus network with additive but bounded perturbations of the edge weights. It is shown that the small-gain condition is related again to cuts in the graph and effective resistance. 
For the single edge case, the small-gain condition is also shown to be exact. The results are then extended to consensus networks with nonlinear couplings.", "This paper studies an interesting graph measure that we call the effective graph resistance. The notion of effective graph resistance is derived from the field of electric circuit analysis where it is defined as the accumulated effective resistance between all pairs of vertices. The objective of the paper is twofold. First, we survey known formulae of the effective graph resistance and derive other representations as well. The derivation of new expressions is based on the analysis of the associated random walk on the graph and applies tools from Markov chain theory. This approach results in a new method to approximate the effective graph resistance. A second objective of this paper concerns the optimisation of the effective graph resistance for graphs with given number of vertices and diameter, and for optimal edge addition. A set of analytical results is described, as well as results obtained by exhaustive search. One of the foremost applications of the effective graph resistance we have in mind, is the analysis of robustness-related problems. However, with our discussion of this informative graph measure we hope to open up a wealth of possibilities of applying the effective graph resistance to all kinds of networks problems. © 2011 Elsevier Inc. All rights reserved.", "Newman’s measure for (dis)assortativity, the linear degree correlation coefficient ?D, is reformulated in terms of the total number Nk of walks in the graph with k hops. This reformulation allows us to derive a new formula from which a degree-preserving rewiring algorithm is deduced, that, in each rewiring step, either increases or decreases ?D conform our desired objective. Spectral metrics (eigenvalues of graph-related matrices), especially, the largest eigenvalue ?1 of the adjacency matrix and the algebraic connectivity ?N?1 (second-smallest eigenvalue of the Laplacian) are powerful characterizers of dynamic processes on networks such as virus spreading and synchronization processes. We present various lower bounds for the largest eigenvalue ?1 of the adjacency matrix and we show, apart from some classes of graphs such as regular graphs or bipartite graphs, that the lower bounds for ?1 increase with ?D. A new upper bound for the algebraic connectivity ?N?1 decreases with ?D. Applying the degree-preserving rewiring algorithm to various real-world networks illustrates that (a) assortative degree-preserving rewiring increases ?1, but decreases ?N?1, even leading to disconnectivity of the networks in many disjoint clusters and that (b) disassortative degree-preserving rewiring decreases ?1, but increases the algebraic connectivity, at least in the initial rewirings." ] }
1903.05524
2972959293
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights on the important connection between network robustness and strong structural controllability in such networks.
To quantify controllability, several approaches have been adopted, including determining the minimum number of inputs (leader nodes) needed to (structurally or strongly structurally) control a network, determining the worst-case control energy, metrics based on controllability Gramians, and so on (e.g., see @cite_7 @cite_5 ). Strong structural controllability, because it is independent of the coupling weights between nodes, is a generalized notion of controllability with practical implications, and recent studies have provided graph-theoretic characterizations of this concept @cite_20 @cite_13 @cite_17 . There are numerous other studies on leader selection to optimize network performance measures under various constraints, such as minimizing the deviation from consensus in a noisy environment @cite_4 @cite_2 , or maximizing various controllability measures, for instance @cite_15 @cite_18 @cite_25 @cite_14 . Recently, optimization methods have also been presented that select leader nodes by exploiting submodularity properties of performance measures for network robustness and structural controllability @cite_5 @cite_3 .
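To make leader-based controllability concrete, the sketch below runs a classical Kalman rank test for a Laplacian-based network with a chosen leader set. Note that this checks controllability only for one particular realisation of coupling weights, unlike the strong structural controllability discussed above, which must hold for all admissible weights; the example graph and leader choice are assumptions for illustration.

```python
import numpy as np

def controllable(adj, leaders):
    """Kalman rank test for x' = -L x + B u, where L is the graph
    Laplacian and B selects the leader nodes as control inputs.
    This checks ONE weight realisation, not strong structural
    controllability (which must hold for all realisations)."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    A = -(np.diag(adj.sum(axis=1)) - adj)
    B = np.zeros((n, len(leaders)))
    for col, node in enumerate(leaders):
        B[node, col] = 1.0
    # Controllability matrix [B, AB, ..., A^{n-1}B]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    C = np.hstack(blocks)
    return np.linalg.matrix_rank(C) == n

# A 4-node path graph: a single leader at an end node suffices here.
path = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
print(controllable(path, leaders=[0]))   # True for this realisation
```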
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_7", "@cite_3", "@cite_2", "@cite_5", "@cite_15", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2111725629", "2315383458", "2736121037", "1938602245", "2049708951", "2101420429", "1993788616", "2763583074", "2170723553", "2551993261", "2256710691", "2067935583" ], "abstract": [ "This paper studies the problem of controlling complex networks, i.e., the joint problem of selecting a set of control nodes and of designing a control input to steer a network to a target state. For this problem, 1) we propose a metric to quantify the difficulty of the control problem as a function of the required control energy, 2) we derive bounds based on the system dynamics (network topology and weights) to characterize the tradeoff between the control energy and the number of control nodes, and 3) we propose an open-loop control strategy with performance guarantees. In our strategy, we select control nodes by relying on network partitioning, and we design the control input by leveraging optimal and distributed control techniques. Our findings show several control limitations and properties. For instance, for Schur stable and symmetric networks: 1) if the number of control nodes is constant, then the control energy increases exponentially with the number of network nodes; 2) if the number of control nodes is a fixed fraction of the network nodes, then certain networks can be controlled with constant energy independently of the network dimension; and 3) clustered networks may be easier to control because, for sufficiently many control nodes, the control energy depends only on the controllability properties of the clusters and on their coupling strength. We validate our results with examples from power networks, social networks and epidemics spreading.", "In this technical note, we study the controllability of diffusively coupled networks from a graph theoretic perspective. We consider leader-follower networks, where the external control inputs are injected to only some of the agents, namely the leaders. Our main result relates the controllability of such systems to the graph distances between the agents. More specifically, we present a graph topological lower bound on the rank of the controllability matrix. This lower bound is tight, and it is applicable to systems with arbitrary network topologies, coupling weights, and number of leaders. An algorithm for computing the lower bound is also provided. Furthermore, as a prominent application, we present how the proposed bound can be utilized to select a minimal set of leaders for achieving controllability, even when the coupling weights are unknown.", "This paper investigates the robustness of strong structural controllability for linear time-invariant directed networked systems with respect to structural perturbations, including edge additions and deletions. In this regard, an algorithm is presented that is initiated by endowing each node of a network with a successive set of integers. Using this algorithm, a new notion of perfect graphs associated with a network is introduced, and tight upper bounds on the number of edges that can be added to, or removed from a network, while ensuring strong structural controllability, are derived. 
Moreover, we obtain a characterization of critical edges with respect to edge additions and deletions; these sets are the maximal sets of edges whose any subset can be respectively added to, or removed from a network, while preserving strong structural controllability.", "Controllability and observability have long been recognized as fundamental structural properties of dynamical systems, but have recently seen renewed interest in the context of large, complex networks of dynamical systems. A basic problem is sensor and actuator placement: choose a subset from a finite set of possible placements to optimize some real-valued controllability and observability metrics of the network. Surprisingly little is known about the structure of such combinatorial optimization problems. In this paper, we show that several important classes of metrics based on the controllability and observability Gramians have a strong structural property that allows for either efficient global optimization or an approximation guarantee by using a simple greedy heuristic for their maximization. In particular, the mapping from possible placements to several scalar functions of the associated Gramian is either a modular or submodular set function. The results are illustrated on randomly generated systems and on a problem of power-electronic actuator placement in a model of the European power grid.", "This paper examines strong structural controllability of linear-time-invariant networked systems. We provide necessary and sufficient conditions for strong structural controllability involving constrained matchings over the bipartite graph representation of the network. An O(n2) algorithm to validate if a set of inputs leads to a strongly structurally controllable network and to find such an input set is proposed. The problem of finding such a set with minimal cardinality is shown to be NP-complete. Minimal cardinality results for strong and weak structural controllability are compared.", "The ultimate proof of our understanding of natural or technological systems is reflected in our ability to control them. Although control theory offers mathematical tools for steering engineered and natural systems towards a desired state, a framework to control complex self-organized systems is lacking. Here we develop analytical tools to study the controllability of an arbitrary complex directed network, identifying the set of driver nodes with time-dependent control that can guide the system’s entire dynamics. We apply these tools to several real networks, finding that the number of driver nodes is determined mainly by the network’s degree distribution. We show that sparse inhomogeneous networks, which emerge in many real complex systems, are the most difficult to control, but that dense and homogeneous networks can be controlled using a few driver nodes. Counterintuitively, we find that in both model and real systems the driver nodes tend to avoid the high-degree nodes. Control theory can be used to steer engineered and natural systems towards a desired state, but a framework to control complex self-organized systems is lacking. Can such networks be controlled? Albert-Laszlo Barabasi and colleagues tackle this question and arrive at precise mathematical answers that amount to 'yes, up to a point'. 
They develop analytical tools to study the controllability of an arbitrary complex directed network using both model and real systems, ranging from regulatory, neural and metabolic pathways in living organisms to food webs, cell-phone movements and social interactions. They identify the minimum set of driver nodes whose time-dependent control can guide the system's entire dynamics ( http: go.nature.com wd9Ek2 ). Surprisingly, these are not usually located at the network hubs.", "We consider the relation of symmetries and subspace controllability for spin-1 2 networks with XXZ couplings, subject to perturbation of a single node by a local potential (Z-control). The Hamiltonians for such networks decompose into excitation subspaces. Focusing on the single excitation subspace, it is shown for single-node Z-controls that external symmetries are characterized by eigenstates of the system Hamiltonian which have zero overlap with the control node, and there are no internal symmetries. It is further shown that there are symmetries which persist even in the presence of random perturbations. For XXZ chains with uniform coupling strengths, a characterization of all possible symmetries is given which shows a strong dependence on the position of the node we control. We then show for Heisenberg and XX chains with uniform coupling strength subject to single-node Z-control that the lack of symmetry is both necessary and sufficient for subspace controllability. Finally, the latter approach is generalized to establish controllability results for simple branched networks.", "Characterization of network controllability through its topology has recently gained a lot of attention in the systems and control community. Using the notion of balancing sets, in this note, such a network-centric approach for the controllability of certain families of undirected networks is investigated. Moreover, by introducing the notion of a generalized zero forcing set, the structural controllability of undirected networks is discussed; in this direction, lower bounds on the dimension of the controllable subspace are derived. In addition, a method is proposed that facilitates synthesis of structural and strong structural controllable networks as well as examining preservation of network controllability under structural perturbations.", "Network controllability has numerous applications in natural and technological systems. Here, develop a theoretical approach and a greedy algorithm to study target control—the ability to efficiently control a preselected subset of nodes—in complex networks.", "In this paper, we apply an emerging method, online learning with dynamics, to deduce properties of distributed energy resources (DERs) from coarse measurements, e.g., measurements taken at distribution substations, rather than household-level measurements. Reduced sensing requirements can lower infrastructure costs associated with reliably incorporating DERs into the distribution network. We specifically investigate whether dynamic mirror descent (DMD), an online learning algorithm, can determine the real-time controllable demand served by a distribution feeder using feeder-level active power demand measurements. In our scenario, DMD incorporates various controllable demand and uncontrollable demand models to generate real-time controllable demand estimates. In a realistic scenario, these estimates have an RMS error of 8.34 of the average controllable demand, which improves to 5.53 by incorporating more accurate models. 
We propose topics for additional work in modeling, system identification, and the DMD algorithm itself that could improve the RMS errors.", "In this paper, we consider a robust network control problem. We consider linear unstable and uncertain discrete time plants with a network between the sensor and controller and the controller and plant. We investigate the effect of data drop out in the form of packet losses. Four distinct control schemes are explored and sufficient conditions to ensure almost sure stability of the closed loop system are derived for each of them in terms of minimum packet arrival rate and the maximum uncertainty. I. INTRODUCTION In the past decade, networked control systems (NCS) have gained much attention from both the control com- munity and the network and communication community. When compared with classical feedback control system, networked control systems have several advantages. For example, they can reduce the system wiring, make the system easy to operate and maintain and later diagnose in case of malfunctioning, and increase system agility (20). Although NCS have advantages, inserting a network in between the plant and the controller introduces many problems as well. For instance, zero-delayed sensing and actuation, perfect information and synchronization are no longer guaranteed in the new system architecture as only finite bandwidth is available and data packet drops and delays may occur due to network traffic conditions. These must be revisited and analyzed before networked control systems become prevalent. Recently, many researchers have spent effort on these issues and some significant results were obtained and many are in progress. Many of the aforementioned issues are studied separately. Tatikonda (19) and Sahai (13) have presented some interesting results in the area of control under communication constraints. Specifically, Tatikonda gave a necessary and sufficient condition on the channel data rate such that a noiseless LTI system in the closed loop is asymptotically stable. He also gave rate results for stabilizing a noisy LTI system over a digital channel. Sahai proposed the notion of anytime capacity to deal with real time estimation and control for a networked control system. In our paper (17), the authors have considered various rate issues under finite bandwidth, packet drops and finite controls. An optimal bit allocation scheme is given in (16) under the networked setting. The effect of pacekt drops on state estimation was studied by Sinopoli, et. al. in (3). It has further been investigated by many researchers including the present authors in (15) and (6).", "In practical military or first responder deployment scenarios, information flows need to adhere to specified policies regardless of the physical connectivity of nodes. Nodes in such networks are associated with various levels in a command-and-control hierarchy, and therefore typically form a logical hierarchical tree network that is used to route both command and data traffic. Associated with this logical hierarchical network is a communication network that represents the connectivity of these nodes in the deployed scenario. Such composite networks introduce constraints that can result in information flows having to traverse much longer paths in the underlying communication network. 
In this paper, we look at the problem of adding edges to a logical hierarchical network (or any other social network) so as to minimize the number of hops required to route data traffic in the underlying communication network from a node to other specified nodes. The edges added are a subset of all possible edges in the complementary logical hierarchical graph and have to satisfy specified hierarchical constraints. First, we consider the general problem of minimizing the eccentricity of a source node 's' (where eccentricity of 's' is the maximum of the shortest paths from 's' to all other nodes) in a metric graph on adding upto 'B' unequal cost metric edges from the set of all edges in the complementary graph. We develop an efficient constant factor approximation algorithm for this case that outperforms existing constant factor algorithms for eccentricity minimization. Here the added edge metric cost as well as the graph edge metric cost correspond to the number of hops in the shortest path required to route traffic in the actual deployed topology (i.e., underlying communication network). Next, we consider the case where the set of possible added edges is a specified subset of the edges in the complementary graph and the set of destinations is a subset of the graph nodes. For this case, we develop heuristic algorithms based on the previous eccentricity minimizing algorithm that show good performance. We validate our algorithms using two realistic military deployment scenarios. We find that adding even a low number of hierarchically constrained edges (of the order of 10) can cause a significant decrease (around 50 ) in the eccentricity of a node in the logical hierarchical network and thus can reduce the number of hops required for data traffic traversal." ] }
1903.05524
2972959293
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights on the important connection between network robustness and strong structural controllability in such networks.
Very recently, the trade-off between controllability and fragility in complex networks was investigated in @cite_21 . Fragility measures the smallest perturbation in edge weights that makes the network unstable. The authors of @cite_21 show that networks that require small control energy, as measured by the eigenvalues of the controllability Gramian, to drive the network from one state to another are more fragile, and vice versa. In our work, for control performance, we consider the minimum number of leaders needed for strong structural controllability, which is independent of coupling weights; for robustness, we utilize the Kirchhoff index, which measures robustness to noise as well as to structural changes in the underlying network graph. Moreover, in this work we focus on designing and comparing extremal networks for these properties. The rest of the paper is organized as follows: Section describes preliminaries and network dynamics. Section explains the measures for robustness and controllability, and also outlines the main problems. Section presents maximally robust networks for a given @math and @math , and also analyzes their controllability. Section provides a design of maximally controllable networks and also evaluates their robustness. Finally, Section concludes the paper.
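The control-energy notion referred to in @cite_21 can be illustrated with the infinite-horizon controllability Gramian of a stable discrete-time network: the reciprocal of its smallest eigenvalue upper-bounds the worst-case energy needed to reach a unit-norm target state. The sketch below is only an illustration of that measure under assumed, Schur-stable toy dynamics; it does not reproduce the cited analysis.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def worst_case_energy(A, B):
    """For a Schur-stable x_{k+1} = A x_k + B u_k, the infinite-horizon
    Gramian W solves W = A W A^T + B B^T; 1/lambda_min(W) upper-bounds
    the energy needed to reach any unit-norm target state."""
    W = solve_discrete_lyapunov(A, B @ B.T)
    lam_min = np.min(np.linalg.eigvalsh(W))
    return 1.0 / lam_min

# Toy 3-node chain with a single input at node 0 (weights are illustrative).
A = 0.9 * np.array([[0.5, 0.2, 0.0],
                    [0.2, 0.5, 0.2],
                    [0.0, 0.2, 0.5]])
B = np.array([[1.0], [0.0], [0.0]])
print(worst_case_energy(A, B))
```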
{ "cite_N": [ "@cite_21" ], "mid": [ "2887109490" ], "abstract": [ "Mathematical theories and empirical evidence suggest that several complex natural and man-made systems are fragile: as their size increases, arbitrarily small and localized alterations of the system parameters may trigger system-wide failures. Examples are abundant, from perturbation of the population densities leading to extinction of species in ecological networks [1], to structural changes in metabolic networks preventing reactions [2], cascading failures in power networks [3], and the onset of epileptic seizures following alterations of structural connectivity among populations of neurons [4]. While fragility of these systems has long been recognized [5], convincing theories of why natural evolution or technological advance has failed, or avoided, to enhance robustness in complex systems are still lacking. In this paper we propose a mechanistic explanation of this phenomenon. We show that a fundamental tradeoff exists between fragility of a complex network and its controllability degree, that is, the control energy needed to drive the network state to a desirable state. We provide analytical and numerical evidence that easily controllable networks are fragile, suggesting that natural and man-made systems can either be resilient to parameters perturbation or efficient to adapt their state in response to external excitations and controls." ] }
cmp-lg9408015
2952406681
Effective problem solving among multiple agents requires a better understanding of the role of communication in collaboration. In this paper we show that there are communicative strategies that greatly improve the performance of resource-bounded agents, but that these strategies are highly sensitive to the task requirements, situation parameters and agents' resource limitations. We base our argument on two sources of evidence: (1) an analysis of a corpus of 55 problem solving dialogues, and (2) experimental simulations of collaborative problem solving dialogues in an experimental world, Design-World, where we parameterize task requirements, agents' resources and communicative strategies.
Design-World is also based on the method used in Carletta's JAM simulation for the Edinburgh Map-Task @cite_10 . JAM is based on the Map-Task Dialogue corpus, where the goal of the task is for the planning agent, the instructor, to instruct the reactive agent, the instructee, how to get from one place to another on the map. JAM focuses on efficient strategies for recovery from error and parametrizes agents according to their communicative and error recovery strategies. Given good error recovery strategies, Carletta argues that 'high risk' strategies are more efficient, where efficiency is a measure of the number of utterances in the dialogue. While the focus here is different, we have shown that the number of utterances is just one parameter for evaluating performance, and that the task definition determines when strategies are effective.
{ "cite_N": [ "@cite_10" ], "mid": [ "2949964922" ], "abstract": [ "Building a dialogue agent to fulfill complex tasks, such as travel planning, is challenging because the agent has to learn to collectively complete multiple subtasks. For example, the agent needs to reserve a hotel and book a flight so that there leaves enough time for commute between arrival and hotel check-in. This paper addresses this challenge by formulating the task in the mathematical framework of options over Markov Decision Processes (MDPs), and proposing a hierarchical deep reinforcement learning approach to learning a dialogue manager that operates at different temporal scales. The dialogue manager consists of: (1) a top-level dialogue policy that selects among subtasks or options, (2) a low-level dialogue policy that selects primitive actions to complete the subtask given by the top-level policy, and (3) a global state tracker that helps ensure all cross-subtask constraints be satisfied. Experiments on a travel planning task with simulated and real users show that our approach leads to significant improvements over three baselines, two based on handcrafted rules and the other based on flat deep reinforcement learning." ] }
cs9907027
2949190809
The aim of the Alma project is the design of a strongly typed constraint programming language that combines the advantages of logic and imperative programming. The first stage of the project was the design and implementation of Alma-0, a small programming language that provides a support for declarative programming within the imperative programming framework. It is obtained by extending a subset of Modula-2 by a small number of features inspired by the logic programming paradigm. In this paper we discuss the rationale for the design of Alma-0, the benefits of the resulting hybrid programming framework, and the current work on adding constraint processing capabilities to the language. In particular, we discuss the role of the logical and customary variables, the interaction between the constraint store and the program, and the need for lists.
We concentrate here on the related work involving addition of constraints to imperative languages. For an overview of related work pertaining to the language we refer the reader to @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2034373223" ], "abstract": [ "We present a new approach to adding state and state-changing commands to a term language. As a formal semantics it can be seen as a generalization of predicate transformer semantics, but beyond that it brings additional opportunities for specifying and verifying programs. It is based on a construct called a phrase, which is a term of the form C r t, where C stands for a command and t stands for a term of any type. If R is boolean, C r R is closely related to the weakest precondition wp(C,R). The new theory draws together functional and imperative programming in a simple way. In particular, imperative procedures and functions are seen to be governed by the same laws as classical functions. We get new techniques for reasoning about programs, including the ability to dispense with logical variables and their attendant complexities. The theory covers both programming and specification languages, and supports unbounded demonic and angelic nondeterminacy in both commands and terms." ] }
cmp-lg9709007
1578881253
Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.
To our knowledge, lexical databases have been used only once before in TC, apart from our previous work. Hearst @cite_10 adapted a disambiguation algorithm by Yarowsky using WordNet to recognize category occurrences. Categories are made of WordNet terms, which is not the general case for standard or user-defined categories. It is a hard task to adapt WordNet subsets to pre-existing categories, especially when they are domain-dependent. Hearst's approach has shown promising results, confirmed by our previous work @cite_23 and by the present results.
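The Rocchio-style training mentioned in the abstract above can be illustrated with a minimal centroid classifier over term-frequency vectors. This is a generic sketch, not the system described in the paper; the beta/gamma values, the tiny vocabulary, and the cosine scoring are assumptions for illustration.

```python
import numpy as np

def rocchio_prototypes(X, y, beta=16.0, gamma=4.0):
    """One prototype per category: beta * centroid of positive documents
    minus gamma * centroid of negative documents."""
    protos = {}
    for c in np.unique(y):
        pos = X[y == c].mean(axis=0)
        neg = X[y != c].mean(axis=0)
        protos[c] = beta * pos - gamma * neg
    return protos

def classify(x, protos):
    """Assign the category whose prototype has the highest cosine score."""
    scores = {c: x @ p / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-12)
              for c, p in protos.items()}
    return max(scores, key=scores.get)

# Toy term-frequency vectors over a 4-word vocabulary.
X = np.array([[3, 0, 1, 0], [2, 1, 0, 0], [0, 0, 2, 3], [0, 1, 1, 4]], float)
y = np.array([0, 0, 1, 1])
print(classify(np.array([1.0, 0.0, 0.0, 2.0]), rocchio_prototypes(X, y)))
```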
{ "cite_N": [ "@cite_10", "@cite_23" ], "mid": [ "2038721957", "1578881253" ], "abstract": [ "Part 1 The lexical database: nouns in WordNet, George A. Miller modifiers in WordNet, Katherine J. Miller a semantic network of English verbs, Christiane Fellbaum design and implementation of the WordNet lexical database and searching software, Randee I. Tengi. Part 2: automated discovery of WordNet relations, Marti A. Hearst representing verb alterations in WordNet, Karen T. the formalization of WordNet by methods of relational concept analysis, Uta E. Priss. Part 3 Applications of WordNet: building semantic concordances, Shari performance and confidence in a semantic annotation task, Christiane WordNet and class-based probabilities, Philip Resnik combining local context and WordNet similarity for word sense identification, Claudia Leacock and Martin Chodorow using WordNet for text retrieval, Ellen M. Voorhees lexical chains as representations of context for the detection and correction of malapropisms, Graeme Hirst and David St-Onge temporal indexing through lexical chaining, Reem Al-Halimi and Rick Kazman COLOR-X - using knowledge from WordNet for conceptual modelling, J.F.M. Burg and R.P. van de Riet knowledge processing on an extended WordNet, Sanda M. Harabagiu and Dan I Moldovan appendix - obtaining and using WordNet.", "Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories." ] }
cmp-lg9709007
1578881253
Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.
Lexical databases have been employed recently in word sense disambiguation. For example, Agirre and Rigau @cite_4 make use of a semantic distance that takes into account structural factors in WordNet to achieve good results on this task. Additionally, Resnik @cite_3 combines WordNet with a text collection to define a distance for disambiguating noun groupings. Although the text collection is not a training collection (in the sense of a collection of manually labeled texts for a pre-defined text processing task), his approach can be regarded as the most similar to ours in the disambiguation setting. Finally, Ng and Lee @cite_11 make use of several sources of information inside a training collection (neighborhood, part of speech, morphological form, etc.) to get good results in disambiguating unrestricted text.
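As a deliberately simple stand-in for the WordNet-based semantic distances cited above (it is neither the conceptual-density measure of Agirre and Rigau nor Resnik's information-content measure), the sketch below disambiguates a word by maximizing NLTK's path similarity against the senses of its context words. It assumes NLTK is installed and the WordNet data has been downloaded.

```python
# Requires: pip install nltk; then run nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def disambiguate(word, context_words):
    """Pick the sense of `word` whose synset is closest (by WordNet path
    similarity) to any sense of the context words. A deliberately simple
    illustration, not the measures used in the cited works."""
    best_sense, best_score = None, -1.0
    for sense in wn.synsets(word):
        for ctx in context_words:
            for ctx_sense in wn.synsets(ctx):
                score = sense.path_similarity(ctx_sense) or 0.0
                if score > best_score:
                    best_sense, best_score = sense, score
    return best_sense

print(disambiguate("bank", ["river", "water"]))
```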
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_11" ], "mid": [ "2165897980", "1561908597", "128995279" ], "abstract": [ "Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers, the equivalent of \"society\" is \"database,\" and the equivalent of \"use\" is \"a way to search the database\". We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts, we use the World Wide Web (WWW) as the database, and Google as the search engine. The method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract similarity, the Google similarity distance, of words and phrases from the WWW using Google page counts. The WWW is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. We give applications in hierarchical clustering, classification, and language translation. We give examples to distinguish between colors and numbers, cluster names of paintings by 17th century Dutch masters and names of books by English novelists, the ability to understand emergencies and primes, and we demonstrate the ability to do a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in an a mean agreement of 87 percent with the expert crafted WordNet categories", "This paper presents an adaptation of Lesk's dictionary-based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the SENSEVAL-2 word sense disambiguation exercise, and attains an overall accuracy of 32 . This represents a significant improvement over the 16 and 23 accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation systems.", "The WordNet lexical database is now quite large and offers broad coverage of general lexical relations in English. As is evident in this volume, WordNet has been employed as a resource for many applications in natural language processing (NLP) and information retrieval (IR). However, many potentially useful lexical relations are currently missing from WordNet. Some of these relations, while useful for NLP and IR applications, are not necessarily appropriate for a general, domain-independent lexical database. For example, WordNet’s coverage of proper nouns is rather sparse, but proper nouns are often very important in application tasks. The standard way lexicographers find new relations is to look through huge lists of concordance lines. However, culling through long lists of concordance lines can be a rather daunting task (Church and Hanks, 1990), so a method that picks out those lines that are very likely to hold relations of interest should be an improvement over more traditional techniques. 
This chapter describes a method for the automatic discovery of WordNetstyle lexico-semantic relations by searching for corresponding lexico-syntactic patterns in large text collections. Large text corpora are now widely available, and can be viewed as vast resources from which to mine lexical, syntactic, and semantic information. This idea is reminiscent of what is known as “data mining” in the artificial intelligence literature (Fayyad and Uthurusamy, 1996), however, in this case the ore is raw text rather than tables of numerical data. The Lexico-Syntactic Pattern Extraction (LSPE) method is meant to be useful as an automated or semi-automated aid for lexicographers and builders of domain-dependent knowledge-bases. The LSPE technique is light-weight; it does not require a knowledge base or complex interpretation modules in order to suggest new WordNet relations." ] }
cmp-lg9706006
2953123431
Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature -- text categorization. We argue that these algorithms -- which categorize documents by learning a linear separator in the feature space -- have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set.
The methods most similar to our techniques are the on-line algorithms used in @cite_16 and @cite_10 . In the first, two algorithms suggested in @cite_5 , a multiplicative-update and an additive-update algorithm, are evaluated in the domain and shown to perform somewhat better than Rocchio's algorithm. While both of these works make use of multiplicative-update algorithms, as we do, there are two major differences between those studies and the current one. First, there are some important technical differences between the algorithms used. Second, the algorithms we study here are mistake-driven; they update the weight vector only when a mistake is made, and not after every example seen. The Experts algorithm studied in @cite_10 is very similar to a basic version of the algorithm we study here. The way we treat the negative weights is different, though, and significantly more efficient, especially in sparse domains (see ). Cohen and Singer also experiment, using the same algorithm, with more complex features (sparse n-grams) and show that, as expected, these yield better results.
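For readers unfamiliar with mistake-driven multiplicative updates, the sketch below shows a basic positive-Winnow learner over binary features: weights are promoted or demoted only when a mistake is made. The promotion factor, threshold, and toy data are illustrative assumptions, not the exact variant studied in the paper.

```python
import numpy as np

def winnow_train(X, y, alpha=2.0, threshold=None, epochs=5):
    """Mistake-driven positive Winnow: weights are updated only when the
    current hypothesis misclassifies an example, by multiplying (promotion)
    or dividing (demotion) the weights of the active features."""
    n_features = X.shape[1]
    theta = threshold if threshold is not None else n_features / 2.0
    w = np.ones(n_features)
    for _ in range(epochs):
        for x, label in zip(X, y):
            pred = 1 if w @ x >= theta else 0
            if pred == label:
                continue                      # mistake-driven: no update
            if label == 1:                    # promotion on a false negative
                w[x > 0] *= alpha
            else:                             # demotion on a false positive
                w[x > 0] /= alpha
    return w, theta

# Toy problem with binary features: the label is 1 iff feature 0 is active.
X = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 1]])
y = np.array([1, 1, 0, 0])
w, theta = winnow_train(X, y)
print((X @ w >= theta).astype(int))          # reproduces y on this toy data
```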
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_10" ], "mid": [ "2097645432", "1988790447", "2069317438" ], "abstract": [ "In most kernel based online learning algorithms, when an incoming instance is misclassified, it will be added into the pool of support vectors and assigned with a weight, which often remains unchanged during the rest of the learning process. This is clearly insufficient since when a new support vector is added, we generally expect the weights of the other existing support vectors to be updated in order to reflect the influence of the added support vector. In this paper, we propose a new online learning method, termed Double Updating Online Learning, or DUOL for short, that explicitly addresses this problem. Instead of only assigning a fixed weight to the misclassified example received at the current trial, the proposed online learning algorithm also tries to update the weight for one of the existing support vectors. We show that the mistake bound can be improved by the proposed online learning method. We conduct an extensive set of empirical evaluations for both binary and multi-class online learning tasks. The experimental results show that the proposed technique is considerably more effective than the state-of-the-art online learning algorithms. The source code is available to public at http: www.cais.ntu.edu.sg chhoi DUOL .", "In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.", "We consider two algorithm for on-line prediction based on a linear model. The algorithms are the well-known Gradient Descent (GD) algorithm and a new algorithm, which we call EG(+ -). They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG(+ -) algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG(+ -) and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG(+ -) has a much smaller loss if only a few components of the input are relevant for the predictions. We have performed experiments, which show that our worst-case upper bounds are quite tight already on simple artificial data." ] }
cmp-lg9701003
2952751702
In expert-consultation dialogues, it is inevitable that an agent will at times have insufficient information to determine whether to accept or reject a proposal by the other agent. This results in the need for the agent to initiate an information-sharing subdialogue to form a set of shared beliefs within which the agents can effectively re-evaluate the proposal. This paper presents a computational strategy for initiating such information-sharing subdialogues to resolve the system's uncertainty regarding the acceptance of a user proposal. Our model determines when information-sharing should be pursued, selects a focus of information-sharing among multiple uncertain beliefs, chooses the most effective information-sharing strategy, and utilizes the newly obtained information to re-evaluate the user proposal. Furthermore, our model is capable of handling embedded information-sharing subdialogues.
Grosz, Sidner and Lochbaum @cite_7 @cite_1 developed a SharedPlan approach to modeling collaborative discourse, and Sidner formulated an artificial language for modeling such discourse. Sidner viewed a collaborative planning process as proposal acceptance and proposal rejection sequences. Her artificial language treats an utterance such as "Why do X?" as a proposal for the hearer to provide support for his proposal to do X. However, Sidner's work is descriptive and does not provide a mechanism for determining when and how such a proposal should be made, nor how responses should be formulated in information-sharing subdialogues.
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "2148389694", "1564910013" ], "abstract": [ "A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's beliefs and intentions from the other's, and avoid assumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algorithm for plan recognition that is based on the Shared-Plan model of collaboration (Grosz and Sidner, 1990; , 1990) and that satisfies these constraints.", "This paper addresses the problem of designing a system that accepts a plan structure of the sort generated by AI planning programs and produces natural language text explaining how to execute the plan. We describe a system that generates text from plans produced by the NONLIN planner (Tate 1976).The results of our system are promising, but the texts still lack much of the smoothness of human-generated text. This is partly because, although the domain of plans seems a priori to provide rich structure that a natural language generator can use, in practice a plan that is generated without the production of explanations in mind rarely contains the kinds of information that would yield an interesting natural language account. For instance, the hierarchical organization assigned to a plan is liable to reflect more a programmer's approach to generating a class of plans efficiently than the way that a human would naturally \"chunk\" the relevant actions. Such problems are, of course, similar to those that Swartout (1983) encountered with expert systems. In addition, AI planners have a restricted view of the world that is hard to match up with the normal semantics of natural language expressions. Thus constructs that are primitive to the planner may be only clumsily or misleadingly expressed in natural language, and the range of possible natural language constructs may be artificially limited by the shallowness of the planner's representations." ] }
cmp-lg9701003
2952751702
In expert-consultation dialogues, it is inevitable that an agent will at times have insufficient information to determine whether to accept or reject a proposal by the other agent. This results in the need for the agent to initiate an information-sharing subdialogue to form a set of shared beliefs within which the agents can effectively re-evaluate the proposal. This paper presents a computational strategy for initiating such information-sharing subdialogues to resolve the system's uncertainty regarding the acceptance of a user proposal. Our model determines when information-sharing should be pursued, selects a focus of information-sharing among multiple uncertain beliefs, chooses the most effective information-sharing strategy, and utilizes the newly obtained information to re-evaluate the user proposal. Furthermore, our model is capable of handling embedded information-sharing subdialogues.
Several researchers have studied the role of clarification dialogues in disambiguating user plans @cite_3 @cite_5 and in understanding referring expressions @cite_8 . developed an automated librarian that could revise its beliefs and intentions and could generate responses as an attempt to revise the user's beliefs and intentions. Although their system had rules for asking the user whether he holds a particular belief and for telling the system's attitude toward a belief, the emphasis of their work was on conflict resolution and plan disambiguation. Thus they did not investigate a comprehensive strategy for information-sharing during proposal evaluation. For example, they did not identify situations in which information-sharing is necessary, did not address how to select a focus of information-sharing when there are multiple uncertain beliefs, did not consider requesting the user's justifications for a belief, etc. In addition, they do not provide an overall dialogue planner that takes into account discourse structure and appropriately captures embedded subdialogues.
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_8" ], "mid": [ "2101445408", "1990671169", "1733954365" ], "abstract": [ "We report evaluation results for real users of a learnt dialogue management policy versus a hand-coded policy in the TALK project's \"Townlnfo\" tourist information system. The learnt policy, for filling and confirming information slots, was derived from COMMUNICATOR (flight-booking) data using reinforcement learning (RL) as described in [2], ported to the tourist information domain (using a general method that we propose here), and tested using 18 human users in 180 dialogues, who also used a state-of-the-art hand- coded dialogue policy embedded in an otherwise identical system. We found that users of the (ported) learned policy had an average gain in perceived task completion of 14.2 (from 67.6 to 81.8 at p < .03), that the hand-coded policy dialogues had on average 3.3 more system turns (p < .01), and that the user satisfaction results were comparable, even though the policy was learned for a different domain. Combining these in a dialogue reward score, we found a 14.4 increase for the learnt policy (a 23.8 relative increase, p < .03). These results are important because they show a) that results for real users are consistent with results for automatic evaluation [2] of learned policies using simulated users [3, 4], b) that a policy learned using linear function approximation over a very large policy space [2] is effective for real users, and c) that policies learned using data for one domain can be used successfully in other domains. We also present a qualitative discussion of the learnt policy.", "HighlightsWe integrate user appraisals in a POMDP-based dialogue manager procedure.We employ additional socially-inspired rewards in a RL setup to guide the learning.A unified framework for speeding up the policy optimisation and user adaptation.We consider a potential-based reward shaping with a sample efficient RL algorithm.Evaluated using both user simulator (information retrieval) and user trials (HRI). This paper investigates some conditions under which polarized user appraisals gathered throughout the course of a vocal interaction between a machine and a human can be integrated in a reinforcement learning-based dialogue manager. More specifically, we discuss how this information can be cast into socially-inspired rewards for speeding up the policy optimisation for both efficient task completion and user adaptation in an online learning setting. For this purpose a potential-based reward shaping method is combined with a sample efficient reinforcement learning algorithm to offer a principled framework to cope with these potentially noisy interim rewards. The proposed scheme will greatly facilitate the system's development by allowing the designer to teach his system through explicit positive negative feedbacks given as hints about task progress, in the early stage of training. At a later stage, the approach will be used as a way to ease the adaptation of the dialogue policy to specific user profiles. Experiments carried out using a state-of-the-art goal-oriented dialogue management framework, the Hidden Information State (HIS), support our claims in two configurations: firstly, with a user simulator in the tourist information domain (and thus simulated appraisals), and secondly, in the context of man-robot dialogue with real user trials.", "To participate in a dialogue a system must be capable of reasoning about its own previous utterances. 
Follow-up questions must be interpreted in the context of the ongoing conversation, and the system's previous contributions form part of this context. Furthermore, if a system is to be able to clarify misunderstood explanations or to elaborate on prior explanations, it must understand what it has conveyed in prior explanations. Previous approaches to generating multisentential texts have relied solely on rhetorical structuring techniques. In this paper, we argue that, to handle explanation dialogues successfully, a discourse model must include information about the intended effect of individual parts of the text on the hearer, as well as how the parts relate to one another rhetorically. We present a text planner that records this information and show how the resulting structure is used to respond appropriately to a follow-up question." ] }
cmp-lg9606025
2952364737
This paper presents an analysis conducted on a corpus of software instructions in French in order to establish whether task structure elements (the procedural representation of the users' tasks) are alone sufficient to control the grammatical resources of a text generator. We show that the construct of genre provides a useful additional source of control enabling us to resolve undetermined cases.
The results from our linguistic analysis are consistent with other research on sublanguages in the instructions domain, in both French and English, e.g., @cite_5 @cite_4 . Our analysis goes beyond previous work by identifying within the discourse context the means for exercising explicit control over a text generator.
{ "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "2047374384", "2016630033" ], "abstract": [ "This paper discusses an approach to planning the content of instructional texts. The research is based on a corpus study of 15 French procedural texts ranging from step-by-step device manuals to general artistic procedures. The approach taken starts from an AI task planner building a task representation, from which semantic carriers are selected. The most appropriate RST relations to communicate these carriers are then chosen according to heuristics developed during the corpus analysis.", "This paper describes a system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity taggers and morphological analyzers for an arbitrary foreign language. Case studies include French, Chinese, Czech and Spanish.Existing text analysis tools for English are applied to bilingual text corpora and their output projected onto the second language via statistically derived word alignments. Simple direct annotation projection is quite noisy, however, even with optimal alignments. Thus this paper presents noise-robust tagger, bracketer and lemmatizer training procedures capable of accurate system bootstrapping from noisy and incomplete initial projections.Performance of the induced stand-alone part-of-speech tagger applied to French achieves 96 core part-of-speech (POS) tag accuracy, and the corresponding induced noun-phrase bracketer exceeds 91 F-measure. The induced morphological analyzer achieves over 99 lemmatization accuracy on the complete French verbal system.This achievement is particularly noteworthy in that it required absolutely no hand-annotated training data in the given language, and virtually no language-specific knowledge or resources beyond raw text. Performance also significantly exceeds that obtained by direct annotation projection." ] }
cmp-lg9505006
1617827527
In previous work we studied a new type of DCGs, Datalog grammars, which are inspired on database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a meta-grammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
A notion that is central to recent work on ellipsis, and which has been present in embryonic form, as we have seen, even in the early work on coordination, is that of parallelism as a key element in the determination of implicit meanings. Asher @cite_3 defines parallelism as
{ "cite_N": [ "@cite_3" ], "mid": [ "2168671000" ], "abstract": [ "This paper reports on an empirically based system that automatically resolves VP ellipsis in the 644 examples identified in the parsed Penn Treebank. The results reported here represent the first systematic corpus-based study of VP ellipsis resolution, and the performance of the system is comparable to the best existing systems for pronoun resolution. The methodology and utilities described can be applied to other discourse-processing problems, such as other forms of ellipsis and anaphora resolution.The system determines potential antecedents for ellipsis by applying syntactic constraints, and these antecedents are ranked by combining structural and discourse preference factors such as recency, clausal relations, and parallelism. The system is evaluated by comparing its output to the choices of human coders. The system achieves a success rate of 94.8 , where success is defined as sharing of a head between the system choice and the coder choice, while a baseline recency-based scheme achieves a success rate of 75.0 by this measure. Other criteria for success are also examined. When success is defined as an exact, word-for-word match with the coder choice, the system performs with 76.0 accuracy, and the baseline approach achieves only 14.6 accuracy. Analysis of the individual components of the system shows that each of the structural and discourse constraints used are strong predictors of the antecedent of VP ellipsis." ] }
cmp-lg9505006
1617827527
In previous work we studied a new type of DCGs, Datalog grammars, which are inspired on database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a meta-grammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
a) neither method formulates exactly how parallelism is to be determined- it is just postulated as a prerequisite to the resolution of ellipsis (although @cite_6 speculates on possible ways of formulating this, leaving it for future work)
{ "cite_N": [ "@cite_6" ], "mid": [ "2168671000" ], "abstract": [ "This paper reports on an empirically based system that automatically resolves VP ellipsis in the 644 examples identified in the parsed Penn Treebank. The results reported here represent the first systematic corpus-based study of VP ellipsis resolution, and the performance of the system is comparable to the best existing systems for pronoun resolution. The methodology and utilities described can be applied to other discourse-processing problems, such as other forms of ellipsis and anaphora resolution.The system determines potential antecedents for ellipsis by applying syntactic constraints, and these antecedents are ranked by combining structural and discourse preference factors such as recency, clausal relations, and parallelism. The system is evaluated by comparing its output to the choices of human coders. The system achieves a success rate of 94.8 , where success is defined as sharing of a head between the system choice and the coder choice, while a baseline recency-based scheme achieves a success rate of 75.0 by this measure. Other criteria for success are also examined. When success is defined as an exact, word-for-word match with the coder choice, the system performs with 76.0 accuracy, and the baseline approach achieves only 14.6 accuracy. Analysis of the individual components of the system shows that each of the structural and discourse constraints used are strong predictors of the antecedent of VP ellipsis." ] }
cmp-lg9505006
1617827527
In previous work we studied a new type of DCGs, Datalog grammars, which are inspired on database theory. Their efficiency was shown to be better than that of their DCG counterparts under (terminating) OLDT-resolution. In this article we motivate a variant of Datalog grammars which allows us a meta-grammatical treatment of coordination. This treatment improves in some respects over previous work on coordination in logic grammars, although more research is needed for testing it in other respects.
By examining ellipsis in the context of coordinated structures, which are parallel by definition, and by using extended DLGs, we provide a method in which parallel structures are detected and resolved through syntactic and semantic criteria, and which can be applied to either grammars using different semantic representations- feature structure, @math -calculus, or other. We exemplify using a logic based semantics along the lines of @cite_8 .
{ "cite_N": [ "@cite_8" ], "mid": [ "2168671000" ], "abstract": [ "This paper reports on an empirically based system that automatically resolves VP ellipsis in the 644 examples identified in the parsed Penn Treebank. The results reported here represent the first systematic corpus-based study of VP ellipsis resolution, and the performance of the system is comparable to the best existing systems for pronoun resolution. The methodology and utilities described can be applied to other discourse-processing problems, such as other forms of ellipsis and anaphora resolution.The system determines potential antecedents for ellipsis by applying syntactic constraints, and these antecedents are ranked by combining structural and discourse preference factors such as recency, clausal relations, and parallelism. The system is evaluated by comparing its output to the choices of human coders. The system achieves a success rate of 94.8 , where success is defined as sharing of a head between the system choice and the coder choice, while a baseline recency-based scheme achieves a success rate of 75.0 by this measure. Other criteria for success are also examined. When success is defined as an exact, word-for-word match with the coder choice, the system performs with 76.0 accuracy, and the baseline approach achieves only 14.6 accuracy. Analysis of the individual components of the system shows that each of the structural and discourse constraints used are strong predictors of the antecedent of VP ellipsis." ] }
cmp-lg9505038
2952182676
Augmented reality is a research area that tries to embody an electronic information space within the real world, through computational devices. A crucial issue within this area, is the recognition of real world objects or situations. In natural language processing, it is much easier to determine interpretations of utterances, even if they are ill-formed, when the context or situation is fixed. We therefore introduce robust, natural language processing into a system of augmented reality with situation awareness. Based on this idea, we have developed a portable system, called the Ubiquitous Talker. This consists of an LCD display that reflects the scene at which a user is looking as if it is a transparent glass, a CCD camera for recognizing real world objects with color-bar ID codes, a microphone for recognizing a human voice and a speaker which outputs a synthesized voice. The Ubiquitous Talker provides its user with some information related to a recognized object, by using the display and voice. It also accepts requests or questions as voice inputs. The user feels as if he she is talking with the object itself through the system.
Ubiquitous computing @cite_4 proposes that very small computational devices (i.e., ubiquitous computers) be embedded and integrated into physical environments in such a way that they operate seamlessly and almost transparently. These devices are aware of their physical surroundings. In contrast to ubiquitous computers, our barcode (color-code) system is a low cost and reliable solution to making everything a computer. Suppose that every page in a book has a unique barcode. When the user opens a page, its page ID is detected by the system, so it can supply specific information regarding the page. When the user adds some information to the page, the system stores it with the page ID tagged for later retrieval. This is almost the same as having a computer in every page of the book without the cost. Our ID-aware system is better than ubiquitous computers from the viewpoint of reliability and cost-performance, since it does not require batteries and never breaks down.
{ "cite_N": [ "@cite_4" ], "mid": [ "1975697571" ], "abstract": [ "There is increased interest in the use of color barcodes to encode more information per area unit than regular, black-and-white barcodes. For example, Microsoft's HCCB technology uses 4 or 8 colors per patch. Unfortunately, the observed color of a surface depends as much on the illuminant spectrum (and other viewing parameters) as on the surface reflectivity, which complicates the task of decoding the content of the barcode. A popular solution is to append to the barcode a “palette” with the reference colors. In this paper, we propose a new approach to color barcode decoding, one that does not require a reference color palette. Our algorithm decodes groups of color bars at once, exploiting the fact that joint color changes can be represented by a low-dimensional space. Decoding a group of bars (a “barcode element”) is thus equivalent to searching for the nearest subspace in a dataset. We also propose algorithms to select subsets of barcode elements that can be decoded with low error probability. Our experimental results show that our barcode decoding algorithm enables substantial information rate increase with respect to system that display a color palette, at a very low decoding error rate." ] }

This is a copy of the Multi-XScience dataset in which the input source documents of the train, validation, and test splits have been replaced with documents retrieved by a dense retriever. The retrieval pipeline used the following components (a minimal sketch of the retrieval step is included right after this list):

  • query: The related_work field of each example
  • corpus: The union of all documents in the train, validation, and test splits
  • retriever: facebook/contriever-msmarco via PyTerrier with default settings
  • top-k strategy: "oracle", i.e., the number of documents retrieved, k, is set to the original number of input source documents for each example
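
For illustration, the retrieval step can be reproduced roughly as follows. This is a minimal sketch, not the exact PyTerrier pipeline used to build the dataset: it calls the facebook/contriever-msmarco checkpoint directly through transformers, and the corpus, query, and k values are hypothetical placeholders.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# NOTE: illustrative sketch only. The dataset was built with PyTerrier; this example
# calls the facebook/contriever-msmarco checkpoint directly via transformers instead,
# and the corpus, query, and k below are made-up placeholders.

tokenizer = AutoTokenizer.from_pretrained("facebook/contriever-msmarco")
model = AutoModel.from_pretrained("facebook/contriever-msmarco")
model.eval()

def mean_pooling(token_embeddings: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions (Contriever's pooling)."""
    token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.0)
    return token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]

def embed(texts: list[str]) -> torch.Tensor:
    """Encode a batch of texts into dense vectors."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return mean_pooling(outputs.last_hidden_state, inputs["attention_mask"])

# Placeholder corpus and query; in the real pipeline the corpus is the union of all
# reference abstracts across splits and the query is each example's related_work field.
corpus = ["abstract of paper A ...", "abstract of paper B ...", "abstract of paper C ..."]
query = "related work paragraph of one example ..."
k = 2  # "oracle" k: the number of source documents in the original example

corpus_emb = embed(corpus)                      # (num_docs, hidden)
query_emb = embed([query])                      # (1, hidden)
scores = (query_emb @ corpus_emb.T).squeeze(0)  # dot-product similarity per document
top = torch.topk(scores, k=min(k, len(corpus)))
retrieved = [corpus[i] for i in top.indices.tolist()]
print(retrieved)
```

In the actual pipeline, indexing and retrieval were run through PyTerrier with default settings, so scores and tie-breaking may differ slightly from this sketch.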

Retrieval results on the train set:

| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.5270 | 0.2005 | 0.2005 | 0.2005 |

Retrieval results on the validation set:

| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.5310 | 0.2026 | 0.2026 | 0.2026 |

Retrieval results on the test set:

| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.5229 | 0.2081 | 0.2081 | 0.2081 |
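
Note that because the oracle strategy sets k to the number of gold source documents for each example, Precision@k, Recall@k, and R-precision coincide by construction, which is why those three columns are identical in each table above. A small sketch of the per-example computation, with hypothetical document IDs, is shown below.

```python
def precision_recall_at_k(retrieved: list[str], relevant: list[str], k: int) -> tuple[float, float]:
    """Compute Precision@k and Recall@k for one example."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / k, hits / len(relevant)

# Hypothetical doc IDs for one example; in this dataset k == len(relevant) ("oracle" k),
# so the two values below are always equal, and both equal R-precision.
retrieved = ["d7", "d2", "d9", "d4"]
relevant = ["d2", "d4", "d5"]
k = len(relevant)
p, r = precision_recall_at_k(retrieved, relevant, k)
print(p, r)  # 0.333..., 0.333...
```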