Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: found
Source Datasets: original
Fields:
aid: string (length 9 to 15)
mid: string (length 7 to 10)
abstract: string (length 78 to 2.56k)
related_work: string (length 92 to 1.77k)
ref_abstract: sequence (cite_N, mid, abstract)
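As a minimal sketch of how records with this schema can be consumed, the snippet below iterates over a JSON Lines dump and unpacks each field. The file name "train.jsonl" is only a placeholder, and the snippet assumes ref_abstract is stored as parallel lists keyed by cite_N, mid, and abstract, as in the samples that follow.

```python
# Minimal sketch, assuming one JSON record per line in "train.jsonl" (placeholder name).
import json

def iter_records(path="train.jsonl"):
    """Yield one record dict per non-empty line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

for rec in iter_records():
    # Top-level string fields of the record.
    aid, mid = rec["aid"], rec["mid"]
    abstract = rec["abstract"]
    related_work = rec["related_work"]
    # ref_abstract holds lists of citation keys (@cite_N), MAG ids, and
    # abstracts of reference papers; zip truncates to the shortest list
    # if their lengths differ, as they do in some samples.
    refs = rec["ref_abstract"]
    for cite_n, ref_mid, ref_abs in zip(refs["cite_N"], refs["mid"], refs["abstract"]):
        print(aid, cite_n, ref_mid, ref_abs[:60])
    break  # inspect only the first record
```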
math9912167
1631980677
Author(s): Kuperberg, Greg; Thurston, Dylan P. | Abstract: We give a purely topological definition of the perturbative quantum invariants of links and 3-manifolds associated with Chern-Simons field theory. Our definition is as close as possible to one given by Kontsevich. We will also establish some basic properties of these invariants, in particular that they are universally finite type with respect to algebraically split surgery and with respect to Torelli surgery. Torelli surgery is a mutual generalization of blink surgery of Garoufalidis and Levine and clasper surgery of Habiro.
Two other generalizations that can be considered are invariants of graphs in 3-manifolds, and invariants associated to other flat connections @cite_16 . We will analyze these in future work. Among other things, there should be a general relation between flat bundles and links in 3-manifolds on the one hand and finite covers and branched covers on the other hand @cite_26 .
{ "cite_N": [ "@cite_16", "@cite_26" ], "mid": [ "2080743555", "1641082372", "1990891135", "2023137590" ], "abstract": [ "The aim of this paper is to construct new topological invariants of compact oriented 3-manifolds and of framed links in such manifolds. Our invariant of (a link in) a closed oriented 3-manifold is a sequence of complex numbers parametrized by complex roots of 1. For a framed link in S 3 the terms of the sequence are equale to the values of the (suitably parametrized) Jones polynomial of the link in the corresponding roots of 1. In the case of manifolds with boundary our invariant is a (sequence of) finite dimensional complex linear operators. This produces from each root of unity q a 3-dimensional topological quantum field theory", "Recently, Mullins calculated the Casson-Walker invariant of the 2-fold cyclic branched cover of an oriented link in S^3 in terms of its Jones polynomial and its signature, under the assumption that the 2-fold branched cover is a rational homology 3-sphere. Using elementary principles, we provide a similar calculation for the general case. In addition, we calculate the LMO invariant of the p-fold branched cover of twisted knots in S^3 in terms of the Kontsevich integral of the knot.", "We describe a correspondence between the Donaldson–Thomas invariants enumerating D0–D6 bound states on a Calabi–Yau 3-fold and certain Gromov–Witten invariants counting rational curves in a family of blowups of weighted projective planes. This is a variation on a correspondence found by Gross–Pandharipande, with D0–D6 bound states replacing representations of generalised Kronecker quivers. We build on a small part of the theories developed by Joyce–Song and Kontsevich–Soibelman for wall-crossing formulae and by Gross–Pandharipande–Siebert for factorisations in the tropical vertex group. Along the way we write down an explicit formula for the BPS state counts which arise up to rank 3 and prove their integrality. We also compare with previous “noncommutative DT invariants” computations in the physics literature.", "Motivated by S-duality modularity conjectures in string theory, we define new invariants counting a restricted class of two-dimensional torsion sheaves, enumerating pairs (Z H ) in a Calabi–Yau threefold (X ). Here (H ) is a member of a sufficiently positive linear system and (Z ) is a one-dimensional subscheme of it. The associated sheaf is the ideal sheaf of (Z H ), pushed forward to (X ) and considered as a certain Joyce–Song pair in the derived category of (X ). We express these invariants in terms of the MNOP invariants of (X )." ] }
cs9910011
2168463568
A statistical model for segmentation and word discovery in child-directed speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described, and results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.
Model Based Dynamic Programming, hereafter referred to as MBDP-1 @cite_0 , is probably the most recent work that addresses the exact same issue as that considered in this paper. Both the approach presented in this paper and Brent's MBDP-1 are based on explicit probability models. Approaches not based on explicit probability models include those based on information theoretic criteria such as MDL , transitional probability or simple recurrent networks . The maximum likelihood approach due to Olivier:SGL68 is probabilistic in the sense that it is geared towards explicitly calculating the most probable segmentation of each block of input utterances. However, it is not based on a formal statistical model. To avoid needless repetition, we only describe Brent's MBDP-1 below and direct the interested reader to Brent:EPS99, which provides an excellent review of many of the algorithms mentioned above.
{ "cite_N": [ "@cite_0" ], "mid": [ "2949560198", "2751901133", "2109910161", "2130913800" ], "abstract": [ "We initiate the probabilistic analysis of linear programming (LP) decoding of low-density parity-check (LDPC) codes. Specifically, we show that for a random LDPC code ensemble, the linear programming decoder of Feldman succeeds in correcting a constant fraction of errors with high probability. The fraction of correctable errors guaranteed by our analysis surpasses previous nonasymptotic results for LDPC codes, and in particular, exceeds the best previous finite-length result on LP decoding by a factor greater than ten. This improvement stems in part from our analysis of probabilistic bit-flipping channels, as opposed to adversarial channels. At the core of our analysis is a novel combinatorial characterization of LP decoding success, based on the notion of a flow on the Tanner graph of the code. An interesting by-product of our analysis is to establish the existence of ldquoprobabilistic expansionrdquo in random bipartite graphs, in which one requires only that almost every (as opposed to every) set of a certain size expands, for sets much larger than in the classical worst case setting.", "Recent compilers offer a vast number of multilayered optimizations targeting different code segments of an application. Choosing among these optimizations can significantly impact the performance of the code being optimized. The selection of the right set of compiler optimizations for a particular code segment is a very hard problem, but finding the best ordering of these optimizations adds further complexity. Finding the best ordering represents a long standing problem in compilation research, named the phase-ordering problem. The traditional approach of constructing compiler heuristics to solve this problem simply cannot cope with the enormous complexity of choosing the right ordering of optimizations for every code segment in an application. This article proposes an automatic optimization framework we call MiCOMP, which Mi tigates the Com piler P hase-ordering problem. We perform phase ordering of the optimizations in LLVM’s highest optimization level using optimization sub-sequences and machine learning. The idea is to cluster the optimization passes of LLVM’s O3 setting into different clusters to predict the speedup of a complete sequence of all the optimization clusters instead of having to deal with the ordering of more than 60 different individual optimizations. The predictive model uses (1) dynamic features, (2) an encoded version of the compiler sequence, and (3) an exploration heuristic to tackle the problem. Experimental results using the LLVM compiler framework and the Cbench suite show the effectiveness of the proposed clustering and encoding techniques to application-based reordering of passes, while using a number of predictive models. We perform statistical analysis on the results and compare against (1) random iterative compilation, (2) standard optimization levels, and (3) two recent prediction approaches. We show that MiCOMP’s iterative compilation using its sub-sequences can reach an average performance speedup of 1.31 (up to 1.51). Additionally, we demonstrate that MiCOMP’s prediction model outperforms the -O1, -O2, and -O3 optimization levels within using just a few predictions and reduces the prediction error rate down to only 5 . 
Overall, it achieves 90 of the available speedup by exploring less than 0.001 of the optimization space.", "Learning, planning, and representing knowledge at multiple levels of temporal ab- straction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforce- ment learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking ac- tion over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as mus- cle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning frame- work in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic pro- gramming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem.", "We present a data-driven, probabilistic trajectory optimization framework for systems with unknown dynamics, called Probabilistic Differential Dynamic Programming (PDDP). PDDP takes into account uncertainty explicitly for dynamics models using Gaussian processes (GPs). Based on the second-order local approximation of the value function, PDDP performs Dynamic Programming around a nominal trajectory in Gaussian belief spaces. Different from typical gradient-based policy search methods, PDDP does not require a policy parameterization and learns a locally optimal, time-varying control policy. We demonstrate the effectiveness and efficiency of the proposed algorithm using two nontrivial tasks. Compared with the classical DDP and a state-of-the-art GP-based policy search method, PDDP offers a superior combination of data-efficiency, learning speed, and applicability." ] }
cs9911003
2950670108
We solve the subgraph isomorphism problem in planar graphs in linear time, for any pattern of constant size. Our results are based on a technique of partitioning the planar graph into pieces of small tree-width, and applying dynamic programming within each piece. The same methods can be used to solve other planar graph problems including connectivity, diameter, girth, induced subgraph isomorphism, and shortest paths.
Recently, we were able to characterize the graphs that can occur at most @math times as a subgraph isomorph in an @math -vertex planar graph: they are exactly the 3-connected planar graphs @cite_41 . However, our proof does not lead to an efficient algorithm for 3-connected planar subgraph isomorphism. In this paper we use different techniques which do not depend on high-order connectivity.
{ "cite_N": [ "@cite_41" ], "mid": [ "1799262171", "1960457407", "2795050613", "2074992286" ], "abstract": [ "Given two graphs @math and @math , the Subgraph Isomorphism problem asks if @math is isomorphic to a subgraph of @math . While NP-hard in general, algorithms exist for various parameterized versions of the problem: for example, the problem can be solved (1) in time @math using the color-coding technique of Alon, Yuster, and Zwick; (2) in time @math using Courcelle's Theorem; (3) in time @math using a result on first-order model checking by Frick and Grohe; or (4) in time @math for connected @math using the algorithm of Matou s ek and Thomas. Already this small sample of results shows that the way an algorithm can depend on the parameters is highly nontrivial and subtle. We develop a framework involving 10 relevant parameters for each of @math and @math (such as treewidth, pathwidth, genus, maximum degree, number of vertices, number of components, etc.), and ask if an algorithm with running time [ f_1(p_1,p_2,..., p_ ) n^ f_2(p_ +1 ,..., p_k) ] exist, where each of @math is one of the 10 parameters depending only on @math or @math . We show that all the questions arising in this framework are answered by a set of 11 maximal positive results (algorithms) and a set of 17 maximal negative results (hardness proofs); some of these results already appear in the literature, while others are new in this paper. On the algorithmic side, our study reveals for example that an unexpected combination of bounded degree, genus, and feedback vertex set number of @math gives rise to a highly nontrivial algorithm for Subgraph Isomorphism. On the hardness side, we present W[1]-hardness proofs under extremely restricted conditions, such as when @math is a bounded-degree tree of constant pathwidth and @math is a planar graph of bounded pathwidth.", "The complexity of the subgraph isomorphism problem where the pattern graph is of fixed size is well known to depend on the topology of the pattern graph. For instance, the larger the maximum independent set of the pattern graph is the more efficient algorithms are known. The situation seems to be substantially different in the case of induced subgraph isomorphism for pattern graphs of fixed size. We present two results which provide evidence that no topology of an induced subgraph of fixed size can be easier to detect or count than an independent set of related size. We show that: Any fixed pattern graph that has a maximum independent set of size k that is disjoint from other maximum independent sets is not easier to detect as an induced subgraph than an independent set of size k. It follows in particular that an induced path on k vertices is not easier to detect than an independent set on ⌈k 2 ⌉ vertices, and that an induced even cycle on k vertices is not easier to detect than an independent set on k 2 vertices. In view of linear time upper bounds on induced paths of length three and four, our lower bound is tight. Similar corollaries hold for the detection of induced complete bipartite graphs and induced complete split graphs. For an arbitrary pattern graph H on k vertices with no isolated vertices, there is a simple subdivision of H, resulting from splitting each edge into a path of length four and attaching a distinct path of length three at each vertex of degree one, that is not easier to detect or count than an independent set on k vertices, respectively. 
Finally, we show that the so called diamond, paw and C 4 are not easier to detect as induced subgraphs than an independent set on three vertices.", "The subgraph isomorphism problem involves deciding whether a copy of a pattern graph occurs inside a larger target graph. The non-induced version allows extra edges in the target, whilst the induced version does not. Although both variants are NP-complete, algorithms inspired by constraint programming can operate comfortably on many real-world problem instances with thousands of vertices. However, they cannot handle arbitrary instances of this size. We show how to generate \" really hard \" random instances for subgraph isomorphism problems, which are computationally challenging with a couple of hundred vertices in the target, and only twenty pattern vertices. For the non-induced version of the problem, these instances lie on a satisfiable unsatisfiable phase transition, whose location we can predict; for the induced variant, much richer behaviour is observed, and constrained-ness gives a better measure of difficulty than does proximity to a phase transition. These results have practical consequences: we explain why the widely researched \" filter verify \" indexing technique used in graph databases is founded upon a misunderstanding of the empirical hardness of NP-complete problems, and cannot be beneficial when paired with any reasonable subgraph isomorphism algorithm.", "It is well known that any planar graph contains at most O(n) complete subgraphs. We extend this to an exact characterization: G occurs O(n) times as a subgraph of any planar graph, if and only if G is three-connected. We generalize these results to similarly characterize certain other minor-closed families of graphs; in particular, G occurs O(n) times as a subgraph of the Kb,c-free graphs, b ≥ c and c ≤ 4, iff G is c-connected. Our results use a simple Ramsey-theoretic lemma that may be of independent interest. © 1993 John Wiley & Sons, Inc." ] }
hep-th9908200
2160091034
Daviau showed the equivalence of matrix Dirac theory, formulated within a spinor bundle S_x ≃ C_x^4, to a Clifford algebraic formulation within the space Clifford algebra Cl(R^3, δ) ≃ M_2(C) ≃ P ≃ Pauli algebra (matrices) ≃ ℍ ⨁ ℍ ≃ biquaternions. We will show that Daviau's map θ : C^4 → M_2(C) is an isomorphism. It is shown that Hestenes' and Parra's formulations are equivalent to Daviau's Clifford algebra formulation, which uses outer automorphisms. The connection between the different formulations is quite remarkable, since it connects the left and right action on the Pauli algebra itself, viewed as a bi-module, with the left (resp. right) action of the enveloping algebra P^e ≃ P ⊗ P^T on P. The isomorphism established in this article and given by Daviau's map clearly shows that right and left actions are of a similar type. This should be compared with attempts of Hestenes, Daviau, and others to interpret the right action as the iso-spin freedom.
A further genuine and important approach to the spinor-tensor transition was developed, starting probably with Crawford, by P. Lounesto, @cite_6 and references therein. He investigated the question of how a spinor field can be reconstructed from known tensor densities. The major characterization is derived, using Fierz-Kofink identities, from elements called Boomerangs (because they are able to come back to the spinorial picture). Lounesto's result is a characterization of spinors based on multi-vector relations which unveils a new, previously unknown type of spinor.
{ "cite_N": [ "@cite_6" ], "mid": [ "2198155329", "2022437913", "2073656615", "1009009209" ], "abstract": [ "Patch-based low-rank models have shown effective in exploiting spatial redundancy of natural images especially for the application of image denoising. However, two-dimensional low-rank model can not fully exploit the spatio-temporal correlation in larger data sets such as multispectral images and 3D MRIs. In this work, we propose a novel low-rank tensor approximation framework with Laplacian Scale Mixture (LSM) modeling for multi-frame image denoising. First, similar 3D patches are grouped to form a tensor of d-order and high-order Singular Value Decomposition (HOSVD) is applied to the grouped tensor. Then the task of multiframe image denoising is formulated as a Maximum A Posterior (MAP) estimation problem with the LSM prior for tensor coefficients. Both unknown sparse coefficients and hidden LSM parameters can be efficiently estimated by the method of alternating optimization. Specifically, we have derived closed-form solutions for both subproblems. Experimental results on spectral and dynamic MRI images show that the proposed algorithm can better preserve the sharpness of important image structures and outperform several existing state-of-the-art multiframe denoising methods (e.g., BM4D and tensor dictionary learning).", "The symmetric tensor spherical harmonics (STSH’s) on the N‐sphere (SN), which are defined as the totally symmetric, traceless, and divergence‐free tensor eigenfunctions of the Laplace–Beltrami (LB) operator on SN, are studied. Specifically, their construction is shown recursively starting from the lower‐dimensional ones. The symmetric traceless tensors induced by STSH’s are introduced. These play a crucial role in the recursive construction of STSH’s. The normalization factors for STSH’s are determined by using their transformation properties under SO(N+1). Then the symmetric, traceless, and divergence‐free tensor eigenfunctions of the LB operator in the N‐dimensional de Sitter space‐time which are obtained by the analytic continuation of the STSH’s on SN are studied. Specifically, the allowed eigenvalues of the LB operator under the restriction of unitarity are determined. Our analysis gives a group‐theoretical explanation of the forbidden mass range observed earlier for the spin‐2 field theory in de Sit...", "In the present paper we analyze a class of tensor-structured preconditioners for the multidimensional second-order elliptic operators in ℝ d , d≥2. For equations in a bounded domain, the construction is based on the rank-R tensor-product approximation of the elliptic resolvent ℬ R ≈(ℒ−λ I)−1, where ℒ is the sum of univariate elliptic operators. We prove the explicit estimate on the tensor rank R that ensures the spectral equivalence. For equations in an unbounded domain, one can utilize the tensor-structured approximation of Green’s kernel for the shifted Laplacian in ℝ d , which is well developed in the case of nonoscillatory potentials. For the oscillating kernels e −i κ‖x‖ ‖x‖, x∈ℝ d , κ∈ℝ+, we give constructive proof of the rank-O(κ) separable approximation. This leads to the tensor representation for the discretized 3D Helmholtz kernel on an n×n×n grid that requires only O(κ |log e|2 n) reals for storage. 
Such representations can be applied to both the 3D volume and boundary calculations with sublinear cost O(n 2), even in the case κ=O(n).", "Abstract After a very brief recollection of how my scientific collaboration with Ugo started, in this talk I will present some recent results obtained with localization: the deformed gauge theory partition function Z ( τ → | q ) and the expectation value of circular Wilson loops W on a squashed four-sphere will be computed. The partition function is deformed by turning on τ J tr Φ J interactions with Φ the N = 2 superfield. For the N = 4 theory SUSY gauge theory exact formulae for Z and W in terms of an underlying U ( N ) interacting matrix model can be derived thus replacing the free Gaussian model describing the undeformed N = 4 theory. These results will be then compared with those obtained with the dual CFT according to the AGT correspondence. The interactions introduced previously are in fact related to the insertions of commuting integrals of motion in the four-point CFT correlator and the chiral correlators are expressed as τ -derivatives of the gauge theory partition function on a finite Ω -background." ] }
cs9903014
1612660921
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Pioneering research in dynamic runtime optimization was done by Hansen @cite_8 , who first described a fully automated system for runtime code optimization. His system was similar in structure to ours (it was composed of a loader, a profiler, and an optimizer) but used profiling data only to decide when to optimize and what to optimize, not how to optimize. Also, his system interpreted code prior to optimization, since load-time code generation was too memory- and time-consuming at the time.
{ "cite_N": [ "@cite_8" ], "mid": [ "2165006697", "2171438172", "2116210226", "2093922742" ], "abstract": [ "Optimizing programs at run-time provides opportunities to apply aggressive optimizations to programs based on information that was not available at compile time. At run time, programs can be adapted to better exploit architectural features, optimize the use of dynamic libraries, and simplify code based on run-time constants. Our profiling system provides a framework for collecting information required for performing run-time optimization. We sample the performance hardware registers available on an Itanium processor, and select a set of code that is likely to lead to important performance-events. We gather distribution information about the performance-events we wish to monitor, and test our traces by estimating the ability for dynamic patching of a program to execute run-time generated traces. Our results show that we are able to capture 58 of execution time across various SPEC2000 integer benchmarks using our profile and patching techniques on a relatively small number of frequently executed execution paths. Our profiling and detection system overhead increases execution time by only 2-4 .", "Traditional query optimizers assume accurate knowledge of run-time parameters such as selectivities and resource availability during plan optimization, i.e., at compile time. In reality, however, this assumption is often not justified. Therefore, the “static” plans produced by traditional optimizers may not be optimal for many of their actual run-time invocations. Instead, we propose a novel optimization model that assigns the bulk of the optimization effort to compile-time and delays carefully selected optimization decisions until run-time. Our previous work defined the run-time primitives, “dynamic plans” using “choose-plan” operators, for executing such delayed decisions, but did not solve the problem of constructing dynamic plans at compile-time. The present paper introduces techniques that solve this problem. Experience with a working prototype optimizer demonstrates (i) that the additional optimization and start-up overhead of dynamic plans compared to static plans is dominated by their advantage at run-time, (ii) that dynamic plans are as robust as the “brute-force” remedy of run-time optimization, i.e., dynamic plans maintain their optimality even if parameters change between compile-time and run-time, and (iii) that the start-up overhead of dynamic plans is significantly less than the time required for complete optimization at run-time. In other words, our proposed techniques are superior to both techniques considered to-date, namely compile-time optimization into a single static plan as well as run-time optimization. Finally, we believe that the concepts and technology described can be transferred to commercial query optimizers in order to improve the performance of embedded queries with host variables in the query predicate and to adapt to run-time system loads unpredictable at compile time.", "Compile-time optimization is often limited by a lack of target machine and input data set knowledge. Without this information, compilers may be forced to make conservative assumptions to preserve correctness and to avoid performance degradation. In order to cope with this lack of information at compile-time, adaptive and dynamic systems can be used to perform optimization at runtime when complete knowledge of input and machine parameters is available. 
This paper presents a compiler-supported high-level adaptive optimization system. Users describe, in a domain specific language, optimizations performed by stand-alone optimization tools and backend compiler flags, as well as heuristics for applying these optimizations dynamically at runtime. The ADAPT compiler reads these descriptions and generates application-specific runtime systems to apply the heuristics. To facilitate the usage of existing tools and compilers, overheads are minimized by decoupling optimization from execution. Our system, ADAPT, supports a range of paradigms proposed recently, including dynamic compilation, parameterization and runtime sampling. We demonstrate our system by applying several optimization techniques to a suite of benchmarks on two target machines. ADAPT is shown to consistently outperform statically generated executables, improving performance by as much as 70 .", "Many profilers based on bytecode instrumentation yield wrong results in the presence of an optimizing dynamic compiler, either due to not being aware of optimizations such as stack allocation and method inlining, or due to the inserted code disrupting such optimizations. To avoid such perturbations, we present a novel technique to make any profiler implemented at the bytecode level aware of optimizations performed by the dynamic compiler. We implement our approach in a state-of-the-art Java virtual machine and demonstrate its significance with concrete profilers. We quantify the impact of escape analysis on allocation profiling, object life-time analysis, and the impact of method inlining on callsite profiling. We illustrate how our approach enables new kinds of profilers, such as a profiler for non-inlined callsites, and a testing framework for locating performance bugs in dynamic compiler implementations." ] }
cs9903014
1612660921
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Hansen's work was followed by several other projects that have investigated the benefits of runtime optimization: the Smalltalk @cite_33 and SELF @cite_0 systems, which focused on the benefits of dynamic optimization in an object-oriented environment; Morph, a project developed at Harvard University @cite_16 ; and the system described by the authors of this paper @cite_4 @cite_30 . Other projects have experimented with optimization at link time rather than at runtime @cite_18 . At link time, many of the problems described in this paper are non-existent, among them the decisions of when to optimize, what to optimize, and how to replace code. However, there is also a price to pay, namely that link-time optimization cannot be performed in the presence of dynamic loading.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_4", "@cite_33", "@cite_0", "@cite_16" ], "mid": [ "2171438172", "2788459922", "2165006697", "2116210226" ], "abstract": [ "Traditional query optimizers assume accurate knowledge of run-time parameters such as selectivities and resource availability during plan optimization, i.e., at compile time. In reality, however, this assumption is often not justified. Therefore, the “static” plans produced by traditional optimizers may not be optimal for many of their actual run-time invocations. Instead, we propose a novel optimization model that assigns the bulk of the optimization effort to compile-time and delays carefully selected optimization decisions until run-time. Our previous work defined the run-time primitives, “dynamic plans” using “choose-plan” operators, for executing such delayed decisions, but did not solve the problem of constructing dynamic plans at compile-time. The present paper introduces techniques that solve this problem. Experience with a working prototype optimizer demonstrates (i) that the additional optimization and start-up overhead of dynamic plans compared to static plans is dominated by their advantage at run-time, (ii) that dynamic plans are as robust as the “brute-force” remedy of run-time optimization, i.e., dynamic plans maintain their optimality even if parameters change between compile-time and run-time, and (iii) that the start-up overhead of dynamic plans is significantly less than the time required for complete optimization at run-time. In other words, our proposed techniques are superior to both techniques considered to-date, namely compile-time optimization into a single static plan as well as run-time optimization. Finally, we believe that the concepts and technology described can be transferred to commercial query optimizers in order to improve the performance of embedded queries with host variables in the query predicate and to adapt to run-time system loads unpredictable at compile time.", "This paper presents the interesting observation that by performing fewer of the optimizations available in a standard compiler optimization level such as -02, while preserving their original ordering, significant savings can be achieved in both execution time and energy consumption. This observation has been validated on two embedded processors, namely the ARM Cortex-M0 and the ARM Cortex-M3, using two different versions of the LLVM compilation framework; v3.8 and v5.0. Experimental evaluation with 71 embedded benchmarks demonstrated performance gains for at least half of the benchmarks for both processors. An average execution time reduction of 2.4 and 5.3 was achieved across all the benchmarks for the Cortex-M0 and Cortex-M3 processors, respectively, with execution time improvements ranging from 1 up to 90 over the -02. The savings that can be achieved are in the same range as what can be achieved by the state-of-the-art compilation approaches that use iterative compilation or machine learning to select flags or to determine phase orderings that result in more efficient code. In contrast to these time consuming and expensive to apply techniques, our approach only needs to test a limited number of optimization configurations, less than 64, to obtain similar or even better savings. 
Furthermore, our approach can support multi-criteria optimization as it targets execution time, energy consumption and code size at the same time.", "Optimizing programs at run-time provides opportunities to apply aggressive optimizations to programs based on information that was not available at compile time. At run time, programs can be adapted to better exploit architectural features, optimize the use of dynamic libraries, and simplify code based on run-time constants. Our profiling system provides a framework for collecting information required for performing run-time optimization. We sample the performance hardware registers available on an Itanium processor, and select a set of code that is likely to lead to important performance-events. We gather distribution information about the performance-events we wish to monitor, and test our traces by estimating the ability for dynamic patching of a program to execute run-time generated traces. Our results show that we are able to capture 58 of execution time across various SPEC2000 integer benchmarks using our profile and patching techniques on a relatively small number of frequently executed execution paths. Our profiling and detection system overhead increases execution time by only 2-4 .", "Compile-time optimization is often limited by a lack of target machine and input data set knowledge. Without this information, compilers may be forced to make conservative assumptions to preserve correctness and to avoid performance degradation. In order to cope with this lack of information at compile-time, adaptive and dynamic systems can be used to perform optimization at runtime when complete knowledge of input and machine parameters is available. This paper presents a compiler-supported high-level adaptive optimization system. Users describe, in a domain specific language, optimizations performed by stand-alone optimization tools and backend compiler flags, as well as heuristics for applying these optimizations dynamically at runtime. The ADAPT compiler reads these descriptions and generates application-specific runtime systems to apply the heuristics. To facilitate the usage of existing tools and compilers, overheads are minimized by decoupling optimization from execution. Our system, ADAPT, supports a range of paradigms proposed recently, including dynamic compilation, parameterization and runtime sampling. We demonstrate our system by applying several optimization techniques to a suite of benchmarks on two target machines. ADAPT is shown to consistently outperform statically generated executables, improving performance by as much as 70 ." ] }
cs9903014
1612660921
We present an open architecture for just-in-time code generation and dynamic code optimization that is flexible, customizable, and extensible. While previous research has primarily investigated functional aspects of such a system, architectural aspects have so far remained unexplored. In this paper, we argue that these properties are important to generate optimal code for a variety of hardware architectures and different processor generations within processor families. These properties are also important to make system-level code generation useful in practice.
Common to the above-mentioned work is that the main focus has always been on functional aspects, that is, how to profile and which optimizations to perform. Related to this is research on how to boost application performance by combining profiling data and code optimizations at compile time (not at runtime), including work on method dispatch optimizations for object-oriented programming languages @cite_22 @cite_35 , profile-guided intermodular optimizations @cite_3 @cite_26 , code positioning techniques @cite_13 @cite_25 , and profile-guided data cache locality optimizations @cite_29 @cite_10 @cite_12 .
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_22", "@cite_10", "@cite_29", "@cite_3", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2165006697", "2093922742", "2130570838", "2115971347" ], "abstract": [ "Optimizing programs at run-time provides opportunities to apply aggressive optimizations to programs based on information that was not available at compile time. At run time, programs can be adapted to better exploit architectural features, optimize the use of dynamic libraries, and simplify code based on run-time constants. Our profiling system provides a framework for collecting information required for performing run-time optimization. We sample the performance hardware registers available on an Itanium processor, and select a set of code that is likely to lead to important performance-events. We gather distribution information about the performance-events we wish to monitor, and test our traces by estimating the ability for dynamic patching of a program to execute run-time generated traces. Our results show that we are able to capture 58 of execution time across various SPEC2000 integer benchmarks using our profile and patching techniques on a relatively small number of frequently executed execution paths. Our profiling and detection system overhead increases execution time by only 2-4 .", "Many profilers based on bytecode instrumentation yield wrong results in the presence of an optimizing dynamic compiler, either due to not being aware of optimizations such as stack allocation and method inlining, or due to the inserted code disrupting such optimizations. To avoid such perturbations, we present a novel technique to make any profiler implemented at the bytecode level aware of optimizations performed by the dynamic compiler. We implement our approach in a state-of-the-art Java virtual machine and demonstrate its significance with concrete profilers. We quantify the impact of escape analysis on allocation profiling, object life-time analysis, and the impact of method inlining on callsite profiling. We illustrate how our approach enables new kinds of profilers, such as a profiler for non-inlined callsites, and a testing framework for locating performance bugs in dynamic compiler implementations.", "Commercial applications such as databases and Web servers constitute the most important market segment for high-performance servers. Among these applications, on-line transaction processing (OLTP) workloads provide a challenging set of requirements for system designs since they often exhibit inefficient executions dominated by a large memory stall component. This behavior arises from large instruction and data footprints and high communication miss rates. A number of recent studies have characterized the behavior of commercial workloads and proposed architectural features to improve their performance. However, there has been little research on the impact of software and compiler-level optimizations for improving the behavior of such workloads. This paper provides a detailed study of profile-driven compiler optimizations to improve the code layout in commercial workloads with large instruction footprints. Our compiler algorithms are implemented in the context of Spike, an executable optimizer for the Alpha architecture. Our experiments use the Oracle commercial database engine running an OLTP workload, with results generated using both full system simulations and actual runs on Alpha multiprocessors. 
Our results show that code layout optimizations can provide a major improvement in the instruction cache behavior, providing a 55 to 65 reduction in the application misses for 64-128K caches. Our analysis shows that this improvement primarily arises from longer sequences of consecutively executed instructions and more reuse of cache lines before they are replaced. We also show that the majority of application instruction misses are caused by self-interference. However, code layout optimizations significantly reduce the amount of self-interference, thus elevating the relative importance of interference with operating system code. Finally, we show that better code layout can also provide substantial improvements in the behavior of other memory system components such as the instruction TLB and the unified second-level cache. The overall performance impact of our code layout optimizations is an improvement of 1.33 times in the execution time of our workload.", "Program profiles identify frequently executed portions of a program, which are the places at which optimizations offer programmers and compilers the greatest benefit. Compilers, however, infrequently exploit program profiles, because, profiling a program requires a programmer to instrument and run the program. An attractive alternative is for the complier to statically estimate program profiles. This paper presents several new techniques for static branch prediction and profiling. The first technique combines multiple predictions of a branch's outcome into a prediction of the probability that the branch is taken. Another technique uses these predictions to estimate the relative execution frequency (i.e., profile) of basic blocks and control-flow edges within a procedure. A third algorithm uses local frequency estimates to predict the global frequency of calls, procedure invocations, and basic block and control-flow edge executions. Experiments on the SPEC92 integer benchmarks and Unix applications show that the frequently executed blocks, edges, and functions identified by our techniques closely match those in a dynamic profile." ] }
cs9903018
1593496962
Scripting languages are becoming more and more important as a tool for software development, as they provide great flexibility for rapid prototyping and for configuring componentware applications. In this paper we present LuaJava, a scripting tool for Java. LuaJava adopts Lua, a dynamically typed interpreted language, as its script language. Great emphasis is given to the transparency of the integration between the two languages, so that objects from one language can be used inside the other like native objects. The final result of this integration is a tool that allows the construction of configurable Java applications, using off-the-shelf components, at a high level of abstraction.
For Tcl @cite_13 two integration solutions exist: the TclBlend binding @cite_11 and the Jacl implementation @cite_14 . TclBlend is a binding between Java and Tcl which, like LuaJava, allows Java objects to be manipulated by scripts. Some operations, such as access to fields and static method invocations, require specific functions. Calls to instance methods are handled naturally by Tcl commands.
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_11" ], "mid": [ "2162914120", "2118300983", "2147650421", "1984632619" ], "abstract": [ "This paper describes the motivations and strategies behind our group’s efforts to integrate the Tcl and Java programming languages. From the Java perspective, we wish to create a powerful scripting solution for Java applications and operating environments. From the Tcl perspective, we want to allow for cross-platform Tcl extensions and leverage the useful features and user community Java has to offer. We are specifically focusing on Java tasks like Java Bean manipulation, where a scripting solution is preferable to using straight Java code. Our goal is to create a synergy between Tcl and Java, similar to that of Visual Basic and Visual C++ on the Microsoft desktop, which makes both languages more powerful together than they are individually.", "We describe JastAdd, a Java-based system for compiler construction. JastAdd is centered around an object-oriented representation of the abstract syntax tree where reference variables can be used to link together different parts of the tree. JastAdd supports the combination of declarative techniques (using Reference Attributed Grammars) and imperative techniques (using ordinary Java code) in implementing the compiler. The behavior can be modularized into different aspects, e.g. name analysis, type checking, code generation, etc., that are woven together into classes using aspect-oriented programming techniques, providing a safer and more powerful alternative to the Visitor pattern. The JastAdd system is independent of the underlying parsing technology and supports any noncircular dependencies between computations, thereby allowing general multi-pass compilation. The attribute evaluator (optimal recursive evaluation) is implemented very conveniently using Java classes, interfaces, and virtual methods.", "We present the first verification of full functional correctness for a range of linked data structure implementations, including mutable lists, trees, graphs, and hash tables. Specifically, we present the use of the Jahob verification system to verify formal specifications, written in classical higher-order logic, that completely capture the desired behavior of the Java data structure implementations (with the exception of properties involving execution time and or memory consumption). Given that the desired correctness properties include intractable constructs such as quantifiers, transitive closure, and lambda abstraction, it is a challenge to successfully prove the generated verification conditions. Our Jahob verification system uses integrated reasoning to split each verification condition into a conjunction of simpler subformulas, then apply a diverse collection of specialized decision procedures, first-order theorem provers, and, in the worst case, interactive theorem provers to prove each subformula. Techniques such as replacing complex subformulas with stronger but simpler alternatives, exploiting structure inherently present in the verification conditions, and, when necessary, inserting verified lemmas and proof hints into the imperative source code make it possible to seamlessly integrate all of the specialized decision procedures and theorem provers into a single powerful integrated reasoning system. 
By appropriately applying multiple proof techniques to discharge different subformulas, this reasoning system can effectively prove the complex and challenging verification conditions that arise in this context.", "The integration of database and programming languages is dif- ficult due to the dierent data models and type systems prevalent in each field. We present a solution where the developer may express queries en- compassing program and database data. The notation used for queries is based on comprehensions, a declarative style that does not impose any specific execution strategy. In our approach, the type safety of language- integrated queries is analyzed at compile-time, followed by a translation that optimizes for database evaluation. We show the translation total and semantics preserving, and introduce a language-independent classifi- cation. According to this classification, our approach compares favorably with Microsoft's LINQ, today's best-known representative. We provide an implementation in terms of Scala compiler plugins, accepting two nota- tions for queries: LINQ and the native Scala syntax for comprehensions. The prototype relies on Ferry, a query language that already supports comprehensions yet targets SQL:1999. The reported techniques pave the way for further progress in bridging the programming and the database worlds." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter gives a self-contained treatment of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes, and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
The main objective of this chapter was to study the basic @math -branes that one encounters in M-theory, and to treat them in a unified way. The need for a unified treatment is inspired by U-duality @cite_22 @cite_86 @cite_144 , which states that, from the effective lower-dimensional space-time point of view, all the charges carried by the different branes are on the same footing. While string theory 'breaks' this U-duality symmetry, choosing the NSNS string to be the fundamental object of the perturbative theory, the supergravity low-energy effective theories realize the U-duality at the classical level.
{ "cite_N": [ "@cite_86", "@cite_22", "@cite_144" ], "mid": [ "1992572456", "2141847212", "2152342374", "2022574854" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "Abstract The effective action for type II string theory compactified on a six-torus is N = 8 supergravity, which is known to have an E7 duality symmetry. We show that this is broken by quantum effects to a discrete subgroup, E 7 ( Z ) , which contains both the T-duality group O(6, 6; Z ) and the S-duality group SL(2; Z ). We present evidence for the conjecture that E 7 ( Z ) is an exact ‘U-duality’ symmetry of type II string theory. This conjecture requires certain extreme black hole states to be identified with massive modes of the fundamental string. The gauge bosons from the Ramond-Ramond sector couple not to string excitations but to solitons. We discuss similar issues in the context of toroidal string compactifications to other dimensions, compactifications of the type II string on K3 × T2 and compactifications of 11-dimensional supermembrane theory.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "The recent discovery of an explicit conformal field theory description of Type II p-branes makes it possible to investigate the existence of bound states of such objects. In particular, it is possible with reasonable precision to verify the prediction that the Type IIB superstring in ten dimensions has a family of soliton and bound state strings permuted by SL(2,Z). The space-time coordinates enter tantalizingly in the formalism as non-commuting matrices." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter gives a self-contained treatment of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes, and how the four-dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
It should also not be underestimated that the derivation of the intersecting solutions presented in this chapter is a thorough consistency check of all the dualities acting on, and between, the supergravity theories. It is straightforward to check that, starting from one definite configuration, all its dual configurations are also found among the solutions presented here (with the exception of the solutions involving waves and KK monopoles). In this line of thought, we presented a recipe for building five- and four-dimensional extreme supersymmetric black holes. Some of these black holes were used in the literature to perform a microscopic counting of their entropy, as in @cite_191 @cite_61 for the 5-dimensional ones. Actually, the only (5-dimensional) black holes in the 'U-duality orbit' that were counted were the ones containing only D-branes and KK momentum. It is still an open problem to directly count the microscopic states of the same black hole but in a different M-theoretic formulation.
{ "cite_N": [ "@cite_191", "@cite_61" ], "mid": [ "2054280159", "1992572456", "2048444779", "2045285156" ], "abstract": [ "Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies.", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group.", "Abstract Strominger and Vafa have used D-brane technology to identify and precisely count the degenerate quantum states responsible for the entropy of certain extremal, BPS-saturated black holes. Here we give a Type-II D-brane description of a class of extremal and non-extremal five-dimensional Reissner-Nordstrom solutions and identify a corresponding set of degenerate D-brane configurations. 
We use this information to do a string theory calculation of the entropy, radiation rate and “Hawking” temperature. The results agree perfectly with standard Hawking results for the corresponding nearly extremal Reissner-Nordstrom black holes. Although these calculations suffer from open-string strong coupling problems, we give some reasons to believe that they are nonetheless qualitatively reliable. In this optimistic scenario there would be no “information loss” in black hole quantum evolution." ] }
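For reference, the microscopic entropy counting mentioned in the related-work paragraph above is usually illustrated by the five-dimensional D1-D5-P black hole. The display below is a standard textbook result quoted here only as an illustration; the symbols N_1, N_5 and N_P (numbers of D1-branes, D5-branes and units of KK momentum) are our own notation, not taken from this thesis.

```latex
% Statistical entropy of the D1-D5-P bound state versus the Bekenstein-Hawking
% area law of the corresponding five-dimensional extremal black hole
% (standard result, shown for illustration only).
S_{\mathrm{micro}} \;=\; 2\pi\sqrt{N_1\,N_5\,N_P}
\;=\; \frac{A_{\mathrm{horizon}}}{4\,G_5} \;=\; S_{\mathrm{BH}}
```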
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
Some of the intersection rules point towards an M-theory interpretation in terms of open branes ending on other branes. This idea will be elaborated and made firmer in the next chapter. It suffices to say here that this interpretation is consistent with dualities if we postulate that the 'open character' of a fundamental string ending on a D-brane is invariant under dualities. S-duality directly implies, for instance, that D-strings can end on NS5-branes @cite_176 . T-dualities then imply that all the D-branes can end on the NS5-brane. In particular, the fact that the D2-brane can end on the NS5-brane should imply that the M5-brane is a D-brane for the M2-branes @cite_176 @cite_143 @cite_38 (this could also be extrapolated from the fact that an F1-string ends on a D4-brane). In the next chapter we will see how these ideas are further supported by the presence of the Chern-Simons terms in the supergravities, and by the structure of the world-volume effective actions of the branes.
{ "cite_N": [ "@cite_143", "@cite_38", "@cite_176" ], "mid": [ "2086840642", "1992572456", "2054280159", "2152342374" ], "abstract": [ "Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n -form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. We then specialize to the branes occurring in type II string theories and in M-theory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the D-branes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5-brane.", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. 
This conclusion is in agreement with known dual descriptions of the system." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
In this chapter we presented only extremal configurations of intersecting branes. The natural further step would be to consider non-extremal configurations of intersecting branes as well. There is however a subtlety: there could be a difference between intersections of non-extremal branes and non-extremal intersections of otherwise extremal branes. If we focus on bound states (and thus not on configurations of well-separated branes), it appears that a non-extremal configuration would be characterized, for instance, by @math charges and by its mass. There is only one additional parameter with respect to the extremal configurations. Physically, we could hardly have expected to have, say, as many non-extremality parameters as the number of branes in the bound state. Indeed, non-extremality can roughly be associated with the branes being in an excited state, and it would thus have been very unlikely that the excitations did not mix among the various branes in the bound state. Non-extremal intersecting brane solutions were first found in @cite_48 , and were derived from the equations of motion, following an approach similar to the one used here, in @cite_81 @cite_16 .
{ "cite_N": [ "@cite_48", "@cite_81", "@cite_16" ], "mid": [ "2086840642", "2152342374", "2054280159", "1992572456" ], "abstract": [ "Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n -form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. We then specialize to the branes occurring in type II string theories and in M-theory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the D-branes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5-brane.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies.", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. 
T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
Supergravity solutions corresponding to D-branes at angles were found in @cite_31 @cite_59 @cite_178 . As expected, the resulting solutions contain off-diagonal elements in the internal metric, and the derivation from the equations of motion as in @cite_59 is accordingly rather intricate.
{ "cite_N": [ "@cite_31", "@cite_178", "@cite_59" ], "mid": [ "1992572456", "2048444779", "2054280159", "2086840642" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group.", "Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies.", "Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n -form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. 
We then specialize to the branes occurring in type II string theories and in M-theory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the D-branes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5-brane." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
Other half-supersymmetric bound states of this class are the @math multiplets of 1- and 5-branes in type IIB theory @cite_95 @cite_0 , or more precisely the configurations F1 @math D1 and NS5 @math D5, also called @math 1- and 5-branes, where @math is the NSNS charge and @math the RR charge of the compound. The classical solutions corresponding to this latter case were actually found more simply by performing an @math transformation on the F1 or NS5 solutions.
{ "cite_N": [ "@cite_0", "@cite_95" ], "mid": [ "1992572456", "2048444779", "2019049541", "2152342374" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group.", "Abstract By looking at fractional D p -branes of type IIA on T 4 Z 2 as wrapped branes and by using boundary state techniques we construct the effective low-energy action for the fields generated by fractional branes, build their worldvolume action and find the corresponding classical geometry. The explicit form of the classical background is consistent only outside an enhancon sphere of radius r e , which encloses a naked singularity of repulson-type. The perturbative running of the gauge coupling constant, dictated by the NS–NS twisted field that keeps its one-loop expression at any distance, also fails at r e .", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system." ] }
hep-th9807171
1774239421
The thesis begins with an introduction to M-theory (at a graduate student's level), starting from perturbative string theory and proceeding to dualities, D-branes and finally Matrix theory. The following chapter treats, in a self-contained way, of general classical p-brane solutions. Black and extremal branes are reviewed, along with their semi-classical thermodynamics. We then focus on intersecting extremal branes, the intersection rules being derived both with and without the explicit use of supersymmetry. The last three chapters comprise more advanced aspects of brane physics, such as the dynamics of open branes, the little theories on the world-volume of branes and how the four dimensional Schwarzschild black hole can be mapped to an extremal configuration of branes, thus allowing for a statistical interpretation of its entropy. The original results were already reported in hep-th/9701042, hep-th/9704190, hep-th/9710027 and hep-th/9801053.
In @cite_106 (inspired by @cite_54 ) a solution is presented which corresponds to an M5 @math M5=1 configuration and follows the harmonic superposition rule, provided however that the harmonic functions depend on the respective relative transverse spaces (i.e. they are functions of two different spaces). The problem now is that the harmonic functions do not depend on the overall transverse space (which is 1-dimensional in the case above), so the configuration is not localized there. A method actually inspired by the one presented here to derive the intersecting brane solutions has been applied in @cite_89 to intersections of this second kind. Imposing that the functions depend on the relative transverse space(s) (with factorized dependence) and not on the overall one, the authors of @cite_89 arrive at a formula for the intersections very similar to the intersection rules derived above, with @math on the l.h.s. This rule correctly reproduces the M5 @math M5=1 configuration, and moreover also all the configurations of two D-branes with 8 Neumann-Dirichlet (ND) directions, which preserve @math supersymmetries but were excluded from the intersecting solutions derived in this chapter (only the configurations with 4 ND directions were found as solutions). One such configuration is, e.g., D0 @math D8.
{ "cite_N": [ "@cite_54", "@cite_106", "@cite_89" ], "mid": [ "1992572456", "2086840642", "2048444779", "2054280159" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "Abstract We present a general rule determining how extremal branes can intersect in a configuration with zero binding energy. The rule is derived in a model independent way and in arbitrary spacetime dimensions D by solving the equations of motion of gravity coupled to a dilaton and several different n -form field strengths. The intersection rules are all compatible with supersymmetry, although derived without using it. We then specialize to the branes occurring in type II string theories and in M-theory. We show that the intersection rules are consistent with the picture that open branes can have boundaries on some other branes. In particular, all the D-branes of dimension q , with 1 ≤ q ≤ 6, can have boundaries on the solitonic 5-brane.", "We construct the most general supersymmetric configuration of @math -branes and @math -branes on a 6-torus. It contains arbitrary numbers of branes at relative @math angles. The corresponding supergravity solutions are constructed and expressed in a remarkably simple form, using the complex geometry of the compact space. The spacetime supersymmetry of the configuration is verified explicitly, by solution of the Killing spinor equations, and the equations of motion are verified too. Our configurations can be interpreted as a 16-parameter family of regular extremal black holes in four dimensions. Their entropy is interpreted microscopically by counting the degeneracy of bound states of @math -branes. Our result agrees in detail with the prediction for the degeneracy of BPS states in terms of the quartic invariant of the E(7,7) duality group.", "Abstract We present non-extreme generalisations of intersecting p -brane solutions of eleven-dimensional supergravity which upon toroidal compactification reduce to non-extreme static black holes in dimensions D = 4, D = 5 and 6 ⩽ D ⩽ 9, parameterised by four, three and two charges, respectively. The D = 4 black holes are obtained either from a non-extreme configuration of three intersecting five-branes with a boost along the common string or from a non-extreme intersecting system of two two-branes and two five-branes. 
The D = 5 black holes arise from three intersecting two-branes or from a system of an intersecting two-brane and five-brane with a boost along the common string. The five-brane and two-brane with a boost along one direction reduce to black holes in D = 6 and D = 9, respectively, while a D = 7 black hole can be interpreted in terms of a non-extreme configuration of two intersecting two-branes. We discuss the expressions for the corresponding masses and entropies." ] }
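To make the counting of Neumann-Dirichlet (ND) directions quoted in the related-work paragraph above concrete, here is a small worked check. This is our own illustration, not taken from the source: c denotes the number of spatial worldvolume directions shared by the two D-branes, both of which are assumed to share the time direction.

```latex
% Number of ND directions for a Dp-Dq pair sharing c spatial worldvolume directions
\#\mathrm{ND} \;=\; (p - c) + (q - c)
% D0 inside the D8 worldvolume:  p=0,\ q=8,\ c=0 \;\Rightarrow\; \#\mathrm{ND}=8   (the case cited above)
% D1 lying inside a D5:          p=1,\ q=5,\ c=1 \;\Rightarrow\; \#\mathrm{ND}=4   (a standard 1/4-BPS intersection)
```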
1903.05435
2921710190
In this work, we are interested in the applications of big data in the telecommunication domain, analysing two weeks of datasets provided by Telecom Italia for Milan and Trento. Our objective is to identify hotspots, which are places with very high communication traffic relative to others, and to measure the interaction between them. We model the hotspots as nodes in a graph and then apply node centrality metrics that quantify the importance of each node. We review five node centrality metrics and show that they can be divided into two families: the first family is composed of closeness and betweenness centrality, whereas the second family consists of degree, PageRank and eigenvector centrality. We then proceed with a statistical analysis in order to evaluate the consistency of the results over the two weeks. We find that the ranking of the hotspots under the various centrality metrics remains practically the same over time for both Milan and Trento. We further identify that the relative difference of the values of the metrics is smaller for PageRank centrality than for closeness centrality, and this holds for both Milan and Trento. Finally, our analysis reveals that the variance of the results is significantly smaller for Trento than for Milan.
Nowadays, telecom companies widely use big data in order to mine the behaviour of their customers, improve the quality of service that they provide and reduce customer churn. In this direction, demographic statistics, network deployments and call detail records (CDRs) are key factors that need to be carefully integrated in order to make accurate predictions. Although there are various open data sources for the first two factors, researchers rarely have access to traffic demand data, since it is sensitive information for the operators. Therefore, researchers need to rely on synthetic models, which do not always accurately capture large-scale mobile networks @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2556289220", "1993599520", "1571751884", "1994273294" ], "abstract": [ "In this study, with Singapore as an example, we demonstrate how we can use mobile phone call detail record (CDR) data, which contains millions of anonymous users, to extract individual mobility networks comparable to the activity-based approach. Such an approach is widely used in the transportation planning practice to develop urban micro simulations of individual daily activities and travel; yet it depends highly on detailed travel survey data to capture individual activity-based behavior. We provide an innovative data mining framework that synthesizes the state-of-the-art techniques in extracting mobility patterns from raw mobile phone CDR data, and design a pipeline that can translate the massive and passive mobile phone records to meaningful spatial human mobility patterns readily interpretable for urban and transportation planning purposes. With growing ubiquitous mobile sensing, and shrinking labor and fiscal resources in the public sector globally, the method presented in this research can be used as a low-cost alternative for transportation and planning agencies to understand the human activity patterns in cities, and provide targeted plans for future sustainable development.", "With billions of handsets in use worldwide, the quantity of mobility data is gigantic. When aggregated they can help understand complex processes, such as the spread viruses, and built better transportation systems, prevent traffic congestion. While the benefits provided by these datasets are indisputable, they unfortunately pose a considerable threat to location privacy. In this paper, we present a new anonymization scheme to release the spatio-temporal density of Paris, in France, i.e., the number of individuals in 989 different areas of the city released every hour over a whole week. The density is computed from a call-data-record (CDR) dataset, provided by the French Telecom operator Orange, containing the CDR of roughly 2 million users over one week. Our scheme is differential private, and hence, provides provable privacy guarantee to each individual in the dataset. Our main goal with this case study is to show that, even with large dimensional sensitive data, differential privacy can provide practical utility with meaningful privacy guarantee, if the anonymization scheme is carefully designed. This work is part of the national project XData (http: xdata.fr) that aims at combining large (anonymized) datasets provided by different service providers (telecom, electricity, water management, postal service, etc.).", "As technology to connect people across the world is advancing, there should be corresponding advancement in taking advantage of data that is generated out of such connection. To that end, next place prediction is an important problem for mobility data. In this paper we propose several models using dynamic Bayesian network (DBN). Idea behind development of these models come from typical daily mobility patterns a user have. Three features (location, day of the week (DoW), and time of the day (ToD)) and their combinations are used to develop these models. Knowing that not all models work well for all situations, we developed three combined models using least entropy, highest probability and ensemble. Extensive performance study is conducted to compare these models over two different mobility data sets: a CDR data and Nokia mobile data which is based on GPS. 
Results show that least entropy and highest probability DBNs perform the best.", "Continuous personal position information has been attracting attention in a variety of service and research areas. In recent years, many studies have applied the telecommunication histories of mobile phones (CDRs: call detail records) to position acquisition. Although large-scale and long-term data are accumulated from CDRs through everyday use of mobile phones, the spatial resolution of CDRs is lower than that of existing positioning technologies. Therefore, interpolating spatiotemporal positions of such sparse CDRs in accordance with human behavior models will facilitate services and researches. In this paper, we propose a new method to compensate for CDR drawbacks in tracking positions. We generate as many candidate routes as possible in the spatiotemporal domain using trip patterns interpolated using road and railway networks and select the most likely route from them. Trip patterns are feasible combinations between stay places that are detected from individual location histories in CDRs. The most likely route could be estimated through comparing candidate routes to observed CDRs during a target day. We also show the assessment of our method using CDRs and GPS logs obtained in the experimental survey." ] }
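The abstract above ranks traffic hotspots with five node-centrality metrics. As a minimal, illustrative sketch only (not the authors' code: the toy graph, node names and edge weights below are assumed stand-ins for the Telecom Italia grid data), the same metrics can be computed with networkx:

```python
# Toy hotspot graph: nodes are hotspots, weighted edges carry the measured
# interaction traffic between them (values here are made up for illustration).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("duomo", "station", 120.0),
    ("duomo", "stadium", 35.0),
    ("station", "stadium", 50.0),
    ("station", "fair", 80.0),
])

# First family: distance-based metrics (closeness, betweenness).  Note that
# networkx treats 'weight' as a distance here, so in practice one would map
# traffic volume to a dissimilarity first.
closeness = nx.closeness_centrality(G, distance="weight")
betweenness = nx.betweenness_centrality(G, weight="weight")

# Second family: degree- and spectrum-based metrics (degree, PageRank, eigenvector).
degree = nx.degree_centrality(G)
pagerank = nx.pagerank(G, weight="weight")
eigenvector = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)

for name, metric in [("closeness", closeness), ("pagerank", pagerank)]:
    print(name, sorted(metric, key=metric.get, reverse=True))
```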
1903.05435
2921710190
In this work, we are interested in the applications of big data in the telecommunication domain, analysing two weeks of datasets provided by Telecom Italia for Milan and Trento. Our objective is to identify hotspots, which are places with very high communication traffic relative to others, and to measure the interaction between them. We model the hotspots as nodes in a graph and then apply node centrality metrics that quantify the importance of each node. We review five node centrality metrics and show that they can be divided into two families: the first family is composed of closeness and betweenness centrality, whereas the second family consists of degree, PageRank and eigenvector centrality. We then proceed with a statistical analysis in order to evaluate the consistency of the results over the two weeks. We find that the ranking of the hotspots under the various centrality metrics remains practically the same over time for both Milan and Trento. We further identify that the relative difference of the values of the metrics is smaller for PageRank centrality than for closeness centrality, and this holds for both Milan and Trento. Finally, our analysis reveals that the variance of the results is significantly smaller for Trento than for Milan.
For example, the authors in @cite_4 analyse a heterogeneous cellular network which consists of different types of nodes, such as macrocells and microcells. A popular model is the one from Wyner @cite_0 , but it is too simplistic to fully capture a real heterogeneous cellular network. Another approach is to use the spatial Poisson point process (SPPP) model @cite_9 , which can be derived from the premise that all base stations are uniformly distributed. However, a city can be divided into different areas with different population densities, which can be characterised as dense urban, urban and suburban. To classify heterogeneous networks into these areas, the authors introduce SPPP models for homogeneous and inhomogeneous sets. They show that the SPPP model accurately captures both urban and suburban areas, whereas this is not the case for dense urban areas, because a considerable population is concentrated in small areas.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_4" ], "mid": [ "2005411736", "2076773434", "1994267277", "1659111644" ], "abstract": [ "In heterogeneous cellular networks spatial characteristics of base stations (BSs) influence the system performance intensively. Existing models like two-dimensional hexagonal grid model or homogeneous spatial poisson point process (SPPP) are based on the assumption that BSs are ideal or uniformly distributed, but the aggregation behavior of users in hot spots has an important effect on the location of low power nodes (LPNs), so these models fail to characterize the distribution of BSs in the current mobile cellular networks. In this paper, firstly existing spatial models are analyzed. Then, based on real data from a mobile operator in one large city of China, a set of spatial models is proposed in three typical regions: dense urban, urban and suburban. For dense urban area, “Two Tiers Poisson Cluster Superimposed Process” is proposed to model the spatial characteristics of real-world BSs. Specifically, for urban and suburban area, conventional SPPP model still can be used. Finally, the fundamental relationship between user behavior and BS distribution is illustrated and summarized. Numerous results show that SPPP is only appropriate in the urban and suburban regions where users are not gathered together obviously. Principal parameters of these models are provided as reference for the theoretical analysis and computer simulation, which describe the complex spatial configuration more reasonably and reflect the current mobile cellular network performance more precisely.", "We consider spatial stochastic models of downlink heterogeneous cellular networks (HCNs) with multiple tiers, where the base stations (BSs) of each tier have a particular spatial density, transmission power and path-loss exponent. Prior works on such spatial models of HCNs assume, due to its tractability, that the BSs are deployed according to homogeneous Poisson point processes. This means that the BSs are located independently of each other and their spatial correlation is ignored. In the current paper, we propose two spatial models for the analysis of downlink HCNs, in which the BSs are deployed according to @a-Ginibre point processes. The @a-Ginibre point processes constitute a class of determinantal point processes and account for the repulsion between the BSs. Besides, the degree of repulsion is adjustable according to the value of @[email protected]?(0,1]. In one proposed model, the BSs of different tiers are deployed according to mutually independent @a-Ginibre processes, where the @a can take different values for the different tiers. In the other model, all the BSs are deployed according to an @a-Ginibre point process and they are classified into multiple tiers by mutually independent marks. For these proposed models, we derive computable representations for the coverage probability of a typical user-the probability that the downlink signal-to-interference-plus-noise ratio for the typical user achieves a target threshold. We exhibit the results of some numerical experiments and compare the proposed models and the Poisson based model.", "The spatial structure of transmitters in wireless networks plays a key role in evaluating the mutual interference and hence the performance. Although the Poisson point process (PPP) has been widely used to model the spatial configuration of wireless networks, it is not suitable for networks with repulsion. 
The Ginibre point process (GPP) is one of the main examples of determinantal point processes that can be used to model random phenomena where repulsion is observed. Considering the accuracy, tractability and practicability tradeoffs, we introduce and promote the @math -GPP, an intermediate class between the PPP and the GPP, as a model for wireless networks when the nodes exhibit repulsion. To show that the model leads to analytically tractable results in several cases of interest, we derive the mean and variance of the interference using two different approaches: the Palm measure approach and the reduced second moment approach, and then provide approximations of the interference distribution by three known probability density functions. Besides, to show that the model is relevant for cellular systems, we derive the coverage probability of the typical user and also find that the fitted @math -GPP can closely model the deployment of actual base stations in terms of the coverage probability and other statistics.", "Although the Poisson point process (PPP) has been widely used to model base station (BS) locations in cellular networks, it is an idealized model that neglects the spatial correlation among BSs. This paper proposes the use of the determinantal point process (DPP) to take into account these correlations, in particular the repulsiveness among macro BS locations. DPPs are demonstrated to be analytically tractable by leveraging several unique computational properties. Specifically, we show that the empty space function, the nearest neighbor function, the mean interference, and the signal-to-interference ratio (SIR) distribution have explicit analytical representations and can be numerically evaluated for cellular networks with DPP-configured BSs. In addition, the modeling accuracy of DPPs is investigated by fitting three DPP models to real BS location data sets from two major U.S. cities. Using hypothesis testing for various performance metrics of interest, we show that these fitted DPPs are significantly more accurate than popular choices such as the PPP and the perturbed hexagonal grid model." ] }
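As a small illustration of the homogeneous SPPP model discussed above (our own sketch; the intensity and region size are assumed values), base-station locations can be simulated by drawing a Poisson number of points and placing them uniformly over the region:

```python
# Homogeneous spatial Poisson point process (SPPP) over a rectangular region:
# the number of base stations is Poisson(intensity * area) and, given that
# number, positions are i.i.d. uniform (the premise mentioned in the text).
import numpy as np

rng = np.random.default_rng(0)
intensity = 5.0                     # expected base stations per km^2 (assumed)
width_km, height_km = 4.0, 3.0      # region size (assumed)

n_stations = rng.poisson(intensity * width_km * height_km)
xs = rng.uniform(0.0, width_km, size=n_stations)
ys = rng.uniform(0.0, height_km, size=n_stations)

print(f"sampled {n_stations} base stations; first five positions:")
print(np.column_stack([xs, ys])[:5])
```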
1903.05355
2968491849
Learning the dynamics of robots from data can help achieve more accurate tracking controllers, or aid their navigation algorithms. However, when the actual dynamics of the robots change due to external conditions, on-line adaptation of their models is required to maintain high fidelity performance. In this work, a framework for on-line learning of robot dynamics is developed to adapt to such changes. The proposed framework employs an incremental support vector regression method to learn the model sequentially from data streams. In combination with the incremental learning, strategies for including and forgetting data are developed to obtain better generalization over the whole state space. The framework is tested in simulation and real experimental scenarios demonstrating its adaptation capabilities to changes in the robot’s dynamics.
In the field of marine robotics, @cite_10 used locally weighted projection regression to compensate for the mismatch between the physics-based model and the sensor readings of the AUV Nessie. Auto-regressive networks augmented with a genetic algorithm acting as a gating network were used to identify the model of a simulated AUV with variable mass. In a previous work @cite_16 , an on-line adaptation method was proposed to model the change in the damping forces resulting from a structural change of an AUV's mechanical structure. The algorithm showed good adaptation capability but was limited to modelling the damping effect of an AUV model. In this work we build upon the results of @cite_14 @cite_11 to provide a general framework for on-line learning of the fully coupled nonlinear dynamics of an AUV, and validate the proposed approach on simulated data as well as real robot data.
{ "cite_N": [ "@cite_14", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2060977605", "2106353989", "1938668859", "2022798939" ], "abstract": [ "This paper proposes a pose-based algorithm to solve the full Simultaneous Localization And Mapping (SLAM) problem for an Autonomous Underwater Vehicle (AUV), navigating in an unknown and possibly unstructured environment. A probabilistic scan matching technique using range scans gathered from a Mechanical Scanning Imaging Sonar (MSIS) is used together with the robot dead-reckoning displacements. The proposed method utilizes two Extended Kalman Filters (EKFs). The first, estimates the local path traveled by the robot while forming the scan as well as its uncertainty, providing position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented state EKF that estimates and keeps the registered scans poses. The raw data from the sensors are processed and fused in-line. No priory structural information or initial pose are considered. Also, a method of estimating the uncertainty of the scan matching estimation is provided. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.", "A full-scale adaptive ocean sampling network was deployed throughout the month-long 2006 Adaptive Sam- pling and Prediction (ASAP) field experiment in Monterey Bay, California. One of the central goals of the field experiment was to test and demonstrate newly developed techniques for coordinated motion control of au- tonomous vehicles carrying environmental sensors to efficiently sample the ocean. We describe the field results for the heterogeneous fleet of autonomous underwater gliders that collected data continuously throughout the month-long experiment. Six of these gliders were coordinated autonomously for 24 days straight using feed- back laws that scale with the number of vehicles. These feedback laws were systematically computed using recently developed methodology to produce desired collective motion patterns, tuned to the spatial and tem- poral scales in the sampled fields for the purpose of reducing statistical uncertainty in field estimates. The implementation was designed to allow for adaptation of coordinated sampling patterns using human-in-the- loop decision making, guided by optimization and prediction tools. The results demonstrate an innovative tool for ocean sampling and provide a proof of concept for an important field robotics endeavor that integrates coordinated motion control with adaptive sampling. C", "Robotic sampling is attractive in many field robotics applications that require persistent collection of physical samples for ex-situ analysis. Examples abound in the earth sciences in studies involving the collection of rock, soil, and water samples for laboratory analysis. In our test domain, marine ecosystem monitoring, detailed understanding of plankton ecology requires laboratory analysis of water samples, but predictions using physical and chemical properties measured in real-time by sensors aboard an autonomous underwater vehicle AUV can guide sample collection decisions. In this paper, we present a data-driven and opportunistic sampling strategy to minimize cumulative regret for batches of plankton samples acquired by an AUV over multiple surveys. Samples are labeled at the end of each survey, and used to update a probabilistic model that guides sampling during subsequent surveys. 
During a survey, the AUV makes irrevocable sample collection decisions online for a sequential stream of candidates, with no knowledge of the quality of future samples. In addition to extensive simulations using historical field data, we present results from a one-day field trial where beginning with a prior model learned from data collected and labeled in an earlier campaign, the AUV collected water samples with a high abundance of a pre-specified planktonic target. This is the first time such a field experiment has been carried out in its entirety in a data-driven fashion, in effect ?closing the loop? on a significant and relevant ecosystem monitoring problem while allowing domain experts marine ecologists to specify the mission at a relatively high level.", "Navigation is instrumental in the successful deployment of Autonomous Underwater Vehicles (AUVs). Sensor hardware is installed on AUVs to support navigational accuracy. Sensors, however, may fail during deployment, thereby jeopardizing the mission. This work proposes a solution, based on an adaptive dynamic model, to accurately predict the navigation of the AUV. A hydrodynamic model, derived from simple laws of physics, is integrated with a powerful non-parametric regression method. The incremental regression method, namely the Locally Weighted Projection Regression (LWPR), is used to compensate for un-modeled dynamics, as well as for possible changes in the operating conditions of the vehicle. The augmented hydrodynamic model is used within an Extended Kalman Filter, to provide optimal estimations of the AUV’s position and orientation. Experimental results demonstrate an overall improvement in the prediction of the vehicle’s acceleration and velocity." ] }
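To make the idea of sequential model updates with data inclusion and forgetting concrete, here is a minimal sketch. It is not the paper's incremental support vector regression: scikit-learn's SGDRegressor and a sliding window are used purely as stand-ins, and the data stream is synthetic.

```python
# On-line regression of one output of a robot dynamics model from a data stream,
# with a crude inclusion/forgetting strategy: every sample is included via a
# sequential update, and the model is periodically refit on a recent window only.
from collections import deque
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(1)
window = deque(maxlen=500)                       # "forgetting": keep recent data only
model = SGDRegressor(learning_rate="constant", eta0=1e-3)

def stream(n_samples=2000):
    for _ in range(n_samples):
        state = rng.uniform(-1.0, 1.0, size=6)   # e.g. body velocities + thruster inputs
        accel = 0.8 * state[0] - 0.3 * state[3] + 0.05 * rng.normal()
        yield state, accel                        # synthetic "measured" acceleration

for t, (x, y) in enumerate(stream()):
    window.append((x, y))
    model.partial_fit(np.atleast_2d(x), [y])      # include the new sample on-line
    if (t + 1) % 250 == 0:                        # forget: refit on the window only
        X_w = np.array([xi for xi, _ in window])
        y_w = np.array([yi for _, yi in window])
        model = SGDRegressor(learning_rate="constant", eta0=1e-3).fit(X_w, y_w)

X_w = np.array([xi for xi, _ in window])
y_w = np.array([yi for _, yi in window])
print("window MSE:", float(np.mean((model.predict(X_w) - y_w) ** 2)))
```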
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database with panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
In recent years, research on image matching has been influenced by developments in other areas of computer vision. Deep learning architectures have been developed both for image matching @cite_10 @cite_1 @cite_17 and for geopositioning @cite_13 @cite_5 @cite_18 , with attractive results.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_1", "@cite_5", "@cite_10", "@cite_17" ], "mid": [ "1762798876", "2607603241", "2606149788", "2964213755" ], "abstract": [ "We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk ( A comparison of affine region detectors, 2005), the MPI-Sintel ( A naturalistic open source movie for optical flow evaluation, 2012) and the Kitti ( Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it in the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.", "Finding matching images across large datasets plays a key role in many computer vision applications such as structure-from-motion (SfM), multi-view 3D reconstruction, image retrieval, and image-based localisation. In this paper, we propose finding matching and non-matching pairs of images by representing them with neural network based feature vectors, whose similarity is measured by Euclidean distance. The feature vectors are obtained with convolutional neural networks which are learnt from labeled examples of matching and non-matching image pairs by using a contrastive loss function in a Siamese network architecture. Previously Siamese architecture has been utilised in facial image verification and in matching local image patches, but not yet in generic image retrieval or whole-image matching. Our experimental results show that the proposed features improve matching performance compared to baseline features obtained with networks which are trained for image classification task. The features generalize well and improve matching of images of new landmarks which are not seen at training time. This is despite the fact that the labeling of matching and non-matching pairs is imperfect in our training data. The results are promising considering image retrieval applications, and there is potential for further improvement by utilising more training image pairs with more accurate ground truth labels.", "Despite significant progress of deep learning in recent years, state-of-the-art semantic matching methods still rely on legacy features such as SIFT or HoG. We argue that the strong invariance properties that are key to the success of recent deep architectures on the classification task make them unfit for dense correspondence tasks, unless a large amount of supervision is used. In this work, we propose a deep network, termed AnchorNet, that produces image representations that are well-suited for semantic matching. 
It relies on a set of filters whose response is geometrically consistent across different object instances, even in the presence of strong intra-class, scale, or viewpoint variations. Trained only with weak image-level labels, the final representation successfully captures information about the object structure and improves results of state-of-the-art semantic matching methods such as the deformable spatial pyramid or the proposal flow methods. We show positive results on the cross-instance matching task where different instances of the same object category are matched as well as on a new cross-category semantic matching task aligning pairs of instances each from a different object class.", "Despite significant progress of deep learning in recent years, state-of-the-art semantic matching methods still rely on legacy features such as SIFT or HoG. We argue that the strong invariance properties that are key to the success of recent deep architectures on the classification task make them unfit for dense correspondence tasks, unless a large amount of supervision is used. In this work, we propose a deep network, termed AnchorNet, that produces image representations that are well-suited for semantic matching. It relies on a set of filters whose response is geometrically consistent across different object instances, even in the presence of strong intra-class, scale, or viewpoint variations. Trained only with weak image-level labels, the final representation successfully captures information about the object structure and improves results of state-of-the-art semantic matching methods such as the Deformable Spatial Pyramid or the Proposal Flow methods. We show positive results on the cross-instance matching task where different instances of the same object category are matched as well as on a new cross-category semantic matching task aligning pairs of instances each from a different object class." ] }
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database of panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
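The memory-vector aggregation and recall@k matching described in this abstract can be illustrated with a small sketch. This is a minimal illustration, not the authors' implementation: the sum-and-normalize aggregation, the 512-dimensional descriptors, and the helper names (aggregate_memory_vector, recall_at_k) are assumptions made for demonstration.

```python
import numpy as np

def aggregate_memory_vector(descriptors):
    """Aggregate a set of global image descriptors into one memory vector
    by summing and L2-normalizing the result (a compact group representation)."""
    m = descriptors.sum(axis=0)
    return m / (np.linalg.norm(m) + 1e-12)

def recall_at_k(query_desc, db_vectors, db_labels, true_label, k=5):
    """Return 1 if an entry with the correct location label appears among the
    k database memory vectors most similar to the query (dot-product similarity)."""
    sims = db_vectors @ query_desc
    top_k = np.argsort(-sims)[:k]
    return int(any(db_labels[i] == true_label for i in top_k))

# Toy usage: 100 database locations, 4 panoramic descriptors each, 512-D features.
rng = np.random.default_rng(0)
db_vectors = np.stack([
    aggregate_memory_vector(rng.normal(size=(4, 512))) for _ in range(100)
])
db_labels = np.arange(100)
query = db_vectors[42] + 0.05 * rng.normal(size=512)   # noisy view of location 42
query /= np.linalg.norm(query)
print(recall_at_k(query, db_vectors, db_labels, true_label=42, k=5))
```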
Convolutional features extracted from the deep layers of CNNs have shown great utility when addressing image matching and retrieval problems. Babenko et al. @cite_10 employ pre-trained networks to generate descriptors based on high-level convolutional features, which are used for retrieving images of various landmarks. Sunderhauf et al. @cite_2 solve the problem of urban scene recognition, employing salient regions and convolutional features of local objects. This method is extended in @cite_8 , where additional spatial information is used to improve the algorithm's performance.
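As a rough sketch of the underlying idea of reusing pre-trained convolutional features as global descriptors, one possible extraction pipeline is shown below. The ResNet-50 backbone, the torchvision >= 0.13 weights API, and the global-average-pooled 2048-D descriptor are illustrative assumptions; the cited works use their own architectures and pooling schemes.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained backbone truncated before the classification head;
# the pooled last convolutional map yields a 2048-D global descriptor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def global_descriptor(image_path):
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feat = feature_extractor(img).flatten(1)           # shape: (1, 2048)
    return torch.nn.functional.normalize(feat, dim=1)[0]   # L2-normalized descriptor
```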
{ "cite_N": [ "@cite_8", "@cite_10", "@cite_2" ], "mid": [ "2289772031", "2749407104", "2258484932", "2161381512" ], "abstract": [ "Convolutional neural networks (CNNs) have recently achieved remarkable successes in various image classification and understanding tasks. The deep features obtained at the top fully connected layer of the CNN (FC-features) exhibit rich global semantic information and are extremely effective in image classification. On the other hand, the convolutional features in the middle layers of the CNN also contain meaningful local information, but are not fully explored for image representation. In this paper, we propose a novel locally supervised deep hybrid model (LS-DHM) that effectively enhances and explores the convolutional features for scene recognition. First, we notice that the convolutional features capture local objects and fine structures of scene images, which yield important cues for discriminating ambiguous scenes, whereas these features are significantly eliminated in the highly compressed FC representation. Second, we propose a new local convolutional supervision layer to enhance the local structure of the image by directly propagating the label information to the convolutional layers. Third, we propose an efficient Fisher convolutional vector (FCV) that successfully rescues the orderless mid-level semantic information (e.g., objects and textures) of scene image. The FCV encodes the large-sized convolutional maps into a fixed-length mid-level representation, and is demonstrated to be strongly complementary to the high-level FC-features. Finally, both the FCV and FC-features are collaboratively employed in the LS-DHM representation, which achieves outstanding performance in our experiments. It obtains 83.75 and 67.56 accuracies, respectively, on the heavily benchmarked MIT Indoor67 and SUN397 data sets, advancing the state-of-the-art substantially.", "In this paper, we present a robust method for scene recognition, which leverages Convolutional Neural Networks (CNNs) features and Sparse Coding setting by creating a new representation of indoor scenes. Although CNNs highly benefited the fields of computer vision and pattern recognition, convolutional layers adjust weights on a global-approach, which might lead to losing important local details such as objects and small structures. Our proposed scene representation relies on both: global features that mostly refers to environment’s structure, and local features that are sparsely combined to capture characteristics of common objects of a given scene. This new representation is based on fragments of the scene and leverages features extracted by CNNs. The experimental evaluation shows that the resulting representation outperforms previous scene recognition methods on Scene15 and MIT67 datasets, and performs competitively on SUN397, while being highly robust to perturbations in the input image such as noise and occlusion.", "Convolutional neural network (CNN) has achieved the state-of-the-art performance in many different visual tasks. Learned from a large-scale training data set, CNN features are much more discriminative and accurate than the handcrafted features. Moreover, CNN features are also transferable among different domains. On the other hand, traditional dictionary-based features (such as BoW and spatial pyramid matching) contain much more local discriminative and structural information, which is implicitly embedded in the images. 
To further improve the performance, in this paper, we propose to combine CNN with dictionary-based models for scene recognition and visual domain adaptation (DA). Specifically, based on the well-tuned CNN models (e.g., AlexNet and VGG Net), two dictionary-based representations are further constructed, namely, mid-level local representation (MLR) and convolutional Fisher vector (CFV) representation. In MLR, an efficient two-stage clustering method, i.e., weighted spatial and feature space spectral clustering on the parts of a single image followed by clustering all representative parts of all images, is used to generate a class-mixture or a class-specific part dictionary. After that, the part dictionary is used to operate with the multiscale image inputs for generating mid-level representation. In CFV, a multiscale and scale-proportional Gaussian mixture model training strategy is utilized to generate Fisher vectors based on the last convolutional layer of CNN. By integrating the complementary information of MLR, CFV, and the CNN features of the fully connected layer, the state-of-the-art performance can be achieved on scene recognition and DA problems. An interested finding is that our proposed hybrid representation (from VGG net trained on ImageNet) is also complementary to GoogLeNet and or VGG-11 (trained on Place205) greatly.", "Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The suc- cess of CNNs is attributed to their ability to learn rich mid- level image representations as opposed to hand-designed low-level features used in other image classification meth- ods. Learning CNNs, however, amounts to estimating mil- lions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be effi- ciently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred rep- resentation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization." ] }
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database of panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
The problem of geopositioning can be seen as a dedicated branch of image retrieval. In this case, the objective is to compute the extrinsic parameters (or coordinates) of the camera capturing the query image, based on matched georeferenced images from a database. Many different algorithms and neural network architectures attempt to identify the geographical location of a street-level query image. Lin et al. @cite_13 learn deep representations for matching aerial and ground images. Workman et al. @cite_18 use spatial features at multiple scales, fused with street-level features, to solve the geolocalization problem. In @cite_5 , a fully automated processing pipeline matches multi-view stereo (MVS) models to aerial images. This matching algorithm handles the viewpoint variance across aerial and street-level images.
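Cross-view approaches of this kind typically learn an embedding in which matching ground and aerial views lie close together while non-matching pairs are pushed apart. The following is a minimal contrastive-loss sketch of that idea; the margin value, embedding size, and random stand-in encoders are illustrative assumptions, not the architectures of the cited papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveLoss(nn.Module):
    """Pull matching ground/aerial embeddings together and push non-matching
    pairs at least `margin` apart (Hadsell-style contrastive loss)."""
    def __init__(self, margin=1.0):
        super().__init__()
        self.margin = margin

    def forward(self, z_ground, z_aerial, is_match):
        d = F.pairwise_distance(z_ground, z_aerial)
        pos = is_match * d.pow(2)
        neg = (1 - is_match) * F.relu(self.margin - d).pow(2)
        return (pos + neg).mean()

# Toy usage with random embeddings standing in for two view-specific encoders.
z_ground, z_aerial = torch.randn(8, 128), torch.randn(8, 128)
labels = torch.randint(0, 2, (8,)).float()   # 1 = same location, 0 = different
loss = ContrastiveLoss(margin=1.0)(z_ground, z_aerial, labels)
print(loss.item())
```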
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_13" ], "mid": [ "1946093182", "2479919622", "2199890863", "2949514689" ], "abstract": [ "The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.", "In this paper we aim to determine the location and orientation of a ground-level query image by matching to a reference database of overhead (e.g. satellite) images. For this task we collect a new dataset with one million pairs of street view and overhead images sampled from eleven U.S. cities. We explore several deep CNN architectures for cross-domain matching – Classification, Hybrid, Siamese, and Triplet networks. Classification and Hybrid architectures are accurate but slow since they allow only partial feature precomputation. We propose a new loss function which significantly improves the accuracy of Siamese and Triplet embedding networks while maintaining their applicability to large-scale retrieval tasks like image geolocalization. This image matching task is challenging not just because of the dramatic viewpoint difference between ground-level and overhead imagery but because the orientation (i.e. azimuth) of the street views is unknown making correspondence even more difficult. We examine several mechanisms to match in spite of this – training for rotation invariance, sampling possible rotations at query time, and explicitly predicting relative rotation of ground and overhead images with our deep networks. It turns out that explicit orientation supervision also improves location prediction accuracy. Our best performing architectures are roughly 2.5 times as accurate as the commonly used Siamese network baseline.", "We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. 
To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales.", "We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales." ] }
1903.05454
2950587559
In this work, we present a camera geopositioning system based on matching a query image against a database of panoramic images. For matching, our system uses memory vectors aggregated from global image descriptors based on convolutional features to facilitate fast searching in the database. To speed up searching, a clustering algorithm is used to balance geographical positioning and computation time. We refine the obtained position from the query image using a new outlier removal algorithm. The matching of the query image is obtained with a recall@5 larger than 90% for panorama-to-panorama matching. We cluster available panoramas from geographically adjacent locations into a single compact representation and observe computational gains of approximately 50% at the cost of only a small (approximately 3%) recall loss. Finally, we present a coordinate estimation algorithm that reduces the median geopositioning error by up to 20%.
A common factor in the above work is that it requires either a combination of aerial and street-level images for geopositioning, or extensive training on specific datasets. Neither case generalizes easily. In our approach, we utilize only georeferenced, street-level panoramic images and a pre-trained CNN combined with image matching techniques for coordinate estimation. This avoids lengthy training and labeling procedures and assumes only that street-level data are available, without requiring aerial images. Furthermore, and unlike @cite_1 , we do not assume that our query and database images originate from the same imaging devices.
{ "cite_N": [ "@cite_1" ], "mid": [ "1946093182", "2479919622", "1984093092", "2199890863" ], "abstract": [ "The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.", "In this paper we aim to determine the location and orientation of a ground-level query image by matching to a reference database of overhead (e.g. satellite) images. For this task we collect a new dataset with one million pairs of street view and overhead images sampled from eleven U.S. cities. We explore several deep CNN architectures for cross-domain matching – Classification, Hybrid, Siamese, and Triplet networks. Classification and Hybrid architectures are accurate but slow since they allow only partial feature precomputation. We propose a new loss function which significantly improves the accuracy of Siamese and Triplet embedding networks while maintaining their applicability to large-scale retrieval tasks like image geolocalization. This image matching task is challenging not just because of the dramatic viewpoint difference between ground-level and overhead imagery but because the orientation (i.e. azimuth) of the street views is unknown making correspondence even more difficult. We examine several mechanisms to match in spite of this – training for rotation invariance, sampling possible rotations at query time, and explicitly predicting relative rotation of ground and overhead images with our deep networks. It turns out that explicit orientation supervision also improves location prediction accuracy. Our best performing architectures are roughly 2.5 times as accurate as the commonly used Siamese network baseline.", "We present a new method for the robust detection and matching of multiple planes in pairs of images. Such planes can serve as stable landmarks for vision-based urban navigation. Our approach starts from SIFT matches and generates multiple local homography hypotheses using the recent J-linkage technique by Toldo and Fusiello, a robust randomized multi-model estimation algorithm. These hypotheses are then globally merged, spatially analyzed, robustly fitted, and checked for stability. 
When tested on more than 30,000 image pairs taken from panoramic views of a college campus, our method yields no false positives and recovers 72 of the matchable building walls identified by a human, despite significant occlusions and viewpoint changes.", "We propose to use deep convolutional neural networks to address the problem of cross-view image geolocalization, in which the geolocation of a ground-level query image is estimated by matching to georeferenced aerial images. We use state-of-the-art feature representations for ground-level images and introduce a cross-view training approach for learning a joint semantic feature representation for aerial images. We also propose a network architecture that fuses features extracted from aerial images at multiple spatial scales. To support training these networks, we introduce a massive database that contains pairs of aerial and ground-level images from across the United States. Our methods significantly out-perform the state of the art on two benchmark datasets. We also show, qualitatively, that the proposed feature representations are discriminative at both local and continental spatial scales." ] }
1903.05524
2972959293
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is, controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights into the important connection between network robustness and strong structural controllability in such networks.
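The Kirchhoff index used here as the robustness measure can be computed from the Laplacian spectrum: for a connected graph with n nodes and nonzero Laplacian eigenvalues lambda_2, ..., lambda_n, it equals n times the sum of their reciprocals. The sketch below, using networkx and numpy, illustrates this standard formula; it is not the authors' code.

```python
import networkx as nx
import numpy as np

def kirchhoff_index(G):
    """Kirchhoff index (total effective resistance) of a connected graph:
    Kf = n * sum of reciprocals of the nonzero Laplacian eigenvalues."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    eigvals = np.sort(np.linalg.eigvalsh(L))
    nonzero = eigvals[1:]                 # drop the single zero eigenvalue
    return G.number_of_nodes() * np.sum(1.0 / nonzero)

# A path graph is fragile (large Kf); closing it into a cycle lowers Kf,
# i.e. increases robustness to noise and structural changes.
print(kirchhoff_index(nx.path_graph(6)), kirchhoff_index(nx.cycle_graph(6)))
```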
The Kirchhoff index, or equivalently effective-graph-resistance-based measures, has been instrumental in quantifying the effect of noise on the expected steady-state dispersion in linear dynamical networks, particularly in those with consensus dynamics; for instance, see @cite_23 @cite_8 @cite_22 . Furthermore, limits on robustness measures that quantify the expected steady-state dispersion due to external stochastic disturbances in linear dynamical networks are studied in @cite_9 @cite_10 . To maximize network robustness by minimizing the Kirchhoff index, various optimization approaches (e.g., @cite_26 @cite_1 ), including graph-theoretic ones @cite_0 , have been proposed. The main objective there is to determine the crucial edges that need to be added or maintained to maximize robustness under given constraints @cite_11 .
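A naive way to see how "crucial edges" can be selected for robustness is a brute-force greedy search that repeatedly adds the non-edge giving the largest decrease in the Kirchhoff index. The sketch below is only an illustration under that assumption; the cited optimization approaches use far more efficient convex and graph-theoretic formulations.

```python
import itertools
import networkx as nx
import numpy as np

def kf(G):
    """Kirchhoff index from the Laplacian spectrum (as in the previous sketch)."""
    lam = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float)))
    return G.number_of_nodes() * np.sum(1.0 / lam[1:])

def greedy_edge_addition(G, budget):
    """Add `budget` edges, each time choosing the non-edge that most reduces
    the Kirchhoff index (exhaustive evaluation of all candidates)."""
    G = G.copy()
    for _ in range(budget):
        candidates = [e for e in itertools.combinations(G.nodes, 2)
                      if not G.has_edge(*e)]
        if not candidates:
            break
        def score(e):
            G.add_edge(*e)
            val = kf(G)
            G.remove_edge(*e)
            return val
        G.add_edge(*min(candidates, key=score))
    return G

G = greedy_edge_addition(nx.path_graph(8), budget=2)
print(sorted(G.edges), kf(G))
```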
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_9", "@cite_1", "@cite_0", "@cite_23", "@cite_10", "@cite_11" ], "mid": [ "2247236687", "1980652450", "2206841333", "2962884567" ], "abstract": [ "We investigate the (generalized) Walsh decomposition of point-to-point effective resistances on countable random electric networks with i.i.d. resistances. We show that it is concentrated on low levels, and thus point-to-point effective resistances are uniformly stable to noise. For graphs that satisfy some homogeneity property, we show in addition that it is concentrated on sets of small diameter. As a consequence, we compute the right order of the variance and prove a central limit theorem for the effective resistance through the discrete torus of side length n in Zd, when n goes to infinity.", "This paper considers the inverse problem with observed variables Y = BGX ⊕Z, where BG is the incidence matrix of a graph G, X is the vector of unknown vertex variables with a uniform prior, and Z is a noise vector with Bernoulli(e) i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. Without noise, exact recovery of X is possible if and only the graph G is connected, with a sharp threshold at the edge probability log(n) n for Erdős-Renyi random graphs. The first goal of this paper is to determine how the edge probability p needs to scale to allow exact recovery in the presence of noise. Defining the degree (oversampling) rate of the graph by α = np log(n), it is shown that exact recovery is possible if and only if α > 2 (1−2e)+o(1 (1−2e)). In other words, 2 (1−2e) is the information theoretic threshold for exact recovery at lowSNR. In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. Full version available in [1].", "Given a background graph with n vertices, the binary censored block model assumes that vertices are partitioned into two clusters, and every edge is labeled independently at random with labels drawn from Bern(1 − e) if two endpoints are in the same cluster, or from Bern(e) otherwise, where e ∈ [0; 1 2] is a fixed constant. For Erdős-Renyi graphs with edge probability p = a log n n and fixed a, we show that the semidefinite programming relaxation of the maximum likelihood estimator achieves the optimal threshold equation for exactly recovering the partition from the labeled graph with probability tending to one as n → ∞. For random regular graphs with degree scaling as a log n, we show that the semidefinite programming relaxation also achieves the optimal recovery threshold aD(Bern(1 2)∥Bern(e)) > 1, where D denotes the Kullback-Leibler divergence.", "We study a graph-theoretic property known as robustness, which plays a key role in the behavior of certain classes of dynamics on networks (such as resilient consensus and contagion). This property is much stronger than other graph properties such as connectivity and minimum degree, in that one can construct graphs with high connectivity and minimum degree but low robustness. In this paper, we investigate the robustness of common random graph models for complex networks (Erdős-Renyi, geometric random, and preferential attachment graphs). 
We show that the notions of connectivity and robustness coincide on these random graph models: the properties share the same threshold function in the Erdős-Renyi model, cannot be very different in the geometric random graph model, and are equivalent in the preferential attachment model. This indicates that a variety of purely local diffusion dynamics will be effective at spreading information in such networks." ] }
1903.05524
2972959293
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is, controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights into the important connection between network robustness and strong structural controllability in such networks.
To quantify controllability, several approaches have been adopted, including determining the minimum number of inputs (leader nodes) needed to (structurally or strong structurally) control a network, determining the worst-case control energy, metrics based on controllability Gramians, and so on (e.g., see @cite_7 @cite_5 ). Strong structural controllability, due to its independence of coupling weights between nodes, is a generalized notion of controllability with practical implications. There have been recent studies providing graph-theoretic characterizations of this concept @cite_20 @cite_13 @cite_17 . There are numerous other studies regarding leader selection to optimize network performance measures under various constraints, such as minimizing the deviation from consensus in a noisy environment @cite_4 @cite_2 , and maximizing various controllability measures, for instance @cite_15 @cite_18 @cite_25 @cite_14 . Recently, optimization methods have also been presented that select leader nodes by exploiting submodularity properties of performance measures for network robustness and structural controllability @cite_5 @cite_3 .
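Several of these metrics are functions of the controllability Gramian of a leader-follower network. Below is a small sketch that builds a finite-horizon Gramian for discretized Laplacian dynamics with a chosen leader set and uses its smallest eigenvalue as a worst-case-energy proxy; the discretization step, horizon, and leader choice are illustrative assumptions rather than any cited method.

```python
import networkx as nx
import numpy as np

def finite_horizon_gramian(G, leaders, eps=0.1, horizon=50):
    """Finite-horizon controllability Gramian for the discretized dynamics
    x[k+1] = (I - eps*L) x[k] + B u[k], where B selects the leader nodes.
    A small smallest eigenvalue indicates directions that are costly to steer."""
    n = G.number_of_nodes()
    L = nx.laplacian_matrix(G).toarray().astype(float)
    A = np.eye(n) - eps * L
    B = np.zeros((n, len(leaders)))
    for j, v in enumerate(leaders):
        B[v, j] = 1.0
    W, Ak = np.zeros((n, n)), np.eye(n)
    for _ in range(horizon):
        W += Ak @ B @ B.T @ Ak.T
        Ak = A @ Ak
    return W

W = finite_horizon_gramian(nx.path_graph(6), leaders=[0])  # single leader at one end
print(np.linalg.eigvalsh(W).min())   # worst-case control-energy proxy
```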
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_7", "@cite_3", "@cite_2", "@cite_5", "@cite_15", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2111725629", "2315383458", "2736121037", "1938602245" ], "abstract": [ "This paper studies the problem of controlling complex networks, i.e., the joint problem of selecting a set of control nodes and of designing a control input to steer a network to a target state. For this problem, 1) we propose a metric to quantify the difficulty of the control problem as a function of the required control energy, 2) we derive bounds based on the system dynamics (network topology and weights) to characterize the tradeoff between the control energy and the number of control nodes, and 3) we propose an open-loop control strategy with performance guarantees. In our strategy, we select control nodes by relying on network partitioning, and we design the control input by leveraging optimal and distributed control techniques. Our findings show several control limitations and properties. For instance, for Schur stable and symmetric networks: 1) if the number of control nodes is constant, then the control energy increases exponentially with the number of network nodes; 2) if the number of control nodes is a fixed fraction of the network nodes, then certain networks can be controlled with constant energy independently of the network dimension; and 3) clustered networks may be easier to control because, for sufficiently many control nodes, the control energy depends only on the controllability properties of the clusters and on their coupling strength. We validate our results with examples from power networks, social networks and epidemics spreading.", "In this technical note, we study the controllability of diffusively coupled networks from a graph theoretic perspective. We consider leader-follower networks, where the external control inputs are injected to only some of the agents, namely the leaders. Our main result relates the controllability of such systems to the graph distances between the agents. More specifically, we present a graph topological lower bound on the rank of the controllability matrix. This lower bound is tight, and it is applicable to systems with arbitrary network topologies, coupling weights, and number of leaders. An algorithm for computing the lower bound is also provided. Furthermore, as a prominent application, we present how the proposed bound can be utilized to select a minimal set of leaders for achieving controllability, even when the coupling weights are unknown.", "This paper investigates the robustness of strong structural controllability for linear time-invariant directed networked systems with respect to structural perturbations, including edge additions and deletions. In this regard, an algorithm is presented that is initiated by endowing each node of a network with a successive set of integers. Using this algorithm, a new notion of perfect graphs associated with a network is introduced, and tight upper bounds on the number of edges that can be added to, or removed from a network, while ensuring strong structural controllability, are derived. 
Moreover, we obtain a characterization of critical edges with respect to edge additions and deletions; these sets are the maximal sets of edges whose any subset can be respectively added to, or removed from a network, while preserving strong structural controllability.", "Controllability and observability have long been recognized as fundamental structural properties of dynamical systems, but have recently seen renewed interest in the context of large, complex networks of dynamical systems. A basic problem is sensor and actuator placement: choose a subset from a finite set of possible placements to optimize some real-valued controllability and observability metrics of the network. Surprisingly little is known about the structure of such combinatorial optimization problems. In this paper, we show that several important classes of metrics based on the controllability and observability Gramians have a strong structural property that allows for either efficient global optimization or an approximation guarantee by using a simple greedy heuristic for their maximization. In particular, the mapping from possible placements to several scalar functions of the associated Gramian is either a modular or submodular set function. The results are illustrated on randomly generated systems and on a problem of power-electronic actuator placement in a model of the European power grid." ] }
1903.05524
2972959293
In this paper, we study the relationship between two crucial properties in linear dynamical networks of diffusively coupled agents, that is, controllability and robustness to noise and structural changes in the network. In particular, for any given network size and diameter, we identify networks that are maximally robust and then analyze their strong structural controllability. We do so by determining the minimum number of leaders to make such networks completely controllable with arbitrary coupling weights between agents. Similarly, we design networks with the same given parameters that are completely controllable independent of coupling weights through a minimum number of leaders, and then also analyze their robustness. We utilize the notion of Kirchhoff index to measure network robustness to noise and structural changes. Our controllability analysis is based on novel graph-theoretic methods that offer insights into the important connection between network robustness and strong structural controllability in such networks.
Very recently, the trade-off between controllability and fragility in complex networks was investigated in @cite_21 . Fragility measures the smallest perturbation in edge weights needed to make the network unstable. The authors of @cite_21 show that networks requiring small control energy, as measured by the eigenvalues of the controllability Gramian, to drive from one state to another are more fragile, and vice versa. In our work, for control performance, we consider the minimum number of leaders for strong structural controllability, which is independent of coupling weights; for robustness, we utilize the Kirchhoff index, which measures robustness to noise as well as to structural changes in the underlying network graph. Moreover, in this work we focus on designing and comparing extremal networks for these properties. The rest of the paper is organized as follows: Section describes preliminaries and network dynamics. Section explains the measures for robustness and controllability, and also outlines the main problems. Section presents maximally robust networks for a given @math and @math , and also analyzes their controllability. Section provides a design of maximally controllable networks and also evaluates their robustness. Finally, Section concludes the paper.
{ "cite_N": [ "@cite_21" ], "mid": [ "2887109490", "1938602245", "2736121037", "2256710691" ], "abstract": [ "Mathematical theories and empirical evidence suggest that several complex natural and man-made systems are fragile: as their size increases, arbitrarily small and localized alterations of the system parameters may trigger system-wide failures. Examples are abundant, from perturbation of the population densities leading to extinction of species in ecological networks [1], to structural changes in metabolic networks preventing reactions [2], cascading failures in power networks [3], and the onset of epileptic seizures following alterations of structural connectivity among populations of neurons [4]. While fragility of these systems has long been recognized [5], convincing theories of why natural evolution or technological advance has failed, or avoided, to enhance robustness in complex systems are still lacking. In this paper we propose a mechanistic explanation of this phenomenon. We show that a fundamental tradeoff exists between fragility of a complex network and its controllability degree, that is, the control energy needed to drive the network state to a desirable state. We provide analytical and numerical evidence that easily controllable networks are fragile, suggesting that natural and man-made systems can either be resilient to parameters perturbation or efficient to adapt their state in response to external excitations and controls.", "Controllability and observability have long been recognized as fundamental structural properties of dynamical systems, but have recently seen renewed interest in the context of large, complex networks of dynamical systems. A basic problem is sensor and actuator placement: choose a subset from a finite set of possible placements to optimize some real-valued controllability and observability metrics of the network. Surprisingly little is known about the structure of such combinatorial optimization problems. In this paper, we show that several important classes of metrics based on the controllability and observability Gramians have a strong structural property that allows for either efficient global optimization or an approximation guarantee by using a simple greedy heuristic for their maximization. In particular, the mapping from possible placements to several scalar functions of the associated Gramian is either a modular or submodular set function. The results are illustrated on randomly generated systems and on a problem of power-electronic actuator placement in a model of the European power grid.", "This paper investigates the robustness of strong structural controllability for linear time-invariant directed networked systems with respect to structural perturbations, including edge additions and deletions. In this regard, an algorithm is presented that is initiated by endowing each node of a network with a successive set of integers. Using this algorithm, a new notion of perfect graphs associated with a network is introduced, and tight upper bounds on the number of edges that can be added to, or removed from a network, while ensuring strong structural controllability, are derived. Moreover, we obtain a characterization of critical edges with respect to edge additions and deletions; these sets are the maximal sets of edges whose any subset can be respectively added to, or removed from a network, while preserving strong structural controllability.", "In this paper, we consider a robust network control problem. 
We consider linear unstable and uncertain discrete time plants with a network between the sensor and controller and the controller and plant. We investigate the effect of data drop out in the form of packet losses. Four distinct control schemes are explored and sufficient conditions to ensure almost sure stability of the closed loop system are derived for each of them in terms of minimum packet arrival rate and the maximum uncertainty. I. INTRODUCTION In the past decade, networked control systems (NCS) have gained much attention from both the control com- munity and the network and communication community. When compared with classical feedback control system, networked control systems have several advantages. For example, they can reduce the system wiring, make the system easy to operate and maintain and later diagnose in case of malfunctioning, and increase system agility (20). Although NCS have advantages, inserting a network in between the plant and the controller introduces many problems as well. For instance, zero-delayed sensing and actuation, perfect information and synchronization are no longer guaranteed in the new system architecture as only finite bandwidth is available and data packet drops and delays may occur due to network traffic conditions. These must be revisited and analyzed before networked control systems become prevalent. Recently, many researchers have spent effort on these issues and some significant results were obtained and many are in progress. Many of the aforementioned issues are studied separately. Tatikonda (19) and Sahai (13) have presented some interesting results in the area of control under communication constraints. Specifically, Tatikonda gave a necessary and sufficient condition on the channel data rate such that a noiseless LTI system in the closed loop is asymptotically stable. He also gave rate results for stabilizing a noisy LTI system over a digital channel. Sahai proposed the notion of anytime capacity to deal with real time estimation and control for a networked control system. In our paper (17), the authors have considered various rate issues under finite bandwidth, packet drops and finite controls. An optimal bit allocation scheme is given in (16) under the networked setting. The effect of pacekt drops on state estimation was studied by Sinopoli, et. al. in (3). It has further been investigated by many researchers including the present authors in (15) and (6)." ] }
cmp-lg9408015
2952406681
Effective problem solving among multiple agents requires a better understanding of the role of communication in collaboration. In this paper we show that there are communicative strategies that greatly improve the performance of resource-bounded agents, but that these strategies are highly sensitive to the task requirements, situation parameters and agents' resource limitations. We base our argument on two sources of evidence: (1) an analysis of a corpus of 55 problem solving dialogues, and (2) experimental simulations of collaborative problem solving dialogues in an experimental world, Design-World, where we parameterize task requirements, agents' resources and communicative strategies.
Design-World is also based on the method used in Carletta's JAM simulation for the Edinburgh Map-Task @cite_10 . JAM is based on the Map-Task Dialogue corpus, where the goal of the task is for the planning agent, the instructor, to instruct the reactive agent, the instructee, how to get from one place to another on the map. JAM focuses on efficient strategies for recovery from error and parametrizes agents according to their communicative and error-recovery strategies. Given good error-recovery strategies, Carletta argues that 'high risk' strategies are more efficient, where efficiency is a measure of the number of utterances in the dialogue. While the focus here is different, we have shown that the number of utterances is just one parameter for evaluating performance, and that the task definition determines when strategies are effective.
{ "cite_N": [ "@cite_10" ], "mid": [ "2949964922", "2118142207", "2126810476", "2062244797" ], "abstract": [ "Building a dialogue agent to fulfill complex tasks, such as travel planning, is challenging because the agent has to learn to collectively complete multiple subtasks. For example, the agent needs to reserve a hotel and book a flight so that there leaves enough time for commute between arrival and hotel check-in. This paper addresses this challenge by formulating the task in the mathematical framework of options over Markov Decision Processes (MDPs), and proposing a hierarchical deep reinforcement learning approach to learning a dialogue manager that operates at different temporal scales. The dialogue manager consists of: (1) a top-level dialogue policy that selects among subtasks or options, (2) a low-level dialogue policy that selects primitive actions to complete the subtask given by the top-level policy, and (3) a global state tracker that helps ensure all cross-subtask constraints be satisfied. Experiments on a travel planning task with simulated and real users show that our approach leads to significant improvements over three baselines, two based on handcrafted rules and the other based on flat deep reinforcement learning.", "This paper describes a corpus of unscripted, task-oriented dialogues which has been designed, digitally recorded, and transcribed to support the study of spontaneous speech on many levels. The corpus uses the Map Task (Brown, Anderson, Yule, and Shillcock, 1983) in which speakers must collaborate verbally to reproduce on one participant's map a route printed on the other's. In all, the corpus includes four conversations from each of 64 young adults and manipulates the following variables: familiarity of speakers, eye contact between speakers, matching between landmarks on the participants' maps, opportunities for contrastive stress, and phonological characteristics of landmark names. The motivations for the design are set out and basic corpus statistics are presented.", "Human-machine dialogue is heavily influenced by speech recognition and understanding errors and it is hence desirable to train and test statistical dialogue system policies under realistic noise conditions. This paper presents a novel approach to error simulation based on statistical models for word-level utterance generation, ASR confusions, and confidence score generation. While the method explicitly models the context-dependent acoustic confusability of words and allows the system specific language model and semantic decoder to be incorporated, it is computationally inexpensive and thus potentially suitable for running thousands of training simulations. Experimental evaluation results with a POMDP-based dialogue system and the Hidden Agenda User Simulator indicate a close match between the statistical properties of real and synthetic errors.", "This paper describes a novel method by which a spoken dialogue system can learn to choose an optimal dialogue strategy from its experience interacting with human users. The method is based on a combination of reinforcement learning and performance modeling of spoken dialogue systems. The reinforcement learning component applies Q-learning (Watkins, 1989), while the performance modeling component applies the PARADISE evaluation framework (, 1997) to learn the performance function (reward) used in reinforcement learning. 
We illustrate the method with a spoken dialogue system named elvis (EmaiL Voice Interactive System), that supports access to email over the phone. We conduct a set of experiments for training an optimal dialogue strategy on a corpus of 219 dialogues in which human users interact with elvis over the phone. We then test that strategy on a corpus of 18 dialogues. We show that elvis can learn to optimize its strategy selection for agent initiative, for reading messages, and for summarizing email folders." ] }
cs9907027
2949190809
The aim of the Alma project is the design of a strongly typed constraint programming language that combines the advantages of logic and imperative programming. The first stage of the project was the design and implementation of Alma-0, a small programming language that provides a support for declarative programming within the imperative programming framework. It is obtained by extending a subset of Modula-2 by a small number of features inspired by the logic programming paradigm. In this paper we discuss the rationale for the design of Alma-0, the benefits of the resulting hybrid programming framework, and the current work on adding constraint processing capabilities to the language. In particular, we discuss the role of the logical and customary variables, the interaction between the constraint store and the program, and the need for lists.
We concentrate here on related work involving the addition of constraints to imperative languages. For an overview of related work pertaining to the language itself, we refer the reader to @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2034373223", "2137628566", "2168617729", "2067273780" ], "abstract": [ "We present a new approach to adding state and state-changing commands to a term language. As a formal semantics it can be seen as a generalization of predicate transformer semantics, but beyond that it brings additional opportunities for specifying and verifying programs. It is based on a construct called a phrase, which is a term of the form C r t, where C stands for a command and t stands for a term of any type. If R is boolean, C r R is closely related to the weakest precondition wp(C,R). The new theory draws together functional and imperative programming in a simple way. In particular, imperative procedures and functions are seen to be governed by the same laws as classical functions. We get new techniques for reasoning about programs, including the ability to dispense with logical variables and their attendant complexities. The theory covers both programming and specification languages, and supports unbounded demonic and angelic nondeterminacy in both commands and terms.", "In joint work with Peter O'Hearn and others, based on early ideas of Burstall, we have developed an extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure. The simple imperative programming language is extended with commands (not expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a \"separating conjunction\" that asserts that its subformulas hold for disjoint parts of the heap, and a closely related \"separating implication\". Coupled with the inductive definition of predicates on abstract data structures, this extension permits the concise and flexible description of structures with controlled sharing. In this paper, we survey the current development of this program logic, including extensions that permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures. We also discuss promising future directions.", "We present a unified environment for running declarative specifications in the context of an imperative object-Oriented programming language. Specifications are Alloy-like, written in first-order relational logic with transitive closure, and the imperative language is Java. By being able to mix imperative code with executable declarative specifications, the user can easily express constraint problems in place, i.e., in terms of the existing data structures and objects on the heap. After a solution is found, the heap is updated to reflect the solution, so the user can continue to manipulate the program heap in the usual imperative way. We show that this approach is not only convenient, but, for certain problems can also outperform a standard imperative implementation. We also present an optimization technique that allowed us to run our tool on heaps with almost 2000 objects.", "This paper describes the design, implementation, and applications of the constraint logic language cc(FD). cc(FD) is a declarative nondeterministic constraint logic language over finite domains based on the cc framework [33], an extension of the Constraint Logic Programming (CLP) scheme [21]. Its constraint solver includes (nonlinear) arithmetic constraints over natural numbers which are approximated using domain and interval consistency. 
The main novelty of cc (FD) is the inclusion of a number of general-purpose combinators, in particular cardinality, constructive disjunction, and blocking implication, in conjunction with new constraint operations such as constraint entailment and generalization. These combinators significantly improve the operational expressiveness, extensibility, and flexibility of CLP languages and allow issues such as the definition of nonprimitive constraints and disjunctions to be tackled at the language level. The implementation of cc (FD) (about 40,000 lines of C) includes a WAM-based engine [44], optimal are-consistency algorithms based on AC-5 [40], and incremental implementation of the combinators. Results on numerous problems, including scheduling, resource allocation, sequencing, packing, and hamiltonian paths are reported and indicate that cc(FD) comes close to procedural languages on a number of combinatorial problems. In addition, a small cc(FD) program was able to find the optimal solution and prove optimality to a famous 10 10 disjunctive scheduling problem [29], which was left open for more than 20 years and finally solved in 1986. (C) 1998 Elsevier Science Inc. All rights reserved." ] }
cmp-lg9709007
1578881253
Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.
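The Rocchio training approach mentioned above builds, for each category, a prototype vector from positive and negative training documents in the Vector Space Model, and classification then ranks categories by cosine similarity to these prototypes. The sketch below uses the standard Rocchio formulation with illustrative beta and gamma weights; the exact parameterization and the WordNet integration of the paper are not reproduced here.

```python
import numpy as np

def rocchio_prototype(pos_docs, neg_docs, beta=16.0, gamma=4.0):
    """Rocchio prototype: beta * centroid of in-category documents minus
    gamma * centroid of out-of-category documents, with negative weights clipped to 0."""
    proto = beta * pos_docs.mean(axis=0) - gamma * neg_docs.mean(axis=0)
    return np.maximum(proto, 0.0)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Toy term-frequency vectors over a 5-term vocabulary.
pos = np.array([[2, 1, 0, 0, 1], [3, 0, 1, 0, 0]], dtype=float)   # in-category docs
neg = np.array([[0, 0, 2, 3, 0], [0, 1, 1, 2, 0]], dtype=float)   # out-of-category docs
prototype = rocchio_prototype(pos, neg)
query_doc = np.array([1, 1, 0, 0, 0], dtype=float)
print(cosine(query_doc, prototype))   # high similarity suggests assigning the category
```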
To our knowledge, lexical databases have been used only once before in TC, apart from our previous work. Hearst @cite_10 adapted a disambiguation algorithm by Yarowsky, using WordNet to recognize category occurrences. Categories are made of WordNet terms, which is not the general case for standard or user-defined categories. It is a hard task to adapt WordNet subsets to pre-existing categories, especially when they are domain dependent. Hearst's approach has shown promising results, which are confirmed by our previous work @cite_23 and by the present results.
{ "cite_N": [ "@cite_10", "@cite_23" ], "mid": [ "2038721957", "1578881253", "2144108169", "2962984063" ], "abstract": [ "Part 1 The lexical database: nouns in WordNet, George A. Miller modifiers in WordNet, Katherine J. Miller a semantic network of English verbs, Christiane Fellbaum design and implementation of the WordNet lexical database and searching software, Randee I. Tengi. Part 2: automated discovery of WordNet relations, Marti A. Hearst representing verb alterations in WordNet, Karen T. the formalization of WordNet by methods of relational concept analysis, Uta E. Priss. Part 3 Applications of WordNet: building semantic concordances, Shari performance and confidence in a semantic annotation task, Christiane WordNet and class-based probabilities, Philip Resnik combining local context and WordNet similarity for word sense identification, Claudia Leacock and Martin Chodorow using WordNet for text retrieval, Ellen M. Voorhees lexical chains as representations of context for the detection and correction of malapropisms, Graeme Hirst and David St-Onge temporal indexing through lexical chaining, Reem Al-Halimi and Rick Kazman COLOR-X - using knowledge from WordNet for conceptual modelling, J.F.M. Burg and R.P. van de Riet knowledge processing on an extended WordNet, Sanda M. Harabagiu and Dan I Moldovan appendix - obtaining and using WordNet.", "Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.", "We propose a novel algorithm for inducing semantic taxonomies. Previous algorithms for taxonomy induction have typically focused on independent classifiers for discovering new single relationships based on hand-constructed or automatically discovered textual patterns. By contrast, our algorithm flexibly incorporates evidence from multiple classifiers over heterogenous relationships to optimize the entire structure of the taxonomy, using knowledge of a word's coordinate terms to help in determining its hypernyms, and vice versa. We apply our algorithm on the problem of sense-disambiguated noun hyponym acquisition, where we combine the predictions of hypernym and coordinate term classifiers with the knowledge in a preexisting semantic taxonomy (WordNet 2.1). We add 10,000 novel synsets to WordNet 2.1 at 84 precision, a relative error reduction of 70 over a non-joint algorithm using the same component classifiers. Finally, we show that a taxonomy built using our algorithm shows a 23 relative F-score improvement over WordNet 2.1 on an independent testset of hypernym pairs.", "Abstract Motivated by the success of powerful while expensive techniques to recognize words in a holistic way (, 2013; , 2014; , 2016) object proposals techniques emerge as an alternative to the traditional text detectors. 
In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way. Our experiments demonstrate that the presented method is superior in its ability of producing good quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals in different standard benchmarks, including focused or incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers (, 2014; , 2016) shows competitive performance in end-to-end word spotting, and, in some benchmarks, outperforms previously published results. Concretely, in the challenging ICDAR2015 Incidental Text dataset, we overcome in more than 10 F -score the best-performing method in the last ICDAR Robust Reading Competition (Karatzas, 2015). Source code of the complete end-to-end system is available at https: github.com lluisgomez TextProposals ." ] }
cmp-lg9709007
1578881253
Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.
Lexical databases have been employed recently in word sense disambiguation. For example, Agirre and Rigau @cite_4 make use of a semantic distance that takes into account structural factors in WordNet for achieving good results for this task. Additionally, Resnik @cite_3 combines the use of WordNet and a text collection for a definition of a distance for disambiguating noun groupings. Although the text collection is not a training collection (in the sense of a collection of manually labeled texts for a pre-defined text processing task), his approach can be regarded as the most similar to ours in the disambiguation setting. Finally, Ng and Lee @cite_11 make use of several sources of information inside a training collection (neighborhood, part of speech, morphological form, etc.) to get good results in disambiguating unrestricted text.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_11" ], "mid": [ "2165897980", "1561908597", "128995279", "2952637164" ], "abstract": [ "Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers, the equivalent of \"society\" is \"database,\" and the equivalent of \"use\" is \"a way to search the database\". We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts, we use the World Wide Web (WWW) as the database, and Google as the search engine. The method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract similarity, the Google similarity distance, of words and phrases from the WWW using Google page counts. The WWW is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. We give applications in hierarchical clustering, classification, and language translation. We give examples to distinguish between colors and numbers, cluster names of paintings by 17th century Dutch masters and names of books by English novelists, the ability to understand emergencies and primes, and we demonstrate the ability to do a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in an a mean agreement of 87 percent with the expert crafted WordNet categories", "This paper presents an adaptation of Lesk's dictionary-based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the SENSEVAL-2 word sense disambiguation exercise, and attains an overall accuracy of 32 . This represents a significant improvement over the 16 and 23 accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation systems.", "The WordNet lexical database is now quite large and offers broad coverage of general lexical relations in English. As is evident in this volume, WordNet has been employed as a resource for many applications in natural language processing (NLP) and information retrieval (IR). However, many potentially useful lexical relations are currently missing from WordNet. Some of these relations, while useful for NLP and IR applications, are not necessarily appropriate for a general, domain-independent lexical database. For example, WordNet’s coverage of proper nouns is rather sparse, but proper nouns are often very important in application tasks. The standard way lexicographers find new relations is to look through huge lists of concordance lines. However, culling through long lists of concordance lines can be a rather daunting task (Church and Hanks, 1990), so a method that picks out those lines that are very likely to hold relations of interest should be an improvement over more traditional techniques. 
This chapter describes a method for the automatic discovery of WordNetstyle lexico-semantic relations by searching for corresponding lexico-syntactic patterns in large text collections. Large text corpora are now widely available, and can be viewed as vast resources from which to mine lexical, syntactic, and semantic information. This idea is reminiscent of what is known as “data mining” in the artificial intelligence literature (Fayyad and Uthurusamy, 1996), however, in this case the ore is raw text rather than tables of numerical data. The Lexico-Syntactic Pattern Extraction (LSPE) method is meant to be useful as an automated or semi-automated aid for lexicographers and builders of domain-dependent knowledge-bases. The LSPE technique is light-weight; it does not require a knowledge base or complex interpretation modules in order to suggest new WordNet relations.", "Word sense disambiguation algorithms, with few exceptions, have made use of only one lexical knowledge source. We describe a system which performs unrestricted word sense disambiguation (on all content words in free text) by combining different knowledge sources: semantic preferences, dictionary definitions and subject domain codes along with part-of-speech tags. The usefulness of these sources is optimised by means of a learning algorithm. We also describe the creation of a new sense tagged corpus by combining existing resources. Tested accuracy of our approach on this corpus exceeds 92 , demonstrating the viability of all-word disambiguation rather than restricting oneself to a small sample." ] }
cmp-lg9706006
2953123431
Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature -- text categorization. We argue that these algorithms -- which categorize documents by learning a linear separator in the feature space -- have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set.
The methods that are most similar to our techniques are the on-line algorithms used in @cite_16 and @cite_10 . In the first, two algorithms, a multiplicative update and additive update algorithms suggested in @cite_5 are evaluated in the domain, and are shown to perform somewhat better than Rocchio's algorithm. While both these works make use of multiplicative update algorithms, as we do, there are two major differences between those studies and the current one. First, there are some important technical differences between the algorithms used. Second, the algorithms we study here are mistake-driven; they update the weight vector only when a mistake is made, and not after every example seen. The Experts algorithm studied in @cite_10 is very similar to a basic version of the algorithm which we study here. The way we treat the negative weights is different, though, and significantly more efficient, especially in sparse domains (see ). Cohen and Singer experiment also, using the same algorithm, with more complex features (sparse n-grams) and show that, as expected, it yields better results.
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_10" ], "mid": [ "2097645432", "1988790447", "2069317438", "2003677307" ], "abstract": [ "In most kernel based online learning algorithms, when an incoming instance is misclassified, it will be added into the pool of support vectors and assigned with a weight, which often remains unchanged during the rest of the learning process. This is clearly insufficient since when a new support vector is added, we generally expect the weights of the other existing support vectors to be updated in order to reflect the influence of the added support vector. In this paper, we propose a new online learning method, termed Double Updating Online Learning, or DUOL for short, that explicitly addresses this problem. Instead of only assigning a fixed weight to the misclassified example received at the current trial, the proposed online learning algorithm also tries to update the weight for one of the existing support vectors. We show that the mistake bound can be improved by the proposed online learning method. We conduct an extensive set of empirical evaluations for both binary and multi-class online learning tasks. The experimental results show that the proposed technique is considerably more effective than the state-of-the-art online learning algorithms. The source code is available to public at http: www.cais.ntu.edu.sg chhoi DUOL .", "In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.", "We consider two algorithm for on-line prediction based on a linear model. The algorithms are the well-known Gradient Descent (GD) algorithm and a new algorithm, which we call EG(+ -). They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG(+ -) algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG(+ -) and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG(+ -) has a much smaller loss if only a few components of the input are relevant for the predictions. 
We have performed experiments, which show that our worst-case upper bounds are quite tight already on simple artificial data.", "We describe and analyze an online algorithm for supervised learning of pseudo-metrics. The algorithm receives pairs of instances and predicts their similarity according to a pseudo-metric. The pseudo-metrics we use are quadratic forms parameterized by positive semi-definite matrices. The core of the algorithm is an update rule that is based on successive projections onto the positive semi-definite cone and onto half-space constraints imposed by the examples. We describe an efficient procedure for performing these projections, derive a worst case mistake bound on the similarity predictions, and discuss a dual version of the algorithm in which it is simple to incorporate kernel operators. The online algorithm also serves as a building block for deriving a large-margin batch algorithm. We demonstrate the merits of the proposed approach by conducting experiments on MNIST dataset and on document filtering." ] }
cmp-lg9701003
2952751702
In expert-consultation dialogues, it is inevitable that an agent will at times have insufficient information to determine whether to accept or reject a proposal by the other agent. This results in the need for the agent to initiate an information-sharing subdialogue to form a set of shared beliefs within which the agents can effectively re-evaluate the proposal. This paper presents a computational strategy for initiating such information-sharing subdialogues to resolve the system's uncertainty regarding the acceptance of a user proposal. Our model determines when information-sharing should be pursued, selects a focus of information-sharing among multiple uncertain beliefs, chooses the most effective information-sharing strategy, and utilizes the newly obtained information to re-evaluate the user proposal. Furthermore, our model is capable of handling embedded information-sharing subdialogues.
Grosz, Sidner and Lochbaum @cite_7 @cite_1 developed a SharedPlan approach to modelling collaborative discourse, and Sidner formulated an artificial language for modeling such discourse. Sidner viewed a collaborative planning process as proposal acceptance and proposal rejection sequences. Her artificial language treats an utterance such as Why do X? as a proposal for the hearer to provide support for his proposal to do X. However, Sidner's work is descriptive and does not provide a mechanism for determining when and how such a proposal should be made nor how responses should be formulated in information-sharing subdialogues.
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "2148389694", "1564910013", "2962845465", "2122514299" ], "abstract": [ "A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's beliefs and intentions from the other's, and avoid assumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algorithm for plan recognition that is based on the Shared-Plan model of collaboration (Grosz and Sidner, 1990; , 1990) and that satisfies these constraints.", "This paper addresses the problem of designing a system that accepts a plan structure of the sort generated by AI planning programs and produces natural language text explaining how to execute the plan. We describe a system that generates text from plans produced by the NONLIN planner (Tate 1976).The results of our system are promising, but the texts still lack much of the smoothness of human-generated text. This is partly because, although the domain of plans seems a priori to provide rich structure that a natural language generator can use, in practice a plan that is generated without the production of explanations in mind rarely contains the kinds of information that would yield an interesting natural language account. For instance, the hierarchical organization assigned to a plan is liable to reflect more a programmer's approach to generating a class of plans efficiently than the way that a human would naturally \"chunk\" the relevant actions. Such problems are, of course, similar to those that Swartout (1983) encountered with expert systems. In addition, AI planners have a restricted view of the world that is hard to match up with the normal semantics of natural language expressions. Thus constructs that are primitive to the planner may be only clumsily or misleadingly expressed in natural language, and the range of possible natural language constructs may be artificially limited by the shallowness of the planner's representations.", "The paper presents a knowledge representation formalism, in the form of a high-level Action Description Language (ADL) for multi-agent systems, where autonomous agents reason and act in a shared environment. Agents are autonomously pursuing individual goals, but are capable of interacting through a shared knowledge repository. In their interactions through shared portions of the world, the agents deal with problems of synchronization and concurrency; the action language allows the description of strategies to ensure a consistent global execution of the agents’ autonomously derived plans. A distributed planning problem is formalized by providing the declarative specications of the portion of the problem pertaining a single agent. Each of these specications is executable by a stand-alone CLP-based planner. The coordination among agents exploits a Linda infrastructure. The proposal is validated in a prototype implementation developed in SICStus Prolog. To appear in Theory and Practice of Logic Programming (TPLP). Research partially funded by GNCS-INdAM projects, MUR-PRIN: Innovative and multidisciplinary approaches for constraint and preference reasoning project; NSF grants IIS-0812267 and HRD-0420407; and grants 2009.010.0336 and 2010.011.0403.", "Most previous work on trainable language generation has focused on two paradigms: (a) using a generation decisions of an existing generator. Both approaches rely on the existence of a handcrafted generation component, which is likely to limit their scalability to new domains. 
The first contribution of this article is to present Bagel, a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases, according to Factored Language Models (FLMs). As domain utterances are not readily available for most natural language generation tasks, a large creative effort is required to produce the data necessary to represent human linguistic variation for nontrivial domains. This article is based on the assumption that learning to produce paraphrases can be facilitated by collecting data from a large sample of untrained annotators using crowdsourcing—rather than a few domain experts—by relying on a coarse meaning representation. A second contribution of this article is to use crowdsourced data to show how dialogue naturalness can be improved by learning to vary the output utterances generated for a given semantic input. Two data-driven methods for generating paraphrases in dialogue are presented: (a) by sampling from the n-best list of realizations produced by Bagel's FLM reranker; and (b) by learning a structured perceptron predicting whether candidate realizations are valid paraphrases. We train Bagel on a set of 1,956 utterances produced by 137 annotators, which covers 10 types of dialogue acts and 128 semantic concepts in a tourist information system for Cambridge. An automated evaluation shows that Bagel outperforms utterance class LM baselines on this domain. A human evaluation of 600 resynthesized dialogue extracts shows that Bagel's FLM output produces utterances comparable to a handcrafted baseline, whereas the perceptron classifier performs worse. Interestingly, human judges find the system sampling from the n-best list to be more natural than a system always returning the first-best utterance. The judges are also more willing to interact with the n-best system in the future. These results suggest that capturing the large variation found in human language using data-driven methods is beneficial for dialogue interaction." ] }

This is a copy of the Multi-XScience dataset, except that the input source documents of its train, validation, and test splits have been replaced with documents selected by a dense retriever. The retrieval pipeline used:

  • query: The related_work field of each example
  • corpus: The union of all documents in the train, validation and test splits
  • retriever: facebook/contriever-msmarco via PyTerrier with default settings
  • top-k strategy: "max", i.e. the number of documents retrieved per example, k, is set to the maximum number of source documents seen across examples in this dataset; in this case k == 4
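For readers who want to reproduce or adapt the retrieval step, a minimal sketch is given below. The released splits were built with facebook/contriever-msmarco via PyTerrier; this sketch instead calls the same checkpoint directly through Hugging Face transformers using the standard Contriever recipe (mean-pooled token embeddings, inner-product scoring, k == 4), so scores may differ slightly from the pipeline that actually produced this dataset. The variables `corpus_texts` and `queries` are illustrative placeholders, not the real corpus.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "facebook/contriever-msmarco"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL).eval()

def embed(texts, batch_size=32):
    """Mean-pooled Contriever embeddings for a list of strings."""
    chunks = []
    for i in range(0, len(texts), batch_size):
        enc = tokenizer(texts[i:i + batch_size], padding=True,
                        truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state      # (B, T, H)
        mask = enc["attention_mask"].unsqueeze(-1)       # (B, T, 1)
        emb = (hidden * mask).sum(1) / mask.sum(1)       # mean over non-pad tokens
        chunks.append(emb)
    return torch.cat(chunks)

# Placeholders: in the real pipeline the corpus is the union of all source
# abstracts across the train/validation/test splits, and each query is the
# related_work field of an example.
corpus_texts = [
    "abstract of candidate source document 1 ...",
    "abstract of candidate source document 2 ...",
    "abstract of candidate source document 3 ...",
    "abstract of candidate source document 4 ...",
]
queries = ["a related_work paragraph used as the query ..."]

corpus_emb = embed(corpus_texts)
query_emb = embed(queries)
scores = query_emb @ corpus_emb.T              # inner-product similarity
topk = scores.topk(k=4, dim=1).indices         # k == 4, the "max" strategy above
```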

Retrieval results on the train set:

Recall@100  Rprec   Precision@k  Recall@k
0.5270      0.2005  0.1551       0.2357

Retrieval results on the validation set:

Recall@100  Rprec   Precision@k  Recall@k
0.5310      0.2026  0.1603       0.2432

Retrieval results on the test set:

Recall@100  Rprec   Precision@k  Recall@k
0.5229      0.2081  0.1612       0.2440
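The columns report standard set-based retrieval metrics averaged over queries: Recall@100 and Recall@k are the fraction of an example's true source documents found in the top 100 and top k results, Precision@k is the fraction of the top k results that are true sources, and Rprec is the precision at rank R, where R is the number of true sources for that example. A minimal per-query sketch is below; the exact evaluation script used for the numbers above is not included here, so treat the helper below as an assumption rather than the official implementation.

```python
def per_query_metrics(ranked_ids, gold_ids, k=4):
    """Set-based metrics for one query.

    ranked_ids: document ids in descending score order (assumed length >= 100)
    gold_ids:   ids of the source documents the original example actually cites
                (assumed non-empty, as in Multi-XScience)
    """
    gold = set(gold_ids)
    R = len(gold)

    def hits(n):
        return len(gold.intersection(ranked_ids[:n]))

    return {
        "Recall@100": hits(100) / R,
        "Rprec": hits(R) / R,
        f"Precision@{k}": hits(k) / k,
        f"Recall@{k}": hits(k) / R,
    }

# Corpus-level numbers are the mean of each metric over all queries in a split.
```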
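As in the original Multi-XScience, each example pairs a related_work paragraph with the abstracts stored in its ref_abstract field; in this copy those are the k == 4 retrieved abstracts rather than the originally cited ones. A minimal loading sketch with the datasets library is shown below; the repository id is a placeholder, and the field layout is assumed to follow Multi-XScience (ref_abstract as parallel lists of cite_N, mid, and abstract).

```python
from datasets import load_dataset

# "user/multi_xscience_contriever" is a placeholder: substitute the actual
# Hub repository id of this dataset.
ds = load_dataset("user/multi_xscience_contriever")

example = ds["train"][0]
print(example["related_work"])               # the paragraph used as the retrieval query
print(example["ref_abstract"]["abstract"])   # the retrieved source abstracts (k == 4)
```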