Columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict)
0706.2434
2143252188
In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.
In @cite_8 , Ilow and Hatzinakos model the interference as a shot-noise process and show that the interference is a symmetric @math -stable process @cite_7 when the nodes are Poisson distributed on the plane. They also show that channel randomness affects the dispersion of the distribution, while the path-loss exponent affects the exponent of the process. The throughput and outage in the presence of interference are analyzed in @cite_14 @cite_22 @cite_9 . In @cite_14 , the shot-noise process is analyzed using stochastic geometry when the nodes are Poisson distributed and the fading is Rayleigh. In @cite_23 , upper and lower bounds are obtained under general fading for a Poisson arrangement of nodes.
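As an illustration of the shot-noise interference model discussed above (a sketch, not code from any of the cited works), the snippet below simulates interferers as a homogeneous PPP in a large disk with i.i.d. Rayleigh (unit-mean exponential) power fades, and compares the empirical success probability of a reference link against the well-known closed form exp(-lam * pi * r^2 * theta^(2/alpha) * Gamma(1 + 2/alpha) * Gamma(1 - 2/alpha)) for Poisson networks with Rayleigh fading. The parameter values, disk radius, and trial count are arbitrary choices for the sketch.

```python
import math
import numpy as np

def ppp_success_theory(lam, r, alpha, theta):
    # Closed-form success probability for a PPP of density lam, link length r,
    # path-loss exponent alpha > 2, SIR threshold theta, Rayleigh fading.
    delta = 2.0 / alpha
    return math.exp(-lam * math.pi * r**2 * theta**delta
                    * math.gamma(1.0 + delta) * math.gamma(1.0 - delta))

def simulate_success_prob(lam, r, alpha, theta, trials=4000, radius=30.0, seed=1):
    # Monte Carlo estimate: receiver at the origin, desired transmitter at
    # distance r, interferers form a PPP in a disk of the given radius
    # (chosen large enough that the truncated far field is negligible).
    rng = np.random.default_rng(seed)
    successes = 0
    area = math.pi * radius**2
    for _ in range(trials):
        n = rng.poisson(lam * area)
        d = radius * np.sqrt(rng.random(n))   # uniform radii in the disk
        fades = rng.exponential(1.0, size=n)  # Rayleigh power fading
        interference = float(np.sum(fades * d ** (-alpha)))
        signal = rng.exponential(1.0) * r ** (-alpha)
        if signal >= theta * interference:
            successes += 1
    return successes / trials

if __name__ == "__main__":
    est = simulate_success_prob(0.05, 2.0, 4.0, 1.0)
    print("simulated:", est, "closed form:", ppp_success_theory(0.05, 2.0, 4.0, 1.0))
```

With these placeholder parameters the empirical estimate agrees with the closed form to within Monte Carlo error, which is the kind of consistency check the stochastic-geometry analyses above rely on.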
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_23" ], "mid": [ "2086605530", "2143252188", "2740650368", "2022637917" ], "abstract": [ "We define and analyze a random coverage process of the @math -dimensional Euclidian space which allows one to describe a continuous spectrum that ranges from the Boolean model to the Poisson-Voronoi tessellation to the Johnson-Mehl model. Like for the Boolean model, the minimal stochastic setting consists of a Poisson point process on this Euclidian space and a sequence of real valued random variables considered as marks of this point process. In this coverage process, the cell attached to a point is defined as the region of the space where the effect of the mark of this point exceeds an affine function of the cumulated effect of all marks. This cumulated effect is defined as the shot noise process associated with the marked point process. In addition to analyzing and visualizing this continuum, we study various basic properties of the coverage process such as the probability that a point or a pair of points be covered by a typical cell. We also determine the distribution of the number of cells which cover a given point, and show how to provide deterministic bounds on this number. Finally, we also analyze convergence properties of the coverage process using the framework of closed sets, and its differentiability properties using perturbation analysis. Our results require a pathwise continuity property for the shot noise process for which we provide sufficient conditions. The model in question stems from wireless communications where several antennas share the same (or different but interfering) channel(s). In this case, the area where the signal of a given antenna can be received is the area where the signal to interference ratio is large enough. We describe this class of problems in detail in the paper. 
The obtained results allow one to compute quantities of practical interest within this setting: for instance the outage probability is obtained as the complement of the volume fraction; the law of the number of cells covering a point allows one to characterize handover strategies etc.", "In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.", "We consider a single cell wireless uplink in which randomly arriving devices transmit their payload to a receiver. Given SNR per user, payload size per device, a fixed latency constraint T, total available bandwidth W, i.e., total symbol resources is given by N = TW. The total bandwidth W is evenly partitioned into B bins. 
Each time slot of duration T is split into a maximum number of retransmission attempts M. Hence, the N resources are partitioned into N MB resources each bin per retransmission. We characterize the maximum average rate or number of Poisson arrivals that can successfully complete the random access procedure such that the probability of outage is sufficiently small. We analyze the proposed setting for i) noise-limited regime and ii) interference-limited regime. We show that in the noise-limited regime the devices share the resources, and in the interference-limited regime, the resources split such that devices do not experience any interference. We then incorporate Rayleigh fading to model the channel power gain distribution. Although the variability of the channel causes a drop in the number of arrivals that can successfully complete the random access phase, similar scaling results extend to the Rayleigh fading case.", "In mobile networks, distance variations caused by node mobility generate fluctuations in the channel gains. Such fluctuations can be treated as another type of fading besides multipath effects. In this paper, the interference statistics in mobile random networks are characterized by incorporating the distance variations of mobile nodes to the channel gain fluctuations. The mean interference is calculated at the origin and at the border of a finite mobile network. The network performance is evaluated in terms of the outage probability. Compared to a static network, the interference in a single snapshot does not change under uniform mobility models. However, random waypoint mobility increases (decreases) the interference at the origin (at the border). Furthermore, due to the correlation of the node locations, the interference and outage are temporally and spatially correlated. 
We quantify the temporal correlation of the interference and outage in mobile Poisson networks in terms of the correlation coefficient and conditional outage probability, respectively. The results show that it is essential that routing, MAC, and retransmission schemes need to be smart (i.e., correlation-aware) to avoid bursts of transmission failures." ] }
0706.2434
2143252188
Even in the case of the PPP, the interference distribution is not known for all fading distributions and channel attenuation models; in most cases only the characteristic function or the Laplace transform of the interference can be obtained. The Laplace transform can be used to evaluate the outage probabilities under Rayleigh fading @cite_14 @cite_21 . The analysis of the outage probability requires the conditional Laplace transform, i.e., the Laplace transform given that there is a point of the process located at the origin. For the PPP, the conditional Laplace transform is equal to the unconditional Laplace transform (a consequence of Slivnyak's theorem). To the best of our knowledge, there is no prior literature pertaining to the interference characterization in a clustered network.
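For the homogeneous PPP, the Laplace-transform route mentioned above has a simple closed form. A minimal sketch, assuming unit-mean exponential (Rayleigh power) fading, path loss |x|^-alpha over the whole plane, and illustrative parameter values:

```python
import math

def ppp_interference_laplace(s, lam, alpha):
    # Laplace transform L_I(s) = E[exp(-s I)] of PPP shot-noise interference:
    # density lam, path loss |x|^-alpha (alpha > 2), i.i.d. unit-mean
    # exponential fades. By Slivnyak's theorem the conditional and
    # unconditional transforms coincide for the PPP.
    delta = 2.0 / alpha
    return math.exp(-lam * math.pi * math.gamma(1.0 + delta)
                    * math.gamma(1.0 - delta) * s ** delta)

def rayleigh_outage(theta, r, lam, alpha):
    # For a Rayleigh-faded link of length r, P(success) = L_I(theta * r**alpha),
    # so the outage probability follows directly from the Laplace transform.
    return 1.0 - ppp_interference_laplace(theta * r ** alpha, lam, alpha)

if __name__ == "__main__":
    for theta_db in (-10, 0, 10):
        theta = 10.0 ** (theta_db / 10.0)
        print(theta_db, "dB:", round(rayleigh_outage(theta, 2.0, 0.05, 4.0), 4))
```

For alpha = 4 the expression reduces to 1 - exp(-lam * pi * r^2 * sqrt(theta) * pi / 2), which is a convenient sanity check.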
{ "cite_N": [ "@cite_14", "@cite_21" ], "mid": [ "2114207517", "2042164227", "2224070075", "2740650368" ], "abstract": [ "In cellular network models, the base stations are usually assumed to form a lattice or a Poisson point process (PPP). In reality, however, they are deployed neither fully regularly nor completely randomly. Accordingly, in this paper, we consider the very general class of motion-invariant models and analyze the behavior of the outage probability (the probability that the signal-to-interference-plus-noise-ratio (SINR) is smaller than a threshold) as the threshold goes to zero. We show that, remarkably, the slope of the outage probability (in dB) as a function of the threshold (also in dB) is the same for essentially all motion-invariant point processes. The slope merely depends on the fading statistics. Using this result, we introduce the notion of the asymptotic deployment gain (ADG), which characterizes the horizontal gap between the success probabilities of the PPP and another point process in the high-reliability regime (where the success probability is near 1). To demonstrate the usefulness of the ADG for the characterization of the SINR distribution, we investigate the outage probabilities and the ADGs for different point processes and fading statistics by simulations.", "This paper deals with the distribution of cumulated instantaneous interference power in a Rayleigh fading channel for an infinite number of interfering stations, where each station transmits with a certain probability, independently of all others. If all distances are known, a necessary and sufficient condition is given for the corresponding distribution to be nondefective. Explicit formulae of density and distribution functions are obtained in the interesting special case that interfering stations are located on a linear grid. Moreover, the Laplace transform of cumulated power is investigated when the positions of stations follow a one- or two-dimensional Poisson process. 
It turns out that the corresponding distribution is defective for the two-dimensional models.", "Interference field in wireless networks is often modeled by a homogeneous Poisson point process (PPP). While it is realistic in modeling the inherent node irregularity and provides meaningful first-order results, it falls short in modeling the effect of interference management techniques, which typically introduces some form of spatial interaction among active transmitters. In some applications, such as cognitive radio and device-to-device networks, this interaction may result in the formation of holes in an otherwise homogeneous interference field. The resulting interference field can be accurately modeled as a Poisson hole process (PHP) . Despite the importance of the PHP in many applications, the exact characterization of interference experienced by a typical node in the PHP is not known. In this paper, we derive several tight upper and lower bounds on the Laplace transform of this interference. Numerical comparisons reveal that the new bounds outperform all known bounds and approximations, and are remarkably tight in all operational regimes of interest. The key in deriving these tight and yet simple bounds is to capture the local neighborhood around the typical node accurately while simplifying the far field to attain tractability. Ideas for tightening these bounds further by incorporating the effect of overlaps in the holes are also discussed. These results immediately lead to an accurate characterization of the coverage probability of the typical node in the PHP under Rayleigh fading.", "We consider a single cell wireless uplink in which randomly arriving devices transmit their payload to a receiver. Given SNR per user, payload size per device, a fixed latency constraint T, total available bandwidth W, i.e., total symbol resources is given by N = TW. The total bandwidth W is evenly partitioned into B bins. 
Each time slot of duration T is split into a maximum number of retransmission attempts M. Hence, the N resources are partitioned into N MB resources each bin per retransmission. We characterize the maximum average rate or number of Poisson arrivals that can successfully complete the random access procedure such that the probability of outage is sufficiently small. We analyze the proposed setting for i) noise-limited regime and ii) interference-limited regime. We show that in the noise-limited regime the devices share the resources, and in the interference-limited regime, the resources split such that devices do not experience any interference. We then incorporate Rayleigh fading to model the channel power gain distribution. Although the variability of the channel causes a drop in the number of arrivals that can successfully complete the random access phase, similar scaling results extend to the Rayleigh fading case." ] }
0706.2434
2143252188
@cite_16 introduces the notion of transmission capacity, a measure of the area spectral efficiency of the successful transmissions resulting from the optimal contention density as a function of the link distance. Transmission capacity is defined as the product of the maximum density of successful transmissions and their data rate, given an outage constraint. Bounds on the transmission capacity under different models of fading are also provided when the node locations are Poisson distributed.
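Taking the standard Rayleigh-fading PPP success probability exp(-lam * K) as the outage model (an assumption of this sketch, with placeholder parameter values), the transmission capacity defined above can be computed by inverting the outage constraint:

```python
import math

def max_density(eps, r, theta, alpha):
    # Largest interferer density lam such that the outage probability
    # 1 - exp(-lam * K) stays at most eps (Rayleigh fading, PPP, alpha > 2).
    delta = 2.0 / alpha
    K = (math.pi * r**2 * theta**delta
         * math.gamma(1.0 + delta) * math.gamma(1.0 - delta))
    return -math.log(1.0 - eps) / K

def transmission_capacity(eps, r, theta, alpha, rate=1.0):
    # Density of *successful* transmissions times the per-link data rate,
    # given the outage constraint eps (the definition in the text).
    return (1.0 - eps) * max_density(eps, r, theta, alpha) * rate

if __name__ == "__main__":
    for eps in (0.01, 0.05, 0.1):
        print(eps, "->", transmission_capacity(eps, r=2.0, theta=1.0, alpha=4.0))
```

The inversion is exact for this outage model: plugging the returned density back into 1 - exp(-lam * K) recovers eps.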
{ "cite_N": [ "@cite_16" ], "mid": [ "2963847582", "2095796369", "2130335471", "2139956786" ], "abstract": [ "Transmission capacity (TC) is a performance metric for wireless networks that measures the spatial intensity of successful transmissions per unit area, subject to a constraint on the permissible outage probability (where outage occurs when the signal to interference plus noise ratio (SINR) at a receiver is below a threshold). This volume gives a unified treatment of the TC framework that has been developed by the authors and their collaborators over the past decade. The mathematical framework underlying the analysis (reviewed in Section 2) is stochastic geometry: Poisson point processes model the locations of interferers, and (stable) shot noise processes represent the aggregate interference seen at a receiver. Section 3 presents TC results (exact, asymptotic, and bounds) on a simple model in order to illustrate a key strength of the framework: analytical tractability yields explicit performance dependence upon key model parameters. Section 4 presents enhancements to this basic model — channel fading, variable link distances (VLD), and multihop. Section 5 presents four network design case studies well-suited to TC: (i) spectrum management, (ii) interference cancellation, (iii) signal threshold transmission scheduling, and (iv) power control. Section 6 studies the TC when nodes have multiple antennas, which provides a contrast vs. classical results that ignore interference.", "In this paper, upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived. We define transmission capacity as the product of the maximum density of successful transmissions multiplied by their data rate, given an outage constraint. 
Assuming that the nodes are randomly distributed in space according to a Poisson point process, we derive upper and lower bounds for frequency hopping (FH-CDMA) and direct sequence (DS-CDMA) SS networks, which incorporate traditional modulation types (no spreading) as a special case. These bounds cleanly summarize how ad hoc network capacity is affected by the outage probability, spreading factor, transmission power, target signal-to-noise ratio (SNR), and other system parameters. Using these bounds, it can be shown that FH-CDMA obtains a higher transmission capacity than DS-CDMA on the order of M sup 1-2 spl alpha , where M is the spreading factor and spl alpha >2 is the path loss exponent. A tangential contribution is an (apparently) novel technique for obtaining tight bounds on tail probabilities of additive functionals of homogeneous Poisson point processes.", "The transmission capacity of an ad-hoc network is the maximum density of active transmitters in an unit area, given an outage constraint at each receiver for a fixed rate of transmission. Assuming channel state information is available at the receiver, this paper presents bounds on the transmission capacity as a function of the number of antennas used for transmission, and the spatial receive degrees of freedom used for interference cancelation at the receiver. Canceling the strongest interferers, using a single antenna for transmission together with using all but one spatial receive degrees of freedom for interference cancelation is shown to maximize the transmission capacity. Canceling the closest interferers, using a single antenna for transmission together with using a fraction of the total spatial receive degrees of freedom for interference cancelation depending on the path loss exponent, is shown to maximize the transmission capacity.", "We study the transmission capacities of two coexisting wireless networks (a primary network vs. 
a secondary network) that operate in the same geographic region and share the same spectrum. We define transmission capacity as the product among the density of transmissions, the transmission rate, and the successful transmission probability (1 minus the outage probability). The primary (PR) network has a higher priority to access the spectrum without particular considerations for the secondary (SR) network, where the SR network limits its interference to the PR network by carefully controlling the density of its transmitters. Assuming that the nodes are distributed according to Poisson point processes and the two networks use different transmission ranges, we quantify the transmission capacities for both of these two networks and discuss their tradeoff based on asymptotic analysis. Our results show that if the PR network permits a small increase of its outage probability, the sum transmission capacity of the two networks (i.e., the overall spectrum efficiency per unit area) will be boosted significantly over that of a single network." ] }
0707.0648
2950368606
The k-forest problem is a common generalization of both the k-MST and the dense- @math -subgraph problems. Formally, given a metric space on @math vertices @math , with @math demand pairs @math and a "target" @math , the goal is to find a minimum cost subgraph that connects at least @math demand pairs. In this paper, we give an @math -approximation algorithm for @math -forest, improving on the previous best ratio of @math by Segev & Segev. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an @math point metric space with @math objects each with its own source and destination, and a vehicle capable of carrying at most @math objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an @math -approximation algorithm for the @math -forest problem implies an @math -approximation algorithm for Dial-a-Ride. Using our results for @math -forest, we get an @math -approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an @math -approximation by Charikar & Raghavachari; our results give a different proof of a similar approximation guarantee--in fact, when the vehicle capacity @math is large, we give a slight improvement on their results.
The @math -forest problem: The @math -forest problem is relatively new: it was defined by Hajiaghayi & Jain @cite_20 . An @math -approximation algorithm for even the directed @math -forest problem can be inferred from @cite_19 . Recently, Segev & Segev @cite_10 gave an @math -approximation algorithm for @math -forest.
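To make the problem definition concrete, here is a brute-force sketch (exponential time, toy instances only; it illustrates the objective, not any of the approximation algorithms cited above): given edge costs and demand pairs, find the cheapest edge subset that connects at least k of the pairs.

```python
from itertools import combinations

def k_forest_bruteforce(n, edges, demands, k):
    # edges: list of (u, v, cost); demands: list of (s, t) pairs.
    # Returns the minimum cost of a subgraph connecting >= k demand pairs,
    # or None if no subset of edges suffices.
    best = None
    for size in range(len(edges) + 1):
        for subset in combinations(range(len(edges)), size):
            parent = list(range(n))  # tiny union-find over the vertices
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]  # path halving
                    x = parent[x]
                return x
            for i in subset:
                u, v, _ = edges[i]
                parent[find(u)] = find(v)
            connected = sum(1 for s, t in demands if find(s) == find(t))
            if connected >= k:
                cost = sum(edges[i][2] for i in subset)
                if best is None or cost < best:
                    best = cost
    return best
```

On the path 0-1-2-3 with edge costs (1, 5, 1) and demands (0,1), (2,3), (0,3), connecting two pairs costs 2, while connecting all three forces the expensive middle edge for a total of 7; the target k controls this trade-off.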
{ "cite_N": [ "@cite_19", "@cite_10", "@cite_20" ], "mid": [ "2173641041", "1971222832", "1500932665", "2120899253" ], "abstract": [ "An instance of the k-Steiner forest problem consists of an undirected graph G = (V,E), the edges of which are associated with non-negative costs, and a collection D = (si,ti): 1 ≤ i ≤ d of distinct pairs of vertices, interchangeably referred to as demands. We say that a forest F ⊆ G connects a demand (si, ti) when it contains an si-ti path. Given a requirement parameter K ≤ |D|, the goal is to find a minimum cost forest that connects at least k demands in D. This problem has recently been studied by Hajiaghayi and Jain [SODA'06], whose main contribution in this context was to relate the inapproximability of k-Steiner forest to that of the dense k-subgraph problem. However, Hajiaghayi and Jain did not provide any algorithmic result for the respective settings, and posed this objective as an important direction for future research. In this paper, we present the first non-trivial approximation algorithm for the k-Steiner forest problem, which is based on a novel extension of the Lagrangian relaxation technique. Specifically, our algorithm constructs a feasible forest whose cost is within a factor of O(min n2 3, √d ċ log d) of optimal, where n is the number of vertices in the input graph and d is the number of demands.", "In this paper we study the prize-collecting version of the Generalized Steiner Tree problem. To the best of our knowledge, there is no general combinatorial technique in approximation algorithms developed to study the prize-collecting versions of various problems. These problems are studied on a case by case basis by [5] by applying an LP-rounding technique which is not a combinatorial approach. The main contribution of this paper is to introduce a general combinatorial approach towards solving these problems through novel primal-dual schema (without any need to solve an LP). 
We fuse the primal-dual schema with Farkas lemma to obtain a combinatorial 3-approximation algorithm for the Prize-Collecting Generalized Steiner Tree problem. Our work also inspires a combinatorial algorithm [19] for solving a special case of Kelly's problem [22] of pricing edges.We also consider the k-forest problem, a generalization of k-MST and k-Steiner tree, and we show that in spite of these problems for which there are constant factor approximation algorithms, the k-forest problem is much harder to approximate. In particular, obtaining an approximation factor better than O(n1 6-e) for k-forest requires substantially new ideas including improving the approximation factor O(n1 3-e) for the notorious densest k-subgraph problem. We note that k-forest and prize-collecting version of Generalized Steiner Tree are closely related to each other, since the latter is the Lagrangian relaxation of the former.", "The k-forest problem is a common generalization of both the k-MST and the dense-k-subgraph problems. Formally, given a metric space on n vertices V, with m demand pairs ⊆ V × V and a \"target\" k ≤ m, the goal is to find a minimum cost subgraph that connects at least k demand pairs. In this paper, we give an O(min √n,√k )- approximation algorithm for k-forest, improving on the previous best ratio of O(min n2 3,√m log n) by Segev and Segev [20]. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an n point metric space with m objects each with its own source and destination, and a vehicle capable of carrying at most k objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an a-approximation algorithm for the k-forest problem implies an O(αċlog2 n)-approximation algorithm for Dial-a-Ride. 
Using our results for k-forest, we get an O(min √n,√k ċlog2 n)-approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an O(√k log n)-approximation by Charikar and Raghavachari [5]; our results give a different proof of a similar approximation guarantee-- in fact, when the vehicle capacity k is large, we give a slight improvement on their results. The reduction from Dial-a-Ride to the k-forest problem is fairly robust, and allows us to obtain approximation algorithms (with the same guarantee) for the following generalizations: (i) Non-uniform Dial-a-Ride, where the cost of traversing each edge is an arbitrary nondecreasing function of the number of objects in the vehicle; and (ii) Weighted Diala-Ride, where demands are allowed to have different weights. The reduction is essential, as it is unclear how to extend the techniques of Charikar and Raghavachari to these Dial-a-Ride generalizations.", "In the Steiner Forest problem, we are given terminal pairs si, ti, and need to find the cheapest subgraph which connects each of the terminal pairs together. In 1991, Agrawal, Klein, and Ravi gave a primal-dual constant-factor approximation algorithm for this problem. Until this work, the only constant-factor approximations we know are via linear programming relaxations. In this paper, we consider the following greedy algorithm: Given terminal pairs in a metric space, a terminal is active if its distance to its partner is non-zero. Pick the two closest active terminals (say si, tj), set the distance between them to zero, and buy a path connecting them. Recompute the metric, and repeat. It has long been open to analyze this greedy algorithm. Our main result shows that this algorithm is a constant-factor approximation. We use this algorithm to give new, simpler constructions of cost-sharing schemes for Steiner forest. 
In particular, the first \"group-strict\" cost-shares for this problem implies a very simple combinatorial sampling-based algorithm for stochastic Steiner forest." ] }
0707.0648
2950368606
Dense @math -subgraph: The @math -forest problem is a generalization of the dense- @math -subgraph problem @cite_8 , as shown in @cite_20 . The best known approximation guarantee for dense- @math -subgraph is @math where @math is some constant, due to @cite_8 , and obtaining an improved guarantee has been a long-standing open problem. Strictly speaking, @cite_8 study a potentially harder problem: the maximization version, where one wants to pick @math vertices to maximize the number of edges in the induced graph. However, nothing better is known even for the minimization version (where one wants to pick the minimum number of vertices that induce @math edges), which is a special case of @math -forest. The @math -forest problem is also a generalization of @math -MST, for which a 2-approximation is known (Garg @cite_14 ).
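The maximization version described above (pick a fixed number of vertices to maximize the number of induced edges) is easy to state in code; a brute-force toy sketch for tiny graphs (the problem is NP-hard, so this only scales to very small instances and says nothing about the approximation guarantees discussed above):

```python
from itertools import combinations

def densest_k_subgraph(n, edges, k):
    # Brute force over all k-vertex subsets; returns (subset, induced_edges).
    best_set, best_edges = None, -1
    for S in combinations(range(n), k):
        chosen = set(S)
        count = sum(1 for u, v in edges if u in chosen and v in chosen)
        if count > best_edges:
            best_set, best_edges = S, count
    return best_set, best_edges
```

On a triangle {0, 1, 2} with a pendant vertex 3, the best 3-vertex choice is the triangle itself, inducing 3 edges.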
{ "cite_N": [ "@cite_14", "@cite_20", "@cite_8" ], "mid": [ "1500932665", "2266714125", "2161190897", "2890196309" ], "abstract": [ "The k-forest problem is a common generalization of both the k-MST and the dense-k-subgraph problems. Formally, given a metric space on n vertices V, with m demand pairs ⊆ V × V and a \"target\" k ≤ m, the goal is to find a minimum cost subgraph that connects at least k demand pairs. In this paper, we give an O(min √n,√k )- approximation algorithm for k-forest, improving on the previous best ratio of O(min n2 3,√m log n) by Segev and Segev [20]. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an n point metric space with m objects each with its own source and destination, and a vehicle capable of carrying at most k objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an a-approximation algorithm for the k-forest problem implies an O(αċlog2 n)-approximation algorithm for Dial-a-Ride. Using our results for k-forest, we get an O(min √n,√k ċlog2 n)-approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an O(√k log n)-approximation by Charikar and Raghavachari [5]; our results give a different proof of a similar approximation guarantee-- in fact, when the vehicle capacity k is large, we give a slight improvement on their results. The reduction from Dial-a-Ride to the k-forest problem is fairly robust, and allows us to obtain approximation algorithms (with the same guarantee) for the following generalizations: (i) Non-uniform Dial-a-Ride, where the cost of traversing each edge is an arbitrary nondecreasing function of the number of objects in the vehicle; and (ii) Weighted Diala-Ride, where demands are allowed to have different weights. 
The reduction is essential, as it is unclear how to extend the techniques of Charikar and Raghavachari to these Dial-a-Ride generalizations.", "Numerous graph mining applications rely on detecting subgraphs which are large near-cliques. Since formulations that are geared towards finding large near-cliques are hard and frequently inapproximable due to connections with the Maximum Clique problem, the poly-time solvable densest subgraph problem which maximizes the average degree over all possible subgraphs \"lies at the core of large scale data mining\" [10]. However, frequently the densest subgraph problem fails in detecting large near-cliques in networks. In this work, we introduce the k-clique densest subgraph problem, k ≥ 2. This generalizes the well studied densest subgraph problem which is obtained as a special case for k=2. For k=3 we obtain a novel formulation which we refer to as the triangle densest subgraph problem: given a graph G(V,E), find a subset of vertices S* such that τ(S*)=max limitsS ⊆ V t(S) |S|, where t(S) is the number of triangles induced by the set S. On the theory side, we prove that for any k constant, there exist an exact polynomial time algorithm for the k-clique densest subgraph problem . Furthermore, we propose an efficient 1 k-approximation algorithm which generalizes the greedy peeling algorithm of Asahiro and Charikar [8,18] for k=2. Finally, we show how to implement efficiently this peeling framework on MapReduce for any k ≥ 3, generalizing the work of Bahmani, Kumar and Vassilvitskii for the case k=2 [10]. On the empirical side, our two main findings are that (i) the triangle densest subgraph is consistently closer to being a large near-clique compared to the densest subgraph and (ii) the peeling approximation algorithms for both k=2 and k=3 achieve on real-world networks approximation ratios closer to 1 rather than the pessimistic 1 k guarantee. 
An interesting consequence of our work is that triangle counting, a well-studied computational problem in the context of social network analysis can be used to detect large near-cliques. Finally, we evaluate our proposed method on a popular graph mining application.", "We present algorithmic and hardness results for network design problems with degree or order constraints. The first problem we consider is the Survivable Network Design problem with degree constraints on vertices. The objective is to find a minimum cost subgraph which satisfies connectivity requirements between vertices and also degree upper bounds @math on the vertices. This includes the well-studied Minimum Bounded Degree Spanning Tree problem as a special case. Our main result is a @math -approximation algorithm for the edge-connectivity Survivable Network Design problem with degree constraints, where the cost of the returned solution is at most twice the cost of an optimum solution (satisfying the degree bounds) and the degree of each vertex @math is at most @math . This implies the first constant factor (bicriteria) approximation algorithms for many degree constrained network design problems, including the Minimum Bounded Degree Steiner Forest problem. Our results also extend to directed graphs and provide the first constant factor (bicriteria) approximation algorithms for the Minimum Bounded Degree Arborescence problem and the Minimum Bounded Degree Strongly @math -Edge-Connected Subgraph problem. In contrast, we show that the vertex-connectivity Survivable Network Design problem with degree constraints is hard to approximate, even when the cost of every edge is zero. A striking aspect of our algorithmic result is its simplicity. It is based on the iterative relaxation method, which is an extension of Jain's iterative rounding method. This provides an elegant and unifying algorithmic framework for a broad range of network design problems. 
We also study the problem of finding a minimum cost @math -edge-connected subgraph with at least @math vertices, which we call the @math -subgraph problem. This generalizes some well-studied classical problems such as the @math -MST and the minimum cost @math -edge-connected subgraph problems. We give a polylogarithmic approximation for the @math -subgraph problem. However, by relating it to the Densest @math -Subgraph problem, we provide evidence that the @math -subgraph problem might be hard to approximate for arbitrary @math .", "Several algorithms with an approximation guarantee of @math are known for the Set Cover problem, where @math is the number of elements. We study a generalization of the Set Cover problem, called the Partition Set Cover problem. Here, the elements are partitioned into @math , and we are required to cover at least @math elements from each color class @math , using the minimum number of sets. We give a randomized LP-rounding algorithm that is an @math approximation for the Partition Set Cover problem. Here @math denotes the approximation guarantee for a related Set Cover instance obtained by rounding the standard LP. As a corollary, we obtain improved approximation guarantees for various set systems for which @math is known to be sublogarithmic in @math . We also extend the LP rounding algorithm to obtain @math approximations for similar generalizations of the Facility Location type problems. Finally, we show that many of these results are essentially tight, by showing that it is NP-hard to obtain an @math -approximation for any of these problems." ] }
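The greedy "peeling" algorithm of Asahiro et al. and Charikar mentioned in the abstracts above can be sketched for the k=2 case, i.e., the ordinary densest-subgraph objective |E(S)|/|S|: repeatedly delete a minimum-degree vertex and remember the best intermediate density. The toy graph below is illustrative and not taken from the cited works.

```python
from collections import defaultdict

def densest_subgraph_peel(edges):
    """Greedy peeling 1/2-approximation for max |E(S)| / |S|."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    m = len(edges)
    best_density, best_set = 0.0, set(nodes)
    while nodes:
        d = m / len(nodes)
        if d > best_density:
            best_density, best_set = d, set(nodes)
        v = min(nodes, key=lambda u: len(adj[u]))  # minimum-degree vertex
        m -= len(adj[v])                            # its edges disappear
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        nodes.discard(v)
    return best_density, best_set

# K4 plus a pendant vertex: the densest subgraph is the K4 (density 6/4).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(densest_subgraph_peel(edges))
```

For the average-degree objective this peeling order yields a 1/2-approximation; the 1/k guarantee quoted in the abstract is the generalization to the k-clique density objective.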
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
This section presents some of the works that use game theory for power control. We recall that a Nash equilibrium is a stable solution, in which no player has an incentive to deviate unilaterally, whereas a Pareto equilibrium is a cooperative dominating solution, in which there is no way to improve one player's performance without harming another's. In general, the two concepts do not coincide. Following the general presentation of power-allocation games in @cite_11 @cite_31 , an abundance of work can be found on the subject.
{ "cite_N": [ "@cite_31", "@cite_11" ], "mid": [ "2116199025", "2143535483", "1539417924", "2072113937" ], "abstract": [ "We study in this paper a noncooperative approach for sharing resources of a common pool among users, wherein each user strives to maximize its own utility. The optimality notion is then a Nash equilibrium. First, we present a general framework of systems wherein a Nash equilibrium is Pareto inefficient, which are similar to the 'tragedy of the commons' in economics. As examples that fit in the above framework, we consider noncooperative flow-control problems in communication networks where each user decides its throughput to optimize its own utility. As such a utility, we first consider the power which is defined as the throughput divided by the expected end-to-end packet delay, and then consider another utility of additive costs. For both utilities, we establish the non-efficiency of the Nash equilibria.", "We address the problem of spectrum pricing in a cognitive radio network where multiple primary service providers compete with each other to offer spectrum access opportunities to the secondary users. By using an equilibrium pricing scheme, each of the primary service providers aims to maximize its profit under quality of service (QoS) constraint for primary users. We formulate this situation as an oligopoly market consisting of a few firms and a consumer. The QoS degradation of the primary services is considered as the cost in offering spectrum access to the secondary users. For the secondary users, we adopt a utility function to obtain the demand function. With a Bertrand game model, we analyze the impacts of several system parameters such as spectrum substitutability and channel quality on the Nash equilibrium (i.e., equilibrium pricing adopted by the primary services). We present distributed algorithms to obtain the solution for this dynamic game. 
The stability of the proposed dynamic game algorithms in terms of convergence to the Nash equilibrium is studied. However, the Nash equilibrium is not efficient in the sense that the total profit of the primary service providers is not maximized. An optimal solution to gain the highest total profit can be obtained. A collusion can be established among the primary services so that they gain higher profit than that for the Nash equilibrium. However, since one or more of the primary service providers may deviate from the optimal solution, a punishment mechanism may be applied to the deviating primary service provider. A repeated game among primary service providers is formulated to show that the collusion can be maintained if all of the primary service providers are aware of this punishment mechanism, and therefore, properly weight their profits to be obtained in the future.", "So far, most equilibrium concepts in game theory require that the rewards and actions of the other agents are known and or observed by all agents. However, in real life problems, agents are generally faced with situations where they only have partial or no knowledge about their environment and the other agents evolving in it. In this context, all an agent can do is reasoning about its own payoffs and consequently, cannot rely on classical equilibria through deliberation, which requires full knowledge and observability of the other agents. To palliate to this difficulty, we introduce the satisfaction principle from which an equilibrium can arise as the result of the agents' individual learning experiences. We define such an equilibrium and then we present different algorithms that can be used to reach it. 
Finally, we present experimental results that show that using learning strategies based on this specific equilibrium, agents will generally coordinate themselves on a Pareto-optimal joint strategy, that is not always a Nash equilibrium, even though each agent is individually rational, in the sense that they try to maximize their own satisfaction.", "In the traffic assignment problem, first proposed by Wardrop in 1952, commuters select the shortest available path to travel from their origins to their destinations. We study a generalization of this problem in which competitors, who may control a nonnegligible fraction of the total flow, ship goods across a network. This type of games, usually referred to as atomic games, readily applies to situations in which the competing freight companies have market power. Other applications include intelligent transportation systems, competition among telecommunication network service providers, and scheduling with flexible machines. Our goal is to determine to what extent these systems can benefit from some form of coordination or regulation. We measure the quality of the outcome of the game without centralized control by computing the worst-case inefficiency of Nash equilibria. The main conclusion is that although self-interested competitors will not achieve a fully efficient solution from the system's point of view, the loss is not too severe. We show how to compute several bounds for the worst-case inefficiency that depend on the characteristics of cost functions and on the market structure in the game. In addition, building upon the work of Catoni and Pallotino, we show examples in which market aggregation (or collusion) adversely impacts the aggregated competitors, even though their market power increases. For example, Nash equilibria of atomic network games may be less efficient than the corresponding Wardrop equilibria. 
When competitors are completely symmetric, we provide a characterization of the Nash equilibrium using a potential function, and prove that this counterintuitive phenomenon does not arise. Finally, we study a pricing mechanism that elicits more coordination from the players by reducing the worst-case inefficiency of Nash equilibria." ] }
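The Nash-versus-Pareto distinction recalled in the related-work paragraph above can be made concrete with a minimal two-player game. The payoff matrix below is a hypothetical prisoner's-dilemma instance (not taken from the cited works); its unique Nash equilibrium is Pareto-inefficient, exactly the "tragedy of the commons" situation described in the abstracts.

```python
from itertools import product

# Hypothetical 2-player, 2-action game (prisoner's-dilemma payoffs).
# payoff[(a1, a2)] = (utility of player 1, utility of player 2)
payoff = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 4),
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(a1, a2):
    """No player can improve by deviating unilaterally."""
    u1, u2 = payoff[(a1, a2)]
    ok1 = all(payoff[(b, a2)][0] <= u1 for b in actions)
    ok2 = all(payoff[(a1, b)][1] <= u2 for b in actions)
    return ok1 and ok2

def is_pareto(a1, a2):
    """No other profile improves one player without harming the other."""
    u1, u2 = payoff[(a1, a2)]
    for b1, b2 in product(actions, actions):
        v1, v2 = payoff[(b1, b2)]
        if v1 >= u1 and v2 >= u2 and (v1 > u1 or v2 > u2):
            return False
    return True

nash = [p for p in product(actions, actions) if is_nash(*p)]
pareto = [p for p in product(actions, actions) if is_pareto(*p)]
print(nash)    # [('D', 'D')] -- the unique Nash equilibrium
print(pareto)  # the Pareto-optimal profiles; ('D', 'D') is not among them
```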
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
In particular, the utility generally considered in those articles is justified in @cite_4 , where the author describes "a widely applicable model from first principles". Conditions are derived under which the utility yields non-trivial Nash equilibria (i.e., users actually transmit at the equilibrium). The utility consisting of the throughput-to-power ratio (detailed in Sec. ) is shown to satisfy these conditions. In addition, it possesses a reliability property, in the sense that transmission occurs at non-negligible rates at the equilibrium. This kind of utility function was introduced in earlier works with an economic leaning @cite_13 @cite_5 .
{ "cite_N": [ "@cite_13", "@cite_5", "@cite_4" ], "mid": [ "2148911784", "2057838148", "2116199025", "2022592042" ], "abstract": [ "A game-theoretic model for studying power control in multicarrier code-division multiple-access systems is proposed. Power control is modeled as a noncooperative game in which each user decides how much power to transmit over each carrier to maximize its own utility. The utility function considered here measures the number of reliable bits transmitted over all the carriers per joule of energy consumed and is particularly suitable for networks where energy efficiency is important. The multidimensional nature of users' strategies and the nonquasi-concavity of the utility function make the multicarrier problem much more challenging than the single-carrier or throughput-based-utility case. It is shown that, for all linear receivers including the matched filter, the decorrelator, and the minimum-mean-square-error detector, a user's utility is maximized when the user transmits only on its \"best\" carrier. This is the carrier that requires the least amount of power to achieve a particular target signal-to-interference-plus-noise ratio at the output of the receiver. The existence and uniqueness of Nash equilibrium for the proposed power control game are studied. In particular, conditions are given that must be satisfied by the channel gains for a Nash equilibrium to exist, and the distribution of the users among the carriers at equilibrium is characterized. In addition, an iterative and distributed algorithm for reaching the equilibrium (when it exists) is presented. 
It is shown that the proposed approach results in significant improvements in the total utility achieved at equilibrium compared with a single-carrier system and also to a multicarrier system in which each user maximizes its utility over each carrier independently", "Power allocation across users in two adjacent cells is studied for a code-division multiple access (CDMA) data service. The forward link is considered and cells are modeled as one-dimensional with uniformly distributed users and orthogonal signatures within each cell. Each user is assumed to have a utility function that describes the user's received utility, or willingness to pay, for a received signal-to-interference-plus-noise ratio (SINR). The objective is to allocate the transmitted power to maximize the total utility summed over all users subject to power constraints in each cell. It is first shown that this optimization can be achieved by a pricing scheme in which each base station announces a price per unit transmitted power to the users, and each user requests power to maximize individual surplus (utility minus cost). Setting prices to maximize total revenue over both cells is also considered, and it is shown that, in general, the solution is different from the one obtained by maximizing total utility. Conditions are given for which independent optimization in each cell, which leads to a Nash equilibrium (NE), is globally optimal. It is shown that, in general, coordination between the two cells is needed to achieve the maximum utility or revenue.", "We study in this paper a noncooperative approach for sharing resources of a common pool among users, wherein each user strives to maximize its own utility. The optimality notion is then a Nash equilibrium. First, we present a general framework of systems wherein a Nash equilibrium is Pareto inefficient, which are similar to the 'tragedy of the commons' in economics. 
As examples that fit in the above framework, we consider noncooperative flow-control problems in communication networks where each user decides its throughput to optimize its own utility. As such a utility, we first consider the power which is defined as the throughput divided by the expected end-to-end packet delay, and then consider another utility of additive costs. For both utilities, we establish the non-efficiency of the Nash equilibria.", "We consider a multi-cell wireless network with a large number of users. Each user selfishly chooses the Base Station (BS) that gives it the best throughput (utility), and each BS allocates its resource by some simple scheduling policy. First we consider two cases: (1) BS allocates the same time to its users; (2) BS allocates the same throughput to its users. It turns out that, combined with users' selfish behavior, case (1) results in a single Nash Equilibrium (NE), which achieves system-wide Proportional Fairness. On the other hand, case (2) results in many possible Nash Equilibria, some of which are very inefficient. Next, we extend (1) to the case where the users have general concave utility functions. It is shown that the if each BS performs intra- cell optimization, the total utility of all users is maximized at NE. This suggests that under our model, the task of joining the \";correct\"; BS can be left to individual users, leading to a distributed solution." ] }
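The throughput-to-power utility discussed above can be sketched numerically for a single user. The efficiency function f(γ) = (1 − e^{−γ})^M and the parameter values below are common illustrative choices, not taken from the cited papers. The maximizer of u(p) = f(γ(p))/p satisfies the first-order condition f′(γ)γ = f(γ), which for this f reads Mγ = e^γ − 1: a fixed target SINR, independent of the channel gain.

```python
import numpy as np

# Hypothetical single-user sketch of the throughput-to-power utility
# u(p) = f(gamma(p)) / p with f(g) = (1 - exp(-g))**M.
# M, h, sigma2 below are illustrative values.
M = 80          # packet length (illustrative)
h = 0.1         # channel gain
sigma2 = 1e-2   # noise-plus-interference power

def gamma(p):
    return h * p / sigma2

def utility(p):
    return (1.0 - np.exp(-gamma(p))) ** M / p

p_grid = np.linspace(1e-3, 10.0, 200000)
p_star = p_grid[np.argmax(utility(p_grid))]
g_star = gamma(p_star)

# Check the first-order condition f'(g) * g = f(g), i.e. M*g = exp(g) - 1.
print(p_star, g_star)
print(np.isclose(M * g_star, np.exp(g_star) - 1.0, rtol=1e-2))
```

The target SINR g_star depends only on M, not on h or sigma2; the channel only scales the power needed to reach it.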
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
Unfortunately, Nash equilibria often lead to inefficient allocations, in the sense that higher rates (Pareto equilibria) could be obtained for all mobiles if they cooperated. To alleviate this problem, @cite_5 augments the non-cooperative game with a pricing strategy that induces users to transmit at a socially optimal rate, thereby obtaining communication at a Pareto equilibrium.
{ "cite_N": [ "@cite_5" ], "mid": [ "2072113937", "2116199025", "1993202891", "2133243407" ], "abstract": [ "In the traffic assignment problem, first proposed by Wardrop in 1952, commuters select the shortest available path to travel from their origins to their destinations. We study a generalization of this problem in which competitors, who may control a nonnegligible fraction of the total flow, ship goods across a network. This type of games, usually referred to as atomic games, readily applies to situations in which the competing freight companies have market power. Other applications include intelligent transportation systems, competition among telecommunication network service providers, and scheduling with flexible machines. Our goal is to determine to what extent these systems can benefit from some form of coordination or regulation. We measure the quality of the outcome of the game without centralized control by computing the worst-case inefficiency of Nash equilibria. The main conclusion is that although self-interested competitors will not achieve a fully efficient solution from the system's point of view, the loss is not too severe. We show how to compute several bounds for the worst-case inefficiency that depend on the characteristics of cost functions and on the market structure in the game. In addition, building upon the work of Catoni and Pallotino, we show examples in which market aggregation (or collusion) adversely impacts the aggregated competitors, even though their market power increases. For example, Nash equilibria of atomic network games may be less efficient than the corresponding Wardrop equilibria. When competitors are completely symmetric, we provide a characterization of the Nash equilibrium using a potential function, and prove that this counterintuitive phenomenon does not arise. 
Finally, we study a pricing mechanism that elicits more coordination from the players by reducing the worst-case inefficiency of Nash equilibria.", "We study in this paper a noncooperative approach for sharing resources of a common pool among users, wherein each user strives to maximize its own utility. The optimality notion is then a Nash equilibrium. First, we present a general framework of systems wherein a Nash equilibrium is Pareto inefficient, which are similar to the 'tragedy of the commons' in economics. As examples that fit in the above framework, we consider noncooperative flow-control problems in communication networks where each user decides its throughput to optimize its own utility. As such a utility, we first consider the power which is defined as the throughput divided by the expected end-to-end packet delay, and then consider another utility of additive costs. For both utilities, we establish the non-efficiency of the Nash equilibria.", "This paper addresses the behavior of the selfish service providers in the form of IP sinks providing high-speed IP access. Service providers compete for mobile users by adjusting the price they charge for their services. Their aim is to maximize the total collected profit. Mobile users are also selfish choosing the service provider offering the best quality of service and price combination. As the service providers come closer to each other, we show the existence of three critical phase transitions in their behavior. Depending on the separation between them, there may exists a unique Nash equilibrium, or a continuum of Nash equilibria, or no Nash equilibrium. We completely characterize the pricing strategies of service providers at Nash equilibria. 
We also prove that the total social welfare in the presence of selfish providers is close to the maximum social welfare that can reached through non- selfish optimization.", "The price of anarchy (POA) is a worst-case measure of the inefficiency of selfish behavior, defined as the ratio of the objective function value of a worst Nash equilibrium of a game and that of an optimal outcome. This measure implicitly assumes that players successfully reach some Nash equilibrium. This drawback motivates the search for inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash and correlated equilibria; or to sequences of outcomes generated by natural experimentation strategies, such as successive best responses or simultaneous regret-minimization. We prove a general and fundamental connection between the price of anarchy and its seemingly stronger relatives in classes of games with a sum objective. First, we identify a \"canonical sufficient condition\" for an upper bound of the POA for pure Nash equilibria, which we call a smoothness argument. Second, we show that every bound derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of regret-minimizing players (or \"price of total anarchy\"). Smoothness arguments also have automatic implications for the inefficiency of approximate and Bayesian-Nash equilibria and, under mild additional assumptions, for bicriteria bounds and for polynomial-length best-response sequences. We also identify classes of games --- most notably, congestion games with cost functions restricted to an arbitrary fixed set --- that are tight, in the sense that smoothness arguments are guaranteed to produce an optimal worst-case upper bound on the POA, even for the smallest set of interest (pure Nash equilibria). 
Byproducts of our proof of this result include the first tight bounds on the POA in congestion games with non-polynomial cost functions, and the first structural characterization of atomic congestion games that are universal worst-case examples for the POA." ] }
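The effect of pricing described above, moving the non-cooperative equilibrium toward the social optimum, can be sketched with a hypothetical quadratic congestion game (not the CDMA model of the cited works): best-response dynamics converge to the Nash equilibrium, and a suitable linear price on power restores the welfare-maximizing operating point.

```python
# Gross utility of user i: u_i = p_i * (1 - p_1 - p_2); price lam per unit power.
# All functional forms and the price value are illustrative assumptions.

def best_response(p_other, lam):
    # argmax_p  p*(1 - p - p_other) - lam*p  ->  p = (1 - p_other - lam) / 2
    return max(0.0, (1.0 - p_other - lam) / 2.0)

def nash(lam, iters=200):
    """Simultaneous best-response iteration (a contraction here)."""
    p1 = p2 = 0.0
    for _ in range(iters):
        p1, p2 = best_response(p2, lam), best_response(p1, lam)
    return p1, p2

def welfare(p1, p2):
    """Sum of gross utilities; the price payments are pure transfers."""
    return (p1 + p2) * (1.0 - p1 - p2)

p_free = nash(0.0)     # un-priced equilibrium: p_i = 1/3, welfare 2/9
p_priced = nash(0.25)  # with price 1/4: p_i = 1/4, welfare 1/4 (the optimum)
print(p_free, welfare(*p_free))
print(p_priced, welfare(*p_priced))
```

Without the price, each user over-transmits relative to the social optimum; the price internalizes the congestion externality, which is the mechanism the cited pricing schemes exploit.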
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
In @cite_17 , defining the utility, as advised in @cite_4 , as the ratio of throughput to transmission power, the authors establish the existence and uniqueness of a Nash equilibrium for a CDMA system. They extend this work to the case of multiple carriers in @cite_36 . In particular, it is shown that each user selects, and transmits only over, its best carrier. As far as attenuation is concerned, both @cite_17 and @cite_36 are restricted to flat fading (each carrier being flat-fading in the latter). However, wireless transmissions generally suffer from multipath effects and thus become frequency-selective. The goal of this paper is to determine the influence of the number of paths (i.e., the frequency selectivity of the channel) on the performance of power allocation (PA).
{ "cite_N": [ "@cite_36", "@cite_4", "@cite_17" ], "mid": [ "2100366181", "2168150810", "2063449729", "2114927595" ], "abstract": [ "For pt.I see ibid., vol.44, no.7, p.2796-815 (1998). In multiaccess wireless systems, dynamic allocation of resources such as transmit power, bandwidths, and rates is an important means to deal with the time-varying nature of the environment. We consider the problem of optimal resource allocation from an information-theoretic point of view. We focus on the multiaccess fading channel with Gaussian noise, and define two notions of capacity depending on whether the traffic is delay-sensitive or not. In the present paper, we introduce a notion of delay-limited capacity which is the maximum rate achievable with delay independent of how slow the fading is. We characterize the delay-limited capacity region of the multiaccess fading channel and the associated optimal resource allocation schemes. We show that successive decoding is optimal, and the optimal decoding order and power allocation can be found explicitly as a function of the fading states; this is a consequence of an underlying polymatroid structure that we exploit.", "In this contribution, the performance of an uplink CDMA system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). We consider the realistic case of frequency selective channels. This scenario illustrates the case of decentralized schemes and aims at reducing the downlink signaling overhead. Various receivers are considered, namely the Matched filter, the MMSE filter and the optimum filter. The goal of this paper is to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large. To that end we combine two asymptotic methodologies. 
The first is asymptotic random matrix theory which allows us to obtain explicit expressions for the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games along with the Wardrop equilibrium concept which allows us to compute good approximations of the Nash equilibrium as the number of mobiles grow.", "We determine the optimal adaptive rate and power control strategies to maximize the total throughput in a multirate code-division multiple-access system. The total throughput of the system provides a meaningful baseline in the form of an upper bound to the throughput achievable with additional restrictions imposed on the system to guarantee fairness. Peak power and instantaneous bit energy-to-noise spectral density constraints are assumed at the transmitter with matched filter detection at the receiver. Our results apply to frequency selective fading in so far as the bit energy-to-equivalent noise power spectral density ratio definition can be used as the quality-of-service metric. The bit energy-to-equivalent noise power spectral density ratio metric coincides with the bit-error rate metric under the assumption that the processing gains and the number of users are high enough so that self-interference can be neglected. We first obtain results for the case where the rates available to each user are unrestricted, and we then consider the more practical scenario where each user has a finite discrete set of rates. An upper bound to the maximum average throughput is obtained and evaluated for Rayleigh fading. Suboptimal low-complexity schemes are considered to illustrate the performance tradeoffs between optimality and complexity. 
We also show that the optimum rate and power adaptation scheme with unconstrained rates is in fact just a rate adaptation scheme with fixed transmit powers, and it performs significantly better than a scheme that uses power adaptation alone.", "We consider networks consisting of nodes with radios, and without any wired infrastructure, thus necessitating all communication to take place only over the shared wireless medium. The main focus of this paper is on the effect of fading in such wireless networks. We examine the attenuation regime where either the medium is absorptive, a situation which generally prevails, or the path loss exponent is greater than 3. We study the transport capacity, defined as the supremum over the set of feasible rate vectors of the distance weighted sum of rates. We consider two assumption sets. Under the first assumption set, which essentially requires only a mild time average type of bound on the fading process, we show that the transport capacity can grow no faster than O(n), where n denotes the number of nodes, even when the channel state information (CSI) is available noncausally at both the transmitters and the receivers. This assumption includes common models of stationary ergodic channels; constant, frequency-selective channels; flat, rapidly varying channels; and flat slowly varying channels. In the second assumption set, which essentially features an independence, time average of expectation, and nonzeroness condition on the fading process, we constructively show how to achieve transport capacity of spl Omega (n) even when the CSI is unknown to both the transmitters and the receivers, provided that every node has an appropriately nearby node. This assumption set includes common models of independent and identically distributed (i.i.d.) channels; constant, flat channels; and constant, frequency-selective channels. The transport capacity is achieved by nodes communicating only with neighbors, and using only point-to-point coding. 
The thrust of these results is that the multihop strategy, toward which much protocol development activity is currently targeted, is appropriate for fading environments. The low attenuation regime is open." ] }
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
This work extends @cite_17 to the case of frequency-selective fading, in the framework of multi-user systems. We do not consider multiple carriers, as in @cite_36 , and the results are very different from those obtained in that work. The extension is not trivial and involves advanced results on random matrices with non-equal variances due to Girko @cite_40 , whereas classical results rely on the work of Silverstein @cite_14 . Part of this work was previously published as a conference paper @cite_19 .
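The decentralized equilibrium idea above can be illustrated with a much simpler game than the paper's CDMA setting: a target-SINR power-control game whose best-response iteration is a standard interference-function fixed point. This is a minimal sketch under assumed toy values (the two-user gain matrix, noise level, and SINR target are illustrative, not from the paper):

```python
def best_response_power_control(gains, noise, target_sinr, iters=200):
    """Jacobi best-response iteration for a target-SINR power game.

    gains[i][j]: channel gain from transmitter j to receiver i
    (assumed toy uplink model). Each user sets the minimum power
    meeting its SINR target given the others' current powers; when
    the target is feasible, the iteration converges to the unique
    Nash equilibrium of this game."""
    n = len(gains)
    p = [1.0] * n
    for _ in range(iters):
        p = [target_sinr * (noise + sum(gains[i][j] * p[j]
                                        for j in range(n) if j != i)) / gains[i][i]
             for i in range(n)]
    return p

def sinr(gains, noise, p, i):
    """Achieved SINR of user i under power vector p."""
    interference = noise + sum(gains[i][j] * p[j]
                               for j in range(len(p)) if j != i)
    return gains[i][i] * p[i] / interference
```

At the fixed point every user meets its target exactly, which is the hallmark of this equilibrium: no user can lower its power without dropping below the target, and no user benefits from raising it.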
{ "cite_N": [ "@cite_14", "@cite_36", "@cite_19", "@cite_40", "@cite_17" ], "mid": [ "2132158614", "2114927595", "2100366181", "2168150810" ], "abstract": [ "This paper extends Khatri (1964, 1969) distribution of the largest eigenvalue of central complex Wishart matrices to the noncentral case. It then applies the resulting new statistical results to obtain closed-form expressions for the outage probability of multiple-input-multiple-output (MIMO) systems employing maximal ratio combining (known also as \"beamforming\" systems) and operating over Rician-fading channels. When applicable these expressions are compared with special cases previously reported in the literature dealing with the performance of (1) MIMO systems over Rayleigh-fading channels and (2) single-input-multiple-output (SIMO) systems over Rician-fading channels. As a double check these analytical results are validated by Monte Carlo simulations and as an illustration of the mathematical formalism some numerical examples for particular cases of interest are plotted and discussed. These results show that, given a fixed number of total antenna elements and under the same scattering condition (1) SIMO systems are equivalent to multiple-input-single-output systems and (2) it is preferable to distribute the number of antenna elements evenly between the transmitter and the receiver for a minimum outage probability performance.", "We consider networks consisting of nodes with radios, and without any wired infrastructure, thus necessitating all communication to take place only over the shared wireless medium. The main focus of this paper is on the effect of fading in such wireless networks. We examine the attenuation regime where either the medium is absorptive, a situation which generally prevails, or the path loss exponent is greater than 3. We study the transport capacity, defined as the supremum over the set of feasible rate vectors of the distance weighted sum of rates. We consider two assumption sets. 
Under the first assumption set, which essentially requires only a mild time average type of bound on the fading process, we show that the transport capacity can grow no faster than O(n), where n denotes the number of nodes, even when the channel state information (CSI) is available noncausally at both the transmitters and the receivers. This assumption includes common models of stationary ergodic channels; constant, frequency-selective channels; flat, rapidly varying channels; and flat slowly varying channels. In the second assumption set, which essentially features an independence, time average of expectation, and nonzeroness condition on the fading process, we constructively show how to achieve transport capacity of spl Omega (n) even when the CSI is unknown to both the transmitters and the receivers, provided that every node has an appropriately nearby node. This assumption set includes common models of independent and identically distributed (i.i.d.) channels; constant, flat channels; and constant, frequency-selective channels. The transport capacity is achieved by nodes communicating only with neighbors, and using only point-to-point coding. The thrust of these results is that the multihop strategy, toward which much protocol development activity is currently targeted, is appropriate for fading environments. The low attenuation regime is open.", "For pt.I see ibid., vol.44, no.7, p.2796-815 (1998). In multiaccess wireless systems, dynamic allocation of resources such as transmit power, bandwidths, and rates is an important means to deal with the time-varying nature of the environment. We consider the problem of optimal resource allocation from an information-theoretic point of view. We focus on the multiaccess fading channel with Gaussian noise, and define two notions of capacity depending on whether the traffic is delay-sensitive or not. 
In the present paper, we introduce a notion of delay-limited capacity which is the maximum rate achievable with delay independent of how slow the fading is. We characterize the delay-limited capacity region of the multiaccess fading channel and the associated optimal resource allocation schemes. We show that successive decoding is optimal, and the optimal decoding order and power allocation can be found explicitly as a function of the fading states; this is a consequence of an underlying polymatroid structure that we exploit.", "In this contribution, the performance of an uplink CDMA system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). We consider the realistic case of frequency selective channels. This scenario illustrates the case of decentralized schemes and aims at reducing the downlink signaling overhead. Various receivers are considered, namely the Matched filter, the MMSE filter and the optimum filter. The goal of this paper is to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large. To that end we combine two asymptotic methodologies. The first is asymptotic random matrix theory which allows us to obtain explicit expressions for the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games along with the Wardrop equilibrium concept which allows us to compute good approximations of the Nash equilibrium as the number of mobiles grow." ] }
0707.0050
2164280180
In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows.
Moreover, in addition to the linear filters studied in @cite_17 , we study the enhancements provided by the optimum and successive interference cancellation filters.
{ "cite_N": [ "@cite_17" ], "mid": [ "2168729028", "2004559848", "2795172492", "2055294446" ], "abstract": [ "Several contributions have been made so far to develop optimal multichannel linear filtering approaches and show their ability to reduce the acoustic noise. However, there has not been a clear unifying theoretical analysis of their performance in terms of both noise reduction and speech distortion. To fill this gap, we analyze the frequency-domain (non-causal) multichannel linear filtering for noise reduction in this paper. For completeness, we consider the noise reduction constrained optimization problem that leads to the parameterized multichannel non-causal Wiener filter (PMWF). Our contribution is fivefold. First, we formally show that the minimum variance distortionless response (MVDR) filter is a particular case of the PMWF by properly formulating the constrained optimization problem of noise reduction. Second, we propose new simplified expressions for the PMWF, the MVDR, and the generalized sidelobe canceller (GSC) that depend on the signals' statistics only. In contrast to earlier works, these expressions are explicitly independent of the channel transfer function ratios. Third, we quantify the theoretical gains and losses in terms of speech distortion and noise reduction when using the PWMF by establishing new simplified closed-form expressions for three performance measures, namely, the signal distortion index, the noise reduction factor (originally proposed in the paper titled ldquoNew insights into the noise reduction Wiener filter,rdquo by J. Chen (IEEE Transactions on Audio, Speech, and Language Processing, Vol. 15, no. 4, pp. 1218-1234, Jul. 2006) to analyze the single channel time-domain Wiener filter), and the output signal-to-noise ratio (SNR). Fourth, we analyze the effects of coherent and incoherent noise in addition to the benefits of utilizing multiple microphones. 
Fifth, we propose a new proof for the a posteriori SNR improvement achieved by the PMWF. Finally, we provide some simulations results to corroborate the findings of this work.", "In this work, we propose the construction of two-channel wavelet filter banks for analyzing functions defined on the vertices of any arbitrary finite weighted undirected graph. These graph based functions are referred to as graph-signals as we build a framework in which many concepts from the classical signal processing domain, such as Fourier decomposition, signal filtering and downsampling can be extended to graph domain. Especially, we observe a spectral folding phenomenon in bipartite graphs which occurs during downsampling of these graphs and produces aliasing in graph signals. This property of bipartite graphs, allows us to design critically sampled two-channel filter banks, and we propose quadrature mirror filters (referred to as graph-QMF) for bipartite graph which cancel aliasing and lead to perfect reconstruction. For arbitrary graphs we present a bipartite subgraph decomposition which produces an edge-disjoint collection of bipartite subgraphs. Graph-QMFs are then constructed on each bipartite subgraph leading to “multi-dimensional” separable wavelet filter banks on graphs. Our proposed filter banks are critically sampled and we state necessary and sufficient conditions for orthogonality, aliasing cancellation and perfect reconstruction. The filter banks are realized by Chebychev polynomial approximations.", "In millimeter wave (mm-wave) massive multiple-input multiple-output (MIMO) systems, acquiring accurate channel state information is essential for efficient beamforming (BF) and multiuser interference cancellation, which is a challenging task since a low signal-to-noise ratio is encountered before BF in large antenna arrays. 
The mm-wave channel exhibits a 3-D clustered structure in the virtual angle of arrival (AOA), angle of departure (AOD), and delay domain that is imposed by the effect of power leakage, angular spread, and cluster duration. We extend the approximate message passing (AMP) with a nearest neighbor pattern learning algorithm for improving the attainable channel estimation performance, which adaptively learns and exploits the clustered structure in the 3-D virtual AOA-AOD-delay domain. The proposed method is capable of approaching the performance bound described by the state evolution based on vector AMP framework, and our simulation results verify its superiority in mm-wave systems associated with a broad bandwidth.", "In this paper, we analyze the feasibility of linear interference alignment (IA) for multi-input-multi-output (MIMO) interference broadcast channel (MIMO-IBC) with constant coefficients. We pose and prove the necessary conditions of linear IA feasibility for general MIMO-IBC. Except for the proper condition, we find another necessary condition to ensure a kind of irreducible interference to be eliminated. We then prove the necessary and sufficient conditions for a special class of MIMO-IBC, where the numbers of antennas are divisible by the number of data streams per user. Since finding an invertible Jacobian matrix is crucial for the sufficiency proof, we first analyze the impact of sparse structure and repeated structure of the Jacobian matrix. Considering that for the MIMO-IBC the sub-matrices of the Jacobian matrix corresponding to the transmit and receive matrices have different repeated structure, we find an invertible Jacobian matrix by constructing the two sub-matrices separately. We show that for the MIMO-IBC where each user has one desired data stream, a proper system is feasible. 
For symmetric MIMO-IBC, we provide proper but infeasible region of antenna configurations by analyzing the difference between the necessary conditions and the sufficient conditions of linear IA feasibility." ] }
0707.0546
2952386894
We study the problem of assigning jobs to applicants. Each applicant has a weight and provides a preference list ranking a subset of the jobs. A matching M is popular if there is no other matching M' such that the weight of the applicants who prefer M' over M exceeds the weight of those who prefer M over M'. This paper gives efficient algorithms to find a popular matching if one exists.
Following the publication of the work of Abraham et al. @cite_11 , the topic of unweighted popular matchings has been further explored in many interesting directions. Suppose we want to go from an arbitrary matching to some popular matching by a sequence of matchings, each more popular than the previous; Abraham and Kavitha @cite_0 showed that there is always a sequence of length at most two and gave a linear-time algorithm to find it. One of the main drawbacks of popular matchings is that they may not always exist; Mahdian @cite_10 nicely addressed this issue by showing that the probability that a random instance admits a popular matching depends on the ratio @math , and exhibits a phase transition around @math . Motivated by a house allocation application, Manlove and Sng @cite_1 gave fast algorithms for popular assignments with capacities on the jobs.
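The popularity criterion used throughout these works reduces to a head-to-head vote between two matchings. A minimal sketch of that comparison for the unweighted, strict-preference case (the preference lists and matchings below are illustrative):

```python
def prefers(pref, job_a, job_b):
    """True if an applicant with ranked list `pref` strictly prefers
    job_a to job_b. An unmatched applicant (job None) is assumed to
    rank any job on their list above being unmatched."""
    if job_a == job_b or job_a is None:
        return False
    if job_b is None:
        return job_a in pref
    if job_a not in pref:
        return False
    if job_b not in pref:
        return True
    return pref.index(job_a) < pref.index(job_b)

def head_to_head(prefs, m1, m2):
    """Votes (for_m1, for_m2) when applicants compare matchings
    m1 and m2 (dicts applicant -> job)."""
    for_m1 = sum(prefers(prefs[a], m1.get(a), m2.get(a)) for a in prefs)
    for_m2 = sum(prefers(prefs[a], m2.get(a), m1.get(a)) for a in prefs)
    return for_m1, for_m2
```

M is popular when no M' wins this vote against it; verifying popularity efficiently is exactly where the structural characterizations of @cite_11 come in, since enumerating all rivals is exponential.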
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_10", "@cite_11" ], "mid": [ "1529648317", "2096797149", "2605027943", "2547342773" ], "abstract": [ "We investigate the following problem: given a set of jobs and a set of people with preferences over the jobs, what is the optimal way of matching people to jobs? Here we consider the notion of popularity. A matching M is popular if there is no matching M' such that more people prefer M' to M than the other way around. Determining whether a given instance admits a popular matching and, if so, finding one, was studied in [2]. If there is no popular matching, a reasonable substitute is a matching whose unpopularity is bounded. We consider two measures of unpopularity - unpopularity factor denoted by u(M) and unpopularity margin denoted by g(M). McCutchen recently showed that computing a matching M with the minimum value of u(M) or g(M) is NP-hard, and that if G does not admit a popular matching, then we have u(M) ≥ 2 for all matchings M in G. Here we show that a matching M that achieves u(M) = 2 can be computed in @math time (where m is the number of edges in G and n is the number of nodes) provided a certain graph H admits a matching that matches all people. We also describe a sequence of graphs: H = H_2, H_3, ..., H_k such that if H_k admits a matching that matches all people, then we can compute in @math time a matching M such that u(M) ≤ k - 1 and @math . Simulation results suggest that our algorithm finds a matching with low unpopularity.", "We study the problem of matching applicants to jobs under one-sided preferences; that is, each applicant ranks a non-empty subset of jobs under an order of preference, possibly involving ties. A matching M is said to be more popular than T if the applicants that prefer M to T outnumber those that prefer T to M. A matching is said to be popular if there is no matching more popular than it. 
Equivalently, a matching M is popular if ϕ(M,T) ≥ ϕ(T,M) for all matchings T, where ϕ(X,Y) is the number of applicants that prefer X to Y. Previously studied solution concepts based on the popularity criterion are either not guaranteed to exist for every instance (e.g., popular matchings) or are NP-hard to compute (e.g., least unpopular matchings). This paper addresses this issue by considering mixed matchings. A mixed matching is simply a probability distribution over matchings in the input graph. The function ϕ that compares two matchings generalizes in a natural manner to mixed matchings by taking expectation. A mixed matching P is popular if ϕ(P,Q) ≥ ϕ(Q,P) for all mixed matchings Q. We show that popular mixed matchings always exist and we design polynomial time algorithms for finding them. Then we study their efficiency and give tight bounds on the price of anarchy and price of stability of the popular matching problem.", "Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Motivated by the fact that in several cases a matching in a graph is stable if and only if it is produced by a greedy algorithm, we study the problem of computing a maximum weight greedy matching on weighted graphs, termed GREEDY-MATCHING. In wide contrast to the maximum weight matching problem, for which many efficient algorithms are known, we prove that GREEDYMATCHING is strongly NP-hard and APX-complete, and thus it does not admit a PTAS unless P=NP, even on graphs with maximum degree at most 3 and with at most three different integer edge weights. Furthermore we prove that GREEDYMATCHING is strongly NP-hard if the input graph is in addition bipartite. Moreover we consider three natural parameters of the problem, for which we establish a sharp threshold behavior between NP-hardness and computational tractability. 
On the positive side, we present a randomized approximation algorithm (RGMA) for GREEDYMATCHING on a special class of weighted graphs, called bush graphs. We highlight an unexpected connection between RGMA and the approximation of maximum cardinality matching in unweighted graphs via randomized greedy algorithms. We show that, if the approximation ratio of RGMA is ρ, then for every ϵ > 0 the randomized MRG algorithm of ( 1995) gives a (ρ - ϵ)-approximation for the maximum cardinality matching. We conjecture that a tight bound for ρ is 2/3; we prove our conjecture true for four subclasses of bush graphs. Proving a tight bound for the approximation ratio of MRG on unweighted graphs (and thus also proving a tight value for ρ) is a long-standing open problem (Poloczek and Szegedy 2012). This unexpected relation of our RGMA algorithm with the MRG algorithm may provide new insights for solving this problem." ] }
0707.1053
1672175433
We present a deterministic exploration mechanism for sponsored search auctions, which enables the auctioneer to learn the relevance scores of advertisers, and allows advertisers to estimate the true value of clicks generated at the auction site. This exploratory mechanism deviates only minimally from the mechanism being currently used by Google and Yahoo! in the sense that it retains the same pricing rule, similar ranking scheme, as well as, similar mathematical structure of payoffs. In particular, the estimations of the relevance scores and true-values are achieved by providing a chance to lower ranked advertisers to obtain better slots. This allows the search engine to potentially test a new pool of advertisers, and correspondingly, enables new advertisers to estimate the value of clicks leads generated via the auction. Both these quantities are unknown a priori, and their knowledge is necessary for the auction to operate efficiently. We show that such an exploration policy can be incorporated without any significant loss in revenue for the auctioneer. We compare the revenue of the new mechanism to that of the standard mechanism at their corresponding symmetric Nash equilibria and compute the cost of uncertainty, which is defined as the relative loss in expected revenue per impression. We also bound the loss in efficiency, as well as, in user experience due to exploration, under the same solution concept (i.e. SNE). Thus the proposed exploration mechanism learns the relevance scores while incorporating the incentive constraints from the advertisers who are selfish and are trying to maximize their own profits, and therefore, the exploration is essentially achieved via mechanism design. We also discuss variations of the new mechanism such as truthful implementations.
Moreover, as we discuss later in , the problem of designing a family of optimal exploratory mechanisms (which, for example, would provide the most information while minimizing the expected loss in revenue) is far from solved. The work in @cite_3 and in this paper provides just two instances of mechanism design that do provably well, but more work analyzing different aspects of exploratory mechanisms is necessary in this emerging field. Thus, to the best of our knowledge, we are among the first to formally study the problem of estimating relevance and valuations from an incentive as well as a learning-theory perspective, without deviating much from the settings of the mechanism currently in place.
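The pricing rule the mechanism retains is the standard rank-by-revenue generalized second price (GSP) rule: advertisers are ranked by relevance-weighted bid, and each slot winner pays the smallest per-click price that keeps it above the advertiser ranked next. A minimal sketch of that baseline (the bids, relevance scores, and slot count are illustrative; this is the standard GSP rule, not the paper's exploratory variant):

```python
def gsp_allocate(bids, relevance, num_slots):
    """Rank advertisers by relevance * bid and compute next-price
    (GSP) payments per click.

    bids, relevance: dicts advertiser -> value.
    Returns a list of (advertiser, price_per_click), one per slot."""
    ranked = sorted(bids, key=lambda a: relevance[a] * bids[a], reverse=True)
    outcome = []
    for i, adv in enumerate(ranked[:num_slots]):
        if i + 1 < len(ranked):
            nxt = ranked[i + 1]
            # smallest bid that would keep adv ranked above nxt
            price = relevance[nxt] * bids[nxt] / relevance[adv]
        else:
            price = 0.0  # no competitor below; zero reserve price assumed
        outcome.append((adv, price))
    return outcome
```

An exploratory variant perturbs this ranking to occasionally promote lower-ranked advertisers, which is what lets the auctioneer learn unknown relevance scores at a bounded revenue cost.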
{ "cite_N": [ "@cite_3" ], "mid": [ "2021734699", "2130359158", "1987352480", "1797933865" ], "abstract": [ "We consider the problem of designing a revenue-maximizing auction for a single item, when the values of the bidders are drawn from a correlated distribution. We observe that there exists an algorithm that finds the optimal randomized mechanism that runs in time polynomial in the size of the support. We leverage this result to show that in the oracle model introduced by Ronen and Saberi [FOCS'02], there exists a polynomial time truthful in expectation mechanism that provides a (1.5+ϵ)-approximation to the revenue achievable by an optimal truthful-in-expectation mechanism, and a polynomial time deterministic truthful mechanism that guarantees a 5/3 approximation to the revenue achievable by an optimal deterministic truthful mechanism. We show that the 5/3-approximation mechanism provides the same approximation ratio also with respect to the optimal truthful-in-expectation mechanism. This shows that the performance gap between truthful-in-expectation and deterministic mechanisms is relatively small. En route, we solve an open question of Mehta and Vazirani [EC'04]. Finally, we extend some of our results to the multi-item case, and show how to compute the optimal truthful-in-expectation mechanisms for bidders with more complex valuations.", "We study mechanism design in dynamic quasilinear environments where private information arrives over time and decisions are made over multiple periods. We make three contributions. First, we provide a necessary condition for incentive compatibility that takes the form of an envelope formula for the derivative of an agent's equilibrium expected payoff with respect to his current type. 
It combines the familiar marginal effect of types on payoffs with novel marginal effects of the current type on future ones that are captured by “impulse response functions.” The formula yields an expression for dynamic virtual surplus that is instrumental to the design of optimal mechanisms and to the study of distortions under such mechanisms. Second, we characterize the transfers that satisfy the envelope formula and establish a sense in which they are pinned down by the allocation rule (“revenue equivalence”). Third, we characterize perfect Bayesian equilibrium‐implementable allocation rules in Markov environments, which yields tractable sufficient conditions that facilitate novel applications. We illustrate the results by applying them to the design of optimal mechanisms for the sale of experience goods (“bandit auctions”).", "In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk mturk, ClickWorker clickworker, CrowdFlower crowdflower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is -- if the requester has a limited budget, how to design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although the previous work (Singer (2010) (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. 
Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve an approximation ratio of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e ≈ 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus, fully resolving this setting. Finally we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.", "We introduce a dynamic mechanism design problem in which the designer wants to offer for sale an item to an agent, and another item to the same agent at some point in the future. The agent's joint distribution of valuations for the two items is known, and the agent knows the valuation for the current item (but not for the one in the future). The designer seeks to maximize expected revenue, and the auction must be deterministic, truthful, and ex post individually rational. The optimum mechanism involves a protocol whereby the seller elicits the buyer's current valuation, and based on the bid makes two take-it-or-leave-it offers, one for now and one for the future. We show that finding the optimum deterministic mechanism in this situation --- arguably the simplest meaningful dynamic mechanism design problem imaginable --- is NP-hard. 
We also prove several positive results, among them a polynomial linear programming-based algorithm for the optimum randomized auction (even for many bidders and periods), and we show strong separations in revenue between non-adaptive, adaptive, and randomized auctions, even when the valuations in the two periods are uncorrelated. Finally, for the same problem in an environment in which contracts cannot be enforced, and thus perfection of equilibrium is necessary, we show that the optimum randomized mechanism requires multiple rounds of cheap talk-like interactions." ] }
0707.1099
1670687021
We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol.
The ``multiple correlated informants -- single recipient'' communication problem we consider in this paper is basically the well-known distributed source coding (DSC) problem. This problem was first considered by Slepian and Wolf @cite_11 for lossless compression of discrete random variables and by Wyner and Ziv @cite_24 for lossy distributed compression. However, these works only provided theoretical bounds on the compression, with no method of constructing practical codes that achieve the predicted theoretical bounds.
{ "cite_N": [ "@cite_24", "@cite_11" ], "mid": [ "2138256990", "2156567124", "2080257599", "2051707898" ], "abstract": [ "We address the problem of compressing correlated distributed sources, i.e., correlated sources which are not co-located or which cannot cooperate to directly exploit their correlation. We consider the related problem of compressing a source which is correlated with another source that is available only at the decoder. This problem has been studied in the information theory literature under the name of the Slepian-Wolf (1973) source coding problem for the lossless coding case, and as \"rate-distortion with side information\" for the lossy coding case. We provide a constructive practical framework based on algebraic trellis codes dubbed as DIstributed Source Coding Using Syndromes (DISCUS), that can be applicable in a variety of settings. Simulation results are presented for source coding of independent and identically distributed (i.i.d.) Gaussian sources with side information available at the decoder in the form of a noisy version of the source to be coded. Our results reveal the promise of this approach: using trellis-based quantization and coset construction, the performance of the proposed approach is 2-5 dB from the Wyner-Ziv (1976) bound.", "This paper deals with the problem of multicasting a set of discrete memoryless correlated sources (DMCS) over a cooperative relay network. Necessary conditions with cut-set interpretation are presented. A Joint source-Wyner-Ziv encoding sliding window decoding scheme is proposed, in which decoding at each receiver is done with respect to an ordered partition of other nodes. For each ordered partition a set of feasibility constraints is derived. Then, utilizing the submodular property of the entropy function and a novel geometrical approach, the results of different ordered partitions are consolidated, which lead to sufficient conditions for our problem. 
The proposed scheme achieves operational separation between source coding and channel coding. It is shown that sufficient conditions are indeed necessary conditions in two special cooperative networks, namely, Aref network and finite-field deterministic network. Also, in Gaussian cooperative networks, it is shown that reliable transmission of all DMCS whose Slepian-Wolf region intersects the cut-set bound region within a constant number of bits, is feasible. In particular, all results of the paper are specialized to obtain an achievable rate region for cooperative relay networks which includes relay networks and two-way relay networks.", "This paper proposes a practical coding scheme for the Slepian-Wolf problem of separate encoding of correlated sources. Finite-state machine (FSM) encoders, concatenated in parallel, are used at the transmit side and an iterative turbo decoder is applied at the receiver. Simulation results of system performance are presented for binary sources with different amounts of correlation. Obtained results show that the proposed technique outperforms by far both an equivalent uncoded system and a system coded with traditional (non-concatenated) FSM coding.", "This monograph presents a unified treatment of single- and multi-user problems in Shannon's information theory where we depart from the requirement that the error probability decays asymptotically in the blocklength. Instead, the error probabilities for various problems are bounded above by a non-vanishing constant and the spotlight is shone on achievable coding rates as functions of the growing blocklengths. 
This represents the study of asymptotic estimates with non-vanishing error probabilities. In Part I, after reviewing the fundamentals of information theory, we discuss Strassen's seminal result for binary hypothesis testing where the type-I error probability is non-vanishing and the rate of decay of the type-II error probability with growing number of independent observations is characterized. In Part II, we use this basic hypothesis testing result to develop second- and sometimes, even third-order asymptotic expansions for point-to-point communication. Finally in Part III, we consider network information theory problems for which the second order asymptotics are known. These problems include some classes of channels with random state, the multiple-encoder distributed lossless source coding (Slepian-Wolf) problem and special cases of the Gaussian interference and multiple-access channels. Finally, we discuss avenues for further research." ] }
0707.1099
1670687021
We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol.
One of the essential characteristics of the standard DSC problem is that the information sources, also called encoders or informants, are not allowed to interact or cooperate with each other for the purpose of compressing their information. There are two approaches to solve the DSC problem. First, allow the data-gathering node, also called decoder or recipient, and the informants to interact with each other. Second, do not allow the interaction between the recipient and informants. Starting with the seminal paper @cite_11 , almost all of the work in the area of DSC has followed the second approach. In the recent past, Pradhan and Ramchandran @cite_0 and later @cite_9 @cite_18 @cite_2 @cite_10 @cite_15 @cite_17 have provided various practical schemes to achieve the optimal performance using this approach. An interested reader can refer to the survey in @cite_12 for more information. However, only a little work @cite_13 @cite_7 has been done towards solving the DSC problem when the recipient and the informants are allowed to interact with each other. Also, this work stops well short of addressing the general ``multiple correlated informants -- single recipient'' interactive communication problem, which we are concerned with addressing in this paper.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_7", "@cite_9", "@cite_17", "@cite_0", "@cite_2", "@cite_15", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2138256990", "2074430484", "2109053700", "2963175488" ], "abstract": [ "We address the problem of compressing correlated distributed sources, i.e., correlated sources which are not co-located or which cannot cooperate to directly exploit their correlation. We consider the related problem of compressing a source which is correlated with another source that is available only at the decoder. This problem has been studied in the information theory literature under the name of the Slepian-Wolf (1973) source coding problem for the lossless coding case, and as \"rate-distortion with side information\" for the lossy coding case. We provide a constructive practical framework based on algebraic trellis codes dubbed as DIstributed Source Coding Using Syndromes (DISCUS), that can be applicable in a variety of settings. Simulation results are presented for source coding of independent and identically distributed (i.i.d.) Gaussian sources with side information available at the decoder in the form of a noisy version of the source to be coded. Our results reveal the promise of this approach: using trellis-based quantization and coset construction, the performance of the proposed approach is 2-5 dB from the Wyner-Ziv (1976) bound.", "This paper re-visits Shayevitz & Feder's recent ‘Posterior Matching Scheme’, an explicit, dynamical system encoder for communication with feedback that treats the message as a point on the [0,1] line and achieves capacity on memoryless channels. It has two key properties that ensure that it maximizes mutual information at each step: (a) the encoder sequentially hands the decoder what is missing; and (b) the next input has the desired statistics. 
Motivated by brain-machine interface applications and multi-antenna communications, we consider developing dynamical system feedback encoders for scenarios when the message point lies in higher dimensions. We develop a necessary and sufficient condition — the Jacobian equation — for any dynamical system encoder that maximizes mutual information. In general, there are many solutions to this equation. We connect this to the Monge-Kantorovich Optimal Transportation Problem, which provides a framework to identify a unique solution suiting a specific purpose. We provide two exemplary capacity-achieving solutions — for different purposes — for the multi-antenna Gaussian channel with feedback. This insight further elucidates an interesting relationship between interactive decision theory problems and the theory of optimal transportation.", "Consider a pair of correlated Gaussian sources (X_1, X_2). Two separate encoders observe the two components and communicate compressed versions of their observations to a common decoder. The decoder is interested in reconstructing a linear combination of X_1 and X_2 to within a mean-square distortion of D. We obtain an inner bound to the optimal rate-distortion region for this problem. A portion of this inner bound is achieved by a scheme that reconstructs the linear function directly rather than reconstructing the individual components X_1 and X_2 first. This results in a better rate region for certain parameter values. Our coding scheme relies on lattice coding techniques in contrast to more prevalent random coding arguments used to demonstrate achievable rate regions in information theory. We then consider the case of linear reconstruction of K sources and provide an inner bound to the optimal rate-distortion region. 
Some parts of the inner bound are achieved using the following coding structure: lattice vector quantization followed by \"correlated\" lattice-structured binning.", "We consider the problem of providing privacy, in the private information retrieval (PIR) sense, to users requesting data from a distributed storage system (DSS). The DSS uses an (n, k) Maximum Distance Separable (MDS) code to store the data reliably on unreliable storage nodes. Some of these nodes can be spies which report to a third party, such as an oppressive regime, which data is being requested by the user. An information theoretic PIR scheme ensures that a user can satisfy its request while revealing, to the spy nodes, no information on which data is being requested. A user can achieve PIR by downloading all the data in the DSS. However, this is not a feasible solution due to its high communication cost. We construct PIR schemes with low download communication cost. When there is b = 1 spy node in the DSS, we construct PIR schemes with download cost 1/(1−R) per unit of requested data (R = k/n is the code rate), achieving the information theoretic limit for linear schemes. The proposed schemes are universal since they depend on the code rate, but not on the generator matrix of the code. When there are 2 ≤ b ≤ n − k spy nodes, we devise linear PIR schemes that have download cost equal to b + k per unit of requested data." ] }
0707.1099
1670687021
We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol.
In @cite_13 , only the scenario in which two correlated informants communicate with a recipient is considered. It is assumed that both the informants and the recipient know the joint distribution of the informants' data. Also, only the average total number of bits exchanged is minimized. In @cite_25 , only two messages are allowed to be exchanged between the encoder and the decoder, which may not be optimal for the general communication problem. Moreover, it does not address the problem of computing the optimal number of messages exchanged between the encoder and the decoder, or the optimal number of bits sent by each, for the given objective of the communication in an interactive communication scenario. Also, unlike @cite_13 , this work concerns itself with lossy compression at the encoders.
{ "cite_N": [ "@cite_13", "@cite_25" ], "mid": [ "2074430484", "2138185924", "1512160561", "140150137" ], "abstract": [ "This paper re-visits Shayevitz & Feder's recent ‘Posterior Matching Scheme’, an explicit, dynamical system encoder for communication with feedback that treats the message as a point on the [0,1] line and achieves capacity on memoryless channels. It has two key properties that ensure that it maximizes mutual information at each step: (a) the encoder sequentially hands the decoder what is missing; and (b) the next input has the desired statistics. Motivated by brain-machine interface applications and multi-antenna communications, we consider developing dynamical system feedback encoders for scenarios when the message point lies in higher dimensions. We develop a necessary and sufficient condition — the Jacobian equation — for any dynamical system encoder that maximizes mutual information. In general, there are many solutions to this equation. We connect this to the Monge-Kantorovich Optimal Transportation Problem, which provides a framework to identify a unique solution suiting a specific purpose. We provide two exemplary capacity-achieving solutions — for different purposes — for the multi-antenna Gaussian channel with feedback. This insight further elucidates an interesting relationship between interactive decision theory problems and the theory of optimal transportation.", "The reduction in communication achievable by interaction is investigated. The model assumes two communicators: an informant having a random variable X, and a recipient having a possibly dependent random variable Y. Both communicators want the recipient to learn X with no probability of error, whereas the informant may or may not learn Y. To that end, they alternate in transmitting messages comprising finite sequences of bits. Messages are transmitted over an error-free channel and are determined by an agreed-upon, deterministic protocol for (X,Y) (i.e. 
a protocol for transmitting X to a person who knows Y). A two-message protocol is described, and its worst case performance is investigated.", "Index coding studies multiterminal source-coding problems where a set of receivers are required to decode multiple (possibly different) messages from a common broadcast, and they each know some messages a priori. In this paper, at the receiver end, we consider a special setting where each receiver knows only one message a priori, and each message is known to only one receiver. At the broadcasting end, we consider a generalized setting where there could be multiple senders, and each sender knows a subset of the messages. The senders collaborate to transmit an index code. This paper looks at minimizing the number of total coded bits the senders are required to transmit. When there is only one sender, we propose a pruning algorithm to find a lower bound on the optimal (i.e., the shortest) index codelength, and show that it is achievable by linear index codes. When there are two or more senders, we propose an appending technique to be used in conjunction with the pruning technique to give a lower bound on the optimal index codelength; we also derive an upper bound based on cyclic codes. While the two bounds do not match in general, for the special case where no two distinct senders know any message in common, the bounds match, giving the optimal index codelength. The results are expressed in terms of strongly connected components in directed graphs that represent the index-coding problems.", "We develop communication strategies for the rate-constrained interactive decoding of a message broadcast to a group of interested users. This situation differs from the relay channel in that all users are interested in the transmitted message, and from the broadcast channel because no user can decode on its own. We focus on two-user scenarios, and describe a baseline strategy that uses ideas of coding with decoder side information. 
One user acts initially as a relay for the other. That other user then decodes the message and sends back random parity bits, enabling the first user to decode. We show how to improve on this scheme’s performance through a conversation consisting of multiple rounds of discussion. While there are now more messages, each message is shorter, lowering the overall rate of the conversation. Such multi-round conversations can be more efficient because earlier messages serve as side information known at both encoder and decoder. We illustrate these ideas for binary erasure channels. We show that multi-round conversations can decode using less overall rate than is possible with the single-round scheme." ] }
0707.1099
1670687021
We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol.
A preliminary version of our ideas appears in @cite_22 , where we also extend the notions of and , proposed in @cite_19 , and derive some of their properties. We intend to address the average-case communication problem and some other variations of the problem considered here in the future.
{ "cite_N": [ "@cite_19", "@cite_22" ], "mid": [ "2026910567", "2251314334", "1985697096", "2001442493" ], "abstract": [ "In this paper, we address the problem of characterizing the instances of the multiterminal source model of Csiszar and Narayan in which communication from all terminals is needed for establishing a secret key of maximum rate. We give an information-theoretic sufficient condition for identifying such instances. We believe that our sufficient condition is in fact an exact characterization, but we are only able to prove this in the case of the three-terminal source model.", "In this paper we describe an application of language technology to policy formulation, where it can support policy makers assess the acceptance of a yet-unpublished policy before the policy enters public consultation. One of the key concepts is that instead of relying on thematic similarity, we extract arguments expressed in support or opposition of positions that are general statements that are, themselves, consistent with the policy or not. The focus of this paper in this overall pipeline, is identifying arguments in text: we present and empirically evaluate the hypothesis that verbal tense and mood are good indicators of arguments that have not been explored in the relevant literature.", "Written communication of ideas is carried out on the basis of statistical probability in that a writer chooses that level of subject specificity and that combination of words which he feels will convey the most meaning. Since this process varies among individuals and since similar ideas are therefore relayed at different levels of specificity and by means of different words, the problem of literature searching by machines still presents major difficulties. A statistical approach to this problem will be outlined and the various steps of a system based on this approach will be described. 
Steps include the statistical analysis of a collection of documents in a field of interest, the establishment of a set of \"notions\" and the vocabulary by which they are expressed, the compilation of a thesaurus-type dictionary and index, the automatic encoding of documents by machine with the aid of such a dictionary, the encoding of topological notations (such as branched structures), the recording of the coded information, the establishment of a searching pattern for finding pertinent information, and the programming of appropriate machines to carry out a search.", "deficient core knowledge, I propose that we turn an osten- sible weakness into a strength. We should identify our mission as bring- ing together insights and theories that would otherwise remain scattered in other disciplines. Because of the lack of interchange among the disci- plines, hypotheses thoroughly discredited in one field may receive wide acceptance in another. Potential research paradigms remain fractured, with pieces here and there but no comprehensive statement to guide re- search. By bringing ideas together in one location, communication can aspire to become a master discipline that synthesizes related theories and concepts and exposes them to the most rigorous, comprehensive state- ment and exploration. Reaching this goal would require a more self-con- scious determination by communication scholars to plumb other fields and feed back their studies to outside researchers. At the same time, such an enterprise would enhance the theoretical rigor of communication scholarship proper. The idea" ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are commonly used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Let @math and @math be two sets of materialized views and indexes, respectively, that are termed candidates and are likely to reduce the execution cost of a given query set @math (generally assumed representative of the system workload). Let @math . Let @math be the storage space allotted by the data warehouse administrator to build objects (materialized views or indexes) from set @math . The joint materialized view and index selection problem consists of building an object configuration @math that minimizes the execution cost of @math , under the storage space constraint. This NP-hard problem @cite_56 @cite_17 may be formalized as follows: @math ; @math , where @math is the disk space occupied by object @math .
{ "cite_N": [ "@cite_17", "@cite_56" ], "mid": [ "1623444038", "2132681112", "2071288167", "2122816893" ], "abstract": [ "Materialized view selection is a non-trivial task. Hence, its complexity must be reduced. A judicious choice of views must be cost-driven and influenced by the workload experienced by the system. In this paper, we propose a framework for materialized view selection that exploits a data mining technique (clustering), in order to determine clusters of similar queries. We also propose a view merging algorithm that builds a set of candidate views, as well as a greedy process for selecting a set of views to materialize. This selection is based on cost models that evaluate the cost of accessing data using views and the cost of storing these views. To validate our strategy, we executed a workload of decision-support queries on a test data warehouse, with and without using our strategy. Our experimental results demonstrate its efficiency, even when storage space is limited.", "A data warehouse uses multiple materialized views to efficiently process a given set of queries. These views are accessed by read-only queries and need to be maintained after updates to base tables. Due to the space constraint and maintenance cost constraint, the materialization of all views is not possible. Therefore, a subset of views needs to be selected to be materialized. The problem is NP-hard, therefore, exhaustive search is infeasible. In this paper, we design a View Relevance Driven Selection (VRDS) algorithm based on view relevance to select views. We take into consideration the query processing cost and the view maintenance cost. Our experimental results show that our heuristic aims to minimize the total processing cost, which is the sum of query processing cost and view maintenance cost. 
Finally, we compare our results against a popular greedy algorithm.", "A data warehouse stores materialized views of aggregate data derived from a fact table in order to minimize the query response time. One of the most important decisions in designing the data warehouse is the selection of materialized views. This paper presents an algorithm which provides appropriate views to be materialized while the goal is to minimize the query response time and maintenance cost. We use a data cube lattice, frequency of queries and updates on views, and view size to select views to be materialized using greedy algorithms. In spite of the simplicity, our algorithm selects views which give us better performance than views that selected by existing algorithms.", "Automatically selecting an appropriate set of materialized views and indexes for SQL databases is a non-trivial task. A judicious choice must be cost-driven and influenced by the workload experienced by the system. Although there has been work in materialized view selection in the context of multidimensional (OLAP) databases, no past work has looked at the problem of building an industry-strength tool for automated selection of materialized views and indexes for SQL workloads. In this paper, we present an end-to-end solution to the problem of selecting materialized views and indexes. We describe results of extensive experimental evaluation that demonstrate the effectiveness of our techniques. Our solution is implemented as part of a tuning wizard that ships with Microsoft SQL Server 2000." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are commonly used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Classical papers in materialized view selection introduce a lattice framework that models and captures dependencies (ancestor or descendant) among aggregate views in a multidimensional context @cite_34 @cite_23 @cite_0 @cite_37 . This lattice is greedily browsed with the help of cost models to select the best views to materialize. This problem was first addressed for a single data cube and then extended to multiple cubes @cite_8 . Another theoretical framework, the AND-OR view graph, may also be used to capture the relationships between views @cite_36 @cite_57 @cite_9 @cite_39 . Unfortunately, the majority of these solutions are theoretical and not truly scalable.
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_36", "@cite_9", "@cite_39", "@cite_0", "@cite_57", "@cite_23", "@cite_34" ], "mid": [ "1623444038", "2055686029", "2055899255", "2950700385" ], "abstract": [ "Materialized view selection is a non-trivial task. Hence, its complexity must be reduced. A judicious choice of views must be cost-driven and influenced by the workload experienced by the system. In this paper, we propose a framework for materialized view selection that exploits a data mining technique (clustering), in order to determine clusters of similar queries. We also propose a view merging algorithm that builds a set of candidate views, as well as a greedy process for selecting a set of views to materialize. This selection is based on cost models that evaluate the cost of accessing data using views and the cost of storing these views. To validate our strategy, we executed a workload of decision-support queries on a test data warehouse, with and without using our strategy. Our experimental results demonstrate its efficiency, even when storage space is limited.", "We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. 
For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts. This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds.", "Classical papers are of great help for beginners to get familiar with a new research area. However, digging them out is a difficult problem. This paper proposes Claper, a novel academic recommendation system based on two proven principles: the Principle of Download Persistence and the Principle of Citation Approaching (we prove them based on real-world datasets). The principle of download persistence indicates that classical papers have few decreasing download frequencies since they were published. The principle of citation approaching indicates that a paper which cites a classical paper is likely to cite citations of that classical paper. 
Our experimental results based on large-scale real-world datasets illustrate Claper can effectively recommend classical papers of high quality to beginners and thus help them enter their research areas.", "Topic models, such as Latent Dirichlet Allocation (LDA), posit that documents are drawn from admixtures of distributions over words, known as topics. The inference problem of recovering topics from admixtures, is NP-hard. Assuming separability, a strong assumption, [4] gave the first provable algorithm for inference. For LDA model, [6] gave a provable algorithm using tensor-methods. But [4,6] do not learn topic vectors with bounded @math error (a natural measure for probability vectors). Our aim is to develop a model which makes intuitive and empirically supported assumptions and to design an algorithm with natural, simple components such as SVD, which provably solves the inference problem for the model with bounded @math error. A topic in LDA and other models is essentially characterized by a group of co-occurring words. Motivated by this, we introduce topic specific Catchwords, group of words which occur with strictly greater frequency in a topic than any other topic individually and are required to have high frequency together rather than individually. A major contribution of the paper is to show that under this more realistic assumption, which is empirically verified on real corpora, a singular value decomposition (SVD) based algorithm with a crucial pre-processing step of thresholding, can provably recover the topics from a collection of documents drawn from Dominant admixtures. Dominant admixtures are convex combination of distributions in which one distribution has a significantly higher contribution than others. Apart from the simplicity of the algorithm, the sample complexity has near optimal dependence on @math , the lowest probability that a topic is dominant, and is better than [4]. 
Empirical evidence shows that on several real world corpora, both Catchwords and Dominant admixture assumptions hold and the proposed algorithm substantially outperforms the state of the art [5]." ] }
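As a concrete illustration of the lattice-based greedy selection described in the related-work passage above, here is a minimal Python sketch in the spirit of the classic data-cube greedy algorithms. The dimension names, view sizes, and toy lattice are invented for illustration and do not come from any of the cited papers.

```python
# Hedged sketch of greedy materialized-view selection over a data-cube
# lattice. All names, sizes, and the toy lattice are invented.

def answerable_from(view, source):
    # A query grouped on `view` can be answered from `source` iff
    # source's group-by attributes include all of view's.
    return set(view) <= set(source)

def cheapest_cost(view, materialized, sizes):
    # Linear-cost model: answering a query on `view` costs the size of
    # the smallest materialized view it can be answered from.
    return min(sizes[m] for m in materialized if answerable_from(view, m))

def greedy_select(lattice, sizes, top, k):
    """Greedily pick k views to materialize besides the top cube."""
    chosen = [top]  # the full cube is always materialized
    for _ in range(k):
        best, best_benefit = None, 0
        for cand in lattice:
            if cand in chosen:
                continue
            # Benefit = total reduction in query cost over all views,
            # assuming uniform query frequencies.
            benefit = sum(
                max(0, cheapest_cost(v, chosen, sizes)
                       - cheapest_cost(v, chosen + [cand], sizes))
                for v in lattice)
            if benefit > best_benefit:
                best, best_benefit = cand, benefit
        if best is None:
            break
        chosen.append(best)
    return chosen[1:]

# Toy lattice over dimensions part (p), supplier (s), customer (c);
# views are tuples of group-by attributes, sizes are row counts.
sizes = {('p', 's', 'c'): 6_000_000, ('p', 's'): 800_000,
         ('p', 'c'): 6_000_000, ('s', 'c'): 6_000_000,
         ('p',): 200_000, ('s',): 10_000, ('c',): 100, (): 1}
lattice = list(sizes)
print(greedy_select(lattice, sizes, top=('p', 's', 'c'), k=2))
```

Each round recomputes every candidate's benefit against the current configuration, which is one reason such lattice algorithms are criticized above as not truly scalable.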
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are commonly used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
A wavelet framework for adaptively representing multidimensional data cubes has also been proposed @cite_40 . This method decomposes data cubes into an indexed hierarchy of wavelet view elements that correspond to partial and residual aggregations of data cubes. An algorithm greedily selects a non-expensive set of wavelet view elements that minimizes the average processing cost of data cube queries. In the same spirit, Sismanis et al. proposed the Dwarf structure, which compresses data cubes. Dwarf identifies prefix and suffix redundancies within cube cells and factors them out by coalescing their storage. Suppressing redundancy reduces the maintenance and querying costs of data cubes. These approaches are very interesting, but they are mainly focused on computing efficient data cubes by changing their physical design.
{ "cite_N": [ "@cite_40" ], "mid": [ "2158924892", "1498726635", "1557602253", "2069912449" ], "abstract": [ "This article presents a method for adaptively representing multidimensional data cubes using wavelet view elements in order to more efficiently support data analysis and querying involving aggregations. The proposed method decomposes the data cubes into an indexed hierarchy of wavelet view elements. The view elements differ from traditional data cube cells in that they correspond to partial and residual aggregations of the data cube. The view elements provide highly granular building blocks for synthesizing the aggregated and range-aggregated views of the data cubes. We propose a strategy for selectively materializing alternative sets of view elements based on the patterns of access of views. We present a fast and optimal algorithm for selecting a non-expansive set of wavelet view elements that minimizes the average processing cost for supporting a population of queries of data cube views. We also present a greedy algorithm for allowing the selective materialization of a redundant set of view element sets which, for measured increases in storage capacity, further reduces processing costs. Experiments and analytic results show that the wavelet view element framework performs better in terms of lower processing and storage cost than previous methods that materialize and store redundant views for online analytical processing (OLAP).", "Data cube computation is one of the most essential but expensive operations in data warehousing. Previous studies have developed two major approaches, top-down vs. bottom-up. The former, represented by the Multi-Way Array Cube (called MultiWay) algorithm [25], aggregates simultaneously on multiple dimensions; however, it cannot take advantage of Apriori pruning [2] when computing iceberg cubes (cubes that contain only aggregate cells whose measure value satisfies a threshold, called iceberg condition). 
The latter, represented by two algorithms: BUC [6] and H-Cubing[11], computes the iceberg cube bottom-up and facilitates Apriori pruning. BUC explores fast sorting and partitioning techniques; whereas H-Cubing explores a data structure, H-Tree, for shared computation. However, none of them fully explores multi-dimensional simultaneous aggregation. In this paper, we present a new method, Star-Cubing, that integrates the strengths of the previous three algorithms and performs aggregations on multiple dimensions simultaneously. It utilizes a star-tree structure, extends the simultaneous aggregation methods, and enables the pruning of the group-by's that do not satisfy the iceberg condition. Our performance study shows that Star-Cubing is highly efficient and outperforms all the previous methods in almost all kinds of data distributions.", "Data cubes are widely used as a powerful tool to provide multidimensional views in data warehousing and On-Line Analytical Processing (OLAP). However, with increasing data sizes, it is becoming computationally expensive to perform data cube analysis. The problem is exacerbated by the demand of supporting more complicated aggregate functions (e.g. CORRELATION, Statistical Analysis) as well as supporting frequent view updates in data cubes. This calls for new scalable and efficient data cube analysis systems. In this paper, we introduce HaCube, an extension of MapReduce, designed for efficient parallel data cube analysis on large-scale data by taking advantages from both MapReduce (in terms of scalability) and parallel DBMS (in terms of efficiency). We also provide a general data cube materialization algorithm which is able to facilitate the features in MapReduce-like systems towards an efficient data cube computation. Furthermore, we demonstrate how HaCube supports view maintenance through either incremental computation (e.g. used for SUM or COUNT) or recomputation (e.g. used for MEDIAN or CORRELATION). 
We implement HaCube by extending Hadoop and evaluate it based on the TPC-D benchmark over billions of tuples on a cluster with over 320 cores. The experimental results demonstrate the efficiency, scalability and practicality of HaCube for cube analysis over a large amount of data in a distributed environment.", "This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C^2 edges. Conceptually, the curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales. These elements have many useful geometric multiscale features that set them apart from classical multiscale representations such as wavelets. For instance, curvelets obey a parabolic scaling relation which says that at scale 2^{-j}, each element has an envelope that is aligned along a ridge of length 2^{-j/2} and width 2^{-j}. We prove that curvelets provide an essentially optimal representation of typical objects f that are C^2 except for discontinuities along piecewise C^2 curves. Such representations are nearly as sparse as if f were not singular and turn out to be far more sparse than the wavelet decomposition of the object. For instance, the n-term partial reconstruction f^C_n obtained by selecting the n largest terms in the curvelet series obeys ||f - f^C_n||^2_{L2} <= C · n^{-2} · (log n)^3, n → ∞. This rate of convergence holds uniformly over a class of functions that are C^2 except for discontinuities along piecewise C^2 curves and is essentially optimal. In comparison, the squared error of n-term wavelet approximations only converges as n^{-1} as n → ∞, which is considerably worse than the optimal behavior." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are commonly used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Other approaches detect common sub-expressions within workload queries in the relational context @cite_22 @cite_11 @cite_48 . The view selection problem then consists in finding common subexpressions corresponding to intermediate results that are suitable to materialize. However, browsing the search space of common subexpressions is very costly, and these methods are not truly scalable with respect to the number of queries.
{ "cite_N": [ "@cite_48", "@cite_22", "@cite_11" ], "mid": [ "1597931764", "1810232998", "2098388305", "2645036108" ], "abstract": [ "Recently, multi-query optimization techniques have been considered as beneficial in the view selection setting. The main interest of such techniques lies in detecting common subexpressions between the different queries of a workload. This feature can be exploited for sharing updates and storage space. However, due to the reuse, a query change may entail an important reorganization of the multi-query graph. In this paper, we present an approach that is based on multi-query optimization for view selection and that attempts to reduce the drawbacks resulting from these techniques. Finally, we present a performance study using workloads consisting of queries over the schema of the TPC-H benchmark. This study shows that our view selection provides significant benefits over the other approaches.", "This paper presents an innovative approach for the publication and discovery of Web services. The proposal is based on two previous works: DIRE (DIstributed REgistry), for the user-centered distributed replication of service-related information, and URBE (UDDI Registry By Example), for the semantic-aware matchmaking between requests and available services. The integrated view also exploits USQL (Unified Service Query Language) to provide users with a higher level and homogeneous means to interact with the different registries. The proposal improves background technology in different ways: we integrate USQL as high-level language to state service requests, widen user notifications based on URBE semantic matching, and apply URBE matchmaking to all the facets with which services can be described in DIRE.
All these new concepts are demonstrated on a simple scenario.", "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.", "Database systems frequently have to execute a set of related queries, which share several common subexpressions. Multi-query optimization exploits this, by finding evaluation plans that share common results. Current approaches to multi-query optimization assume that common subexpressions are materialized. Significant performance benefits can be had if common subexpressions are pipelined to their uses, without being materialized. However, plans with pipelining may not always be realizable with limited buffer space, as we show. We present a general model for schedules with pipelining, and present a necessary and sufficient condition for determining validity of a schedule under our model. 
We show that finding a valid schedule with minimum cost is NP-hard. We present a greedy heuristic for finding good schedules. Finally, we present a performance study that shows the benefit of our algorithms on batches of queries from the TPC-D benchmark." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are commonly used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Finally, the most recent approaches are workload-driven. They syntactically analyze a workload to enumerate relevant candidate views @cite_43 . By exploiting the system's query optimizer, they greedily build a configuration of the most pertinent views. A workload is indeed a good starting point to predict future queries because these queries are probably within or syntactically close to a previous query workload. In addition, extracting candidate views from the workload ensures that future materialized views will probably be used when processing queries.
{ "cite_N": [ "@cite_43" ], "mid": [ "2010149990", "2241020437", "2100773341", "2081728040" ], "abstract": [ "Current trends in data management systems, such as cloud and multi-tenant databases, are leading to data processing environments that concurrently execute heterogeneous query workloads. At the same time, these systems need to satisfy diverse performance expectations. In these newly-emerging settings, avoiding potential Quality-of-Service (QoS) violations heavily relies on performance predictability, i.e., the ability to estimate the impact of concurrent query execution on the performance of individual queries in a continuously evolving workload. This paper presents a modeling approach to estimate the impact of concurrency on query performance for analytical workloads. Our solution relies on the analysis of query behavior in isolation, pairwise query interactions and sampling techniques to predict resource contention under various query mixes and concurrency levels. We introduce a simple yet powerful metric that accurately captures the joint effects of disk and memory contention on query performance in a single value. We also discuss predicting the execution behavior of a time-varying query workload through query-interaction timelines, i.e., a fine-grained estimation of the time segments during which discrete mixes will be executed concurrently. Our experimental evaluation on top of PostgreSQL TPC-H demonstrates that our models can provide query latency predictions within approximately 20% of the actual values in the average case.", "We consider MapReduce workloads that are produced by analytics applications. In contrast to ad hoc query workloads, analytics applications are comprised of fixed data flows that are run over newly arriving data sets or on different portions of an existing data set. Examples of such workloads include document analysis indexing, social media analytics, and ETL (Extract Transform Load).
Motivated by these workloads, we propose a technique that predicts the runtime performance for a fixed set of queries running over varying input data sets. Our prediction technique splits each query into several segments where each segment’s performance is estimated using machine learning models. These per-segment estimates are plugged into a global analytical model to predict the overall query runtime. Our approach uses minimal statistics about the input data sets (e.g., tuple size, cardinality), which are complemented with historical information about prior query executions (e.g., execution time). We analyze the accuracy of predictions for several segment granularities on both standard analytical benchmarks such as TPC-DS [17], and on several real workloads. We obtain less than 25% prediction errors for 90% of predictions.", "One of the most challenging aspects of managing a very large data warehouse is identifying how queries will behave before they start executing. Yet knowing their performance characteristics --- their runtimes and resource usage --- can solve two important problems. First, every database vendor struggles with managing unexpectedly long-running queries. When these long-running queries can be identified before they start, they can be rejected or scheduled when they will not cause extreme resource contention for the other queries in the system. Second, deciding whether a system can complete a given workload in a given time period (or a bigger system is necessary) depends on knowing the resource requirements of the queries in that workload. We have developed a system that uses machine learning to accurately predict the performance metrics of database queries whose execution times range from milliseconds to hours. For training and testing our system, we used both real customer queries and queries generated from an extended set of TPC-DS templates. The extensions mimic queries that caused customer problems.
We used these queries to compare how accurately different techniques predict metrics such as elapsed time, records used, disk I/Os, and message bytes. The most promising technique was not only the most accurate, but also predicted these metrics simultaneously and using only information available prior to query execution. We validated the accuracy of this machine learning technique on a number of HP Neoview configurations. We were able to predict individual query elapsed time within 20% of its actual time for 85% of the test queries. Most importantly, we were able to correctly identify both the short and long-running (up to two-hour) queries to inform workload management and capacity planning.", "Accurate query performance prediction (QPP) is central to effective resource management, query optimization and query scheduling. Analytical cost models, used in current generation of query optimizers, have been successful in comparing the costs of alternative query plans, but they are poor predictors of execution latency. As a more promising approach to QPP, this paper studies the practicality and utility of sophisticated learning-based models, which have recently been applied to a variety of predictive tasks with great success, in both static (i.e., fixed) and dynamic query workloads. We propose and evaluate predictive modeling techniques that learn query execution behavior at different granularities, ranging from coarse-grained plan-level models to fine-grained operator-level models. We demonstrate that these two extremes offer a tradeoff between high accuracy for static workload queries and generality to unforeseen queries in dynamic workloads, respectively, and introduce a hybrid approach that combines their respective strengths by selectively composing them in the process of QPP.
We discuss how we can use a training workload to (i) pre-build and materialize such models offline, so that they are readily available for future predictions, and (ii) build new models online as new predictions are needed. All prediction models are built using only static features (available prior to query execution) and the performance values obtained from the offline execution of the training workload. We fully implemented all these techniques and extensions on top of PostgreSQL and evaluated them experimentally by quantifying their effectiveness over analytical workloads, represented by well-established TPC-H data and queries. The results provide quantitative evidence that learning-based modeling for QPP is both feasible and effective for both static and dynamic workload scenarios." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are commonly used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
The index selection problem has been studied for many years in databases @cite_53 @cite_49 @cite_43 @cite_1 @cite_50 @cite_45 @cite_58 . In the more specific context of data warehouses, existing research studies may be clustered into two families: algorithms that optimize maintenance cost @cite_6 and algorithms that optimize query response time @cite_24 @cite_18 @cite_33 . In both cases, optimization is realized under a storage space constraint. In this paper, we focus on the second family of solutions, which is relevant in our context. Studies falling in this category may be further categorized depending on how the set of candidate indexes @math and the final configuration of indexes @math are built.
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_53", "@cite_1", "@cite_6", "@cite_24", "@cite_43", "@cite_45", "@cite_50", "@cite_49", "@cite_58" ], "mid": [ "2059326501", "2078524330", "2165481122", "1548134621" ], "abstract": [ "Abstract Index selection for relational databases is an important issue which has been researched quite extensively [1–5]. In the literature, in index selection algorithms for relational databases, at most one index is considered as a candidate for each attribute of a relation. However, it is possible that more than one different type of indexes with different storage space requirements may be present as candidates for an attribute. Also, it may not be possible to eliminate locally all but one of the candidate indexes for an attribute due to different benefits and storage space requirements associated with the candidates. Thus, the algorithms available in the literature for optimal index selection may not be used when there are multiple candidates for each attribute and there is a need for a global optimization algorithm in which at most one index can be selected from a set of candidate indexes for an attribute. The problem of index selection in the presence of multiple candidate indexes for each attribute (which we call the multiple choice index selection problem) has not been addressed in the literature. In this paper, we present the multiple choice index selection problem, show that it is NP-hard, and present an algorithm which gives an approximately optimal solution within a user specified error bound in a logarithmic time order.", "A problem of considerable interest in the design of a database is the selection of indexes. In this paper, we present a probabilistic model of transactions (queries, updates, insertions, and deletions) to a file. An evaluation function, which is based on the cost saving (in terms of the number of page accesses) attributable to the use of an index set, is then developed. 
The maximization of this function would yield an optimal set of indexes. Unfortunately, algorithms known to solve this maximization problem require an order of time exponential in the total number of attributes in the file. Consequently, we develop the theoretical basis which leads to an algorithm that obtains a near optimal solution to the index selection problem in polynomial time. The theoretical result consists of showing that the index selection problem can be solved by solving a properly chosen instance of the knapsack problem. A theoretical bound for the amount by which the solution obtained by this algorithm deviates from the true optimum is provided. This result is then interpreted in the light of evidence gathered through experiments.", "We study the index selection problem: Given a workload consisting of SQL statements on a database, and a user-specified storage constraint, recommend a set of indexes that have the maximum benefit for the given workload. We present a formal statement for this problem and show that it is computationally "hard" to solve or even approximate it. We develop a new algorithm for the problem which is based on treating the problem as a knapsack problem. The novelty of our approach lies in an LP (linear programming) based method that assigns benefits to individual indexes. For a slightly modified algorithm, that does more work, we prove that we can give instance specific guarantees about the quality of our solution. We conduct an extensive experimental evaluation of this new heuristic and compare it with previous solutions. Our results demonstrate that our solution is more scalable while achieving comparable quality.", "We critically evaluate the current state of research in multiple query optimization, synthesize the requirements for a modular optimizer, and propose an architecture. Our objective is to facilitate future research by providing modular subproblems and a good general-purpose data structure.
In the context of this architecture, we provide an improved subsumption algorithm, and discuss migration paths from single-query to multiple-query optimizers. The architecture has three key ingredients. First, each type of work is performed at an appropriate level of abstraction. Second, a uniform and very compact representation stores all candidate strategies. Finally, search is handled as a discrete optimization problem separable from the query processing tasks. 1. Problem Definition and Objectives A multiple query optimizer (MQO) takes several queries as input and seeks to generate a good multi-strategy, an executable operator graph that simultaneously computes answers to all the queries. The idea is to save by evaluating common subexpressions only once. The commonalities to be exploited include identical selections and joins, predicates that subsume other predicates, and also costly physical operators such as relation scans and sorts. The multiple query optimization problem is to find a multi-strategy that minimizes the total cost (with overlap exploited). Figure 1.1 shows a multi-strategy generated exploiting commonalities among queries Q1-Q3 at both the logical and physical level. To be really satisfactory, a multi-query optimization algorithm must offer solution quality, efficiency, and ease of implementation. It must identify many kinds of commonalities (e.g., by predicate splitting, sharing relation scans); and search effectively to choose a good combination of 1-strategies. 
Efficiency requires that the optimization avoid a combinatorial explosion of possibilities, and that within those it considers, redundant work on common subexpressions be minimized. Finally, ease of implementation is crucial: an algorithm will be practically useful only if it is conceptually simple, easy to attach to an optimizer, and requires relatively little additional software." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Selecting a set of candidate indexes may be automatic or manual. Warehouse administrators may rely on their expertise to manually derive, from a given workload, a set of candidate indexes @cite_49 @cite_32 @cite_29 . Such a choice is, however, subjective, and the task becomes very hard to achieve when the number of queries is high. Conversely, candidate indexes can be extracted automatically, through a syntactic analysis of queries @cite_14 @cite_1 @cite_33 . Such an analysis depends on the DBMS, since each DBMS is queried through a specific syntax derived from the SQL standard.
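To illustrate the automatic route, a candidate-index extractor can be sketched as a crude syntactic analysis that harvests the attributes referenced in selection, grouping and ordering clauses. This is only a toy sketch under simplifying assumptions (unqualified column names, a single statement), not the algorithm of any cited tool; the function name, keyword list and example query are invented, and a real extractor would be DBMS-specific, as noted above.

```python
import re

# SQL keywords to ignore when harvesting attribute names (illustrative list)
SQL_KEYWORDS = {"where", "group", "order", "having", "by", "and", "or",
                "not", "in", "like", "between", "is", "null", "asc", "desc"}

def candidate_indexes(query):
    """Toy syntactic analysis: collect the column names appearing after
    the WHERE / GROUP BY / HAVING / ORDER BY keywords of one SQL query."""
    m = re.search(r"\b(where|group\s+by|having|order\s+by)\b.*", query,
                  re.IGNORECASE | re.DOTALL)
    if not m:
        return set()
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", m.group(0))
    return {t.lower() for t in tokens if t.lower() not in SQL_KEYWORDS}

q = "SELECT region, SUM(sales) FROM facts WHERE year = 2006 GROUP BY region"
candidates = candidate_indexes(q)   # {'year', 'region'}
```

Each attribute returned would then become one (or several) candidate indexes to be evaluated by the cost model.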
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_29", "@cite_1", "@cite_32", "@cite_49" ], "mid": [ "2017733008", "2059326501", "2050272677", "1851390469" ], "abstract": [ "Analytical queries defined on data warehouses are complex and use several join operations that are very costly, especially when run on very large data volumes. To improve response times, data warehouse administrators casually use indexing techniques. This task is nevertheless complex and fastidious. In this paper, we present an automatic, dynamic index selection method for data warehouses that is based on incremental frequent itemset mining from a given query workload. The main advantage of this approach is that it helps update the set of selected indexes when workload evolves instead of recreating it from scratch. Preliminary experimental results illustrate the efficiency of this approach, both in terms of performance enhancement and overhead.", "Abstract Index selection for relational databases is an important issue which has been researched quite extensively [1–5]. In the literature, in index selection algorithms for relational databases, at most one index is considered as a candidate for each attribute of a relation. However, it is possible that more than one different type of indexes with different storage space requirements may be present as candidates for an attribute. Also, it may not be possible to eliminate locally all but one of the candidate indexes for an attribute due to different benefits and storage space requirements associated with the candidates. Thus, the algorithms available in the literature for optimal index selection may not be used when there are multiple candidates for each attribute and there is a need for a global optimization algorithm in which at most one index can be selected from a set of candidate indexes for an attribute. 
The problem of index selection in the presence of multiple candidate indexes for each attribute (which we call the multiple choice index selection problem) has not been addressed in the literature. In this paper, we present the multiple choice index selection problem, show that it is NP-hard, and present an algorithm which gives an approximately optimal solution within a user specified error bound in a logarithmic time order.", "Considering the wide deployment of databases and its size, particularly in data warehouses, it is important to automate the physical design so that the task of the database administrator (DBA) is minimized. An important part of physical database design is index selection. An auto-index selection tool capable of analyzing large amounts of data and suggesting a good set of indexes for a database is the goal of auto-administration. Clustering is a data mining technique with broad appeal and usefulness in exploratory data analysis. This idea provides a motivation to apply clustering techniques to obtain good indexes for a workload in the database. We describe a technique for auto-indexing using clustering. The experiments conducted show that the proposed technique performs better than Microsoft SQL Server index selection tool (IST) and can suggest indexes faster than Microsoft's IST.", "In this paper we describe novel techniques that make it possible to build an industrial-strength tool for automating the choice of indexes in the physical design of a SQL database. The tool takes as input a workload of SQL queries, and suggests a set of suitable indexes. We ensure that the indexes chosen are effective in reducing the cost of the workload by keeping the index selection tool and the query optimizer \"in step\". The number of index sets that must be evaluated to find the optimal configuration is very large. We reduce the complexity of this problem using three techniques. 
First, we remove a large number of spurious indexes from consideration by taking into account both query syntax and cost information. Second, we introduce optimizations that make it possible to cheaply evaluate the “goodness” of an index set. Third, we describe an iterative approach to handle the complexity arising from multicolumn indexes. The tool has been implemented on Microsoft SQL Server 7.0. We performed extensive experiments over a range of workloads, including TPC-D. The results indicate that the tool is efficient and its choices are close to optimal." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Ascending greedy methods start from an empty set of candidate indexes @cite_20 @cite_49 @cite_29 @cite_14 . They incrementally add the indexes that minimize cost, and stop when the cost ceases to decrease. Conversely, descending greedy methods take the whole set of candidate indexes as a starting point. Then, at each iteration, indexes are pruned @cite_20 @cite_32 . If the workload cost after pruning is lower (respectively, greater) than the workload cost before pruning, the pruned indexes were useless (respectively, useful) for reducing cost. The pruning process stops when cost increases after pruning.
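The ascending variant described above can be sketched in a few lines; the cost model, index names and numbers below are purely illustrative stand-ins for a real workload cost function.

```python
def greedy_ascending(candidates, cost):
    """Ascending greedy selection: start from the empty configuration and
    repeatedly add the candidate index that most reduces workload cost,
    stopping as soon as the cost ceases to decrease."""
    config, best = frozenset(), cost(frozenset())
    while candidates - config:
        new_cost, idx = min((cost(config | {i}), i) for i in candidates - config)
        if new_cost >= best:        # cost stopped decreasing: halt
            break
        config, best = config | {idx}, new_cost
    return set(config), best

# Toy cost model: fixed scan cost, minus per-index benefit, plus maintenance.
BENEFIT = {"idx_year": 30, "idx_region": 20, "idx_cust": 5}
def toy_cost(config):
    return 100 - sum(BENEFIT[i] for i in config) + 8 * len(config)

selected, final_cost = greedy_ascending(set(BENEFIT), toy_cost)
# idx_cust is rejected: its benefit (5) is below its maintenance overhead (8)
```

The descending variant is symmetric: start from `set(BENEFIT)` and repeatedly drop the index whose removal most reduces cost.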
{ "cite_N": [ "@cite_14", "@cite_29", "@cite_32", "@cite_49", "@cite_20" ], "mid": [ "2006997130", "2435989427", "2153107031", "2963287528" ], "abstract": [ "We introduce static index pruning methods that significantly reduce the index size in information retrieval systems.We investigate uniform and term-based methods that each remove selected entries from the index and yet have only a minor effect on retrieval results. In uniform pruning, there is a fixed cutoff threshold, and all index entries whose contribution to relevance scores is bounded above by a given threshold are removed from the index. In term-based pruning, the cutoff threshold is determined for each term, and thus may vary from term to term. We give experimental evidence that for each level of compression, term-based pruning outperforms uniform pruning, under various measures of precision. We present theoretical and experimental evidence that under our term-based pruning scheme, it is possible to prune the index greatly and still get retrieval results that are almost as good as those based on the full index.", "We propose to prune a random forest (RF) for resource-constrained prediction. We first construct a RF and then prune it to optimize expected feature cost & accuracy. We pose pruning RFs as a novel 0-1 integer program with linear constraints that encourages feature re-use. We establish total unimodularity of the constraint set to prove that the corresponding LP relaxation solves the original integer program. We then exploit connections to combinatorial optimization and develop an efficient primal-dual algorithm, scalable to large datasets. In contrast to our bottom-up approach, which benefits from good RF initialization, conventional methods are top-down acquiring features based on their utility value and is generally intractable, requiring heuristics. 
Empirically, our pruning algorithm outperforms existing state-of-the-art resource-constrained algorithms.", "We show that if performance measures in a stochastic scheduling problem satisfy a set of so-called partial conservation laws (PCL), which extend previously studied generalized conservation laws (GCL), then the problem is solved optimally by a priority-index policy for an appropriate range of linear performance objectives, where the optimal indices are computed by a one-pass adaptive-greedy algorithm, based on Klimov's. We further apply this framework to investigate the indexability property of restless bandits introduced by Whittle, obtaining the following results: (1) we identify a class of restless bandits (PCL-indexable) which are indexable; membership in this class is tested through a single run of the adaptive-greedy algorithm, which also computes the Whittle indices when the test is positive; this provides a tractable sufficient condition for indexability; (2) we further identify the class of GCL-indexable bandits, which includes classical bandits, having the property that they are indexable under any linear reward objective. The analysis is based on the so-called achievable region method, as the results follow from new linear programming formulations for the problems investigated.", "We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation, a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. 
the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102) relaying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Genetic algorithms are commonly used to solve optimization problems, and have been adapted to the index selection problem @cite_45 . The initial population is a set of input indexes (an index corresponds to an individual). The objective function to optimize is the workload cost induced by an index configuration. The combinatorial construction of an index configuration is realized through the crossover, mutation and selection genetic operators. Finally, the index selection problem has also been formulated in several studies as a knapsack problem @cite_44 @cite_21 @cite_1 @cite_50 , where indexes are objects, index storage costs represent object weights, workload cost is the benefit function, and storage space is the knapsack size.
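Under the knapsack formulation mentioned last, a standard 0/1 dynamic-programming sketch looks as follows; the index names, storage costs and savings are invented for illustration, and real systems typically use the greedy or LP-based approximations cited above rather than exact DP.

```python
def knapsack_index_selection(indexes, capacity):
    """0/1 knapsack over indexes: weight = storage cost, value = workload
    cost saving; maximize total saving within the storage budget."""
    # dp[c] = (best total saving, chosen index set) within c storage units
    dp = [(0, frozenset())] * (capacity + 1)
    for name, (size, saving) in indexes.items():
        for c in range(capacity, size - 1, -1):   # descending: each index used once
            cand = (dp[c - size][0] + saving, dp[c - size][1] | {name})
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return dp[capacity]

# Hypothetical candidates: name -> (storage units, workload-cost saving)
indexes = {"idx_a": (4, 40), "idx_b": (3, 30), "idx_c": (5, 45)}
saving, chosen = knapsack_index_selection(indexes, capacity=8)
# -> saving 75 with {'idx_b', 'idx_c'}: better than the greedy pick idx_a
```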
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_44", "@cite_45", "@cite_50" ], "mid": [ "2078524330", "2017565019", "2755583598", "2109865546" ], "abstract": [ "A problem of considerable interest in the design of a database is the selection of indexes. In this paper, we present a probabilistic model of transactions (queries, updates, insertions, and deletions) to a file. An evaluation function, which is based on the cost saving (in terms of the number of page accesses) attributable to the use of an index set, is then developed. The maximization of this function would yield an optimal set of indexes. Unfortunately, algorithms known to solve this maximization problem require an order of time exponential in the total number of attributes in the file. Consequently, we develop the theoretical basis which leads to an algorithm that obtains a near optimal solution to the index selection problem in polynomial time. The theoretical result consists of showing that the index selection problem can be solved by solving a properly chosen instance of the knapsack problem. A theoretical bound for the amount by which the solution obtained by this algorithm deviates from the true optimum is provided. This result is then interpreted in the light of evidence gathered through experiments.", "This paper considers a new variant of the two-dimensional bin packing problem where each rectangle is assigned a due date and each bin has a fixed processing time. Hence the objective is not only to minimize the number of bins, but also to minimize the maximum lateness of the rectangles. This problem is motivated by the cutting of stock sheets and the potential increased efficiency that might be gained by drawing on a larger pool of demand pieces by mixing orders, while also aiming to ensure a certain level of customer service. 
We propose a genetic algorithm for searching the solution space, which uses a new placement heuristic for decoding the gene based on the best fit heuristic designed for the strip packing problems. The genetic algorithm employs an innovative crossover operator that considers several different children from each pair of parents. Further, the dual objective is optimized hierarchically with the primary objective periodically alternating between maximum lateness and number of bins. As a result, the approach produces several non-dominated solutions with different trade-offs. Two further approaches are implemented. One is based on a previous Unified Tabu Search, suitably modified to tackle this revised problem. The other is randomized descent and serves as a benchmark for comparing the results. Comprehensive computational results are presented, which show that the Unified Tabu Search still works well in minimizing the bins, but the genetic algorithm performs slightly better. When also considering maximum lateness, the genetic algorithm is considerably better.", "This paper presents algorithm for optimal reconfiguration of distribution networks using hybrid heuristic genetic algorithm. Improvements introduced in this approach make it suitable for real-life networks with realistic degree of complexity and network size. The algorithm introduces several improvements related to the generation of initial set of possible solutions as well as crossover and mutation steps in genetic algorithm. Since the genetic algorithms are often used in distribution network reconfiguration problem, its application is well known, but most of the approaches have very poor effectiveness due to high level of individuals' rejections not-fulfilling radial network constraints requirements and poor convergence rate. One part of these problems is related to ineffective creation of initial population individuals. 
The other part of the problem in similar approaches is related to inefficient operators implemented in crossover and mutation process over created set of population individuals. The hybrid heuristic-genetic approach presented in this paper provides significant improvements in these areas. The presented algorithm can be used to find optimal radial distribution network topology with minimum network losses or with optimally balanced network loading. The algorithm is tested on a real-size network of the city of Dubrovnik to identify the optimal network topology after the interpolation (connection) of a new supply point.", "We propose a hybrid algorithm for finding a set of nondominated solutions of a multi objective optimization problem. In the proposed algorithm, a local search procedure is applied to each solution (i.e., each individual) generated by genetic operations. Our algorithm uses a weighted sum of multiple objectives as a fitness function. The fitness function is utilized when a pair of parent solutions are selected for generating a new solution by crossover and mutation operations. A local search procedure is applied to the new solution to maximize its fitness value. One characteristic feature of our algorithm is to randomly specify weight values whenever a pair of parent solutions are selected. That is, each selection (i.e., the selection of two parent solutions) is performed by a different weight vector. Another characteristic feature of our algorithm is not to examine all neighborhood solutions of a current solution in the local search procedure. Only a small number of neighborhood solutions are examined to prevent the local search procedure from spending almost all available computation time in our algorithm. High performance of our algorithm is demonstrated by applying it to multi objective flowshop scheduling problems." ] }
0707.1548
2951394397
Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes.
Another approach determines a trade-off between the storage space allotted to indexes and to materialized views, depending on query definition @cite_48 . According to the authors, the key factors in query optimization are the aggregation level, defined by the attribute list of Group by clauses in SQL queries, and the selectivity of the attributes present in Where and Having clauses. View materialization indeed provides a great benefit for queries involving coarse-granularity aggregations (few attributes in the Group by clause), because they produce few groups from a large number of tuples. On the other hand, indexes provide their best benefit for queries containing high-selectivity attributes. Thus, queries with fine aggregations and high selectivity favor indexing, while queries with coarse aggregations and weak selectivity encourage view materialization.
{ "cite_N": [ "@cite_48" ], "mid": [ "1496130826", "2171492933", "2143672210", "1761301028" ], "abstract": [ "View materialization and indexing are the most effective techniques adopted in data warehouses to improve query performance. Since both materialization and indexing algorithms are driven by a constraint on the disk space made available for each, the designer would greatly benefit from being enabled to determine a priori which fractions of the global space available must be devoted to views and indexes, respectively, in order to optimally tune performances. In this paper we first present a comparative evaluation of the benefit (saving per disk page) brought by view materialization and indexing for a single query expressed on a star scheme. Then, we face the problem of determining an effective trade-off between the two space fractions for the core workload of the warehouse. Some experimental results are reported, which prove that the estimated trade-off is satisfactorily near to the optimal one.", "Materialized views can provide massive improvements in query processing time, especially for aggregation queries over large tables. To realize this potential, the query optimizer must know how and when to exploit materialized views. This paper presents a fast and scalable algorithm for determining whether part or all of a query can be computed from materialized views and describes how it can be incorporated in transformation-based optimizers. The current version handles views composed of selections, joins and a final group-by. Optimization remains fully cost based, that is, a single “best” rewrite is not selected by heuristic rules but multiple rewrites are generated and the optimizer chooses the best alternative in the normal way. Experimental results based on an implementation in Microsoft SQL Server show outstanding performance and scalability. 
Optimization time increases slowly with the number of views but remains low even up to a thousand.", "Cost-based query optimizers need to estimate the selectivity of conjunctive predicates when comparing alternative query execution plans. To this end, advanced optimizers use multivariate statistics to improve information about the joint distribution of attribute values in a table. The joint distribution for all columns is almost always too large to store completely, and the resulting use of partial distribution information raises the possibility that multiple, non-equivalent selectivity estimates may be available for a given predicate. Current optimizers use cumbersome ad hoc methods to ensure that selectivities are estimated in a consistent manner. These methods ignore valuable information and tend to bias the optimizer toward query plans for which the least information is available, often yielding poor results. In this paper we present a novel method for consistent selectivity estimation based on the principle of maximum entropy (ME). Our method exploits all available information and avoids the bias problem. In the absence of detailed knowledge, the ME approach reduces to standard uniformity and independence assumptions. Experiments with our prototype implementation in DB2 UDB show that use of the ME approach can improve the optimizer’s cardinality estimates by orders of magnitude, resulting in better plan quality and significantly reduced query execution times. For almost all queries, these improvements are obtained while adding only tens of milliseconds to the overall time required for query optimization.", "Efficient processing of aggregation queries is essential for decision support applications. This paper describes a class of query transformations, called eager aggregation and lazy aggregation, that allows a query optimizer to move group-by operations up and down the query tree. Eager aggregation partially pushes a groupby past a join. 
After a group-by is partially pushed down, we still need to perform the original groupby in the upper query block. Eager aggregation reduces the number of input rows to the join and thus may result in a better overall plan. The reverse transformation, lazy aggregation, pulls a group-by above a join and combines two group-by operations into one. This transformation is typically of interest when an aggregation query references a grouped view (a view containing a groupby). Experimental results show that the technique is very beneficial for queries in the TPC-D benchmark." ] }
0707.1913
2951035381
Collaborative work on unstructured or semi-structured documents, such as in literature corpora or source code, often involves agreed upon templates containing metadata. These templates are not consistent across users and over time. Rule-based parsing of these templates is expensive to maintain and tends to fail as new documents are added. Statistical techniques based on frequent occurrences have the potential to identify automatically a large fraction of the templates, thus reducing the burden on the programmers. We investigate the case of the Project Gutenberg corpus, where most documents are in ASCII format with preambles and epilogues that are often copied and pasted or manually typed. We show that a statistical approach can solve most cases though some documents require knowledge of English. We also survey various technical solutions that make our approach applicable to large data sets.
The algorithmics of finding frequent items or patterns has received much attention. For a survey of the stream-based algorithms, see Cormode and Muthukrishnan [p. 253] corm:whats-hot . Finding frequent patterns robustly is possible using gap constraints @cite_29 .
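For context, one of the classic one-pass stream algorithms covered by that survey, the Misra-Gries frequent-items summary, fits in a few lines; the example stream and parameter below are illustrative.

```python
def misra_gries(stream, k):
    """One-pass frequent-items summary keeping at most k-1 counters:
    every item occurring more than n/k times in a stream of length n is
    guaranteed to survive in the returned counter dictionary."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:                      # decrement all counters, drop zeros
            for y in list(counters):
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return counters

freq = misra_gries(list("abacadaaab"), k=2)   # 'a' (6 of 10 > n/2) must survive
```

The surviving items are only candidates; a second pass over the data (or the stored counts, which undercount by at most n/k) confirms the truly frequent ones.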
{ "cite_N": [ "@cite_29" ], "mid": [ "2136593687", "2103442564", "2117169652", "2116396873" ], "abstract": [ "As data mining techniques are being increasingly applied to non-traditional domains, existing approaches for finding frequent itemsets cannot be used as they cannot model the requirement of these domains. An alternate way of modeling the objects in these data sets is to use graphs. Within that model, the problem of finding frequent patterns becomes that of discovering subgraphs that occur frequently over the entire set of graphs.The authors present a computationally efficient algorithm for finding all frequent subgraphs in large graph databases. We evaluated the performance of the algorithm by experiments with synthetic datasets as well as a chemical compound dataset. The empirical results show that our algorithm scales linearly with the number of input transactions and it is able to discover frequent subgraphs from a set of graph transactions reasonably fast, even though we have to deal with computationally hard problems such as canonical labeling of graphs and subgraph isomorphism which are not necessary for traditional frequent itemset discovery.", "We propose a novel unsupervised method for discovering recurring patterns from a single view. A key contribution of our approach is the formulation and validation of a joint assignment optimization problem where multiple visual words and object instances of a potential recurring pattern are considered simultaneously. The optimization is achieved by a greedy randomized adaptive search procedure (GRASP) with moves specifically designed for fast convergence. We have quantified systematically the performance of our approach under stressed conditions of the input (missing features, geometric distortions). 
We demonstrate that our proposed algorithm outperforms state of the art methods for recurring pattern discovery on a diverse set of 400+ real world and synthesized test images.", "The application of frequent patterns in classification appeared in sporadic studies and achieved initial success in the classification of relational data, text documents and graphs. In this paper, we conduct a systematic exploration of frequent pattern-based classification, and provide solid reasons supporting this methodology. It was well known that feature combinations (patterns) could capture more underlying semantics than single features. However, inclusion of infrequent patterns may not significantly improve the accuracy due to their limited predictive power. By building a connection between pattern frequency and discriminative measures such as information gain and Fisher score, we develop a strategy to set minimum support in frequent pattern mining for generating useful patterns. Based on this strategy, coupled with a proposed feature selection algorithm, discriminative frequent patterns can be generated for building high quality classifiers. We demonstrate that the frequent pattern-based classification framework can achieve good scalability and high accuracy in classifying large datasets. Empirical studies indicate that significant improvement in classification accuracy is achieved (up to 12 in UCI datasets) using the so-selected discriminative frequent patterns.", "The application of frequent patterns in classification has demonstrated its power in recent studies. It often adopts a two-step approach: frequent pattern (or classification rule) mining followed by feature selection (or rule ranking). However, this two-step process could be computationally expensive, especially when the problem scale is large or the minimum support is low. 
It was observed that frequent pattern mining usually produces a huge number of \"patterns\" that could not only slow down the mining process but also make feature selection hard to complete. In this paper, we propose a direct discriminative pattern mining approach, DDPMine, to tackle the efficiency issue arising from the two-step approach. DDPMine performs a branch-and-bound search for directly mining discriminative patterns without generating the complete pattern set. Instead of selecting best patterns in a batch, we introduce a \"feature-centered\" mining approach that generates discriminative patterns sequentially on a progressively shrinking FP-tree by incrementally eliminating training instances. The instance elimination effectively reduces the problem size iteratively and expedites the mining process. Empirical results show that DDPMine achieves orders of magnitude speedup without any downgrade of classification accuracy. It outperforms the state-of-the-art associative classification methods in terms of both accuracy and efficiency." ] }
0707.1954
1483340443
Wireless sensor networks are often used for environmental monitoring applications. In this context sampling and reconstruction of a physical field is one of the most important problems to solve. We focus on a bandlimited field and find under which conditions on the network topology the reconstruction of the field is successful, with a given probability. We review irregular sampling theory, and analyze the problem using random matrix theory. We show that even a very irregular spatial distribution of sensors may lead to a successful signal reconstruction, provided that the number of collected samples is large enough with respect to the field bandwidth. Furthermore, we give the basis to analytically determine the probability of successful field reconstruction.
Few papers have addressed the problem of sampling and reconstruction in sensor networks. Efficient techniques for spatial sampling in sensor networks are proposed in @cite_2 @cite_4 . In particular, @cite_2 presents an algorithm to determine which sensor subsets should be selected to acquire data from an area of interest and which nodes should remain inactive to save energy. The algorithm chooses sensors in such a way that the node positions can be mapped into a blue-noise binary pattern. In @cite_4 , an adaptive sampling technique is described, which allows the central data collector to vary the number of active sensors, i.e., samples, according to the desired resolution level. Data acquisition is also studied in @cite_10 , where the authors consider a one-dimensional field, uniformly sampled at the Nyquist frequency by low-precision sensors. The authors show that the number of sensors (i.e., samples) can be traded off against the precision of the sensors. The problem of reconstructing a bandlimited signal from an irregular set of samples at unknown locations is addressed in @cite_3 . There, different solution methods are proposed, and the conditions under which there exist multiple solutions or a unique solution are discussed.
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2114129195", "2133489575", "2571527823", "2953338282" ], "abstract": [ "We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multi-coset sampling. To date, recovery methods for this sampling strategy ensure perfect reconstruction either when the band locations are known, or under strict restrictions on the possible spectral supports. In this paper, only the number of bands and their widths are assumed without any other limitations on the support. We describe how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples. To recover the signal, the continuous reconstruction is replaced by a single finite-dimensional problem without the need for discretization. The resulting problem is studied within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate, and provides a first systematic study of compressed sensing in a truly analog setting. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.", "Distributed sampling and reconstruction of a physical field using an array of sensors is a problem of considerable interest in environmental monitoring applications of sensor networks. Our recent work has focused on the sampling of bandlimited sensor fields. However, sensor fields are not perfectly bandlimited but typically have rapidly decaying spectra. 
In a classical sampling set-up it is possible to precede the A/D sampling operation with an appropriate analog anti-aliasing filter. However, in the case of sensor networks, this is infeasible since sampling must precede filtering. We show that even though the effects of aliasing on the reconstruction cannot be prevented due to the \"filter-less\" sampling constraint, they can be suitably controlled by oversampling and carefully reconstructing the field from the samples. We show using a dither-based scheme that it is possible to estimate non-bandlimited fields with a precision that depends on how fast the spectral content of the field decays. We develop a framework for analyzing non-bandlimited fields that leads to upper bounds on the maximum pointwise error for a spatial bit rate of R bits/meter. We present results for fields with exponentially decaying spectra as an illustration. In particular, we show that for fields f(t) with exponential tails, i.e., F(ω) < πα^(-α|ω|), the maximum pointwise error decays as c₂e^(-a₁√R) + c₃(1/√R)e^(-2a₁√R) with spatial bit rate R bits/meter. Finally, we show that for fields with spectra that have a finite second moment, the distortion decreases as O((1/N)^(2/3)) as the density of sensors, N, scales up to infinity. We show that if D is the targeted non-zero distortion, then the required (finite) rate R scales as O((1/√D) log(1/D)).", "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala [30], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. 
We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Renyi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is with high probability successful from d(pX) n+o(n) measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension n and k(n) nonzero entries, this implies reconstruction from k(n)+o(n) measurements. For “discrete” signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal pX.", "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by KrzakalaEtAl , message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) R 'enyi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix. 
For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For 'discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math ." ] }
0707.1954
1483340443
Wireless sensor networks are often used for environmental monitoring applications. In this context sampling and reconstruction of a physical field is one of the most important problems to solve. We focus on a bandlimited field and find under which conditions on the network topology the reconstruction of the field is successful, with a given probability. We review irregular sampling theory, and analyze the problem using random matrix theory. We show that even a very irregular spatial distribution of sensors may lead to a successful signal reconstruction, provided that the number of collected samples is large enough with respect to the field bandwidth. Furthermore, we give the basis to analytically determine the probability of successful field reconstruction.
Note that our work differs significantly from the studies above because we assume that the sensor locations are known (or can be determined @cite_11 @cite_6 @cite_0 ) and that the sensor precision is sufficiently high for the quantization error to be negligible. The question we pose is instead under which conditions (on the network system) the reconstruction of a bandlimited signal is successful with a given probability.
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_11" ], "mid": [ "2571527823", "2953338282", "2053334136", "2891091350" ], "abstract": [ "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala [30], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Renyi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is with high probability successful from d(pX) n+o(n) measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension n and k(n) nonzero entries, this implies reconstruction from k(n)+o(n) measurements. For “discrete” signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal pX.", "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by KrzakalaEtAl , message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. 
We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) Rényi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For 'discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math .", "We consider wireless sensor networks whose nodes are randomly deployed and, thus, provide an irregular sampling of the sensed field. The field is assumed to be bandlimited; a sink node collects the data gathered by the sensors and reconstructs the field by using a technique based on linear filtering. By taking the mean square error (MSE) as performance metric, we evaluate the effect of quasi-equally spaced sensor layouts on the quality of the reconstructed signal. The MSE is derived through asymptotic analysis for different sensor spatial distributions, and for two of them we are able to obtain an approximate closed form expression. The case of uniformly distributed sensors is also considered for the sake of comparison. The validity of our asymptotic analysis is shown by comparison against numerical results and it is proven to hold even for a small number of nodes. 
Finally, with the help of a simple example, we show the key role that our results play in the deployment of sensor networks.", "In this paper we propose a novel vertex based sampling method for k-bandlimited signals lying on arbitrary graphs, that has a reasonable computational complexity and results in low reconstruction error. Our goal is to find the smallest set of vertices that can guarantee a perfect reconstruction of any k-bandlimited signal on any connected graph. We propose to iteratively search for the vertices that yield the minimum reconstruction error, by minimizing the maximum eigenvalue of the error covariance matrix using a linear solver. We compare the performance of our method with state-of-the-art sampling strategies and random sampling on graphs. Experimental results show that our method successfully computes the smallest sample sets on arbitrary graphs without any parameter tuning. It provides a small reconstruction error, and is robust to noise." ] }
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as, anonymity, censoring, traceability, and confidentiality.
Propelled by the increasing interest in next-generation Internet architectures and, in particular, Content-Oriented Networking (CON), the research community has produced a large body of work dealing with CON building blocks @cite_27 @cite_3 @cite_33 @cite_44 @cite_24 , performance @cite_78 @cite_49 @cite_72 @cite_82 , and scalability @cite_14 @cite_45 . However, the effort to analyze and enhance security in CON is only beginning -- in particular, very little work has focused on privacy and anonymity. In this section, we review relevant prior work.
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_78", "@cite_3", "@cite_44", "@cite_24", "@cite_27", "@cite_72", "@cite_45", "@cite_49", "@cite_82" ], "mid": [ "2337767373", "2018793106", "1535296432", "2120514843" ], "abstract": [ "With the growing realization that current Internet protocols are reaching the limits of their senescence, several ongoing research efforts aim to design potential next-generation Internet architectures. Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denialof-Service (DoS) attacks that plague today’s Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in Named Data Networking (NDN) – a specific candidate for next-generation Internet architecture designs. By naming data instead of its locations, NDN transforms data into a first-class entity and makes itself an attractive and viable approach to meet the needs for many current and emerging applications. It also incorporates some basic security features that mitigate classes of attacks that are commonly seen today. However, NDN’s resilience to DoS attacks has not been analyzed to-date. This paper represents a first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. This paper also sheds some light on the debate about relative virtues of self-certifying, as opposed to human-readable, names in the context of content-centric networking.", "With the growing realization that current Internet protocols are reaching the limits of their senescence, several on-going research efforts aim to design potential next-generation Internet architectures. 
Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denial-of-Service (DoS) attacks that plague today's Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in Named Data Networking (NDN) - a specific candidate for next-generation Internet architecture designs. By naming data instead of its locations, NDN transforms data into a first-class entity and makes itself an attractive and viable approach to meet the needs for many current and emerging applications. It also incorporates some basic security features that mitigate classes of attacks that are commonly seen today. However, NDN's resilience to DoS attacks has not been analyzed to-date. This paper represents a first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. This paper also sheds some light on the debate about relative virtues of self-certifying, as opposed to human-readable, names in the context of content-centric networking.", "Content-centric networking proposals, as Parc's CCN, have recently emerged to define new network architectures where content, and not its location, becomes the core of the communication model. These new paradigms push data storage and delivery at network layer and are designed to better deal with current Internet usage, mainly centered around content dissemination and retrieval. In this paper, we develop an analytical model of CCN in-network storage and receiver-driven transport, that more generally applies to a class of content oriented networks identified by chunk-based communication. We derive a closed-form expression for the mean stationary throughput as a function of hit/miss probabilities at the caches along the path, of content popularity and of content cache size. 
Our analytical results, supported by chunk level simulations, can be used to analyze fundamental trade-offs in current CCN architecture, and provide an essential building block for the design and evaluation of enhanced CCN protocols.", "Research on performance, robustness, and evolution of the global Internet is fundamentally handicapped without accurate and thorough knowledge of the nature and structure of the contractual relationships between Autonomous Systems (ASs). In this work we introduce novel heuristics for inferring AS relationships. Our heuristics improve upon previous works in several technical aspects, which we outline in detail and demonstrate with several examples. Seeking to increase the value and reliability of our inference results, we then focus on validation of inferred AS relationships. We perform a survey with ASs' network administrators to collect information on the actual connectivity and policies of the surveyed ASs. Based on the survey results, we find that our new AS relationship inference techniques achieve high levels of accuracy: we correctly infer 96.5 customer to provider (c2p), 82.8 peer to peer (p2p), and 90.3 sibling to sibling (s2s) relationships. We then cross-compare the reported AS connectivity with the AS connectivity data contained in BGP tables. We find that BGP tables miss up to 86.2 of the true adjacencies of the surveyed ASs. The majority of the missing links are of the p2p type, which highlights the limitations of present measuring techniques to capture links of this type. Finally, to make our results easily accessible and practically useful for the community, we open an AS relationship repository where we archive, on a weekly basis, and make publicly available the complete Internet AS-level topology annotated with AS relationship information for every pair of AS neighbors." ] }
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as, anonymity, censoring, traceability, and confidentiality.
Security in CON. Wong and Nikander @cite_25 address the security of naming mechanisms by constructing the content name as the concatenation of the content provider's ID, the cryptographic ID of the content, and some metadata. @cite_5 adopt a similar approach, where the content name is defined as the concatenation of the hash of the public key and a set of attributes. Both schemes rely on cryptographic hash functions to name the content, which results in flat, human-unreadable names. @cite_76 show that these schemes have several drawbacks, including the need for an indirection mechanism to map and the lack of binding between the name and the producer's identity. To resolve these shortcomings, they propose to keep hierarchical, human-readable names while signing both the content name and the content itself with the producer's public key. @cite_31 study DoS and DDoS attacks in CCN @cite_27 , presenting attacks and proposing some initial countermeasures. In another context, @cite_35 propose a secure lighting system over Named Data Networking (NDN), providing access control to fixtures via authorization policies coupled with strong authentication. This approach is a first attempt to port CON beyond the content-distribution scenario.
{ "cite_N": [ "@cite_35", "@cite_27", "@cite_5", "@cite_31", "@cite_76", "@cite_25" ], "mid": [ "2514172418", "2620403523", "2092433893", "145176944" ], "abstract": [ "User, content, and device names as a security primitive have been an attractive approach especially in the context of Information-Centric Networking (ICN) architectures. We leverage Hierarchical Identity Based Encryption (HIBE) to build (content) name-based security mechanisms used for securely distributing content. In contrast to similar approaches, in our system each user maintains his own Private Key Generator used for generating the master secret key and the public system parameters required by the HIBE algorithm. This way our system does not suffer from the key escrow problem, which is inherent in many similar solutions. In order to disseminate the system parameters of a content owner in a fully distributed way, we use blockchains, a distributed, community managed, global list of transactions.", "We propose a multi-user Symmetric Searchable Encryption (SSE) scheme based on the single-user Oblivious Cross Tags (OXT) protocol (, CRYPTO 2013). The scheme allows any user to perform a search query by interacting with the server and any ( -1 ) ‘helping’ users, and preserves the privacy of database content against the server even assuming leakage of up to ( -1 ) users’ keys to the server (for a threshold parameter ( )), while hiding the query from the ( -1 ) ‘helping users’. To achieve the latter query privacy property, we design a new distributed key-homomorphic pseudorandom function (PRF) that hides the PRF input (search keyword) from the ‘helping’ key share holders. By distributing the utilized keys among the users, the need of constant online presence of the data owner to provide services to the users is eliminated, while providing resilience against user key exposure.", "Several projects propose an information-centric approach to the network of the future. 
Such an approach makes efficient content distribution possible by making information retrieval host-independent and integrating into the network storage for caching information. Requests for particular content can, thus, be satisfied by any host or server holding a copy. The current security model based on host authentication is not applicable in this context. Basic security functionality must instead be attached directly to the data and its naming scheme. A naming scheme to name content and other objects that enables verification of data integrity as well as owner authentication and identification is here presented. The naming scheme is designed for flexibility and extensibility, e.g., to integrate other security properties like access control. At the same time, the naming scheme offers persistent IDs even though the content, content owner and/or owner's organizational structure, or location change. The requirements for the naming scheme and an analysis showing how the proposed scheme fulfills them are presented. Experience with prototyping the naming scheme is also discussed. The naming scheme builds the foundation for a secure information-centric network infrastructure that can also solve some of the main security problems of today's Internet.", "We consider what constitutes identities in cryptography. Typical examples include your name and your social-security number, or your fingerprint/iris-scan, or your address, or your (non-revoked) public-key coming from some trusted public-key infrastructure. In many situations, however, where you are defines your identity. For example, we know the role of a bank-teller behind a bullet-proof bank window not because she shows us her credentials but by merely knowing her location. In this paper, we initiate the study of cryptographic protocols where the identity (or other credentials and inputs) of a party are derived from its geographic location. 
We start by considering the central task in this setting, i.e., securely verifying the position of a device. Despite much work in this area, we show that in the Vanilla (or standard) model, the above task (i.e., of secure positioning) is impossible to achieve. In light of the above impossibility result, we then turn to the Bounded Storage Model and formalize and construct information theoretically secure protocols for two fundamental tasks: Secure Positioning; and Position Based Key Exchange. We then show that these tasks are in fact universal in this setting --- we show how we can use them to realize Secure Multi-Party Computation.Our main contribution in this paper is threefold: to place the problem of secure positioning on a sound theoretical footing; to prove a strong impossibility result that simultaneously shows the insecurity of previous attempts at the problem; and to present positive results by showing that the bounded-storage framework is, in fact, one of the \"right\" frameworks (there may be others) to study the foundations of position-based cryptography." ] }
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as, anonymity, censoring, traceability, and confidentiality.
Privacy Issues in CON. To the best of our knowledge, the only related privacy study is the recent work in @cite_48 @cite_37 , which covers the security and privacy issues of CCN @cite_27 . Specifically, the authors highlight a few Denial-of-Service (DoS) vulnerabilities as well as different cache-related attacks. In CCN, a possible DoS attack (also discussed in @cite_27 ) relies on resource exhaustion, targeting either routers or the content source. Routers are forced to perform expensive computations, such as signature verification, which negatively affects the quality of service and can ultimately block traffic. The content source can also be flooded with a huge number of interests, ultimately denying service to legitimate users. Additional DoS attacks mainly target the caching mechanism, either to decrease network performance or to gain free and uncontrolled storage. Transforming the cache into permanent storage is achieved by continuously issuing interests for a desired file. Decreased network performance can also be achieved through cache pollution.
{ "cite_N": [ "@cite_48", "@cite_37", "@cite_27" ], "mid": [ "1984778122", "2159587870", "2901937773", "2590898937" ], "abstract": [ "With the advent of content-centric networking (CCN) where contents can be cached on each CCN router, cache robustness will soon emerge as a serious concern for CCN deployment. Previous studies on cache pollution attacks only focus on a single cache server. The question of how caching will behave over a general caching network such as CCN under cache pollution attacks has never been answered. In this paper, we propose a novel scheme called CacheShield for enhancing cache robustness. CacheShield is simple, easy-to-deploy, and applicable to any popular cache replacement policy. CacheShield can effectively improve cache performance under normal circumstances, and more importantly, shield CCN routers from cache pollution attacks. Extensive simulations including trace-driven simulations demonstrate that CacheShield is effective for both CCN and today's cache servers. We also study the impact of cache pollution attacks on CCN and reveal several new observations on how different attack scenarios can affect cache hit ratios unexpectedly.", "Content-Centric Networking (CCN) is an emerging paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. In CCN, named content - rather than addressable hosts - becomes a first-class entity. Content is therefore decoupled from its location. This allows, among other things, the implementation of ubiquitous caching. Named-Data Networking (NDN) is a prominent example of CCN. In NDN, all nodes (i.e., hosts, routers) are allowed to have a local cache, used to satisfy incoming requests for content. This makes NDN a good architecture for efficient large scale content distribution. However, reliance on caching allows an adversary to perform attacks that are very effective and relatively easy to implement. 
Such attacks include cache poisoning (i.e., introducing malicious content into caches) and cache pollution (i.e., disrupting cache locality). This paper focuses on cache pollution attacks, where the adversary's goal is to disrupt cache locality to increase link utilization and cache misses for honest consumers. We show, via simulations, that such attacks can be implemented in NDN using limited resources, and that their effectiveness is not limited to small topologies. We then illustrate that existing proactive countermeasures are ineffective against realistic adversaries. Finally, we introduce a new technique for detecting pollution attacks. Our technique detects high and low rate attacks on different topologies with high accuracy.", "Information leakage of sensitive data has become one of the fast growing concerns among computer users. With adversaries turning to hardware for exploits, caches are frequently a target for timing channels since they present different timing profiles for cache miss and hit latencies. Such timing channels operate by having an adversary covertly communicate secrets to a spy simply through modulating resource timing without leaving any physical evidence. In this article, we demonstrate a new vulnerability exposed by cache coherence protocols where adversaries could manipulate the coherence states on certain cache blocks to alter cache access timing and communicate secrets illegitimately. Our threat model assumes the trojan and spy can either exploit explicitly shared read-only physical pages (e.g., shared library code), or use memory deduplication feature to implicitly force create shared physical pages. We demonstrate a template that adversaries may use to construct covert timing channels through manipulating combinations of coherence states and data placement in different caches. 
We investigate several classes of cache coherence protocols, and observe that both directory-based and snoopy protocols can be subject to covert timing channel attacks. We identify that the root cause of the vulnerability to be the existence of access latency difference for cache lines in read-only cache coherence states: Exlusive and Shared. For defense, we propose a slightly modified cache coherence scheme that will enable the last level cache to directly respond to read data requests in these read-only coherence states, and avoid any latency difference that could enable timing channels.", "The fast-growing Internet traffic is increasingly becoming content-based and driven by mobile users, with users more interested in data rather than its source. This has precipitated the need for an information-centric Internet architecture. Research in information-centric networks (ICNs) have resulted in novel architectures, e.g., CCN NDN, DONA, and PSIRP PURSUIT; all agree on named data based addressing and pervasive caching as integral design components. With network-wide content caching, enforcement of content access control policies become non-trivial. Each caching node in the network needs to enforce access control policies with the help of the content provider. This becomes inefficient and prone to unbounded latencies especially during provider outages. In this paper, we propose an efficient access control framework for ICN, which allows legitimate users to access and use the cached content directly, and does not require verification authentication by an online provider authentication server or the content serving router. This framework would help reduce the impact of system down-time from server outages and reduce delivery latency by leveraging caching while guaranteeing access only to legitimate users. 
Experimental simulation results demonstrate the suitability of this scheme for all users, but particularly for mobile users, especially in terms of the security and latency overheads." ] }
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality.
From the privacy perspective, the work in @cite_48 @cite_37 identifies the issue of information leakage through caches in CCN. It proposes a few simple countermeasures, following detection and prevention approaches. The former can be achieved using techniques similar to those addressing cache pollution attacks in IP @cite_53 , although such an approach can be difficult to port to CON due to the lack of source addresses. The latter can be global, i.e., treating all traffic as sensitive, delaying all traffic, or deploying a shared cache to circumvent the attack. Alternatively, a selective prevention approach may try to distinguish between sensitive and non-sensitive content, based on content popularity and context (time, location), and then delay or tunnel only sensitive content. It is not clear, however, how to implement the selection mechanism that distinguishes private from non-private content; the authors of @cite_37 suggest implementing this service either in the network layer (i.e., the router classifies the content) or at the host (i.e., the content source tags sensitive content). Such classification is in turn a very challenging task, since privacy is a relative notion that changes from one user to another. Other attacks are also briefly discussed, although no countermeasures besides tunneling have been proposed.
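The cache-based information leak that these countermeasures target can be sketched as a timing probe: a request answered from a nearby cache returns noticeably faster than one that must travel to the origin, so an adversary sharing a cache with a victim can test whether a given name was recently fetched. The latency ranges, threshold, and names below are illustrative assumptions, not measurements of any real deployment.

```python
import random

# Hypothetical latencies (ms): a cached copy answers from the edge router,
# a miss travels to the origin server. Exact numbers are illustrative only.
HIT_LATENCY = (5, 15)
MISS_LATENCY = (60, 120)

class EdgeRouter:
    """Toy shared cache: remembers names requested by *any* local user."""
    def __init__(self, rng):
        self.cached = set()
        self.rng = rng

    def fetch(self, name):
        lo, hi = HIT_LATENCY if name in self.cached else MISS_LATENCY
        self.cached.add(name)  # every fetch leaves a copy in the cache
        return self.rng.uniform(lo, hi)

def probe(router, name, threshold_ms=40):
    """Adversary's test: latency below the threshold implies a prior fetch."""
    return router.fetch(name) < threshold_ms

rng = random.Random(1)
router = EdgeRouter(rng)

# A victim on the same access link requests a sensitive name.
router.fetch("/clinic/test-results")

# The adversary probes two names and learns which one the victim accessed.
leak_sensitive = probe(router, "/clinic/test-results")
leak_other = probe(router, "/news/frontpage")
print(f"victim fetched /clinic/test-results: {leak_sensitive}")
print(f"victim fetched /news/frontpage: {leak_other}")
```

The global countermeasures above map directly onto this sketch: delaying all responses (or answering every probe from a shared upstream cache) collapses the two latency distributions, removing the signal the probe relies on, at the cost of losing the caching benefit.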
{ "cite_N": [ "@cite_48", "@cite_37", "@cite_53" ], "mid": [ "2901937773", "2159587870", "2949272603", "2021931360" ], "abstract": [ "Information leakage of sensitive data has become one of the fast growing concerns among computer users. With adversaries turning to hardware for exploits, caches are frequently a target for timing channels since they present different timing profiles for cache miss and hit latencies. Such timing channels operate by having an adversary covertly communicate secrets to a spy simply through modulating resource timing without leaving any physical evidence. In this article, we demonstrate a new vulnerability exposed by cache coherence protocols where adversaries could manipulate the coherence states on certain cache blocks to alter cache access timing and communicate secrets illegitimately. Our threat model assumes the trojan and spy can either exploit explicitly shared read-only physical pages (e.g., shared library code), or use memory deduplication feature to implicitly force create shared physical pages. We demonstrate a template that adversaries may use to construct covert timing channels through manipulating combinations of coherence states and data placement in different caches. We investigate several classes of cache coherence protocols, and observe that both directory-based and snoopy protocols can be subject to covert timing channel attacks. We identify that the root cause of the vulnerability to be the existence of access latency difference for cache lines in read-only cache coherence states: Exlusive and Shared. For defense, we propose a slightly modified cache coherence scheme that will enable the last level cache to directly respond to read data requests in these read-only coherence states, and avoid any latency difference that could enable timing channels.", "Content-Centric Networking (CCN) is an emerging paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. 
In CCN, named content - rather than addressable hosts - becomes a first-class entity. Content is therefore decoupled from its location. This allows, among other things, the implementation of ubiquitous caching. Named-Data Networking (NDN) is a prominent example of CCN. In NDN, all nodes (i.e., hosts, routers) are allowed to have a local cache, used to satisfy incoming requests for content. This makes NDN a good architecture for efficient large scale content distribution. However, reliance on caching allows an adversary to perform attacks that are very effective and relatively easy to implement. Such attacks include cache poisoning (i.e., introducing malicious content into caches) and cache pollution (i.e., disrupting cache locality). This paper focuses on cache pollution attacks, where the adversary's goal is to disrupt cache locality to increase link utilization and cache misses for honest consumers. We show, via simulations, that such attacks can be implemented in NDN using limited resources, and that their effectiveness is not limited to small topologies. We then illustrate that existing proactive countermeasures are ineffective against realistic adversaries. Finally, we introduce a new technique for detecting pollution attacks. Our technique detects high and low rate attacks on different topologies with high accuracy.", "We investigate the problem of optimal request routing and content caching in a heterogeneous network supporting in-network content caching with the goal of minimizing average content access delay. Here, content can either be accessed directly from a back-end server (where content resides permanently) or be obtained from one of multiple in-network caches. To access a piece of content, a user must decide whether to route its request to a cache or to the back-end server. Additionally, caches must decide which content to cache. 
We investigate the problem complexity of two problem formulations, where the direct path to the back-end server is modeled as i) a congestion-sensitive or ii) a congestion-insensitive path, reflecting whether or not the delay of the uncached path to the back-end server depends on the user request load, respectively. We show that the problem is NP-complete in both cases. We prove that under the congestion-insensitive model the problem can be solved optimally in polynomial time if each piece of content is requested by only one user, or when there are at most two caches in the network. We also identify a structural property of the user-cache graph that potentially makes the problem NP-complete. For the congestion-sensitive model, we prove that the problem remains NP-complete even if there is only one cache in the network and each content is requested by only one user. We show that approximate solutions can be found for both models within a (1-1 e) factor of the optimal solution, and demonstrate a greedy algorithm that is found to be within 1 of optimal for small problem sizes. Through trace-driven simulations we evaluate the performance of our greedy algorithms, which show up to a 50 reduction in average delay over solutions based on LRU content caching.", "In this work, we study information leakage in timing side channels that arise in the context of shared event schedulers. Consider two processes, one of them an innocuous process (referred to as Alice) and the other a malicious one (referred to as Bob), using a common scheduler to process their jobs. Based on when his jobs get processed, Bob wishes to learn about the pattern (size and timing) of jobs of Alice. Depending on the context, knowledge of this pattern could have serious implications on Alice's privacy and security. For instance, shared routers can reveal traffic patterns, shared memory access can reveal cloud usage patterns, and suchlike. 
We present a formal framework to study the information leakage in shared resource schedulers using the pattern estimation error as a performance metric. In this framework, a uniform upper bound is derived to benchmark different scheduling policies. The first-come-first-serve scheduling policy is analyzed, and shown to leak significant information when the scheduler is loaded heavily. To mitigate the timing information leakage, we propose an “Accumulate-and-Serve” policy which trades in privacy for a higher delay. The policy is analyzed under the proposed framework and is shown to leak minimum information to the attacker, and is shown to have comparatively lower delay than a fixed scheduler that preemptively assigns service times irrespective of traffic patterns." ] }
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality.
Our work extends @cite_48 @cite_37 by encompassing all privacy aspects: caching, naming, signatures, and content. It is also more general, as it considers not only CCN @cite_27 but CON as a whole, independently of any specific instantiation. Furthermore, when suggesting countermeasures, we only propose techniques that can be applied with minimal changes to the architecture.
{ "cite_N": [ "@cite_48", "@cite_37", "@cite_27" ], "mid": [ "2159587870", "2071740737", "2313059177", "2590898937" ], "abstract": [ "Content-Centric Networking (CCN) is an emerging paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. In CCN, named content - rather than addressable hosts - becomes a first-class entity. Content is therefore decoupled from its location. This allows, among other things, the implementation of ubiquitous caching. Named-Data Networking (NDN) is a prominent example of CCN. In NDN, all nodes (i.e., hosts, routers) are allowed to have a local cache, used to satisfy incoming requests for content. This makes NDN a good architecture for efficient large scale content distribution. However, reliance on caching allows an adversary to perform attacks that are very effective and relatively easy to implement. Such attacks include cache poisoning (i.e., introducing malicious content into caches) and cache pollution (i.e., disrupting cache locality). This paper focuses on cache pollution attacks, where the adversary's goal is to disrupt cache locality to increase link utilization and cache misses for honest consumers. We show, via simulations, that such attacks can be implemented in NDN using limited resources, and that their effectiveness is not limited to small topologies. We then illustrate that existing proactive countermeasures are ineffective against realistic adversaries. Finally, we introduce a new technique for detecting pollution attacks. Our technique detects high and low rate attacks on different topologies with high accuracy.", "Named Data Networking architectures have been proposed to improve various shortcomings of the current Internet architecture. A key part of these proposals is the capability of caching arbitrary content in arbitrary network locations. 
While caching has the potential to improve network performance, the data stored in caches can be seen as transient traces of past communication that attackers can exploit to compromise the users' privacy. With this editorial note, we aim to raise awareness of privacy attacks as an intrinsic and relevant issue in Named Data Networking architectures. Countermeasures against privacy attacks are subject to a trade-off between performance and privacy. We discuss several approaches to countermeasures representing different incarnations of this tradeoff, along with open issues to be looked at by the research community.", "Caching scheme will change the original feature of the network in Content Centric Networking (CCN). So it becomes a challenge to describe the caching node importance according to network traffic and user behavior. In this work, a new metric named Request Influence Degree (RID) is defined to reflect the degree of node importance. Then the caching performance of CCN has been addressed with specially focusing on the size of individual CCN router caches. Finally, a newly content store space heterogeneous allocation scheme based on the metric RID across the CCN network has been proposed. Numerical experiments reveal that the new scheme can decrease the routing stretch and the source server load contrasting that of the homogeneous assignment and several graph-related centrality metrics allocations.", "The fast-growing Internet traffic is increasingly becoming content-based and driven by mobile users, with users more interested in data rather than its source. This has precipitated the need for an information-centric Internet architecture. Research in information-centric networks (ICNs) have resulted in novel architectures, e.g., CCN NDN, DONA, and PSIRP PURSUIT; all agree on named data based addressing and pervasive caching as integral design components. With network-wide content caching, enforcement of content access control policies become non-trivial. 
Each caching node in the network needs to enforce access control policies with the help of the content provider. This becomes inefficient and prone to unbounded latencies especially during provider outages. In this paper, we propose an efficient access control framework for ICN, which allows legitimate users to access and use the cached content directly, and does not require verification authentication by an online provider authentication server or the content serving router. This framework would help reduce the impact of system down-time from server outages and reduce delivery latency by leveraging caching while guaranteeing access only to legitimate users. Experimental simulation results demonstrate the suitability of this scheme for all users, but particularly for mobile users, especially in terms of the security and latency overheads." ] }
1211.5183
2952217788
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality.
Anonymity in CON. AND @math NA @cite_30 proposes a Tor-like anonymizing tool for CCN @cite_27 that provides provable anonymity. It also aims at privacy protection via simple tunneling. However, as discussed in , AND @math NA is an ``all-in-one'' solution that introduces latency and impedes caching. Fine-grained privacy solutions are needed instead, since widespread use of tunneling would inherently take away most of CON's benefits in terms of performance and scalability. To provide censorship resistance, @cite_57 describes an algorithm that mixes legitimate sensitive content with so-called ``cover files'' to hide it. An adversary monitoring the traffic would only see the ``mixed'' content, which prevents it from censoring the sensitive payload.
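The cover-file idea can be illustrated with a deliberately simple XOR split (a toy construction for intuition, not the algorithm of @cite_57): the sensitive file is combined with random cover data so that each published piece, taken on its own, is statistically indistinguishable from random bytes.

```python
import os

def mix(sensitive: bytes):
    """Split a sensitive file into a random cover file and a masked file.

    Neither published piece reveals the payload by itself: the cover is pure
    randomness, and the masked file is the payload XORed with that randomness.
    A censor monitoring content cannot identify (and thus block) the sensitive
    file without obtaining and combining both pieces.
    """
    cover = os.urandom(len(sensitive))
    masked = bytes(a ^ b for a, b in zip(sensitive, cover))
    return cover, masked

def unmix(cover: bytes, masked: bytes) -> bytes:
    """Recombine the two published pieces to recover the original payload."""
    return bytes(a ^ b for a, b in zip(cover, masked))

payload = b"banned pamphlet"
cover, masked = mix(payload)
recovered = unmix(cover, masked)
print(recovered)  # the legitimate receiver, holding both pieces, recovers the file
```

This is essentially a two-party one-time pad; the scheme in @cite_57 is more involved, but the privacy argument is the same: censoring requires the adversary to correlate the pieces, not merely inspect each one.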
{ "cite_N": [ "@cite_30", "@cite_27", "@cite_57" ], "mid": [ "2161829417", "1975016298", "1992286709", "2164649498" ], "abstract": [ "The ability to locate random relays is a key challenge for peer-to-peer (P2P) anonymous communication systems. Earlier attempts like Salsa and AP3 used distributes hash table lookups to locate relays, but the lack of anonymity in their lookup mechanisms enables an adversary to infer the path structure and compromise used anonymity. NISAN and Torsk are state-of-the-art systems for P2P anonymous communication. Their designs include mechanisms that are specifically tailored to mitigate information leak attacks. NISAN proposes to add anonymity into the lookup mechanism itself, while Torsk proposes the use of secret buddy nodes to anonymize the lookup initiator. In this paper, we attack the key mechanisms that hide the relationship between a lookup initiator and its selected relays in NISAN and Torsk. We present passive attacks on the NISAN lookup and show that it is not as anonymous as previously thought. We analyze three circuit construction mechanisms for anonymous communication using the NISAN lookup, and show that the information leaks in the NISAN lookup lead to a significant reduction in user anonymity. We also propose active attacks on Torsk that defeat its secret buddy mechanism and consequently compromise user anonymity. Our results are backed up by probabilistic modeling and extensive simulations. Our study motivates the search for a DHT lookup mechanism that is both secure and anonymous.", "Existing IP anonymity systems tend to sacrifice one of low latency, high bandwidth, or resistance to traffic-analysis. High-latency mix-nets like Mixminion batch messages to resist traffic-analysis at the expense of low latency. Onion routing schemes like Tor deliver low latency and high bandwidth, but are not designed to withstand traffic analysis. 
Designs based on DC-nets or broadcast channels resist traffic analysis and provide low latency, but are limited to low bandwidth communication. In this paper, we present the design, implementation, and evaluation of Aqua, a high-bandwidth anonymity system that resists traffic analysis. We focus on providing strong anonymity for BitTorrent, and evaluate the performance of Aqua using traces from hundreds of thousands of actual BitTorrent users. We show that Aqua achieves latency low enough for efficient bulk TCP flows, bandwidth sufficient to carry BitTorrent traffic with reasonable efficiency, and resistance to traffic analysis within anonymity sets of hundreds of clients. We conclude that Aqua represents an interesting new point in the space of anonymity network designs.", "It is not uncommon in the data anonymization literature to oppose the \"old\" @math k -anonymity model to the \"new\" differential privacy model, which offers more robust privacy guarantees. Yet, it is often disregarded that the utility of the anonymized results provided by differential privacy is quite limited, due to the amount of noise that needs to be added to the output, or because utility can only be guaranteed for a restricted type of queries. This is in contrast with @math k -anonymity mechanisms, which make no assumptions on the uses of anonymized data while focusing on preserving data utility from a general perspective. In this paper, we show that a synergy between differential privacy and @math k -anonymity can be found: @math k -anonymity can help improving the utility of differentially private responses to arbitrary queries. We devote special attention to the utility improvement of differentially private published data sets. Specifically, we show that the amount of noise required to fulfill @math ? 
-differential privacy can be reduced if noise is added to a @math k -anonymous version of the data set, where @math k -anonymity is reached through a specially designed microaggregation of all attributes. As a result of noise reduction, the general analytical utility of the anonymized output is increased. The theoretical benefits of our proposal are illustrated in a practical setting with an empirical evaluation on three data sets.", "The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain “identifying” attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented (in Section 2) values for each sensitive attribute. In this paper, we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. Motivated by these limitations, we propose a new notion of privacy called “closeness.” We first present the base model t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We then propose a more flexible privacy model called (n,t)-closeness that offers higher utility. We describe our desiderata for designing a distance measure between two probability distributions and present two distance measures. We discuss the rationale for using closeness as a privacy measure and illustrate its advantages through examples and experiments." ] }
1211.5263
2128011898
A smooth affine hypersurface Z of complex dimension n is homotopy equivalent to an n-dimensional cell complex. Given a defining polynomial f for Z as well as a regular triangulation of its Newton polytope, we provide a purely combinatorial construction of a compact topological space S as a union of components of real dimension n, and prove that S embeds into Z as a deformation retract. In particular, Z is homotopy equivalent to S.
A skeleton for Fermat hypersurfaces was described by Deligne [pp. 88--90], and this skeleton is visible in our own in a manner described in Remark . Our ``skeleta'' are different from the ``skeleta'' that appear in nonarchimedean geometry @cite_10 @cite_15 , but @math plays a similar role in both constructions. It would be interesting to study this resemblance further.
{ "cite_N": [ "@cite_15", "@cite_10" ], "mid": [ "2021150171", "2022460695", "2048821851", "2769356789" ], "abstract": [ "Recent advances on human motion analysis have made the extraction of human skeleton structure feasible, even from single depth images. This structure has been proven quite informative for discriminating actions in a recognition scenario. In this context, we propose a local skeleton descriptor that encodes the relative position of joint quadruples. Such a coding implies a similarity normalisation transform that leads to a compact (6D) view-invariant skelet al feature, referred to as skelet al quad. Further, the use of a Fisher kernel representation is suggested to describe the skelet al quads contained in a (sub)action. A Gaussian mixture model is learnt from training data, so that the generation of any set of quads is encoded by its Fisher vector. Finally, a multi-level representation of Fisher vectors leads to an action description that roughly carries the order of sub-action within each action sequence. Efficient classification is here achieved by linear SVMs. The proposed action representation is tested on widely used datasets, MSRAction3D and HDM05. The experimental evaluation shows that the proposed method outperforms state-of-the-art algorithms that rely only on joints, while it competes with methods that combine joints with extra cues.", "In this paper we explore the idea of characterizing sentences by the shapes of their structural descriptions only; for example, in the case of context free grammars, by the shapes of the derivation trees only. Such structural descriptions will be called skeletons . A skeleton exhibits all of the grouping structure (phrase structure) of the sentence without naming the syntactic categories used in the description. The inclusion of syntactic categories as variables is primarily a question of economy of description. 
Every context free grammar is strongly equivalent to a skelet al grammar, in a sense made precise in the paper. Besides clarifying the role of skeletons in mathematical linguistics, we show that skelet al automata provide a characterization of local sets, remedying a “defect” in the usual tree automata theory. We extend the method of skelet al structural descriptions to other forms of tree describing systems. We also suggest a theoretical basis for grammatical inference based on grouping structure only.", "Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skelet al representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skelet al representation lies in the Lie group SE(3)×…×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skelet al representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.", "Human skeleton joints are popular for action analysis since they can be easily extracted from videos to discard background noises. 
However, current skeleton representations do not fully benefit from machine learning with CNNs. We propose \"Skepxels\", a spatio-temporal representation for skeleton sequences to fully exploit the \"local\" correlations between joints using the 2D convolution kernels of CNN. We transform skeleton videos into images of flexible dimensions using Skepxels and develop a CNN-based framework for effective human action recognition using the resulting images. Skepxels encode rich spatio-temporal information about the skeleton joints in the frames by maximizing a unique distance metric, defined collaboratively over the distinct joint arrangements used in the skeletal image. Moreover, they are flexible in encoding compound semantic notions such as location and speed of the joints. The proposed action recognition exploits the representation in a hierarchical manner by first capturing the micro-temporal relations between the skeleton joints with the Skepxels and then exploiting their macro-temporal relations by computing the Fourier Temporal Pyramids over the CNN features of the skeletal images. We extend the Inception-ResNet CNN architecture with the proposed method and improve the state-of-the-art accuracy by 4.4 on the large scale NTU human activity dataset. On the medium-sized N-UCLA and UTH-MHAD datasets, our method outperforms the existing results by 5.7 and 9.3 respectively." ] }
1211.5263
2128011898
A smooth affine hypersurface Z of complex dimension n is homotopy equivalent to an n-dimensional cell complex. Given a defining polynomial f for Z as well as a regular triangulation of its Newton polytope, we provide a purely combinatorial construction of a compact topological space S as a union of components of real dimension n, and prove that S embeds into Z as a deformation retract. In particular, Z is homotopy equivalent to S.
Hypersurfaces in algebraic tori have been studied by Danilov-Khovanski @cite_14 and Batyrev @cite_19 . Danilov-Khovanski computed mixed Hodge numbers, while Batyrev studied the variation of mixed Hodge structures. Log geometry has been extensively employed by Gross and Siebert @cite_21 in their seminal work studying the degenerations appearing in mirror symmetry. Their strategy is crucial to our work, even though we take a somewhat different track by working in a non-compact setting for hypersurfaces that are not necessarily Calabi-Yau. The non-compactness allows us to deal with log-smooth log structures. Mirror symmetry for general hypersurfaces was recently studied in @cite_16 (projective case) and @cite_4 (affine case) using polyhedral decompositions of the Newton polytope. This relates to the Gross-Siebert program by embedding the hypersurface in codimension two in the special fiber of a degenerating Calabi-Yau family. In this family, the hypersurface coincides with the log singular locus --- see @cite_0 for the simplicial case.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_21", "@cite_0", "@cite_19", "@cite_16" ], "mid": [ "2734934103", "2613622891", "2951039910", "1645016350" ], "abstract": [ "We show that the category of coherent sheaves on the toric boundary divisor of a smooth quasiprojective DM toric stack is equivalent to the wrapped Fukaya category of a hypersurface in a complex torus. Hypersurfaces with every Newton polytope can be obtained. Our proof has the following ingredients. Using Mikhalkin-Viro patchworking, we compute the skeleton of the hypersurface. The result matches the [FLTZ] skeleton and is naturally realized as a Legendrian in the cosphere bundle of a torus. By [GPS1, GPS2, GPS3], we trade wrapped Fukaya categories for microlocal sheaf theory. By proving a new functoriality result for Bondal's coherent-constructible correspondence, we reduce the sheaf calculation to Kuwagaki's recent theorem on mirror symmetry for toric varieties.", "Using the Minimal Model Program, any degeneration of K-trivial varieties can be arranged to be in a Kulikov type form, i.e. with trivial relative canonical divisor and mild singularities. In the hyper-K \"ahler setting, we can then deduce a finiteness statement for monodromy acting on @math , once one knows that one component of the central fiber is not uniruled. Independently of this, using deep results from the geometry of hyper-K \"ahler manifolds, we prove that a finite monodromy projective degeneration of hyper-K \"ahler manifolds has a smooth filling (after base change and birational modifications). As a consequence of these two results, we prove a generalization of Huybrechts' theorem about birational versus deformation equivalence, allowing singular central fibers. As an application, we give simple proofs for the deformation type of certain geometric constructions of hyper-K \"ahler manifolds (e.g. Debarre--Voisin or Laza--Sacc a--Voisin). 
In a slightly different direction, we establish some basic properties (dimension and rational homology type) for the dual complex of a Kulikov type degeneration of hyper-K \"ahler manifolds.", "We consider mirror symmetry for (essentially arbitrary) hypersurfaces in (possibly noncompact) toric varieties from the perspective of the Strominger-Yau-Zaslow (SYZ) conjecture. Given a hypersurface @math in a toric variety @math we construct a Landau-Ginzburg model which is SYZ mirror to the blowup of @math along @math , under a positivity assumption. This construction also yields SYZ mirrors to affine conic bundles, as well as a Landau-Ginzburg model which can be naturally viewed as a mirror to @math . The main applications concern affine hypersurfaces of general type, for which our results provide a geometric basis for various mirror symmetry statements that appear in the recent literature. We also obtain analogous results for complete intersections.", "Let f be a polynomial of degree n in ZZ[x_1,..,x_n], typically reducible but squarefree. From the hypersurface f=0 one may construct a number of other subschemes Y by extracting prime components, taking intersections, taking unions, and iterating this procedure. We prove that if the number of solutions to f=0 in ^n is not a multiple of p, then all these intersections in ^n_ just described are reduced. (If this holds for infinitely many p, then it holds over as well.) More specifically, there is a_Frobenius splitting_ on ^n_ compatibly splitting all these subschemes Y . We determine when a Gr \"obner degeneration f_0=0 of such a hypersurface f=0 is again such a hypersurface. Under this condition, we prove that compatibly split subschemes degenerate to compatibly split subschemes, and stay reduced. Our results are strongest in the case that f's lexicographically first term is i=1 ^n x_i. Then for all large p, there is a Frobenius splitting that compatibly splits f's hypersurface and all the associated Y . 
The Gr \"obner degeneration Y' of each such Y is a reduced union of coordinate spaces (a Stanley-Reisner scheme), and we give a result to help compute its Gr \"obner basis. We exhibit an f whose associated Y include Fulton's matrix Schubert varieties, and recover much more easily the Gr \"obner basis theorem of [Knutson-Miller '05]. We show that in Bott-Samelson coordinates on an opposite Bruhat cell X^v_ in G B, the f defining the complement of the big cell also has initial term i=1 ^n x_i, and hence the Kazhdan-Lusztig subvarieties X^v_ w degenerate to Stanley-Reisner schemes. This recovers, in a weak form, the main result of [Knutson '08]." ] }
1211.5263
2128011898
A smooth affine hypersurface Z of complex dimension n is homotopy equivalent to an n-dimensional cell complex. Given a defining polynomial f for Z as well as a regular triangulation of its Newton polytope, we provide a purely combinatorial construction of a compact topological space S as a union of components of real dimension n, and prove that S embeds into Z as a deformation retract. In particular, Z is homotopy equivalent to S.
In the symplectic-topological setting, Mikhalkin @cite_5 constructed a degeneration of a projective algebraic hypersurface using a triangulation of its Newton polytope to provide a higher-dimensional "pair-of-pants" decomposition. He further identified a stratified torus fibration over the spine of the corresponding amoeba. This viewpoint was first applied to homological mirror symmetry ("HMS") by Abouzaid @cite_11 . Mikhalkin's construction and perspective inform the current work greatly, even though our route from HMS is a bit "top-down." We describe it here.
{ "cite_N": [ "@cite_5", "@cite_11" ], "mid": [ "2734934103", "2605842359", "2613622891", "2056254807" ], "abstract": [ "We show that the category of coherent sheaves on the toric boundary divisor of a smooth quasiprojective DM toric stack is equivalent to the wrapped Fukaya category of a hypersurface in a complex torus. Hypersurfaces with every Newton polytope can be obtained. Our proof has the following ingredients. Using Mikhalkin-Viro patchworking, we compute the skeleton of the hypersurface. The result matches the [FLTZ] skeleton and is naturally realized as a Legendrian in the cosphere bundle of a torus. By [GPS1, GPS2, GPS3], we trade wrapped Fukaya categories for microlocal sheaf theory. By proving a new functoriality result for Bondal's coherent-constructible correspondence, we reduce the sheaf calculation to Kuwagaki's recent theorem on mirror symmetry for toric varieties.", "We prove a homological mirror symmetry equivalence between an @math -brane category for the pair of pants, computed as a wrapped microlocal sheaf category, and a @math -brane category for a mirror LG model, understood as a category of matrix factorizations. The equivalence improves upon prior results in two ways: it intertwines evident affine Weyl group symmetries on both sides, and it exhibits the relation of wrapped microlocal sheaves along different types of Lagrangian skeleta for the same hypersurface. The equivalence proceeds through the construction of a combinatorial realization of the @math -model via arboreal singularities. The constructions here represent the start of a program to generalize to higher dimensions many of the structures which have appeared in topological approaches to Fukaya categories of surfaces.", "Using the Minimal Model Program, any degeneration of K-trivial varieties can be arranged to be in a Kulikov type form, i.e. with trivial relative canonical divisor and mild singularities. 
In the hyper-Kähler setting, we can then deduce a finiteness statement for monodromy acting on @math , once one knows that one component of the central fiber is not uniruled. Independently of this, using deep results from the geometry of hyper-Kähler manifolds, we prove that a finite monodromy projective degeneration of hyper-Kähler manifolds has a smooth filling (after base change and birational modifications). As a consequence of these two results, we prove a generalization of Huybrechts' theorem about birational versus deformation equivalence, allowing singular central fibers. As an application, we give simple proofs for the deformation type of certain geometric constructions of hyper-Kähler manifolds (e.g. Debarre--Voisin or Laza--Saccà--Voisin). In a slightly different direction, we establish some basic properties (dimension and rational homology type) for the dual complex of a Kulikov type degeneration of hyper-Kähler manifolds.", "It is well-known that a Riemann surface can be decomposed into the so-called pairs-of-pants. Each pair-of-pants is diffeomorphic to a Riemann sphere minus 3 points. We show that a smooth complex projective hypersurface of arbitrary dimension admits a similar decomposition. The n-dimensional pair-of-pants is diffeomorphic to CP^n minus n+2 hyperplanes. Alternatively, these decompositions can be treated as certain fibrations on the hypersurfaces. We show that there exists a singular fibration on the hypersurface with an n-dimensional polyhedral complex as its base and a real n-torus as its fiber. The base accommodates the geometric genus of a hypersurface V. Its homotopy type is a wedge of h^{n,0}(V) spheres S^n." ] }
1211.5184
2951471647
Many online social networks feature restrictive web interfaces which only allow the query of a user's local neighborhood through the interface. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse the existing Markov Chain Monte Carlo methods such as random walks to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large amount of queries often required (i.e., a long "mixing time") for a random walk to reach a desired (stationary) sampling distribution. In this paper, we consider a novel problem of enabling a faster random walk over online social networks by "rewiring" the social network on-the-fly. Specifically, we develop Modified TOpology (MTO)-Sampler which, by using only information exposed by the restrictive web interface, constructs a "virtual" overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings on query cost over real-world online social networks such as Google Plus, Epinion etc.
With the global topology available, @cite_12 discussed sampling techniques such as random node, random edge, and random subgraph sampling in large graphs. @cite_22 introduced Albatross sampling, which combines random jump and MHRW. @cite_26 also demonstrated a true uniform sampling method over the users' ids as "ground-truth".
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_12" ], "mid": [ "1997991368", "2110117460", "2568950526", "1630098887" ], "abstract": [ "In this paper, we propose an efficient method to detect the underlying structures in data. The same as RANSAC, we randomly sample MSSs (minimal size samples) and generate hypotheses. Instead of analyzing each hypothesis separately, the consensus information in all hypotheses is naturally fused into a hypergraph, called random consensus graph, with real structures corresponding to its dense subgraphs. The sampling process is essentially a progressive refinement procedure of the random consensus graph. Due to the huge number of hyperedges, it is generally inefficient to detect dense subgraphs on random consensus graphs. To overcome this issue, we construct a pairwise graph which approximately retains the dense subgraphs of the random consensus graph. The underlying structures are then revealed by detecting the dense subgraphs of the pair-wise graph. Since our method fuses information from all hypotheses, it can robustly detect structures even under a small number of MSSs. The graph framework enables our method to simultaneously discover multiple structures. Besides, our method is very efficient, and scales well for large scale problems. Extensive experiments illustrate the superiority of our proposed method over previous approaches, achieving several orders of magnitude speedup along with satisfactory accuracy and robustness.", "Graph sampling via crawling has been actively considered as a generic and important tool for collecting uniform node samples so as to consistently estimate and uncover various characteristics of complex networks. The so-called simple random walk with re-weighting (SRW-rw) and Metropolis-Hastings (MH) algorithm have been popular in the literature for such unbiased graph sampling. However, an unavoidable downside of their core random walks -- slow diffusion over the space, can cause poor estimation accuracy. 
In this paper, we propose non-backtracking random walk with re-weighting (NBRW-rw) and MH algorithm with delayed acceptance (MHDA) which are theoretically guaranteed to achieve, at almost no additional cost, not only unbiased graph sampling but also higher efficiency (smaller asymptotic variance of the resulting unbiased estimators) than the SRW-rw and the MH algorithm, respectively. In particular, a remarkable feature of the MHDA is its applicability for any non-uniform node sampling like the MH algorithm, but ensuring better sampling efficiency than the MH algorithm. We also provide simulation results to confirm our theoretical findings.", "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. 
We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013", "A recurrent challenge for modern applications is the processing of large graphs. The ability to generate representative samples of smaller size is useful not only to circumvent scalability issues but also, per se, for statistical analysis and other data mining tasks. For such purposes adequate sampling techniques must be devised. We are interested, in this paper, in the uniform random sampling of a connected subgraph from a graph. We require that the sample contains a prescribed number of vertices. The sampled graph is the corresponding induced graph. We devise, present and discuss several algorithms that leverage three different techniques: Rejection Sampling, Random Walk and Markov Chain Monte Carlo. We empirically evaluate and compare the performance of the algorithms. We show that they are effective and efficient but that there is a trade-off, which depends on the density of the graphs and the sample size. We propose one novel algorithm, which we call Neighbour Reservoir Sampling (NRS), that very successfully realizes the trade-off between effectiveness and efficiency." ] }
1211.5184
2951471647
Many online social networks feature restrictive web interfaces which only allow the query of a user's local neighborhood through the interface. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse the existing Markov Chain Monte Carlo methods such as random walks to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large amount of queries often required (i.e., a long "mixing time") for a random walk to reach a desired (stationary) sampling distribution. In this paper, we consider a novel problem of enabling a faster random walk over online social networks by "rewiring" the social network on-the-fly. Specifically, we develop Modified TOpology (MTO)-Sampler which, by using only information exposed by the restrictive web interface, constructs a "virtual" overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings on query cost over real-world online social networks such as Google Plus, Epinion etc.
@cite_21 found that the mixing time of typical online social networks is much larger than anticipated, which validates our motivation to shorten the mixing time of the random walk. @cite_13 derived the fastest mixing random walk on a graph by convex optimization over the second largest eigenvalue of the transition matrix, but it needs the whole topology of the graph, and its high time complexity makes it inapplicable to large graphs.
{ "cite_N": [ "@cite_21", "@cite_13" ], "mid": [ "2127503167", "1973616368", "2134711723", "2118713653" ], "abstract": [ "Social networks provide interesting algorithmic properties that can be used to bootstrap the security of distributed systems. For example, it is widely believed that social networks are fast mixing, and many recently proposed designs of such systems make crucial use of this property. However, whether real-world social networks are really fast mixing is not verified before, and this could potentially affect the performance of such systems based on the fast mixing property. To address this problem, we measure the mixing time of several social graphs, the time that it takes a random walk on the graph to approach the stationary distribution of that graph, using two techniques. First, we use the second largest eigenvalue modulus which bounds the mixing time. Second, we sample initial distributions and compute the random walk length required to achieve probability distributions close to the stationary distribution. Our findings show that the mixing time of social graphs is much larger than anticipated, and being used in literature, and this implies that either the current security systems based on fast mixing have weaker utility guarantees or have to be less efficient, with less security guarantees, in order to compensate for the slower mixing.", "In this article we present a study of the mixing time of a random walk on the largest component of a supercritical random graph, also known as the giant component. We identify local obstructions that slow down the random walk, when the average degree d is at most O( @math ), proving that the mixing time in this case is Θ((n-d)2) asymptotically almost surely. As the average degree grows these become negligible and it is the diameter of the largest component that takes over, yielding mixing time Θ(n-d) a.a.s.. We proved these results during the 2003–04 academic year. 
Similar results but for constant d were later proved independently by in [3]. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008 Most of this work was completed while the author was a research fellow at the School of Computer Science, McGill University.", "We consider a symmetric random walk on a connected graph, where each edge is labeled with the probability of transition between the two adjacent vertices. The associated Markov chain has a uniform equilibrium distribution; the rate of convergence to this distribution, i.e., the mixing rate of the Markov chain, is determined by the second largest eigenvalue modulus (SLEM) of the transition probability matrix. In this paper we address the problem of assigning probabilities to the edges of the graph in such a way as to minimize the SLEM, i.e., the problem of finding the fastest mixing Markov chain on the graph. We show that this problem can be formulated as a convex optimization problem, which can in turn be expressed as a semidefinite program (SDP). This allows us to easily compute the (globally) fastest mixing Markov chain for any graph with a modest number of edges (say, @math ) using standard numerical methods for SDPs. Larger problems can be solved by exploiting various types of symmetry and structure in the problem, and far larger problems (say, 100,000 edges) can be solved using a subgradient method we describe. We compare the fastest mixing Markov chain to those obtained using two commonly used heuristics: the maximum-degree method, and the Metropolis--Hastings algorithm. For many of the examples considered, the fastest mixing Markov chain is substantially faster than those obtained using these heuristic methods. We derive the Lagrange dual of the fastest mixing Markov chain problem, which gives a sophisticated method for obtaining (arbitrarily good) bounds on the optimal mixing rate, as well as the optimality conditions. 
Finally, we describe various extensions of the method, including a solution of the problem of finding the fastest mixing reversible Markov chain, on a fixed graph, with a given equilibrium distribution.", "We examine the spectral profile bound of Goel, Montenegro and Tetali for the L 1 mixing time of continuous-time random walk in reversible settings. We find that it is precise up to a log log factor, and that this log log factor cannot be improved." ] }
1211.5184
2951471647
Many online social networks feature restrictive web interfaces which only allow the query of a user's local neighborhood through the interface. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse the existing Markov Chain Monte Carlo methods such as random walks to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large amount of queries often required (i.e., a long "mixing time") for a random walk to reach a desired (stationary) sampling distribution. In this paper, we consider a novel problem of enabling a faster random walk over online social networks by "rewiring" the social network on-the-fly. Specifically, we develop Modified TOpology (MTO)-Sampler which, by using only information exposed by the restrictive web interface, constructs a "virtual" overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings on query cost over real-world online social networks such as Google Plus, Epinion etc.
@cite_6 compared the latent space model with real social network data. @cite_14 introduced a hybrid graph model to incorporate the small world phenomenon. @cite_7 also measured the difference between multiple synthetic graphs and real-world social network graphs.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_6" ], "mid": [ "1917516492", "2123718282", "2039750798", "2951741291" ], "abstract": [ "We propose a temporal latent space model for link prediction in dynamic social networks, where the goal is to predict links over time based on a sequence of previous graph snapshots. The model assumes that each user lies in an unobserved latent space, and interactions are more likely to occur between similar users in the latent space representation. In addition, the model allows each user to gradually move its position in the latent space as the network structure evolves over time. We present a global optimization algorithm to effectively infer the temporal latent space. Two alternative optimization algorithms with local and incremental updates are also proposed, allowing the model to scale to larger networks without compromising prediction accuracy. Empirically, we demonstrate that our model, when evaluated on a number of real-world dynamic networks, significantly outperforms existing approaches for temporal link prediction in terms of both scalability and predictive power.", "The small world phenomenon, that consistently occurs in numerous exist- ing networks, refers to two similar but different properties — small average distance and the clustering effect. We consider a hybrid graph model that incorporates both properties by combining a global graph and a local graph. The global graph is modeled by a random graph with a power law degree distribution, while the local graph has specified local connectivity. We will prove that the hybrid graph has average distance and diameter close to that of random graphs with the same degree distribution (under certain mild conditions). We also give a simple decomposition algorithm which, for any given (real) graph, identifies the global edges and extracts the local graph (which is uniquely determined depending only on the local connectivity). 
We can then apply our theoretical results for analyzing real graphs, provided the parameters of the hybrid model can be appropriately chosen.", "This paper explores two aspects of social network modeling. First, we generalize a successful static model of relationships into a dynamic model that accounts for friendships drifting over time. Second, we show how to make it tractable to learn such models from data, even as the number of entities n gets large. The generalized model associates each entity with a point in p-dimensional Euclidean latent space. The points can move as time progresses but large moves in latent space are improbable. Observed links between entities are more likely if the entities are close in latent space. We show how to make such a model tractable (sub-quadratic in the number of entities) by the use of appropriate kernel functions for similarity in latent space; the use of low dimensional KD-trees; a new efficient dynamic adaptation of multidimensional scaling for a first pass of approximate projection of entities into latent space; and an efficient conjugate gradient update rule for non-linear local optimization in which amortized time per entity during an update is O(log n). We use both synthetic and real-world data on up to 11,000 entities which indicate near-linear scaling in computation time and improved performance over four alternative approaches. We also illustrate the system operating on twelve years of NIPS co-authorship data.", "In this paper we present a fully Bayesian latent variable model which exploits conditional nonlinear(in)-dependence structures to learn an efficient latent representation. The latent space is factorized to represent shared and private information from multiple views of the data. In contrast to previous approaches, we introduce a relaxation to the discrete segmentation and allow for a \"softly\" shared latent space. 
Further, Bayesian techniques allow us to automatically estimate the dimensionality of the latent spaces. The model is capable of capturing structure underlying extremely high dimensional spaces. This is illustrated by modelling unprocessed images with tenths of thousands of pixels. This also allows us to directly generate novel images from the trained model by sampling from the discovered latent spaces. We also demonstrate the model by prediction of human pose in an ambiguous setting. Our Bayesian framework allows us to perform disambiguation in a principled manner by including latent space priors which incorporate the dynamic nature of the data." ] }
1211.5608
2952304478
We consider the problem of recovering two unknown vectors, @math and @math , of length @math from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension @math and the other with dimension @math . Although the observed convolution is nonlinear in both @math and @math , it is linear in the rank-1 matrix formed by their outer product @math . This observation allows us to recast the deconvolution problem as low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that for "generic" signals, the program can deconvolve @math and @math exactly when the maximum of @math and @math is almost on the order of @math . That is, we show that if @math is drawn from a random subspace of dimension @math , and @math is a vector in a subspace of dimension @math whose basis vectors are "spread out" in the frequency domain, then nuclear norm minimization recovers @math without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length @math which we code using a random @math coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length @math , then the receiver can recover both the channel response and the message when @math , to within constant and log factors.
While this paper is only concerned with recovery by nuclear norm minimization, other types of recovery techniques have proven effective both in theory and in practice; see for example @cite_4 @cite_12 @cite_14 . It is possible that the guarantees given in this paper could be extended to these other algorithms.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_12" ], "mid": [ "2136912397", "2508366294", "2747056667", "2158121106" ], "abstract": [ "The problem of minimizing the rank of a matrix subject to affine constraints has applications in several areas including machine learning, and is known to be NP-hard. A tractable relaxation for this problem is nuclear norm (or trace norm) minimization, which is guaranteed to find the minimum rank matrix under suitable assumptions. In this paper, we propose a family of Iterative Reweighted Least Squares algorithms IRLS-p (with 0 ≤ p ≤ 1), as a computationally efficient way to improve over the performance of nuclear norm minimization. The algorithms can be viewed as (locally) minimizing certain smooth approximations to the rank function. When p = 1, we give theoretical guarantees similar to those for nuclear norm minimization, that is, recovery of low-rank matrices under certain assumptions on the operator defining the constraints. For p < 1, IRLS-p shows better empirical performance in terms of recovering low-rank matrices than nuclear norm minimization. We provide an efficient implementation for IRLS-p, and also present a related family of algorithms, sIRLS-p. These algorithms exhibit competitive run times and improved recovery when compared to existing algorithms for random instances of the matrix completion problem, as well as on the MovieLens movie recommendation data set.", "Minimization of the nuclear norm is often used as a surrogate, convex relaxation, for finding the minimum rank completion (recovery) of a partial matrix. The minimum nuclear norm problem can be solved as a trace minimization semidefinite programming problem, (SDP). The SDP and its dual are regular in the sense that they both satisfy strict feasibility. Interior point algorithms are the current methods of choice for these problems. This means that it is difficult to solve large scale problems and difficult to get high accuracy solutions. 
In this paper we take advantage of the structure at optimality for the minimum nuclear norm problem. We show that even though strict feasibility holds, the facial reduction framework can be successfully applied to obtain a proper face that contains the optimal set, and thus can dramatically reduce the size of the final nuclear norm problem while guaranteeing a low-rank solution. We include numerical tests for both exact and noisy cases. In all cases we assume that knowledge of a target rank is available.", "The nonsmooth and nonconvex regularization has many applications in imaging science and machine learning research due to its excellent recovery performance. A proximal iteratively reweighted nuclear norm algorithm has been proposed for the nonsmooth and nonconvex matrix minimizations. In this paper, we aim to investigate the convergence of the algorithm. With the Kurdyka–Łojasiewicz property, we prove the algorithm globally converges to a critical point of the objective function. The numerical results presented in this paper coincide with our theoretical findings.", "This paper considers the recovery of a low-rank matrix from an observed version that simultaneously contains both 1) erasures, most entries are not observed, and 2) errors, values at a constant fraction of (unknown) locations are arbitrarily corrupted. We provide a new unified performance guarantee on when minimizing nuclear norm plus l1 norm succeeds in exact recovery. Our result allows for the simultaneous presence of random and deterministic components in both the error and erasure patterns. By specializing this one single result in different ways, we recover (up to poly-log factors) as corollaries all the existing results in exact matrix completion, and exact sparse and low-rank matrix decomposition. Our unified result also provides the first guarantees for 1) recovery when we observe a vanishing fraction of entries of a corrupted matrix, and 2) deterministic matrix completion." ] }
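Several abstracts in the record above use nuclear-norm minimization as the convex surrogate for matrix rank. As an illustrative sketch only (not the implementation of any cited paper), the core step of many such solvers is singular-value soft-thresholding, the proximal operator of the nuclear norm; a minimal numpy version:

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: the proximal operator of
    tau * ||.||_* (nuclear norm) evaluated at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Shrink each singular value toward zero by tau, clipping at zero.
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 matrix plus small noise: one thresholding step suppresses the
# noise directions and leaves a low-rank estimate.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(8), rng.standard_normal(6)
M = np.outer(u, v) + 0.01 * rng.standard_normal((8, 6))
X = svt(M, 0.5)
print(np.linalg.matrix_rank(X))
```

Iterating this operator against a data-fidelity gradient step gives the standard proximal-gradient scheme for nuclear-norm-regularized problems.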
1211.5608
2952304478
We consider the problem of recovering two unknown vectors, @math and @math , of length @math from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension @math and the other with dimension @math . Although the observed convolution is nonlinear in both @math and @math , it is linear in the rank-1 matrix formed by their outer product @math . This observation allows us to recast the deconvolution problem as a low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that for "generic" signals, the program can deconvolve @math and @math exactly when the maximum of @math and @math is almost on the order of @math . That is, we show that if @math is drawn from a random subspace of dimension @math , and @math is a vector in a subspace of dimension @math whose basis vectors are "spread out" in the frequency domain, then nuclear norm minimization recovers @math without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length @math which we code using a random @math coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length @math , then the receiver can recover both the channel response and the message when @math , to within constant and log factors.
As we will see below, our mathematical analysis has mostly to do with how matrices of the form in act on rank-2 matrices in a certain subspace. Matrices of this type have been considered in the context of sparse recovery in the compressed sensing literature for applications including multiple-input multiple-output channel estimation @cite_37 , multi-user detection @cite_18 , and multiplexing of spectrally sparse signals @cite_13 .
{ "cite_N": [ "@cite_13", "@cite_37", "@cite_18" ], "mid": [ "2962909343", "1770500012", "2130345277", "2166207339" ], "abstract": [ "In this paper, we improve existing results in the field of compressed sensing and matrix completion when sampled data may be grossly corrupted. We introduce three new theorems. (1) In compressed sensing, we show that if the m×n sensing matrix has independent Gaussian entries, then one can recover a sparse signal x exactly by tractable l1 minimization even if a positive fraction of the measurements are arbitrarily corrupted, provided the number of nonzero entries in x is O(m/(log(n/m)+1)). (2) In the very general sensing model introduced in Candes and Plan (IEEE Trans. Inf. Theory 57(11):7235–7254, 2011) and assuming a positive fraction of corrupted measurements, exact recovery still holds if the signal now has O(m/(log^2 n)) nonzero entries. (3) Finally, we prove that one can recover an n×n low-rank matrix from m corrupted sampled entries by tractable optimization provided the rank is on the order of O(m/(n log^2 n)); again, this holds when there is a positive fraction of corrupted samples.", "This paper revisits the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem is an extension of single channel sparse recovery, which lies at the heart of compressed sensing. Inspired by the links to array signal processing, a new family of MMV algorithms is considered that highlight the role of rank in determining the difficulty of the MMV recovery problem. The simplest such method is a discrete version of MUSIC which is guaranteed to recover the sparse vectors in the full rank MMV setting, under mild conditions. This idea is extended to a rank aware pursuit algorithm that naturally reduces to Order Recursive Matching Pursuit (ORMP) in the single measurement case while also providing guaranteed recovery in the full rank setting.
In contrast, popular MMV methods such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques are shown to be rank blind in terms of worst case analysis. Numerical simulations demonstrate that the rank aware techniques are significantly better than existing methods in dealing with multiple measurements.", "Compressed sensing seeks to recover a sparse vector from a small number of linear and non-adaptive measurements. While most work so far focuses on Gaussian or Bernoulli random measurements we investigate the use of partial random circulant and Toeplitz matrices in connection with recovery by l1-minimization. In contrast to recent work in this direction we allow the use of an arbitrary subset of rows of a circulant and Toeplitz matrix. Our recovery result predicts that the necessary number of measurements to ensure sparse reconstruction by l1-minimization with random partial circulant or Toeplitz matrices scales linearly in the sparsity up to a log-factor in the ambient dimension. This represents a significant improvement over previous recovery results for such matrices. As a main tool for the proofs we use a new version of the non-commutative Khintchine inequality.", "We investigate the problem of reconstructing a high-dimensional nonnegative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes can be more efficient both with respect to signal sensing as well as reconstruction complexity. Known constructions use the adjacency matrices of expander graphs, which often lead to recovery algorithms which are much more efficient than l1 minimization. However, prior constructions of sparse measurement matrices rely on expander graphs with very high expansion coefficients which make the construction of such graphs difficult and the size of the recoverable sets very small.
In this paper, we introduce sparse measurement matrices for the recovery of nonnegative vectors, using perturbations of the adjacency matrices of expander graphs requiring much smaller expansion coefficients, hereby referred to as minimal expanders. We show that when l1 minimization is used as the reconstruction method, these constructions allow the recovery of signals that are almost three orders of magnitude larger compared to the existing theoretical results for sparse measurement matrices. We provide for the first time tight upper bounds for the so-called weak and strong recovery thresholds when l1 minimization is used. We further show that the success of l1 optimization is equivalent to the existence of a “unique” vector in the set of solutions to the linear equations, which enables alternative algorithms for l1 minimization. We further show that the defined minimal expansion property is necessary for all measurement matrices for compressive sensing (even when the non-negativity assumption is removed), therefore implying that our construction is tight. We finally present a novel recovery algorithm that exploits expansion and is much more computationally efficient compared to l1 minimization." ] }
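The blind deconvolution records above rest on two elementary facts: the DFT diagonalizes circular convolution, and the convolution map is bilinear in the two inputs, hence linear in their outer product. A small self-contained numerical check of both, assuming only numpy (this is an illustration, not code from the cited papers):

```python
import numpy as np

def circ_conv(w, x):
    """Circular convolution of two equal-length vectors, computed
    directly from the definition y[n] = sum_k w[k] x[(n-k) mod L]."""
    L = len(w)
    return np.array([sum(w[k] * x[(n - k) % L] for k in range(L))
                     for n in range(L)])

rng = np.random.default_rng(1)
L = 16
w, x = rng.standard_normal(L), rng.standard_normal(L)

# The DFT diagonalizes circular convolution: conv(w, x) = IDFT(DFT(w) * DFT(x)).
via_fft = np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)).real
assert np.allclose(circ_conv(w, x), via_fft)

# Bilinearity: linear in x for fixed w (and symmetrically in w), which is
# why the observed convolution is a linear function of the outer product.
a, b = 2.0, -3.0
x2 = rng.standard_normal(L)
assert np.allclose(circ_conv(w, a * x + b * x2),
                   a * circ_conv(w, x) + b * circ_conv(w, x2))
```

The lifting step in these papers packages exactly this bilinearity: each sample of the convolution is a fixed linear functional applied to the rank-1 outer product of the two unknowns.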
1502.00842
1560878967
We introduce a new family of erasure codes, called group decodable codes (GDC), for distributed storage systems. Given a set of design parameters ; ; k; t , where k is the number of information symbols, each codeword of an ( ; ; k; t)-group decodable code is a t-tuple of strings, called buckets, such that each bucket is a string of symbols that is a codeword of a [ ; ] MDS code (which is encoded from information symbols). Such codes have the following two properties: (P1) Locally Repairable: Each code symbol has locality ( ; - + 1). (P2) Group decodable: From each bucket we can decode information symbols. We establish an upper bound on the minimum distance of ( ; ; k; t)-group decodable codes for any given set of ; ; k; t . We also prove that the bound is achievable when the coding field F has size |F| > n-1 k-1.
In @cite_12 , the concept of @math -locality was defined, which captures the property that there exist @math pairwise disjoint local repair sets for a code symbol. An upper bound on the minimum distance for @math linear codes with information @math -locality was derived, and codes that attain this bound were constructed for the length @math . However, for @math , it is not known whether there exist codes attaining this bound. Upper bounds on the rate and minimum distance of codes with all-symbol @math -locality were proved in @cite_13 . However, no explicit construction of codes that achieve this bound was presented. It is still an open question whether the distance bound in @cite_13 is achievable.
{ "cite_N": [ "@cite_13", "@cite_12" ], "mid": [ "1969158823", "2018102393", "1993830711", "1639538057" ], "abstract": [ "Repair locality is a desirable property for erasure codes in distributed storage systems. Recently, different structures of local repair groups have been proposed in the definitions of repair locality. In this paper, the concept of regenerating set is introduced to characterize the local repair groups. A definition of locality @math (i.e., locality @math with repair tolerance @math ) under the most general structure of regenerating sets is given. All previously studied locality turns out to be special cases of this definition. Furthermore, three representative concepts of locality proposed before are reinvestigated under the framework of regenerating sets, and their respective upper bounds on the minimum distance are reproved in a uniform and brief form. Additionally, a more precise distance bound is derived for the square code which is a class of linear codes with locality @math and high information rate, and an explicit code construction attaining the optimal distance bound is obtained.", "In distributed storage systems, erasure codes with locality r are preferred because a coordinate can be locally repaired by accessing at most r other coordinates, which in turn greatly reduces the disk I/O complexity for small r. However, the local repair may not be performed when some of the r coordinates are also erased. To overcome this problem, we propose the (r, δ)_c-locality providing δ-1 nonoverlapping local repair groups of size no more than r for a coordinate. Consequently, the repair locality r can tolerate δ-1 erasures in total. We derive an upper bound on the minimum distance for any linear [n,k] code with information (r, δ)_c-locality. Then, we prove existence of the codes that attain this bound when n ≥ k(r(δ-1)+1).
Although the locality (r, δ) defined by provides the same level of locality and local repair tolerance as our definition, codes with (r, δ)_c-locality attaining the bound are proved to have more advantage in the minimum distance. In particular, we construct a class of codes with all-symbol (r, δ)_c-locality where the gain in minimum distance is Ω(√r) and the information rate is close to 1.", "Consider a linear [n,k,d]q code C. We say that the ith coordinate of C has locality r , if the value at this coordinate can be recovered from accessing some other r coordinates of C. Data storage applications require codes with small redundancy, low locality for information coordinates, large distance, and low locality for parity coordinates. In this paper, we carry out an in-depth study of the relations between these parameters. We establish a tight bound for the redundancy n-k in terms of the message length, the distance, and the locality of information coordinates. We refer to codes attaining the bound as optimal. We prove some structure theorems about optimal codes, which are particularly strong for small distances. This gives a fairly complete picture of the tradeoffs between codewords length, worst case distance, and locality of information symbols. We then consider the locality of parity check symbols and erasure correction beyond worst case distance for optimal codes. Using our structure theorem, we obtain a tight bound for the locality of parity symbols possible in such codes for a broad class of parameter settings. We prove that there is a tradeoff between having good locality and the ability to correct erasures beyond the minimum distance.", "In this work, we present a new upper bound on the minimum distance d of linear locally repairable codes (LRCs) with information locality and availability. The bound takes into account the code length n, dimension k, locality r, availability t, and field size q.
We use tensor product codes to construct several families of LRCs with information locality, and then we extend the construction to design LRCs with information locality and availability. Some of these codes are shown to be optimal with respect to their minimum distance, achieving the new bound. Finally, we study the all-symbol locality and availability properties of several classes of one-step majority-logic decodable codes, including cyclic simplex codes, cyclic difference-set codes, and 4-cycle free regular low-density parity-check (LDPC) codes. We also investigate their optimality using the new bound." ] }
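The locality notion running through the record above can be illustrated with the simplest possible local code: one XOR parity per group of r data symbols, so any single erased symbol is repairable from the r other symbols of its own group. This is a toy sketch, not the GDC or (r, δ)-constructions of the cited papers:

```python
from functools import reduce
from operator import xor

def encode(info, r):
    """Split the information symbols into groups of r and append one XOR
    parity per group; every symbol then has locality r (repairable from
    the r other symbols of its own group)."""
    groups = [info[i:i + r] for i in range(0, len(info), r)]
    return [g + [reduce(xor, g)] for g in groups]

def repair(group, erased_idx):
    """A single erased symbol equals the XOR of the group's survivors."""
    return reduce(xor, (s for i, s in enumerate(group) if i != erased_idx))

info = [3, 7, 1, 4, 9, 2]
coded = encode(info, r=3)            # two local groups: 3 data symbols + 1 parity
for g in coded:
    for j in range(len(g)):
        assert repair(g, j) == g[j]  # any one erasure is locally repairable
```

The distance/rate bounds in the abstracts quantify what is lost and gained when such local repair groups (with stronger inner codes than a single parity) are imposed on a code.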
1502.00739
2038147967
This paper aims at developing an integrated system of clothing co-parsing, in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. We propose a data-driven framework consisting of two phases of inference. The first phase, referred to as "image co-segmentation", iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM (ESVM) technique [23]. In the second phase (i.e. "region colabeling"), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporate several contexts of clothing configuration (e.g., item location and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset [30], we construct a dataset called CCP consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rate on the Fashionista and the CCP datasets, respectively, which are superior compared with state-of-the-art methods.
In the literature, existing efforts on clothing/human segmentation and recognition mainly focused on constructing expressive models to address various clothing styles and appearances @cite_0 @cite_1 @cite_14 @cite_34 @cite_32 @cite_3 @cite_19 . One classic work @cite_0 proposed a composite And-Or graph template for modeling and parsing clothing configurations. Later works studied blocking models to segment clothes for highly occluded group images @cite_29 , or deformable spatial priors modeling for improving the performance of clothing segmentation @cite_9 . Recent approaches incorporated shape-based human models @cite_16 , or pose estimation and supervised region labeling @cite_22 , and achieved impressive results. Despite acknowledged successes, these works have not yet been extended to the problem of clothing co-parsing, and they often require a heavy labeling workload.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_29", "@cite_9", "@cite_1", "@cite_32", "@cite_3", "@cite_16", "@cite_0", "@cite_19", "@cite_34" ], "mid": [ "2313077179", "2757508077", "2038147967", "2964318046" ], "abstract": [ "This paper aims at developing an integrated system for clothing co-parsing (CCP), in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. A novel data-driven system consisting of two phases of inference is proposed. The first phase, referred to as “image cosegmentation,” iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM technique [1] . In the second phase (i.e., “region colabeling”), we construct a multiimage graphical model by taking the segmented regions as vertices, and incorporating several contexts of clothing configuration (e.g., item locations and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset [2] , we construct a dataset called the SYSU-Clothes dataset consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rate on the Fashionista and the SYSU-Clothes datasets, respectively, which are superior compared with the previous methods. Furthermore, we apply our method on a challenging task, i.e., cross-domain clothing retrieval: given a user photo depicting a clothing image, retrieving the same clothing items from online shopping stores based on the fine-grained parsing results.", "We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning.
Given an input image of a person and a sentence describing a different outfit, our model \"redresses\" the person as desired, while at the same time keeping the wearer and her/his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining the wearer's body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer's pose as a latent spatial arrangement. An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images. We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted. The codes and the data are available at this http URL edu.hk projects FashionGAN .
\"region colabeling\"), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporate several contexts of clothing configuration (e.g., item location and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset [30], we construct a dataset called CCP consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rate on the Fashionista and the CCP datasets, respectively, which are superior compared with state-of-the-art methods.", "We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning. Given an input image of a person and a sentence describing a different outfit, our model “redresses” the person as desired, while at the same time keeping the wearer and her/his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining the wearer's body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer's pose as a latent spatial arrangement. An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images.
We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted." ] }
1502.00739
2038147967
This paper aims at developing an integrated system of clothing co-parsing, in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. We propose a data-driven framework consisting of two phases of inference. The first phase, referred to as "image co-segmentation", iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM (ESVM) technique [23]. In the second phase (i.e. "region colabeling"), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporate several contexts of clothing configuration (e.g., item location and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset [30], we construct a dataset called CCP consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rate on the Fashionista and the CCP datasets, respectively, which are superior compared with state-of-the-art methods.
Clothing co-parsing is also highly related to image/object co-labeling, where a batch of input images containing similar objects are processed jointly @cite_5 @cite_21 @cite_27 . For example, unsupervised shape-guided approaches were adopted in @cite_13 to achieve single-object-category co-labeling. Winn et al. @cite_24 incorporated automatic image segmentation and a spatially coherent latent topic model to obtain unsupervised multi-class image labeling. These methods, however, solved the problem in an unsupervised manner, and might be intractable under circumstances with large numbers of categories and diverse appearances. To deal with more complex scenarios, some recent works focused on supervised label propagation, utilizing pixelwise label maps in the training set and propagating labels to unseen images. The pioneering work of @cite_5 proposed to propagate labels over scene images using a bi-layer sparse coding formulation. Similar ideas were also explored in @cite_25 . These methods, however, are often limited by expensive annotations. In addition, they extracted image correspondences upon the pixels (or superpixels), which are not discriminative for the clothing parsing problem.
{ "cite_N": [ "@cite_21", "@cite_24", "@cite_27", "@cite_5", "@cite_13", "@cite_25" ], "mid": [ "2313077179", "2038147967", "2204578866", "2115091888" ], "abstract": [ "This paper aims at developing an integrated system for clothing co-parsing (CCP), in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. A novel data-driven system consisting of two phases of inference is proposed. The first phase, referred to as “image cosegmentation,” iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM technique [1] . In the second phase (i.e., “region colabeling”), we construct a multiimage graphical model by taking the segmented regions as vertices, and incorporating several contexts of clothing configuration (e.g., item locations and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset [2] , we construct a dataset called the SYSU-Clothes dataset consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rate on the Fashionista and the SYSU-Clothes datasets, respectively, which are superior compared with the previous methods. Furthermore, we apply our method on a challenging task, i.e., cross-domain clothing retrieval: given a user photo depicting a clothing image, retrieving the same clothing items from online shopping stores based on the fine-grained parsing results.", "This paper aims at developing an integrated system of clothing co-parsing, in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. We propose a data-driven framework consisting of two phases of inference.
The first phase, referred to as \"image co-segmentation\", iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM (ESVM) technique [23]. In the second phase (i.e. \"region colabeling\"), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporate several contexts of clothing configuration (e.g., item location and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset [30], we construct a dataset called CCP consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rate on the Fashionista and the CCP datasets, respectively, which are superior compared with state-of-the-art methods.", "In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which well integrates the cross-layer context, global image-level context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixel-wise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic structure and the local fine details within the cross-layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context.
Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve the local label consistency in both training and testing process. Comprehensive evaluations on two public datasets well demonstrate the significant superiority of our Co-CNN architecture over other state-of-the-arts for human parsing. In particular, the F-1 score on the large dataset [15] reaches 76.95% by Co-CNN, significantly higher than 62.81% and 64.38% by the state-of-the-art algorithms, M-CNN [21] and ATR [15], respectively.", "Clothing recognition is a societally and commercially important yet extremely challenging problem due to large variations in clothing appearance, layering, style, and body shape and pose. In this paper, we tackle the clothing parsing problem using a retrieval-based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to recognize clothing items in the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse-masks (Paper Doll item transfer) from retrieved examples. We evaluate our approach extensively and show significant improvements over previous state-of-the-art for both localization (clothing parsing given weak supervision in the form of tags) and detection (general clothing parsing). Our experimental results also indicate that the general pose estimation problem can benefit from clothing parsing." ] }
1502.00749
2083858539
This article investigates a data-driven approach for semantic scene understanding, without pixelwise annotation or classifier training. The proposed framework parses a target image in two steps: first, retrieving its exemplars (that is, references) from an image database, where all images are unsegmented but annotated with tags; second, recovering its pixel labels by propagating semantics from the references. The authors present a novel framework making the two steps mutually conditional and bootstrapped under the probabilistic Expectation-Maximization (EM) formulation. In the first step, the system selects the references by jointly matching the appearances as well as the semantics (that is, the assigned labels) with the target. They process the second step via a combinatorial graphical representation, in which the vertices are superpixels extracted from the target and its selected references. Then they derive the potentials of assigning labels to one vertex of the target, which depend upon the graph edges that connect the vertex to its spatial neighbors of the target and to similar vertices of the references. The proposed framework can be applied naturally to perform image annotation on new test images. In the experiments, the authors validated their approach on two public databases, and demonstrated superior performance over the state-of-the-art methods in both semantic segmentation and image annotation tasks.
Traditional efforts for scene understanding mainly focused on capturing scene appearances, structures and spatial contexts by developing combinatorial models, e.g., CRF @cite_9 @cite_17, Texton-Forest @cite_0, Graph Grammar @cite_4. These models were generally founded on supervised learning techniques, and required manually prepared training data containing labels at the pixel level.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_4", "@cite_17" ], "mid": [ "2341569833", "2283234189", "1581592866", "2045587041" ], "abstract": [ "Scene understanding is a prerequisite to many high level tasks for any automated intelligent machine operating in real world environments. Recent attempts with supervised learning have shown promise in this direction but also highlighted the need for enormous quantity of supervised data— performance increases in proportion to the amount of data used. However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models we show comparable performance to state-of-the-art RGBD systems on NYUv2 dataset despite using only depth data as input and set a benchmark on depth-based segmentation on SUN RGB-D dataset.", "Scene understanding is a prerequisite to many high level tasks for any automated intelligent machine operating in real world environments. Recent attempts with supervised learning have shown promise in this direction but also highlighted the need for enormous quantity of supervised data --- performance increases in proportion to the amount of data used. However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. 
By carefully synthesizing training data with appropriate noise models we show comparable performance to state-of-the-art RGBD systems on NYUv2 dataset despite using only depth data as input and set a benchmark on depth-based segmentation on SUN RGB-D dataset. Additionally, we offer a route to generating synthesized frame or video data, and understanding of different factors influencing performance gains.", "In this work we propose a hierarchical approach for labeling semantic objects and regions in scenes. Our approach is reminiscent of early vision literature in that we use a decomposition of the image in order to encode relational and spatial information. In contrast to much existing work on structured prediction for scene understanding, we bypass a global probabilistic model and instead directly train a hierarchical inference procedure inspired by the message passing mechanics of some approximate inference procedures in graphical models. This approach mitigates both the theoretical and empirical difficulties of learning probabilistic models when exact inference is intractable. In particular, we draw from recent work in machine learning and break the complex inference process into a hierarchical series of simple machine learning subproblems. Each subproblem in the hierarchy is designed to capture the image and contextual statistics in the scene. This hierarchy spans coarse-to-fine regions and explicitly models the mixtures of semantic labels that may be present due to imperfect segmentation. To avoid cascading of errors and overfitting, we train the learning problems in sequence to ensure robustness to likely errors earlier in the inference sequence and leverage the stacking approach developed by", "Semantic reconstruction of a scene is important for a variety of applications such as 3D modelling, object recognition and autonomous robotic navigation. 
However, most object labelling methods work in the image domain and fail to capture the information present in 3D space. In this work we propose a principled way to generate object labelling in 3D. Our method builds a triangulated meshed representation of the scene from multiple depth estimates. We then define a CRF over this mesh, which is able to capture the consistency of geometric properties of the objects present in the scene. In this framework, we are able to generate object hypotheses by combining information from multiple sources: geometric properties (from the 3D mesh), and appearance properties (from images). We demonstrate the robustness of our framework in both indoor and outdoor scenes. For indoor scenes we created an augmented version of the NYU indoor scene dataset (RGBD images) with object labelled meshes for training and evaluation. For outdoor scenes, we created ground truth object labellings for the KITTI odometry dataset (stereo image sequence). We observe a significant speed-up in the inference stage by performing labelling on the mesh, and additionally achieve higher accuracies." ] }
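The combinatorial graphical representation in the record above connects target superpixels to spatial neighbours and to similar reference superpixels, then transfers labels across the edges. A much-simplified, hedged caricature of that label-transfer step (weighted neighbour voting on a toy similarity graph; all vertices, labels and weights are hypothetical, and this is nothing like the paper's actual EM formulation):

```python
# Vertices are superpixels; edge weights encode appearance similarity.
# "Seed" vertices carry labels copied from retrieved reference images;
# the rest repeatedly adopt the weighted-majority label of their
# labelled neighbours until nothing changes.
edges = {(0, 1): 1.0, (1, 2): 0.9, (2, 3): 0.2, (3, 4): 0.9, (4, 5): 1.0}
seeds = {0: "sky", 5: "building"}   # hypothetical reference labels

adj = {u: [] for u in range(6)}
for (u, v), w in edges.items():
    adj[u].append((v, w))
    adj[v].append((u, w))

labels = dict(seeds)
changed = True
while changed:                       # converges here; ties not handled
    changed = False
    for u in adj:
        if u in seeds:
            continue                 # seed labels stay fixed
        votes = {}
        for v, w in adj[u]:
            if v in labels:
                votes[labels[v]] = votes.get(labels[v], 0.0) + w
        if votes:
            best = max(votes, key=votes.get)
            if labels.get(u) != best:
                labels[u] = best
                changed = True
```

On this toy chain the weak (2, 3) edge ends up as the label boundary: vertices 1-2 settle on "sky" and 3-4 on "building".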
1502.00749
2083858539
This article investigates a data-driven approach for semantic scene understanding, without pixelwise annotation or classifier training. The proposed framework parses a target image in two steps: first, retrieving its exemplars (that is, references) from an image database, where all images are unsegmented but annotated with tags; second, recovering its pixel labels by propagating semantics from the references. The authors present a novel framework making the two steps mutually conditional and bootstrapped under the probabilistic Expectation-Maximization (EM) formulation. In the first step, the system selects the references by jointly matching the appearances as well as the semantics (that is, the assigned labels) with the target. They process the second step via a combinatorial graphical representation, in which the vertices are superpixels extracted from the target and its selected references. Then they derive the potentials of assigning labels to one vertex of the target, which depend upon the graph edges that connect the vertex to its spatial neighbors of the target and to similar vertices of the references. The proposed framework can be applied naturally to perform image annotation on new test images. In the experiments, the authors validated their approach on two public databases, and demonstrated superior performance over the state-of-the-art methods in both semantic segmentation and image annotation tasks.
Several weakly supervised methods have been proposed that indicate the classes present in an image using only image-level labels. For example, @cite_16 proposed to learn object classes based on unsupervised image segmentation. @cite_8 learned classification models for all scene labels by selecting representative training samples, and multiple instance learning was utilized in @cite_7. Some nonparametric approaches have also been studied that solve the problem by searching and matching against an auxiliary image database. For example, an efficient structure-aware matching algorithm was discussed in @cite_6 to transfer labels from the database to the target image, but pixelwise annotation was required for the auxiliary images.
{ "cite_N": [ "@cite_16", "@cite_6", "@cite_7", "@cite_8" ], "mid": [ "2963697527", "2029731618", "2474876375", "2203062554" ], "abstract": [ "Supervised object detection and semantic segmentation require object or even pixel level annotations. When there exist image level labels only, it is challenging for weakly supervised algorithms to achieve accurate predictions. The accuracy achieved by top weakly supervised algorithms is still significantly lower than their fully supervised counterparts. In this paper, we propose a novel weakly supervised curriculum learning pipeline for multi-label object recognition, detection and semantic segmentation. In this pipeline, we first obtain intermediate object localization and pixel labeling results for the training images, and then use such results to train task-specific deep networks in a fully supervised manner. The entire process consists of four stages, including object localization in the training images, filtering and fusing object instances, pixel labeling for the training images, and task-specific network training. To obtain clean object instances in the training images, we propose a novel algorithm for filtering, fusing and classifying object instances collected from multiple solution mechanisms. In this algorithm, we incorporate both metric learning and density-based clustering to filter detected object instances. Experiments show that our weakly supervised pipeline achieves state-of-the-art results in multi-label image classification as well as weakly supervised object detection and very competitive results in weakly supervised semantic segmentation on MS-COCO, PASCAL VOC 2007 and PASCAL VOC 2012.", "We address the task of learning a semantic segmentation from weakly supervised data. Our aim is to devise a system that predicts an object label for each pixel by making use of only image level labels during training – the information whether a certain object is present or not in the image. 
Such coarse tagging of images is faster and easier to obtain as opposed to the tedious task of pixelwise labeling required in state of the art systems. We cast this task naturally as a multiple instance learning (MIL) problem. We use Semantic Texton Forest (STF) as the basic framework and extend it for the MIL setting. We make use of multitask learning (MTL) to regularize our solution. Here, an external task of geometric context estimation is used to improve on the task of semantic segmentation. We report experimental results on the MSRC21 and the very challenging VOC2007 datasets. On MSRC21 dataset we are able, by using 276 weakly labeled images, to achieve the performance of a supervised STF trained on pixelwise labeled training set of 56 images, which is a significant reduction in supervision needed.", "In this paper, we propose a novel method to perform weakly-supervised image parsing based on the dictionary learning framework. To deal with the challenges caused by the label ambiguities, we design a saliency guided weight assignment scheme to boost the discriminative dictionary learning. More specifically, with a collection of tagged images, the proposed method first conducts saliency detection and automatically infers the confidence for each semantic class to be foreground or background. These clues are then incorporated to learn the dictionaries, the weights, as well as the sparse representation coefficients in the meanwhile. Once obtained the coefficients of a superpixel, we use a sparse representation classifier to determine its semantic label. The approach is validated on the MSRC21, PASCAL VOC07, and VOC12 datasets. Experimental results demonstrate the encouraging performance of our approach in comparison with some state-of-the-arts.", "We present a weakly-supervised approach to semantic segmentation. The goal is to assign pixel-level labels given only partial information, for example, image-level labels. 
This is an important problem in many application scenarios where it is difficult to get accurate segmentation or not feasible to obtain detailed annotations. The proposed approach starts with an initial coarse segmentation, followed by a spectral clustering approach that groups related image parts into communities. A community-driven graph is then constructed that captures spatial and feature relationships between communities while a label graph captures correlations between image labels. Finally, mapping the image level labels to appropriate communities is formulated as a convex optimization problem. The proposed approach does not require location information for image level labels and can be trained using partially labeled datasets. Compared to the state-of-the-art weakly supervised approaches, we achieve a significant performance improvement of 9% on the MSRC-21 dataset and 11% on the LabelMe dataset, while being more than 300 times faster." ] }
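The multiple instance learning (MIL) setting cited in the related work above treats each image as a bag of instances that carries only a bag-level label: a bag is positive iff at least one instance is. A minimal hedged sketch of that assumption with a toy 1-D threshold classifier (all scores are hypothetical; real systems learn Semantic Texton Forests or dictionaries, not a single threshold):

```python
# Hypothetical data: each bag holds per-instance scores (e.g. a superpixel
# "objectness" feature); the bag label says whether the class is present.
bags = [[0.1, 0.9, 0.2], [0.8, 0.7], [0.1, 0.2], [0.3, 0.05]]
bag_labels = [1, 1, 0, 0]

def bag_predict(bag, thr):
    # Standard MIL assumption: a bag is positive iff some instance is.
    return 1 if max(bag) > thr else 0

def fit_threshold(bags, labels, grid=None):
    grid = grid or [i / 100 for i in range(100)]
    # Pick the threshold with the fewest bag-level errors (first on ties).
    return min(grid, key=lambda t: sum(bag_predict(b, t) != y
                                       for b, y in zip(bags, labels)))

thr = fit_threshold(bags, bag_labels)
preds = [bag_predict(b, thr) for b in bags]
```

For these numbers any threshold between the largest negative-bag maximum (0.3) and the smallest positive-bag maximum (0.8) classifies all bags correctly; the grid search returns the first such value.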
1502.00743
2950368212
This paper investigates how to extract objects-of-interest without relying on hand-crafted features or sliding-window approaches, aiming to jointly solve two sub-tasks: (i) rapidly localizing salient objects from images, and (ii) accurately segmenting the objects based on the localizations. We present a general joint task learning framework, in which each task (either object localization or object segmentation) is tackled via a multi-layer convolutional neural network, and the two networks work collaboratively to boost performance. In particular, we propose to incorporate latent variables bridging the two networks in a joint optimization manner. The first network directly predicts the positions and scales of salient objects from raw images, and the latent variables adjust the object localizations to feed the second network that produces pixelwise object masks. An EM-type method is presented for the optimization, iterating with two steps: (i) by using the two networks, it estimates the latent variables by employing an MCMC-based sampling method; (ii) it optimizes the parameters of the two networks jointly via back propagation, with the latent variables fixed. Extensive experiments suggest that our framework significantly outperforms other state-of-the-art approaches in both accuracy and efficiency (e.g. 1000 times faster than competing approaches).
Since we extract pixelwise objects-of-interest from an image, our work is related to salient region and object detection @cite_34 @cite_19 @cite_0 @cite_26. These methods mainly focused on feature engineering and graph-based segmentation. For example, @cite_19 proposed a regional contrast based saliency extraction algorithm and further segmented objects by applying an iterative version of GrabCut. Some approaches @cite_9 @cite_14 trained object appearance models and utilized spatial or geometric priors to address this task. @cite_9 proposed to transfer segmentation masks from training data into testing images by searching and matching visually similar objects within sliding windows. Other related approaches @cite_23 @cite_29 simultaneously processed a batch of images for object discovery and co-segmentation, but they often required category information as priors.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_9", "@cite_29", "@cite_0", "@cite_19", "@cite_23", "@cite_34" ], "mid": [ "2037954058", "2605929543", "2155080527", "1969840923" ], "abstract": [ "Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.", "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. 
In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.", "We present a generic detection localization algorithm capable of searching for a visual object of interest without training. The proposed method operates using a single example of an object of interest to find similar matches, does not require prior knowledge (learning) about objects being sought, and does not require any preprocessing step or segmentation of a target image. Our method is based on the computation of local regression kernels as descriptors from a query, which measure the likeness of a pixel to its surroundings. Salient features are extracted from said descriptors and compared against analogous features from the target image. This comparison is done using a matrix generalization of the cosine similarity measure. We illustrate optimality properties of the algorithm using a naive-Bayes framework. 
The algorithm yields a scalar resemblance map, indicating the likelihood of similarity between the query and all patches in the target image. By employing nonparametric significance tests and nonmaxima suppression, we detect the presence and location of objects similar to the given query. The approach is extended to account for large variations in scale and rotation. High performance is demonstrated on several challenging data sets, indicating successful detection of objects in diverse contexts and under different imaging conditions.", "Abstract This paper pertains to the detection of objects located in complex backgrounds. A feature-based segmentation approach to the object detection problem is pursued, where the features are computed over multiple spatial orientations and frequencies. The method proceeds as follows: a given image is passed through a bank of even-symmetric Gabor filters. A selection of these filtered images is made and each (selected) filtered image is subjected to a nonlinear (sigmoidal like) transformation. Then, a measure of texture energy is computed in a window around each transformed image pixel. The texture energy (“Gabor features”) and their spatial locations are inputted to a squared-error clustering algorithm. This clustering algorithm yields a segmentation of the original image—it assigns to each pixel in the image a cluster label that identifies the amount of mean local energy the pixel possesses across different spatial orientations and frequencies. The method is applied to a number of visual and infrared images, each one of which contains one or more objects. The region corresponding to the object is usually segmented correctly, and a unique signature of “Gabor features” is typically associated with the segment containing the object(s) of interest. Experimental results are provided to illustrate the usefulness of this object detection method in a number of problem domains. 
These problems arise in IVHS, military reconnaissance, fingerprint analysis, and image database query." ] }
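The regional-contrast saliency idea in the record above scores a region by how strongly it differs from the other regions, weighted by their sizes. A hedged toy sketch on hypothetical grayscale regions (the published algorithm additionally uses spatial weighting and full colour; here a region is just a (mean gray, pixel count) pair):

```python
# Hypothetical segmented regions: (mean gray value, pixel count).
# The small dark region among large bright ones should score highest.
regions = [(0.9, 40), (0.85, 35), (0.1, 5), (0.8, 20)]

def saliency(i, regions):
    gi, _ = regions[i]
    # A region is salient if it contrasts strongly with large other regions.
    return sum(sz * abs(gi - g) for j, (g, sz) in enumerate(regions) if j != i)

scores = [saliency(i, regions) for i in range(len(regions))]
most_salient = max(range(len(regions)), key=lambda i: scores[i])
```

With these numbers region 2 (the small dark one) receives by far the largest contrast score, which is exactly the behaviour the global-contrast formulation is after.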
1502.00743
2950368212
This paper investigates how to extract objects-of-interest without relying on hand-crafted features or sliding-window approaches, aiming to jointly solve two sub-tasks: (i) rapidly localizing salient objects from images, and (ii) accurately segmenting the objects based on the localizations. We present a general joint task learning framework, in which each task (either object localization or object segmentation) is tackled via a multi-layer convolutional neural network, and the two networks work collaboratively to boost performance. In particular, we propose to incorporate latent variables bridging the two networks in a joint optimization manner. The first network directly predicts the positions and scales of salient objects from raw images, and the latent variables adjust the object localizations to feed the second network that produces pixelwise object masks. An EM-type method is presented for the optimization, iterating with two steps: (i) by using the two networks, it estimates the latent variables by employing an MCMC-based sampling method; (ii) it optimizes the parameters of the two networks jointly via back propagation, with the latent variables fixed. Extensive experiments suggest that our framework significantly outperforms other state-of-the-art approaches in both accuracy and efficiency (e.g. 1000 times faster than competing approaches).
Recently resurgent deep learning methods have also been applied to object detection and image segmentation @cite_30 @cite_25 @cite_15 @cite_13 @cite_18 @cite_3 @cite_27 @cite_2. Among these works, @cite_8 detected objects by training category-level convolutional neural networks. @cite_2 proposed to combine multiple components (e.g., feature extraction, occlusion handling, and classification) within a deep architecture for human detection. @cite_13 presented multiscale recursive neural networks for robust image segmentation. The methods mentioned above generally achieved impressive performance, but they usually rely on sliding detection windows over the scales and positions of test images. Very recently, @cite_10 adopted neural networks to recognize object categories while predicting potential object localizations without exhaustive enumeration. This work inspired us to design the first network to localize objects. To the best of our knowledge, our framework is the first to optimize the different tasks collaboratively by introducing latent variables together with network parameter learning.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_8", "@cite_10", "@cite_3", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_25" ], "mid": [ "2963542991", "1487583988", "2410641892", "1929903369" ], "abstract": [ "Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. 
Finally, we release a feature extractor from our best model called OverFeat.", "Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multi-instance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets.", "Deep convolutional neural networks (CNN) have seen tremendous success in large-scale generic object recognition. 
In comparison with generic object recognition, fine-grained image classification (FGIC) is much more challenging because (i) fine-grained labeled data is much more expensive to acquire (usually requiring domain expertise); (ii) there exists large intra-class and small inter-class variance. Most recent work exploiting deep CNN for image recognition with small training data adopts a simple strategy: pre-train a deep CNN on a large-scale external dataset (e.g., ImageNet) and fine-tune on the small-scale target data to fit the specific classification task. In this paper, beyond the fine-tuning strategy, we propose a systematic framework of learning a deep CNN that addresses the challenges from two new perspectives: (i) identifying easily annotated hyper-classes inherent in the fine-grained data and acquiring a large number of hyper-class-labeled images from readily available external sources (e.g., image search engines), and formulating the problem into multitask learning; (ii) a novel learning model by exploiting a regularization between the fine-grained recognition model and the hyper-class recognition model. We demonstrate the success of the proposed framework on two small-scale fine-grained datasets (Stanford Dogs and Stanford Cars) and on a large-scale car dataset that we collected." ] }
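The OverFeat abstracts in the record above stress that sliding-window evaluation can be realised efficiently inside a ConvNet as convolution. The naive procedure being accelerated looks like this (a hypothetical 2x2 template scored at every position of a toy response map; real detectors work multiscale and on learned features):

```python
# Toy "image" (e.g. a feature/response map) and a 2x2 all-ones template;
# every value is hypothetical.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
]
template = [[1, 1],
            [1, 1]]

def score(y, x):
    # Dot product between the template and the window anchored at (y, x).
    return sum(image[y + dy][x + dx] * template[dy][dx]
               for dy in range(2) for dx in range(2))

H, W = len(image), len(image[0])
scores = {(y, x): score(y, x) for y in range(H - 1) for x in range(W - 1)}
best = max(scores, key=scores.get)   # top-left corner of the best window
```

The window anchored at (0, 2) covers the 2x2 block of ones and wins with score 4; a ConvNet obtains the same dense score map in one forward pass by sharing the per-window computation.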
1502.00702
2169585179
In this paper, we propose a novel model for high-dimensional data, called the Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a linear orthogonal projection and a finite mixture model under a unified generative modeling framework. The HOPE model itself can be learned unsupervised from unlabelled data based on maximum likelihood estimation, as well as discriminatively from labelled data. More interestingly, we have shown that the proposed HOPE models are closely related to neural networks (NNs) in the sense that each hidden layer can be reformulated as a HOPE model. As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work and, more importantly, to learn NNs in either supervised or unsupervised ways. In this work, we have investigated the HOPE framework to learn NNs for several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results have shown that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning and supervised or semi-supervised learning.
Similar to PCA, Fisher's linear discriminant analysis (LDA) can also be viewed as a linear dimensionality reduction technique. However, PCA is unsupervised in the sense that it depends only on the data, while Fisher's LDA is supervised since it uses both the data and class-label information. The high-dimensional data are linearly projected to a subspace where the various classes are best distinguished, as measured by the Fisher criterion. In @cite_0, the so-called heteroscedastic discriminant analysis (HDA) is proposed to extend LDA to deal with high-dimensional data with heteroscedastic covariance, where a linear projection can be learned from data and class labels based on the maximum likelihood criterion.
{ "cite_N": [ "@cite_0" ], "mid": [ "2108146080", "2035667327", "2019127072", "862919699" ], "abstract": [ "Fisher's linear discriminant analysis (LDA) is a classical multivariate technique both for dimension reduction and classification. The data vectors are transformed into a low dimensional subspace such that the class centroids are spread out as much as possible. In this subspace LDA works as a simple prototype classifier with linear decision boundaries. However, in many applications the linear boundaries do not adequately separate the classes. We present a nonlinear generalization of discriminant analysis that uses the kernel trick of representing dot products by kernel functions. The presented algorithm allows a simple formulation of the EM-algorithm in terms of kernel functions which leads to a unique concept for unsupervised mixture analysis, supervised discriminant analysis and semi-supervised discriminant analysis with partially unlabelled observations in feature spaces.", "Linear Discriminant Analysis (LDA) and its nonlinear version Kernel Discriminant Analysis (KDA) are well-known and widely used techniques for supervised feature extraction and dimensionality reduction. They determine an optimal discriminant space for (non)linear data projection based on certain assumptions, e.g. on using normal distributions (either on the input or in the kernel space) for each class and employing class representation by the corresponding class mean vectors. However, there might be other vectors that can be used for classes representation, in order to increase class discrimination in the resulted feature space. In this paper, we propose an optimization scheme aiming at the optimal class representation, in terms of Fisher ratio maximization, for nonlinear data projection. 
Compared to the standard approach, the proposed optimization scheme increases class discrimination in the reduced-dimensionality feature space and achieves higher classification rates in publicly available data sets.", "Linear discriminant analysis (LDA) is a widely used technique for supervised feature extraction and dimensionality reduction. LDA determines an optimal discriminant space for linear data projection based on certain assumptions, e.g., on using normal distributions for each class and employing class representation by the mean class vectors. However, there might be other vectors that can represent each class, to increase class discrimination. In this brief, we propose an optimization scheme aiming at the optimal class representation, in terms of Fisher ratio maximization, for LDA-based data projection. Compared with the standard LDA approach, the proposed optimization scheme increases class discrimination in the reduced dimensionality space and achieves higher classification rates in publicly available data sets.", "Linear discriminant analysis (LDA) is a popular dimensionality reduction and classification method that simultaneously maximizes between-class scatter and minimizes within-class scatter. In this paper, we verify the equivalence of LDA and least squares (LS) with a set of dependent variable matrices. The equivalence is in the sense that the LDA solution matrix and the LS solution matrix have the same range. The resulting LS provides an intuitive interpretation in which its solution performs data clustering according to class labels. Further, the fact that LDA and LS have the same range allows us to design a two-stage algorithm that computes the LDA solution given by generalized eigenvalue decomposition (GEVD), much faster than computing the original GEVD. Experimental results demonstrate the equivalence of the LDA solution and the proposed LS solution." ] }
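The Fisher criterion discussed in the record above can be made concrete with a short sketch. This is a minimal from-scratch illustration, not code from any of the cited papers; the toy data, the small ridge term added to the within-class scatter, and all names are illustrative choices.

```python
import numpy as np

def fisher_lda(X, y, n_components=1):
    """Fisher's LDA: find directions maximizing between-class scatter
    relative to within-class scatter (the Fisher criterion)."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sb w = lambda Sw w
    # (a small ridge keeps Sw invertible in degenerate cases).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]

# Two well-separated Gaussian classes in 2-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
               rng.normal([3, 3], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W = fisher_lda(X, y)
z = X @ W  # 1-D projection separating the two classes
```

On this toy data the projected class means end up several within-class standard deviations apart, which is exactly what the Fisher ratio rewards.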
1904.12164
2941755003
The problem of influence maximization is to select the most influential individuals in a social network. With the popularity of social networking sites and the growth of viral marketing, the problem has become increasingly important. Influence maximization is NP-hard, so no polynomial-time algorithm can solve it exactly unless P=NP. Many heuristics have been proposed to find good approximate solutions in a shorter time. In this paper, we propose two heuristic algorithms for finding good solutions. The heuristics are based on two ideas: (1) vertices of high degree have more influence in the network, and (2) nearby vertices influence nearly the same sets of vertices. We evaluate our algorithms on several well-known data sets and show that our heuristics achieve better results (up to @math in influence spread) in a shorter time (up to @math improvement in the running time).
In order to improve the efficiency of these computations, many approaches have been proposed. Leskovec et al. @cite_3 proposed the Cost-Effective Lazy Forward (CELF) optimization, which reduces the cost of estimating the influence spread by exploiting the submodularity of the objective function.
{ "cite_N": [ "@cite_3" ], "mid": [ "2030378176", "2259139337", "1971844861", "2950865888" ], "abstract": [ "Influence maximization, defined as finding a small subset of nodes that maximizes spread of influence in social networks, is NP-hard under both Linear Threshold (LT) and Independent Cascade (IC) models, where a line of greedy heuristic algorithms have been proposed. The simple greedy algorithm [14] achieves an approximation ratio of 1-1/e. The advanced CELF algorithm [16], by exploiting the submodular property of the spread function, runs 700 times faster than the simple greedy algorithm on average. However, CELF is still inefficient [4], as the first iteration calls for N times of spread estimations (N is the number of nodes in networks), which is computationally expensive especially for large networks. To this end, in this paper we derive an upper bound function for the spread function. The bound can be used to reduce the number of Monte-Carlo simulation calls in greedy algorithms, especially in the first iteration of initialization. Based on the upper bound, we propose an efficient Upper Bound based Lazy Forward algorithm (UBLF in short), by incorporating the bound into the CELF algorithm. We test and compare our algorithm with prior algorithms on real-world data sets. Experimental results demonstrate that UBLF, compared with CELF, reduces more than 95% of Monte-Carlo simulations and achieves at least 2-5 times speed-raising when the seed set is small.", "A number of recent works have emphasized the prominent role played by the Kurdyka-Łojasiewicz inequality for proving the convergence of iterative algorithms solving possibly nonsmooth nonconvex optimization problems. In this work, we consider the minimization of an objective function satisfying this property, which is a sum of two terms: (i) a differentiable, but not necessarily convex, function and (ii) a function that is not necessarily convex, nor necessarily differentiable. The latter function is expressed as a separable sum of functions of blocks of variables. Such an optimization problem can be addressed with the Forward---Backward algorithm which can be accelerated thanks to the use of variable metrics derived from the Majorize---Minimize principle. We propose to combine the latter acceleration technique with an alternating minimization strategy which relies upon a flexible update rule. We give conditions under which the sequence generated by the resulting Block Coordinate Variable Metric Forward---Backward algorithm converges to a critical point of the objective function. An application example to a nonconvex phase retrieval problem encountered in signal image processing shows the efficiency of the proposed optimization method.", "We have recently introduced a multistep extension of the greedy algorithm for modularity optimization. The extension is based on the idea that merging l pairs of communities (l>1) at each iteration prevents premature condensation into few large communities. Here, an empirical formula is presented for the choice of the step width l that generates partitions with (close to) optimal modularity for 17 real-world and 1100 computer-generated networks. Furthermore, an in-depth analysis of the communities of two real-world networks (the metabolic network of the bacterium E. coli and the graph of coappearing words in the titles of papers coauthored by Martin Karplus) provides evidence that the partition obtained by the multistep greedy algorithm is superior to the one generated by the original greedy algorithm not only with respect to modularity, but also according to objective criteria. In other words, the multistep extension of the greedy algorithm reduces the danger of getting trapped in local optima of modularity and generates more reasonable partitions.", "Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice? In this paper, we develop the first linear-time algorithm for maximizing a general monotone submodular function subject to a cardinality constraint. We show that our randomized algorithm, STOCHASTIC-GREEDY, can achieve a @math approximation guarantee, in expectation, to the optimum solution in time linear in the size of the data and independent of the cardinality constraint. We empirically demonstrate the effectiveness of our algorithm on submodular functions arising in data summarization, including training large-scale kernel methods, exemplar-based clustering, and sensor placement. We observe that STOCHASTIC-GREEDY practically achieves the same utility value as lazy greedy but runs much faster. More surprisingly, we observe that in many practical scenarios STOCHASTIC-GREEDY does not evaluate the whole fraction of data points even once and still achieves indistinguishable results compared to lazy greedy." ] }
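The lazy-forward idea behind CELF, described in the UBLF abstract above, can be sketched in a few lines: by submodularity, a stale marginal gain is a valid upper bound, so most elements never need re-evaluation. This is a generic sketch, not code from the cited papers; it uses a toy maximum-coverage function as a stand-in for the expensive, Monte-Carlo-estimated influence spread.

```python
import heapq

def celf(ground_set, spread, k):
    """Cost-Effective Lazy Forward greedy for a monotone submodular
    `spread` function: stale marginal gains remain valid upper bounds,
    so most elements are never re-evaluated."""
    # First pass: marginal gain of every singleton (negated for the min-heap).
    heap = [(-spread({v}), v, 0) for v in ground_set]
    heapq.heapify(heap)
    seeds, spread_val = set(), 0.0
    while len(seeds) < k and heap:
        neg_gain, v, last_round = heapq.heappop(heap)
        if last_round == len(seeds):   # gain computed this round: it is exact
            seeds.add(v)
            spread_val += -neg_gain
        else:                          # stale upper bound: recompute, push back
            gain = spread(seeds | {v}) - spread_val
            heapq.heappush(heap, (-gain, v, len(seeds)))
    return seeds, spread_val

# Toy instance: maximum coverage as a stand-in for influence spread.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 5, 6, 7}}
cover = lambda S: float(len(set().union(*(sets[v] for v in S))))
seeds, val = celf(sets.keys(), cover, 2)
```

On this toy instance the lazy evaluation needs only a single re-computation beyond the initial pass, whereas plain greedy would re-evaluate every remaining element in every round.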
1904.12164
2941755003
The problem of influence maximization is to select the most influential individuals in a social network. With the popularity of social networking sites and the growth of viral marketing, the problem has become increasingly important. Influence maximization is NP-hard, so no polynomial-time algorithm can solve it exactly unless P=NP. Many heuristics have been proposed to find good approximate solutions in a shorter time. In this paper, we propose two heuristic algorithms for finding good solutions. The heuristics are based on two ideas: (1) vertices of high degree have more influence in the network, and (2) nearby vertices influence nearly the same sets of vertices. We evaluate our algorithms on several well-known data sets and show that our heuristics achieve better results (up to @math in influence spread) in a shorter time (up to @math improvement in the running time).
Chen et al. @cite_8 proposed new greedy algorithms for the independent cascade and weighted cascade models, and made the greedy algorithm faster by combining their algorithms with CELF. They also proposed a new heuristic that produces results of quality close to the greedy algorithm while being much faster, and that outperforms the traditional degree and distance centrality heuristics. In order to avoid running repeated influence-propagation simulations, Borgs et al. @cite_16 generate a random hypergraph according to the reverse reachability of vertices in the original graph and select the @math vertices that cover the largest number of vertices in the hypergraph. They guarantee a @math approximation ratio of the solution with probability at least @math . Later, Tang et al. @cite_14 @cite_15 proposed TIM and IMM to address drawbacks of Borgs et al.'s algorithm @cite_16 and improve its running time. Bucur and Iacca @cite_1 and Krömer and Nowaková @cite_11 used genetic algorithms for the influence maximization problem. Weskida and Michalski @cite_6 used GPU acceleration to improve the efficiency of their genetic algorithm.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_1", "@cite_6", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "2552732996", "2151690061", "2030378176", "2069278600" ], "abstract": [ "Nowadays, in the world of limited attention, the techniques that maximize the spread of social influence are more than welcomed. Companies try to maximize their profits on sales by providing customers with free samples believing in the power of word-of-mouth marketing, governments and non-governmental organizations often want to introduce positive changes in the society by appropriately selecting individuals or election candidates want to spend least budget yet still win the election. In this work we propose the use of evolutionary algorithm as a mean for selecting seeds in social networks. By framing the problem as genetic algorithm challenge we show that it is possible to outperform well-known greedy algorithm in the problem of influence maximization for the linear threshold model in both: quality (up to 16% better) and efficiency (up to 35 times faster). We implemented these two algorithms by using GPGPU approach showing that also the evolutionary algorithm can benefit from GPU acceleration making it efficient and scaling better than the greedy algorithm. As the experiments conducted by using three real world datasets reveal, the evolutionary approach proposed in this paper outperforms the greedy algorithm in terms of the outcome and it also scales much better than the greedy algorithm when the network size is increasing. The only drawback in the GPGPU approach so far is the maximum size of the network that can be processed - it is limited by the memory of the GPU card. We believe that by showing the superiority of the evolutionary approach over the greedy algorithm, we will motivate the scientific community to look for an idea to overcome this limitation of the GPU approach - we also suggest one of the possible paths to explore. Since the proposed approach is based only on topological features of the network, not on the attributes of nodes, the applications of it are broader than the ones that are dataset-specific.", "The greedy sequential algorithm for maximal independent set (MIS) loops over the vertices in an arbitrary order adding a vertex to the resulting set if and only if no previous neighboring vertex has been added. In this loop, as in many sequential loops, each iterate will only depend on a subset of the previous iterates (i.e. knowing that any one of a vertex's previous neighbors is in the MIS, or knowing that it has no previous neighbors, is sufficient to decide its fate one way or the other). This leads to a dependence structure among the iterates. If this structure is shallow then running the iterates in parallel while respecting the dependencies can lead to an efficient parallel implementation mimicking the sequential algorithm. In this paper, we show that for any graph, and for a random ordering of the vertices, the dependence length of the sequential greedy MIS algorithm is polylogarithmic (O(log^2 n) with high probability). Our results extend previous results that show polylogarithmic bounds only for random graphs. We show similar results for greedy maximal matching (MM). For both problems we describe simple linear-work parallel algorithms based on the approach. The algorithms allow for a smooth tradeoff between more parallelism and reduced work, but always return the same result as the sequential greedy algorithms. We present experimental results that demonstrate efficiency and the tradeoff between work and parallelism.", "Influence maximization, defined as finding a small subset of nodes that maximizes spread of influence in social networks, is NP-hard under both Linear Threshold (LT) and Independent Cascade (IC) models, where a line of greedy heuristic algorithms have been proposed. The simple greedy algorithm [14] achieves an approximation ratio of 1-1/e. The advanced CELF algorithm [16], by exploiting the submodular property of the spread function, runs 700 times faster than the simple greedy algorithm on average. However, CELF is still inefficient [4], as the first iteration calls for N times of spread estimations (N is the number of nodes in networks), which is computationally expensive especially for large networks. To this end, in this paper we derive an upper bound function for the spread function. The bound can be used to reduce the number of Monte-Carlo simulation calls in greedy algorithms, especially in the first iteration of initialization. Based on the upper bound, we propose an efficient Upper Bound based Lazy Forward algorithm (UBLF in short), by incorporating the bound into the CELF algorithm. We test and compare our algorithm with prior algorithms on real-world data sets. Experimental results demonstrate that UBLF, compared with CELF, reduces more than 95% of Monte-Carlo simulations and achieves at least 2-5 times speed-raising when the seed set is small.", "We present a new efficient algorithm for the search version of the approximate Closest Vector Problem with Preprocessing (CVPP). Our algorithm achieves an approximation factor of O(n sqrt(log n)), improving on the previous best of O(n^1.5) due to Lagarias, Lenstra, and Schnorr hkzbabai . We also show, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve the problem (with the slightly worse approximation factor of O(n)). We remark that this still leaves a large gap with respect to the decisional version of CVPP, where the best known approximation factor is O(sqrt(n/log n)) due to Aharonov and Regev AharonovR04 . To achieve these results, we show a reduction to the same problem restricted to target points that are close to the lattice and a more efficient reduction to a harder problem, Bounded Distance Decoding with preprocessing (BDDP). Combining either reduction with the previous best-known algorithm for BDDP by Liu, Lyubashevsky, and Micciancio LiuLM06 gives our main result. In the setting of CVP without preprocessing, we also give a reduction from (1+eps)gamma approximate CVP to gamma approximate CVP where the target is at distance at most 1+1/eps times the minimum distance (the length of the shortest non-zero vector) which relies on the lattice sparsification techniques of Dadush and Kun DadushK13 . As our final and most technical contribution, we present a substantially more efficient variant of the LLM algorithm (both in terms of run-time and amount of preprocessing advice), and via an improved analysis, show that it can decode up to a distance proportional to the reciprocal of the smoothing parameter of the dual lattice MR04 . We show that this is never smaller than the LLM decoding radius, and that it can be up to an Omega~(sqrt(n)) factor larger." ] }
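The reverse-reachability idea of Borgs et al. discussed above can be sketched as follows. This is a simplified illustration under the independent cascade model with a uniform edge probability; the star graph, `p`, and the sample count `theta` are arbitrary choices, and real implementations (TIM, IMM) choose the number of samples adaptively with theoretical guarantees.

```python
import random
from collections import defaultdict

def rr_set(in_nbrs, p, n):
    """One reverse-reachable (RR) set: from a uniformly random root,
    walk edges backwards, each live independently with probability p."""
    root = random.randrange(n)
    seen, stack = {root}, [root]
    while stack:
        u = stack.pop()
        for v in in_nbrs.get(u, ()):
            if v not in seen and random.random() < p:
                seen.add(v)
                stack.append(v)
    return seen

def ris_seeds(in_nbrs, n, k, p, theta):
    """Sample theta RR sets, then greedily pick the k nodes covering the
    most sets (a node covering many RR sets influences many roots)."""
    rr = [rr_set(in_nbrs, p, n) for _ in range(theta)]
    covered, seeds = [False] * theta, []
    for _ in range(k):
        count = defaultdict(int)
        for i, s in enumerate(rr):
            if not covered[i]:
                for v in s:
                    count[v] += 1
        best = max(count, key=count.get)
        seeds.append(best)
        for i, s in enumerate(rr):
            if best in s:
                covered[i] = True
    return seeds

random.seed(1)
n = 30
# Star graph: node 0 has an edge to every other node, so in_nbrs[v] = [0].
in_nbrs = {v: [0] for v in range(1, n)}
seeds = ris_seeds(in_nbrs, n, k=1, p=0.3, theta=2000)  # the hub is selected
```

The key point is that seed quality is judged by coverage of sampled RR sets rather than by repeated forward cascade simulations, which is what makes this family of algorithms fast.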
1904.12164
2941755003
The problem of influence maximization is to select the most influential individuals in a social network. With the popularity of social networking sites and the growth of viral marketing, the problem has become increasingly important. Influence maximization is NP-hard, so no polynomial-time algorithm can solve it exactly unless P=NP. Many heuristics have been proposed to find good approximate solutions in a shorter time. In this paper, we propose two heuristic algorithms for finding good solutions. The heuristics are based on two ideas: (1) vertices of high degree have more influence in the network, and (2) nearby vertices influence nearly the same sets of vertices. We evaluate our algorithms on several well-known data sets and show that our heuristics achieve better results (up to @math in influence spread) in a shorter time (up to @math improvement in the running time).
There are also community-based algorithms for the influence maximization problem, which partition the graph into small subgraphs and select the most influential vertices from each subgraph. Chen et al. @cite_2 used the H-Clustering algorithm and Manaskasemsak et al. @cite_13 used the Markov clustering algorithm for community detection. Song et al. @cite_12 divided the graph into communities and then selected the most influential vertices with a dynamic programming algorithm.
{ "cite_N": [ "@cite_13", "@cite_12", "@cite_2" ], "mid": [ "2106597862", "2964201038", "2889208719", "2035165116" ], "abstract": [ "Given a social graph, the problem of influence maximization is to determine a set of nodes that maximizes the spread of influences. While some recent research has studied the problem of influence maximization, these works are generally too time consuming for practical use in a large-scale social network. In this article, we develop a new framework, community-based influence maximization (CIM), to tackle the influence maximization problem with an emphasis on the time efficiency issue. Our proposed framework, CIM, comprises three phases: (i) community detection, (ii) candidate generation, and (iii) seed selection. Specifically, phase (i) discovers the community structure of the network; phase (ii) uses the information of communities to narrow down the possible seed candidates; and phase (iii) finalizes the seed nodes from the candidate set. By exploiting the properties of the community structures, we are able to avoid overlapped information and thus efficiently select the number of seeds to maximize information spreads. The experimental results on both synthetic and real datasets show that the proposed CIM algorithm significantly outperforms the state-of-the-art algorithms in terms of efficiency and scalability, with almost no compromise of effectiveness.", "We consider the canonical problem of influence maximization in social networks. Since the seminal work of Kempe, Kleinberg, and Tardos there have been two, largely disjoint efforts on this problem. The first studies the problem associated with learning the generative model that produces cascades, and the second focuses on the algorithmic challenge of identifying a set of influencers, assuming the generative model is known. Recent results on learning and optimization imply that in general, if the generative model is not known but rather learned from training data, no algorithm for influence maximization can yield a constant factor approximation guarantee using polynomially-many samples, drawn from any distribution. In this paper we describe a simple algorithm for maximizing influence from training data. The main idea behind the algorithm is to leverage the strong community structure of social networks and identify a set of individuals who are influentials but whose communities have little overlap. Although in general, the approximation guarantee of such an algorithm is unbounded, we show that this algorithm performs well experimentally. To analyze its performance, we prove this algorithm obtains a constant factor approximation guarantee on graphs generated through the stochastic block model, traditionally used to model networks with community structure.", "In the well-studied Influence Maximization problem, the goal is to identify a set of k nodes in a social network whose joint influence on the network is maximized. A large body of recent work has justified research on Influence Maximization models and algorithms with their potential to create societal or economic value. However, in order to live up to this potential, the algorithms must be robust to large amounts of noise, for they require quantitative estimates of the influence, which individuals exert on each other; ground truth for such quantities is inaccessible, and even decent estimates are very difficult to obtain. We begin to address this concern formally. First, we exhibit simple inputs on which even very small estimation errors may mislead every algorithm into highly suboptimal solutions. Motivated by this observation, we propose the Perturbation Interval model as a framework to characterize the stability of Influence Maximization against noise in the inferred diffusion network. Analyzing the susceptibility of specific instances to estimation errors leads to a clean algorithmic question, which we term the Influence Difference Maximization problem. However, the objective function of Influence Difference Maximization is NP-hard to approximate within a factor of O(n^(1−ε)) for any ε > 0. Given the infeasibility of diagnosing instability algorithmically, we focus on finding influential users robustly across multiple diffusion settings. We define a Robust Influence Maximization framework wherein an algorithm is presented with a set of influence functions. The algorithm’s goal is to identify a set of k nodes who are simultaneously influential for all influence functions, compared to the (function-specific) optimum solutions. We show strong approximation hardness results for this problem unless the algorithm gets to select at least a logarithmic factor more seeds than the optimum solution. However, when enough extra seeds may be selected, we show that techniques of can be used to approximate the optimum robust influence to within a factor of 1−1/e. We evaluate this bicriteria approximation algorithm against natural heuristics on several real-world datasets. Our experiments indicate that the worst-case hardness does not necessarily translate into bad performance on real-world datasets; all algorithms perform fairly well.", "Given a social network G and a positive integer k, the influence maximization problem asks for k nodes (in G) whose adoptions of a certain idea or product can trigger the largest expected number of follow-up adoptions by the remaining nodes. This problem has been extensively studied in the literature, and the state-of-the-art technique runs in O((k+l)(n+m) log n / ε²) expected time and returns a (1-1/e-ε)-approximate solution with at least 1 - 1/n^l probability. This paper presents an influence maximization algorithm that provides the same worst-case guarantees as the state of the art, but offers significantly improved empirical efficiency. The core of our algorithm is a set of estimation techniques based on martingales, a classic statistical tool. Those techniques not only provide accurate results with small computation overheads, but also enable our algorithm to support a larger class of information diffusion models than existing methods do. We experimentally evaluate our algorithm against the states of the art under several popular diffusion models, using real social networks with up to 1.4 billion edges. Our experimental results show that the proposed algorithm consistently outperforms the states of the art in terms of computation efficiency, and is often orders of magnitude faster." ] }
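The expensive subroutine that all of the methods above try to avoid or reduce is Monte-Carlo estimation of the influence spread. A minimal sketch under the independent cascade model (the uniform edge probability, the three-node path, and the run count are illustrative choices, not taken from the cited papers):

```python
import random

def ic_spread(out_nbrs, seed_set, p, runs):
    """Monte-Carlo estimate of expected spread under the independent
    cascade model: each newly active node tries each outgoing edge once,
    succeeding independently with probability p."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seed_set), list(seed_set)
        while frontier:
            u = frontier.pop()
            for v in out_nbrs.get(u, ()):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / runs

random.seed(0)
# Path 0 -> 1 -> 2: expected spread from {0} is 1 + p + p^2 = 1.75 at p = 0.5.
out_nbrs = {0: [1], 1: [2]}
est = ic_spread(out_nbrs, [0], p=0.5, runs=20000)
```

Because the greedy algorithm calls an estimator like this for every candidate seed in every round, reducing or sidestepping these simulations (CELF, RIS, community decomposition) is where the speedups in this literature come from.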
1904.12191
2941057241
We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.
Approximation properties of neural networks and, more generally, nonlinear approximation have been studied in detail in the nineties, see e.g. @cite_17 @cite_2 @cite_4 . The main concern of the present paper is quite different, since we focus on the random feature model, and the (recently proposed) neural tangent model. Further, our focus is on the high-dimensional regime in which @math grows with @math . Most approximation theory literature considers @math fixed, and @math .
{ "cite_N": [ "@cite_2", "@cite_4", "@cite_17" ], "mid": [ "2166116275", "2941057241", "2963446085", "2103496339" ], "abstract": [ "Approximation properties of a class of artificial neural networks are established. It is shown that feedforward networks with one layer of sigmoidal nonlinearities achieve integrated squared error of order O(1/n), where n is the number of nodes. The approximated function is assumed to have a bound on the first moment of the magnitude distribution of the Fourier transform. The nonlinear parameters associated with the sigmoidal nodes, as well as the parameters of linear combination, are adjusted in the approximation. In contrast, it is shown that for series expansions with n terms, in which only the parameters of linear combination are adjusted, the integrated squared approximation error cannot be made smaller than order (1/n)^(2/d) uniformly for functions satisfying the same smoothness assumption, where d is the dimension of the input to the function. For the class of functions examined, the approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings.", "We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.", "In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth, any widths and no unrealistic assumptions. As a result, we present an instance, for which we can answer to the following question: how difficult to directly train a deep model in theory? It is more difficult than the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice.", "In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks." ] }
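The random feature approximation of a kernel discussed in this record can be illustrated with the classical Rahimi-Recht random Fourier features for the Gaussian kernel. Note this is a sketch on R^d rather than the sphere setting of the paper; the dimension, feature count, and test points are arbitrary choices.

```python
import numpy as np

def random_features(X, W, b):
    """RF map phi(x) = sqrt(2/N) * cos(W x + b); inner products of these
    features approximate the kernel E_w[2 cos(w.x + b) cos(w.y + b)]."""
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, N = 5, 20000                # input dimension, number of random features
W = rng.normal(size=(N, d))    # w ~ N(0, I) corresponds to exp(-|x-y|^2 / 2)
b = rng.uniform(0.0, 2.0 * np.pi, size=N)
x = rng.normal(size=(1, d))
y = x + 0.3                    # a nearby point, so the kernel value is non-trivial
k_hat = (random_features(x, W, b) @ random_features(y, W, b).T).item()
k_true = float(np.exp(-0.5 * np.sum((x - y) ** 2)))  # exact Gaussian kernel
```

As N grows, k_hat concentrates around k_true at rate O(1/sqrt(N)); how well the induced finite-rank kernel (and its RKHS) approximates the limiting one is exactly the quantitative question studied in the works cited above.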
1904.12191
2941057241
We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.
The random features model @math has been studied in considerable depth since the original work in @cite_21 . The classical viewpoint suggests that @math should be regarded as an approximation of the reproducing kernel Hilbert space (RKHS) @math defined by the kernel @math (see @cite_13 for general background). Indeed the space @math is the RKHS defined by the following finite-rank approximation of this kernel: @math . The paper @cite_21 proved convergence of @math to @math as functions. Subsequent work established quantitative approximation of @math by the random feature model @math . In particular, @cite_18 provides upper and lower bounds in terms of the eigenvalues of the kernel @math , which match up to logarithmic terms (see also @cite_26 @cite_11 @cite_23 for related work).
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_21", "@cite_23", "@cite_13", "@cite_11" ], "mid": [ "2418306335", "2124331852", "2950268835", "1988813039" ], "abstract": [ "A Hilbert space embedding of a distribution---in short, a kernel mean embedding---has recently emerged as a powerful tool for machine learning and inference. The basic idea behind this framework is to map distributions into a reproducing kernel Hilbert space (RKHS) in which the whole arsenal of kernel methods can be extended to probability measures. It can be viewed as a generalization of the original \"feature map\" common to support vector machines (SVMs) and other kernel methods. While initially closely associated with the latter, it has meanwhile found application in fields ranging from kernel machines and probabilistic modeling to statistical inference, causal discovery, and deep learning. The goal of this survey is to give a comprehensive review of existing work and recent advances in this research area, and to discuss the most challenging issues and open problems that could lead to new research directions. The survey begins with a brief introduction to the RKHS and positive definite kernels which forms the backbone of this survey, followed by a thorough discussion of the Hilbert space embedding of marginal distributions, theoretical guarantees, and a review of its applications. The embedding of distributions enables us to apply RKHS methods to probability measures which prompts a wide range of applications such as kernel two-sample testing, independent testing, and learning on distributional data. Next, we discuss the Hilbert space embedding for conditional distributions, give theoretical insights, and review some applications. The conditional mean embedding enables us to perform sum, product, and Bayes' rules---which are ubiquitous in graphical model, probabilistic inference, and reinforcement learning---in a non-parametric way. 
We then discuss relationships between this framework and other related areas. Lastly, we give some suggestions on future research directions.", "A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing, and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). A pseudometric on the space of probability measures can be defined as the distance between distribution embeddings: we denote this as γ_k, indexed by the kernel function k that defines the inner product in the RKHS. We present three theoretical properties of γ_k. First, we consider the question of determining the conditions on the kernel k for which γ_k is a metric: such k are denoted characteristic kernels. Unlike pseudometrics, a metric is zero only when two distributions coincide, thus ensuring the RKHS embedding maps all distributions uniquely (i.e., the embedding is injective). While previously published conditions may apply only in restricted circumstances (e.g., on compact domains), and are difficult to check, our conditions are straightforward and intuitive: integrally strictly positive definite kernels are characteristic. Alternatively, if a bounded continuous kernel is translation-invariant on ℝ^d, then it is characteristic if and only if the support of its Fourier transform is the entire ℝ^d. Second, we show that the distance between distributions under γ_k results from an interplay between the properties of the kernel and the distributions, by demonstrating that distributions are close in the embedding space when their differences occur at higher frequencies.
Third, to understand the nature of the topology induced by γ_k, we relate γ_k to other popular metrics on probability measures, and present conditions on the kernel k under which γ_k metrizes the weak topology.", "A nonparametric kernel-based method for realizing Bayes' rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' rule are presented, including Bayesian computation without likelihood and filtering with a nonparametric state-space model.", "With the goal of accelerating the training and testing complexity of nonlinear kernel methods, several recent papers have proposed explicit embeddings of the input data into low-dimensional feature spaces, where fast linear methods can instead be used to generate approximate solutions. Analogous to random Fourier feature maps to approximate shift-invariant kernels, such as the Gaussian kernel, on ℝ^d, we develop a new randomized technique called random Laplace features, to approximate a family of kernel functions adapted to the semigroup structure of ℝ_+^d. This is the natural algebraic structure on the set of histograms and other non-negative data representations. We provide theoretical results on the uniform convergence of random Laplace features. Empirical analyses on image classification and surveillance event detection tasks demonstrate the attractiveness of using random Laplace features relative to several other feature maps proposed in the literature." ] }
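As a concrete instance of the finite-rank kernel approximation discussed in the related-work passage above, the sketch below uses random Fourier features for the Gaussian kernel on ℝ^d, the most common form of the Rahimi-Recht construction; the dimensions, feature count, and choice of Gaussian kernel are illustrative assumptions rather than the sphere setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, W, b):
    """Map inputs to random Fourier features z(x) = sqrt(2/N) * cos(W x + b).

    For W ~ N(0, I) and b ~ Uniform[0, 2*pi], E[<z(x), z(y)>] equals the
    Gaussian kernel exp(-||x - y||^2 / 2), so <z(x), z(y)> is a finite-rank
    (rank-N) approximation of that kernel."""
    N = W.shape[0]
    return np.sqrt(2.0 / N) * np.cos(X @ W.T + b)

d, N = 5, 20000          # input dimension, number of random features
W = rng.standard_normal((N, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=N)

x = rng.standard_normal(d)
y = rng.standard_normal(d)

# Finite-rank approximation K_N(x, y) = <z(x), z(y)> versus the exact kernel.
zx = random_fourier_features(x[None, :], W, b)
zy = random_fourier_features(y[None, :], W, b)
approx = (zx @ zy.T).item()
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)
print(approx, exact)  # the two values agree to within about 0.01
```

The Monte Carlo error of the approximation decays like 1/sqrt(N), which is the quantitative behavior the eigenvalue-based bounds cited above make precise.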
1904.12191
2941057241
We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.
Of course, this approach generally breaks down if the dimension @math is large (technically, if it grows with @math ). This 'curse of dimensionality' is already revealed by classical lower bounds in functional approximation, see e.g. @cite_17 @cite_18 . However, previous work does not clarify what happens precisely in this high-dimensional regime. In contrast, the picture emerging from our work is remarkably simple. In particular, in the regime @math , random feature models are performing vanilla linear regression with respect to the raw features.
{ "cite_N": [ "@cite_18", "@cite_17" ], "mid": [ "2941057241", "1991143958", "1800334520", "2157133710" ], "abstract": [ "We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.", "Constructing a good approximation to a function of many variables suffers from the “curse of dimensionality”. Namely, functions on ℝ^N with smoothness of order s can in general be captured with accuracy at most O(n^{-s/N}) using linear spaces or nonlinear manifolds of dimension n. If N is large and s is not, then n has to be chosen inordinately large for good accuracy. The large value of N often precludes reasonable numerical procedures. On the other hand, there is the common belief that real world problems in high dimensions have as their solution, functions which are more amenable to numerical recovery.
This has led to the introduction of models for these functions that do not depend on smoothness alone but also involve some form of variable reduction. In these models it is assumed that, although the function depends on N variables, only a small number of them are significant. Another variant of this principle is that the function lives on a low dimensional manifold. Since the dominant variables (respectively the manifold) are unknown, this leads to new problems of how to organize point queries to capture such functions. The present paper studies where to query the values of a ridge function f(x)=g(a⋅x) when both a∈ℝ^N and g∈C[0,1] are unknown. We establish estimates on how well f can be approximated using these point queries under the assumptions that g∈C^s[0,1]. We also study the role of sparsity or compressibility of a in such query problems.", "Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the @math minimization method for identifying a sparse vector from random linear samples. Indeed, this approach succeeds with high probability when the number of samples exceeds a threshold that depends on the sparsity level; otherwise, it fails with high probability. @PARASPLIT This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems with random measurements, to demixing problems under a random incoherence model, and also to cone programs with random affine constraints. @PARASPLIT These applications depend on foundational research in conic geometry.
This paper introduces a new summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones. The main technical result demonstrates that the sequence of conic intrinsic volumes of a convex cone concentrates sharply near the statistical dimension. This fact leads to an approximate version of the conic kinematic formula that gives bounds on the probability that a randomly oriented cone shares a ray with a fixed cone.", "Different aspects of the curse of dimensionality are known to present serious challenges to various machine-learning methods and tasks. This paper explores a new aspect of the dimensionality curse, referred to as hubness, that affects the distribution of k-occurrences: the number of times a point appears among the k nearest neighbors of other points in a data set. Through theoretical and empirical analysis involving synthetic and real data sets we show that under commonly used assumptions this distribution becomes considerably skewed as dimensionality increases, causing the emergence of hubs, that is, points with very high k-occurrences which effectively represent \"popular\" nearest neighbors. We examine the origins of this phenomenon, showing that it is an inherent property of data distributions in high-dimensional vector space, discuss its interaction with dimensionality reduction, and explore its influence on a wide range of machine-learning tasks directly or indirectly based on measuring distances, belonging to supervised, semi-supervised, and unsupervised learning families." ] }
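The classical lower bounds mentioned in the related-work passage above say that approximating an order-s smooth function in dimension N to accuracy eps requires on the order of eps^(-N/s) degrees of freedom; a quick calculation shows how fast this blows up with the dimension. The rate comes from the quoted abstract; the specific values of s and eps below are illustrative assumptions.

```python
# Degrees of freedom n needed so that the approximation error n**(-s/N)
# (the classical rate for order-s smooth functions in dimension N) falls
# below a target accuracy eps: n >= eps**(-N/s).
s, eps = 2.0, 0.1  # illustrative smoothness and target accuracy
for N in (2, 5, 10, 20):
    n = eps ** (-N / s)
    print(f"N={N:2d}: n >= {n:.3g}")
```

For these values, n grows from 10 at N=2 to about 1e10 at N=20, which is the exponential blow-up that motivates the variable-reduction models the quoted abstract goes on to describe.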
1904.12191
2941057241
We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.
The connection between kernel methods and neural networks was recently revived by the work of Belkin and coauthors @cite_12 @cite_24 who pointed out intriguing similarities between some properties of modern deep learning models, and large scale kernel learning. A concrete explanation for this analogy was proposed in @cite_28 via the neural tangent model. This explanation postulates that, for large neural networks, the network weights do not change much during the training phase. Considering a random initialization @math and denoting by @math the change during the training phase, we linearize the neural network as @math . Assuming @math (which is reasonable for certain random initializations), this suggests that a two-layers neural network learns a model in @math (if both layers are trained), or simply @math (if only the first layer is trained). The analysis of @cite_27 @cite_15 @cite_29 @cite_5 establishes that indeed this linearization is accurate in a certain highly overparametrized regime, namely when @math for a certain constant @math . Empirical evidence in the same direction was presented in @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_29", "@cite_24", "@cite_27", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2950743785", "2941057241", "2899748887", "2809090039" ], "abstract": [ "At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function @math (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function @math follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.", "We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. 
samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.", "Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, networks used in practice are going wider and deeper. On the theoretical side, a long line of works has been focusing on training neural networks with one hidden layer. The theory of multi-layer networks remains largely unsettled. In this work, we prove why stochastic gradient descent (SGD) can find @math on the training objective of DNNs in @math . We only make two assumptions: the inputs are non-degenerate and the network is over-parameterized. The latter means the network width is sufficiently large: @math in @math , the number of layers and in @math , the number of samples. Our key technique is to derive that, in a sufficiently large neighborhood of the random initialization, the optimization landscape is almost-convex and semi-smooth even with ReLU activations. 
This implies an equivalence between over-parameterized neural networks and neural tangent kernel (NTK) in the finite (and polynomial) width setting. As concrete examples, starting from randomly initialized weights, we prove that SGD can attain 100% training accuracy in classification tasks, or minimize regression loss in linear convergence speed, with running time polynomial in @math . Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet).", "At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function (which maps input vectors to output vectors) follows the so-called kernel gradient associated with a new object, which we call the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping.
Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit." ] }
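The first-order expansion behind the neural tangent viewpoint described above, f(θ0 + Δ) ≈ f(θ0) + ⟨∇θ f(θ0), Δ⟩, can be checked numerically on a toy two-layer network; the tanh activation, the 1/sqrt(m) output scaling, and the size of the weight perturbation below are illustrative assumptions, not the exact setup of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, W, a):
    """Two-layer network f(x) = (1/sqrt(m)) * a . tanh(W x)."""
    m = W.shape[0]
    return a @ np.tanh(W @ x) / np.sqrt(m)

def grad_f(x, W, a):
    """Gradients of f with respect to (W, a) at the given weights."""
    m = W.shape[0]
    h = np.tanh(W @ x)
    gW = np.outer(a * (1.0 - h ** 2), x) / np.sqrt(m)  # df/dW via tanh' = 1 - tanh^2
    ga = h / np.sqrt(m)                                # df/da
    return gW, ga

d, m = 10, 1000
x = rng.standard_normal(d) / np.sqrt(d)
W0 = rng.standard_normal((m, d))
a0 = rng.standard_normal(m)

# A small weight update, standing in for the change during training.
eps = 1e-2
dW = eps * rng.standard_normal((m, d))
da = eps * rng.standard_normal(m)

gW, ga = grad_f(x, W0, a0)
linearized = f(x, W0, a0) + np.sum(gW * dW) + ga @ da  # first-order (tangent) model
actual = f(x, W0 + dW, a0 + da)
print(abs(actual - linearized))  # small: the perturbed network is close to its linearization
```

The gap between `actual` and `linearized` is second order in the perturbation, which is why, in regimes where training moves the weights only slightly, the network behaves like the linear (kernel) model in its tangent features.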
1904.12191
2941057241
We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial.
It is worth mentioning that an alternative approach to the analysis of two-layers neural networks, in the limit of a large number of neurons, was developed in @cite_19 @cite_14 @cite_20 @cite_22 @cite_9 . Unlike the neural tangent approach, this theory describes the evolution of the network weights beyond the linear regime.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_9", "@cite_19", "@cite_20" ], "mid": [ "2913010492", "2276412021", "2899748887", "2942052807" ], "abstract": [ "We consider learning two layer neural networks using stochastic gradient descent. The mean-field description of this learning dynamics approximates the evolution of the network weights by an evolution in the space of probability distributions in @math (where @math is the number of parameters associated to each neuron). This evolution can be defined through a partial differential equation or, equivalently, as the gradient flow in the Wasserstein space of probability distributions. Earlier work shows that (under some regularity assumptions), the mean field description is accurate as soon as the number of hidden units is much larger than the dimension @math . In this paper we establish stronger and more general approximation guarantees. First of all, we show that the number of hidden units only needs to be larger than a quantity dependent on the regularity properties of the data, and independent of the dimensions. Next, we generalize this analysis to the case of unbounded activation functions, which was not covered by earlier bounds. We extend our results to noisy stochastic gradient descent. Finally, we show that kernel ridge regression can be recovered as a special limit of the mean field analysis.", "This paper discusses a new method to perform propagation over a (two-layer, feed-forward) Neural Network embedded in a Constraint Programming model. The method is meant to be employed in Empirical Model Learning, a technique designed to enable optimal decision making over systems that cannot be modeled via conventional declarative means. The key step in Empirical Model Learning is to embed a Machine Learning model into a combinatorial model. 
It has been shown that Neural Networks can be embedded in a Constraint Programming model by simply encoding each neuron as a global constraint, which is then propagated individually. Unfortunately, this decomposition approach may lead to weak bounds. To overcome such limitation, we propose a new network-level propagator based on a non-linear Lagrangian relaxation that is solved with a subgradient algorithm. The method proved capable of dramatically reducing the search tree size on a thermal-aware dispatching problem on multicore CPUs. The overhead for optimizing the Lagrangian multipliers is kept within a reasonable level via a few simple techniques. This paper is an extended version of [27], featuring an improved structure, a new filtering technique for the network inputs, a set of overhead reduction techniques, and a thorough experimentation.", "Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, networks used in practice are going wider and deeper. On the theoretical side, a long line of works has been focusing on training neural networks with one hidden layer. The theory of multi-layer networks remains largely unsettled. In this work, we prove why stochastic gradient descent (SGD) can find @math on the training objective of DNNs in @math . We only make two assumptions: the inputs are non-degenerate and the network is over-parameterized. The latter means the network width is sufficiently large: @math in @math , the number of layers and in @math , the number of samples. Our key technique is to derive that, in a sufficiently large neighborhood of the random initialization, the optimization landscape is almost-convex and semi-smooth even with ReLU activations. This implies an equivalence between over-parameterized neural networks and neural tangent kernel (NTK) in the finite (and polynomial) width setting.
As concrete examples, starting from randomly initialized weights, we prove that SGD can attain 100% training accuracy in classification tasks, or minimize regression loss in linear convergence speed, with running time polynomial in @math . Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet).", "How well does a classic deep net architecture like AlexNet or VGG19 classify on a standard dataset such as CIFAR-10 when its \"width\" --- namely, number of channels in convolutional layers, and number of nodes in fully-connected internal layers --- is allowed to increase to infinity? Such questions have come to the forefront in the quest to theoretically understand deep learning and its mysteries about optimization and generalization. They also connect deep learning to notions such as Gaussian processes and kernels. A recent paper [, 2018] introduced the Neural Tangent Kernel (NTK) which captures the behavior of fully-connected deep nets in the infinite width limit trained by gradient descent; this object was implicit in some other recent papers. A subsequent paper [, 2019] gave heuristic Monte Carlo methods to estimate the NTK and its extension, Convolutional Neural Tangent Kernel (CNTK) and used this to try to understand the limiting behavior on datasets like CIFAR-10. The current paper gives the first efficient exact algorithm (based upon dynamic programming) for computing CNTK as well as an efficient GPU implementation of this algorithm. This results in a significant new benchmark for performance of a pure kernel-based method on CIFAR-10, being 10% higher than the methods reported in [, 2019], and only 5% lower than the performance of the corresponding finite deep net architecture (once batch normalization etc. are turned off).
We give the first non-asymptotic proof showing that a fully-trained sufficiently wide net is indeed equivalent to the kernel regression predictor using NTK. Our experiments also demonstrate that earlier Monte Carlo approximation can degrade the performance significantly, thus highlighting the power of our exact kernel computation, which we have applied even to the full CIFAR-10 dataset and 20-layer nets." ] }
1904.11968
2942322914
The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks.
introduce CODEnn @cite_7 , which creates vector representations of Java source code by jointly embedding code with a natural language description of the method. Their architecture uses recurrent neural networks (RNN) on sequences of API calls and on tokens from the method name. It then fuses this with the output of a multi-layer perceptron which takes inputs from the non-API tokens in the code. By jointly embedding the code with natural language, the learned vectors are tailored to summarize code at a human-level description, which may not always be accurate (given that code often evolves independently from comments), and is limited by the ability of natural languages to describe specifications. Additionally, natural languages (and consequently, code comments) are context-sensitive, so the comment may be missing crucial information about the semantics of the code.
{ "cite_N": [ "@cite_7" ], "mid": [ "2794601162", "2951712437", "2402619042", "2546915671" ], "abstract": [ "To implement a program functionality, developers can reuse previously written code snippets by searching through a large-scale codebase. Over the years, many code search tools have been proposed to help developers. The existing approaches often treat source code as textual documents and utilize information retrieval models to retrieve relevant code snippets that match a given query. These approaches mainly rely on the textual similarity between source code and natural language query. They lack a deep understanding of the semantics of queries and source code. In this paper, we propose a novel deep neural network named CODEnn (Code-Description Embedding Neural Network). Instead of matching text similarity, CODEnn jointly embeds code snippets and natural language descriptions into a high-dimensional vector space, in such a way that code snippet and its corresponding description have similar vectors. Using the unified vector representation, code snippets related to a natural language query can be retrieved according to their vectors. Semantically related words can also be recognized and irrelevant noisy keywords in queries can be handled. As a proof-of-concept application, we implement a code search tool named D eep CS using the proposed CODEnn model. We empirically evaluate D eep CS on a large scale codebase collected from GitHub. The experimental results show that our approach can effectively retrieve relevant code snippets and outperforms previous techniques.", "Developers often wonder how to implement a certain functionality (e.g., how to parse XML files) using APIs. Obtaining an API usage sequence based on an API-related natural language query is very helpful in this regard. Given a query, existing approaches utilize information retrieval models to search for matching API sequences. 
These approaches treat queries and APIs as bag-of-words (i.e., keyword matching or word-to-word alignment) and lack a deep understanding of the semantics of the query. We propose DeepAPI, a deep learning based approach to generate API usage sequences for a given natural language query. Instead of a bags-of-words assumption, it learns the sequence of words in a query and the sequence of associated APIs. DeepAPI adapts a neural language model named RNN Encoder-Decoder. It encodes a word sequence (user query) into a fixed-length context vector, and generates an API sequence based on the context vector. We also augment the RNN Encoder-Decoder by considering the importance of individual APIs. We empirically evaluate our approach with more than 7 million annotated code snippets collected from GitHub. The results show that our approach generates largely accurate API sequences and outperforms the related approaches.", "Developers often wonder how to implement a certain functionality (e.g., how to parse XML files) using APIs. Obtaining an API usage sequence based on an API-related natural language query is very helpful in this regard. Given a query, existing approaches utilize information retrieval models to search for matching API sequences. These approaches treat queries and APIs as bags-of-words and lack a deep understanding of the semantics of the query. We propose DeepAPI, a deep learning based approach to generate API usage sequences for a given natural language query. Instead of a bag-of-words assumption, it learns the sequence of words in a query and the sequence of associated APIs. DeepAPI adapts a neural language model named RNN Encoder-Decoder. It encodes a word sequence (user query) into a fixed-length context vector, and generates an API sequence based on the context vector. We also augment the RNN Encoder-Decoder by considering the importance of individual APIs. 
We empirically evaluate our approach with more than 7 million annotated code snippets collected from GitHub. The results show that our approach generates largely accurate API sequences and outperforms the related approaches.", "Recurrent neural networks (RNNs) have achieved state-of-the-art performances in many natural language processing tasks, such as language modeling and machine translation. However, when the vocabulary is large, the RNN model will become very big (e.g., possibly beyond the memory capacity of a GPU device) and its training will become very inefficient. In this work, we propose a novel technique to tackle this challenge. The key idea is to use 2-Component (2C) shared embedding for word representations. We allocate every word in the vocabulary into a table, each row of which is associated with a vector, and each column associated with another vector. Depending on its position in the table, a word is jointly represented by two components: a row vector and a column vector. Since the words in the same row share the row vector and the words in the same column share the column vector, we only need @math vectors to represent a vocabulary of @math unique words, which are far less than the @math vectors required by existing approaches. Based on the 2-Component shared embedding, we design a new RNN algorithm and evaluate it using the language modeling task on several benchmark datasets. The results show that our algorithm significantly reduces the model size and speeds up the training process, without sacrifice of accuracy (it achieves similar, if not better, perplexity as compared to state-of-the-art language models). Remarkably, on the One-Billion-Word benchmark Dataset, our algorithm achieves comparable perplexity to previous language models, whilst reducing the model size by a factor of 40-100, and speeding up the training process by a factor of 2. We name our proposed algorithm to reflect its very small model size and very high training speed." ] }
1904.11968
2942322914
The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks.
@cite_0 convert the AST into a binary tree and then use an autoencoder to learn an embedding model for each node. This learned embedding model is applied recursively to the tree to obtain a final embedding for the root node. Because the model is an autoencoder, it may fail to recognize the semantic equivalence between different implementations of the same algorithm, e.g., that a "for" loop and a "while" loop are equivalent.
{ "cite_N": [ "@cite_0" ], "mid": [ "2798657914", "2172174689", "2796167946", "2952010730" ], "abstract": [ "Sequence-to-sequence attention-based models have recently shown very promising results on automatic speech recognition (ASR) tasks, which integrate an acoustic, pronunciation and language model into a single neural network. In these models, the Transformer, a new sequence-to-sequence attention-based model relying entirely on self-attention without using RNNs or convolutions, achieves a new single-model state-of-the-art BLEU on neural machine translation (NMT) tasks. Since the outstanding performance of the Transformer, we extend it to speech and concentrate on it as the basic architecture of sequence-to-sequence attention-based model on Mandarin Chinese ASR tasks. Furthermore, we investigate a comparison between syllable based model and context-independent phoneme (CI-phoneme) based model with the Transformer in Mandarin Chinese. Additionally, a greedy cascading decoder with the Transformer is proposed for mapping CI-phoneme sequences and syllable sequences into word sequences. Experiments on HKUST datasets demonstrate that syllable based model with the Transformer performs better than CI-phoneme based counterpart, and achieves a character error rate (CER) of , which is competitive to the state-of-the-art CER of @math by the joint CTC-attention based encoder-decoder network.", "We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector. Given an input, the optimal code minimizes the distance between the output of the decoder and the input patch while being as similar as possible to the encoder output. Learning proceeds in a two-phase EM-like fashion: (1) compute the minimum-energy code vector, (2) adjust the parameters of the encoder and decoder so as to decrease the energy. 
The model produces \"stroke detectors\" when trained on handwritten numerals, and Gabor-like filters when trained on natural image patches. Inference and learning are very fast, requiring no preprocessing, and no expensive sampling. Using the proposed unsupervised method to initialize the first layer of a convolutional network, we achieved an error rate slightly lower than the best reported result on the MNIST dataset. Finally, an extension of the method is described to learn topographical filter maps.", "Celebrated Seq2Seq and its fruitful variants are powerful models to achieve excellent performance on the tasks that map sequences to sequences. However, there are many machine learning tasks with inputs naturally represented in a form of graphs, which imposes significant challenges to existing Seq2Seq models for lossless conversion from its graph form to the sequence. In this work, we present a general end-to-end approach to map the input graph to a sequence of vectors, and then use another attention-based LSTM to decode the target sequence from these vectors. Specifically, to address inevitable information loss for data conversion, we introduce a novel graph-to-sequence neural network model that follows the encoder-decoder architecture. Our method first uses an improved graph-based neural network to generate the node and graph embeddings by a novel aggregation strategy to incorporate the edge direction information into the node embeddings. We also propose an attention based mechanism that aligns node embeddings and decoding sequence to better cope with large graphs. Experimental results on bAbI task, Shortest Path Task, and Natural Language Generation Task demonstrate that our model achieves the state-of-the-art performance and significantly outperforms other baselines. 
We also show that with the proposed aggregation strategy, our proposed model is able to quickly converge to good performance.", "In this paper we propose a novel data augmentation method for attention-based end-to-end automatic speech recognition (E2E-ASR), utilizing a large amount of text which is not paired with speech signals. Inspired by the back-translation technique proposed in the field of machine translation, we build a neural text-to-encoder model which predicts a sequence of hidden states extracted by a pre-trained E2E-ASR encoder from a sequence of characters. By using hidden states as a target instead of acoustic features, it is possible to achieve faster attention learning and reduce computational cost, thanks to sub-sampling in E2E-ASR encoder, also the use of the hidden states can avoid to model speaker dependencies unlike acoustic features. After training, the text-to-encoder model generates the hidden states from a large amount of unpaired text, then E2E-ASR decoder is retrained using the generated hidden states as additional training data. Experimental evaluation using LibriSpeech dataset demonstrates that our proposed method achieves improvement of ASR performance and reduces the number of unknown words without the need for paired data." ] }
1904.11968
2942322914
The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks.
The work of @cite_4 encodes each node of an AST using a weighted mixture of left and right weight matrices applied to that node's children. A tree-based convolutional neural network (CNN) is then applied over the tree to encode the AST. The only way semantic equivalents are learned in this model is by recognizing that certain nodes have the same children. This assumption is not necessarily correct, and as such, the model may not fully capture the semantic meaning of code.
{ "cite_N": [ "@cite_4" ], "mid": [ "2954346764", "2341555367", "2963998559", "2756815061" ], "abstract": [ "This paper proposes a learning strategy that extracts object-part concepts from a pre-trained convolutional neural network (CNN), in an attempt to 1) explore explicit semantics hidden in CNN units and 2) gradually grow a semantically interpretable graphical model on the pre-trained CNN for hierarchical object understanding. Given part annotations on very few (e.g., 3-12) objects, our method mines certain latent patterns from the pre-trained CNN and associates them with different semantic parts. We use a four-layer And-Or graph to organize the mined latent patterns, so as to clarify their internal semantic hierarchy. Our method is guided by a small number of part annotations, and it achieves superior performance (about 13 -107 improvement) in part center prediction on the PASCAL VOC and ImageNet datasets.", "Recent approaches for instance-aware semantic labeling have augmented convolutional neural networks (CNNs) with complex multi-task architectures or computationally expensive graphical models. We present a method that leverages a fully convolutional network (FCN) to predict semantic labels, depth and an instance-based encoding using each pixel’s direction towards its corresponding instance center. Subsequently, we apply low-level computer vision techniques to generate state-of-the-art instance segmentation on the street scene datasets KITTI and Cityscapes. Our approach outperforms existing works by a large margin and can additionally predict absolute distances of individual instances from a monocular image as well as a pixel-level semantic labeling.", "During the last half decade, convolutional neural networks (CNNs) have triumphed over semantic segmentation, which is a core task of various emerging industrial applications such as autonomous driving and medical imaging. 
However, to train CNNs requires a huge amount of data, which is difficult to collect and laborious to annotate. Recent advances in computer graphics make it possible to train CNN models on photo-realistic synthetic data with computer-generated annotations. Despite this, the domain mismatch between the real images and the synthetic data significantly decreases the models’ performance. Hence we propose a curriculum-style learning approach to minimize the domain gap in semantic segmentation. The curriculum domain adaptation solves easy tasks first in order to infer some necessary properties about the target domain; in particular, the first task is to learn global label distributions over images and local distributions over landmark superpixels. These are easy to estimate because images of urban traffic scenes have strong idiosyncrasies (e.g., the size and spatial relations of buildings, streets, cars, etc.). We then train the segmentation network in such a way that the network predictions in the target domain follow those inferred properties. In experiments, our method significantly outperforms the baselines as well as the only known existing approach to the same problem.", "Convolutional Neural Network (CNN) image classifiers are traditionally designed to have sequential convolutional layers with a single output layer. This is based on the assumption that all target classes should be treated equally and exclusively. However, some classes can be more difficult to distinguish than others, and classes may be organized in a hierarchy of categories. At the same time, a CNN is designed to learn internal representations that abstract from the input data based on its hierarchical layered structure. So it is natural to ask if an inverse of this idea can be applied to learn a model that can predict over a classification hierarchy using multiple output layers in decreasing order of class abstraction. 
In this paper, we introduce a variant of the traditional CNN model named the Branch Convolutional Neural Network (B-CNN). A B-CNN model outputs multiple predictions ordered from coarse to fine along the concatenated convolutional layers corresponding to the hierarchical structure of the target classes, which can be regarded as a form of prior knowledge on the output. To learn with B-CNNs a novel training strategy, named the Branch Training strategy (BT-strategy), is introduced which balances the strictness of the prior with the freedom to adjust parameters on the output layers to minimize the loss. In this way we show that CNN based models can be forced to learn successively coarse to fine concepts in the internal layers at the output stage, and that hierarchical prior knowledge can be adopted to boost CNN models' classification performance. Our models are evaluated to show that the B-CNN extensions improve over the corresponding baseline CNN on the benchmark datasets MNIST, CIFAR-10 and CIFAR-100." ] }
1904.11968
2942322914
The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks.
@cite_1 used a long short-term memory (LSTM) cell in a tree structure applied to an AST to classify defects. The model is trained in an unsupervised manner to predict a node from its children. @cite_13 learn code vector embeddings by evaluating paths in the AST and evaluate the resulting vectors by predicting method names from code snippets. @cite_15 focus on identifying clones that fall between Type III and Type IV using a deep neural network; their model is limited by taking only 24 method summary metrics as input, and so cannot deeply evaluate the code itself.
{ "cite_N": [ "@cite_13", "@cite_15", "@cite_1" ], "mid": [ "2259512711", "2952453038", "1952243512", "2964150020" ], "abstract": [ "Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have been successfully applied to a variety of sequence modeling tasks. In this paper we develop Tree Long Short-Term Memory (TreeLSTM), a neural network model based on LSTM, which is designed to predict a tree rather than a linear sequence. TreeLSTM defines the probability of a sentence by estimating the generation probability of its dependency tree. At each time step, a node is generated based on the representation of the generated sub-tree. We further enhance the modeling power of TreeLSTM by explicitly representing the correlations between left and right dependents. Application of our model to the MSR sentence completion challenge achieves results beyond the current state of the art. We also report results on dependency parsing reranking achieving competitive performance.", "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. 
We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "We introduce a novel schema for sequence to sequence learning with a Deep Q-Network (DQN), which decodes the output sequence iteratively. The aim here is to enable the decoder to first tackle easier portions of the sequences, and then turn to cope with difficult parts. Specifically, in each iteration, an encoder-decoder Long Short-Term Memory (LSTM) network is employed to, from the input sequence, automatically create features to represent the internal states of and formulate a list of potential actions for the DQN. Take rephrasing a natural sentence as an example. This list can contain ranked potential words. Next, the DQN learns to make decision on which action (e.g., word) will be selected from the list to modify the current decoded sequence. The newly modified output sequence is subsequently used as the input to the DQN for the next decoding iteration. In each iteration, we also bias the reinforcement learning's attention to explore sequence portions which are previously difficult to be decoded. For evaluation, the proposed strategy was trained to decode ten thousands natural sentences. 
Our experiments indicate that, when compared to a left-to-right greedy beam search LSTM decoder, the proposed method performed competitively well when decoding sentences from the training set, but significantly outperformed the baseline when decoding unseen sentences, in terms of BLEU score obtained.", "We present a neural model for representing snippets of code as continuous distributed vectors (\"code embeddings\"). The main idea is to represent a code snippet as a single fixed-length code vector, which can be used to predict semantic properties of the snippet. To this end, code is first decomposed to a collection of paths in its abstract syntax tree. Then, the network learns the atomic representation of each path while simultaneously learning how to aggregate a set of them. We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body. We evaluate our approach by training a model on a dataset of 12M methods. We show that code vectors trained on this dataset can predict method names from files that were unobserved during training. Furthermore, we show that our model learns useful method name vectors that capture semantic similarities, combinations, and analogies. A comparison of our approach to previous techniques over the same dataset shows an improvement of more than 75 , making it the first to successfully predict method names based on a large, cross-project corpus. Our trained model, visualizations and vector similarities are available as an interactive online demo at http://code2vec.org. The code, data and trained models are available at https://github.com/tech-srl/code2vec." ] }
1904.11968
2942322914
The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks.
In Deep Code Comment Generation, @cite_11 introduce structure-based traversals of an AST to feed into a sequence-to-sequence architecture, and train the model to translate source code into comments. The trained model is then used to generate comments on new source code. The encoded vectors are designed to initialize a decoding comment-generation phase rather than to be used directly, so they are not necessarily smooth, nor suitable for interpretation as semantic meaning. Nevertheless, we draw inspiration from their work for our model.
{ "cite_N": [ "@cite_11" ], "mid": [ "2964150020", "2949734169", "2740711318", "2963756346" ], "abstract": [ "We present a neural model for representing snippets of code as continuous distributed vectors (\"code embeddings\"). The main idea is to represent a code snippet as a single fixed-length code vector, which can be used to predict semantic properties of the snippet. To this end, code is first decomposed to a collection of paths in its abstract syntax tree. Then, the network learns the atomic representation of each path while simultaneously learning how to aggregate a set of them. We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body. We evaluate our approach by training a model on a dataset of 12M methods. We show that code vectors trained on this dataset can predict method names from files that were unobserved during training. Furthermore, we show that our model learns useful method name vectors that capture semantic similarities, combinations, and analogies. A comparison of our approach to previous techniques over the same dataset shows an improvement of more than 75 , making it the first to successfully predict method names based on a large, cross-project corpus. Our trained model, visualizations and vector similarities are available as an interactive online demo at http://code2vec.org. The code, data and trained models are available at https://github.com/tech-srl/code2vec.", "In models to generate program source code from natural language, representing this code in a tree structure has been a common approach. However, existing methods often fail to generate complex code correctly due to a lack of ability to memorize large and complex structures. We introduce ReCode, a method based on subtree retrieval that makes it possible to explicitly reference existing code examples within a neural code generation model. 
First, we retrieve sentences that are similar to input sentences using a dynamic-programming-based sentence similarity scoring method. Next, we extract n-grams of action sequences that build the associated abstract syntax tree. Finally, we increase the probability of actions that cause the retrieved n-gram action subtree to be in the predicted code. We show that our approach improves the performance on two code generation tasks by up to +2.6 BLEU.", "Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.", "Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. 
We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art." ] }
1904.12201
2942159993
As an intuitive way of expressing emotion, animated Graphical Interchange Format (GIF) images have been widely used on social media. Most previous studies on automated GIF emotion recognition fail to effectively utilize GIFs' unique properties, which potentially limits recognition performance. In this study, we demonstrate the importance of human-related information in GIFs and conduct human-centered GIF emotion recognition with a proposed Keypoint Attended Visual Attention Network (KAVAN). The framework consists of a facial attention module and a hierarchical segment temporal module. The facial attention module exploits the strong relationship between GIF contents and human characters, and extracts frame-level visual features with a focus on human faces. The Hierarchical Segment LSTM (HS-LSTM) module is then proposed to better learn global GIF representations. Our proposed framework outperforms the state-of-the-art on the MIT GIFGIF dataset. Furthermore, the facial attention module provides reliable facial region mask predictions, which improves the model's interpretability.
GIF Analysis. Bakhshi @cite_13 show that animated GIFs are more engaging than other social media content types by studying over 3.9 million posts on Tumblr. Gygli @cite_12 propose to automatically generate animated GIFs from videos, using 100K user-generated GIFs and their corresponding video sources. MIT's GIFGIF platform is frequently used for GIF emotion recognition studies. Jou @cite_24 recognize GIF emotions using color histograms, facial expressions, image-based aesthetics, and visual sentiment. Chen @cite_16 adopt 3D ConvNets to further improve the performance. The GIFGIF+ dataset @cite_2 is a larger GIF emotion recognition dataset; at the time of this study, GIFGIF+ had not been released.
{ "cite_N": [ "@cite_24", "@cite_2", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "2786355254", "2017411072", "2578893299", "2406020846" ], "abstract": [ "Animated GIFs are widely used on the Internet to express emotions, but their automatic analysis is largely unexplored. Existing GIF datasets with emotion labels are too small for training contemporary machine learning models, so we propose a semi-automatic method to collect emotional animated GIFs from the Internet with the least amount of human labor. The method trains weak emotion recognizers on labeled data, and uses them to sort a large quantity of unlabeled GIFs. We found that by exploiting the clustered structure of emotions, the number of GIFs a labeler needs to check can be greatly reduced. Using the proposed method, a dataset called GIFGIF+ with 23,544 GIFs over 17 emotions was created, which provides a promising platform for affective computing research.", "Animated GIFs are everywhere on the Web. Our work focuses on the computational prediction of emotions perceived by viewers after they are shown animated GIF images. We evaluate our results on a dataset of over 3,800 animated GIFs gathered from MIT's GIFGIF platform, each with scores for 17 discrete emotions aggregated from over 2.5M user annotations - the first computational evaluation of its kind for content-based prediction on animated GIFs to our knowledge. In addition, we advocate a conceptual paradigm in emotion prediction that shows delineating distinct types of emotion is important and is useful to be concrete about the emotion target. One of our objectives is to systematically compare different types of content features for emotion prediction, including low-level, aesthetics, semantic and face features. 
We also formulate a multi-task regression problem to evaluate whether viewer perceived emotion prediction can benefit from jointly learning across emotion classes compared to disjoint, independent learning.", "Animated GIFs are widely used on the Internet to express emotions, but their automatic analysis is largely unexplored before. To help with the search and recommendation of GIFs, we aim to predict their emotions perceived by humans based on their contents. Since previous solutions to this problem only utilize image-based features and lose all the motion information, we propose to use 3D convolutional neural networks (CNNs) to extract spatiotemporal features from GIFs. We evaluate our methodology on a crowd-sourcing platform called GIFGIF with more than 6000 animated GIFs, and achieve a better accuracy than any previous approach in predicting crowd-sourced intensity scores of 17 emotions. It is also found that our trained model can be used to distinguish and cluster emotions in terms of valence and risk perception.", "Animated GIFs have been around since 1987 and recently gained more popularity on social networking sites. Tumblr, a large social networking and micro blogging platform, is a popular venue to share animated GIFs. Tumblr users follow blogs, generating a feed of posts, and choose to \"like\" or to \"reblog\" favored posts. In this paper, we use these actions as signals to analyze the engagement of over 3.9 million posts, and conclude that animated GIFs are significantly more engaging than other kinds of media. We follow this finding with deeper visual analysis of nearly 100k animated GIFs and pair our results with interviews with 13 Tumblr users to find out what makes animated GIFs engaging. We found that the animation, lack of sound, immediacy of consumption, low bandwidth and minimal time demands, the storytelling capabilities and utility for expressing emotions were significant factors in making GIFs the most engaging content on Tumblr. 
We also found that engaging GIFs contained faces and had higher motion energy, uniformity, resolution and frame rate. Our findings connect to media theories and have implications in design of effective content dashboards, video summarization tools and ranking algorithms to enhance engagement." ] }
1904.12201
2942159993
As an intuitive way of expressing emotion, animated Graphical Interchange Format (GIF) images have been widely used on social media. Most previous studies on automated GIF emotion recognition fail to effectively utilize GIF's unique properties, and this potentially limits the recognition performance. In this study, we demonstrate the importance of human-related information in GIFs and conduct human-centered GIF emotion recognition with a proposed Keypoint Attended Visual Attention Network (KAVAN). The framework consists of a facial attention module and a hierarchical segment temporal module. The facial attention module exploits the strong relationship between GIF contents and human characters, and extracts frame-level visual features with a focus on human faces. The Hierarchical Segment LSTM (HS-LSTM) module is then proposed to better learn global GIF representations. Our proposed framework outperforms the state-of-the-art on the MIT GIFGIF dataset. Furthermore, the facial attention module provides reliable facial region mask predictions, which improves the model's interpretability.
Emotion Recognition. Emotion recognition @cite_0 @cite_6 has been an interesting topic for decades. On a large-scale dataset @cite_23 , Rao @cite_8 propose multi-level deep representations for emotion recognition. Multi-modal feature fusion @cite_22 has also proved effective. Instead of modeling emotion recognition as a classification task @cite_8 @cite_23 , Zhao @cite_22 propose to learn emotion distributions, which alleviates the perception uncertainty problem that different people in different contexts may perceive different emotions from the same content. Regressing emotion intensity scores @cite_24 is another effective approach. Han @cite_14 propose a soft prediction framework for the perception uncertainty problem.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_8", "@cite_6", "@cite_0", "@cite_24", "@cite_23" ], "mid": [ "2765354427", "2533262878", "2548264631", "2149940198" ], "abstract": [ "Current image emotion recognition works mainly classified the images into one dominant emotion category, or regressed the images with average dimension values by assuming that the emotions perceived among different viewers highly accord with each other. However, due to the influence of various personal and situational factors, such as culture background and social interactions, different viewers may react totally different from the emotional perspective to the same image. In this paper, we propose to formulate the image emotion recognition task as a probability distribution learning problem. Motivated by the fact that image emotions can be conveyed through different visual features, such as aesthetics and semantics, we present a novel framework by fusing multi-modal features to tackle this problem. In detail, weighted multi-modal conditional probability neural network (WMMCPNN) is designed as the learning model to associate the visual features with emotion probabilities. By jointly exploring the complementarity and learning the optimal combination coefficients of different modality features, WMMCPNN could effectively utilize the representation ability of each uni-modal feature. We conduct extensive experiments on three publicly available benchmarks and the results demonstrate that the proposed method significantly outperforms the state-of-the-art approaches for emotion distribution prediction.", "In this paper we address the sentence-level multi-modal emotion recognition problem. We formulate the emotion recognition task as a multi-category classification problem and propose an innovative solution based on the automatically generated ensemble of trees with binary support vector machines (SVM) classifiers in the tree nodes. We demonstrate the efficacy of our approach by performing four-way (anger, happiness, sadness, neutral) and five-way (including excitement) emotion recognition on the University of Southern California's Interactive Emotional Motion Capture (USC-IEMOCAP) corpus using combinations of acoustic features, lexical features extracted from automatic speech recognition (ASR) output and visual features extracted from facial markers traced by a motion capture system. The experiments show that the proposed ensemble of trees of binary SVM classifiers outperforms classical multi-way SVM classification with one-vs-one voting scheme and achieves state-of-the-art results for all feature combinations.", "In the past three years, Emotion Recognition in the Wild (EmotiW) Grand Challenge has drawn more and more attention due to its huge potential applications. In the fourth challenge, aimed at the task of video based emotion recognition, we propose a multi-clue emotion fusion (MCEF) framework by modeling human emotion from three mutually complementary sources, facial appearance texture, facial action, and audio. To extract high-level emotion features from sequential face images, we employ a CNN-RNN architecture, where face image from each frame is first fed into the fine-tuned VGG-Face network to extract face feature, and then the features of all frames are sequentially traversed in a bidirectional RNN so as to capture dynamic changes of facial textures. To attain more accurate facial actions, a facial landmark trajectory model is proposed to explicitly learn emotion variations of facial components. Further, audio signals are also modeled in a CNN framework by extracting low-level energy features from segmented audio clips and then stacking them as an image-like map. Finally, we fuse the results generated from three clues to boost the performance of emotion recognition. Our proposed MCEF achieves an overall accuracy of 56.66% with a large improvement of 16.19% with respect to the baseline.", "In this paper, we apply a context-sensitive technique for multimodal emotion recognition based on feature-level fusion of acoustic and visual cues. We use bidirectional Long Short-Term Memory (BLSTM) networks which, unlike most other emotion recognition approaches, exploit long-range contextual information for modeling the evolution of emotion within a conversation. We focus on recognizing dimensional emotional labels, which enables us to classify both prototypical and nonprototypical emotional expressions contained in a large audiovisual database. Subject-independent experiments on various classification tasks reveal that the BLSTM network approach generally prevails over standard classification techniques such as Hidden Markov Models or Support Vector Machines, and achieves F1-measures of the order of 72%, 65%, and 55% for the discrimination of three clusters in emotional space and the distinction between three levels of valence and activation, respectively." ] }
1904.12200
2941736865
Magnetic resonance imaging (MRI) is being increasingly utilized to assess, diagnose, and plan treatment for a variety of diseases. The ability to visualize tissue in varied contrasts in the form of MR pulse sequences in a single scan provides valuable insights to physicians, as well as enabling automated systems performing downstream analysis. However, many issues like prohibitive scan time, image corruption, different acquisition protocols, or allergies to certain contrast materials may hinder the process of acquiring multiple sequences for a patient. This poses challenges to both physicians and automated systems, since complementary information provided by the missing sequences is lost. In this paper, we propose a variant of the generative adversarial network (GAN) capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan. The proposed network is designed as a multi-input, multi-output network which combines information from all the available pulse sequences, implicitly infers which sequences are missing, and synthesizes the missing ones in a single forward pass. We demonstrate and validate our method on two brain MRI datasets, each with four sequences, and show the applicability of the proposed method in simultaneously synthesizing all missing sequences in any possible scenario where either one, two, or three of the four sequences may be missing. We compare our approach with competing unimodal and multi-modal methods, and show that we outperform both quantitatively and qualitatively.
Though all the methods discussed above propose a multi-input method, none of them has been designed to synthesize multiple missing sequences (multi-output) in a single pass. Three of the methods, @cite_50 , @cite_52 , and @cite_2 , synthesize only one sequence in the presence of a varying number of input sequences, while @cite_28 only synthesizes MRA using information from multiple inputs. Although the work presented in @cite_28 is close to our proposed method, theirs is not a truly multimodal network, since there is no empirical evidence that their method will generalize to multiple scenarios. To the best of our knowledge, we are the first to propose a method that is capable of synthesizing multiple missing sequences using a combination of various input sequences, and to demonstrate the method on the complete set of scenarios (i.e., all combinations of missing sequences).
{ "cite_N": [ "@cite_28", "@cite_52", "@cite_50", "@cite_2" ], "mid": [ "2884442510", "2030927653", "2953204310", "2101432564" ], "abstract": [ "Accurate synthesis of a full 3D MR image containing tumours from available MRI (e.g. to replace an image that is currently unavailable or corrupted) would provide a clinician as well as downstream inference methods with important complementary information for disease analysis. In this paper, we present an end-to-end 3D convolution neural network that takes a set of acquired MR image sequences (e.g. T1, T2, T1ce) as input and concurrently performs (1) regression of the missing full resolution 3D MRI (e.g. FLAIR) and (2) segmentation of the tumour into subtypes (e.g. enhancement, core). The hypothesis is that this would focus the network to perform accurate synthesis in the area of the tumour. Experiments on the BraTS 2015 and 2017 datasets [1] show that: (1) the proposed method gives better performance than state-of-the art methods in terms of established global evaluation metrics (e.g. PSNR), (2) replacing real MR volumes with the synthesized MRI does not lead to significant degradation in tumour and sub-structure segmentation accuracy. The system further provides uncertainty estimates based on Monte Carlo (MC) dropout [11] for the synthesized volume at each voxel, permitting quantification of the system’s confidence in the output at each location.", "In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider 3-D and 4-D (color) video data completion and de-noising. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD)[11]. Based on t-SVD, the notion of multilinear rank and a related tensor nuclear norm was proposed in [11] to characterize informational and structural complexity of multilinear data. We first show that videos with linear camera motion can be represented more efficiently using t-SVD compared to the approaches based on vectorizing or flattening of the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries. Application of the proposed algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data from sparse random corruptions. We show superior performance of our method compared to the matrix robust PCA adapted to this setting as proposed in [4].", "In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider 3-D and 4-D (color) video data completion and de-noising. We exploit the recently proposed tensor-Singular Value Decomposition (t-SVD)[11]. Based on t-SVD, the notion of multilinear rank and a related tensor nuclear norm was proposed in [11] to characterize informational and structural complexity of multilinear data. We first show that videos with linear camera motion can be represented more efficiently using t-SVD compared to the approaches based on vectorizing or flattening of the tensors. Since efficiency in representation implies efficiency in recovery, we outline a tensor nuclear norm penalized algorithm for video completion from missing entries. Application of the proposed algorithm for video recovery from missing entries is shown to yield a superior performance over existing methods. We also consider the problem of tensor robust Principal Component Analysis (PCA) for de-noising 3-D video data from sparse random corruptions. We show superior performance of our method compared to the matrix robust PCA adapted to this setting as proposed in [4].", "This paper describes a novel technique for the synthesis of imperative programs. Automated program synthesis has the potential to make programming and the design of systems easier by allowing programs to be specified at a higher-level than executable code. In our approach, which we call proof-theoretic synthesis, the user provides an input-output functional specification, a description of the atomic operations in the programming language, and a specification of the synthesized program's looping structure, allowed stack space, and bound on usage of certain operations. Our technique synthesizes a program, if there exists one, that meets the input-output specification and uses only the given resources. The insight behind our approach is to interpret program synthesis as generalized program verification, which allows us to bring verification tools and techniques to program synthesis. Our synthesis algorithm works by creating a program with unknown statements, guards, inductive invariants, and ranking functions. It then generates constraints that relate the unknowns and enforces three kinds of requirements: partial correctness, loop termination, and well-formedness conditions on program guards. We formalize the requirements that program verification tools must meet to solve these constraints and use tools from prior work as our synthesizers. We demonstrate the feasibility of the proposed approach by synthesizing programs in three different domains: arithmetic, sorting, and dynamic programming. Using verification tools that we previously built in the VS3 project we are able to synthesize programs for complicated arithmetic algorithms including Strassen's matrix multiplication and Bresenham's line drawing; several sorting algorithms; and several dynamic programming algorithms. For these programs, the median time for synthesis is 14 seconds, and the ratio of synthesis to verification time ranges between 1x and 92x (with a median of 7x), illustrating the potential of the approach." ] }
1904.12200
2941736865
Magnetic resonance imaging (MRI) is being increasingly utilized to assess, diagnose, and plan treatment for a variety of diseases. The ability to visualize tissue in varied contrasts in the form of MR pulse sequences in a single scan provides valuable insights to physicians, as well as enabling automated systems performing downstream analysis. However, many issues like prohibitive scan time, image corruption, different acquisition protocols, or allergies to certain contrast materials may hinder the process of acquiring multiple sequences for a patient. This poses challenges to both physicians and automated systems, since complementary information provided by the missing sequences is lost. In this paper, we propose a variant of the generative adversarial network (GAN) capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan. The proposed network is designed as a multi-input, multi-output network which combines information from all the available pulse sequences, implicitly infers which sequences are missing, and synthesizes the missing ones in a single forward pass. We demonstrate and validate our method on two brain MRI datasets, each with four sequences, and show the applicability of the proposed method in simultaneously synthesizing all missing sequences in any possible scenario where either one, two, or three of the four sequences may be missing. We compare our approach with competing unimodal and multi-modal methods, and show that we outperform both quantitatively and qualitatively.
The main motivation for most synthesis methods is to retain the ability to meaningfully use downstream analysis pipelines like segmentation or classification despite the partially missing input. However, there have been efforts by researchers working on those analysis pipelines to bypass any synthesis step by making the analysis methods themselves robust to missing sequences. Most notably, @cite_8 and @cite_45 provide methods for tumor segmentation using brain MRI that are robust to missing sequences @cite_8 or to missing sequence labels @cite_45 . Although these methods bypass the requirement of having a synthesis step before the actual downstream analysis, the performance of such robust versions of analysis pipelines often does not match the state-of-the-art performance of non-robust methods when all sequences are present. This is because the methods not only have to learn how to perform the task (segmentation/classification) well, but also to handle any missing input data. This two-fold objective for a single network raises a trade-off between robustness and performance.
{ "cite_N": [ "@cite_45", "@cite_8" ], "mid": [ "2884442510", "2891179298", "2788771790", "2767044624" ], "abstract": [ "Accurate synthesis of a full 3D MR image containing tumours from available MRI (e.g. to replace an image that is currently unavailable or corrupted) would provide a clinician as well as downstream inference methods with important complementary information for disease analysis. In this paper, we present an end-to-end 3D convolution neural network that takes a set of acquired MR image sequences (e.g. T1, T2, T1ce) as input and concurrently performs (1) regression of the missing full resolution 3D MRI (e.g. FLAIR) and (2) segmentation of the tumour into subtypes (e.g. enhancement, core). The hypothesis is that this would focus the network to perform accurate synthesis in the area of the tumour. Experiments on the BraTS 2015 and 2017 datasets [1] show that: (1) the proposed method gives better performance than state-of-the art methods in terms of established global evaluation metrics (e.g. PSNR), (2) replacing real MR volumes with the synthesized MRI does not lead to significant degradation in tumour and sub-structure segmentation accuracy. The system further provides uncertainty estimates based on Monte Carlo (MC) dropout [11] for the synthesized volume at each voxel, permitting quantification of the system’s confidence in the output at each location.", "We present an adversarial domain adaptation based deep learning approach for automatic tumor segmentation from T2-weighted MRI. Our approach is composed of two steps: (i) a tumor-aware unsupervised cross-domain adaptation (CT to MRI), followed by (ii) semi-supervised tumor segmentation using Unet trained with synthesized and limited number of original MRIs. We introduced a novel target specific loss, called tumor-aware loss, for unsupervised cross-domain adaptation that helps to preserve tumors on synthesized MRIs produced from CT images. In comparison, state-of-the art adversarial networks trained without our tumor-aware loss produced MRIs with ill-preserved or missing tumors. All networks were trained using labeled CT images from 377 patients with non-small cell lung cancer obtained from the Cancer Imaging Archive and unlabeled T2w MRIs from a completely unrelated cohort of 6 patients with pre-treatment and 36 on-treatment scans. Next, we combined 6 labeled pre-treatment MRI scans with the synthesized MRIs to boost tumor segmentation accuracy through semi-supervised learning. Semi-supervised training of cycle-GAN produced a segmentation accuracy of 0.66 computed using Dice Score Coefficient (DSC). Our method trained with only synthesized MRIs produced an accuracy of 0.74 while the same method trained in semi-supervised setting produced the best accuracy of 0.80 on test. Our results show that tumor-aware adversarial domain adaptation helps to achieve reasonably accurate cancer segmentation from limited MRI data by leveraging large CT datasets.", "Synthesized medical images have several important applications, e.g., as an intermedium in cross-modality image registration and as supplementary training samples to boost the generalization capability of a classifier. Especially, synthesized computed tomography (CT) data can provide X-ray attenuation map for radiation therapy planning. In this work, we propose a generic cross-modality synthesis approach with the following targets: 1) synthesizing realistic looking 3D images using unpaired training data, 2) ensuring consistent anatomical structures, which could be changed by geometric distortion in cross-modality synthesis and 3) improving volume segmentation by using synthetic data for modalities with limited training samples. We show that these goals can be achieved with an end-to-end 3D convolutional neural network (CNN) composed of mutually-beneficial generators and segmentors for image synthesis and segmentation tasks. The generators are trained with an adversarial loss, a cycle-consistency loss, and also a shape-consistency loss, which is supervised by segmentors, to reduce the geometric distortion. From the segmentation view, the segmentors are boosted by synthetic data from generators in an online manner. Generators and segmentors prompt each other alternatively in an end-to-end training fashion. With extensive experiments on a dataset including a total of 4,496 CT and magnetic resonance imaging (MRI) cardiovascular volumes, we show both tasks are beneficial to each other and coupling these two tasks results in better performance than solving them exclusively.", "We propose a multi-input multi-output fully convolutional neural network model for MRI synthesis. The model is robust to missing data, as it benefits from, but does not require, additional input modalities. The model is trained end-to-end, and learns to embed all input modalities into a shared modality-invariant latent space. These latent representations are then combined into a single fused representation, which is transformed into the target output modality with a learnt decoder. We avoid the need for curriculum learning by exploiting the fact that the various input modalities are highly correlated. We also show that by incorporating information from segmentation masks the model can both decrease its error and generate data with synthetic lesions. We evaluate our model on the ISLES and BRATS data sets and demonstrate statistically significant improvements over state-of-the-art methods for single input tasks. This improvement increases further when multiple input modalities are used, demonstrating the benefits of learning a common latent space, again resulting in a statistically significant improvement over the current best method. Finally, we demonstrate our approach on non skull-stripped brain images, producing a statistically significant improvement over the previous best method. Code is made publicly available at https://github.com/agis85/multimodal_brain_synthesis ." ] }