Dataset schema:
id: string, 1-5 characters
document_id: string, 1-5 characters
text_1: string, 78-2.56k characters
text_2: string, 95-23.3k characters
text_1_name: string, 1 distinct value ("Abstract of query paper")
text_2_name: string, 1 distinct value ("Cite abstracts")
29301
29300
This paper demonstrates fundamental limits of sensor networks for detection problems where the number of hypotheses is exponentially large. Such problems characterize many important applications including detection and classification of targets in a geographical area using a network of seismic sensors, and detecting complex substances with a chemical sensor array. We refer to such applications as large-scale detection problems. Using the insight that these problems share fundamental similarities with the problem of communicating over a noisy channel, we define the “sensing capacity” and lower bound it for a number of sensor network models. The sensing capacity expression differs significantly from the channel capacity due to the fact that for a fixed sensor configuration, codewords are dependent and nonidentically distributed. The sensing capacity provides a bound on the minimal number of sensors required to detect the state of an environment to within a desired accuracy. The results differ significantly from classical detection theory, and provide an intriguing connection between sensor networks and communications. In addition, we discuss the insight that sensing capacity provides for the problem of sensor selection.
From the Publisher: This timely book presents a consistent framework for addressing data fusion and sensor management. While the framework and the methods presented are applicable to a wide variety of multi-sensor systems, the book focuses on decentralized systems. The book also describes an actual application to robot navigation and presents real data and results. The vehicle makes use of sonar sensors with a focus-of-attention capability.
Abstract of query paper
Cite abstracts
29302
29301
This paper demonstrates fundamental limits of sensor networks for detection problems where the number of hypotheses is exponentially large. Such problems characterize many important applications including detection and classification of targets in a geographical area using a network of seismic sensors, and detecting complex substances with a chemical sensor array. We refer to such applications as large-scale detection problems. Using the insight that these problems share fundamental similarities with the problem of communicating over a noisy channel, we define the “sensing capacity” and lower bound it for a number of sensor network models. The sensing capacity expression differs significantly from the channel capacity due to the fact that for a fixed sensor configuration, codewords are dependent and nonidentically distributed. The sensing capacity provides a bound on the minimal number of sensors required to detect the state of an environment to within a desired accuracy. The results differ significantly from classical detection theory, and provide an intriguing connection between sensor networks and communications. In addition, we discuss the insight that sensing capacity provides for the problem of sensor selection.
The method of types is one of the key technical tools in Shannon theory, and this tool is valuable also in other fields. In this paper, some key applications are presented in sufficient detail enabling an interested nonspecialist to gain a working knowledge of the method, and a wide selection of further applications are surveyed. These range from hypothesis testing and large deviations theory through error exponents for discrete memoryless channels and capacity of arbitrarily varying channels to multiuser problems. While the method of types is suitable primarily for discrete memoryless models, its extensions to certain models with memory are also discussed.
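To make the large-deviations flavor of the method of types concrete, here is a minimal numerical sketch (my own illustration, not code from the survey above): it compares the exact probability that n i.i.d. draws from a distribution P produce a given empirical distribution (type) Q with the exponential estimate 2^(-n D(Q||P)), the quantity that drives the error exponents mentioned in the abstract. The alphabet, P, and n are arbitrary toy choices.

```python
# Minimal illustration of the method of types: the probability that n i.i.d.
# draws from P have type Q decays as 2^{-n D(Q||P)} up to a polynomial factor.
from math import comb, log2

def kl(q, p):
    """KL divergence D(Q||P) in bits for two distributions given as lists."""
    return sum(qi * log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

n = 40
P = [0.7, 0.3]                      # assumed source distribution over {0, 1}
for k in range(0, n + 1, 10):       # k = number of 1s observed
    Q = [(n - k) / n, k / n]        # the type (empirical distribution)
    exact = comb(n, k) * P[0] ** (n - k) * P[1] ** k   # P^n(type class of Q)
    bound = 2 ** (-n * kl(Q, P))                        # exponential estimate
    print(f"k={k:2d}  exact={exact:.3e}  2^(-n D(Q||P))={bound:.3e}")
```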
Abstract of query paper
Cite abstracts
29303
29302
This paper demonstrates fundamental limits of sensor networks for detection problems where the number of hypotheses is exponentially large. Such problems characterize many important applications including detection and classification of targets in a geographical area using a network of seismic sensors, and detecting complex substances with a chemical sensor array. We refer to such applications as large-scale detection problems. Using the insight that these problems share fundamental similarities with the problem of communicating over a noisy channel, we define the “sensing capacity” and lower bound it for a number of sensor network models. The sensing capacity expression differs significantly from the channel capacity due to the fact that for a fixed sensor configuration, codewords are dependent and nonidentically distributed. The sensing capacity provides a bound on the minimal number of sensors required to detect the state of an environment to within a desired accuracy. The results differ significantly from classical detection theory, and provide an intriguing connection between sensor networks and communications. In addition, we discuss the insight that sensing capacity provides for the problem of sensor selection.
In this paper we study the transport capacity of a data-gathering wireless sensor network under different communication organizations. In particular, we consider using a flat as well as a hierarchical clustering architecture to realize many-to-one communications. The capacity of the network under this many-to-one data-gathering scenario is reduced compared to random one-to-one communication due to the unavoidable creation of a point of traffic concentration at the data collector receiver. We introduce the overall throughput bound of λ = W/n per node, where W is the transmission capacity, and show under what conditions it can be achieved and under what conditions it cannot. When those conditions are not met, we constructively show how λ = Θ(W/n) is achieved with high probability as the number of sensors goes to infinity. We also show how the introduction of clustering can improve the throughput. We discuss the trade-offs between achieving capacity and energy consumption, how transport capacity might be affected by considering in-network processing and the implications this study has on the design of practical protocols for large-scale data-gathering wireless sensor networks. Motivated by limited computational resources in sensor nodes, the impact of complexity constraints on the communication efficiency of sensor networks is studied. A single-parameter characterization of processing limitation of nodes in sensor networks is invoked. Specifically, the relaying nodes are assumed to "donate" only a small part of their total processor time to relay other nodes' information. The amount of donated processor time is modelled by the node's ability to decode a channel code reliably at a given rate R. Focusing on a four-node network with two relays, prior work for a complexity-constrained single-relay network is built upon. In the proposed coding scheme, the transmitter sends a broadcast code such that the relays decode only the "coarse" information, and assist the receiver in removing ambiguity only in that information. Via numerical examples, the impact of different power constraints in the system, ranging from a per-node power bound to a network-wide power constraint, is explored. As the complexity bound R increases, the proposed scheme becomes identical to the recently proposed achievable rate by Gupta & Kumar (2003). Both discrete memoryless and Gaussian channels are considered. When n identical randomly located nodes, each capable of transmitting at W bits per second and using a fixed range, form a wireless network, the throughput λ(n) obtainable by each node for a randomly chosen destination is Θ(W/√(n log n)) bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission's range is optimally chosen, the bit-distance product that can be transported by the network per second is Θ(W√(An)) bit-meters per second. Thus even under optimal circumstances, the throughput is only Θ(W/√n) bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity.
Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to find acceptance. Let (X_k, Y_k)_{k=1}^∞ be a sequence of independent drawings of a pair of dependent random variables X, Y. Let us say that X takes values in the finite set 𝒳. It is desired to encode the sequence {X_k} in blocks of length n into a binary stream of rate R, which can in turn be decoded as a sequence {X̂_k}, where X̂_k ∈ 𝒳̂, the reproduction alphabet. The average distortion level is (1/n) Σ_{k=1}^n E[D(X_k, X̂_k)], where D(x, x̂) ≥ 0, x ∈ 𝒳, x̂ ∈ 𝒳̂, is a preassigned distortion measure. The special assumption made here is that the decoder has access to the side information {Y_k}. In this paper we determine the quantity R*(d), defined as the infimum of rates R such that (with ε > 0 arbitrarily small and with suitably large n) communication is possible in the above setting at an average distortion level (as defined above) not exceeding d + ε. The main result is that R*(d) = inf [I(X;Z) - I(Y;Z)], where the infimum is with respect to all auxiliary random variables Z (which take values in a finite set 𝒵) that satisfy: i) Y, Z conditionally independent given X; ii) there exists a function f: 𝒴 × 𝒵 → 𝒳̂, such that E[D(X, f(Y,Z))] ≤ d. Let R_{X|Y}(d) be the rate-distortion function which results when the encoder as well as the decoder has access to the side information {Y_k}. In nearly all cases it is shown that when d > 0 then R*(d) > R_{X|Y}(d), so that knowledge of the side information at the encoder permits transmission of the {X_k} at a given distortion level using a smaller transmission rate. This is in contrast to the situation treated by Slepian and Wolf [5] where, for arbitrarily accurate reproduction of {X_k}, i.e., d = ε for any ε > 0, knowledge of the side information at the encoder does not allow a reduction of the transmission rate. In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies. It relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies in sensor networks is distributed source coding (DSC), which refers to the compression of multiple correlated sensor outputs from sensors that do not communicate with each other. DSC allows a many-to-one video coding paradigm that effectively swaps encoder-decoder complexity with respect to conventional video coding, thereby representing a fundamental concept shift in video processing. This article has presented an intensive discussion on two DSC techniques, namely Slepian-Wolf coding and Wyner-Ziv coding. Slepian and Wolf theoretically showed that separate encoding is as efficient as joint encoding for lossless compression. We study network capacity limits and optimal routing algorithms for regular sensor networks, namely, square and torus grid sensor networks, in both the static case (no node failures) and the dynamic case (node failures).
For static networks, we derive upper bounds on the network capacity and then we characterize and provide optimal routing algorithms whose rate per node is equal to this upper bound, thus obtaining the exact analytical expression for the network capacity. For dynamic networks, the unreliability of the network is modeled in two ways: a Markovian node failure and an energy-based node failure. Depending on the probability of node failure that is present in the network, we propose to use a particular combination of two routing algorithms, the first one being optimal when there are no node failures at all and the second one being appropriate when the probability of node failure is high. The combination of these two routing algorithms defines a family of randomized routing algorithms, each of them being suitable for a given probability of node failure. The distributed nature of the sensor network architecture introduces unique challenges and opportunities for collaborative networked signal processing techniques that can potentially lead to significant performance gains. Many evolving low-power sensor network scenarios need to have high spatial density to enable reliable operation in the face of component node failures as well as to facilitate high spatial localization of events of interest. This induces a high level of network data redundancy, where spatially proximal sensor readings are highly correlated. We propose a new way of removing this redundancy in a completely distributed manner, i.e., without the sensors needing to talk to one another. Our constructive framework for this problem is dubbed DISCUS (distributed source coding using syndromes) and is inspired by fundamental concepts from information theory. We review the main ideas, provide illustrations, and give the intuition behind the theory that enables this framework. We present a new domain of collaborative information communication and processing through the framework of distributed source coding. This framework enables highly effective and efficient compression across a sensor network without the need to establish inter-node communication, using well-studied and fast error-correcting coding algorithms. Correlated information sequences ..., X_{-1}, X_0, X_1, ... and ..., Y_{-1}, Y_0, Y_1, ... are generated by repeated independent drawings of a pair of discrete random variables X, Y from a given bivariate distribution P_{XY}(x, y). We determine the minimum number of bits per character R_X and R_Y needed to encode these sequences so that they can be faithfully reproduced under a variety of assumptions regarding the encoders and decoders. The results, some of which are not at all obvious, are presented as an admissible rate region R in the R_X–R_Y plane. They generalize a similar and well-known result for a single information sequence, namely R_X ≥ H(X) for faithful reproduction.
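As a concrete companion to the Slepian-Wolf result quoted above, the following sketch (a toy illustration under an assumed joint pmf, not code from the cited papers) computes the corner constraints of the admissible rate region: separate encoders suffice whenever R_X ≥ H(X|Y), R_Y ≥ H(Y|X), and R_X + R_Y ≥ H(X,Y).

```python
# Slepian-Wolf rate-region constraints for a toy joint pmf P_XY.
from math import log2

P_XY = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}  # assumed toy joint pmf

def H(probs):
    """Entropy in bits of a collection of probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

# marginal distributions of X and Y
P_X, P_Y = {}, {}
for (x, y), p in P_XY.items():
    P_X[x] = P_X.get(x, 0.0) + p
    P_Y[y] = P_Y.get(y, 0.0) + p

H_XY, H_X, H_Y = H(P_XY.values()), H(P_X.values()), H(P_Y.values())
print(f"R_X       >= H(X|Y)  = {H_XY - H_Y:.3f} bits")
print(f"R_Y       >= H(Y|X)  = {H_XY - H_X:.3f} bits")
print(f"R_X + R_Y >= H(X,Y)  = {H_XY:.3f} bits")
```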
Abstract of query paper
Cite abstracts
29304
29303
This paper demonstrates fundamental limits of sensor networks for detection problems where the number of hypotheses is exponentially large. Such problems characterize many important applications including detection and classification of targets in a geographical area using a network of seismic sensors, and detecting complex substances with a chemical sensor array. We refer to such applications as large-scale detection problems. Using the insight that these problems share fundamental similarities with the problem of communicating over a noisy channel, we define the “sensing capacity” and lower bound it for a number of sensor network models. The sensing capacity expression differs significantly from the channel capacity due to the fact that for a fixed sensor configuration, codewords are dependent and nonidentically distributed. The sensing capacity provides a bound on the minimal number of sensors required to detect the state of an environment to within a desired accuracy. The results differ significantly from classical detection theory, and provide an intriguing connection between sensor networks and communications. In addition, we discuss the insight that sensing capacity provides for the problem of sensor selection.
Distributed sampling and reconstruction of a physical field using an array of sensors is a problem of considerable interest in environmental monitoring applications of sensor networks. Our recent work has focused on the sampling of bandlimited sensor fields. However, sensor fields are not perfectly bandlimited but typically have rapidly decaying spectra. In a classical sampling set-up it is possible to precede the A/D sampling operation with an appropriate analog anti-aliasing filter. However, in the case of sensor networks, this is infeasible since sampling must precede filtering. We show that even though the effects of aliasing on the reconstruction cannot be prevented due to the "filter-less" sampling constraint, they can be suitably controlled by oversampling and carefully reconstructing the field from the samples. We show using a dither-based scheme that it is possible to estimate non-bandlimited fields with a precision that depends on how fast the spectral content of the field decays. We develop a framework for analyzing non-bandlimited fields that leads to upper bounds on the maximum pointwise error for a spatial bit rate of R bits/meter. We present results for fields with exponentially decaying spectra as an illustration. In particular, we show that for fields f(t) with exponential tails, i.e., F(ω) < πα^(-α|ω|), the maximum pointwise error decays as c_2 e^(-a_1 √R) + c_3 (1/√R) e^(-2a_1 √R) with spatial bit rate R bits/meter. Finally, we show that for fields with spectra that have a finite second moment, the distortion decreases as O((1/N)^(2/3)) as the density of sensors, N, scales up to infinity. We show that if D is the targeted non-zero distortion, then the required (finite) rate R scales as O((1/√D) log(1/D)). For a class of sensor networks, the task is to monitor an underlying physical phenomenon over space and time through an imperfect observation process. The sensors can communicate back to a central data collector over a noisy channel. The key parameters in such a setting are the fidelity (or distortion) at which the underlying physical phenomenon can be estimated by the data collector, and the cost of operating the sensor network. This is a network joint source-channel communication problem, involving both compression and communication. It is well known that these two tasks may not be addressed separately without sacrificing optimality, and the optimal performance is generally unknown. This paper presents a lower bound on the best achievable end-to-end distortion as a function of the number of sensors, their total transmit power, the number of degrees of freedom of the underlying source process, and the spatio-temporal communication bandwidth. Particular coding schemes are studied, and it is shown that in some cases, the lower bound is tight in a scaling-law sense. By contrast, it is shown that the standard practice of separating source from channel coding may incur an exponential penalty in terms of communication resources, as a function of the number of sensors. Hence, such code designs effectively prevent scalability. Finally, it is outlined how the results extend to cases involving missing synchronization and channel fading. Sensing, processing and communication must be jointly optimized for efficient operation of resource-limited wireless sensor networks.
We propose a novel source-channel matching approach for distributed field estimation that naturally integrates these basic operations and facilitates a unified analysis of the impact of key parameters (number of nodes, power, field complexity) on estimation accuracy. At the heart of our approach is a distributed source-channel communication architecture that matches the spatial scale of field coherence with the spatial scale of node synchronization for phase-coherent communication: the sensor field is uniformly partitioned into multiple cells and the nodes in each cell coherently communicate simple statistics of their measurements to the destination via a dedicated noisy multiple access channel (MAC). Essentially, the optimal field estimate in each cell is implicitly computed at the destination via the coherent spatial averaging inherent in the MAC, resulting in optimal power-distortion scaling with the number of nodes. In general, smoother fields demand lower per-node power but require node synchronization over larger scales for optimal estimation. In particular, optimal mean-square distortion scaling can be achieved with sub-linear power scaling. Our results also reveal a remarkable power-density tradeoff inherent in our approach: increasing the sensor density reduces the total power required to achieve a desired distortion. A direct consequence is that consistent field estimation is possible, in principle, even with vanishing total power in the limit of high sensor density. We consider a problem of broadcast communication in sensor networks, in which samples of a random field are collected at each node, and the goal is for all nodes to obtain an estimate of the entire field within a prescribed distortion value. The main idea we explore in this paper is that of jointly compressing the data generated by different nodes as this information travels over multiple hops, to eliminate correlations in the representation of the sampled field. Our main contributions are: (a) we obtain, using simple network flow concepts, conditions on the rate distortion function of the random field, so as to guarantee that any node can obtain the measurements collected at every other node in the network, quantized to within any prescribed distortion value; and (b) we construct a large class of physically-motivated stochastic models for sensor data, for which we are able to prove that the joint rate distortion function of all the data generated by the whole network grows slower than the bounds found in (a). A truly novel aspect of our work is the tight coupling between routing and source coding, explicitly formulated in a simple and analytically tractable model - to the best of our knowledge, this connection had not been studied before. We address the problem of deterministic oversampling of bandlimited sensor fields in a distributed communication-constrained processing environment, where it is desired for a central intelligent unit to reconstruct the sensor field to maximum pointwise accuracy. We show, using a dither-based sampling scheme, that it is possible to accomplish this using minimal inter-sensor communication with the aid of a multitude of low-precision sensors. Furthermore, we show the feasibility of having a flexible tradeoff between the average oversampling rate and the Analog-to-Digital (A/D) quantization precision per sensor sample with respect to achieving exponential accuracy in the number of bits per Nyquist-period, thereby exposing a key underpinning "conservation of bits" principle.
That is, we can distribute the bit budget per Nyquist-period along the amplitude-axis (precision of the A/D converter) and space (or time or space-time) using oversampling in an almost arbitrary discrete-valued manner, while retaining the same reconstruction error decay profile. Interestingly this oversampling is possible in a highly localized communication setting, with only nearest-neighbor communication, making it very attractive for dense sensor networks operating under stringent inter-node communication constraints. Finally we show how our scheme incorporates security as a by-product due to the presence of an underlying dither signal which can be used as a natural encryption device for security. The choice of the dither function enhances the security of the network. Sensor networks have emerged as a fundamentally new tool for monitoring spatial phenomena. This paper describes a theory and methodology for estimating inhomogeneous, two-dimensional fields using wireless sensor networks. Inhomogeneous fields are composed of two or more homogeneous (smoothly varying) regions separated by boundaries. The boundaries, which correspond to abrupt spatial changes in the field, are nonparametric one-dimensional curves. The sensors make noisy measurements of the field, and the goal is to obtain an accurate estimate of the field at some desired destination (typically remote from the sensor network). The presence of boundaries makes this problem especially challenging. There are two key questions: 1) Given n sensors, how accurately can the field be estimated? 2) How much energy will be consumed by the communications required to obtain an accurate estimate at the destination? Theoretical upper and lower bounds on the estimation error and energy consumption are given. A practical strategy for estimation and communication is presented. The strategy, based on a hierarchical data-handling and communication architecture, provides a near-optimal balance of accuracy and energy consumption. In this paper we investigate the capability of large-scale sensor networks to measure and transport a two-dimensional field. We consider a data-gathering wireless sensor network in which densely deployed sensors take periodic samples of the sensed field, and then scalar quantize, encode and transmit them to a single receiver central controller where snapshot images of the sensed field are reconstructed. The quality of the reconstructed field is limited by the ability of the encoder to compress the data to a rate less than the single-receiver transport capacity of the network. Subject to a constraint on the quality of the reconstructed field, we are interested in how fast data can be collected (or equivalently how closely in time these snapshots can be taken) due to the limitation just mentioned. As the sensor density increases to infinity, more sensors send data to the central controller. However, the data is more correlated, and the encoder can do more compression. The question is: Can the encoder compress sufficiently to meet the limit imposed by the transport capacity? Alternatively, how long does it take to transport one snapshot? We show that as the density increases to infinity, the total number of bits required to attain a given quality also increases to infinity under any compression scheme. At the same time, the single-receiver transport capacity of the network remains constant as the density increases.
We therefore conclude that for the given scenario, even though the correlation between sensor data increases as the density increases, any data compression scheme is insufficient to transport the required amount of data for the given quality. Equivalently, the amount of time it takes to transport one snapshot goes to infinity.
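The trade-off these abstracts describe (coarse per-sensor quantization compensated by sensor density) can be seen in a deliberately simplified toy model; the sketch below is my own illustration, not any of the cited schemes. Each of N sensors reports a single dithered bit about an unknown field value s in [0, 1], and the fusion center averages the bits; the RMS error of the estimate falls roughly as 1/√N.

```python
# Toy illustration: dithered 1-bit sensing plus averaging at a fusion center.
# Each sensor compares s to an independent uniform threshold (the dither) and
# reports one bit; the bit average is an unbiased estimate of s.
import random

def rms_error(s, n_sensors, trials=2000):
    err2 = 0.0
    for _ in range(trials):
        bits = [1 if random.uniform(0.0, 1.0) < s else 0 for _ in range(n_sensors)]
        err2 += (sum(bits) / n_sensors - s) ** 2
    return (err2 / trials) ** 0.5

s = 0.37                           # assumed true field value at one point
for n in (10, 100, 1000):
    print(f"N={n:4d}  RMS error ~ {rms_error(s, n):.4f}")
```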
Abstract of query paper
Cite abstracts
29305
29304
This paper considers a multi-cell multiple antenna system with precoding used at the base stations for downlink transmission. For precoding at the base stations, channel state information (CSI) is essential at the base stations. A popular technique for obtaining this CSI in time division duplex (TDD) systems is uplink training by utilizing the reciprocity of the wireless medium. This paper mathematically characterizes the impact that uplink training has on the performance of such multi-cell multiple antenna systems. When non-orthogonal training sequences are used for uplink training, the paper shows that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner. This paper analyzes this fundamental problem of pilot contamination in multi-cell systems. Furthermore, it develops a new multi-cell MMSE-based precoding method that mitigates this problem. In addition to being a linear precoding method, this precoding method has a simple closed-form expression that results from an intuitive optimization problem formulation. Numerical results show significant performance gains compared to certain popular single-cell precoding methods.
The sum rate capacity of the multi-antenna broadcast channel has recently been computed. However, the search for efficient practical schemes that achieve it is still ongoing. In this paper, we focus on schemes with linear preprocessing of the transmitted data. We propose two criteria for the precoding matrix design: one maximizing the sum rate and the other maximizing the minimum rate among all users. The latter problem is shown to be quasiconvex and is solved exactly via a bisection method. In addition to precoding, we employ a signal scaling scheme that minimizes the average bit-error-rate (BER). The signal scaling scheme is posed as a convex optimization problem, and thus can be solved exactly via efficient interior-point methods. In terms of the achievable sum rate, the proposed technique significantly outperforms traditional channel inversion methods, while having comparable (in fact, often superior) BER performance. We characterize the sum capacity of the vector Gaussian broadcast channel by showing that the existing inner bound of Marton and the existing upper bound of Sato are tight for this channel. We exploit an intimate four-way connection between the vector broadcast channel, the corresponding point-to-point channel (where the receivers can cooperate), the multiple-access channel (MAC) (where the role of transmitters and receivers are reversed), and the corresponding point-to-point channel (where the transmitters can cooperate). A Gaussian broadcast channel (GBC) with r single-antenna receivers and t antennas at the transmitter is considered. Both transmitter and receivers have perfect knowledge of the channel. Despite its apparent simplicity, this model is, in general, a nondegraded broadcast channel (BC), for which the capacity region is not fully known. For the two-user case, we find a special case of Marton's (1979) region that achieves optimal sum-rate (throughput). In brief, the transmitter decomposes the channel into two interference channels, where interference is caused by the other user's signal. Users are successively encoded, such that encoding of the second user is based on the noncausal knowledge of the interference caused by the first user. The crosstalk parameters are optimized such that the overall throughput is maximum and, surprisingly, this is shown to be optimal over all possible strategies (not only with respect to Marton's achievable region). For the case of r>2 users, we find a somewhat simpler choice of Marton's region based on ordering and successively encoding the users. For each user i in the given ordering, the interference caused by users j>i is eliminated by zero forcing at the transmitter, while interference caused by users j<i is taken into account by coding for noncausally known interference. Under certain mild conditions, this scheme is found to be throughput-wise asymptotically optimal for both high and low signal-to-noise ratio (SNR). We conclude by providing some numerical results for the ergodic throughput of the simplified zero-forcing scheme in independent Rayleigh fading. Block diagonalization (BD) is a precoding technique that eliminates interuser interference in downlink multiuser multiple-input multiple-output (MIMO) systems. With the assumptions that all users have the same number of receive antennas and utilize all receive antennas when scheduled for transmission, the number of simultaneously supportable users with BD is limited by the ratio of the number of base station transmit antennas to the number of user receive antennas.
In a downlink MIMO system with a large number of users, the base station may select a subset of users to serve in order to maximize the total throughput. The brute-force search for the optimal user set, however, is computationally prohibitive. We propose two low-complexity suboptimal user selection algorithms for multiuser MIMO systems with BD. Both algorithms aim to select a subset of users such that the total throughput is nearly maximized. The first user selection algorithm greedily maximizes the total throughput, whereas the criterion of the second algorithm is based on the channel energy. We show that both algorithms have linear complexity in the total number of users and achieve around 95% of the total throughput of the complete search method in simulations. The Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC) is considered. The dirty-paper coding (DPC) rate region is shown to coincide with the capacity region. To that end, a new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality, to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case. Furthermore, the capacity region is characterized under a wide range of input constraints, accounting, as special cases, for the total power and the per-antenna power constraints. We consider a multiuser multiple-input multiple-output (MIMO) Gaussian broadcast channel (BC), where the transmitter and receivers have multiple antennas. Since the MIMO BC is in general a nondegraded BC, its capacity region remains an unsolved problem. We establish a duality between what is termed the "dirty paper" achievable region (the Caire-Shamai (see Proc. IEEE Int. Symp. Information Theory, Washington, DC, June 2001, p.322) achievable region) for the MIMO BC and the capacity region of the MIMO multiple-access channel (MAC), which is easy to compute. Using this duality, we greatly reduce the computational complexity required for obtaining the dirty paper achievable region for the MIMO BC. We also show that the dirty paper achievable region achieves the sum-rate capacity of the MIMO BC by establishing that the maximum sum rate of this region equals an upper bound on the sum rate of the MIMO BC. Recent theoretical results describing the sum-capacity when using multiple antennas to communicate with multiple users in a known rich scattering environment have not yet been followed with practical transmission schemes that achieve this capacity. We introduce a simple encoding algorithm that achieves near-capacity at sum-rates of tens of bits/channel use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a "sphere encoder" to perturb the data to reduce the energy of the transmitted signal. The paper is comprised of two parts. In this second part, we show that, after the regularization of the channel inverse introduced in the first part, a certain perturbation of the data using a "sphere encoder" can be chosen to further reduce the energy of the transmitted signal. The performance difference with and without this perturbation is shown to be dramatic. With the perturbation, we achieve excellent performance at all signal-to-noise ratios. The results of both uncoded and turbo-coded simulations are presented.
In this paper we compare the following two methods of transmit precoding for the multiple antenna broadcast channel: vector perturbation applied to channel inversion (also termed zero forcing or ZF) precoding and scalar Tomlinson-Harashima (TH) precoding applied to sum-rate achieving transmit precoding. Our results indicate that vector perturbation applied to channel inversion precoding can significantly reduce power enhancement and yields the full diversity afforded by the channel to each user. Scalar TH-modulo reduction significantly reduces the power enhancement for precoding based on the sum-rate criterion. The solution to vector perturbation applied to ZF precoding requires the solution to an integer optimization problem which is exponentially complex, or an approximation to the integer optimization problem which requires the Lenstra-Lenstra-Lovasz algorithm of polynomial complexity. Instead we propose a simpler solution (an approximation) to the vector perturbation problem based on the Rayleigh-Ritz theorem (R.A. Horn and C.R. Johnson, 1985). This approximate solution achieves the same diversity order as the optimal vector perturbation technique, but suffers a small coding loss. This solution is of polynomial complexity order. Further, a small increase in complexity with a "sphere"-based search around this solution yields significantly better performance. Since this vector perturbation is required to be done at the symbol rate, the lower complexity of the proposed algorithm is valuable in practice.
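The channel-inversion and regularized-inversion precoders discussed above can be sketched in a few lines. The snippet below is a simplified illustration under assumed parameters (K single-antenna users, M transmit antennas, SNR rho), not the exact algorithms of the cited papers, and it omits the vector-perturbation step.

```python
# Sketch: plain channel inversion (zero forcing) vs. regularized inversion for
# a K-user MISO broadcast channel H (K x M). The regularization term K/rho on
# the diagonal limits the transmit-power enhancement of the plain inverse.
import numpy as np

rng = np.random.default_rng(0)
K, M, rho = 4, 4, 10.0                       # users, tx antennas, assumed SNR
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K) / np.sqrt(2)  # QPSK data

P_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)                      # channel inversion
P_reg = H.conj().T @ np.linalg.inv(H @ H.conj().T + (K / rho) * np.eye(K))  # regularized

for name, P in (("zero-forcing ", P_zf), ("regularized  ", P_reg)):
    x = P @ s                                 # precoded transmit vector
    power = float(np.linalg.norm(x) ** 2)     # power enhancement before normalization
    y = H @ (x / np.linalg.norm(x))           # noiseless receive signals, unit tx power
    print(name, "unnormalized tx power:", round(power, 2))
```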
Abstract of query paper
Cite abstracts
29306
29305
We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.
We present approximation algorithms for the metric uncapacitated facility location problem and the metric k-median problem achieving guarantees of 3 and 6 respectively. The distinguishing feature of our algorithms is their low running time: O(m log m) and O(m log m (L + log n)) respectively, where n and m are the total number of vertices and edges in the underlying complete bipartite graph on cities and facilities. The main algorithmic ideas are a new extension of the primal-dual schema and the use of Lagrangian relaxation to derive approximation algorithms. In this paper, we address the problem of efficient cache placement in multi-hop wireless networks. We consider a network comprising a server with an interface to the wired network, and other nodes requiring access to the information stored at the server. In order to reduce access latency in such a communication environment, an effective strategy is caching the server information at some of the nodes distributed across the network. Caching, however, can imply a considerable overhead cost; for instance, disseminating information incurs additional energy as well as bandwidth burden. Since wireless systems are plagued by scarcity of available energy and bandwidth, we need to design caching strategies that optimally trade-off between overhead cost and access latency. We pose our problem as an integer linear program. We show that this problem is the same as a special case of the connected facility location problem, which is known to be NP-hard. We devise a polynomial time algorithm which provides a suboptimal solution. The proposed algorithm applies to any arbitrary network topology and can be implemented in a distributed and asynchronous manner. In the case of a tree topology, our algorithm gives the optimal solution. In the case of an arbitrary topology, it finds a feasible solution with an objective function value within a factor of 6 of the optimal value. This performance is very close to the best approximate solution known today, which is obtained in a centralized manner. We compare the performance of our algorithm against three candidate cache placement schemes, and show via extensive simulation that our algorithm consistently outperforms these alternative schemes. We study approximation algorithms for placing replicated data in arbitrary networks. Consider a network of nodes with individual storage capacities and a metric communication cost function, in which each node periodically issues a request for an object drawn from a collection of uniform-length objects. We consider the problem of placing copies of the objects among the nodes such that the average access cost is minimized. Our main result is a polynomial-time constant-factor approximation algorithm for this placement problem. Our algorithm is based on a careful rounding of a linear programming relaxation of the problem. We also show that the data placement problem is MAXSNP-hard. We extend our approximation result to a generalization of the data placement problem that models additional costs such as the cost of realizing the placement. We also show that when object lengths are non-uniform, a constant-factor approximation is achievable if the capacity at each node in the approximate solution is allowed to exceed that in the optimal solution by the length of the largest object.
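For readers unfamiliar with the objective these approximation results target, here is a small illustration: the uncapacitated facility location cost is the sum of the opening costs of the chosen facilities plus each city's distance to its nearest open facility. The instance is an assumed toy example and the heuristic is a naive greedy, not the primal-dual or LP-rounding algorithms of the cited papers.

```python
# Naive greedy for the uncapacitated facility location objective (illustration only).
def total_cost(open_facilities, opening_cost, dist):
    # opening costs plus each city's distance to its nearest open facility
    assign = sum(min(dist[c][f] for f in open_facilities) for c in range(len(dist)))
    return sum(opening_cost[f] for f in open_facilities) + assign

def greedy_ufl(opening_cost, dist):
    open_set, best = set(), float("inf")
    while True:
        candidates = [f for f in range(len(opening_cost)) if f not in open_set]
        if not candidates:
            return open_set, best
        f_new = min(candidates, key=lambda f: total_cost(open_set | {f}, opening_cost, dist))
        cost = total_cost(open_set | {f_new}, opening_cost, dist)
        if cost >= best:                      # no further improvement: stop
            return open_set, best
        open_set.add(f_new)
        best = cost

# assumed toy instance: 4 cities, 3 candidate facilities, metric distances
opening_cost = [3.0, 2.0, 4.0]
dist = [[1, 4, 6], [2, 1, 5], [6, 2, 1], [5, 3, 1]]   # dist[city][facility]
print(greedy_ufl(opening_cost, dist))
```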
Abstract of query paper
Cite abstracts
29307
29306
We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.
The advances in computer and wireless communication technologies have led to an increasing interest in ad hoc networks which are temporarily constructed by only mobile hosts. In ad hoc networks, since mobile hosts move freely, disconnections occur frequently, and this causes frequent network division. Consequently, data accessibility in ad hoc networks is lower than that in the conventional fixed networks. We propose three replica allocation methods to improve data accessibility by replicating data items on mobile hosts. In these three methods, we take into account the access frequency from mobile hosts to each data item and the status of the network connection. We also show the results of simulation experiments regarding the performance evaluation of our proposed methods. We present a family of epidemic algorithms for maintaining replicated database systems. The algorithms are based on the causal delivery of log records where each record corresponds to one transaction instead of one operation. The first algorithm in this family is a pessimistic protocol that ensures serializability and guarantees strict executions. Since we expect the epidemic algorithms to be used in environments with low probability of conflicts among transactions, we develop a variant of the pessimistic algorithm which is optimistic in that transactions commit as soon as they terminate locally and inconsistencies are detected asynchronously as the effects of committed transactions propagate through the system. The last member of the family of epidemic algorithms is pessimistic and uses voting with quorums to resolve conflicts and improve transaction response time. A simulation study evaluates the performance of the protocols. In mobile ad hoc networks, nodes move freely and link/node failures are common. This leads to frequent network partitions, which may significantly degrade the performance of data access in ad hoc networks. When a network partition occurs, mobile nodes in one partition are not able to access data hosted by nodes in other partitions. In this paper, we deal with this problem by applying data replication techniques. Existing data replication solutions in both wired and wireless networks aim at either reducing the query delay or improving the data accessibility. As both metrics are important for mobile nodes, we propose schemes to balance the tradeoffs between data accessibility and query delay under different system settings and requirements. Simulation results show that the proposed schemes can achieve a balance between these two metrics and provide satisfying system performance. Data caching can significantly improve the efficiency of information access in a wireless ad hoc network by reducing the access latency and bandwidth usage. However, designing efficient distributed caching algorithms is nontrivial when network nodes have limited memory. In this article, we consider the cache placement problem of minimizing total data access cost in ad hoc networks with multiple data items and nodes with limited memory capacity. The above optimization problem is known to be NP-hard. Defining benefit as the reduction in total access cost, we present a polynomial-time centralized approximation algorithm that provably delivers a solution whose benefit is at least 1/4 (1/2 for uniform-size data items) of the optimal benefit. The approximation algorithm is amenable to localized distributed implementation, which is shown via simulations to perform close to the approximation algorithm.
Our distributed algorithm naturally extends to networks with mobile nodes. We simulate our distributed algorithm using a network simulator (ns2) and demonstrate that it significantly outperforms another existing caching technique (by Yin and Cao [33]) in all important performance metrics. The performance differential is particularly large in more challenging scenarios such as higher access frequency and smaller memory.
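As a rough illustration of the "benefit" notion used above (the reduction in total access cost from adding one cached copy), the following sketch implements a naive greedy placement under a per-node memory capacity. It is my own simplification for intuition only: it recomputes costs from scratch and carries none of the approximation guarantees of the cited algorithm. The hop-distance matrix, demand matrix, server locations, and capacity are all assumed toy inputs.

```python
# Greedy benefit-based cache placement (illustration, not the cited algorithm).
import itertools

def total_access_cost(holders, hops, demand):
    # hops[u][v]: hop distance; demand[u][i]: access frequency of item i at node u
    return sum(f * min(hops[u][v] for v in holders[item])
               for u, freq in enumerate(demand) for item, f in enumerate(freq))

def greedy_cache(hops, demand, servers, capacity):
    n_nodes, n_items = len(hops), len(demand[0])
    holders = [{servers[i]} for i in range(n_items)]    # servers hold the originals
    used = [0] * n_nodes
    while True:
        base, best = total_access_cost(holders, hops, demand), (0.0, None)
        for v, i in itertools.product(range(n_nodes), range(n_items)):
            if used[v] >= capacity or v in holders[i]:
                continue
            trial = [h | {v} if j == i else h for j, h in enumerate(holders)]
            benefit = base - total_access_cost(trial, hops, demand)
            if benefit > best[0]:
                best = (benefit, (v, i))
        if best[1] is None:                              # no positive benefit left
            return holders
        v, i = best[1]
        holders[i].add(v)
        used[v] += 1

hops = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]     # assumed 3-node chain topology
demand = [[5, 1], [1, 1], [1, 4]]            # demand[node][item]
print(greedy_cache(hops, demand, servers=[0, 2], capacity=1))
```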
Abstract of query paper
Cite abstracts
29308
29307
We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.
This study investigates replication of data in a novel streaming architecture consisting of ad-hoc networks of wireless devices. One application of these devices is home-to-home (H2O) entertainment systems where a device collaborates with others to provide each household with on-demand access to a large selection of audio and video clips. These devices are configured with a substantial amount of storage and may cache several clips for future use. A contribution of this study is a technique to compute the number of replicas for a clip based on the square root of the product of the bandwidth required to display the clip and its frequency of access. We provide a proof to show this strategy is near optimal when the objective is to maximize the number of simultaneous displays in the system with string and grid (both symmetric and asymmetric) topologies. We say "near optimal" because exponent values less than 0.5 may be more optimal. In addition, we use analytical and simulation studies to demonstrate its superiority when compared with other alternatives. A second contribution is an analytical model to estimate the theoretical upper bound on the number of simultaneous displays supported by an arbitrary grid topology of H2O devices. This analytical model is useful during capacity planning because it estimates the capabilities of a H2O configuration by considering: the size of an underlying repository, the number of nodes in a H2O cloud, the representative grid topology for this cloud, and the expected available network bandwidth and storage capacity of each device. It shows that one may control the ratio of repository size to the storage capacity of participating nodes in order to enhance system performance. We validate this analytical model with a simulation study and quantify its tradeoffs. Sensor networks are often desired to last many times longer than the active lifetime of individual sensors. This is usually achieved by putting sensors to sleep for most of their lifetime. On the other hand, surveillance-type applications require guaranteed k-coverage of the protected region at all times. As a result, determining the appropriate number of sensors to deploy that achieves both goals simultaneously becomes a challenging problem. In this paper, we consider three kinds of deployments for a sensor network on a unit square - a √n × √n grid, random uniform (for all n points), and Poisson (with density n). In all three deployments, each sensor is active with probability p, independently of the others. Then, we claim that the critical value of the function npπr^2/log(np) is 1 for the event of k-coverage of every point. We also provide an upper bound on the window of this phase transition. Although the conditions for the three deployments are similar, we obtain sharper bounds for the random deployments than the grid deployment, which occurs due to the boundary condition. In this paper, we also provide corrections to previously published results for the grid deployment model. Finally, we use simulation to show the usefulness of our analysis in real deployment scenarios.
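A compact way to see the square-root replication rule described above: allocate a total replica budget across clips in proportion to (bandwidth × frequency)^0.5. The sketch below is a toy allocation with made-up bandwidths and access frequencies (clip sizes and topology are ignored), and the exponent is exposed as a parameter since the abstract notes that values below 0.5 can do even better.

```python
# Square-root replication rule: replicas_i proportional to (b_i * f_i)**0.5.
def sqrt_replication(bandwidth, frequency, total_replicas, exponent=0.5):
    weights = [(b * f) ** exponent for b, f in zip(bandwidth, frequency)]
    scale = total_replicas / sum(weights)
    return [max(1, round(w * scale)) for w in weights]   # keep at least one copy each

bandwidth = [4.0, 4.0, 8.0, 2.0]      # assumed Mbps needed to display each clip
frequency = [0.50, 0.25, 0.15, 0.10]  # assumed access frequencies
print(sqrt_replication(bandwidth, frequency, total_replicas=40))
```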
Abstract of query paper
Cite abstracts
29309
29308
We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.
While sensor networks are going to be deployed in diverse application specific contexts, one unifying view is to treat them essentially as distributed databases. The simplest mechanism to obtain information from this kind of a database is to flood queries for named data within the network and obtain the relevant responses from sources. However, if the queries are (a) complex, (b) one-shot, and (c) for replicated data, this simple approach can be highly inefficient. In the context of energy-starved sensor networks, alternative strategies need to be examined for such queries. We propose a novel and efficient mechanism for obtaining information in sensor networks which we refer to as ACtive QUery forwarding In sensoR nEtworks (ACQUIRE). The basic principle behind ACQUIRE is to consider the query as an active entity that is forwarded through the network (either randomly or in some directed manner) in search of the solution. ACQUIRE also incorporates a look-ahead parameter d in the following manner: intermediate nodes that handle the active query use information from all nodes within d hops in order to partially resolve the query. When the active query is fully resolved, a completed response is sent directly back to the querying node. We take a mathematical modelling approach in this paper to calculate the energy costs associated with ACQUIRE. The models permit us to characterize analytically the impact of critical parameters, and compare the performance of ACQUIRE with respect to other schemes such as flooding-based querying (FBQ) and expanding ring search (ERS), in terms of energy usage, response latency and storage requirements. We show that with optimal parameter settings, depending on the update frequency, ACQUIRE obtains an order of magnitude reduction over FBQ and potentially over 60–75% reduction over ERS (in highly dynamic environments and high query rates) in consumed energy. We show that these energy savings are provided in trade for increased response latency. The mathematical analysis is validated through extensive simulations. Advances in micro-sensor and radio technology will enable small but smart sensors to be deployed for a wide range of environmental monitoring applications. In order to constrain communication overhead, dense sensor networks call for new and highly efficient methods for distributing queries to nodes that have observed interesting events in the network. A highly efficient data-centric routing mechanism will offer significant power cost reductions (17), and improve network longevity. Moreover, because of the large amount of system and data redundancy possible, data becomes disassociated from specific nodes and resides in regions of the network (10)(7)(8). This paper describes and evaluates through simulation a scheme we call Rumor Routing, which allows for queries to be delivered to events in the network. Rumor Routing is tunable, and allows for tradeoffs between setup overhead and delivery reliability. It's intended for contexts in which geographic routing criteria are not applicable because a coordinate system is not available or the phenomenon of interest is not geographically correlated.
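To give a feel for the look-ahead parameter d in ACQUIRE, here is a toy random-walk simulation on a grid (my own simplification, not the analytical model of the paper): the query is forwarded at random and is considered resolved as soon as the handling node finds the answer within its d-hop neighborhood. Grid size, trial count, and the single-data-holder assumption are all arbitrary.

```python
# Toy model of an active query with look-ahead d on an n x n grid.
import random
from collections import deque

def neighbors(v, n):
    x, y = v
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < n and 0 <= y + dy < n]

def within_d(v, target, d, n):
    """True if target lies within d hops of v (BFS up to depth d)."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, dist = frontier.popleft()
        if u == target:
            return True
        if dist < d:
            for w in neighbors(u, n):
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, dist + 1))
    return False

def acquire_steps(n=15, d=2, trials=100):
    total = 0
    for _ in range(trials):
        holder = (random.randrange(n), random.randrange(n))  # node with the answer
        v, steps = (0, 0), 0
        while not within_d(v, holder, d, n):
            v = random.choice(neighbors(v, n))               # forward the query
            steps += 1
        total += steps
    return total / trials

for d in (0, 1, 2, 4):
    print(f"look-ahead d={d}: avg forwarding steps ~ {acquire_steps(d=d):.1f}")
```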
Abstract of query paper
Cite abstracts
29310
29309
In this work we consider the communication of information in the presence of a causal adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x=(x_1,...,x_n) bit-by-bit over a communication channel. The adversarial jammer can view the transmitted bits x_i one at a time, and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. Namely, for each bit x_i the jammer's decision on whether to corrupt it or not (and on how to change it) must depend only on x_j for j <= i. This is in contrast to the "classical" adversarial jammer which may base its decisions on its complete knowledge of x. We present a non-trivial upper bound on the amount of information that can be communicated. We show that the achievable rate can be asymptotically no greater than min{1-H(p), (1-4p)^+}. Here H(.) is the binary entropy function, and (1-4p)^+ equals 1-4p for p < 0.25, and 0 otherwise.
Robust and adaptive communication under uncertain interference by Anand Dilip Sarwate Doctor of Philosophy in Engineering—Electrical Engineering and Computer Sciences and the Designated Emphasis in Communication, Computation, and Statistics University of California, Berkeley Professor Michael Gastpar, Chair In the future, wireless communication systems will play an increasingly integral role in society. Cutting-edge application areas such as cognitive radio, ad-hoc networks, and sensor networks are changing the way we think about wireless services. The demand for ubiquitous communication and computing requires flexible communication protocols that can operate in a range of conditions. This thesis adopts and extends a mathematical model for these communication systems that accounts for uncertainty and time variation in link qualities. The arbitrarily varying channel (AVC) is an information theoretic channel model that has a time varying state with no statistical description. We assume the state is chosen by an adversarial jammer, reflecting the demand that our constructions work for all state sequences. In this thesis we show how resources such as secret keys, feedback, and side-information can help communication under this kind of uncertainty. In order to put our results in context we provide a detailed taxonomy of the known results on AVCs in a unified setting. We then prove new results on list decoding Csiszr and Krner's book is widely regarded as a classic in the field of information theory, providing deep insights and expert treatment of the key theoretical issues. It includes in-depth coverage of the mathematics of reliable information transmission, both in two-terminal and multi-terminal network scenarios. Updated and considerably expanded, this new edition presents unique discussions of information theoretic secrecy and of zero-error information theory, including the deep connections of the latter with extremal combinatorics. The presentations of all core subjects are self contained, even the advanced topics, which helps readers to understand the important connections between seemingly different problems. Finally, 320 end-of-chapter problems, together with helpful solving hints, allow readers to develop a full command of the mathematical techniques. It is an ideal resource for graduate students and researchers in electrical and electronic engineering, computer science and applied mathematics. In a recent paper, , presented a distributed polynomial-time rate-optimal network-coding scheme that works in the presence of Byzantine faults.We revisit their adversarial models and augment them with three, arguably realistic, models. In each of the models, we present a distributed scheme that demonstrates the usefulness of the model. In particular, all of the schemes obtain optimal rate C-z, where C is the network capacity and z is a bound on the number of links controlled by the adversary. In this paper, we review how Shannon's classical notion of capacity is not enough to characterize a noisy communication channel if the channel is intended to be used as part of a feedback loop to stabilize an unstable scalar linear system. While classical capacity is not enough, another sense of capacity (parametrized by reliability) called "anytime capacity" is necessary for the stabilization of an unstable process. The required rate is given by the log of the unstable system gain and the required reliability comes from the sense of stability desired. 
A consequence of this necessity result is a sequential generalization of the Schalkwijk-Kailath scheme for communication over the additive white Gaussian noise (AWGN) channel with feedback. In cases of sufficiently rich information patterns between the encoder and decoder, adequate anytime capacity is also shown to be sufficient for there to exist a stabilizing controller. These sufficiency results are then generalized to cases with noisy observations, delayed control actions, and without any explicit feedback between the observer and the controller. Both necessary and sufficient conditions are extended to continuous time systems as well. We close with comments discussing a hierarchy of difficulty for communication problems and how these results establish where stabilization problems sit in that hierarchy. We design codes to transmit information over a network, some subset of which is controlled by a malicious adversary. The computationally unbounded, hidden adversary knows the message to be transmitted, and can observe and change information over the part of the network being controlled. The network nodes do not share resources such as shared randomness or a private key. We first consider a unicast problem in a network with |ε| parallel, unit-capacity, directed edges. The rate-region has two parts. If the adversary controls a fraction p < 0.5 of the |ε| edges, the maximal throughput equals (1 - p)|ε|. We describe low-complexity codes that achieve this rate-region. We then extend these results to investigate more general multicast problems in directed, acyclic networks.
Abstract of query paper
Cite abstracts
29311
29310
In this work we consider the communication of information in the presence of a causal adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x=(x_1,...,x_n) bit-by-bit over a communication channel. The adversarial jammer can view the transmitted bits x_i one at a time, and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. Namely, for each bit x_i the jammer's decision on whether to corrupt it or not (and on how to change it) must depend only on x_j for j <= i. This is in contrast to the "classical" adversarial jammer which may base its decisions on its complete knowledge of x. We present a non-trivial upper bound on the amount of information that can be communicated. We show that the achievable rate can be asymptotically no greater than min{1-H(p), (1-4p)^+}. Here H(.) is the binary entropy function, and (1-4p)^+ equals 1-4p for p < 0.25, and 0 otherwise.
In the list-of-L decoding of a block code the receiver of a noisy sequence lists L possible transmitted messages, and is in error only if the correct message is not on the list. Consideration is given to (n,e,L) codes, which correct all sets of e or fewer errors in a block of n bits under list-of-L decoding. New geometric relations between the number of errors corrected under list-of-1 decoding and the (larger) number corrected under list-of-L decoding of the same code lead to new lower bounds on the maximum rate of (n,e,L) codes. They show that a jammer who can change a fixed fraction p >
Abstract of query paper
Cite abstracts
29312
29311
In this paper, we present a novel and general framework called Maximum Entropy Discrimination Markov Networks (MaxEnDNet), which integrates the max-margin structured learning and Bayesian-style estimation and combines and extends their merits. Major innovations of this model include: 1) It generalizes the extant Markov network prediction rule based on a point estimator of weights to a Bayesian-style estimator that integrates over a learned distribution of the weights. 2) It extends the conventional max-entropy discrimination learning of classification rule to a new structural max-entropy discrimination paradigm of learning the distribution of Markov networks. 3) It subsumes the well-known and powerful Maximum Margin Markov network (M^3N) as a special case, and leads to a model similar to an ℓ1-regularized M^3N that is simultaneously primal and dual sparse, or other types of Markov network by plugging in different prior distributions of the weights. 4) It offers a simple inference algorithm that combines existing variational inference and convex-optimization based M^3N solvers as subroutines. 5) It offers a PAC-Bayesian style generalization bound. This work represents the first successful attempt to combine Bayesian-style learning (based on generative models) with structured maximum margin learning (based on a discriminative model), and outperforms a wide array of competing methods for structured input output learning on both synthetic and real data sets.
This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the 'relevance vector machine' (RVM), a model of identical functional form to the popular and state-of-the-art 'support vector machine' (SVM). We demonstrate that by exploiting a probabilistic Bayesian learning framework, we can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages. These include the benefits of probabilistic predictions, automatic estimation of 'nuisance' parameters, and the facility to utilise arbitrary basis functions (e.g. non-'Mercer' kernels). We detail the Bayesian framework and associated learning algorithm for the RVM, and give some illustrative examples of its application along with some comparative benchmarks. We offer some explanation for the exceptional degree of sparsity obtained, and discuss and demonstrate some of the advantageous features, and potential extensions, of Bayesian relevance learning.
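A minimal sketch of the kind of sparse Bayesian (automatic relevance determination) regression the RVM abstract describes, using the standard evidence-maximization re-estimation updates; the toy data, pruning cap, and iteration count are illustrative choices, not taken from the paper.

    import numpy as np

    def rvm_regression(Phi, t, n_iter=200, alpha_cap=1e6):
        # Sparse Bayesian linear regression via type-II maximum likelihood:
        # per-weight precisions alpha and noise precision beta are re-estimated
        # from the posterior mean mu and covariance Sigma.
        N, M = Phi.shape
        alpha = np.ones(M)
        beta = 1.0 / np.var(t)
        mu = np.zeros(M)
        for _ in range(n_iter):
            Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
            mu = beta * Sigma @ Phi.T @ t
            gamma = 1.0 - alpha * np.diag(Sigma)   # how well-determined each weight is
            alpha = np.minimum(gamma / (mu ** 2 + 1e-12), alpha_cap)
            beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
        return mu, alpha

    # Toy problem: one relevant basis function among ten.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 50)
    Phi = np.column_stack([x] + [rng.normal(size=50) for _ in range(9)])
    t = 2.0 * x + 0.1 * rng.normal(size=50)
    mu, alpha = rvm_regression(Phi, t)
    print(np.round(mu, 3))   # weights of the irrelevant columns are driven towards zero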
Abstract of query paper
Cite abstracts
29313
29312
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent instantiation of these schematic answers using a conventional relational DBMS. In this research note, we outline the main idea of this technique -- using abstractions of databases and constrained clauses for deriving schematic answers. The proposed method can be directly used with regular RDB, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
Relational databases are widely used today as a mechanism for providing access to structured data. They, however, are not suitable for typical information finding tasks of end users. There is often a semantic gap between the queries users want to express and the queries that can be answered by the database. In this paper, we propose a system that bridges this semantic gap using domain knowledge contained in ontologies. Our system extends relational databases with the ability to answer semantic queries that are represented in SPARQL, an emerging Semantic Web query language. Users express their queries in SPARQL, based on a semantic model of the data, and they get back semantically relevant results. We define different categories of results that are semantically relevant to the users' query and show how our system retrieves these results. We evaluate the performance of our system on sample relational databases, using a combination of standard and custom ontologies. The goal of data integration is to provide a uniform access to a set of heterogeneous data sources, freeing the user from the knowledge about where the data are, how they are stored, and how they can be accessed. The problem of designing effective data integration solutions has been addressed by several research and development projects in the last years. One of the outcomes of this research work is a clear conceptual architecture for data integration1. According to this architecture [9], the main components of a data integration system are the global schema, the sources, and the mapping. Thus, a data integration system is seen as a triple 〈G,S,M〉, where: We propose a new Description Logic, called DL-Lite, specifically tailored to capture basic ontology languages, while keeping low complexity of reasoning. Reasoning here means not only computing subsumption between concepts, and checking satisfiability of the whole knowledge base, but also answering complex queries (in particular, conjunctive queries) over the set of instances maintained in secondary storage. We show that in DL-Lite the usual DL reasoning tasks are polynomial in the size of the TBox, and query answering is polynomial in the size of the ABox (i.e., in data complexity). To the best of our knowledge, this is the first result of polynomial data complexity for query answering over DL knowledge bases. A notable feature of our logic is to allow for a separation between TBox and ABox reasoning during query evaluation: the part of the process requiring TBox reasoning is independent of the ABox, and the part of the process requiring access to the ABox can be carried out by an SQL engine, thus taking advantage of the query optimization strategies provided by current DBMSs. Abstract : We present DLDB, a knowledge base system that extends a relational database management system with additional capabilities for DAML+OIL inference. We discuss a number of database schemas that can be used to store RDF data and discuss the tradeoffs of each. Then we describe how we extend our design to support DAML+OIL entailments. The most significant aspect of our approach is the use of a description logic reasoner to precompute the subsumption hierarchy. We describe a lightweight implementation that makes use of a common RDBMS (MS Access) and the FaCT description logic reasoner. Surprisingly, this simple approach provides good results for extensional queries over a large set of DAML+OIL data that commits to a representative ontology of moderate complexity. 
As such, we expect such systems to be adequate for personal or small-business usage. Ontologies are a crucial tool for formally specifying the vocabulary and relationship of concepts used on the Semantic Web. In order to share information, agents that use different vocabularies must be able to translate data from one ontological framework to another. Ontology translation is required when translating datasets, generating ontology extensions, and querying through different ontologies. OntoMerge, an online system for ontology merging and automated reasoning, can implement ontology translation with inputs and outputs in OWL or other web languages. Ontology translation can be thought of in terms of formal inference in a merged ontology. The merge of two related ontologies is obtained by taking the union of the concepts and the axioms defining them, and then adding bridging axioms that relate their concepts. The resulting merged ontology then serves as an inferential medium within which translation can occur. Our internal representation, Web-PDDL, is a strong typed first-order logic language for web application. Using a uniform notation for all problems allows us to factor out syntactic and semantic translation problems, and focus on the latter. Syntactic translation is done by an automatic translator between Web-PDDL and OWL or other web languages. Semantic translation is implemented using an inference engine (OntoEngine) which processes assertions and queries in Web-PDDL syntax, running in either a data-driven (forward chaining) or demand-driven (backward chaining) way. Recently, several approaches have been proposed on combining description logic (DL) reasoning with database techniques. In this paper we report on the LAS (Large Abox Store) system extending the DL reasoner Racer with a database used to store and query Tbox and Abox information. LAS stores for given knowledge bases their taxonomy and their complete Abox in its database. The Aboxes may contain role assertions. LAS can answer Tbox and Abox queries by combining SQL queries with DL reasoning. The architecture of LAS is based on merging techniques for so-called individual pseudo models.
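The schematic-answer technique itself is not reproduced here, but the following self-contained rdflib sketch illustrates the kind of ontology-aware querying these systems target: a subclass axiom lets a SPARQL property path return answers that a plain query over the base facts would miss. All names and URIs are invented.

    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    EX = Namespace("http://example.org/")   # hypothetical vocabulary
    g = Graph()

    # Tiny "ontology": every Manager is an Employee.
    g.add((EX.Manager, RDFS.subClassOf, EX.Employee))

    # Facts, e.g. rows lifted from a relational table.
    g.add((EX.alice, RDF.type, EX.Manager))
    g.add((EX.bob, RDF.type, EX.Employee))
    g.add((EX.alice, EX.name, Literal("Alice")))
    g.add((EX.bob, EX.name, Literal("Bob")))

    # The subClassOf* property path stands in for the reasoning step.
    q = """
    PREFIX ex:   <http://example.org/>
    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?name WHERE {
      ?x rdf:type/rdfs:subClassOf* ex:Employee .
      ?x ex:name ?name .
    }
    """
    for row in g.query(q):
        print(row[0])   # prints both Alice and Bob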
Abstract of query paper
Cite abstracts
29314
29313
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent instantiation of these schematic answers using a conventional relational DBMS. In this research note, we outline the main idea of this technique -- using abstractions of databases and constrained clauses for deriving schematic answers. The proposed method can be directly used with regular RDB, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
Summary: We describe multiple methods for accessing and querying the complex and integrated cellular data in the BioCyc family of databases: access through multiple file formats, access through Application Program Interfaces (APIs) for LISP, Perl and Java, and SQL access through the BioWarehouse relational database. Availability: The Pathway Tools software and 20 BioCyc DBs in Tiers 1 and 2 are freely available to academic users; fees apply to some types of commercial use. For download instructions see http: BioCyc.org download.shtml Supplementary information: For more details on programmatic access to BioCyc DBs, see http: bioinformatics.ai.sri.com ptools ptools-resources.html Contact: [email protected]
Abstract of query paper
Cite abstracts
29315
29314
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent instantiation of these schematic answers using a conventional relational DBMS. In this research note, we outline the main idea of this technique -- using abstractions of databases and constrained clauses for deriving schematic answers. The proposed method can be directly used with regular RDB, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
Abstract Constraint Logic Programming (CLP) is a merger of two declarative paradigms: constraint solving and logic programming. Although a relatively new field, CLP has progressed in several quite different directions. In particular, the early fundamental concepts have been adapted to better serve in different areas of applications. In this survey of CLP, a primary goal is to give a systematic description of the major trends in terms of common fundamental concepts. The three main parts cover the theory, implementation issues, and programming for applications. Recently, extensions of constrained logic programming and constrained resolution for theorem proving have been introduced, that consider constraints, which are interpreted under an open world assumption. We discuss relationships between applications of these approaches for query answering in knowledge base systems on the one hand and abduction-based hypothetical reasoning on the other hand. We show both that constrained resolution can be used as an operationalization of (some limited form of) abduction and that abduction is the logical status of an answer generation process through constrained resolution, ie., it is an abductive but not a deductive form of reasoning.
Abstract of query paper
Cite abstracts
29316
29315
To establish secure (point-to-point and/or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (so-called) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
This paper presents a new pairing protocol that allows two CPU-constrained wireless devices Alice and Bob to establish a shared secret at a very low cost. To our knowledge, this is the first software pairing scheme that does not rely on expensive public-key cryptography, out-of-band channels (such as a keyboard or a display) or specific hardware, making it inexpensive and suitable for CPU-constrained devices such as sensors. In the described protocol, Alice can send the secret bit 1 to Bob by broadcasting an (empty) packet with the source field set to Alice. Similarly, Alice can send the secret bit 0 to Bob by broadcasting an (empty) packet with the source field set to Bob. Only Bob can identify the real source of the packet (since it did not send it, the source is Alice), and can recover the secret bit (1 if the source is set to Alice or 0 otherwise). An eavesdropper cannot retrieve the secret bit since it cannot figure out whether the packet was actually sent by Alice or Bob. By randomly generating n such packets Alice and Bob can agree on an n-bit secret key. Our scheme requires that the devices being paired, Alice and Bob, are shaken during the key exchange protocol. This is to guarantee that an eavesdropper cannot identify the packets sent by Alice from those sent by Bob using data from the RSSI (Received Signal Strength Indicator) registers available in commercial wireless cards. The proposed protocol works with off-the-shelf 802.11 wireless cards and is secure against eavesdropping attacks that use power analysis. It requires, however, some firmware changes to protect against attacks that attempt to identify the source of packets from their transmission frequency. We demonstrate the feasibility of finger-printing the radio of wireless sensor nodes (Chipcon 1000 radio, 433MHz). We show that, with this type of devices, a receiver can create device radio finger-prints and subsequently identify origins of messages exchanged between the devices, even if message contents and device identifiers are hidden. We further analyze the implications of device fingerprinting on the security of sensor networking protocols, specifically, we propose two new mechanisms for the detection of wormholes in sensor networks.
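A toy simulation of the source-field trick described in the first abstract above, written only to illustrate why a passive sniffer learns nothing: the observable source field is independent of the agreed bit once the sniffer cannot tell which device physically transmitted. The scheduling and packet format here are simplifications, not the actual protocol.

    import random

    def pairing_simulation(n_bits=16, seed=1):
        # Toy model of source-field based secret bit exchange.
        rng = random.Random(seed)
        alice_key, bob_key, sniffed = [], [], []
        for _ in range(n_bits):
            sender = rng.choice(["Alice", "Bob"])   # the sniffer cannot tell who transmits
            bit = rng.randint(0, 1)                 # the sender's contribution to the key
            other = "Bob" if sender == "Alice" else "Alice"
            source_field = sender if bit == 1 else other   # spoofable source field
            # The sender already knows the bit; the receiver decodes it because it
            # knows it did not transmit, hence the true origin is the other device.
            decoded = 1 if source_field == sender else 0
            (alice_key if sender == "Alice" else bob_key).append(bit)
            (bob_key if sender == "Alice" else alice_key).append(decoded)
            sniffed.append(source_field)            # all a passive eavesdropper observes
        return alice_key, bob_key, sniffed

    a, b, s = pairing_simulation()
    assert a == b                  # both sides hold the same key bits
    print("shared key :", a)
    print("sniffed src:", s)       # statistically independent of the key bits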
Abstract of query paper
Cite abstracts
29317
29316
To establish secure (point-to-point and/or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (so-called) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
In the near future, many personal electronic devices will be able to communicate with each other over a short range wireless channel. We investigate the principal security issues for such an environment. Our discussion is based on the concrete example of a thermometer that makes its readings available to other nodes over the air. Some lessons learned from this example appear to be quite general to ad-hoc networks, and rather different from what we have come to expect in more conventional systems: denial of service, the goals of authentication, and the problems of naming all need re-examination. We present the resurrecting duckling security policy model, which describes secure transient association of a device with multiple serialised owners.
Abstract of query paper
Cite abstracts
29318
29317
To establish secure (point-to-point and/or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (so-called) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
Current mechanisms for authenticating communication between devices that share no prior context are inconvenient for ordinary users, without the assistance of a trusted authority. We present and analyze seeing-is-believing, a system that utilizes 2D barcodes and camera-telephones to implement a visual channel for authentication and demonstrative identification of devices. We apply this visual channel to several problems in computer security, including authenticated key exchange between devices that share no prior context, establishment of a trusted path for configuration of a TCG-compliant computing platform, and secure device configuration in the context of a smart home. In this paper we address the problem of secure communication and authentication in ad-hoc wireless networks. This is a difficult problem, as it involves bootstrapping trust between strangers. We present a user-friendly solution, which provides secure authentication using almost any established public-key-based key exchange protocol, as well as inexpensive hash-based alternatives. In our approach, devices exchange a limited amount of public information over a privileged side channel, which will then allow them to complete an authenticated key exchange protocol over the wireless link. Our solution does not require a public key infrastructure, is secure against passive attacks on the privileged side channel and all attacks on the wireless link, and directly captures users’ intuitions that they want to talk to a particular previously unknown device in their physical proximity. We have implemented our system in Java for a variety of different devices, communication media, and key
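A minimal sketch of the common idea behind these schemes (not the exact Seeing-is-Believing implementation): a short digest of a device's public key is conveyed over the visual channel and compared with a digest recomputed from the copy of the key received over the insecure radio link. The key material and digest length below are placeholders.

    import hashlib
    import secrets

    def visual_auth_digest(public_key_bytes, n_bytes=8):
        # Short digest of a device's public key, small enough to encode in a
        # 2D barcode; the verifier recomputes it from the key received over
        # the radio link and checks that the two values match.
        return hashlib.sha256(public_key_bytes).hexdigest()[: 2 * n_bytes]

    # Stand-in for a real public key received over the wireless link.
    device_pk = secrets.token_bytes(32)

    shown_on_display = visual_auth_digest(device_pk)        # sent over the visual channel
    recomputed_by_verifier = visual_auth_digest(device_pk)  # computed from the radio copy
    print("authentic:", shown_on_display == recomputed_by_verifier)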
Abstract of query paper
Cite abstracts
29319
29318
To establish secure (point-to-point and/or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (so-called) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
Recently several researchers and practitioners have begun to address the problem of how to set up secure communication between two devices without the assistance of a trusted third party. Earlier work (2005) proposed that one device displays the hash of its public key in the form of a barcode, and the other device reads it using a camera. Mutual authentication requires switching the roles of the devices and repeating the above process in the reverse direction. In this paper, we show how strong mutual authentication can be achieved even with a unidirectional visual channel, without having to switch device roles. By adopting recently proposed improved pairing protocols, we propose how visual channel authentication can be used even on devices that have very limited displaying capabilities. Key agreement protocols are frequently based on the Diffie-Hellman protocol but require authenticating the protocol messages in two ways. This can be done by a cross-authentication protocol. Such protocols, based on the assumption that a channel which can authenticate short strings is available (SAS-based), have been proposed by Vaudenay. In this paper, we survey existing protocols and we propose a new one. Our proposed protocol requires three moves and a single SAS to be authenticated in two ways. It is provably secure in the random oracle model. We can further achieve security with a generic construction (e.g. in the standard model) at the price of an extra move. We discuss applications such as secure peer-to-peer VoIP.
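For illustration only, a simplified commitment-plus-short-authenticated-string exchange in the spirit of the SAS-based protocols surveyed above; the message contents, SAS length, and move structure are stand-ins rather than the cited three-move protocol.

    import hashlib
    import secrets

    def commit(value: bytes):
        # Hash commitment: returns (commitment, opening nonce).
        nonce = secrets.token_bytes(16)
        return hashlib.sha256(nonce + value).digest(), nonce

    def sas(transcript: bytes, ra: bytes, rb: bytes, bits=20):
        # Short authenticated string derived from the protocol transcript.
        digest = hashlib.sha256(transcript + ra + rb).digest()
        return int.from_bytes(digest, "big") >> (256 - bits)

    # Move 1: Alice commits to a random value and sends commitment + her message mA.
    mA = b"Alice's key-exchange message"      # placeholder content
    rA = secrets.token_bytes(16)
    cA, nonceA = commit(rA)

    # Move 2: Bob replies with his message mB and a random value rB.
    mB = b"Bob's key-exchange message"
    rB = secrets.token_bytes(16)

    # Move 3: Alice opens the commitment; Bob verifies the opening.
    assert hashlib.sha256(nonceA + rA).digest() == cA

    # Both sides derive a short string and the users compare it out of band.
    transcript = mA + mB
    print("Alice's SAS:", sas(transcript, rA, rB))
    print("Bob's   SAS:", sas(transcript, rA, rB))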
Abstract of query paper
Cite abstracts
29320
29319
This paper introduces a model based upon games on an evolving network, and develops three clustering algorithms according to it. In the clustering algorithms, data points for clustering are regarded as players who can make decisions in games. On the network describing relationships among data points, an edge-removing-and-rewiring (ERR) function is employed to explore in a neighborhood of a data point, which removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. As such, the connections among data points vary over time. During the evolution of network, some strategies are spread in the network. As a consequence, clusters are formed automatically, in which data points with the same evolutionarily stable strategy are collected as a cluster, so the number of evolutionarily stable strategies indicates the number of clusters. Moreover, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.
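A loose sketch of the edge-removing-and-rewiring idea, for intuition only: here the "payoff" is simply negative squared distance and the stopping rule is a fixed number of rounds, both invented stand-ins for the game-theoretic quantities the paper actually uses; clusters are read off as the connected components that survive.

    import numpy as np

    def err_clustering(X, k=3, n_rounds=50, seed=0):
        # Each point keeps k edges, repeatedly dropping its lowest-payoff
        # neighbour and rewiring to a random candidate if that candidate pays more.
        rng = np.random.default_rng(seed)
        n = len(X)
        payoff = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        nbrs = [list(rng.choice([j for j in range(n) if j != i], size=k, replace=False))
                for i in range(n)]
        for _ in range(n_rounds):
            for i in range(n):
                worst = min(nbrs[i], key=lambda j: payoff[i, j])
                cand = rng.integers(n)
                if cand != i and cand not in nbrs[i] and payoff[i, cand] > payoff[i, worst]:
                    nbrs[i].remove(worst)       # edge removing ...
                    nbrs[i].append(cand)        # ... and rewiring
        # Clusters = connected components of the undirected view of the final graph.
        labels, current = [-1] * n, 0
        for s in range(n):
            if labels[s] != -1:
                continue
            stack = [s]
            while stack:
                u = stack.pop()
                if labels[u] != -1:
                    continue
                labels[u] = current
                stack.extend(v for v in nbrs[u] if labels[v] == -1)
                stack.extend(v for v in range(n) if u in nbrs[v] and labels[v] == -1)
            current += 1
        return labels

    # Two well-separated blobs should come out as two components.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
    print(err_clustering(X))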
Game theory is one of the key paradigms behind many scientific disciplines from biology to behavioral sciences to economics. In its evolutionary form and especially when the interacting agents are linked in a specific social network the underlying solution concepts and methods are very similar to those applied in non-equilibrium statistical physics. This review gives a tutorial-type overview of the field for physicists. The first four sections introduce the necessary background in classical and evolutionary game theory from the basic definitions to the most important results. The fifth section surveys the topological complications implied by non-mean-field-type social network structures in general. The next three sections discuss in detail the dynamic behavior of three prominent classes of models: the Prisoner's Dilemma, the Rock–Scissors–Paper game, and Competing Associations. The major theme of the review is in what sense and how the graph structure of interactions can modify and enrich the picture of long term behavioral patterns emerging in evolutionary games. The authors show that the paradox of backward induction is resolvable. One solution rests on the fact that rational players (or agents) are not necessarily in a position to use the backward-induction argument. Part I Journey to a 21st birthday: the boy in Wisconsin forests and fields education in Chicago encounter with a scientific revolution - political science at Chicago. Part II The scientist as a young man: a taste of research - the City Managers' Association managing research - Berkeley teaching at Illinois Tech a matter of loyalty building a business school - the Graduate School of Industrial Administration research and science politics mazes without minotaurs roots of artificial intelligence climbing the mountain - artificial intelligence achieved. Part III View from the mountain: exploring the plain personal threads in the warp creating a university environment for cognitive science and A.I. on being argumentative the student troubles the scientist as politician foreign adventures. Part IV Research after 60: from Nobel to now the amateur diplomat in China and the Soviet Union guides for choice. Afterword: the scientist as problem solver.
Abstract of query paper
Cite abstracts
29321
29320
This paper introduces a model based upon games on an evolving network, and develops three clustering algorithms according to it. In the clustering algorithms, data points for clustering are regarded as players who can make decisions in games. On the network describing relationships among data points, an edge-removing-and-rewiring (ERR) function is employed to explore in a neighborhood of a data point, which removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. As such, the connections among data points vary over time. During the evolution of network, some strategies are spread in the network. As a consequence, clusters are formed automatically, in which data points with the same evolutionarily stable strategy are collected as a cluster, so the number of evolutionarily stable strategies indicates the number of clusters. Moreover, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.
This text introduces current evolutionary game theory--where ideas from evolutionary biology and rationalistic economics meet--emphasizing the links between static and dynamic approaches and noncooperative game theory. The author provides an overview of the developments that have taken place in this branch of game theory, discusses the mathematical tools needed to understand the area, describes both the motivation and intuition for the concepts involved, and explains why and how the theory is relevant to economics. A group of individuals resolve their disputes by a knockout tournament. In each round of the tournament, the remaining contestants form pairs which compete, the winners progressing to the next round and the losers being eliminated. The payoff received depends upon how far the player has progressed and a cost is incurred only when it is defeated. We only consider strategies in which individuals are constrained to adopt a fixed play throughout the successive rounds. The case where individuals can vary their choice of behaviour from round to round will be treated elsewhere. The complexity of the system is investigated and illustrated both by special cases and numerical examples.
Abstract of query paper
Cite abstracts
29322
29321
This paper introduces a model based upon games on an evolving network, and develops three clustering algorithms according to it. In the clustering algorithms, data points for clustering are regarded as players who can make decisions in games. On the network describing relationships among data points, an edge-removing-and-rewiring (ERR) function is employed to explore in a neighborhood of a data point, which removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. As such, the connections among data points vary over time. During the evolution of network, some strategies are spread in the network. As a consequence, clusters are formed automatically, in which data points with the same evolutionarily stable strategy are collected as a cluster, so the number of evolutionarily stable strategies indicates the number of clusters. Moreover, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.
Every form of behavior is shaped by trial and error. Such stepwise adaptation can occur through individual learning or through natural selection, the basis of evolution. Since the work of Maynard Smith and others, it has been realized how game theory can model this process. Evolutionary game theory replaces the static solutions of classical game theory by a dynamical approach centered not on the concept of rational players but on the population dynamics of behavioral programs. In this book the authors investigate the nonlinear dynamics of the self-regulation of social and economic behavior, and of the closely related interactions among species in ecological communities. Replicator equations describe how successful strategies spread and thereby create new conditions that can alter the basis of their success, i.e., to enable us to understand the strategic and genetic foundations of the endless chronicle of invasions and extinctions that punctuate evolution. In short, evolutionary game theory describes when to escalate a conflict, how to elicit cooperation, why to expect a balance of the sexes, and how to understand natural selection in mathematical terms. Comprehensive treatment of ecological and game theoretic dynamics Invasion dynamics and permanence as key concepts Explanation in terms of games of things like competition between species Cooperation in organisms, whether bacteria or primates, has been a difficulty for evolutionary theory since Darwin. On the assumption that interactions between pairs of individuals occur on a probabilistic basis, a model is developed based on the concept of an evolutionarily stable strategy in the context of the Prisoner's Dilemma game. Deductions from the model, and the results of a computer tournament show how cooperation based on reciprocity can get started in an asocial world, can thrive while interacting with a wide range of other strategies, and can resist invasion once fully established. Potential applications include specific aspects of territoriality, mating, and disease.
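The population dynamics referred to above can be made concrete with a generic discrete-time replicator simulation (not code from any of the cited works); the Hawk-Dove payoffs below are a textbook example whose mixed evolutionarily stable strategy plays Hawk with probability V/C.

    import numpy as np

    def replicator(A, x0, steps=2000, dt=0.01):
        # Euler simulation of replicator dynamics:
        # x_i' = x_i * ((A x)_i - x^T A x) for payoff matrix A.
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            fitness = A @ x
            x = x + dt * x * (fitness - x @ fitness)
            x = np.clip(x, 0, None)
            x = x / x.sum()
        return x

    # Hawk-Dove with value V=2, cost C=4: the mixed ESS plays Hawk with prob V/C = 0.5.
    V, C = 2.0, 4.0
    A = np.array([[(V - C) / 2, V],
                  [0.0,         V / 2]])
    print(replicator(A, x0=[0.9, 0.1]))   # converges near [0.5, 0.5]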
Abstract of query paper
Cite abstracts
29323
29322
Imperfect channel state information degrades the performance of multiple-input multiple-output (MIMO) communications; its effects on single-user (SU) and multiuser (MU) MIMO transmissions are quite different. In particular, MU-MIMO suffers from residual interuser interference due to imperfect channel state information while SU-MIMO only suffers from a power loss. This paper compares the throughput loss of both SU and MU-MIMO in the broadcast channel due to delay and channel quantization. Accurate closed-form approximations are derived for achievable rates for both SU and MU-MIMO. It is shown that SU-MIMO is relatively robust to delayed and quantized channel information, while MU-MIMO with zero-forcing precoding loses its spatial multiplexing gain with a fixed delay or fixed codebook size. Based on derived achievable rates, a mode switching algorithm is proposed, which switches between SU and MU-MIMO modes to improve the spectral efficiency based on average signal-to-noise ratio (SNR), normalized Doppler frequency, and the channel quantization codebook size. The operating regions for SU and MU modes with different delays and codebook sizes are determined, and they can be used to select the preferred mode. It is shown that the MU mode is active only when the normalized Doppler frequency is very small, and the codebook size is large.
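A schematic illustration of such a mode-switching rule, using generic textbook-style rate proxies rather than the paper's closed-form approximations: the quantization term 2^(-B/(Nt-1)), the delay term 1-rho^2, and the SU/MU rate expressions below are stand-in assumptions, not the derived results.

    import math

    def select_mode(snr_db, rho, feedback_bits, nt=4):
        # Toy SU vs MU-ZF selector:
        #  - quantization error shrinks like 2^(-B/(Nt-1)) with B feedback bits,
        #  - delayed CSI contributes roughly 1 - rho^2 of residual error,
        #  - MU zero-forcing sees that error as interference scaled by SNR,
        #  - SU beamforming only loses power, so it keeps growing with SNR.
        snr = 10 ** (snr_db / 10)
        csi_error = 2 ** (-feedback_bits / (nt - 1)) + (1 - rho ** 2)
        su_rate = math.log2(1 + nt * snr)
        mu_rate = nt * math.log2(1 + snr / (1 + snr * csi_error))
        return ("MU" if mu_rate > su_rate else "SU", round(su_rate, 2), round(mu_rate, 2))

    for snr_db in (0, 10, 20, 30, 40):
        print(snr_db, "dB ->", select_mode(snr_db, rho=0.99, feedback_bits=10))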
Multiple transmit antennas in a downlink channel can provide tremendous capacity (i.e., multiplexing) gains, even when receivers have only single antennas. However, receiver and transmitter channel state information is generally required. In this correspondence, a system where each receiver has perfect channel knowledge, but the transmitter only receives quantized information regarding the channel instantiation is analyzed. The well-known zero-forcing transmission technique is considered, and simple expressions for the throughput degradation due to finite-rate feedback are derived. A key finding is that the feedback rate per mobile must be increased linearly with the signal-to-noise ratio (SNR) (in decibels) in order to achieve the full multiplexing gain. This is in sharp contrast to point-to-point multiple-input multiple-output (MIMO) systems, in which it is not necessary to increase the feedback rate as a function of the SNR. A multiple antenna broadcast channel (multiple transmit antennas, one antenna at each receiver) with imperfect channel state information available to the transmitter is considered. If perfect channel state information is available to the transmitter, then a multiplexing gain equal to the minimum of the number of transmit antennas and the number of receivers is achievable. On the other hand, if each receiver has identical fading statistics and the transmitter has no channel information, the maximum achievable multiplexing gain is only one. The focus of this paper is on determination of necessary and sufficient conditions on the rate at which CSIT quality must improve with SNR in order for full multiplexing gain to be achievable. The main result of the paper shows that scaling CSIT quality such that the CSIT error is dominated by the inverse of the SNR is both necessary and sufficient to achieve the full multiplexing gain as well as a bounded rate offset (i.e., the sum rate has no negative sub-logarithmic terms) in the compound channel setting. Block diagonalization is a linear precoding technique for the multiple antenna broadcast (downlink) channel that involves transmission of multiple data streams to each receiver such that no multi-user interference is experienced at any of the receivers. This low-complexity scheme operates only a few dB away from capacity but requires very accurate channel knowledge at the transmitter. We consider a limited feedback system where each receiver knows its channel perfectly, but the transmitter is only provided with a finite number of channel feedback bits from each receiver. Using a random quantization argument, we quantify the throughput loss due to imperfect channel knowledge as a function of the feedback level. The quality of channel knowledge must improve proportional to the SNR in order to prevent interference-limitations, and we show that scaling the number of feedback bits linearly with the system SNR is sufficient to maintain a bounded rate loss. Finally, we compare our quantization strategy to an analog feedback scheme and show the superiority of quantized feedback.
In this paper, we consider two different models of partial channel state information at the base station transmitter (CSIT) for multiple antenna broadcast channels: 1) the shape feedback model where the normalized channel vector of each user is available at the base station and 2) the limited feedback model where each user quantizes its channel vector according to a rotated codebook that is optimal in the sense of mean squared error and feeds back the codeword index. This paper is focused on characterizing the sum rate performance of both zero-forcing dirty paper coding (ZFDPC) systems and channel inversion (CI) systems under the given two partial CSIT models. Intuitively speaking, a system with shape feedback loses the sum rate gain of adaptive power allocation. However, shape feedback still provides enough channel knowledge for ZFDPC and CI to approach their own optimal throughput in the high signal-to-noise ratio (SNR) regime. As for limited feedback, we derive sum rate bounds for both signaling schemes and link their throughput performance to some basic properties of the quantization codebook. Interestingly, we find that limited feedback employing a fixed codebook leads to a sum rate ceiling for both schemes for asymptotically high SNR. We consider a MIMO fading broadcast channel and compute achievable ergodic rates when channel state information is acquired at the receivers via downlink training and explicit channel feedback is performed to provide transmitter channel state information (CSIT). Both “analog” and quantized (digital) channel feedback are analyzed, and digital feedback is shown to be potentially superior when the number of feedback channel uses per channel coefficient is larger than 1. Also, we show that by proper design of the digital feedback link, errors in the feedback have a relatively minor effect even if simple uncoded modulation is used on the feedback channel. We extend our analysis to the case of fading MIMO Multiaccess Channel (MIMO-MAC) in the feedback link, as well as to the case of a time-varying channel and feedback delay. We show that by exploiting the MIMO-MAC nature of the uplink channel, a fully scalable system with both downlink multiplexing gain and feedback redundancy proportional to the number of base station antennas can be achieved. Furthermore, the feedback strategy is optimized by a non-trivial combination of time-division and space-division multiple-access. For the case of delayed feedback, we show that in the realistic case where the fading process has (normalized) maximum Doppler frequency shift 0 <= F < 1/2, a fraction 1 - 2F of the optimal multiplexing gain is achievable. The general conclusion of this work is that very significant downlink throughput is achievable with simple and efficient channel state feedback, provided that the feedback link is properly designed. We analyze the sum-rate performance of a multi-antenna downlink system carrying more users than transmit antennas, with partial channel knowledge at the transmitter due to finite rate feedback. In order to exploit multiuser diversity, we show that the transmitter must have, in addition to directional information, information regarding the quality of each channel. Such information should reflect both the channel magnitude and the quantization error. Expressions for the SINR distribution and the sum-rate are derived, and tradeoffs between the number of feedback bits, the number of users, and the SNR are observed. In particular, for a target performance, having more users reduces feedback load.
Abstract of query paper
Cite abstracts
29324
29323
Imperfect channel state information degrades the performance of multiple-input multiple-output (MIMO) communications; its effects on single-user (SU) and multiuser (MU) MIMO transmissions are quite different. In particular, MU-MIMO suffers from residual interuser interference due to imperfect channel state information while SU-MIMO only suffers from a power loss. This paper compares the throughput loss of both SU and MU-MIMO in the broadcast channel due to delay and channel quantization. Accurate closed-form approximations are derived for achievable rates for both SU and MU-MIMO. It is shown that SU-MIMO is relatively robust to delayed and quantized channel information, while MU-MIMO with zero-forcing precoding loses its spatial multiplexing gain with a fixed delay or fixed codebook size. Based on derived achievable rates, a mode switching algorithm is proposed, which switches between SU and MU-MIMO modes to improve the spectral efficiency based on average signal-to-noise ratio (SNR), normalized Doppler frequency, and the channel quantization codebook size. The operating regions for SU and MU modes with different delays and codebook sizes are determined, and they can be used to select the preferred mode. It is shown that the MU mode is active only when the normalized Doppler frequency is very small, and the codebook size is large.
Multiple transmit antennas in a downlink channel can provide tremendous capacity (i.e., multiplexing) gains, even when receivers have only single antennas. However, receiver and transmitter channel state information is generally required. In this correspondence, a system where each receiver has perfect channel knowledge, but the transmitter only receives quantized information regarding the channel instantiation is analyzed. The well-known zero-forcing transmission technique is considered, and simple expressions for the throughput degradation due to finite-rate feedback are derived. A key finding is that the feedback rate per mobile must be increased linearly with the signal-to-noise ratio (SNR) (in decibels) in order to achieve the full multiplexing gain. This is in sharp contrast to point-to-point multiple-input multiple-output (MIMO) systems, in which it is not necessary to increase the feedback rate as a function of the SNR. Block diagonalization is a linear precoding technique for the multiple antenna broadcast (downlink) channel that involves transmission of multiple data streams to each receiver such that no multi-user interference is experienced at any of the receivers. This low-complexity scheme operates only a few dB away from capacity but requires very accurate channel knowledge at the transmitter. We consider a limited feedback system where each receiver knows its channel perfectly, but the transmitter is only provided with a finite number of channel feedback bits from each receiver. Using a random quantization argument, we quantify the throughput loss due to imperfect channel knowledge as a function of the feedback level. The quality of channel knowledge must improve proportional to the SNR in order to prevent interference-limitations, and we show that scaling the number of feedback bits linearly with the system SNR is sufficient to maintain a bounded rate loss. Finally, we compare our quantization strategy to an analog feedback scheme and show the superiority of quantized feedback. A multiple antenna broadcast channel (multiple transmit antennas, one antenna at each receiver) with imperfect channel state information available to the transmitter is considered. If perfect channel state information is available to the transmitter, then a multiplexing gain equal to the minimum of the number of transmit antennas and the number of receivers is achievable. On the other hand, if each receiver has identical fading statistics and the transmitter has no channel information, the maximum achievable multiplexing gain is only one. The focus of this paper is on determination of necessary and sufficient conditions on the rate at which CSIT quality must improve with SNR in order for full multiplexing gain to be achievable. The main result of the paper shows that scaling CSIT quality such that the CSIT error is dominated by the inverse of the SNR is both necessary and sufficient to achieve the full multiplexing gain as well as a bounded rate offset (i.e., the sum rate has no negative sub-logarithmic terms) in the compound channel setting.
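The "feedback must scale linearly with SNR in dB" statement above turns into a simple rule of thumb, sketched below with our own function name; the factor 1/3 is just log2(10)/10 rounded, i.e. B on the order of (Nt-1)*log2(SNR).

    import math

    def feedback_bits_for_bounded_loss(nt, snr_db):
        # Per-user feedback bits needed (roughly) to keep zero-forcing with
        # quantized CSIT within a bounded gap of perfect-CSIT performance:
        # B ~ (Nt - 1) * log2(SNR) ~ (Nt - 1) * SNR_dB / 3.
        return math.ceil((nt - 1) * snr_db / 3)

    for snr_db in (5, 10, 20, 30):
        print(snr_db, "dB ->", feedback_bits_for_bounded_loss(4, snr_db), "bits per user")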
Abstract of query paper
Cite abstracts
29325
29324
The need for domain ontologies in mission critical applications such as risk management and hazard identification is becoming more and more pressing. Most research on ontology learning conducted in the academia remains unrealistic for real-world applications. One of the main problems is the dependence on non-incremental, rare knowledge and textual resources, and manually-crafted patterns and rules. This paper reports work in progress aiming to address such undesirable dependencies during ontology construction. Initial experiments using a working prototype of the system revealed promising potentials in automatically constructing high-quality domain ontologies using real-world texts.
Standard alphabetical procedures for organizing lexical information put together words that are spelled alike and scatter words with similar or related meanings haphazardly through the list. Unfortunately, there is no obvious alternative, no other simple way for lexicographers to keep track of what has been done or for readers to find the word they are looking for. But a frequent objection to this solution is that finding things on an alphabetical list can be tedious and time-consuming. Many people who would like to refer to a dictionary decide not to bother with it because finding the information would interrupt their work and break their train of thought. Traditional text mining techniques transform free text into flat bags of words representation, which does not preserve sufficient semantics for the purpose of knowledge discovery. In this paper, we present a two-step procedure to mine generalized associations of semantic relations conveyed by the textual content of Web documents. First, RDF (resource description framework) metadata representing semantic relations are extracted from raw text using a myriad of natural language processing techniques. The relation extraction process also creates a term taxonomy in the form of a sense hierarchy inferred from WordNet. Then, a novel generalized association pattern mining algorithm (GP-Close) is applied to discover the underlying relation association patterns on RDF metadata. For pruning the large number of redundant overgeneralized patterns in relation pattern search space, the GP-Close algorithm adopts the notion of generalization closure for systematic overgeneralization reduction. The efficacy of our approach is demonstrated through empirical experiments conducted on an online database of terrorist activities. We address the issue of extracting implicit and explicit relationships between entities in biomedical text. We argue that entities seldom occur in text in their simple form and that relationships in text relate the modified, complex forms of entities with each other. We present a rule-based method for (1) extraction of such complex entities and (2) relationships between them and (3) the conversion of such relationships into RDF. Furthermore, we present results that clearly demonstrate the utility of the generated RDF in discovering knowledge from text corpora by means of locating paths composed of the extracted relationships. The WordNet lexical database is now quite large and offers broad coverage of general lexical relations in English. As is evident in this volume, WordNet has been employed as a resource for many applications in natural language processing (NLP) and information retrieval (IR). However, many potentially useful lexical relations are currently missing from WordNet. Some of these relations, while useful for NLP and IR applications, are not necessarily appropriate for a general, domain-independent lexical database. For example, WordNet's coverage of proper nouns is rather sparse, but proper nouns are often very important in application tasks. The standard way lexicographers find new relations is to look through huge lists of concordance lines.
However, culling through long lists of concordance lines can be a rather daunting task (Church and Hanks, 1990), so a method that picks out those lines that are very likely to hold relations of interest should be an improvement over more traditional techniques. This chapter describes a method for the automatic discovery of WordNet-style lexico-semantic relations by searching for corresponding lexico-syntactic patterns in large text collections. Large text corpora are now widely available, and can be viewed as vast resources from which to mine lexical, syntactic, and semantic information. This idea is reminiscent of what is known as “data mining” in the artificial intelligence literature (Fayyad and Uthurusamy, 1996), however, in this case the ore is raw text rather than tables of numerical data. The Lexico-Syntactic Pattern Extraction (LSPE) method is meant to be useful as an automated or semi-automated aid for lexicographers and builders of domain-dependent knowledge-bases. The LSPE technique is light-weight; it does not require a knowledge base or complex interpretation modules in order to suggest new WordNet relations.
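A crude, self-contained version of the lexico-syntactic pattern idea ("X such as Y, Z and W"), using plain regular expressions instead of the part-of-speech tagging and chunking a real LSPE system would use; the patterns and the head-noun heuristic are simplifications.

    import re

    # Hearst-style patterns ("NP such as NP, NP and NP") as crude regexes over plain text.
    PATTERNS = [
        re.compile(r"(\w[\w ]*?)\s*,?\s+such as\s+([\w ,]+?(?:and|or)\s+[\w ]+)", re.I),
        re.compile(r"(\w[\w ]*?)\s*,?\s+including\s+([\w ,]+?(?:and|or)\s+[\w ]+)", re.I),
    ]

    def extract_hyponyms(text):
        # Return (hyponym, hypernym) pairs suggested by the patterns.
        pairs = []
        for pat in PATTERNS:
            for m in pat.finditer(text):
                hypernym = m.group(1).strip().split()[-1]     # crude head-noun guess
                for hyponym in re.split(r",|\band\b|\bor\b", m.group(2)):
                    if hyponym.strip():
                        pairs.append((hyponym.strip(), hypernym))
        return pairs

    text = ("The plant uses hazardous chemicals such as chlorine, ammonia and benzene. "
            "Natural hazards, including floods and earthquakes, are also assessed.")
    print(extract_hyponyms(text))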
Abstract of query paper
Cite abstracts
29326
29325
We give a complexity dichotomy for the problem of computing the partition function of a weighted Boolean constraint satisfaction problem. Such a problem is parameterized by a set of rational-valued functions, which generalize constraints. Each function assigns a weight to every assignment to a set of Boolean variables. Our dichotomy extends previous work in which the weight functions were restricted to being non-negative. We represent a weight function as a product of the form (-1)^s g, where the polynomial s determines the sign of the weight and the non-negative function g determines its magnitude. We show that the problem of computing the partition function (the sum of the weights of all possible variable assignments) is in polynomial time if either every weight function can be defined by a "pure affine" magnitude with a quadratic sign polynomial or every function can be defined by a magnitude of "product type" with a linear sign polynomial. In all other cases, computing the partition function is FP^#P-complete.
This paper gives a dichotomy theorem for the complexity of computing the partition function of an instance of a weighted Boolean constraint satisfaction problem. The problem is parameterized by a finite set F of nonnegative functions that may be used to assign weights to the configurations (feasible solutions) of a problem instance. Classical constraint satisfaction problems correspond to the special case of 0,1-valued functions. We show that computing the partition function, i.e., the sum of the weights of all configurations, is FP^#P-complete unless either (1) every function in F is of “product type,” or (2) every function in F is “pure affine.” In the remaining cases, computing the partition function is in P.
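For concreteness, a brute-force evaluation of the partition function being classified above, including one signed weight written in the (-1)^s * g form used in the query abstract; this enumeration is exponential in the number of variables, which is exactly what the dichotomy theorems ask whether one can avoid.

    from itertools import product

    def partition_function(n_vars, constraints):
        # Sum over all Boolean assignments of the product of the constraint weights.
        # `constraints` is a list of (scope, weight_fn), with weight_fn mapping a
        # tuple of bits to a rational weight (possibly negative).
        total = 0
        for assignment in product((0, 1), repeat=n_vars):
            w = 1
            for scope, fn in constraints:
                w *= fn(tuple(assignment[v] for v in scope))
            total += w
        return total

    # Unary magnitude g(x) = 2^x and a signed binary weight (-1)^(x*y)
    # (magnitude 1 with a quadratic sign polynomial).
    unary = lambda t: 2 ** t[0]
    signed_pair = lambda t: (-1) ** (t[0] * t[1])
    print(partition_function(2, [((0,), unary), ((1,), unary), ((0, 1), signed_pair)]))  # 1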
Abstract of query paper
Cite abstracts
29327
29326
We give a complexity dichotomy for the problem of computing the partition function of a weighted Boolean constraint satisfaction problem. Such a problem is parameterized by a set of rational-valued functions, which generalize constraints. Each function assigns a weight to every assignment to a set of Boolean variables. Our dichotomy extends previous work in which the weight functions were restricted to being non-negative. We represent a weight function as a product of the form (-1)^s g, where the polynomial s determines the sign of the weight and the non-negative function g determines its magnitude. We show that the problem of computing the partition function (the sum of the weights of all possible variable assignments) is in polynomial time if either every weight function can be defined by a "pure affine" magnitude with a quadratic sign polynomial or every function can be defined by a magnitude of "product type" with a linear sign polynomial. In all other cases, computing the partition function is FP^#P-complete.
We give a complexity theoretic classification of the counting versions of so-called H-colouring problems for graphs H that may have multiple edges between the same pair of vertices. More generally, we study the problem of computing a weighted sum of homomorphisms to a weighted graph H. The problem has two interesting alternative formulations: first, it is equivalent to computing the partition function of a spin system as studied in statistical physics. And second, it is equivalent to counting the solutions to a constraint satisfaction problem whose constraint language consists of two equivalence relations. In a nutshell, our result says that the problem is in polynomial time if the adjacency matrix of H has row rank 1, and #P-hard otherwise. Partition functions, also known as homomorphism functions, form a rich family of graph invariants that contain combinatorial invariants such as the number of k-colorings or the number of independent sets of a graph and also the partition functions of certain “spin glass” models of statistical physics such as the Ising model. Building on earlier work by Dyer and Greenhill [Random Structures Algorithms, 17 (2000), pp. 260-289] and Bulatov and Grohe [Theoret. Comput. Sci., 348 (2005), pp. 148-186], we completely classify the computational complexity of partition functions. Our main result is a dichotomy theorem stating that every partition function is either computable in polynomial time or #P-complete. Partition functions are described by symmetric matrices with real entries, and we prove that it is decidable in polynomial time in terms of the matrix whether a given partition function is in polynomial time or #P-complete. While in general it is very complicated to give an explicit algebraic or combinatorial description of the tractable cases, for partition functions described by Hadamard matrices (these turn out to be central in our proofs) we obtain a simple algebraic tractability criterion, which says that the tractable cases are those “representable” by a quadratic polynomial over the field GF(2).
Abstract of query paper
Cite abstracts
29328
29327
We address the problem of finding a "best" deterministic query answer to a query over a probabilistic database. For this purpose, we propose the notion of a consensus world (or a consensus answer) which is a deterministic world (answer) that minimizes the expected distance to the possible worlds (answers). This problem can be seen as a generalization of the well-studied inconsistent information aggregation problems (e.g. rank aggregation) to probabilistic databases. We consider this problem for various types of queries including SPJ queries, top-k queries, group-by aggregate queries, and clustering. For different distance metrics, we obtain polynomial time optimal or approximation algorithms for computing the consensus answers (or prove NP-hardness). Most of our results are for a general probabilistic database model, called the and/xor tree model, which significantly generalizes previous probabilistic database models like x-tuples and block-independent disjoint models, and is of independent interest.
It is often desirable to represent in a database, entities whose properties cannot be deterministically classified. The authors develop a data model that includes probabilities associated with the values of the attributes. The notion of missing probabilities is introduced for partially specified probability distributions. This model offers a richer descriptive language allowing the database to more accurately reflect the uncertain real world. Probabilistic analogs to the basic relational operators are defined and their correctness is studied. A set of operators that have no counterpart in conventional relational systems is presented. Several real-world applications need to effectively manage and reason about large amounts of data that are inherently uncertain. For instance, pervasive computing applications must constantly reason about volumes of noisy sensory readings for a variety of reasons, including motion prediction and human behavior modeling. Such probabilistic data analyses require sophisticated machine-learning tools that can effectively model the complex spatio-temporal correlation patterns present in uncertain sensory data. Unfortunately, to date, most existing approaches to probabilistic database systems have relied on somewhat simplistic models of uncertainty that can be easily mapped onto existing relational architectures: Probabilistic information is typically associated with individual data tuples, with only limited or no support for effectively capturing and reasoning about complex data correlations. In this paper, we introduce BayesStore, a novel probabilistic data management architecture built on the principle of handling statistical models and probabilistic inference tools as first-class citizens of the database system. Adopting a machine-learning view, BayesStore employs concise statistical relational models to effectively encode the correlation patterns between uncertain data, and promotes probabilistic inference and statistical model manipulation as part of the standard DBMS operator repertoire to support efficient and sound query processing. We present BayesStore's uncertainty model based on a novel, first-order statistical model, and we redefine traditional query processing operators, to manipulate the data and the probabilistic models of the database in an efficient manner. Finally, we validate our approach, by demonstrating the value of exploiting data correlations during query processing, and by evaluating a number of optimizations which significantly accelerate query processing. Incomplete information arises naturally in numerous data management applications. Recently, several researchers have studied query processing in the context of incomplete information. Most work has combined the syntax of a traditional query language like relational algebra with a nonstandard semantics such as certain or ranked possible answers. There are now also languages with special features to deal with uncertainty. However, to the standards of the data management community, to date no language proposal has been made that can be considered a natural analog to SQL or relational algebra for the case of incomplete information. In this paper we propose such a language, World-set Algebra, which satisfies the robustness criteria and analogies to relational algebra that we expect. The language supports the contemplation on alternatives and can thus map from a complete database to an incomplete one comprising several possible worlds. 
We show that World-set Algebra is conservative over relational algebra in the sense that any query that maps from a complete database to a complete database (a complete-to-complete query) is equivalent to a relational algebra query. Moreover, we give an efficient algorithm for effecting this translation. We then study algebraic query optimization of such queries. We argue that query languages with explicit constructs for handling uncertainty allow for the more natural and simple expression of many real-world decision support queries. The results of this paper not only suggest a language for specifying queries in this way, but also allow for their efficient evaluation in any relational database management system. This paper explores an inherent tension in modeling and querying uncertain data: simple, intuitive representations of uncertain data capture many application requirements, but these representations are generally incomplete―standard operations over the data may result in unrepresentable types of uncertainty. Complete models are theoretically attractive, but they can be nonintuitive and more complex than necessary for many applications. To address this tension, we propose a two-layer approach to managing uncertain data: an underlying logical model that is complete, and one or more working models that are easier to understand, visualize, and query, but may lose some information. We explore the space of incomplete working models, place several of them in a strict hierarchy based on expressive power, and study their closure properties. We describe how the two-layer approach is being used in our prototype DBMS for uncertain data, and we identify a number of interesting open problems to fully realize the approach. Probabilistic databases have received considerable attention recently due to the need for storing uncertain data produced by many real world applications. The widespread use of probabilistic databases is hampered by two limitations: (1) current probabilistic databases make simplistic assumptions about the data (e.g., complete independence among tuples) that make it difficult to use them in applications that naturally produce correlated data, and (2) most probabilistic databases can only answer a restricted subset of the queries that can be expressed using traditional query languages. We address both these limitations by proposing a framework that can represent not only probabilistic tuples, but also correlations that may be present among them. Our proposed framework naturally lends itself to the possible world semantics thus preserving the precise query semantics extant in current probabilistic databases. We develop an efficient strategy for query evaluation over such probabilistic databases by casting the query processing problem as an inference problem in an appropriately constructed probabilistic graphical model. We present several optimizations specific to probabilistic databases that enable efficient query evaluation. We validate our approach by presenting an experimental evaluation that illustrates the effectiveness of our techniques at answering various queries using real and synthetic datasets. This paper concerns the semantics of Codd's relational model of data. Formulated are precise conditions that should be satisfied in a semantically meaningful extension of the usual relational operators, such as projection, selection, union, and join, from operators on relations to operators on tables with “null values” of various kinds allowed. 
These conditions require that the system be safe in the sense that no incorrect conclusion is derivable by using a specified subset Ω of the relational operators; and that it be complete in the sense that all valid conclusions expressible by relational expressions using operators in Ω are in fact derivable in this system. Two such systems of practical interest are shown. The first, based on the usual Codd's null values, supports projection and selection. The second, based on many different (“marked”) null values or variables allowed to appear in a table, is shown to correctly support projection, positive selection (with no negation occurring in the selection condition), union, and renaming of attributes, which allows for processing arbitrary conjunctive queries. A very desirable property enjoyed by this system is that all relational operators on tables are performed in exactly the same way as in the case of the usual relations. A third system, mainly of theoretical interest, supporting projection, selection, union, join, and renaming, is also discussed. Under a so-called closed world assumption, it can also handle the operator of difference. It is based on a device called a conditional table and is crucial to the proof of the correctness of the second system. All systems considered allow for relational expressions containing arbitrarily many different relation symbols, and no form of the universal relation assumption is required. Probability theory is mathematically the best understood paradigm for modeling and manipulating uncertain information. Probabilities of complex events can be computed from those of basic events on which they depend, using any of a number of strategies. Which strategy is appropriate depends very much on the known interdependencies among the events involved. Previous work on probabilistic databases has assumed a fixed and restrictive combination strategy (e.g., assuming all events are pairwise independent). In this article, we characterize, using postulates, whole classes of strategies for conjunction, disjunction, and negation, meaningful from the viewpoint of probability theory. (1) We propose a probabilistic relational data model and a generic probabilistic relational algebra that neatly captures various strategies satisfying the postulates, within a single unified framework. (2) We show that as long as the chosen strategies can be computed in polynomial time, queries in the positive fragment of the probabilistic relational algebra have essentially the same data complexity as classical relational algebra. (3) We establish various containments and equivalences between algebraic expressions, similar in spirit to those in classical algebra. (4) We develop algorithms for maintaining materialized probabilistic views. (5) Based on these ideas, we have developed a prototype probabilistic database system called ProbView on top of Dbase V.0. We validate our complexity results with experiments and show that rewriting certain types of queries to other equivalent forms often yields substantial savings. Trio is a new database system that manages not only data, but also the accuracy and lineage of the data. Approximate (uncertain, probabilistic, incomplete, fuzzy, and imprecise!) databases have been proposed in the past, and the lineage problem also has been studied. 
The goals of the Trio project are to distill previous work into a simple and usable model, design a query language as an understandable extension to SQL, and most importantly build a working system---a system that augments conventional data management with both accuracy and lineage as an integral part of the data. This paper provides numerous motivating applications for Trio and lays out preliminary plans for the data model, query language, and prototype system. We present a probabilistic relational algebra (PRA) which is a generalization of standard relational algebra. In PRA, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Based on intensional semantics, the tuple weights of the result of a PRA expression always conform to the underlying probabilistic model. We also show for which expressions extensional semantics yields the same results. Furthermore, we discuss complexity issues and indicate possibilities for optimization. With regard to databases, the approach allows for representing imprecise attribute values, whereas for information retrieval, probabilistic document indexing and probabilistic search term weighting can be modeled. We introduce the concept of vague predicates which yield probabilistic weights instead of Boolean values, thus allowing for queries with vague selection conditions. With these features, PRA implements uncertainty and vagueness in combination with the relational model.
Abstract of query paper
Cite abstracts
29329
29328
We address the problem of finding a "best" deterministic query answer to a query over a probabilistic database. For this purpose, we propose the notion of a consensus world (or a consensus answer) which is a deterministic world (answer) that minimizes the expected distance to the possible worlds (answers). This problem can be seen as a generalization of the well-studied inconsistent information aggregation problems (e.g. rank aggregation) to probabilistic databases. We consider this problem for various types of queries including SPJ queries, top-k queries, group-by aggregate queries, and clustering. For different distance metrics, we obtain polynomial time optimal or approximation algorithms for computing the consensus answers (or prove NP-hardness). Most of our results are for a general probabilistic database model, called the and/xor tree model, which significantly generalizes previous probabilistic database models like x-tuples and block-independent disjoint models, and is of independent interest.
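As a small, hedged illustration of the consensus idea above: for set-valued answers under the symmetric-difference distance, including a tuple costs (1 − p) in expectation and excluding it costs p, so the expected distance is minimized by keeping exactly the tuples whose marginal probability exceeds 1/2. The Python sketch below assumes the tuple marginals are already available (how they are obtained depends on the query and the probabilistic model) and is not code from the paper.

def consensus_set(tuple_probs, threshold=0.5):
    """Deterministic answer minimizing the expected symmetric-difference
    distance to the random (possible-world) answer, given each tuple's
    marginal probability of appearing in the answer."""
    return {t for t, p in tuple_probs.items() if p > threshold}

def expected_symmetric_difference(answer, tuple_probs):
    """Expected distance of a fixed answer set to the random answer."""
    return sum((1 - p) if t in answer else p for t, p in tuple_probs.items())

probs = {"t1": 0.9, "t2": 0.55, "t3": 0.2}
best = consensus_set(probs)
print(best, expected_symmetric_difference(best, probs))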
When dealing with massive quantities of data, top-k queries are a powerful technique for returning only the k most relevant tuples for inspection, based on a scoring function. The problem of efficiently answering such ranking queries has been studied and analyzed extensively within traditional database settings. The importance of the top-k is perhaps even greater in probabilistic databases, where a relation can encode exponentially many possible worlds. There have been several recent attempts to propose definitions and algorithms for ranking queries over probabilistic data. However, these all lack many of the intuitive properties of a top-k over deterministic data. Specifically, we define a number of fundamental properties, including exact-k, containment, unique-rank, value-invariance, and stability, which are all satisfied by ranking queries on certain data. We argue that all these conditions should also be fulfilled by any reasonable definition for ranking uncertain data. Unfortunately, none of the existing definitions is able to achieve this. To remedy this shortcoming, this work proposes an intuitive new approach of expected rank. This uses the well-founded notion of the expected rank of each tuple across all possible worlds as the basis of the ranking. We are able to prove that, in contrast to all existing approaches, the expected rank satisfies all the required properties for a ranking query. We provide efficient solutions to compute this ranking across the major models of uncertain data, such as attribute-level and tuple-level uncertainty. For an uncertain relation of N tuples, the processing cost is O(N logN)—no worse than simply sorting the relation. In settings where there is a high cost for generating each tuple in turn, we provide pruning techniques based on probabilistic tail bounds that can terminate the search early and guarantee that the top-k has been found. Finally, a comprehensive experimental study confirms the effectiveness of our approach. We formulate three intuitive semantic properties for top-k queries in probabilistic databases, and propose Global-Topk query semantics which satisfies all of them. We provide a dynamic programming algorithm to evaluate top-k queries under Global-Topk in simple probabilistic relations. For general probabilistic relations, we show a polynomial reduction to the simple case. Our analysis shows that the complexity of query evaluation is linear in k and at most quadratic in database size. Uncertainty pervades many domains in our lives. Current real-life applications, e.g., location tracking using GPS devices or cell phones, multimedia feature extraction, and sensor data management, deal with different kinds of uncertainty. Finding the nearest neighbor objects to a given query point is an important query type in these applications. In this paper, we study the problem of finding objects with the highest marginal probability of being the nearest neighbors to a query object. We adopt a general uncertainty model allowing for data and query uncertainty. Under this model, we define new query semantics, and provide several efficient evaluation algorithms. We analyze the cost factors involved in query evaluation, and present novel techniques to address the trade-offs among these factors. We give multiple extensions to our techniques including handling dependencies among data objects, and answering threshold queries. We conduct an extensive experimental study to evaluate our techniques on both real and synthetic data. 
Top-k processing in uncertain databases is semantically and computationally different from traditional top-k processing. The interplay between score and uncertainty makes traditional techniques inapplicable. We introduce new probabilistic formulations for top-k queries. Our formulations are based on "marriage" of traditional top-k semantics and possible worlds semantics. In the light of these formulations, we construct a framework that encapsulates a state space model and efficient query processing techniques to tackle the challenges of uncertain data settings. We prove that our techniques are optimal in terms of the number of accessed tuples and materialized search states. Our experiments show the efficiency of our techniques under different data distributions with orders of magnitude improvement over naive materialization of possible worlds. Uncertain data is inherent in a few important applications such as environmental surveillance and mobile object tracking. Top-k queries (also known as ranking queries) are often natural and useful in analyzing uncertain data in those applications. In this paper, we study the problem of answering probabilistic threshold top-k queries on uncertain data, which computes uncertain records taking a probability of at least p to be in the top-k list where p is a user specified probability threshold. We present an efficient exact algorithm, a fast sampling algorithm, and a Poisson approximation based algorithm. An empirical study using real and synthetic data sets verifies the effectiveness of probabilistic threshold top-k queries and the efficiency of our methods. There is an increasing quantity of data with uncertainty arising from applications such as sensor network measurements, record linkage, and as output of mining algorithms. This uncertainty is typically formalized as probability density functions over tuple values. Beyond storing and processing such data in a DBMS, it is necessary to perform other data analysis tasks such as data mining. We study the core mining problem of clustering on uncertain data, and define appropriate natural generalizations of standard clustering optimization criteria. Two variations arise, depending on whether a point is automatically associated with its optimal center, or whether it must be assigned to a fixed cluster no matter where it is actually located. For uncertain versions of k-means and k-median, we show reductions to their corresponding weighted versions on data with no uncertainties. These are simple in the unassigned case, but require some care for the assigned version. Our most interesting results are for uncertain k-center, which generalizes both traditional k-center and k-median objectives. We show a variety of bicriteria approximation algorithms. One picks O(kε^(-1) log^2 n) centers and achieves a (1 + ε) approximation to the best uncertain k-centers. Another picks 2k centers and achieves a constant factor approximation. Collectively, these results are the first known guaranteed approximation algorithms for the problems of clustering uncertain data.
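To make the expected-rank definition discussed above concrete, here is a deliberately naive Python sketch for a toy relation with independent tuple-level uncertainty: it enumerates every possible world explicitly (exponential, for illustration only; the point of the paper is that the same quantity can be computed in O(N log N)), and it fixes one common convention for the rank of a tuple that is absent from a world. The data, the independence assumption, and that convention are illustrative choices, not details taken from the papers.

from itertools import product

def expected_ranks(tuples):
    """Expected rank of each tuple across all possible worlds.
    `tuples` is a list of (score, probability) pairs, tuples independent.
    In a world, rank = number of present tuples with a strictly higher
    score; an absent tuple is assigned rank equal to the world's size."""
    n = len(tuples)
    ranks = [0.0] * n
    for world in product([0, 1], repeat=n):
        w_prob = 1.0
        for (_, p), present in zip(tuples, world):
            w_prob *= p if present else (1 - p)
        size = sum(world)
        for i, (score_i, _) in enumerate(tuples):
            if world[i]:
                r = sum(1 for j, (score_j, _) in enumerate(tuples)
                        if world[j] and score_j > score_i)
            else:
                r = size
            ranks[i] += w_prob * r
    return ranks

data = [(100, 0.9), (90, 0.5), (80, 0.8)]
print(expected_ranks(data))   # report the k tuples with the smallest values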
Abstract of query paper
Cite abstracts
29330
29329
The paper introduces the notion of off-line justification for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper also extends this notion to provide justification of atoms during the computation of an answer set (on-line justification), and presents an integration of on-line justifications within the computation model of Smodels. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in ASP-PROLOG. (To appear in Theory and Practice of Logic Programming (TPLP))
As constraint logic programming matures and larger applications are built, an increased need arises for advanced development and debugging environments. Assertions are linguistic constructions which allow expressing properties of programs. Classical examples of assertions are type declarations. However, herein we are interested in supporting a more general setting [3, 1] in which, on one hand assertions can be of a more general nature, including properties which are statically undecidable, and, on the other, only a small number of assertions may be present in the program, i.e., the assertions are optional. In particular, we do not wish to limit the programming language or the language of assertions unnecessarily in order to make the assertions statically decidable. Consequently, the proposed framework needs to deal throughout with approximations [2]. The notion of program correctness with respect to an interpretation is defined for a class of programming languages. Under this definition, if a program terminates with an incorrect output then it contains an incorrect procedure. Algorithms for detecting incorrect procedures are developed. These algorithms formalize what experienced programmers may know already. A logic program implementation of these algorithms is described. Its performance suggests that the algorithms can be the backbone of debugging aids that go far beyond what is offered by current programming environments. Applications of algorithmic debugging to automatic program construction are explored.
Abstract of query paper
Cite abstracts
29331
29330
The paper introduces the notion of off-line justification for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper also extends this notion to provide justification of atoms during the computation of an answer set (on-line justification), and presents an integration of on-line justifications within the computation model of Smodels. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in ASP-PROLOG. (To appear in Theory and Practice of Logic Programming (TPLP))
This paper suggests an approach to the development of software testing and debugging automation tools based on precise program behavior models. The program behavior model is defined as a set of events (event trace) with two basic binary relations over events -- precedence and inclusion, and represents the temporal relationship between actions. A language for the computations over event traces is developed that provides a basis for assertion checking, debugging queries, execution profiles, and performance measurements. The approach is nondestructive, since assertion texts are separated from the target program source code and can be maintained independently. Assertions can capture the dynamic properties of a particular target program and can formalize the general knowledge of typical bugs and debugging strategies. An event grammar provides a sound basis for assertion language implementation via target program automatic instrumentation. An implementation architecture and preliminary experiments with a prototype assertion checker for the C programming language are discussed. Traces of program executions are a helpful source of information for program debugging. They, however, give a picture of program executions at such a low level that users often have difficulties to interpret the information. Opium, our extendable trace analyzer, is connected to a “standard” Prolog tracer. Opium is programmable and extendable. It provides a trace query language and abstract views of executions. Users can therefore examine program executions at the levels of abstraction which suit them. Opium has shown its capabilities to build abstract tracers and automated debugging facilities. This article describes in depth the trace query mechanism, from the model to its implementation. Characteristic examples are detailed. Extensions written so far on top of the trace query mechanism are listed. Two recent extensions are presented: the abstract tracers for the LO (Linear Objects) and the CHR (Constraint Handling Rules) languages. These two extensions were specified and implemented within a few days. They show how to use Opium for real applications.
Abstract of query paper
Cite abstracts
29332
29331
The paper introduces the notion of off-line justification for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper also extends this notion to provide justification of atoms during the computation of an answer set (on-line justification), and presents an integration of on-line justifications within the computation model of Smodels. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in ASP-PROLOG. (To appear in Theory and Practice of Logic Programming (TPLP))
We investigate the usage of rule dependency graphs and their colorings for characterizing and computing answer sets of logic programs. This approach provides us with insights into the interplay between rules when inducing answer sets. We start with different characterizations of answer sets in terms of totally colored dependency graphs that differ in graph-theoretical aspects. We then develop a series of operational characterizations of answer sets in terms of operators on partial colorings. In analogy to the notion of a derivation in proof theory, our operational characterizations are expressed as (non-deterministically formed) sequences of colorings, turning an uncolored graph into a totally colored one. In this way, we obtain an operational framework in which different combinations of operators result in different formal properties. Among others, we identify the basic strategy employed by the noMoRe system and justify its algorithmic approach. Furthermore, we distinguish operations corresponding to Fitting's operator as well as to well-founded semantics. We present a new answer set solver, called nomore++, along with its underlying theoretical foundations. A distinguishing feature is that it treats heads and bodies equitably as computational objects. Apart from its operational foundations, we show how it improves on previous work through its new lookahead and its computational strategy of maintaining unfounded-freeness. We underpin our claims by selected experimental results. Logic programs under Answer Sets semantics can be studied, and actual computation can be carried out, by means of representing them by directed graphs. Several reductions of logic programs to directed graphs are now available. We compare our proposed representation, called Extended Dependency Graph, to the Block Graph representation recently defined by Linke [Proc. IJCAI-2001, 2001, pp. 641-648]. On the relevant fragment of well-founded irreducible programs, extended dependency and block graph turn out to be isomorphic. So, we argue that graph representation of general logic programs should be abandoned in favor of graph representation of well-founded irreducible programs, which are more concise, more uniform in structure while being equally expressive. Properties of a program can also be characterized in terms of properties of Rule Graphs (RG). We show that, unfortunately, also the RG is ambiguous with respect to the answer set semantics, while the EDG is isomorphic to the program it represents. We argue that the reason of this drawback of the RG as a software engineering tool lies in the absence of a distinction between the different kinds of connections between cycles. Finally, we suggest that properties of a program might be characterized (and checked) in terms of admissible colorings of the EDG.
Abstract of query paper
Cite abstracts
29333
29332
In this paper we analyze the performance of Warning Propagation, a popular message passing algorithm. We show that for 3CNF formulas drawn from a certain distribution over random satisfiable 3CNF formulas, commonly referred to as the planted-assignment distribution, running Warning Propagation in the standard way (run message passing until convergence, simplify the formula according to the resulting assignment, and satisfy the remaining subformula, if necessary, using a simple "off the shelf" heuristic) results in a satisfying assignment when the clause-variable ratio is a sufficiently large constant.
Let G_{3n,p,3} be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of G_{3n,p,3} with high probability, whenever p @math c n, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for p @math polylog(n) n [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c. It is NP-Hard to find a proper 2-coloring of a given 2-colorable (bipartite) hypergraph H. We consider algorithms that will color such a hypergraph using few colors in polynomial time. The results of the paper can be summarized as follows: Let n denote the number of vertices of H and m the number of edges. (i) For bipartite hypergraphs of dimension k there is a polynomial time algorithm which produces a proper coloring using min(O(n^(1 - 1/k)), O((m/n)^(1/(k - 1)))) colors. (ii) For 3-uniform bipartite hypergraphs, the bound is reduced to O(n^(2/9)). (iii) For a class of dense 3-uniform bipartite hypergraphs, we have a randomized algorithm which can color optimally. (iv) For a model of random bipartite hypergraphs with edge probability p ≥ dn^(-2), d > 0 a sufficiently large constant, we can almost surely find a proper 2-coloring. Let I be a random 3CNF formula generated by choosing a truth assignment φ for variables x_1, ..., x_n uniformly at random and including every clause with i literals set true by φ with probability p_i, independently. We show that for any 0 ≤ η_2, η_3 ≤ 1 there is a constant d_min so that for all d ≥ d_min, a spectral algorithm similar to the graph coloring algorithm of [1] will find a satisfying assignment with high probability for p_1 = d/n^2, p_2 = η_2 d/n^2, and p_3 = η_3 d/n^2. Appropriately setting η_2 and η_3 yields natural distributions on satisfiable 3CNFs, not-all-equal-sat 3CNFs, and exactly-one-sat 3CNFs.
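Since the query abstract above runs Warning Propagation "in the standard way", here is a rough, unoptimized Python sketch of that message-passing loop: 0/1 warnings from clauses to variables, followed by a partial assignment read off from the local fields. The particular update convention, the asynchronous schedule, and the random initialization are common textbook choices assumed here, not details taken from the paper; the variables left unforced would then be handed to an "off the shelf" heuristic, as the abstract describes.

import random

def warning_propagation(clauses, n_vars, max_iters=100, seed=0):
    """Warning Propagation sketch for a CNF formula given as DIMACS-style
    clause lists (literal +v means x_v, -v means NOT x_v).  Returns the
    variables forced by the converged warnings as a dict {v: bool}."""
    rng = random.Random(seed)
    # one 0/1 warning per (clause index, literal position), random start
    u = {(a, i): rng.randint(0, 1)
         for a, clause in enumerate(clauses) for i in range(len(clause))}
    for _ in range(max_iters):
        changed = False
        for a, clause in enumerate(clauses):
            for i, lit in enumerate(clause):
                warn = 1
                for j, other in enumerate(clause):
                    if j == i:
                        continue
                    # cavity field on var(other) from clauses b != a
                    h = sum(u[(b, k)] * (1 if l > 0 else -1)
                            for b, c2 in enumerate(clauses) if b != a
                            for k, l in enumerate(c2) if abs(l) == abs(other))
                    s = 1 if other > 0 else -1
                    if not (h * s < 0):   # `other` is not pushed against clause a
                        warn = 0
                        break
                if u[(a, i)] != warn:
                    u[(a, i)], changed = warn, True
        if not changed:
            break
    # a variable is forced if the warnings' net push (local field) is nonzero
    forced = {}
    for v in range(1, n_vars + 1):
        H = sum(u[(a, i)] * (1 if lit > 0 else -1)
                for a, clause in enumerate(clauses)
                for i, lit in enumerate(clause) if abs(lit) == v)
        if H != 0:
            forced[v] = H > 0
    return forced

print(warning_propagation([[1, 2, -3], [-1, 2, 3], [1, -2, 3]], 3))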
Abstract of query paper
Cite abstracts
29334
29333
In this paper we analyze the performance of Warning Propagation, a popular message passing algorithm. We show that for 3CNF formulas drawn from a certain distribution over random satisfiable 3CNF formulas, commonly referred to as the planted-assignment distribution, running Warning Propagation in the standard way (run message passing until convergence, simplify the formula according to the resulting assignment, and satisfy the remaining subformula, if necessary, using a simple "off the shelf" heuristic) results in a satisfying assignment when the clause-variable ratio is a sufficiently large constant.
Let G_{3n,p,3} be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of G_{3n,p,3} with high probability, whenever p @math c n, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for p @math polylog(n) n [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c. It is NP-Hard to find a proper 2-coloring of a given 2-colorable (bipartite) hypergraph H. We consider algorithms that will color such a hypergraph using few colors in polynomial time. The results of the paper can be summarized as follows: Let n denote the number of vertices of H and m the number of edges. (i) For bipartite hypergraphs of dimension k there is a polynomial time algorithm which produces a proper coloring using min(O(n^(1 - 1/k)), O((m/n)^(1/(k - 1)))) colors. (ii) For 3-uniform bipartite hypergraphs, the bound is reduced to O(n^(2/9)). (iii) For a class of dense 3-uniform bipartite hypergraphs, we have a randomized algorithm which can color optimally. (iv) For a model of random bipartite hypergraphs with edge probability p ≥ dn^(-2), d > 0 a sufficiently large constant, we can almost surely find a proper 2-coloring. Let I be a random 3CNF formula generated by choosing a truth assignment φ for variables x_1, ..., x_n uniformly at random and including every clause with i literals set true by φ with probability p_i, independently. We show that for any 0 ≤ η_2, η_3 ≤ 1 there is a constant d_min so that for all d ≥ d_min, a spectral algorithm similar to the graph coloring algorithm of [1] will find a satisfying assignment with high probability for p_1 = d/n^2, p_2 = η_2 d/n^2, and p_3 = η_3 d/n^2. Appropriately setting η_2 and η_3 yields natural distributions on satisfiable 3CNFs, not-all-equal-sat 3CNFs, and exactly-one-sat 3CNFs.
Abstract of query paper
Cite abstracts
29335
29334
We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
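For concreteness, the subproblem sketched in the abstract above (a linearized approximation to the smooth inner map plus a regularization term) is commonly written as the prox-linear step below; the notation h, c, and μ_k is ours, and the exact form used in the paper may differ.

\[
  d_k \in \operatorname*{argmin}_{d} \; h\bigl(c(x_k) + \nabla c(x_k)\, d\bigr) + \tfrac{\mu_k}{2}\,\|d\|^2,
  \qquad x_{k+1} = x_k + d_k,
\]

where h is the convex (or prox-regular, possibly extended-valued) outer function, c the smooth vector function, and μ_k > 0 a regularization weight that controls the length of the step.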
We introduce a new class of multifunctions whose graphs under certain "kernel inverting" matrices, are locally equal to the graphs of Lipschitzian (single-valued) mappings. We characterize the existence of Lipschitzian localizations of these multifunctions in terms of a natural condition on a generalized Jacobian mapping. One corollary to our main result is a Lipschitzian inverse mapping theorem for the broad class of "max hypomonotone" multifunctions. We apply our theoretical results to the sensitivity analysis of solution mappings associated with parameterized optimization problems. In particular, we obtain new characterizations of the Lipschitzian stability of stationary points and Karush-Kuhn-Tucker pairs associated with parameterized nonlinear programs.
Abstract of query paper
Cite abstracts
29336
29335
We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
Basic notation.- Introduction.- Background material.- Optimality conditions.- Basic perturbation theory.- Second order analysis of the optimal value and optimal solutions.- Optimal Control.- References. This paper studies Newton-type methods for minimization of partly smooth convex functions. Sequential Newton methods are provided using local parameterizations obtained from U-Lagrangian theory and from Riemannian geometry. The Hessian based on the U-Lagrangian depends on the selection of a dual parameter g; by revealing the connection to Riemannian geometry, a natural choice of g emerges for which the two Newton directions coincide. This choice of g is also shown to be related to the least-squares multiplier estimate from a sequential quadratic programming (SQP) approach, and with this multiplier, SQP gives the same search direction as the Newton methods. For convex minimization we introduce an algorithm based on VU-space decomposition. The method uses a bundle subroutine to generate a sequence of approximate proximal points. When a primal-dual track leading to a solution and zero subgradient pair exists, these points approximate the primal track points and give the algorithm's V, or corrector, steps. The subroutine also approximates dual track points that are U-gradients needed for the method's U-Newton predictor steps. With the inclusion of a simple line search the resulting algorithm is proved to be globally convergent. The convergence is superlinear if the primal-dual track points and the objective's U-Hessian are approximated well enough.
Abstract of query paper
Cite abstracts
29337
29336
We provide linear-time algorithms for geometric graphs with sublinearly many edge crossings. That is, we provide algorithms running in @math time on connected geometric graphs having @math vertices and @math pairwise crossings, where @math is smaller than @math by an iterated logarithmic factor. Specific problems that we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having @math segments and @math crossings in linear time, for the case when @math is sublinear in @math by an iterated logarithmic factor.
We propose shortest path algorithms that use A* search in combination with a new graph-theoretic lower-bounding technique based on landmarks and the triangle inequality. Our algorithms compute optimal shortest paths and work on any directed graph. We give experimental results showing that the most efficient of our new algorithms outperforms previous algorithms, in particular A* search with Euclidean bounds, by a wide margin on road networks and on some synthetic problem families. The computation of shortest paths between different locations on a road network appears to be a key problem in many applications. Often, a shortest path is required in a very short time. In this article, we try to find an answer to the question of which shortest path algorithm for the one-to-one shortest path problem runs fastest on a large real-road network. An extensive computational study is presented, in which six existing algorithms and a new label correcting algorithm are implemented in several variants and compared on the real-road network of The Netherlands. In total, 168 versions are implemented, of which 18 versions are variants of the new algorithm and 60 versions are new by the application of bidirectional search. In the first part of the article we present a mathematical framework and a review of existing algorithms. We then describe combinations of existing algorithms with bidirectional search and heuristic-estimate techniques based on Euclidean distance and landmarks. We also present some useful static reduction techniques. In the final part of the article we present results from computational tests on The Netherlands road network. The new algorithm, which combines concepts from previous work on buckets and label-correcting techniques, has generally the shortest running times of any of the tested algorithms. We present a new speedup technique for route planning that exploits the hierarchy inherent in real world road networks. Our algorithm preprocesses the eight digit number of nodes needed for maps of the USA or Western Europe in a few hours using linear space. Shortest (i.e. fastest) path queries then take around eight milliseconds to produce exact shortest paths. This is about 2 000 times faster than using Dijkstra’s algorithm. In practice, computing a shortest path from one node to another in a directed graph is a very common task. This problem is classically solved by Dijkstra's algorithm. Many techniques are known to speed up this algorithm heuristically, while optimality of the solution can still be guaranteed. In most studies, such techniques are considered individually. The focus of our work is combination of speed-up techniques for Dijkstra's algorithm. We consider all possible combinations of four known techniques, namely, goal-directed search, bidirectional search, multilevel approach, and shortest-path containers, and show how these can be implemented. In an extensive experimental study, we compare the performance of the various combinations and analyze how the techniques harmonize when jointly applied. Several real-world graphs from road maps and public transport and three types of generated random graphs are taken into account. 
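To make the landmark / triangle-inequality lower bound from the first abstract above concrete, here is a small Python sketch for undirected graphs: distances from a few landmarks are precomputed with Dijkstra, and |d(L, t) − d(L, v)| ≤ d(v, t) then gives an admissible A* heuristic. The toy graph and the landmark choice are illustrative assumptions; the directed-graph variant and the landmark-selection heuristics studied in the paper are not shown.

import heapq

def dijkstra(adj, source):
    """Standard Dijkstra; adj maps node -> list of (neighbor, weight)."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def landmark_lower_bound(landmark_dists, v, t):
    """ALT-style lower bound on dist(v, t): by the triangle inequality,
    |d(L, t) - d(L, v)| <= d(v, t) for every landmark L."""
    best = 0.0
    for dist in landmark_dists:
        dv, dt = dist.get(v, float("inf")), dist.get(t, float("inf"))
        if dv < float("inf") and dt < float("inf"):
            best = max(best, abs(dt - dv))
    return best

adj = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("a", 1.0), ("c", 2.0), ("d", 5.0)],
    "c": [("a", 4.0), ("b", 2.0), ("d", 1.0)],
    "d": [("b", 5.0), ("c", 1.0)],
}
landmark_dists = [dijkstra(adj, L) for L in ("a", "d")]
print(landmark_lower_bound(landmark_dists, "b", "d"))  # 3.0, never above the true distance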
The classic problem of finding the shortest path over a network has been the target of many research efforts over the years. These research efforts have resulted in a number of different algorithms and a considerable amount of empirical findings with respect to performance. Unfortunately, prior research does not provide a clear direction for choosing an algorithm when one faces the problem of computing shortest paths on real road networks. Most of the computational testing on shortest path algorithms has been based on randomly generated networks, which may not have the characteristics of real road networks. In this paper, we provide an objective evaluation of 15 shortest path algorithms using a variety of real road networks. Based on the evaluation, a set of recommended algorithms for computing shortest paths on real road networks is identified. This evaluation should be particularly useful to researchers and practitioners in operations research, management science, transportation, and Geographic Information Systems.
Abstract of query paper
Cite abstracts
29338
29337
We provide linear-time algorithms for geometric graphs with sublinearly many edge crossings. That is, we provide algorithms running in @math time on connected geometric graphs having @math vertices and @math pairwise crossings, where @math is smaller than @math by an iterated logarithmic factor. Specific problems that we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having @math segments and @math crossings in linear time, for the case when @math is sublinear in @math by an iterated logarithmic factor.
We propose shortest path algorithms that use A* search in combination with a new graph-theoretic lower-bounding technique based on landmarks and the triangle inequality. Our algorithms compute optimal shortest paths and work on any directed graph. We give experimental results showing that the most efficient of our new algorithms outperforms previous algorithms, in particular A* search with Euclidean bounds, by a wide margin on road networks and on some synthetic problem families. The single-source shortest paths problem (SSSP) is one of the classic problems in algorithmic graph theory: given a positively weighted graph G with a source vertex s, find the shortest path from s to all other vertices in the graph. Since 1959, all theoretical developments in SSSP for general directed and undirected graphs have been based on Dijkstra's algorithm, visiting the vertices in order of increasing distance from s. Thus, any implementation of Dijkstra's algorithm sorts the vertices according to their distances from s. However, we do not know how to sort in linear time. Here, a deterministic linear time and linear space algorithm is presented for the undirected single source shortest paths problem with positive integer weights. The algorithm avoids the sorting bottleneck by building a hierarchical bucketing structure, identifying vertex pairs that may be visited in any order. We summarize the currently best known theoretical results for the single-source shortest paths problem for directed graphs with non-negative edge weights. We also point out that a recent result due to Cherkassky, Goldberg and Silverstein (1996) leads to even better time bounds for this problem than claimed by the authors. In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in O(log n) amortized time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges in the problem graph: O(n log n + m) for the single-source shortest path problem with nonnegative edge lengths, improved from O(m log_(m/n+2) n); O(n^2 log n + nm) for the all-pairs shortest path problem, improved from O(nm log_(m/n+2) n); O(n^2 log n + nm) for the assignment problem (weighted bipartite matching), improved from O(nm log_(m/n+2) n); O(m β(m, n)) for the minimum spanning tree problem, improved from O(m log log_(m/n+2) n); where β(m, n) = min{i : log^(i) n ≤ m/n}. Note that β(m, n) ≤ log* n if m ≥ n. Of these results, the improved bound for minimum spanning trees is the most striking, although all the results give asymptotic improvements for graphs of appropriate densities. The Voronoi diagram is a famous structure of computational geometry. We show that there is a straightforward equivalent in graph theory which can be efficiently computed. In particular, we give two algorithms for the computation of graph Voronoi diagrams, prove a lower bound on the problem, and identify cases where the algorithms presented are optimal. The space requirement of a graph Voronoi diagram is modest, since it needs no more space than does the graph itself. 
The investigation of graph Voronoi diagrams is motivated by many applications and problems on networks that can be easily solved with their help. This includes the computation of nearest facilities, all nearest neighbors and closest pairs, some kind of collision free moving, and anticenters and closest points. PART I: FUNDAMENTAL TOOLS. Algorithm Analysis. Basic Data Structures. Search Trees and Skip Lists. Sorting, Sets, and Selection. Fundamental Techniques. PART II: GRAPH ALGORITHMS. Graphs. Weighted Graphs. Network Flow and Matching. PART III: INTERNET ALGORITHMICS. Text Processing. Number Theory and Cryptography. Network Algorithms. PART IV: ADDITIONAL TOPICS. Computational Geometry. NP-Completeness. Algorithmic Frameworks. Appendix: Useful Mathematical Facts. Bibliography. Index. We present a new implementation of the Kou, Markowsky and Berman algorithm for finding a Steiner tree for a connected, undirected distance graph with a specified subset S of the set of vertices V. The total distance of all edges of this Steiner tree is at most 2(1 - 1/l) times that of a Steiner minimal tree, where l is the minimum number of leaves in any Steiner minimal tree for the given graph. The algorithm runs in O(|E| + |V| log |V|) time in the worst case, where E is the set of all edges and V the set of all vertices in the graph. We give a linear-time algorithm for single-source shortest paths in planar graphs with nonnegative edge-lengths. Our algorithm also yields a linear-time algorithm for maximum flow in a planar graph with the source and sink on the same face. For the case where negative edge-lengths are allowed, we give an algorithm requiring O(n^(4/3) log(nL)) time, where L is the absolute value of the most negative length. This algorithm can be used to obtain similar bounds for computing a feasible flow in a planar network, for finding a perfect matching in a planar bipartite graph, and for finding a maximum flow in a planar graph when the source and sink are not on the same face. We also give parallel and dynamic versions of these algorithms. From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. 
The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning. We give an O(√n · m log N) algorithm for the single-source shortest paths problem with integral arc lengths. (Here n and m are the number of nodes and arcs in the input network and N is essentially the absolute value of the most negative arc length.) This improves previous bounds for the problem. The quest for a linear-time single-source shortest-path (SSSP) algorithm on directed graphs with positive edge weights is an ongoing hot research topic. While Thorup recently found an O(n + m) time RAM algorithm for undirected graphs with n nodes, m edges and integer edge weights in {0, …, 2^w − 1} where w denotes the word length, the currently best time bound for directed sparse graphs on a RAM is O(n + m · log log n). In the present paper we study the average-case complexity of SSSP. We give a simple algorithm for arbitrary directed graphs with random edge weights uniformly distributed in [0, 1] and show that it needs linear time O(n + m) with high probability.
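One standard way to compute the graph Voronoi diagram mentioned in the abstracts above is a single multi-source Dijkstra run that labels every vertex with its nearest site; the Python sketch below shows that formulation under the assumption of nonnegative edge weights, and is not necessarily either of the two algorithms given in the cited paper.

import heapq

def graph_voronoi(adj, sites):
    """Label every vertex with its nearest site (ties broken arbitrarily)
    by running Dijkstra from all sites at once.
    adj maps node -> list of (neighbor, weight); weights nonnegative."""
    dist, owner, pq = {}, {}, []
    for s in sites:
        dist[s], owner[s] = 0.0, s
        heapq.heappush(pq, (0.0, s, s))
    while pq:
        d, u, site = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], owner[v] = nd, site
                heapq.heappush(pq, (nd, v, site))
    return owner, dist

adj = {
    1: [(2, 1.0), (3, 3.0)],
    2: [(1, 1.0), (3, 1.0), (4, 4.0)],
    3: [(1, 3.0), (2, 1.0), (4, 1.0)],
    4: [(2, 4.0), (3, 1.0)],
}
print(graph_voronoi(adj, sites=[1, 4]))  # Voronoi regions of sites 1 and 4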
Abstract of query paper
Cite abstracts
29339
29338
We provide linear-time algorithms for geometric graphs with sublinearly many edge crossings. That is, we provide algorithms running in @math time on connected geometric graphs having @math vertices and @math pairwise crossings, where @math is smaller than @math by an iterated logarithmic factor. Specific problems that we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having @math segments and @math crossings in linear time, for the case when @math is sublinear in @math by an iterated logarithmic factor.
We give a deterministic algorithm for triangulating a simple polygon in linear time. The basic strategy is to build a coarse approximation of a triangulation in a bottom-up phase and then use the information computed along the way to refine the triangulation in a top-down phase. The main tools used are the polygon-cutting theorem, which provides us with a balancing scheme, and the planar separator theorem, whose role is essential in the discovery of new diagonals. Only elementary data structures are required by the algorithm; in particular, no dynamic search trees are needed. We describe randomized parallel algorithms for building trapezoidal diagrams of line segments in the plane. The algorithms are designed for a CRCW PRAM. For general segments, we give an algorithm requiring optimal O(A+n log n) expected work and optimal O(log n) time, where A is the number of intersecting pairs of segments. If the segments form a simple chain, we give an algorithm requiring optimal O(n) expected work and O(log n log log n log* n) expected time, and a simpler algorithm requiring O(n log* n) expected work. The serial algorithm corresponding to the latter is among the simplest known algorithms requiring O(n log* n) expected operations. For a set of segments forming K chains, we give an algorithm requiring O(A+n log* n+K log n) expected work and O(log n log log n log* n) expected time. The parallel time bounds require the assumption that enough processors are available, with processor allocations every log n steps.
Abstract of query paper
Cite abstracts
29340
29339
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
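The recursion described in this abstract (for each index, the probability that the most recent change point occurred there) can be illustrated with a generic run-length posterior update for Gaussian data with known observation variance and a conjugate Normal prior on the mean. This is a sketch in the spirit of the abstract rather than the paper's exact system of equations; the hazard rate, prior parameters, and Gaussian model are assumptions made for the example.

```python
import numpy as np

def runlength_posterior(x, hazard=0.01, mu0=0.0, var0=10.0, var=1.0):
    """For each time t, return P(run length | x_1..x_t), i.e. the probability
    that the last change point occurred r steps ago, under a Gaussian model
    with known variance `var` and a Normal(mu0, var0) prior on the mean."""
    T = len(x)
    R = np.zeros((T + 1, T + 1))      # R[t, r] = P(run length r | first t observations)
    R[0, 0] = 1.0
    mu_post = np.array([mu0])         # posterior mean of the segment mean, per run length
    var_post = np.array([var0])       # posterior variance of the segment mean, per run length
    for t, xt in enumerate(x, start=1):
        # Predictive density of x_t under each run-length hypothesis.
        pred_var = var_post + var
        pred = np.exp(-0.5 * (xt - mu_post) ** 2 / pred_var) / np.sqrt(2 * np.pi * pred_var)
        growth = R[t - 1, :t] * pred * (1 - hazard)   # no change at time t
        cp = np.sum(R[t - 1, :t] * pred * hazard)     # change point at time t
        R[t, 0] = cp
        R[t, 1:t + 1] = growth
        R[t] /= R[t].sum()
        # Conjugate update of the mean's posterior for every surviving run length.
        new_var = 1.0 / (1.0 / var_post + 1.0 / var)
        new_mu = new_var * (mu_post / var_post + xt / var)
        mu_post = np.concatenate(([mu0], new_mu))
        var_post = np.concatenate(([var0], new_var))
    return R

# Example: a mean shift halfway through the series shows up as mass at run length 0.
x = np.concatenate([np.random.normal(0, 1, 100), np.random.normal(4, 1, 100)])
R = runlength_posterior(x)
print(np.argmax(R[150]))  # typically close to 50 (steps since the change)
```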
A tracking problem is considered. Observations are taken on the successive positions of an object traveling on a path, and it is desired to estimate its current position. The objective is to arrive at a simple formula which implicitly accounts for possible changes in direction and discounts observations taken before the latest change. To develop a reasonable procedure, a simpler problem is studied. Successive observations are taken on n independently and normally distributed random variables X_1, X_2, ..., X_n with means μ_1, μ_2, ..., μ_n and variance 1. Each mean μ_i is equal to the preceding mean μ_(i-1) except when an occasional change takes place. The object is to estimate the current mean μ_n. This problem is studied from a Bayesian point of view. An 'ad hoc' estimator is described, which applies a combination of the A.M.O.C. Bayes estimator and a sequence of tests designed to locate the last time point of change. The various estimators are then compared by a Monte Carlo study of samples of size 9. This Bayesian approach seems to be more appropriate for the related problem of testing whether a change in mean has occurred. This test procedure is simpler than that used by Page. The power functions of the two procedures are compared.
Abstract of query paper
Cite abstracts
29341
29340
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
Detection of change-points in normal means is a well-studied problem. The parallel problem of detecting changes in variance has had less attention. The form of the generalized likelihood ratio test statistic has long been known, but its null distribution resisted exact analysis. In this paper, we formulate the change-point problem for a sequence of chi-square random variables. We describe a procedure that is exact for the distribution of the likelihood ratio statistic for all even degrees of freedom, and gives upper and lower bounds for odd (and also for non-integer) degrees of freedom. Both the liberal and conservative bounds for χ²₁ degrees of freedom are shown through simulation to be reasonably tight. The important problem of testing for change in the normal variance of individual observations corresponds to the χ²₁ case. The non-null case is also covered, and confidence intervals for the true change point are derived. The methodology is illustrated with an application to quality control in a deep level gold mine. Other applications include ambulatory monitoring of medical data and econometrics.
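For intuition about the statistic discussed above, the generalized likelihood ratio for a single variance change in zero-mean Gaussian data can be computed by scanning every candidate split point. The sketch below is a naive illustration of that scan; it does not reproduce the cited paper's exact null-distribution analysis, and the zero-mean assumption and minimum segment length are choices made for the example.

```python
import numpy as np

def glr_variance_change(x, min_seg=5):
    """Scan candidate change points for a variance change in zero-mean data.

    Returns (best_k, best_stat) where best_stat = max_k 2*log Lambda(k), the
    generalized likelihood ratio statistic for a single change in variance.
    `min_seg` keeps a few points on each side so the variance MLEs are defined.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s0 = np.mean(x ** 2)                       # pooled variance MLE (zero mean assumed)
    best_k, best_stat = None, -np.inf
    for k in range(min_seg, n - min_seg):
        s1 = np.mean(x[:k] ** 2)               # variance MLE before the candidate change
        s2 = np.mean(x[k:] ** 2)               # variance MLE after the candidate change
        stat = n * np.log(s0) - k * np.log(s1) - (n - k) * np.log(s2)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Example: the standard deviation jumps from 1 to 3 at index 200.
x = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(0, 3, 200)])
print(glr_variance_change(x))   # best_k is usually near 200
```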
Abstract of query paper
Cite abstracts
29342
29341
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
A benchmark change detection problem is considered which involves the detection of a change of unknown size at an unknown time. Both unknown quantities are modeled by stochastic variables, which allows the problem to be formulated within a Bayesian framework. It turns out that the resulting nonlinear filtering problem is much harder than the well-known detection problem for known sizes of the change, and in particular that it can no longer be solved in a recursive manner. An approximating recursive filter is therefore proposed, which is designed using differential-geometric methods in a suitably chosen space of unnormalized probability densities. The new nonlinear filter can be interpreted as an adaptive version of the celebrated Shiryayev–Wonham equation for the detection of a priori known changes, combined with a modified Kalman filter structure to generate estimates of the unknown size of the change. This intuitively appealing interpretation of the nonlinear filter and its excellent performance in simulation studies indicate that it may be of practical use in realistic change detection problems.
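The Shiryayev recursion mentioned above is simple when the post-change distribution is known; a discrete-time sketch follows. The geometric prior parameter and Gaussian densities are illustrative assumptions, and the cited paper's actual contribution (an approximating filter for a change of unknown size) is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def shiryaev_posterior(x, rho=0.01, mu0=0.0, mu1=1.0, sigma=1.0):
    """Discrete-time Shiryayev recursion for a change of known size.

    rho is the geometric prior probability that the change occurs at each step;
    f0 and f1 are the pre- and post-change Gaussian densities. Returns the
    sequence of posterior probabilities that the change has already occurred.
    """
    post = []
    p = 0.0
    for xt in x:
        p_pred = p + (1.0 - p) * rho                 # prior chance the change happened by now
        f0 = norm.pdf(xt, loc=mu0, scale=sigma)
        f1 = norm.pdf(xt, loc=mu1, scale=sigma)
        p = p_pred * f1 / (p_pred * f1 + (1.0 - p_pred) * f0)
        post.append(p)
    return np.array(post)

# Declare a change when the posterior crosses a threshold close to 1.
x = np.concatenate([np.random.normal(0, 1, 300), np.random.normal(1, 1, 100)])
alarm = np.argmax(shiryaev_posterior(x) > 0.95)
print(alarm)   # typically shortly after index 300
```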
Abstract of query paper
Cite abstracts
29343
29342
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
The increasing incidence of worm attacks in the Internet and the resulting instabilities in the global routing properties of the border gateway protocol (BGP) routers pose a serious threat to the connectivity and the ability of the Internet to deliver data correctly. In this paper we propose a mechanism to detect/predict the onset of such instabilities which can then enable the timely execution of preventive strategies in order to minimize the damage caused by the worm. Our technique is based on online statistical methods relying on sequential change-point and persistence filter based detection algorithms. Our technique is validated using a year's worth of real traces collected from BGP routers in the Internet that we use to detect/predict the global routing instabilities corresponding to the Code Red II, Nimda and SQL Slammer worms. In computer networks, large scale attacks in their final stages can readily be identified by observing very abrupt changes in the network traffic, but in the early stage of an attack, these changes are hard to detect and difficult to distinguish from usual traffic fluctuations. In this paper, we develop efficient adaptive sequential and batch-sequential methods for an early detection of attacks from the class of "denial of service attacks". These methods employ statistical analysis of data from multiple layers of the network protocol for detection of very subtle traffic changes, which are typical for these kinds of attacks. Both the sequential and batch-sequential algorithms utilize thresholding of test statistics to achieve a fixed rate of false alarms. The algorithms are developed on the basis of the change-point detection theory: to detect a change in statistical models as soon as possible, controlling the rate of false alarms. There are three attractive features of the approach. First, both methods are self-learning, which enables them to adapt to various network loads and usage patterns. Second, they allow for detecting attacks with small average delay for a given false alarm rate. Third, they are computationally simple, and hence, can be implemented online. Theoretical frameworks for both kinds of detection procedures, as well as results of simulations, are presented.
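A standard building block behind such sequential procedures is the CUSUM statistic, which accumulates evidence for a shift and raises an alarm when a threshold is crossed. The sketch below is a minimal mean-shift CUSUM; the drift and threshold values are illustrative and are not parameters taken from the cited work.

```python
import numpy as np

def cusum_alarm(x, drift=0.5, threshold=8.0):
    """One-sided CUSUM for an upward mean shift in a unit-variance stream.

    S_t = max(0, S_{t-1} + x_t - drift); an alarm is raised when S_t exceeds
    the threshold, which controls the false-alarm rate. Returns the alarm
    index, or None if the statistic never crosses the threshold.
    """
    s = 0.0
    for t, xt in enumerate(x):
        s = max(0.0, s + xt - drift)
        if s > threshold:
            return t
    return None

# Example: a traffic-like stream whose mean rises from 0 to 1.5 at index 500.
x = np.concatenate([np.random.normal(0, 1, 500), np.random.normal(1.5, 1, 200)])
print(cusum_alarm(x))   # usually a small delay after index 500
```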
Abstract of query paper
Cite abstracts
29344
29343
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
Detecting changes in a data stream is an important area of research with many applications. In this paper, we present a novel method for the detection and estimation of change. In addition to providing statistical guarantees on the reliability of detected changes, our method also provides meaningful descriptions and quantification of these changes. Our approach assumes that the points in the stream are independently generated, but otherwise makes no assumptions on the nature of the generating distribution. Thus our techniques work for both continuous and discrete data. In an experimental study we demonstrate the power of our techniques.
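A common nonparametric realization of this idea compares a reference window against a sliding current window with a distribution distance such as the two-sample Kolmogorov-Smirnov statistic. The sketch below illustrates that scheme only; it is not the cited paper's specific test or its statistical guarantee, and the window sizes and significance level are assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_change_two_windows(stream, ref_size=200, win_size=200, alpha=1e-3):
    """Slide a current window over the stream and compare it with a fixed
    reference window using the two-sample Kolmogorov-Smirnov test.

    Returns the first index at which the KS p-value drops below alpha,
    or None if no change is flagged. Makes no parametric assumption on
    the generating distribution, so it works for continuous data.
    """
    reference = stream[:ref_size]
    for end in range(ref_size + win_size, len(stream)):
        current = stream[end - win_size:end]
        _, pvalue = ks_2samp(reference, current)
        if pvalue < alpha:
            return end
    return None

# Example: the distribution switches from N(0,1) to N(1,1) at index 600.
stream = np.concatenate([np.random.normal(0, 1, 600), np.random.normal(1, 1, 400)])
print(detect_change_two_windows(stream))
```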
Abstract of query paper
Cite abstracts
29345
29344
Algebraic codes that achieve list decoding capacity were recently constructed by a careful "folding" of the Reed-Solomon code. The "low-degree" nature of this folding operation was crucial to the list decoding algorithm. We show how such folding schemes conducive to list decoding arise out of the Artin-Frobenius automorphism at primes in Galois extensions. Using this approach, we construct new folded algebraic-geometric codes for list decoding based on cyclotomic function fields with a cyclic Galois group. Such function fields are obtained by adjoining torsion points of the Carlitz action of an irreducible @math . The Reed-Solomon case corresponds to the simplest such extension (corresponding to the case @math ). In the general case, we need to descend to the fixed field of a suitable Galois subgroup in order to ensure the existence of many degree one places that can be used for encoding. Our methods shed new light on algebraic codes and their list decoding, and lead to new codes achieving list decoding capacity. Quantitatively, these codes provide list decoding (and list recovery/soft decoding) guarantees similar to folded Reed-Solomon codes but with an alphabet size that is only polylogarithmic in the block length. In comparison, for folded RS codes, the alphabet size is a large polynomial in the block length. This has applications to fully explicit (with no brute-force search) binary concatenated codes for list decoding up to the Zyablov radius.
We describe a new class of list decodable codes based on Galois extensions of function fields and present a list decoding algorithm. These codes are obtained as a result of folding the set of rational places of a function field using certain elements (automorphisms) from the Galois group of the extension. This work is an extension of Folded Reed Solomon codes to the setting of Algebraic Geometric codes. We describe two constructions based on this framework depending on if the order of the automorphism used to fold the code is large or small compared to the block length. When the automorphism is of large order, the codes have polynomially bounded list size in the worst case. This construction gives codes of rate @math over an alphabet of size independent of block length that can correct a fraction of @math errors subject to the existence of asymptotically good towers of function fields with large automorphisms. The second construction addresses the case when the order of the element used to fold is small compared to the block length. In this case a heuristic analysis shows that for a random received word, the expected list size and the running time of the decoding algorithm are bounded by a polynomial in the block length. When applied to the Garcia-Stichtenoth tower, this yields codes of rate @math over an alphabet of size @math , that can correct a fraction of @math errors.
Abstract of query paper
Cite abstracts
29346
29345
Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. In general, learning HMMs from data is computationally hard (under cryptographic assumptions), and practitioners typically resort to search heuristics which suffer from the usual local optima issues. We prove that under a natural separation condition (bounds on the smallest singular value of the HMM parameters), there is an efficient and provably correct algorithm for learning HMMs. The sample complexity of the algorithm does not explicitly depend on the number of distinct (discrete) observations---it implicitly depends on this quantity through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as those in natural language processing where the space of observation is sometimes the words in a language. The algorithm is also simple, employing only a singular value decomposition and matrix multiplications.
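The algorithm's core, a singular value decomposition of an observation co-occurrence matrix followed by matrix multiplications, can be sketched as follows. This is a schematic rendering of the observable-operator construction outlined in the abstract, built from empirical first-, second-, and third-order observation probabilities; variable names are illustrative, and numerical-stability details and theoretical conditions are omitted.

```python
import numpy as np

def spectral_hmm(sequences, n_obs, n_states):
    """Estimate observable operators for an HMM from observation triples.

    sequences: list of integer observation sequences (length >= 3) over
    {0, ..., n_obs-1}; only the first three symbols of each are used here.
    Returns a function prob(x_seq) approximating the joint probability of a
    sequence as binf @ B[x_t] @ ... @ B[x_1] @ b1.
    """
    P1 = np.zeros(n_obs)                     # P[x_1 = i]
    P21 = np.zeros((n_obs, n_obs))           # P[x_2 = i, x_1 = j]
    P3x1 = np.zeros((n_obs, n_obs, n_obs))   # P[x_3 = i, x_2 = x, x_1 = j]
    for seq in sequences:
        P1[seq[0]] += 1
        P21[seq[1], seq[0]] += 1
        P3x1[seq[2], seq[1], seq[0]] += 1
    P1 /= P1.sum()
    P21 /= P21.sum()
    P3x1 /= P3x1.sum()

    U, _, _ = np.linalg.svd(P21)
    U = U[:, :n_states]                      # top singular vectors span the "state" subspace
    pinv = np.linalg.pinv(U.T @ P21)
    b1 = U.T @ P1
    binf = np.linalg.pinv(P21.T @ U) @ P1
    B = [(U.T @ P3x1[:, x, :]) @ pinv for x in range(n_obs)]

    def prob(x_seq):
        v = b1
        for x in x_seq:
            v = B[x] @ v
        return float(binf @ v)
    return prob

# Usage sketch (triples of observations sampled from some HMM):
# prob = spectral_hmm(list_of_sequences, n_obs=5, n_states=2)
# prob([0, 3, 1])  # approximate joint probability of the sequence
```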
Planning and learning in Partially Observable MDPs (POMDPs) are among the most challenging tasks in both the AI and Operations Research communities. Although solutions to these problems are intractable in general, there might be special cases, such as structured POMDPs, which can be solved efficiently. A natural and possibly efficient way to represent a POMDP is through the predictive state representation (PSR) — a representation which recently has been receiving increasing attention. In this work, we relate POMDPs to multiplicity automata — showing that POMDPs can be represented by multiplicity automata with no increase in the representation size. Furthermore, we show that the size of the multiplicity automaton is equal to the rank of the predictive state representation. Therefore, we relate both the predictive state representation and POMDPs to the well-founded multiplicity automata literature. Based on the multiplicity automata representation, we provide a planning algorithm which is exponential only in the multiplicity automata rank rather than the number of states of the POMDP. As a result, whenever the predictive state representation is logarithmic in the standard POMDP representation, our planning algorithm is efficient. A widely used class of models for stochastic systems is hidden Markov models. Systems that can be modeled by hidden Markov models are a proper subclass of linearly dependent processes, a class of stochastic systems known from mathematical investigations carried out over the past four decades. This article provides a novel, simple characterization of linearly dependent processes, called observable operator models. The mathematical properties of observable operator models lead to a constructive learning algorithm for the identification of linearly dependent processes. The core of the algorithm has a time complexity of O(N + nm³), where N is the size of training data, n is the number of distinguishable outcomes of observations, and m is the model state-space dimension.
Abstract of query paper
Cite abstracts
29347
29346
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
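A simple (non-optimal) way to obtain a guaranteed-error piecewise linear approximation online is the greedy sliding-window heuristic sketched below: keep extending the current segment while the fitted line stays within the error bound, otherwise emit the segment and start a new one. The cited paper's algorithms achieve the same max-error guarantee with the minimum number of segments and constant space; this sketch only illustrates the quality-guarantee idea.

```python
import numpy as np

def pla_sliding_window(y, eps):
    """Greedy piecewise linear approximation with a max-error guarantee.

    y: sequence of values sampled at times 0, 1, 2, ...
    Returns a list of (start, end, slope, intercept) segments such that the
    fitted line deviates from every covered point by at most eps.
    """
    segments, start = [], 0
    n = len(y)
    while start < n:
        end = start + 1
        best = (0.0, y[start])                       # a single point: flat segment
        while end < n:
            t = np.arange(start, end + 1)
            slope, intercept = np.polyfit(t, y[start:end + 1], 1)
            err = np.max(np.abs(slope * t + intercept - y[start:end + 1]))
            if err > eps:
                break                                # adding y[end] would violate the bound
            best = (slope, intercept)
            end += 1
        segments.append((start, end - 1, best[0], best[1]))
        start = end
    return segments

# Example: a noisy ramp compressed into a handful of segments with error <= 0.5.
y = np.concatenate([np.linspace(0, 10, 50), np.linspace(10, 0, 50)]) + np.random.normal(0, 0.1, 100)
print(len(pla_sliding_window(y, eps=0.5)))
```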
Limited energy supply is one of the major constraints in wireless sensor networks. A feasible strategy is to aggressively reduce the spatial sampling rate of sensors, that is, the density of the measure points in a field. By properly scheduling, we want to retain the high fidelity of data collection. In this paper, we propose a data collection method that is based on a careful analysis of the surveillance data reported by the sensors. By exploring the spatial correlation of sensing data, we dynamically partition the sensor nodes into clusters so that the sensors in the same cluster have similar surveillance time series. They can share the workload of data collection in the future since their future readings may likely be similar. Furthermore, during a short-time period, a sensor may report similar readings. Such a correlation in the data reported from the same sensor is called temporal correlation, which can be explored to further save energy. We develop a generic framework to address several important technical challenges, including how to partition the sensors into clusters, how to dynamically maintain the clusters in response to environmental changes, how to schedule the sensors in a cluster, how to explore temporal correlation, and how to restore the data in the sink with high fidelity. We conduct an extensive empirical study to test our method using both a real test bed system and a large-scale synthetic data set.
Abstract of query paper
Cite abstracts
29348
29347
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
In recent years, there has been an explosion of interest in mining time-series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time-series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data-mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature.
Abstract of query paper
Cite abstracts
29349
29348
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
The past decade has seen a wealth of research on time series representations, because the manipulation, storage, and indexing of large volumes of raw time series data is impractical. The vast majority of research has concentrated on representations that are calculated in batch mode and represent each value with approximately equal fidelity. However, the increasing deployment of mobile devices and real time sensors has brought home the need for representations that can be incrementally updated, and can approximate the data with fidelity proportional to its age. The latter property allows us to answer queries about the recent past with greater precision, since in many domains recent information is more useful than older information. We call such representations amnesic. While there has been previous work on amnesic representations, the class of amnesic functions possible was dictated by the representation itself. We introduce a novel representation of time series that can represent arbitrary, user-specified amnesic functions. For example, a meteorologist may decide that data that is twice as old can tolerate twice as much error, and thus, specify a linear amnesic function. In contrast, an econometrist might opt for an exponential amnesic function. We propose online algorithms for our representation, and discuss their properties. Finally, we perform an extensive empirical evaluation on 40 datasets, and show that our approach can efficiently maintain a high quality amnesic approximation.
Abstract of query paper
Cite abstracts
29350
29349
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
A new piecewise linear method is presented for the approximation of digitized curves. This method produces a sequence of consecutive line segments and has the following characteristics: (i) it approximates the digitized curve with the minimum number of line segments, (ii) the Euclidean distance between each point of the digitized curve and the line segment that approximates it does not exceed a boundary value ε, and (iii) the vertices of the produced line are not (necessarily) points of the input curve.
Abstract of query paper
Cite abstracts
29351
29350
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .
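A minimal sketch of the geometric mechanism itself: add two-sided geometric noise with parameter α = e^(-ε) to the true count, which can be sampled as the difference of two ordinary geometric variables. The optional clamping to the valid count range and the parameter names are illustrative choices, and the paper's user-specific remapping argument is not reproduced here.

```python
import numpy as np

def geometric_mechanism(true_count, epsilon, n=None):
    """Release a count under epsilon-differential privacy with two-sided
    geometric noise: Pr[noise = z] is proportional to alpha^|z|, alpha = exp(-epsilon).

    The noise is the difference of two geometric variables. If n (the maximum
    possible count) is given, the output is clamped to [0, n], a common
    truncation that keeps answers in the valid range.
    """
    alpha = np.exp(-epsilon)
    p = 1.0 - alpha
    # np.random.geometric counts trials (support 1, 2, ...); subtract 1 to count failures.
    g1 = np.random.geometric(p) - 1
    g2 = np.random.geometric(p) - 1
    noisy = true_count + g1 - g2
    if n is not None:
        noisy = int(min(max(noisy, 0), n))
    return noisy

# Example: a count query answered with epsilon = 0.5.
print(geometric_mechanism(true_count=42, epsilon=0.5, n=100))
```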
We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ_i g(x_i), where x_i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive. We examine the tradeoff between privacy and usability of statistical databases. We model a statistical database by an n-bit string d_1, ..., d_n, with a query being a subset q ⊆ [n] to be answered by Σ_{i∈q} d_i. Our main result is a polynomial reconstruction algorithm of data from noisy (perturbed) subset sums. Applying this reconstruction algorithm to statistical databases we show that in order to achieve privacy one has to add perturbation of magnitude Ω(√n). That is, smaller perturbation always results in a strong violation of privacy. We show that this result is tight by exemplifying access algorithms for statistical databases that preserve privacy while adding perturbation of magnitude O(√n). For time-T bounded adversaries we demonstrate a privacy-preserving access algorithm whose perturbation magnitude is ≈ √T. This work is at the intersection of two lines of research. One line, initiated by Dinur and Nissim, investigates the price, in accuracy, of protecting privacy in a statistical database. The second, growing from an extensive literature on compressed sensing (see in particular the work of Donoho and collaborators [4,7,13,11]) and explicitly connected to error-correcting codes by Candes and Tao ([4]; see also [5,3]), is in the use of linear programming for error correction. Our principal result is the discovery of a sharp threshold ρ* ≈ 0.239, so that if ρ In the context of privacy-preserving data mining our results say that any privacy mechanism, interactive or non-interactive, providing reasonably accurate answers to a 0.761 fraction of randomly generated weighted subset sum queries, and arbitrary answers on the remaining 0.239 fraction, is blatantly non-private. We introduce a new, generic framework for private data analysis. The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains. Our framework allows one to release functions f of the data with instance-based additive noise. That is, the noise magnitude is determined not only by the function we want to release, but also by the database itself.
One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smooth sensitivity of f on the database x – a measure of the variability of f in the neighborhood of the instance x. The new framework greatly expands the applicability of output perturbation, a technique for protecting individuals' privacy by adding a small amount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-based noise in the context of data privacy. Our framework raises many interesting algorithmic questions. Namely, to apply the framework one must compute or approximate the smooth sensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost of the minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on many databases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known or when f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians. In a recent paper Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless.
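The output-perturbation recipe described above, calibrating the noise scale to the global sensitivity Δf and the privacy parameter ε, is a one-liner; a hedged sketch follows. The sensitivity value must be supplied by the analyst, and smooth sensitivity, the refinement introduced in the cited work, is not implemented here.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release f(x) with Laplace noise of scale sensitivity/epsilon,
    the standard calibration that yields epsilon-differential privacy
    for a function with global sensitivity `sensitivity`."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: a counting query (sensitivity 1) released with epsilon = 0.1.
print(laplace_mechanism(true_value=128, sensitivity=1.0, epsilon=0.1))
```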
Abstract of query paper
Cite abstracts
29352
29351
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .
We demonstrate that, ignoring computational constraints, it is possible to release privacy-preserving databases that are useful for all queries over a discretized domain from any given concept class with polynomial VC-dimension. We show a new lower bound for releasing databases that are useful for halfspace queries over a continuous domain. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, for a slightly relaxed definition of usefulness. Inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy. Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in the contexts where aggregate information is released about a database containing sensitive information about individuals. We present several basic results that demonstrate general feasibility of private learning and relate several models previously studied separately in the contexts of privacy and standard learning.
Abstract of query paper
Cite abstracts
29353
29352
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .
We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero.
Abstract of query paper
Cite abstracts
29354
29353
The dynamic time warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB_Keogh). We compare LB_Keogh with a tighter lower bound (LB_Improved). We find that LB_Improved-based search is faster. As an example, our approach is 2-3 times faster over random-walk and shape time series.
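For concreteness, a minimal LB_Keogh-style bound: build the upper and lower envelope of one series under a warping window of half-width r, then accumulate how far the other series falls outside that envelope. This follows the usual textbook formulation rather than the paper's optimized code, and the tighter second pass of LB_Improved is omitted.

```python
import numpy as np

def envelope(x, r):
    """Upper and lower envelope of x under a warping window of half-width r."""
    n = len(x)
    upper = np.array([np.max(x[max(0, i - r):min(n, i + r + 1)]) for i in range(n)])
    lower = np.array([np.min(x[max(0, i - r):min(n, i + r + 1)]) for i in range(n)])
    return upper, lower

def lb_keogh(query, candidate, r):
    """Lower bound on DTW(query, candidate) for equal-length series.

    Sums the squared excess of the candidate over the query's envelope; the
    result never exceeds the DTW distance (under the matching DTW definition),
    so candidates whose bound already exceeds the best distance so far can be
    skipped without running the quadratic DTW computation.
    """
    upper, lower = envelope(query, r)
    above = np.clip(candidate - upper, 0.0, None)
    below = np.clip(lower - candidate, 0.0, None)
    return np.sqrt(np.sum(above ** 2 + below ** 2))

# Example: prune with the cheap bound before computing DTW exactly.
q = np.sin(np.linspace(0, 6, 128))
c = np.sin(np.linspace(0, 6, 128) + 0.3)
print(lb_keogh(q, c, r=10))
```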
A computer adaptable method for finding similarities in the amino acid sequences of two proteins has been developed. From these findings it is possible to determine whether significant homology exists between the proteins. This information is used to trace their possible evolutionary development. The maximum match is a number dependent upon the similarity of the sequences. One of its definitions is the largest number of amino acids of one protein that can be matched with those of a second protein allowing for all possible interruptions in either of the sequences. While the interruptions give rise to a very large number of comparisons, the method efficiently excludes from consideration those comparisons that cannot contribute to the maximum match. Comparisons are made from the smallest unit of significance, a pair of amino acids, one from each protein. All possible pairs are represented by a two-dimensional array, and all possible comparisons are represented by pathways through the array. For this maximum match only certain of the possible pathways must be evaluated. A numerical value, one in this case, is assigned to every cell in the array representing like amino acids. The maximum match is the largest number that would result from summing the cell values of every pathway. A rolling parallel printer in which a pressure element is driven through a swiveling motion each printing cycle and a pressure segment thereof rolls off a line of type. The pressure element is connected to a mechanical linkage which minimizes the sweep of travel of the pressure element, while maintaining the pressure element sufficiently far from the type in a rest position to facilitate reading of the printed matter. A new similarity measure, called SimilB, for time series analysis, based on the cross-ΨB-energy operator (2004), is introduced. ΨB is a nonlinear measure which quantifies the interaction between two time series. Compared to Euclidean distance (ED) or the Pearson correlation coefficient (CC), SimilB includes the temporal information and relative changes of the time series using the first and second derivatives of the time series. SimilB is well suited for both nonstationary and stationary time series and particularly those presenting discontinuities. Some new properties of ΨB are presented. Particularly, we show that ΨB as similarity measure is robust to both scale and time shift. SimilB is illustrated with synthetic time series and an artificial dataset and compared to the CC and the ED measures. Shape matching is an important ingredient in shape retrieval, recognition and classification, alignment and registration, and approximation and simplification. This paper treats various aspects that are needed to solve shape matching problems: choosing the precise problem, selecting the properties of the similarity measure that are needed for the problem, choosing the specific similarity measure, and constructing the algorithm to compute the similarity. The focus is on methods that lie close to the field of computational geometry.
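The maximum-match computation described in the first abstract above is the classic global-alignment dynamic program. A compact scoring-only version with a unit match reward and a linear gap penalty is sketched below; the scoring scheme is an illustrative choice, not the one used in the original paper.

```python
def needleman_wunsch_score(a, b, match=1, mismatch=0, gap=0):
    """Maximum global alignment score between sequences a and b.

    dp[i][j] is the best score aligning a[:i] with b[:j]; each cell takes the
    maximum over matching a[i-1] with b[j-1] or inserting a gap in either sequence.
    """
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = -i * gap
    for j in range(m + 1):
        dp[0][j] = -j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + score,   # align a[i-1] with b[j-1]
                           dp[i - 1][j] - gap,         # gap in b
                           dp[i][j - 1] - gap)         # gap in a
    return dp[n][m]

# Example: two short amino-acid-like strings.
print(needleman_wunsch_score("GATTACA", "GCATGCA"))
```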
Abstract of query paper
Cite abstracts
29355
29354
The dynamic time warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB_Keogh). We compare LB_Keogh with a tighter lower bound (LB_Improved). We find that LB_Improved-based search is faster. As an example, our approach is 2-3 times faster over random-walk and shape time series.
In many applications, it is desirable to monitor a streaming time series for predefined patterns. In domains as diverse as the monitoring of space telemetry, patient intensive care data, and insect populations, where data streams at a high rate and the number of predefined patterns is large, it may be impossible for the comparison algorithm to keep up. We propose a novel technique that exploits the commonality among the predefined patterns to allow monitoring at higher bandwidths, while maintaining a guarantee of no false dismissals. Our approach is based on the widely used envelope-based lower bounding technique. Extensive experiments demonstrate that our approach achieves tremendous improvements in performance in the offline case, and significant improvements in the fastest possible arrival rate of the data stream that can be processed with guaranteed no false dismissal. Time-series data naturally arise in countless domains, such as meteorology, astrophysics, geology, multimedia, and economics. Similarity search is very popular, and DTW (Dynamic Time Warping) is one of the two prevailing distance measures. Although DTW incurs a heavy computation cost, it provides scaling along the time axis. In this paper, we propose FTW (Fast search method for dynamic Time Warping), which guarantees no false dismissals in similarity query processing. FTW efficiently prunes a significant portion of the search cost. Experiments on real and synthetic sequence data sets reveal that FTW is significantly faster than the best existing method, up to 222 times.
Abstract of query paper
Cite abstracts
29356
29355
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
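A hedged sketch of the anonymity condition itself (not of the approximation algorithms in the paper): under one common formalization, a set-valued dataset is k-anonymous when every published record is identical, as a set, to at least k-1 others. The helper below simply checks this condition for a candidate anonymization.

```python
from collections import Counter

def is_k_anonymous(records, k):
    """Check k-anonymity for set-valued data.

    records: iterable of item sets (e.g. market baskets or query sets).
    Returns True iff every distinct itemset occurs at least k times, so no
    individual's record can be singled out among fewer than k candidates.
    """
    counts = Counter(frozenset(r) for r in records)
    return all(c >= k for c in counts.values())

# Example: the third record is unique, so the table is 2-anonymous only after
# that record is generalized or suppressed.
data = [{"milk", "bread"}, {"milk", "bread"}, {"milk", "bread", "eggs"}]
print(is_k_anonymous(data, 2))   # False
```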
We present a new class of statistical de-anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.
Abstract of query paper
Cite abstracts
29357
29356
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach of de-identifying records is to remove identifying fields such as social security number, name etc. However, recent research has shown that a large fraction of the US population can be identified using non-key attributes (called quasi-identifiers) such as date of birth, gender, and zip code [15]. Sweeney [16] proposed the k-anonymity model for privacy where non-key attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a pre-specified number of data records. This technique is more general since we have a much larger choice for cluster centers than k-Anonymity. In many cases, it lets us release a lot more information without compromising privacy. We also provide constant-factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an ε fraction of points to remain unclustered, i.e., deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well.
Abstract of query paper
Cite abstracts
29358
29357
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
This paper considers the problem of publishing "transaction data" for research purposes. Each transaction is an arbitrary set of items chosen from a large universe. Detailed transaction data provides an electronic image of one's life. This has two implications. One, transaction data are excellent candidates for data mining research. Two, use of transaction data would raise serious concerns over individual privacy. Therefore, before transaction data is released for data mining, it must be made anonymous so that data subjects cannot be re-identified. The challenge is that transaction data has no structure and can be extremely high dimensional. Traditional anonymization methods lose too much information on such data. To date, there has been no satisfactory privacy notion and solution proposed for anonymizing transaction data. This paper proposes one way to address this issue.
Abstract of query paper
Cite abstracts
29359
29358
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
In this paper we study the privacy preservation properties of a specific technique for query log anonymization: token-based hashing. In this approach, each query is tokenized, and then a secure hash function is applied to each token. We show that statistical techniques may be applied to partially compromise the anonymization. We then analyze the specific risks that arise from these partial compromises, focused on revelation of identity from unambiguous names, addresses, and so forth, and the revelation of facts associated with an identity that are deemed to be highly sensitive. Our goal in this work is twofold: to show that token-based hashing is unsuitable for anonymization, and to present a concrete analysis of specific techniques that may be effective in breaching privacy, against which other anonymization schemes should be measured. In this paper we study privacy preservation for the publication of search engine query logs. We introduce a new privacy concern, website privacy, as a special case of business privacy. We define the possible adversaries who could be interested in disclosing website information and the vulnerabilities in the query log, which they could exploit. We elaborate on anonymization techniques to protect website information, discuss different types of attacks that an adversary could use and propose an anonymization strategy for one of these attacks. We then present a graph-based heuristic to validate the effectiveness of our anonymization method and perform an experimental evaluation of this approach. Our experimental results show that the query log can be appropriately anonymized against the specific attack, while retaining a significant volume of useful data. The recent release of the America Online (AOL) Query Logs highlighted the remarkable amount of private and identifying information that users are willing to reveal to a search engine. The release of these types of log files therefore represents a significant liability and compromise of user privacy. However, without such data the academic community greatly suffers in their ability to conduct research on real search engines. This paper proposes two specific solutions (rather than an overly general framework) that attempt to balance the needs of certain types of research while preserving individual privacy. The first solution, based on a threshold cryptography system, eliminates highly identifying queries, in real time, without preserving history or statistics about previous behavior. The second solution attempts to deal with sets of queries that, when taken in aggregate, are overly identifying. Both are novel and represent additional options for data anonymization.
Abstract of query paper
Cite abstracts
29360
29359
This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It permits to represent each 3D face by only a few spherical functions that are able to capture the salient facial characteristics, and hence to preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can be further activated for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.
Face recognition (FR) is the preferred mode of identity recognition by humans: It is natural, robust and unintrusive. However, automatic FR techniques have failed to match up to expectations: Variations in pose, illumination and expression limit the performance of 2D FR techniques. In recent years, 3D FR has shown promise to overcome these challenges. With the availability of cheaper acquisition methods, 3D face recognition can be a way out of these problems, either as a stand-alone method or as a supplement to 2D face recognition. We review the relevant work on 3D face recognition here, and discuss merits of different representations and recognition algorithms. This survey focuses on recognition performed by matching models of the three-dimensional shape of the face, either alone or in combination with matching corresponding two-dimensional intensity images. Research trends to date are summarized, and challenges confronting the development of more accurate three-dimensional face recognition are identified. These challenges include the need for better sensors, improved recognition algorithms, and more rigorous experimental methodology.
Abstract of query paper
Cite abstracts
29361
29360
This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It permits to represent each 3D face by only a few spherical functions that are able to capture the salient facial characteristics, and hence to preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can be further activated for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.
Non-negative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of non-negative data. Although it has successfully been applied in several applications, it does not always result in parts-based representations. In this paper, we show how explicitly incorporating the notion of 'sparseness' improves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF and for our extension. Our hope is that this will further the application of these methods to solving novel data-analysis problems. We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space, if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases. A new variant ‘PMF’ of factor analysis is described. It is assumed that X is a matrix of observed data and σ is the known matrix of standard deviations of elements of X. Both X and σ are of dimensions n × m. The method solves the bilinear matrix problem X = GF + E where G is the unknown left hand factor matrix (scores) of dimensions n × p, F is the unknown right hand factor matrix (loadings) of dimensions p × m, and E is the matrix of residuals. The problem is solved in the weighted least squares sense: G and F are determined so that the Frobenius norm of E divided (element-by-element) by σ is minimized. Furthermore, the solution is constrained so that all the elements of G and F are required to be non-negative. It is shown that the solutions by PMF are usually different from any solutions produced by the customary factor analysis (FA, i.e. principal component analysis (PCA) followed by rotations). Usually PMF produces a better fit to the data than FA. Also, the result of PMF is guaranteed to be non-negative, while the result of FA often cannot be rotated so that all negative entries would be eliminated. Different possible application areas of the new method are briefly discussed. In environmental data, the error estimates of data can be widely varying and non-negativity is often an essential feature of the underlying models. Thus it is concluded that PMF is better suited than FA or PCA in many environmental applications. Examples of successful applications of PMF are shown in companion papers.
Neural networks in the visual system may be performing sparse coding of learnt local features that are qualitatively very similar to the receptive fields of simple cells in the primary visual cortex, V1. In conventional sparse coding, the data are described as a combination of elementary features involving both additive and subtractive components. However, the fact that features can ‘cancel each other out’ using subtraction is contrary to the intuitive notion of combining parts to form a whole. Thus, it has recently been argued forcefully for completely non-negative representations. This paper presents Non-Negative Sparse Coding (NNSC) applied to the learning of facial features for face recognition and a comparison is made with the other part-based techniques, Non-negative Matrix Factorization (NMF) and Local-Non-negative Matrix Factorization (LNMF). The NNSC approach has been tested on the Aleix–Robert (AR), the Face Recognition Technology (FERET), the Yale B, and the Cambridge ORL databases, respectively. In doing so, we have compared and evaluated the proposed NNSC face recognition technique under varying expressions, varying illumination, occlusion with sunglasses, occlusion with scarf, and varying pose. Tests were performed with different distance metrics such as the L1-metric, L2-metric, and Normalized Cross-Correlation (NCC). All these experiments involved a large range of basis dimensions. In general, NNSC was found to be the best approach of the three part-based methods, although it must be observed that the best distance measure was not consistent for the different experiments. We present a systematic procedure for selecting facial fiducial points associated with diverse structural characteristics of a human face. We identify such characteristics from the existing literature on anthropometric facial proportions. We also present three dimensional (3D) face recognition algorithms, which employ Euclidean geodesic distances between these anthropometric fiducial points as features along with linear discriminant analysis classifiers. Furthermore, we show that in our algorithms, when anthropometric distances are replaced by distances between arbitrary regularly spaced facial points, their performances decrease substantially. This demonstrates that incorporating domain specific knowledge about the structural diversity of human faces significantly improves the performance of 3D human face recognition algorithms. Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence.
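The multiplicative update rules summarized in the last abstract above can be illustrated with a short numpy sketch of the Frobenius-norm variant. This is a generic, textbook-style implementation under the usual non-negative random initialization assumptions, not the authors' released code.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Frobenius-norm NMF via multiplicative updates (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps        # non-negative initialization
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H)
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy usage: factor a random non-negative matrix into 5 parts
V = np.abs(np.random.default_rng(1).random((50, 30)))
W, H = nmf_multiplicative(V, rank=5)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```

The small eps in the denominators only guards against division by zero; it does not change the monotone-descent property discussed in the abstract.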
Abstract of query paper
Cite abstracts
29362
29361
This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It allows each 3D face to be represented by only a few spherical functions that capture the salient facial characteristics and hence preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can further be applied for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.
Introduces a new surface representation for recognizing curved objects. The authors' approach begins by representing an object by a discrete mesh of points built from range data or from a geometric model of the object. The mesh is computed from the data by deforming a standard shaped mesh, for example, an ellipsoid, until it fits the surface of the object. The authors define local regularity constraints that the mesh must satisfy. The authors then define a canonical mapping between the mesh describing the object and a standard spherical mesh. A surface curvature index that is pose-invariant is stored at every node of the mesh. The authors use this object representation for recognition by comparing the spherical model of a reference object with the model extracted from a new observed scene. The authors show how the similarity between reference model and observed data can be evaluated and they show how the pose of the reference object in the observed scene can be easily computed using this representation. The authors present results on real range images which show that this approach to modelling and recognizing 3D objects has three main advantages: (1) it is applicable to complex curved surfaces that cannot be handled by conventional techniques; (2) it reduces the recognition problem to the computation of similarity between spherical distributions; in particular, the recognition algorithm does not require any combinatorial search; and (3) even though it is based on a spherical mapping, the approach can handle occlusions and partial views. We analyze theoretically the subspace best approximating images of a convex Lambertian object taken from the same viewpoint, but under different distant illumination conditions. We analytically construct the principal component analysis for images of a convex Lambertian object, explicitly taking attached shadows into account, and find the principal eigenmodes and eigenvalues with respect to lighting variability. Our analysis makes use of an analytic formula for the irradiance in terms of spherical-harmonic coefficients of the illumination and shows, under appropriate assumptions, that the principal components or eigenvectors are identical to the spherical harmonic basis functions evaluated at the surface normal vectors. Our main contribution is in extending these results to the single-viewpoint case, showing how the principal eigenmodes and eigenvalues are affected when only a limited subset (the upper hemisphere) of normals is available and the spherical harmonics are no longer orthonormal over the restricted domain. Our results are very close, both qualitatively and quantitatively, to previous empirical observations and represent the first essentially complete theoretical explanation of these observations. Face recognition under varying pose is a challenging problem, especially when illumination variations are also present. In this paper, we propose to address one of the most challenging scenarios in face recognition. That is, to identify a subject from a test image that is acquired under a different pose and illumination condition from only one training sample (also known as a gallery image) of this subject in the database. For example, the test image could be semifrontal and illuminated by multiple lighting sources while the corresponding training image is frontal under a single lighting source.
Under the assumption of Lambertian reflectance, the spherical harmonics representation has proved to be effective in modeling illumination variations for a fixed pose. In this paper, we extend the spherical harmonics representation to encode pose information. More specifically, we utilize the fact that 2D harmonic basis images at different poses are related by closed-form linear transformations, and give a more convenient transformation matrix to be directly used for basis images. An immediate application is that we can easily synthesize a different view of a subject under arbitrary lighting conditions by changing the coefficients of the spherical harmonics representation. A more important result is an efficient face recognition method, based on the orthonormality of the linear transformations, for solving the above-mentioned challenging scenario. Thus, we directly project a nonfrontal view test image onto the space of frontal view harmonic basis images. The impact of some empirical factors due to the projection is embedded in a sparse warping matrix; for most cases, we show that the recognition performance does not deteriorate after warping the test image to the frontal view. Very good recognition results are obtained using this method for both synthetic and challenging real images. To deal with image variations due to the illumination problem, Ramamoorthi and Basri have recently and independently derived a spherical harmonic analysis for the Lambertian reflectance and linear subspace. Their theoretical work provided a new approach for face representation; however, both of them had the assumption that the 3D surface normal and albedo are known. This assumption limits the algorithm's application. In this paper, we present a novel method for modeling 3D face shape and albedo from only three images with unknown light directions, and this work fills the gap that Ramamoorthi and Basri left. By taking advantage of the similar 3D shape of all human faces, the highlight of the new method is that it circumvents the linear ambiguity by 3D alignment. The experimental results show that our estimated model can be readily employed for face recognition and 3D reconstruction.
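The nine-dimensional harmonic subspace referred to in these abstracts can be made concrete by evaluating the real spherical harmonics up to degree 2 at the surface normals. The sketch below assumes per-pixel unit normals and albedo are already available and uses the standard real-spherical-harmonic normalization constants; it is an illustration of the idea, not any of the cited implementations.

```python
import numpy as np

def harmonic_basis_images(normals, albedo):
    """Nine harmonic basis images for a Lambertian surface (sketch).

    normals: (num_pixels, 3) unit surface normals
    albedo : (num_pixels,)  per-pixel albedo
    Returns a (num_pixels, 9) matrix whose columns span the approximate
    illumination subspace (real spherical harmonics up to degree 2).
    """
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    Y = np.stack([
        0.282095 * np.ones_like(x),        # Y_0^0
        0.488603 * y,                      # Y_1^-1
        0.488603 * z,                      # Y_1^0
        0.488603 * x,                      # Y_1^1
        1.092548 * x * y,                  # Y_2^-2
        1.092548 * y * z,                  # Y_2^-1
        0.315392 * (3.0 * z**2 - 1.0),     # Y_2^0
        1.092548 * x * z,                  # Y_2^1
        0.546274 * (x**2 - y**2),          # Y_2^2
    ], axis=1)
    return albedo[:, None] * Y

# toy usage: random unit normals and uniform albedo
rng = np.random.default_rng(0)
n = rng.normal(size=(1000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
B = harmonic_basis_images(n, np.ones(1000))
print(B.shape)   # (1000, 9)
# a test image under unknown lighting could then be least-squares projected
# onto the columns of B, e.g. np.linalg.lstsq(B, image, rcond=None)
```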
Abstract of query paper
Cite abstracts
29363
29362
We argue that relationships between Web pages are functions of the user's intent. We identify a class of Web tasks - information-gathering - that can be facilitated by a search engine that provides links to pages which are related to the page the user is currently viewing. We define three kinds of intentional relationships that correspond to whether the user is a) seeking sources of information, b) reading pages which provide information, or c) surfing through pages as part of an extended information-gathering process. We show that these three relationships can be productively mined using a combination of textual and link information and provide three scoring mechanisms that correspond to them: SeekRel , FactRel and SurfRel . These scoring mechanisms incorporate both textual and link information. We build a set of capacitated subnetworks - each corresponding to a particular keyword - that mirror the interconnection structure of the World Wide Web. The scores are computed by computing flows on these subnetworks. The capacities of the links are derived from the hub and authority values of the nodes they connect, following the work of Kleinberg (1998) on assigning authority to pages in hyperlinked environments. We evaluated our scoring mechanism by running experiments on four data sets taken from the Web. We present user evaluations of the relevance of the top results returned by our scoring mechanisms and compare those to the top results returned by Google's Similar Pages feature, and the Companion algorithm proposed by Dean and Henzinger (1999).
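The hub and authority values used above as link capacities follow Kleinberg's HITS iteration. Below is a minimal power-iteration sketch on a small adjacency matrix; the toy graph and the 1-norm normalization are illustrative assumptions, not details taken from the cited papers.

```python
import numpy as np

def hits(adj, n_iter=100):
    """Kleinberg's HITS scores on a directed adjacency matrix (illustrative sketch).

    adj[i, j] = 1.0 if page i links to page j.
    Returns (hub, authority) vectors, each normalized to unit 1-norm.
    """
    n = adj.shape[0]
    hub = np.ones(n)
    auth = np.ones(n)
    for _ in range(n_iter):
        auth = adj.T @ hub            # authority: sum of hub scores of pages linking in
        hub = adj @ auth              # hub: sum of authority scores of pages linked to
        auth /= auth.sum() or 1.0     # guard against an all-zero vector
        hub /= hub.sum() or 1.0
    return hub, auth

# toy graph: 0 -> 1, 0 -> 2, 1 -> 2
A = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
hub, auth = hits(A)
print("hubs:", hub.round(3), "authorities:", auth.round(3))
```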
Networked information spaces contain information entities, corresponding to nodes, which are connected by associations, corresponding to links in the network. Examples of networked information spaces are: the World Wide Web, where information entities are web pages and associations are hyperlinks; the scientific literature, where information entities are articles and associations are references to other articles. Similarity between information entities in a networked information space can be defined not only based on the content of the information entities, but also based on the connectivity established by the associations present. This paper explores the definition of similarity based on connectivity only, and proposes several algorithms for this purpose. Our metrics take advantage of the local neighborhoods of the nodes in the networked information space. Therefore, explicit availability of the networked information space is not required, as long as a query engine is available for following links and extracting the necessary local neighbourhoods for similarity estimation. Two variations of similarity estimation between two nodes are described, one based on the separate local neighbourhoods of the nodes, and another based on the joint local neighbourhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on the citation graph of computer science. The immediate application of this work is in finding papers similar to a given paper in a digital library, but the algorithms are also applicable to other networked information spaces, such as the Web. Published scientific articles are linked together into a graph, the citation graph, through their citations. This paper explores the notion of similarity based on connectivity alone, and proposes several algorithms to quantify it. Our metrics take advantage of the local neighborhoods of the nodes in the citation graph. Two variants of link-based similarity estimation between two nodes are described, one based on the separate local neighborhoods of the nodes, and another based on the joint local neighborhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on a subgraph of the citation graph of computer science in a retrieval context. The results are compared with text-based similarity, and demonstrate the complementarity of link-based and text-based retrieval.
Abstract of query paper
Cite abstracts
29364
29363
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
Traditional mobile ad hoc routing protocols fail to deliver any data in intermittently connected mobile ad hoc networks (ICMN's) because of the absence of complete end-to-end paths in these networks. To overcome this issue, researchers have proposed to use node mobility to carry data around the network. These schemes are referred to as mobility-assisted routing schemes. A mobility-assisted routing scheme forwards data only when appropriate relays meet each other. The time it takes for them to first meet each other is referred to as the meeting time. The time duration they remain in contact with each other is called the contact time. If they fail to exchange the packet during the contact time (due to contention in the network), then they have to wait till they meet each other again. This time duration is referred to as the inter-meeting time. A realistic performance analysis of any mobility-assisted routing scheme requires a knowledge of the statistics of these three quantities. These quantities vary largely depending on the mobility model at hand. This paper studies these three quantities for the three most popularly used mobility models: random direction, random waypoint and random walk models. Hence, this work allows for a realistic performance analysis of any routing scheme under any of these three mobility models. The random waypoint model is a commonly used mobility model in the simulation of ad hoc networks. It is known that the spatial distribution of network nodes moving according to this model is, in general, nonuniform. However, a closed-form expression of this distribution and an in-depth investigation is still missing. This fact impairs the accuracy of the current simulation methodology of ad hoc networks and makes it impossible to relate simulation-based performance results to corresponding analytical results. To overcome these problems, we present a detailed analytical study of the spatial node distribution generated by random waypoint mobility. More specifically, we consider a generalization of the model in which the pause time of the mobile nodes is chosen arbitrarily in each waypoint and a fraction of nodes may remain static for the entire simulation time. We show that the structure of the resulting distribution is the weighted sum of three independent components: the static, pause, and mobility component. This division enables us to understand how the model's parameters influence the distribution. We derive an exact equation of the asymptotically stationary distribution for movement on a line segment and an accurate approximation for a square area. The good quality of this approximation is validated through simulations using various settings of the mobility parameters. In summary, this article gives a fundamental understanding of the behavior of the random waypoint model. Traditionally, ad hoc networks have been viewed as a connected graph over which end-to-end routing paths had to be established. Mobility was considered a necessary evil that invalidates paths and needs to be overcome in an intelligent way to allow for seamless communication between nodes. However, it has recently been recognized that mobility can be turned into a useful ally, by making nodes carry data around the network instead of transmitting them. This model of routing departs from the traditional paradigm and requires new theoretical tools to model its performance.
A mobility-assisted protocol forwards data only when appropriate relays encounter each other, and thus the time between such encounters, called hitting or meeting time, is of high importance. In this paper, we derive accurate closed-form expressions for the expected encounter time between different nodes, under commonly used mobility models. We also propose a mobility model that can successfully capture some important real-world mobility characteristics, often ignored in popular mobility models, and calculate hitting times for this model as well. Finally, we integrate these results with a general theoretical framework that can be used to analyze the performance of mobility-assisted routing schemes. We demonstrate that derivative results concerning the delay of various routing schemes are very accurate, under all the mobility models examined. Hence, this work helps in better understanding the performance of various approaches in different settings, and can facilitate the design of new, improved protocols. Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete path from the source to the destination. There are many real networks that follow this model, for example, wildlife tracking sensor networks, military networks, vehicular ad hoc networks, etc. In this context, conventional routing schemes fail, because they try to establish complete end-to-end paths, before any data is sent. To deal with such networks researchers have suggested to use flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention which can significantly degrade their performance. Furthermore, proposed efforts to reduce the overhead of flooding-based schemes have often been plagued by large delays. With this in mind, we introduce a new family of routing schemes that "spray" a few message copies into the network, and then route each copy independently towards the destination. We show that, if carefully designed, spray routing not only performs significantly fewer transmissions per message, but also has lower average delivery delays than existing schemes; furthermore, it is highly scalable and retains good performance under a large range of scenarios. Finally, we use the theoretical framework proposed in our 2004 paper to analyze the performance of spray routing. We also use this theory to show how to choose the number of copies to be sprayed and how to optimally distribute these copies to relays.
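The hitting and meeting times analyzed in these papers can also be estimated empirically. The Monte Carlo sketch below simulates two nodes under a simple random waypoint model with zero pause time and records the first time they come within communication range; all parameter values are illustrative, not taken from the cited works.

```python
import numpy as np

def rwp_meeting_time(size=1000.0, speed=10.0, comm_range=50.0,
                     dt=1.0, max_steps=200000, seed=0):
    """Monte Carlo estimate of the meeting time of two random-waypoint nodes (sketch)."""
    rng = np.random.default_rng(seed)
    pos = rng.random((2, 2)) * size          # current positions of the two nodes
    dst = rng.random((2, 2)) * size          # their current waypoints
    for step in range(max_steps):
        if np.linalg.norm(pos[0] - pos[1]) <= comm_range:
            return step * dt                 # first time the nodes are in range
        for i in range(2):
            d = dst[i] - pos[i]
            dist = np.linalg.norm(d)
            if dist < speed * dt:            # waypoint reached: pick a new one
                pos[i] = dst[i]
                dst[i] = rng.random(2) * size
            else:
                pos[i] += speed * dt * d / dist
    return np.inf

samples = [rwp_meeting_time(seed=s) for s in range(200)]
print("mean meeting time (s):", np.mean(samples))
```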
Abstract of query paper
Cite abstracts
29365
29364
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
In this paper, we present a survey of various mobility models in both cellular networks and multi-hop networks. We show that group motion occurs frequently in ad hoc networks, and introduce a novel group mobility model Reference Point Group Mobility (RPGM) to represent the relationship among mobile hosts. RPGM can be readily applied to many existing applications. Moreover, by proper choice of parameters, RPGM can be used to model several mobility models which were previously proposed. One of the main themes of this paper is to investigate the impact of the mobility model on the performance of a specific network protocol or application. To this end, we have applied our RPGM model to two different network protocol scenarios, clustering and routing, and have evaluated network performance under different mobility patterns and for different protocol implementations. As expected, the results indicate that different mobility patterns affect the various protocols in different ways. In particular, the ranking of routing algorithms is influenced by the choice of mobility pattern. One of the most important methods for evaluating the characteristics of ad hoc networking protocols is through the use of simulation. Simulation provides researchers with a number of significant benefits, including repeatable scenarios, isolation of parameters, and exploration of a variety of metrics. The topology and movement of the nodes in the simulation are key factors in the performance of the network protocol under study. Once the nodes have been initially distributed, the mobility model dictates the movement of the nodes within the network. Because the mobility of the nodes directly impacts the performance of the protocols, simulation results obtained with unrealistic movement models may not correctly reflect the true performance of the protocols. The majority of existing mobility models for ad hoc networks do not provide realistic movement scenarios; they are limited to random walk models without any obstacles. In this paper, we propose to create more realistic movement models through the incorporation of obstacles. These obstacles are utilized to both restrict node movement as well as wireless transmissions. In addition to the inclusion of obstacles, we construct movement paths using the Voronoi diagram of obstacle vertices. Nodes can then be randomly distributed across the paths, and can use shortest path route computations to destinations at randomly chosen obstacles. Simulation results show that the use of obstacles and pathways has a significant impact on the performance of ad hoc network protocols.
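The reference-point idea behind RPGM, described in the first abstract above, can be sketched in a few lines: a group reference point follows a random waypoint trajectory and each member is placed at a random offset around it. This is a deliberately simplified illustration (single group, fresh offsets every step), not the full model of the cited paper.

```python
import numpy as np

def rpgm_trace(n_nodes=5, n_steps=500, size=1000.0, speed=5.0,
               group_radius=50.0, dt=1.0, seed=0):
    """Positions under a simplified Reference Point Group Mobility model (sketch)."""
    rng = np.random.default_rng(seed)
    ref = rng.random(2) * size               # group reference point
    dst = rng.random(2) * size               # its current waypoint
    trace = np.empty((n_steps, n_nodes, 2))
    for t in range(n_steps):
        d = dst - ref
        dist = np.linalg.norm(d)
        if dist < speed * dt:                # reference point reached its waypoint
            ref, dst = dst, rng.random(2) * size
        else:
            ref = ref + speed * dt * d / dist
        # each member = reference point + a random offset within the group radius
        offsets = rng.uniform(-group_radius, group_radius, size=(n_nodes, 2))
        trace[t] = ref + offsets
    return trace

positions = rpgm_trace()
print(positions.shape)   # (500, 5, 2)
```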
Abstract of query paper
Cite abstracts
29366
29365
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
The simulation of mobile networks calls for a mobility model to generate the trajectories of the mobile users (or nodes). It has been shown that the mobility model has a major influence on the behavior of the system. Therefore, using a realistic mobility model is important if we want to increase the confidence that simulations of mobile systems are meaningful in realistic settings. In this paper we present an executable mobility model that uses real-life mobility characteristics to generate mobility scenarios that can be used for network simulations. We present a structured framework for extracting the mobility characteristics from a WLAN trace, for processing the mobility characteristics to determine a parameter set for the mobility model, and for using a parameter set to generate mobility scenarios for simulations. To derive the parameters of the mobility model, we measure the mobility characteristics of users of a campus wireless network. Therefore, we call this model the WLAN mobility model. Mobility analysis confirms properties observed by other research groups. The validation shows that the WLAN model maps the real-world mobility characteristics to the abstract world of network simulators with a very small error. For users that do not have the possibility to capture a WLAN trace, we explore the value space of the WLAN model parameters and show how different parameter sets influence the mobility of the simulated nodes. We introduce a system for sensing complex social systems with data collected from 100 mobile phones over the course of 9 months. We demonstrate the ability to use standard Bluetooth-enabled mobile telephones to measure information access and use in different contexts, recognize social patterns in daily user activity, infer relationships, identify socially significant locations, and model organizational rhythms. Studying transfer opportunities between wireless devices carried by humans, we observe that the distribution of the inter-contact time, that is the time gap separating two contacts of the same pair of devices, exhibits a heavy tail such as one of a power law, over a large range of values. This observation is confirmed on six distinct experimental data sets. It is at odds with the exponential decay implied by most mobility models. In this paper, we study how this new characteristic of human mobility impacts a class of previously proposed forwarding algorithms. We use a simplified model based on renewal theory to study how the parameters of the distribution impact the delay performance of these algorithms. We make recommendations for the design of well-founded opportunistic forwarding algorithms in the context of human-carried devices. Wireless local-area networks are becoming increasingly popular. They are commonplace on university campuses and inside corporations, and they have started to appear in public areas [17]. It is thus becoming increasingly important to understand user mobility patterns and network usage characteristics on wireless networks. Such an understanding would guide the design of applications geared toward mobile environments (e.g., pervasive computing applications), would help improve simulation tools by providing a more representative workload and better user mobility models, and could result in a more effective deployment of wireless network components. Several studies have recently been performed on wireless university campus networks and public networks.
In this paper, we complement previous research by presenting results from a four week trace collected in a large corporate environment. We study user mobility patterns and introduce new metrics to model user mobility. We also analyze user and load distribution across access points. We compare our results with those from previous studies to extract and explain several network usage and mobility characteristics. We find that average user transfer-rates follow a power law. Load is unevenly distributed across access points and is influenced more by which users are present than by the number of users. We model user mobility with persistence and prevalence. Persistence reflects session durations whereas prevalence reflects the frequency with which users visit various locations. We find that the probability distributions of both measures follow power laws. We examine the fundamental properties that determine the basic performance metrics for opportunistic communications. We first consider the distribution of inter-contact times between mobile devices. Using a diverse set of measured mobility traces, we find as an invariant property that there is a characteristic time, on the order of half a day, beyond which the distribution decays exponentially. Up to this value, the distribution in many cases follows a power law, as shown in recent work. This power-law finding was previously used to support the hypothesis that inter-contact time has a power law tail, and that common mobility models are not adequate. However, we observe that the time scale of interest for opportunistic forwarding may be of the same order as the characteristic time, and thus the exponential tail is important. We further show that already simple models such as random walk and random waypoint can exhibit the same dichotomy in the distribution of inter-contact time as in empirical traces. Finally, we perform an extensive analysis of several properties of human mobility patterns across several dimensions, and we present empirical evidence that the return time of a mobile device to its favorite location site may already explain the observed dichotomy. Our findings suggest that existing results on the performance of forwarding schemes based on power-law tails might be overly pessimistic.
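The inter-contact-time dichotomy discussed above is straightforward to inspect on any contact trace: collect the gaps between successive contacts of each node pair and examine the empirical CCDF on log-log and lin-log axes. The sketch below assumes a simple (node_a, node_b, start, end) tuple format for contacts, which is an assumption about the trace, not a standard.

```python
import numpy as np
from collections import defaultdict

def inter_contact_times(contacts):
    """Flat array of inter-contact times from a list of contact intervals (sketch).

    contacts: iterable of (node_a, node_b, start_time, end_time) tuples.
    An inter-contact time is the gap between the end of one contact of a pair
    and the start of that pair's next contact."""
    by_pair = defaultdict(list)
    for a, b, start, end in contacts:
        by_pair[frozenset((a, b))].append((start, end))
    gaps = []
    for intervals in by_pair.values():
        intervals.sort()
        for (s0, e0), (s1, _) in zip(intervals, intervals[1:]):
            gaps.append(s1 - e0)
    return np.array(gaps)

def empirical_ccdf(x):
    x = np.sort(x)
    return x, 1.0 - np.arange(1, len(x) + 1) / len(x)

# toy trace: two pairs with a few contacts each (hypothetical times in seconds)
trace = [(1, 2, 0, 10), (1, 2, 100, 120), (1, 2, 5000, 5010),
         (1, 3, 50, 60), (1, 3, 900, 905)]
t, ccdf = empirical_ccdf(inter_contact_times(trace))
print(list(zip(t, ccdf)))
```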
Abstract of query paper
Cite abstracts
29367
29366
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
Validation of mobile ad hoc network protocols relies almost exclusively on simulation. The value of the validation is, therefore, highly dependent on how realistic the movement models used in the simulations are. Since there is a very limited number of available real traces in the public domain, synthetic models for movement pattern generation must be used. However, most widely used models are currently very simplistic, their focus being ease of implementation rather than soundness of foundation. As a consequence, simulation results of protocols are often based on randomly generated movement patterns and, therefore, may differ considerably from those that can be obtained by deploying the system in real scenarios. Movement is strongly affected by the needs of humans to socialise or cooperate, in one form or another. Fortunately, humans are known to associate in particular ways that can be mathematically modelled and that have been studied in social sciences for years.In this paper we propose a new mobility model founded on social network theory. The model allows collections of hosts to be grouped together in a way that is based on social relationships among the individuals. This grouping is then mapped to a topographical space, with movements influenced by the strength of social ties that may also change in time. We have validated our model with real traces by showing that the synthetic mobility traces are a very good approximation of human movement patterns.
Abstract of query paper
Cite abstracts
29368
29367
The eigenvalue density for members of the Gaussian orthogonal and unitary ensembles follows the Wigner semicircle law. If the Gaussian entries are all shifted by a constant amount s/(2N)^{1/2}, where N is the size of the matrix, in the large N limit a single eigenvalue will separate from the support of the Wigner semicircle provided s>1. In this study, using an asymptotic analysis of the secular equation for the eigenvalue condition, we compare this effect to analogous effects occurring in general variance Wishart matrices and matrices from the shifted mean chiral ensemble. We undertake an analogous comparative study of eigenvalue separation properties when the sizes of the matrices are fixed and s→∞, and higher rank analogs of this setting. This is done using exact expressions for eigenvalue probability densities in terms of generalized hypergeometric functions and using the interpretation of the latter as a Green function in the Dyson Brownian motion model. For the shifted mean Gaussian unitary ensemble an...
Let A = (a_ij) be an n × n matrix whose entries for i ≧ j are independent random variables and a_ji = a_ij. Suppose that every a_ij is bounded and for every i > j we have E a_ij = μ, D² a_ij = σ² and E a_ii = ν. We establish a large deviation principle for the largest eigenvalue of a rank one deformation of a matrix from the GUE or GOE. As a corollary, we get another proof of the phenomenon, well-known in learning theory and finance, that the largest eigenvalue separates from the bulk when the perturbation is large enough. A large part of the paper is devoted to an auxiliary result on the continuity of spherical integrals in the case when one of the matrices is of rank one, as studied in one of our previous works. We compute the limiting eigenvalue statistics at the edge of the spectrum of large Hermitian random matrices perturbed by the addition of small rank deterministic matrices. We consider random Hermitian matrices with independent Gaussian entries M_ij, i ≤ j, with various expectations. We prove that the largest eigenvalue of such random matrices exhibits, in the large N limit, various limiting distributions depending on both the eigenvalues of the perturbing matrix and its rank. This rank is also allowed to increase with N in some restricted way. A recently published Letter by Kota and Potbhare (1977) obtains the averaged spectrum of a large symmetric random matrix each element of which has a finite mean: their results disagree with two recent calculations which predict that under certain circumstances a single isolated eigenvalue splits off from the continuous semicircular distribution of eigenvalues associated with the random part of the matrix. This letter offers a simple re-derivation of this result and corrects the error in the work of Kota and Potbhare.
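The separation phenomenon described in these abstracts is easy to reproduce numerically: shift every entry of a GOE-like matrix by a constant mean and watch the largest eigenvalue detach from the semicircle bulk once the shift is strong enough. The simulation below only illustrates the effect, not the cited proofs; matrix size and shift values are arbitrary.

```python
import numpy as np

def shifted_goe_top_eig(n=800, mu=0.5, sigma=1.0, seed=0):
    """Largest eigenvalue of a symmetric Gaussian matrix whose entry means are all mu (sketch)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, sigma, size=(n, n))
    sym = (a + a.T) / np.sqrt(2.0)           # GOE-like: off-diagonal variance sigma^2
    sym += mu                                # constant shift = mu * (all-ones matrix), a rank-one perturbation
    return np.linalg.eigvalsh(sym)[-1]

n, sigma = 800, 1.0
bulk_edge = 2.0 * sigma * np.sqrt(n)         # semicircle support edge for this scaling
for mu in (0.0, 0.02, 0.1):                  # weak shifts stay in the bulk, strong ones separate
    lam = shifted_goe_top_eig(n=n, mu=mu, sigma=sigma)
    print(f"mu={mu:5.2f}  lambda_max={lam:8.1f}  bulk edge approx {bulk_edge:.1f}")
```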
Abstract of query paper
Cite abstracts
29369
29368
The eigenvalue density for members of the Gaussian orthogonal and unitary ensembles follows the Wigner semicircle law. If the Gaussian entries are all shifted by a constant amount s/(2N)^{1/2}, where N is the size of the matrix, in the large N limit a single eigenvalue will separate from the support of the Wigner semicircle provided s>1. In this study, using an asymptotic analysis of the secular equation for the eigenvalue condition, we compare this effect to analogous effects occurring in general variance Wishart matrices and matrices from the shifted mean chiral ensemble. We undertake an analogous comparative study of eigenvalue separation properties when the sizes of the matrices are fixed and s→∞, and higher rank analogs of this setting. This is done using exact expressions for eigenvalue probability densities in terms of generalized hypergeometric functions and using the interpretation of the latter as a Green function in the Dyson Brownian motion model. For the shifted mean Gaussian unitary ensemble an...
In a spiked population model, the population covariance matrix has all its eigenvalues equal to unity except for a few fixed eigenvalues (spikes). This model was proposed by Johnstone to cope with empirical findings on various data sets. The question is to quantify the effect of the perturbation caused by the spike eigenvalues. A recent work by Baik and Silverstein establishes the almost sure limits of the extreme sample eigenvalues associated to the spike eigenvalues when the population and the sample sizes become large. This paper establishes the limiting distributions of these extreme sample eigenvalues. As another important result of the paper, we provide a central limit theorem on random sesquilinear forms. The spiked model is an important special case of the Wishart ensemble, and a natural generalization of the white Wishart ensemble. Mathematically, it can be defined on three kinds of variables: the real, the complex and the quaternion. For practical application, we are interested in the limiting distribution of the largest sample eigenvalue. We first give a new proof of the result of Baik, Ben Arous and Péché for the complex spiked model, based on the method of multiple orthogonal polynomials by Bleher and Kuijlaars. Then in the same spirit we present a new result for the rank 1 quaternionic spiked model, proven by combinatorial identities involving quaternionic zonal polynomials (α = 1/2 Jack polynomials) and skew orthogonal polynomials. We find a phase transition phenomenon for the limiting distribution in the rank 1 quaternionic spiked model as the spiked population eigenvalue increases, and recognize the seemingly new limiting distribution at the critical point as the limiting distribution of the largest sample eigenvalue in the real white Wishart ensemble. Finally we give conjectures for the higher rank quaternionic spiked model and the real spiked model. We compute the limiting distributions of the largest eigenvalue of a complex Gaussian sample covariance matrix when both the number of samples and the number of variables in each sample become large. When all but finitely many, say r, eigenvalues of the covariance matrix are the same, the dependence of the limiting distribution of the largest eigenvalue of the sample covariance matrix on those distinguished r eigenvalues of the covariance matrix is completely characterized in terms of an infinite sequence of new distribution functions that generalize the Tracy-Widom distributions of random matrix theory. In particular, a phase transition phenomenon is observed. Our results also apply to a last passage percolation model and a queuing model. We consider a spiked population model, proposed by Johnstone, in which all the population eigenvalues are one except for a few fixed eigenvalues. The question is to determine how the sample eigenvalues depend on the non-unit population ones when both sample size and population size become large. This paper completely determines the almost sure limits of the sample eigenvalues in a spiked model for a general class of samples.
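The almost-sure limits described above can be checked with a quick simulation of a rank-one spiked covariance: with aspect ratio γ = p/n, a spike λ below 1 + √γ leaves the top sample eigenvalue at the bulk edge (1 + √γ)², while a spike above it pulls the top eigenvalue towards λ(1 + γ/(λ − 1)). The sketch below is illustrative; sizes and spike values are arbitrary.

```python
import numpy as np

def spiked_top_eigenvalue(p=400, n=1600, spike=3.0, seed=0):
    """Top eigenvalue of a sample covariance with one spiked population eigenvalue (sketch)."""
    rng = np.random.default_rng(seed)
    pop_eigs = np.ones(p)
    pop_eigs[0] = spike                               # a single spiked direction
    X = rng.normal(size=(n, p)) * np.sqrt(pop_eigs)   # rows ~ N(0, diag(pop_eigs))
    S = X.T @ X / n                                   # sample covariance
    return np.linalg.eigvalsh(S)[-1]

gamma = 400 / 1600
for spike in (1.0, 1.2, 3.0):
    top = spiked_top_eigenvalue(spike=spike)
    if spike > 1 + np.sqrt(gamma):
        predicted = spike * (1 + gamma / (spike - 1))   # supercritical limit
    else:
        predicted = (1 + np.sqrt(gamma)) ** 2           # bulk edge
    print(f"spike={spike:3.1f}  simulated={top:.3f}  predicted limit={predicted:.3f}")
```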
Abstract of query paper
Cite abstracts
29370
29369
The eigenvalue density for members of the Gaussian orthogonal and unitary ensembles follows the Wigner semicircle law. If the Gaussian entries are all shifted by a constant amount s/(2N)^{1/2}, where N is the size of the matrix, in the large N limit a single eigenvalue will separate from the support of the Wigner semicircle provided s>1. In this study, using an asymptotic analysis of the secular equation for the eigenvalue condition, we compare this effect to analogous effects occurring in general variance Wishart matrices and matrices from the shifted mean chiral ensemble. We undertake an analogous comparative study of eigenvalue separation properties when the sizes of the matrices are fixed and s→∞, and higher rank analogs of this setting. This is done using exact expressions for eigenvalue probability densities in terms of generalized hypergeometric functions and using the interpretation of the latter as a Green function in the Dyson Brownian motion model. For the shifted mean Gaussian unitary ensemble an...
Consider n non-intersecting particles on the real line (Dyson Brownian motions), all starting from the origin at time=0, and forced to return to x=0 at time=1. For large n, the average mean density of particles has its support, for each 0<t<1, within the interior of an ellipse. The Airy process is defined as the motion of these non-intersecting Brownian motions for large n, but viewed from an arbitrary point on the ellipse with an appropriate space-time rescaling. Assume now a finite number r of these particles are forced to a different target point. Does it affect the Brownian fluctuations along the ellipse for large n? In this paper, we show that no new process appears as long as one considers points on the ellipse, for which the t-coordinate is smaller than the t-coordinate of the point of tangency of the tangent to the curve passing through the target point. At this point of tangency the fluctuations obey a new statistics: the Airy process with r outliers (in short: r-Airy process). The log of the transition probability of this new process is given by the Fredholm determinant of a new kernel (extending the Airy kernel) and it satisfies a non-linear PDE in x and the time. We continue the study of the Hermitian random matrix ensemble with external source A, where A has two distinct eigenvalues ±a of equal multiplicity. This model exhibits a phase transition for the value a=1, since the eigenvalues of M accumulate on two intervals for a>1, and on one interval for 0<a<1. The case a>1 was treated in Part I, where it was proved that local eigenvalue correlations have the universal limiting behavior which is known for unitarily invariant random matrices, that is, limiting eigenvalue correlations are expressed in terms of the sine kernel in the bulk of the spectrum, and in terms of the Airy kernel at the edge. In this paper we establish the same results for the case 0<a<1. As in Part I we apply the Deift-Zhou steepest descent analysis to a 3×3-matrix Riemann-Hilbert problem. Due to the different structure of an underlying Riemann surface, the analysis includes an additional step involving a global opening of lenses, which is a new phenomenon in the steepest descent analysis of Riemann-Hilbert problems. We present a random matrix interpretation of the distribution functions which have appeared in the study of the one-dimensional polynuclear growth (PNG) model with external sources. It is shown that the distribution GOE², which is defined as the square of the GOE Tracy-Widom distribution, can be obtained as the scaled largest eigenvalue distribution of a special case of a random matrix model with a deterministic source, which has been studied in a different context previously. Compared to the original interpretation of GOE² as the "square of GOE", ours has the advantage that it can also describe the transition from the GUE Tracy-Widom distribution to GOE². We further demonstrate that our random matrix interpretation can be obtained naturally by noting the similarity of the topology between a certain non-colliding Brownian motion model and the multi-layer PNG model with an external source. This provides us with a multi-matrix model interpretation of the multi-point height distributions of the PNG model with an external source. A new type of Coulomb gas is defined, consisting of n point charges executing Brownian motions under the influence of their mutual electrostatic repulsions.
It is proved that this gas gives an exact mathematical description of the behavior of the eigenvalues of an (n × n) Hermitian matrix, when the elements of the matrix execute independent Brownian motions without mutual interaction. By a suitable choice of initial conditions, the Brownian motion leads to an ensemble of random matrices which is a good statistical model for the Hamiltonian of a complex system possessing approximate conservation laws. The development with time of the Coulomb gas represents the statistical behavior of the eigenvalues of a complex system as the strength of conservation-destroying interactions is gradually increased. A "virial theorem" is proved for the Brownian-motion gas, and various properties of the stationary Coulomb gas are deduced as corollaries. We describe the spectral statistics of the first finite number of eigenvalues in a newly-forming band on the hard-edge of the spectrum of a random Hermitean matrix model. It is found that in a suitable scaling regime, they are described by the same spectral statistics of a finite-size Laguerre-type matrix model. The method is rigorously based on the Riemann-Hilbert analysis of the corresponding orthogonal polynomials.
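Dyson's Coulomb-gas picture quoted above can be simulated directly with a naive Euler discretization of the eigenvalue SDE dλ_i = √(2/β) dB_i + Σ_{j≠i} dt/(λ_i − λ_j). The sketch below uses β = 2 and a spread-out initial condition; the normalization convention, step size, and starting points are assumptions, and the naive scheme is not guaranteed to preserve the non-collision property exactly.

```python
import numpy as np

def dyson_brownian_motion(n=20, beta=2.0, t_max=1.0, dt=1e-4, seed=0):
    """Euler discretization of Dyson Brownian motion for n eigenvalues (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lam = np.linspace(-1.0, 1.0, n)          # distinct starting points
    steps = int(t_max / dt)
    for _ in range(steps):
        diff = lam[:, None] - lam[None, :]
        np.fill_diagonal(diff, np.inf)       # exclude the i == j term (1/inf = 0)
        drift = np.sum(1.0 / diff, axis=1)   # pairwise Coulomb repulsion
        lam = lam + drift * dt + np.sqrt(2.0 / beta * dt) * rng.normal(size=n)
        lam.sort()
    return lam

print(dyson_brownian_motion())
```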
Abstract of query paper
Cite abstracts
29371
29370
Most research related to unithood has been conducted as part of a larger effort for the determination of termhood. Consequently, novelties are rare in this small sub-field of term extraction. In addition, existing work was mostly empirically motivated and derived. We propose a new probabilistically-derived measure, independent of any influences of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from the Google search engine for the measurement of unithood. Our comparative study using 1,825 test cases against an existing empirically-derived function revealed an improvement in terms of precision, recall and accuracy.
In this paper, we propose a new idea for the automatic recognition of domain-specific terms. Our idea is based on the statistics between a compound noun and its component single-nouns. More precisely, we focus basically on how many nouns adjoin the noun in question to form compound nouns. We propose several scoring methods based on this idea and experimentally evaluate them on the NTCIR1 TMREC test collection. The results are very promising, especially in the low recall area. The information used for the extraction of terms can be considered as rather 'internal', i.e. coming from the candidate string itself. This paper presents the incorporation of 'external' information derived from the context of the candidate string. It is incorporated into the C-value approach for automatic term recognition (ATR), in the form of weights constructed from statistical characteristics of the context words of the candidate string.
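The C-value score mentioned in the second abstract combines candidate length, frequency, and nesting inside longer candidates. The sketch below implements the commonly cited form of the formula on a toy candidate list with hypothetical counts; the handling of single-word candidates and the nesting bookkeeping are simplifications.

```python
import math
from collections import defaultdict

def c_values(freq):
    """C-value termhood scores (sketch of the commonly cited formula).

    freq: dict mapping a candidate term (tuple of words) to its corpus frequency.
    A candidate is 'nested' if it occurs inside a longer candidate."""
    containers = defaultdict(list)
    terms = list(freq)
    for a in terms:
        for b in terms:
            if len(b) > len(a) and any(b[i:i + len(a)] == a
                                       for i in range(len(b) - len(a) + 1)):
                containers[a].append(b)        # b is a longer candidate containing a
    scores = {}
    for a in terms:
        length_factor = math.log2(max(len(a), 2))   # simplification for single words
        if not containers[a]:
            scores[a] = length_factor * freq[a]
        else:
            avg_container_freq = sum(freq[b] for b in containers[a]) / len(containers[a])
            scores[a] = length_factor * (freq[a] - avg_container_freq)
    return scores

# toy candidate list (hypothetical counts)
freq = {("floating", "point"): 40,
        ("floating", "point", "arithmetic"): 25,
        ("point", "arithmetic"): 27}
print(c_values(freq))
```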
Abstract of query paper
Cite abstracts
29372
29371
The benefit of multi-antenna receivers is investigated in wireless ad hoc networks, and the main finding is that network throughput can be made to scale linearly with the number of receive antennas N_r even if each transmitting node uses only a single antenna. This is in contrast to a large body of prior work in single-user, multiuser, and ad hoc wireless networks that have shown linear scaling is achievable when multiple receive and transmit antennas (i.e., MIMO transmission) are employed, but that throughput increases logarithmically or sublinearly with N_r when only a single transmit antenna (i.e., SIMO transmission) is used. The linear gain is achieved by using the receive degrees of freedom to simultaneously suppress interference and increase the power of the desired signal, and exploiting the subsequent performance benefit to increase the density of simultaneous transmissions instead of the transmission rate. This result is proven in the transmission capacity framework, which presumes single-hop transmissions in the presence of randomly located interferers, but it is also illustrated that the result holds under several relaxations of the model, including imperfect channel knowledge, multihop transmission, and regular networks (i.e., interferers are deterministically located on a grid).
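The receive-side idea described above, spending some antenna degrees of freedom on cancelling the strongest interferers and the remainder on boosting the desired signal, can be sketched as a partial zero-forcing filter: project the desired channel onto the orthogonal complement of the selected interferers' channels. This is an illustrative linear-algebra sketch under idealized channel knowledge, not the paper's analysis.

```python
import numpy as np

def partial_zf_mrc_weights(h_desired, H_interf, k):
    """Receive filter that nulls the k strongest interferers, then MRC on the rest (sketch).

    h_desired: (Nr,) desired channel vector
    H_interf : (Nr, K) columns are interferer channels, assumed ordered strongest first
    k        : number of interferers to null (0 < k < Nr)
    """
    Nr = h_desired.shape[0]
    Hk = H_interf[:, :k]                       # channels to cancel
    Q, _ = np.linalg.qr(Hk)                    # orthonormal basis of span(Hk)
    P = np.eye(Nr) - Q @ Q.conj().T            # projector onto its orthogonal complement
    w = P @ h_desired                          # MRC direction within the null space
    return w / np.linalg.norm(w)

# toy example: Nr = 4 antennas, cancel the 2 strongest of 6 interferers
rng = np.random.default_rng(0)
Nr, K, k = 4, 6, 2
h = (rng.normal(size=Nr) + 1j * rng.normal(size=Nr)) / np.sqrt(2)
Hi = (rng.normal(size=(Nr, K)) + 1j * rng.normal(size=(Nr, K))) / np.sqrt(2)
w = partial_zf_mrc_weights(h, Hi, k)
print("residual on nulled interferers:", np.abs(w.conj() @ Hi[:, :k]))
print("gain on desired channel       :", np.abs(w.conj() @ h))
```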
This paper investigates the performance of spatial diversity techniques in dense ad hoc networks. We derive analytical expressions for the contention density in systems employing MIMO-MRC or OSTBC. In the case of MIMO-MRC the expressions are based on new expansions for the SIR distribution in the high interference regime typical in dense networks. Our results are confirmed through comparison with Monte Carlo simulations. In ad hoc networks of nodes equipped with multiple antennas, the tradeoff between spatial multiplexing and diversity gains in each link impacts the overall network capacity. An optimal algorithm is developed for adaptive rate and power control for a communication link over multiple channels in a Poisson field of interferers. The algorithm and its analysis demonstrate that optimum area spectral efficiency is achieved when each communication link in a large distributed wireless network properly balances between diversity and multiplexing techniques. The channel adaptive algorithm is shown to be superior to traditional and static multi-antenna architectures, as well as to certain channel adaptive strategies previously proposed. Lastly, the adaptive rate control algorithm is coupled with an optimum frequency hopping scheme to achieve the maximum area spectral efficiency. We study in this paper the network spectral efficiency of a multiple-input multiple-output (MIMO) ad hoc network with K simultaneous communicating transmitter-receiver pairs. Assuming that each transmitter is equipped with t antennas and each receiver with r antennas and each receiver implements single-user detection, we show that in the absence of channel state information (CSI) at the transmitters, the asymptotic network spectral efficiency is limited by r nats/s/Hz as K → ∞ and is independent of t and the transmit power. With CSI corresponding to the intended receiver available at the transmitter, we demonstrate that the asymptotic spectral efficiency is at least t + r + 2√(tr) nats/s/Hz. Asymptotically optimum signaling is also derived under the same CSI assumption, i.e., each transmitter knows the channel corresponding to its desired receiver only. Further capacity improvement is possible with stronger CSI assumptions; we demonstrate this using a heuristic interference suppression transmit beamforming approach. The conventional orthogonal transmission approach is also analyzed. In particular, we show that with idealized medium access control, the channelized transmission has unbounded asymptotic spectral efficiency under the constant per-user power constraint. The impact of different power constraints on the asymptotic spectral efficiency is also carefully examined. Finally, numerical examples are given that confirm our analysis. We study the throughput limits of a MIMO (multiple-input multiple-output) ad hoc network with K simultaneous communicating transceiver pairs. Assuming that each transmitter is equipped with t antennas and the receivers with r antennas, we show that in the absence of channel state information (CSI) at the transmitters, the asymptotic network throughput is limited by r nats/s/Hz as K → ∞. With CSI corresponding to the desired receiver available at the transmitter, we demonstrate that an asymptotic throughput of t + r + 2√(tr) nats/s/Hz can be achieved using a simple beamforming approach. Further, we show that the asymptotically optimal transmission scheme with CSI amounts to a single-user waterfilling for a properly scaled channel.
Beamforming antennas have the potential to provide a fundamental breakthrough in ad hoc network capacity. We present a broad-based examination of this potential, focusing on exploiting the longer ranges as well as the reduced interference that beamforming antennas can provide. We consider a number of enhancements to a conventional ad hoc network system, and evaluate the impact of each enhancement using simulation. Such enhancements include "aggressive" and "conservative" channel access models for beamforming antennas, link power control, and directional neighbor discovery. Our simulations are based on detailed modeling of steered as well as switched beams using antenna patterns of varying gains, and a realistic radio and propagation model. For the scenarios studied, our results show that beamforming can yield a 28% to 118% (depending upon the density) improvement in throughput, and up to a factor-of-28 reduction in delay. Our study also tells us which mechanisms are likely to be more effective and under what conditions, which in turn identifies areas where future research is needed. In this paper, the rate regions are studied for MIMO ad hoc networks. We first apply a framework developed for single antenna systems to MIMO systems, which gives the ultimate capacity region. Motivated by the fact that the ultimate capacity regions allow an optimization that may be unrealistic in some networks, a new concept of average rate region is proposed. We show the large gap between the ultimate capacity region and the new average rate region, while the latter is an upper bound on the performance of many existing ad hoc routing protocols. On the other hand, the average rate region also gives the average system performance over fading or random node positions. The outage capacity region is also defined. Through the study of the different rate regions, we show that the gain from multiple antennas for networks is similar to that for point-to-point communications. The gains obtained from multi-hop routing and spatial reuse are also shown for MIMO networks. Directional antennas offer tremendous potential for improving the performance of ad hoc networks. Harnessing this potential, however, requires new mechanisms at the medium access and network layers for intelligently and adaptively exploiting the antenna system. While recent years have seen a surge of research into such mechanisms, the problem of developing a complete ad hoc networking system, including the unique challenge of real-life prototype development and experimentation, has not been addressed. In this paper, we present utilizing directional antennas for ad hoc networking (UDAAN). UDAAN is an interacting suite of modular network- and medium access control (MAC)-layer mechanisms for adaptive control of steered or switched antenna systems in an ad hoc network. UDAAN consists of several new mechanisms (a directional power-controlled MAC, neighbor discovery with beamforming, link characterization for directional antennas, proactive routing and forwarding), all working cohesively to provide the first complete systems solution. We also describe the development of a real-life ad hoc network testbed using UDAAN with switched directional antennas, and we discuss the lessons learned during field trials. High fidelity simulation results, using the same networking code as in the prototype, are also presented both for a specific scenario and using random mobility models.
For the range of parameters studied, our results show that UDAAN can produce a very significant improvement in throughput over omnidirectional communications.
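To make the diversity-versus-beamforming comparison above concrete, here is a small Monte Carlo sketch (not taken from any of the cited papers; K, t, r, the SNR and the trial count are arbitrary illustrative values) of the sum spectral efficiency of K interfering MIMO links under single-user detection, with and without per-link transmit eigen-beamforming.

```python
import numpy as np

rng = np.random.default_rng(0)
K, t, r, snr = 20, 4, 4, 10.0   # links, tx/rx antennas, per-link SNR (assumed values)

def sum_rate(beamform, trials=200):
    rates = []
    for _ in range(trials):
        # H[k, j]: r x t Rayleigh channel from transmitter j to receiver k
        H = (rng.standard_normal((K, K, r, t)) + 1j * rng.standard_normal((K, K, r, t))) / np.sqrt(2)
        # transmit direction per link: dominant right singular vector if CSI, else random
        V = np.empty((K, t), dtype=complex)
        for j in range(K):
            if beamform:
                _, _, vh = np.linalg.svd(H[j, j])
                V[j] = vh[0].conj()
            else:
                v = rng.standard_normal(t) + 1j * rng.standard_normal(t)
                V[j] = v / np.linalg.norm(v)
        total = 0.0
        for k in range(K):
            sig = H[k, k] @ V[k]                      # effective desired channel at receiver k
            Q = np.eye(r, dtype=complex) / snr        # noise covariance
            for j in range(K):
                if j != k:
                    hj = H[k, j] @ V[j]
                    Q += np.outer(hj, hj.conj())      # interference covariance
            # single-user detection: rate of link k treating interference as noise
            total += np.log(1 + np.real(sig.conj() @ np.linalg.solve(Q, sig)))
        rates.append(total)
    return np.mean(rates)   # nats/s/Hz

print("isotropic :", sum_rate(False))
print("beamformed:", sum_rate(True))
```

On typical runs the beamformed variant gives a visibly larger sum rate, which is only meant to illustrate the qualitative point made in the abstracts, not to reproduce their asymptotic constants.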
Abstract of query paper
Cite abstracts
29373
29372
We provide a computer-verified exact monadic functional implementation of the Riemann integral in type theory. Together with previous work by O'Connor, this may be seen as the beginning of the realization of Bishop's vision to use constructive mathematics as a programming language for exact analysis.
The notion of formal space was introduced by Fourman and Grayson [FG] only a few years ago, but it is only a recent though important step of a long story whose roots involve such names as Brouwer and Stone and whose development is due to mathematicians from different fields, mainly algebraic geometry, category theory and logic. Cauchy’s construction of reals as sequences of rational approximations is the theoretical basis for a number of implementations of exact real numbers, while Dedekind’s construction of reals as cuts has inspired fewer useful computational ideas. Nevertheless, we can see the computational content of Dedekind reals by constructing them within Abstract Stone Duality (ASD), a computationally meaningful calculus for topology. This provides the theoretical background for a novel way of computing with real numbers in the style of logic programming. Real numbers are defined in terms of (lower and upper) Dedekind cuts, while programs are expressed as statements about real numbers in the language of ASD. By adapting Newton’s method to interval arithmetic we can make the computations as efficient as those based on Cauchy reals. The results reported in this talk are joint work with Paul Taylor. We show how functional languages can be used to write programs for real-valued functionals in exact real arithmetic. We concentrate on two useful functionals: definite integration, and the functional returning the maximum value of a continuous function over a closed interval. The algorithms are a practical application of a method, due to Berger, for computing quantifiers over streams. Correctness proofs for the algorithms make essential use of domain theory. We give a coinductive characterization of the set of continuous functions defined on a compact real interval, and extract certified programs that construct and combine exact real number algorithms with respect to the binary signed digit representation of real numbers. The data type corresponding to the coinductive definition of continuous functions consists of finitely branching non-wellfounded trees describing when the algorithm writes and reads digits. This is a pilot study in using proof-theoretic methods for certified algorithms in exact real arithmetic.
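As a minimal illustration of the exact-integration theme above (and not of the Coq or ASD developments themselves), the following sketch brackets the integral of a monotone increasing function between its left and right Riemann sums, using exact rational arithmetic for the bookkeeping; the integrand and tolerance are placeholders.

```python
from fractions import Fraction

def enclose_integral(f, a, b, tol):
    """Bracket the integral of a monotone increasing f on [a, b] between its
    left and right Riemann sums, refining until the gap is below tol."""
    a, b = Fraction(a), Fraction(b)
    n = 1
    while True:
        h = (b - a) / n
        xs = [a + i * h for i in range(n + 1)]
        lower = sum(f(x) for x in xs[:-1]) * h   # left sum  <= integral
        upper = sum(f(x) for x in xs[1:]) * h    # right sum >= integral
        if upper - lower < tol:
            return lower, upper
        n *= 2

# example: the integral of x^2 on [0, 1] is exactly 1/3
lo, hi = enclose_integral(lambda x: x * x, 0, 1, Fraction(1, 1000))
print(float(lo), float(hi))   # an interval of width < 1/1000 containing 1/3
```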
Abstract of query paper
Cite abstracts
29374
29373
We provide a computer-verified exact monadic functional implementation of the Riemann integral in type theory. Together with previous work by O'Connor, this may be seen as the beginning of the realization of Bishop's vision to use constructive mathematics as a programming language for exact analysis.
There are two incompatible Coq libraries that have a theory of the real numbers; the Coq standard library gives an axiomatic treatment of classical real numbers, while the CoRN library from Nijmegen defines constructively valid real numbers. Unfortunately, this means results about one structure cannot easily be used in the other structure. We present a way of interfacing these two libraries by showing that their real number structures are isomorphic assuming the classical axioms already present in the standard library reals. This allows us to use O'Connor's decision procedure for solving ground inequalities present in CoRN to solve inequalities about the reals from the Coq standard library, and it allows theorems from the Coq standard library to apply to problems about the CoRN reals.
Abstract of query paper
Cite abstracts
29375
29374
We provide a computer-verified exact monadic functional implementation of the Riemann integral in type theory. Together with previous work by O'Connor, this may be seen as the beginning of the realization of Bishop's vision to use constructive mathematics as a programming language for exact analysis.
It is well known that mathematical proofs often contain (abstract) algorithms, but although these algorithms can be understood by a human, it still takes a lot of time and effort to implement these algorithms on a computer; moreover, one runs the risk of making mistakes in the process. We present C-CoRN, the Constructive Coq Repository at Nijmegen. It consists of a mathematical library of constructive algebra and analysis formalized in the theorem prover Coq. We explain the structure and the contents of the library and we discuss the motivation and some (possible) applications of such a library.
Abstract of query paper
Cite abstracts
29376
29375
Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segregate overlapping shapes that correspond to different data points. We demonstrate performance of individual algorithms, using a combination of generated and real-life images.
Two-dimensional (2-D) plots in digital documents contain important information. Often, the results of scientific experiments and performance of businesses are summarized using plots. Although 2-D plots are easily understood by human users, current search engines rarely utilize the information contained in the plots to enhance the results returned in response to queries posed by end-users. We propose an automated algorithm for extracting information from line curves in 2-D plots. The extracted information can be stored in a database and indexed to answer end-user queries and enhance search results. We have collected 2-D plot images from a variety of resources and tested our extraction algorithms. Experimental evaluation has demonstrated that our method can produce results suitable for real-world use. Figures are very important non-textual information contained in scientific documents. Current digital libraries do not provide users with tools to retrieve documents based on the information available within the figures. We propose an architecture for retrieving documents by integrating figures and other information. The initial step in enabling integrated document search is to categorize figures into a set of pre-defined types. We propose several categories of figures based on their functionalities in scholarly articles. We have developed a machine-learning-based approach for automatic categorization of figures. Both global features, such as texture, and part features, such as lines, are utilized in the architecture for discriminating among figure categories. The proposed approach has been evaluated on a testbed document set collected from the CiteSeer scientific literature digital library. Experimental evaluation has demonstrated that our algorithms can produce acceptable results for real-world use. Our tools will be integrated into a scientific document digital library. In this paper, an algorithm is developed for segmenting document images into four classes: background, photograph, text, and graph. Features used for classification are based on the distribution patterns of wavelet coefficients in high frequency bands. Two important attributes of the algorithm are its multiscale nature (it classifies an image at different resolutions adaptively, enabling accurate classification at class boundaries as well as fast classification overall) and its use of accumulated context information for improving classification accuracy.
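A rough sketch of one early step such a figure-extraction pipeline might use, locating long horizontal and vertical segments as axis candidates with a Hough transform; this is an illustrative heuristic, not the cited systems, and the input file name is a placeholder.

```python
import cv2
import numpy as np

def axis_candidates(path, min_frac=0.5):
    """Return long near-horizontal / near-vertical segments in a plot image,
    which typically include the x and y axes (illustrative heuristic only)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)
    h, w = gray.shape
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=int(min_frac * min(h, w)), maxLineGap=5)
    horizontal, vertical = [], []
    for x1, y1, x2, y2 in (segs.reshape(-1, 4) if segs is not None else []):
        if abs(y2 - y1) <= 2:        # nearly horizontal
            horizontal.append((x1, y1, x2, y2))
        elif abs(x2 - x1) <= 2:      # nearly vertical
            vertical.append((x1, y1, x2, y2))
    return horizontal, vertical

# hx, vx = axis_candidates("plot.png")   # hypothetical input image
```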
Abstract of query paper
Cite abstracts
29377
29376
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
It is shown that many of the asymptotic properties of the Fisher model for population genetics and population ecology can also be derived for a class of models in which time is discrete and space may or may not be discrete. This allows one to discuss the behavior of models in which the data consist of occasional counts on survey tracts, as well as that of computer models.
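For a concrete feel for traveling fronts in unidirectionally coupled lattice equations, the sketch below (not taken from the cited works; all parameters are arbitrary) integrates a discrete Fisher-type equation with forward Euler and estimates the front speed from the drift of the level-0.5 crossing.

```python
import numpy as np

# lattice: du_i/dt = d*(u_{i-1} - u_i) + u_i*(1 - u_i)   (unidirectional coupling)
n, d, dt, steps = 400, 1.0, 0.01, 12000
u = np.zeros(n)
u[:10] = 1.0                      # front initially at the left end

def front_position(u, level=0.5):
    idx = np.where(u > level)[0]
    return idx.max() if idx.size else 0

positions, times = [], []
for s in range(steps):
    coupling = d * (np.roll(u, 1) - u)
    coupling[0] = 0.0             # keep the left boundary pinned at u = 1
    u = u + dt * (coupling + u * (1 - u))
    u[0] = 1.0
    if s % 500 == 0:
        positions.append(front_position(u))
        times.append(s * dt)

# least-squares slope of front position vs time, using the second half to skip transients
speed = np.polyfit(times[len(times)//2:], positions[len(positions)//2:], 1)[0]
print("estimated wave speed:", speed)
```

The estimated slope is only a crude numerical stand-in for the critical speed characterized variationally in the query paper.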
Abstract of query paper
Cite abstracts
29378
29377
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
Abstract This work proves the existence and multiplicity results of monotonic traveling wave solutions for some lattice differential equations by using the monotone iteration method. Our results include the model of cellular neural networks (CNN). In addition to the monotonic traveling wave solutions, non-monotonic and oscillating traveling wave solutions in the delay type of CNN are also obtained. It is shown that many of the asymptotic properties of the Fisher model for population genetics and population ecology can also be derived for a class of models in which time is discrete and space may or may not be discrete. This allows one to discuss the behavior of models in which the data consist of occasional counts on survey tracts, as well as that of computer models. A class of integral recursion models for the growth and spread of a synchronized single-species population is studied. It is well known that if there is no overcompensation in the fecundity function, the recursion has an asymptotic spreading speed c*, and that this speed can be characterized as the speed of the slowest non-constant traveling wave solution. A class of integral recursions with overcompensation which still have asymptotic spreading speeds can be found by using the ideas introduced by Thieme (J Reine Angew Math 306:94–121, 1979) for the study of space-time integral equation models for epidemics. The present work gives a large subclass of these models with overcompensation for which the spreading speed can still be characterized as the slowest speed of a non-constant traveling wave. To illustrate our results, we numerically simulate a series of traveling waves. The simulations indicate that, depending on the properties of the fecundity function, the tails of the waves may approach the carrying capacity monotonically, may approach the carrying capacity in an oscillatory manner, or may oscillate continually about the carrying capacity, with its values bounded above and below by computable positive numbers.
Abstract of query paper
Cite abstracts
29379
29378
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
This work proves the existence and multiplicity results of monotonic traveling wave solutions for some lattice differential equations by using the monotone iteration method. Our results include the model of cellular neural networks (CNN). In addition to the monotonic traveling wave solutions, non-monotonic and oscillating traveling wave solutions in the delay type of CNN are also obtained. The theory of a novel class of information-processing systems, called cellular neural networks, which are capable of high-speed parallel signal processing, was presented in a previous paper (see ibid., vol.35, no.10, p.1257-72, 1988). A dynamic route approach for analyzing the local dynamics of this class of neural circuits is used to steer the system trajectories into various stable equilibrium configurations which map onto binary patterns to be recognized. Some applications of cellular neural networks to such areas as image processing and pattern recognition are demonstrated, albeit with only a crude circuit. In particular, examples of cellular neural networks which can be designed to recognize the key features of Chinese characters are presented.
Abstract of query paper
Cite abstracts
29380
29379
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
In this paper, we study the structure of traveling wave solutions of Cellular Neural Networks of the advanced type. We show the existence of monotone traveling wave, oscillating wave and eventually periodic wave solutions by using the shooting method and comparison principle. In addition, we obtain the existence of periodic wave train solutions. This work proves the existence and multiplicity results of monotonic traveling wave solutions for some lattice differential equations by using the monotone iteration method. Our results include the model of cellular neural networks (CNN). In addition to the monotonic traveling wave solutions, non-monotonic and oscillating traveling wave solutions in the delay type of CNN are also obtained.
Abstract of query paper
Cite abstracts
29381
29380
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
We describe in this paper polynomial heuristics for three important hard problems--the discrete fixed cost median problem (the plant location problem), the continuous fixed cost median problem in a Euclidean space, and the network fixed cost median problem with convex costs. The heuristics for all the three problems guarantee error ratios no worse than the logarithm of the number of customer points. The derivation of the heuristics is based on the presentation of all types of median problems discussed as a set covering problem.
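A hedged sketch of the greedy, set-cover-flavored idea behind such logarithmic-ratio heuristics for the plant location problem: repeatedly pick the facility and client-prefix pair of smallest average cost. The instance below is made up, and the logarithmic guarantee is the cited analysis, not something the code verifies.

```python
import numpy as np

def greedy_ufl(open_cost, dist):
    """Greedy set-cover-style heuristic for uncapacitated facility location.
    open_cost[i]: cost of opening facility i; dist[i, j]: connection cost of
    client j to facility i.  Repeatedly pick the (facility, client-prefix)
    pair of minimum average cost and assign those clients."""
    m, n = dist.shape
    unassigned, opened, assign = set(range(n)), set(), {}
    while unassigned:
        best = None   # (avg_cost, facility, clients)
        for i in range(m):
            cand = sorted(unassigned, key=lambda j: dist[i, j])
            run = 0.0 if i in opened else open_cost[i]
            for k, j in enumerate(cand, start=1):
                run += dist[i, j]
                if best is None or run / k < best[0]:
                    best = (run / k, i, cand[:k])
        _, i, clients = best
        opened.add(i)
        for j in clients:
            assign[j] = i
            unassigned.discard(j)
    total = sum(open_cost[i] for i in opened) + sum(dist[assign[j], j] for j in range(n))
    return opened, assign, total

# tiny random instance (made-up data): 5 candidate facilities, 20 clients in the unit square
rng = np.random.default_rng(1)
pts_f, pts_c = rng.random((5, 2)), rng.random((20, 2))
d = np.linalg.norm(pts_f[:, None, :] - pts_c[None, :, :], axis=2)
print(greedy_ufl(np.full(5, 0.3), d)[2])
```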
Abstract of query paper
Cite abstracts
29382
29381
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
This work gives new insight into two well-known approximation algorithms for the uncapacitated facility location problem: the primal-dual algorithm of Jain & Vazirani, and an algorithm of Mettu & Plaxton. Our main result answers positively a question posed by Jain & Vazirani of whether their algorithm can be modified to attain a desired “continuity” property. This yields an upper bound of 3 on the integrality gap of the natural LP relaxation of the k-median problem, but our approach does not yield a polynomial time algorithm with this guarantee. We also give a new simple proof of the performance guarantee of the Mettu-Plaxton algorithm using LP duality, which suggests a minor modification of the algorithm that makes it Lagrangian-multiplier preserving. We present the first constant-factor approximation algorithm for the metric k-median problem. The k-median problem is one of the most well-studied clustering problems, i.e., those problems in which the aim is to partition a given set of points into clusters so that the points within a cluster are relatively close with respect to some measure. For the metric k-median problem, we are given n points in a metric space. We select k of these to be cluster centers and then assign each point to its closest selected center. If point j is assigned to a center i, the cost incurred is proportional to the distance between i and j. The goal is to select the k centers that minimize the sum of the assignment costs. We give a 6 2/3-approximation algorithm for this problem. This improves upon the best previously known result of O(log k log log k), which was obtained by refining and derandomizing a randomized O(log n log log n)-approximation algorithm of Bartal. We analyze local search heuristics for the metric k-median and facility location problems. We define the locality gap of a local search procedure for a minimization problem as the maximum ratio of a locally optimum solution (obtained using this procedure) to the global optimum. For k-median, we show that local search with swaps has a locality gap of 5. Furthermore, if we permit up to p facilities to be swapped simultaneously, then the locality gap is 3 + 2/p. This is the first analysis of a local search for k-median that provides a bounded performance guarantee with only k medians. This also improves the previously known 4-approximation for this problem. For uncapacitated facility location, we show that local search, which permits adding, dropping, and swapping a facility, has a locality gap of 3. This improves the bound of 5 given by M. Korupolu, C. Plaxton, and R. Rajaraman [Analysis of a Local Search Heuristic for Facility Location Problems, Technical Report 98-30, DIMACS, 1998]. We also consider a capacitated facility location problem where each facility has a capacity and we are allowed to open multiple copies of a facility. For this problem we introduce a new local search operation which opens one or more copies of a facility and drops zero or more facilities. We prove that this local search has a locality gap between 3 and 4. We present improved combinatorial approximation algorithms for the uncapacitated facility location problem. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of @math in @math time. This also yields a bicriteria approximation tradeoff of @math for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. 
Combining greedy improvement and cost scaling with a recent primal-dual algorithm for facility location due to Jain and Vazirani, we get an approximation ratio of @math in @math time. This is very close to the approximation guarantee of the best known algorithm which is linear programming (LP)-based. Further, combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving @math . We also consider a variant of the capacitated facility location problem and present improved approximation algorithms for this. In this paper, we define a network service provider game. We show that the price of anarchy of the defined game can be bounded by analyzing a local search heuristic for a related facility location problem called the k-facility location problem. As a result, we show that the k-facility location problem has a locality gap of 5. This result is of interest on its own. Our result gives evidence to the belief that the price of anarchy of certain games are related to analysis of local search heuristics.
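The single-swap local search analyzed in the abstracts above is easy to state in code. The sketch below is a bare-bones version for k-median, taking the clients themselves as candidate medians of a random instance; it only finds a local optimum and does not certify any locality-gap bound.

```python
import numpy as np

def kmedian_cost(dist, centers):
    return dist[list(centers)].min(axis=0).sum()

def local_search_kmedian(dist, k, eps=1e-9):
    """Single-swap local search for k-median on a distance matrix
    dist[i, j] (candidate i, client j): start from an arbitrary set of
    k candidates and swap one in / one out while the cost strictly improves."""
    m = dist.shape[0]
    centers = set(range(k))                 # arbitrary initial medians
    cost = kmedian_cost(dist, centers)
    improved = True
    while improved:
        improved = False
        for out in list(centers):
            for inn in range(m):
                if inn in centers:
                    continue
                cand = (centers - {out}) | {inn}
                c = kmedian_cost(dist, cand)
                if c < cost - eps:
                    centers, cost, improved = cand, c, True
                    break
            if improved:
                break
    return centers, cost

rng = np.random.default_rng(2)
pts = rng.random((60, 2))                   # clients; candidates = clients here
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
print(local_search_kmedian(d, k=4))
```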
Abstract of query paper
Cite abstracts
29383
29382
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta become large. The optimum quantization schemes for 2^b quanta, b = 1, 2, ..., 7, are given numerically for Gaussian and for Laplacian distribution of signal amplitudes. We present the first linear time (1 + ε)-approximation algorithm for the k-means problem for fixed k and ε. Our algorithm runs in O(nd) time, which is linear in the size of the input. Another feature of our algorithm is its simplicity - the only technique involved is random sampling. In k-means clustering we are given a set of n data points in d-dimensional space R^d and an integer k, and the problem is to determine a set of k points in R^d, called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomial-time algorithms are known for this problem. Although asymptotically efficient approximation algorithms exist, these algorithms are not practical due to the extremely high constant factors involved. There are many heuristics that are used in practice, but we know of no bounds on their performance. We consider the question of whether there exists a simple and practical approximation algorithm for k-means clustering. We present a local improvement heuristic based on swapping centers in and out. We prove that this yields a (9+ε)-approximation algorithm. We show that the approximation factor is almost tight, by giving an example for which the algorithm achieves an approximation factor of (9-ε). To establish the practical value of the heuristic, we present an empirical study that shows that, when combined with Lloyd's algorithm, this heuristic performs quite well in practice.
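For reference, the Lloyd-style iteration mentioned above, alternating nearest-center assignment with recentering each center at its cluster mean, looks roughly as follows; the data are random and the routine only reaches a local optimum.

```python
import numpy as np

def lloyd_kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd iteration for k-means: alternately assign each point to its
    nearest center and move each center to the mean of its cluster."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new = np.array([points[labels == c].mean(axis=0) if np.any(labels == c)
                        else centers[c] for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return centers, d2.argmin(axis=1), d2.min(axis=1).sum()

pts = np.random.default_rng(3).random((200, 2))
print(lloyd_kmeans(pts, k=5)[2])   # sum of squared distances at the local optimum
```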
Abstract of query paper
Cite abstracts
29384
29383
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
The problem of clustering a set of points so as to minimize the maximum intercluster distance is studied. An O(kn) approximation algorithm, where n is the number of points and k is the number of clusters, that guarantees solutions with an objective function value within two times the optimal solution value is presented. This approximation algorithm succeeds as long as the set of points satisfies the triangular inequality. We also show that our approximation algorithm is best possible, with respect to the approximation bound, if P ≠ NP. In this paper a powerful, and yet simple, technique for devising approximation algorithms for a wide variety of NP-complete problems in routing, location, and communication network design is investigated. Each of the algorithms presented here delivers an approximate solution guaranteed to be within a constant factor of the optimal solution. In addition, for several of these problems we can show that unless P = NP, there does not exist a polynomial-time algorithm that has a better performance guarantee.
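The factor-2 method referred to above is commonly implemented as a farthest-point traversal; a compact sketch (with made-up data) follows.

```python
import numpy as np

def gonzalez_kcenter(points, k):
    """Farthest-point traversal: repeatedly add the point farthest from the
    current centers.  For metric instances this is the classic factor-2
    approximation for minimizing the maximum cluster radius."""
    centers = [0]                                   # start from an arbitrary point
    d = np.linalg.norm(points - points[0], axis=1)  # distance to nearest chosen center
    for _ in range(k - 1):
        nxt = int(d.argmax())
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return centers, d.max()                         # chosen centers and achieved radius

pts = np.random.default_rng(4).random((300, 2))
print(gonzalez_kcenter(pts, k=6)[1])
```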
Abstract of query paper
Cite abstracts
29385
29384
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
A fundamental facility location problem is to choose the location of facilities, such as industrial plants and warehouses, to minimize the cost of satisfying the demand for some commodity. There are associated costs for locating the facilities, as well as transportation costs for distributing the commodities. We assume that the transportation costs form a metric. This problem is commonly referred to as the uncapacitated facility location problem. Applications to bank account location and clustering, as well as many related pieces of work, are discussed by Cornuejols, Nemhauser, and Wolsey. Recently, the first constant factor approximation algorithm for this problem was obtained by Shmoys, Tardos, and Aardal. We show that a simple greedy heuristic combined with the algorithm by Shmoys, Tardos, and Aardal, can be used to obtain an approximation guarantee of 2.408. We discuss a few variants of the problem, demonstrating better approximation factors for restricted versions of the problem. We also show that the problem is max SNP-hard. However, the inapproximability constants derived from the max SNP hardness are very close to one. By relating this problem to Set Cover, we prove a lower bound of 1.463 on the best possible approximation ratio, assuming NP ⊄ DTIME[n^O(log log n)]. We obtain a 1.5-approximation algorithm for the metric uncapacitated facility location problem (UFL), which improves on the previously best known 1.52-approximation algorithm by Mahdian, Ye and Zhang. Note that the approximability lower bound by Guha and Khuller is 1.463. An algorithm is a ( @math , @math )-approximation algorithm if the solution it produces has total cost at most @math , where @math and @math are the facility and the connection cost of an optimal solution. Our new algorithm, which is a modification of the @math -approximation algorithm of Chudak and Shmoys, is a (1.6774,1.3738)-approximation algorithm for the UFL problem and is the first one that touches the approximability limit curve @math established by Jain, Mahdian and Saberi. As a consequence, we obtain the first optimal approximation algorithm for instances dominated by connection costs. When combined with a (1.11,1.7764)-approximation algorithm proposed by , and later analyzed by , we obtain the overall approximation guarantee of 1.5 for the metric UFL problem. We also describe how to use our algorithm to improve the approximation ratio for the 3-level version of UFL. We develop a general method for turning a primal-dual algorithm into a group strategy proof cost-sharing mechanism. We use our method to design approximately budget balanced cost sharing mechanisms for two NP-complete problems: metric facility location, and single source rent-or-buy network design. Both mechanisms are competitive, group strategyproof and recover a constant fraction of the cost. For the facility location game our cost-sharing method recovers a 1/3rd of the total cost, while in the network design game the cost shares pay for a 1/15 fraction of the cost of the solution. We analyze local search heuristics for the metric k-median and facility location problems. We define the locality gap of a local search procedure for a minimization problem as the maximum ratio of a locally optimum solution (obtained using this procedure) to the global optimum. For k-median, we show that local search with swaps has a locality gap of 5. Furthermore, if we permit up to p facilities to be swapped simultaneously, then the locality gap is 3 + 2/p. 
This is the first analysis of a local search for k-median that provides a bounded performance guarantee with only k medians. This also improves the previous known 4 approximation for this problem. For uncapacitated facility location, we show that local search, which permits adding, dropping, and swapping a facility, has a locality gap of 3. This improves the bound of 5 given by M. Korupolu, C. Plaxton, and R. Rajaraman [Analysis of a Local Search Heuristic for Facility Location Problems, Technical Report 98-30, DIMACS, 1998]. We also consider a capacitated facility location problem where each facility has a capacity and we are allowed to open multiple copies of a facility. For this problem we introduce a new local search operation which opens one or more copies of a facility and drops zero or more facilities. We prove that this local search has a locality gap between 3 and 4. We present improved combinatorial approximation algorithms for the uncapacitated facility location problem. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of @math in @math time. This also yields a bicriteria approximation tradeoff of @math for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. Combining greedy improvement and cost scaling with a recent primal-dual algorithm for facility location due to Jain and Vazirani, we get an approximation ratio of @math in @math time. This is very close to the approximation guarantee of the best known algorithm which is linear programming (LP)-based. Further, combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving @math . We also consider a variant of the capacitated facility location problem and present improved approximation algorithms for this. We design a new approximation algorithm for the metric uncapacitated facility location problem. This algorithm is of LP rounding type and is based on a rounding technique developed in [5,6,7].
Abstract of query paper
Cite abstracts
29386
29385
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authoritative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis. For many topics, the World Wide Web contains hundreds or thousands of relevant documents of widely varying quality. Users face a daunting challenge in identifying a small subset of documents worthy of their attention. Link analysis algorithms have received much interest recently, in large part for their potential to identify high quality items. We report here on an experimental evaluation of this potential. We evaluated a number of link and content-based algorithms using a dataset of web documents rated for quality by human topic experts. Link-based metrics did a good job of picking out high-quality items. Precision at 5 is about 0.75, and precision at 10 is about 0.55; this is in a dataset where 0.32 of all documents were of high quality. Surprisingly, a simple content-based metric performed nearly as well; ranking documents by the total number of pages on their containing site.
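The hub/authority computation described above reduces to a simple power iteration on the link adjacency matrix; a toy sketch (illustrative graph only) is:

```python
import numpy as np

def hits(adj, iters=50):
    """Power iteration for hub and authority scores on a directed link graph
    given as a 0/1 adjacency matrix (adj[i, j] = 1 iff page i links to page j)."""
    n = adj.shape[0]
    hub, auth = np.ones(n), np.ones(n)
    for _ in range(iters):
        auth = adj.T @ hub          # a page is authoritative if good hubs point to it
        hub = adj @ auth            # a page is a good hub if it points to authorities
        auth /= np.linalg.norm(auth) or 1.0
        hub /= np.linalg.norm(hub) or 1.0
    return hub, auth

# toy graph: pages 0 and 1 are hubs pointing at authorities 2, 3, 4
A = np.zeros((5, 5))
A[0, 2:] = 1
A[1, 2:] = 1
print(hits(A))
```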
Abstract of query paper
Cite abstracts
29387
29386
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
For many topics, the World Wide Web contains hundreds or thousands of relevant documents of widely varying quality. Users face a daunting challenge in identifying a small subset of documents worthy of their attention. Link analysis algorithms have received much interest recently, in large part for their potential to identify high quality items. We report here on an experimental evaluation of this potential. We evaluated a number of link and content-based algorithms using a dataset of web documents rated for quality by human topic experts. Link-based metrics did a good job of picking out high-quality items. Precision at 5 is about 0.75, and precision at 10 is about 0.55; this is in a dataset where 0.32 of all documents were of high quality. Surprisingly, a simple content-based metric performed nearly as well; ranking documents by the total number of pages on their containing site. Measures based on the Link Recommendation Assumption are hypothesised to help modern Web search engines rank ‘important, high quality’ pages ahead of relevant but less valuable pages and to reject ‘spam’. We tested these hypotheses using inlink counts and PageRank scores readily obtainable from search engines Google and Fast. We found that the average Google-reported PageRank of websites operated by Fortune 500 companies was approximately one point higher than the average for a large selection of companies. The same was true for Fortune Most Admired companies. A substantially bigger difference was observed in favour of companies with famous brands. Investigating less desirable biases, we found a one point bias toward technology companies, and a two point bias in favour of IT companies listed in the Wired 40. We found negligible bias in favour of US companies. Log of indegree was highly correlated with Google-reported PageRank scores, and just as effective when predicting desirable company attributes. Further, we found that PageRank scores for sites within a known spam network were no lower than would be expected on the basis of their indegree. We encounter no compelling evidence to support the use of PageRank over indegree.
Abstract of query paper
Cite abstracts
29388
29387
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
Some methods for rank correlation in evaluation are examined and their relative advantages and disadvantages are discussed. In particular, it is suggested that different test statistics should be used to provide additional information about the experiments other than the one provided by statistical significance testing. Kendall's τ is often used for testing rank correlation, yet it is not very appropriate if the objective of the test is different from what τ was designed for. In particular, attention should be paid to the null hypothesis. Other measures for rank correlation are described. If one test statistic suggests rejecting a hypothesis, other test statistics should be used to support or to revise the decision. The paper then focuses on rank correlation between webpage lists ordered by PageRank in order to apply these general reflections on test statistics. An interpretation of PageRank behaviour is provided on the basis of the discussion of the test statistics for rank correlation.
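In practice the comparison of two rankings discussed above can be done with standard routines; a minimal example with made-up rank lists:

```python
from scipy.stats import kendalltau, spearmanr

# hypothetical example: an expert ranking of ten entities versus the positions of
# their websites in a search engine's result list (made-up numbers)
expert = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
engine = [2, 1, 5, 3, 4, 8, 6, 10, 7, 9]

tau, p_tau = kendalltau(expert, engine)
rho, p_rho = spearmanr(expert, engine)
print(f"Kendall tau = {tau:.2f} (p = {p_tau:.3f}), Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```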
Abstract of query paper
Cite abstracts
29389
29388
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
In this short paper we estimate the size of the public indexable web at 11.5 billion pages. We also estimate the overlap and the index size of Google, MSN, Ask Teoma and Yahoo! Recent studies show that a majority of Web page accesses are referred by search engines. In this paper we study the widespread use of Web search engines and its impact on the ecology of the Web. In particular, we study how much impact search engines have on the popularity evolution of Web pages. For example, given that search engines return currently popular" pages at the top of search results, are we somehow penalizing newly created pages that are not very well known yet? Are popular pages getting even more popular and new pages completely ignored? We first show that this unfortunate trend indeed exists on the Web through an experimental study based on real Web data. We then analytically estimate how much longer it takes for a new page to attract a large number of Web users when search engines return only popular pages at the top of search results. Our result shows that search engines can have an immensely worrisome impact on the discovery of new Web pages. In a number of recent studies [4, 8] researchers have found that because search engines repeatedly return currently popular pages at the top of search results, popular pages tend to get even more popular, while unpopular pages get ignored by an average user. This "rich-get-richer" phenomenon is particularly problematic for new and high-quality pages because they may never get a chance to get users' attention, decreasing the overall quality of search results in the long run. In this paper, we propose a new ranking function, called page quality that can alleviate the problem of popularity-based ranking. We first present a formal framework to study the search engine bias by discussing what is an "ideal" way to measure the intrinsic quality of a page. We then compare how PageRank, the current ranking metric used by major search engines, differs from this ideal quality metric. This framework will help us investigate the search engine bias in more concrete terms and provide clear understanding why PageRank is effective in many cases and exactly when it is problematic. We then propose a practical way to estimate the intrinsic page quality to avoid the inherent bias of PageRank. We derive our proposed quality estimator through a careful analysis of a reasonable web user model, and we present experimental results that show the potential of our proposed estimator. We believe that our quality estimator has the potential to alleviate the rich-get-richer phenomenon and help new and high-quality pages get the attention that they deserve.
Abstract of query paper
Cite abstracts
29390
29389
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
Using open source Web editing software (e.g., wiki), online community users can now easily edit, review and publish articles collaboratively. While much useful knowledge can be derived from these articles, content users and critics are often concerned about their qualities. In this paper, we develop two models, namely the basic model and the peer review model, for measuring the qualities of these articles and the authorities of their contributors. We represent collaboratively edited articles and their contributors in a bipartite graph. While the basic model measures an article's quality using both the authorities of contributors and the amount of contribution from each contributor, the peer review model extends the former by considering the review aspect of article content. We present results of experiments conducted on some Wikipedia pages and their contributors. Our results show that the two models can effectively determine the articles' qualities and contributors' authorities using the collaborative nature of online communities.
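The basic model described above suggests a mutual-reinforcement iteration on the article-contributor bipartite graph. The sketch below is only in that spirit: the weighting, normalization and data are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def quality_authority(W, iters=50):
    """Mutual-reinforcement sketch on a bipartite contribution graph:
    W[a, c] is the amount contributor c contributed to article a.
    Article quality aggregates contributor authority weighted by contribution,
    and authority aggregates the quality of the articles contributed to."""
    quality = np.ones(W.shape[0])
    authority = np.ones(W.shape[1])
    for _ in range(iters):
        quality = W @ authority
        authority = W.T @ quality
        quality /= quality.sum() or 1.0
        authority /= authority.sum() or 1.0
    return quality, authority

# toy data: 3 articles, 4 contributors, entries are contributed word counts (made up)
W = np.array([[120.,  30.,   0.,  10.],
              [  0.,  80.,  60.,   0.],
              [ 40.,   0.,  20., 200.]])
print(quality_authority(W))
```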
Abstract of query paper
Cite abstracts
29391
29390
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
We report here empirical results of a series of studies aimed at automatically predicting information quality in news documents. Multiple research methods and data analysis techniques enabled a good level of machine prediction of information quality. Procedures regarding user experiments and statistical analysis are described.
Abstract of query paper
Cite abstracts
29392
29391
Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar-associated-both questions from cognitive psychology.
The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training and test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems.
Abstract of query paper
Cite abstracts
29393
29392
The 3-Hitting Set problem is also called the Vertex Cover problem on 3-uniform hypergraphs. In this paper, we address kernelizations of the Vertex Cover problem on 3-uniform hypergraphs. We show that this problem admits a linear kernel in three classes of 3-uniform hypergraphs. We also obtain lower and upper bounds on the kernel size for them by parametric duality.
Given a collection C of subsets of size three of a finite set S and a positive integer k, the 3-Hitting Set problem is to determine a subset S' ⊆ S with |S'| ≤ k, so that S' contains at least one element from each subset in C. The problem is NP-complete, and is motivated, for example, by applications in computational biology. Improving previous work, we give an O(2.270^k + n) time algorithm for 3-Hitting Set, which is efficient for small values of k, a typical occurrence in some applications. For d-Hitting Set we present an O(c^k + n) time algorithm with c = d - 1 + O(1/d). Classes of machines using very limited amounts of nondeterminism are studied. The @math question is related to questions about classes lying within P.
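The search-tree idea behind such algorithms is easy to sketch: branch on the at most three elements of any set that is not yet hit. The depth-bounded branching below runs in O(3^k · poly) time, far from the refined 2.270^k bound of the cited work, and the instance is made up.

```python
def hitting_set_3(sets, k):
    """Decide 3-Hitting Set by simple branching: pick an un-hit set and try
    putting each of its elements into the solution, decreasing the budget k."""
    return _branch([frozenset(s) for s in sets], k, set())

def _branch(sets, k, chosen):
    unhit = [s for s in sets if not (s & chosen)]
    if not unhit:
        return chosen                      # every set is hit
    if k == 0:
        return None                        # budget exhausted
    for x in unhit[0]:                     # branch on the elements of one un-hit set
        sol = _branch(sets, k - 1, chosen | {x})
        if sol is not None:
            return sol
    return None

instance = [{1, 2, 3}, {3, 4, 5}, {1, 5, 6}, {2, 6, 7}]
print(hitting_set_3(instance, k=2))        # e.g. {2, 5} hits all four sets
```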
Abstract of query paper
Cite abstracts
29394
29393
The 3-Hitting Set problem is also called the Vertex Cover problem on 3-uniform hypergraphs. In this paper, we address kernelizations of the Vertex Cover problem on 3-uniform hypergraphs. We show that this problem admits a linear kernel in three classes of 3-uniform hypergraphs. We also obtain lower and upper bounds on the kernel size for them by parametric duality.
The two objectives of this paper are: (1) to articulate three new general techniques for designing FPT algorithms, and (2) to apply these to obtain new FPT algorithms for Set Splitting and Vertex Cover. In the case of Set Splitting, we improve the best previous O*(72^k) FPT algorithm due to Dehne, Fellows and Rosamond [DFR03], to O*(8^k) by an approach based on greedy localization in conjunction with modeled crown reduction. In the case of Vertex Cover, we describe a new approach to 2k kernelization based on iterative compression and crown reduction, providing a potentially useful alternative to the Nemhauser-Trotter 2k kernelization. This survey reviews the basic notions of parameterized complexity, and describes some new approaches to designing FPT algorithms and problem reductions for graph problems. A kernelization algorithm for the 3-Hitting-Set problem is presented along with a general kernelization for d-Hitting-Set problems. For 3-Hitting-Set, a quadratic kernel is obtained by exploring properties of yes instances and employing what is known as crown reduction. Any 3-Hitting-Set instance is reduced into an equivalent instance that contains at most 5k^2 + k elements (or vertices). This kernelization is an improvement over previously known methods that guarantee cubic-size kernels. Our method is used also to obtain a quadratic kernel for the Triangle Vertex Deletion problem. For a constant d ≥ 3, a kernelization of d-Hitting-Set is achieved by a generalization of the 3-Hitting-Set method, and guarantees a kernel whose order does not exceed (2d - 1)k^(d-1) + k.
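To give a flavor of kernelization, the sketch below applies two elementary and easily verified reduction rules for 3-Hitting Set (forced elements from singleton sets, and removal of supersets). It is illustrative only and much weaker than the crown-reduction kernels described above.

    # Two elementary reduction rules for 3-Hitting Set (a flavor of
    # kernelization, far weaker than the crown-reduction kernels cited above):
    #   1. If some set is a proper superset of another, it is hit automatically
    #      whenever the smaller set is, so it can be dropped.
    #   2. A singleton set {x} forces x into the hitting set; remove all sets
    #      hit by x and decrease the budget k.
    def reduce_instance(sets, k):
        sets = [frozenset(s) for s in sets]
        forced = set()
        changed = True
        while changed and k >= 0:
            changed = False
            # Rule 1: drop sets that contain another set.
            sets = [s for s in sets if not any(t < s for t in sets)]
            # Rule 2: singleton sets force their element.
            for s in sets:
                if len(s) == 1:
                    (x,) = s
                    forced.add(x)
                    k -= 1
                    sets = [t for t in sets if x not in t]
                    changed = True
                    break
        return sets, k, forced   # remaining sets, residual budget, forced elements

    if __name__ == "__main__":
        C = [{"a"}, {"a", "b", "c"}, {"b", "c", "d"}, {"c", "d"}]
        print(reduce_instance(C, k=2))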
Abstract of query paper
Cite abstracts
29395
29394
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest, for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is oftentimes prohibitively expensive. This paper introduces a novel algorithm running in @math time, i.e., with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel is approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
Algorithms for the rapid computation of the forward and inverse discrete Fourier transform for points which are nonequispaced or whose number is unrestricted are presented. The computational procedure is based on approximation using a local Taylor series expansion and the fast Fourier transform (FFT). The forward transform for nonequispaced points is computed as the solution of a linear system involving the inverse Fourier transform. This latter system is solved using the iterative method GMRES with preconditioning. Numerical results are given to confirm the efficiency of the algorithms. A group of algorithms is presented generalizing the fast Fourier transform to the case of noninteger frequencies and nonequispaced nodes on the interval @math . The schemes of this paper are based on a combination of certain analytical considerations with the classical fast Fourier transform and generalize both the forward and backward FFTs. Each of the algorithms requires @math arithmetic operations, where @math is the precision of computations and N is the number of nodes. The efficiency of the approach is illustrated by several numerical examples. We introduce a general purpose algorithm for rapidly computing certain types of oscillatory integrals which frequently arise in problems connected to wave propagation, general hyperbolic equations, and curvilinear tomography. The problem is to numerically evaluate a so-called Fourier integral operator (FIO) of the form @math at points given on a Cartesian grid. Here, @math is a frequency variable, @math is the Fourier transform of the input @math , @math is an amplitude, and @math is a phase function, which is typically as large as @math ; hence the integral is highly oscillatory. Because a FIO is a dense matrix, a naive matrix vector product with an input given on a Cartesian grid of size @math by @math would require @math operations. This paper develops a new numerical algorithm which requires @math operations and as low as @math in storage space (the constants in front of these estimates are small). It operates by localizing the integral over polar wedges with small angular aperture in the frequency plane. On each wedge, the algorithm factorizes the kernel @math into two components: (1) a diffeomorphism which is handled by means of a nonuniform FFT and (2) a residual factor which is handled by numerical separation of the spatial and frequency variables. The key to the complexity and accuracy estimates is the fact that the separation rank of the residual kernel is provably independent of the problem size. Several numerical examples demonstrate the numerical accuracy and low computational complexity of the proposed methodology. We also discuss the potential of our ideas for various applications such as reflection seismology.
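As a baseline for what these fast transforms accelerate, the sketch below evaluates a nonuniform Fourier sum by direct summation, which costs O(MN) operations for M target points and N noninteger frequencies. It illustrates the brute-force cost only, not the cited algorithms.

    # Direct O(M*N) evaluation of a nonuniform Fourier sum; this is the
    # brute-force baseline that NUFFT-type algorithms reduce to near-linear cost.
    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 200, 300
    x = rng.uniform(0.0, 1.0, size=M)         # nonequispaced target points
    xi = rng.uniform(-N / 2, N / 2, size=N)   # noninteger frequencies
    f = rng.normal(size=N) + 1j * rng.normal(size=N)

    # u[j] = sum_k exp(2*pi*i * x[j] * xi[k]) * f[k]
    kernel = np.exp(2j * np.pi * np.outer(x, xi))   # M-by-N oscillatory matrix
    u = kernel @ f
    print(u.shape, u[:3])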
Abstract of query paper
Cite abstracts
29396
29395
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest, for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is oftentimes prohibitively expensive. This paper introduces a novel algorithm running in @math time, i.e., with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel is approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
Abstract The integral ∫_0^L e^{iνφ(s,t)} f(s) ds with a highly oscillatory kernel (large ν, ν is up to 2000) is considered. This integral is accurately evaluated with an improved trapezoidal rule and effectively transcribed using local Fourier basis and adaptive multiscale local Fourier basis. The representation of the oscillatory kernel in these bases is sparse. The coefficients after the application of local Fourier transform are smoothed. Sometimes this enables us to obtain further compression with wavelets. We examine the use of wavelet packets for the fast solution of integral equations with a highly oscillatory kernel. The redundancy of the wavelet packet transform allows the selection of a basis tailored to the problem at hand. It is shown that a well chosen wavelet packet basis is better suited to compress the discretized system than wavelets. The complexity of the matrix-vector product in an iterative solution method is then substantially reduced. A two-dimensional wavelet packet transform is derived and compared with a number of one-dimensional transforms that were presented earlier in literature. By means of some numerical experiments we illustrate the improved efficiency of the two-dimensional approach. Abstract We prove that certain oscillatory boundary integral operators occurring in acoustic scattering computations become sparse when represented in the appropriate local cosine transform orthonormal basis. Preface to the Classics Edition Preface Symbols 1. The Riesz-Fredholm theory for compact operators 2. Regularity properties of surface potentials 3. Boundary-value problems for the scalar Helmholtz equation 4. Boundary-value problems for the time-harmonic Maxwell equations and the vector Helmholtz equation 5. Low frequency behavior of solutions to boundary-value problems in scattering theory 6. The inverse scattering problem: exact data 7. Improperly posed problems and compact families 8. The determination of the shape of an obstacle from inexact far-field data 9. Optimal control problems in radiation and scattering theory References Index.
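One way to see why localized bases compress such kernels is to check numerically that an oscillatory kernel restricted to a sufficiently small block has rapidly decaying singular values. The sketch below does this for the toy phase φ(s,t) = s·t with ν = 2000; the block size and phase are illustrative choices, not those of the cited papers.

    # Numerical check that an oscillatory kernel restricted to a small block is
    # approximately low rank (toy phase phi(s, t) = s * t; large frequency nu).
    import numpy as np

    nu = 2000.0
    # Block side length ~ 1/sqrt(nu) keeps the non-separable part of the phase
    # of order one across the block.
    h = 1.0 / np.sqrt(nu)
    s = 0.3 + h * np.linspace(0.0, 1.0, 64)
    t = 0.7 + h * np.linspace(0.0, 1.0, 64)
    block = np.exp(1j * nu * np.outer(s, t))

    sing = np.linalg.svd(block, compute_uv=False)
    eps_rank = int(np.sum(sing > 1e-8 * sing[0]))
    print("numerical rank of the 64x64 block:", eps_rank)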
Abstract of query paper
Cite abstracts
29397
29396
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest, for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is oftentimes prohibitively expensive. This paper introduces a novel algorithm running in @math time, i.e., with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel is approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
The solution of Helmholtz and Maxwell equations by integral formulations (kernel in exp(ikr)/r) leads to large dense linear systems. Using direct solvers requires large computational costs in O(N^3). Using iterative solvers, the computational cost is reduced to large matrix-vector products. The fast multipole method provides a fast numerical way to compute convolution integrals. Its application to Maxwell and Helmholtz equations was initiated by Rokhlin, based on a multipole expansion of the interaction kernel. A second version, proposed by Chew, is based on a plane-wave expansion of the kernel. We propose a third approach, the stable-plane-wave expansion, which has a lower computational expense than the multipole expansion and does not have the accuracy and stability problems of the plane-wave expansion. The computational complexity is N log N as with the other methods. Abstract The diagonal forms are constructed for the translation operators for the Helmholtz equation in three dimensions. While the operators themselves have a fairly complicated structure (described somewhat incompletely by the classical addition theorems for the Bessel functions), their diagonal forms turn out to be quite simple. These diagonal forms are realized as generalized integrals, possess straightforward physical interpretations, and admit stable numerical implementation. This paper uses the obtained analytical apparatus to construct an algorithm for the rapid application to arbitrary vectors of matrices resulting from the discretization of integral equations of the potential theory for the Helmholtz equation in three dimensions. It is an extension to the three-dimensional case of the results of Rokhlin (J. Complexity 4 (1988), 12-32), where a similar apparatus is developed in the two-dimensional case. Abstract The present paper describes an algorithm for rapid solution of boundary value problems for the Helmholtz equation in two dimensions based on iteratively solving integral equations of scattering theory. CPU time requirements of previously published algorithms of this type are of the order n^2, where n is the number of nodes in the discretization of the boundary of the scatterer. The CPU time requirements of the algorithm of the present paper are n^(4/3), and can be further reduced, making it considerably more practical for large scale problems. We describe a wideband version of the Fast Multipole Method for the Helmholtz equation in three dimensions. It unifies previously existing versions of the FMM for high and low frequencies into an algorithm which is accurate and efficient for any frequency, having a CPU time of O(N) if low-frequency computations dominate, or O(N log N) if high-frequency computations dominate. The performance of the algorithm is illustrated with numerical examples. The fast multipole method (FMM) has been implemented to speed up the matrix-vector multiply when an iterative method is used to solve the combined field integral equation (CFIE). FMM reduces the complexity from O(N^2) to O(N^1.5). With a multilevel fast multipole algorithm (MLFMA), it is further reduced to O(N log N). A 110,592-unknown problem can be solved within 24 h on a SUN Sparc 10. We study integral methods applied to the resolution of the Maxwell equations where the linear system is solved using an iterative method which requires only matrix-vector products.
The fast multipole method (FMM) is one of the most efficient methods used to perform matrix-vector products and accelerate the resolution of the linear system. A problem involving N degrees of freedom may be solved in C N_iter N log N floating operations, where C is a constant depending on the implementation of the method. In this article several techniques allowing one to reduce the constant C are analyzed. This reduction implies a lower total CPU time and a larger range of application of the FMM. In particular, new interpolation and anterpolation schemes are proposed which greatly improve on previous algorithms. Several numerical tests are also described. These confirm the efficiency and the theoretical complexity of the FMM.
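The compression that FMM-type methods exploit can be illustrated in miniature: for two well-separated point clusters in the low-frequency regime, the exp(ikr)/r interaction block is numerically low rank and can be applied through a truncated SVD. The sketch below is a toy illustration only; it uses no multipole or plane-wave translation operators, and the high-frequency regime treated by the wideband and directional methods above needs more care.

    # Toy illustration of FMM-style compression: the interaction block between
    # two well-separated clusters under the kernel exp(i*k*r)/r is numerically
    # low rank (low-frequency regime; parameters are illustrative).
    import numpy as np

    rng = np.random.default_rng(1)
    wavenum = 1.0                                             # modest wavenumber
    src = rng.uniform(0.0, 1.0, size=(300, 3))                # source cluster
    trg = rng.uniform(0.0, 1.0, size=(300, 3)) + [10, 0, 0]   # far-away targets

    r = np.linalg.norm(trg[:, None, :] - src[None, :, :], axis=-1)
    A = np.exp(1j * wavenum * r) / r                          # dense interaction block

    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    rank = int(np.sum(s > 1e-6 * s[0]))
    q = rng.normal(size=300)                                  # source strengths
    approx = (U[:, :rank] * s[:rank]) @ (Vh[:rank] @ q)
    exact = A @ q
    print("numerical rank:", rank,
          "relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))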
Abstract of query paper
Cite abstracts
29398
29397
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest, for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is oftentimes prohibitively expensive. This paper introduces a novel algorithm running in @math time, i.e., with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel is approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
A multilevel algorithm is presented for analyzing scattering from electrically large surfaces. The algorithm accelerates the iterative solution of integral equations that arise in computational electromagnetics. The algorithm permits a fast matrix-vector multiplication by decomposing the traditional method of moment matrix into a large number of blocks, with each describing the interaction between distant scatterers. The multiplication of each block by a trial solution vector is executed using a multilevel scheme that resembles a fast Fourier transform (FFT) and that only relies on well-known algebraic techniques. The computational complexity and the memory requirements of the proposed algorithm are O(N log^2 N). This paper introduces a directional multiscale algorithm for the two dimensional @math -body problem of the Helmholtz kernel with applications to high frequency scattering. The algorithm follows the approach in [Engquist and Ying, SIAM Journal on Scientific Computing, 29 (4), 2007] where the three dimensional case was studied. The main observation is that, for two regions that follow a directional parabolic geometric configuration, the interaction between the points in these two regions through the Helmholtz kernel is approximately low rank. We propose an improved randomized procedure for generating the low rank representations. Based on these representations, we organize the computation of the far field interaction in a multidirectional and multiscale way to achieve maximum efficiency. The proposed algorithm is accurate and has the optimal @math complexity for problems from two dimensional scattering applications. We present numerical results for several test examples to illustrate the algorithm and its application to two dimensional high frequency scattering problems. This paper introduces a new directional multilevel algorithm for solving @math -body or @math -point problems with highly oscillatory kernels. These systems often result from the boundary integral formulations of scattering problems and are difficult due to the oscillatory nature of the kernel and the non-uniformity of the particle distribution. We address the problem by first proving that the interaction between a ball of radius @math and a well-separated region has an approximate low rank representation, as long as the well-separated region belongs to a cone with a spanning angle of @math and is at a distance which is at least @math away from the ball. We then propose an efficient and accurate procedure which utilizes random sampling to generate such a separated, low rank representation. Based on the resulting representations, our new algorithm organizes the high frequency far field computation by a multidirectional and multiscale strategy to achieve maximum efficiency. The algorithm performs well on a large group of highly oscillatory kernels. Our algorithm is proved to have @math computational complexity for any given accuracy when the points are sampled from a two dimensional surface. We also provide numerical results to demonstrate these properties.
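The randomized construction of low-rank representations mentioned above can be sketched generically as a randomized range finder: multiply the block by a thin random matrix, orthonormalize the result, and project. The code below is a generic sketch of this idea, not the specific sampling procedure of the cited papers.

    # Generic randomized range finder for low-rank approximation of a kernel
    # block: A ~ Q (Q^H A), with Q obtained from A times a thin random matrix.
    import numpy as np

    def randomized_lowrank(A, rank, oversample=10, rng=None):
        rng = np.random.default_rng(rng)
        m, n = A.shape
        omega = rng.normal(size=(n, rank + oversample))
        Q, _ = np.linalg.qr(A @ omega)       # orthonormal basis for range(A @ omega)
        B = Q.conj().T @ A                   # small (rank + oversample)-by-n factor
        return Q, B                          # A is approximated by Q @ B

    if __name__ == "__main__":
        # Toy oscillatory block with mild phase variation (hence low rank).
        s = np.linspace(0.0, 1.0, 200)
        t = np.linspace(0.0, 1.0, 200)
        A = np.exp(2j * np.outer(s, t))
        Q, B = randomized_lowrank(A, rank=8)
        err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
        print("relative approximation error:", err)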
Abstract of query paper
Cite abstracts
29399
29398
A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination to enhance the sum rate and limited inter-cluster coordination to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving the channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.
The capacity and robustness of cellular MIMO systems are very sensitive to other-cell interference, which will in practice necessitate network-level interference reduction strategies. As an alternative to traditional static frequency reuse patterns, this paper investigates intercell scheduling among neighboring base stations. We show analytically that cooperatively scheduled transmission, which is well within the capability of present systems, can achieve an expanded multiuser diversity gain in terms of ergodic capacity as well as almost the same amount of interference reduction as conventional frequency reuse. This capacity gain over conventional frequency reuse is O(M_t √(log N_s)) for dirty paper coding and O(min(M_r, M_t) √(log N_s)) for time division, where N_s is the number of cooperating base stations employing opportunistic scheduling in an M_t x M_r MIMO system. From a theoretical standpoint, an interesting aspect of this analysis comes from an altered view of multiuser diversity in the context of a multi-cell system. Previously, multiuser diversity capacity gain has been known to grow as O(log log K), from selecting the maximum of K exponentially-distributed powers. Because multicell considerations such as the positions of the users, lognormal shadowing, and path loss affect the multiuser diversity gain, we find instead that the gain is O(√(2 log K)), from selecting the maximum of a compound lognormal-exponential distribution. Finding the maximum of such a distribution is an additional contribution of the paper.
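The growth of the multiuser diversity gain can be probed numerically. The sketch below estimates, by Monte Carlo, the expected logarithm of the maximum of K compound lognormal-exponential channel powers and prints it next to σ√(2 ln K) for comparison; the shadowing parameter and trial count are arbitrary illustrative choices.

    # Monte Carlo estimate of E[log max of K compound lognormal-exponential
    # powers], shown next to sigma*sqrt(2 ln K) for comparison with the
    # growth rate discussed above (illustrative parameters only).
    import numpy as np

    rng = np.random.default_rng(0)
    sigma_db = 8.0                                   # shadowing std dev in dB
    sigma = sigma_db * np.log(10.0) / 10.0           # convert to natural-log scale
    trials = 2000

    for K in (2, 8, 32, 128, 512):
        shadow = np.exp(sigma * rng.normal(size=(trials, K)))   # lognormal shadowing
        fading = rng.exponential(size=(trials, K))              # Rayleigh fading power
        best = np.max(shadow * fading, axis=1)
        print(f"K={K:4d}  E[log max power] = {np.log(best).mean():6.2f}   "
              f"sigma*sqrt(2 ln K) = {sigma * np.sqrt(2 * np.log(K)):6.2f}")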
Abstract of query paper
Cite abstracts
29400
29399
A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination to enhance the sum rate and limited inter-cluster coordination to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving the channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.
We quantify the ultimate performance limits of inter-cell coordinatation in a cellular downlink network. The goal is to achieve fairness by maximizing the minimum rate in the network subject to per base power constraints. We first solve the max-min rate problem for a particular zero-forcing dirty paper coding scheme so as to obtain an achievable max-min rate, which serves as a lower bound on the ultimate limit. We then obtain a simple upper bound on the max-min rate of any scheme, and show that the rate achievable by the zero-forcing dirty paper coding scheme is close to this upper bound. We also extend our analysis to coordinated networks with multiple antennas. In this contribution we present new achievable rates, for the non-fading uplink channel of a cellular network, with joint cell-site processing, where unlike previous results, the error-free backhaul network has finite capacity per-cell. Namely, the cell-sites are linked to the central joint processor via lossless links with finite capacity. The cellular network is modeled by the circular Wyner model, which yields closed form expressions for the achievable rates. For this idealistic model, we present achievable rates for cell-sites that use compress-and forward scheme, combined with local decoding, and inter-cell time-sharing. These rates are then demonstrated to be rather close to the optimal unlimited backhaul joint processing rates, already for modest backhaul capacities, supporting the potential gain offered by the joint cell-site processing approach. A linear pre-processing plus encoding scheme is proposed, which significantly enhances cellular downlink performance, while putting the complexity burden on the transmitting end. The approach is based on LQ factorization of the channel transfer matrix combined with the "writing on dirty paper" approach (Caire, G. and Shamai, S., Proc. 38th Annual Allerton Conference on Communication, Control and Computing, 2000) for eliminating the effect of uncorrelated interference, which is fully known at the transmitter but unknown at the receiver. The attainable average rates with the proposed scheme approach those of optimum joint processing at the high SNR region. We study the potential benefits of base-station (BS) cooperation for downlink transmission in multicell networks. Based on a modified Wyner-type model with users clustered at the cell-edges, we analyze the dirty-paper-coding (DPC) precoder and several linear precoding schemes, including cophasing, zero-forcing (ZF), and MMSE precoders. For the nonfading scenario with random phases, we obtain analytical performance expressions for each scheme. In particular, we characterize the high signal-to-noise ratio (SNR) performance gap between the DPC and ZF precoders in large networks, which indicates a singularity problem in certain network settings. Moreover, we demonstrate that the MMSE precoder does not completely resolve the singularity problem. However, by incorporating path gain fading, we numerically show that the singularity problem can be eased by linear precoding techniques aided with multiuser selection. By extending our network model to include cell-interior users, we determine the capacity regions of the two classes of users for various cooperative strategies. In addition to an outer bound and a baseline scheme, we also consider several locally cooperative transmission approaches. 
The resulting capacity regions show the tradeoff between the performance improvement and the requirement for BS cooperation, signal processing complexity, and channel state information at the transmitter (CSIT). Recently, the remarkable capacity potential of multiple-input multiple-output (MIMO) wireless communication systems was unveiled. The predicted enormous capacity gain of MIMO is nonetheless significantly limited by cochannel interference (CCI) in realistic cellular environments. The previously proposed advanced receiver technique improves the system performance at the cost of increased receiver complexity, and the achieved system capacity is still significantly away from the interference-free capacity upper bound, especially in environments with strong CCI. In this paper, base station cooperative processing is explored to address the CCI mitigation problem in downlink multicell multiuser MIMO networks, and is shown to dramatically increase the capacity with strong CCI. Both information-theoretic dirty paper coding approach and several more practical joint transmission schemes are studied with pooled and practical per-base power constraints, respectively. Besides the CCI mitigation potential, other advantages of cooperative processing including the power gain, channel rank conditioning advantage, and macrodiversity protection are also addressed. The potential of our proposed joint transmission schemes is verified with both heuristic and realistic cellular MIMO settings. Cooperative transmission by base stations (BSs) can significantly improve the spectral efficiency of multiuser, multi-cell, multiple input multiple output (MIMO) systems. We show that contrary to what is often assumed in the literature, the multiuser interference in such systems is fundamentally asynchronous. Intuitively, perfect timing-advance mechanisms can at best only ensure that the desired signal components -but not also the interference components -are perfectly aligned at their intended mobile stations. We develop an accurate mathematical model for the asynchronicity, and show that it leads to a significant performance degradation of existing designs that ignore the asynchronicity of interference. Using three previously proposed linear preceding design methods for BS cooperation, we develop corresponding algorithms that are better at mitigating the impact of the asynchronicity of the interference. Furthermore, we also address timing-advance inaccuracies (jitter), which are inevitable in a practical system. We show that using jitter-statistics-aware precoders can mitigate the impact of these inaccuracies as well. The insights of this paper are critical for the practical implementation of BS cooperation in multiuser MIMO systems, a topic that is typically oversimplified in the literature. It has recently been shown that multi-cell cooperations in cellular networks, enabling distributed antenna systems and joint transmission or joint detection across cell boundaries, can significantly increase capacity, especially that of users at cell borders. Such concepts, typically implicitly assuming unlimited information exchange between base stations, can also be used to increase the network fairness. In practical implementations, however, the large amounts of received signals that need to be quantized and transmitted via an additional backhaul between the involved cells to central processing points, will be a non-negligible issue. 
In this paper, we thus introduce an analytical framework to observe the uplink performance of cellular networks in which joint detection is only applied to a subset of selected users, aiming at achieving best possible capacity and fairness improvements under a strongly constrained backhaul between sites. This reveals a multi-dimensional optimization problem, where we propose a simple, heuristic algorithm that strongly narrows down and serializes the problem while still yielding a significant performance improvement.
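A minimal numerical sketch of the block-diagonalization idea behind the intra-cluster coordination discussed above: each user's precoder is restricted to the null space of the other users' stacked channels, so intra-cluster inter-user interference is nulled. Dimensions and the crude normalization are illustrative only and do not reproduce the per-BTS power-allocation algorithms of the paper.

    # Toy block-diagonalization sketch: each user's precoder lies in the null
    # space of the other users' stacked channels, nulling intra-cluster
    # interference (dimensions and normalization are illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    num_users, rx_ant, tx_ant = 3, 2, 8   # tx_ant = aggregate antennas in the cluster
    H = [rng.normal(size=(rx_ant, tx_ant)) + 1j * rng.normal(size=(rx_ant, tx_ant))
         for _ in range(num_users)]

    precoders = []
    for i in range(num_users):
        H_others = np.vstack([H[j] for j in range(num_users) if j != i])
        # Null space of the other users' channels via SVD.
        _, _, Vh = np.linalg.svd(H_others)
        null_basis = Vh[H_others.shape[0]:, :].conj().T   # columns span null(H_others)
        # Transmit along the strongest effective directions inside that null space.
        Heff = H[i] @ null_basis
        _, _, Vh_eff = np.linalg.svd(Heff)
        Wi = null_basis @ Vh_eff[:rx_ant, :].conj().T
        precoders.append(Wi / np.linalg.norm(Wi))          # crude power normalization

    # Interference check: H[i] @ W[j] should be numerically zero for i != j.
    leak = max(np.linalg.norm(H[i] @ precoders[j])
               for i in range(num_users) for j in range(num_users) if i != j)
    print("max inter-user leakage:", leak)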
Abstract of query paper
Cite abstracts